
Use SparkSQL without Default Lakehouse

Make it possible to write SparkSQL without having a Default Lakehouse.

 

With 3- or 4-part naming, e.g.

[workspace].[lakehouse].[schema].[table] 

 

there should be no need to attach a Lakehouse in order to use SparkSQL.
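For illustration, a fully qualified query under the proposed convention could look like the sketch below (hypothetical syntax; the workspace, lakehouse, schema, and table names are placeholders, and Fabric does not support this today):

%%sql
-- Hypothetical 4-part naming, as proposed; not currently supported
SELECT *
FROM [MyWorkspace].[MyLakehouse].[dbo].[Sales]
LIMIT 10;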

 

Needing to attach a Lakehouse is annoying and adds extra complexity.

Status: New
Comments
raym85
Regular Visitor
Oh nice! This is a good one! Agreed, it can be a pain, especially when merging from a feature branch to main. It seems to work well with deployment pipelines, though.
BHouston1
Regular Visitor
Not a bad idea on the 4-part naming, but I could see an issue where renaming a workspace breaks notebooks. I think shortcuts are designed to solve this, but then, unfortunately, you would need a default lakehouse before you can use them...
smpa01
Super User

Not exactly what you are asking for, but Spark SQL can read tables from an unattached lakehouse by using an Azure Blob File System Secure (abfss) path, as follows:

 

%%sql
-- Create a temporary view over a Delta table in an unattached lakehouse.
-- The abfss path is truncated here; supply the full path to the table.
CREATE OR REPLACE TEMPORARY VIEW df
USING delta
OPTIONS (
  path 'abfss://'
);

SELECT * FROM df LIMIT 10;
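For completeness, standard Spark SQL can also query a Delta table directly by path without creating a view; the abfss URI below is a placeholder that assumes the OneLake path convention:

%%sql
-- Same idea without a temporary view; the placeholders below are
-- assumptions to be replaced with actual workspace/lakehouse/table names
SELECT *
FROM delta.`abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/<table>`
LIMIT 10;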

 

BHouston1
Regular Visitor
@smpa01 Yes, that is available in out-of-the-box Spark SQL, but I believe this idea is about being able to refer to tables using Fabric workspace/lakehouse conventions rather than abfss paths, similar to Unity Catalog in Databricks. You therefore wouldn't need a default lakehouse if you chose to always specify the lakehouse in each query.