mtomova
Helper III

Drop table in a Fabric Lakehouse before exporting a new one via a python notebook

Hi,

 

I have a table in a lakehouse, and this table is loaded into a Python notebook within Fabric.

The data is manipulated in Python, and then I write the updated data back as a file in my lakehouse.

What I want to do next, but am failing at, is to delete the old table in my lakehouse and create a new one from the updated file.

 

Please see below the steps that the notebook is executing:

 

1. Load the data (a table) from the lakehouse.
 
2. A Python function manipulates the data.
 
3. Export the dataframe as a csv file to 'Files' in the lakehouse.
 
4. Load the csv file as a delta table (manually at the moment, but I am looking to automate this step!).
 
I am not sure how to automatically drop the table in the lakehouse and then create a new table from the exported csv file.
I have read a lot online, but I can't find the correct steps in Python or in a pipeline.
 
Hopefully someone can help.
 
Thanks,
Maria
1 ACCEPTED SOLUTION
mtomova
Helper III

Hi, I found a solution, but in PySpark.

 

I was unable to find a way to do what I wanted in pure Python.

 

Once I'd updated the user-defined function, I was able to drop the existing table and save a new version.

 

Some of the code I used matches the code posted by @andrewsommer.

 

spark.sql("DROP TABLE IF EXISTS LakehouseName.MyTableName")

df.write.mode("overwrite").format("delta").saveAsTable("MyTableName")
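
For anyone automating the same flow, here is a minimal sketch of how these two lines can slot into a cell that also reads the exported csv back from Files (the path and table names below are placeholders, assuming a default lakehouse is attached):

# Read the csv that the Python step exported to the Files area (placeholder path)
df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("Files/updated_data.csv"))

# Drop the old table and re-create it from the updated data
spark.sql("DROP TABLE IF EXISTS LakehouseName.MyTableName")
df.write.mode("overwrite").format("delta").saveAsTable("MyTableName")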


5 REPLIES 5
mtomova
Helper III

Hi, thanks for the detailed information.

 

However, the notebook is in Python, and I am using Python libraries and functions that don't run in Spark.

 

I am trying to perform the steps you have described above using Python.

 

Unfortunately, I can't execute what I am after in Spark. I have tried to re-write the code, but I lack the knowledge to do so.

 

Thanks,

Maria
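
In case it helps anyone hitting the same mix of pandas-style Python and Spark, one common pattern (a minimal sketch with placeholder names, not from this thread) is to keep the transformation in pandas and use Spark only for the table swap, converting between the two DataFrame types:

# Spark table -> pandas DataFrame (reasonable while the table fits in driver memory)
pdf = spark.read.table("MyTableName").toPandas()

# Existing pandas-based transformation (placeholder function name)
pdf_updated = my_python_transformation(pdf)

# pandas DataFrame -> Spark DataFrame, then overwrite the Delta table
sdf = spark.createDataFrame(pdf_updated)
sdf.write.mode("overwrite").format("delta").saveAsTable("MyTableName")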

v-vpabbu
Community Support

Hi @mtomova,

 

Thanks @andrewsommer for addressing the issue.

 

We would like to follow up to see if the solution provided by the super user resolved your issue. Please let us know if you need any further assistance.
If the super user's response resolved your issue, please mark it as "Accept as solution" and click "Yes" if you found it helpful.

 

Regards,
Vinay Pabbu

Hi,

 

I am using Python, and the steps @andrewsommer has provided, although super detailed and clear, are not working in my case.

 

When I try to run my function, it fails because I have used Python syntax, and I am unable to re-write it in PySpark.

 

Is there a way to perform the automation I need using Python, or does Fabric not support that, meaning I need to use PySpark?

 

Thanks,

Maria 
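
(For reference, not from this thread: one route sometimes used in a pure Python Fabric notebook, assuming the deltalake package is available and a default lakehouse is attached, is to read and write the lakehouse table directly with the deltalake library. The table path and transformation function below are placeholders, and in some setups the path may need to be the lakehouse's full abfss URI.)

from deltalake import DeltaTable, write_deltalake

# Load the existing lakehouse table into pandas (placeholder table path)
df = DeltaTable("Tables/MyTableName").to_pandas()

# Existing pure-Python transformation (placeholder function name)
df_updated = my_python_transformation(df)

# Overwrite the Delta table with the updated data
write_deltalake("Tables/MyTableName", df_updated, mode="overwrite")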

andrewsommer
Memorable Member

You’re on the right track.

1. Load the Lakehouse table:

df = spark.read.table("LakehouseName.TableName")

2. Manipulate your DataFrame:

df_updated = your_transformation_function(df)

3. Write the new data to a temporary Delta path. It's better to write as Delta, not CSV, for easier ingestion into a Lakehouse table:

output_path = "Files/your_path/new_table_data"

df_updated.write.format("delta").mode("overwrite").save(output_path)

4. Drop the existing table:

spark.sql("DROP TABLE IF EXISTS LakehouseName.TableName")

5. Create a new table from the Delta files:

spark.sql(f"""
CREATE TABLE LakehouseName.TableName
USING DELTA
LOCATION '{output_path}'
""")

6. Automate it in a pipeline (a combined sketch of the notebook code follows after this list):
    1. Add a notebook activity with your script.
    2. Optionally add a Lakehouse activity to clean up files or validate the table.
    3. Schedule or trigger the pipeline as needed.
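
Combining the steps above, a minimal sketch of what the notebook activity could run (table name, path, and transformation function are placeholders, assuming a default lakehouse is attached):

# 1. Load the existing table
df = spark.read.table("LakehouseName.TableName")

# 2. Apply the transformation (placeholder function)
df_updated = your_transformation_function(df)

# 3. Write the updated data as Delta files under the lakehouse Files area
output_path = "Files/your_path/new_table_data"
df_updated.write.format("delta").mode("overwrite").save(output_path)

# 4-5. Drop the old table and re-create it over the new Delta files
# (the LOCATION may need to be the lakehouse's full abfss URI in some setups)
spark.sql("DROP TABLE IF EXISTS LakehouseName.TableName")
spark.sql(f"CREATE TABLE LakehouseName.TableName USING DELTA LOCATION '{output_path}'")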

 

Please mark this post as the solution if it helps you. Kudos are appreciated.
