Hi All.
I have a dataset of 2.9M rows that, after unpivoting and merging with one other table, grows to roughly 246M rows. After that I have to remove duplicates from the table (based on Col1, Col2, Col3 … Col7). With this step in place, when I Close & Apply, the dataset loads for 3-4 hours and then fails with a "buffer memory out of space" error.
Could anyone please help me out with this?
Solutions tried so far:
1. Took a dataset of 75,000 rows, applied all the steps, and checked that everything worked. After getting satisfactory results, I uploaded the dashboard to the Power BI service, renamed the 2.9M-row file to match the 75,000-row file's name, and refreshed it in the service. Result: this didn't work, as the refresh does not take in the 2.9M rows.
2. Tried joining the tables instead, but that didn't work either (a many-to-one join, even though the corresponding values are unique), so the row counts don't match.
Your dataset explodes to 246M rows, and the deduplication runs after query folding has already broken, so Power Query has to materialize the whole intermediate table in memory; that is what triggers the buffer error.
Fix: push the unpivot, merge, and deduplication down to the source (SQL or a Dataflow), remove unneeded columns as early as possible, and keep query folding alive. If that is not enough, use incremental refresh or DirectQuery.
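If the source is a relational database, one way to push the work down is a view that unpivots, joins, and deduplicates in a single set-based statement, so Power BI only ever receives the final rows. Below is a minimal T-SQL sketch; every name in it (FactRaw for the 2.9M-row table, DimLookup for the merged table, Metric1-Metric3 for the columns being unpivoted, KeyCol for the join key) is a hypothetical placeholder for your actual schema.

```sql
-- Hypothetical schema: FactRaw = the 2.9M-row source, DimLookup = the table
-- being merged in, Col1..Col7 = the deduplication key from the question.
WITH Unpivoted AS (
    -- Unpivot the metric columns into attribute/value pairs.
    SELECT f.KeyCol, u.AttributeName, u.AttributeValue
    FROM dbo.FactRaw AS f
    CROSS APPLY (VALUES
        ('Metric1', f.Metric1),
        ('Metric2', f.Metric2),
        ('Metric3', f.Metric3)
    ) AS u (AttributeName, AttributeValue)
),
Joined AS (
    -- Equivalent of the Power Query merge step.
    SELECT up.AttributeName, up.AttributeValue,
           d.Col1, d.Col2, d.Col3, d.Col4, d.Col5, d.Col6, d.Col7
    FROM Unpivoted AS up
    INNER JOIN dbo.DimLookup AS d
        ON d.KeyCol = up.KeyCol
),
Ranked AS (
    -- Keep exactly one row per Col1..Col7 combination.
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY Col1, Col2, Col3, Col4, Col5, Col6, Col7
               ORDER BY AttributeName  -- any deterministic tie-breaker
           ) AS rn
    FROM Joined
)
SELECT Col1, Col2, Col3, Col4, Col5, Col6, Col7,
       AttributeName, AttributeValue
FROM Ranked
WHERE rn = 1;
```

Wrap this in a view and connect Power BI to the view (or use it as a native query), so the 246M-row intermediate never leaves the server. If the deduplicated result is still too large to reload in full, add a date column to the view and configure incremental refresh over it, or switch the table to DirectQuery.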