nlucero
Advocate I

Azure SQL Mirrored DB says "metadata tables are corrupted" when stopping then restarting replication

I created a Mirrored SQL DB in Fabric that connects to a source Azure SQL DB via a service principal. I was able to get the replication up and running as expected: it began mirroring tables, the SQL Analytics Endpoint worked, and when I made changes to the source DB, those changes were mirrored into Fabric. However...

 

When I turned off my Fabric compute and turned it back on again, the Mirrored DB (in Fabric) said it was still replicating, but changes in the source DB were not actually being updated in Fabric. I waited hours and still nothing. The last update timestamps on my mirrored tables were all from before I first turned off the Fabric compute. Finally, I stopped the replication in Fabric and tried to restart it ("Start Replication"). When I did, the replication failed to restart with this error: Code: SqlChangeFeedError, Type: UserError, Message: Cannot enable fabric link on the database because the metadata tables are corrupted. ArtifactId: 352bb680-e0cc-4927-85d9-333b0b391c78. Since then, my mirroring will not restart.

 

Replication is disabled in my source Azure SQL DB because when I manually "enable" it and then try to start replication, Fabric throws a different error saying that my source DB's replication is already on. But, with replication off in my DB, I have no way to troubleshoot the actual problem.

 

The only thing that has worked so far is restoring the source DB from a restore point before I ever started replicating, creating a brand-new Mirrored DB in Fabric, and rebuilding the mirroring from scratch. But that will be a non-starter in production. I have to be able to fix a DB mirror without reverting my DB to a backup. And, more fundamentally, I need my DB mirrors in Fabric to be resilient to disruptions in Fabric compute or other intermittent network partitions.

 

Any advice?

1 ACCEPTED SOLUTION
nlucero
Advocate I

I heard back from MS support. TL;DR: you need to grant VIEW PERFORMANCE DEFINITION to the managed identity Fabric is using to connect to the DB:

GRANT SELECT, ALTER ANY EXTERNAL MIRROR, VIEW PERFORMANCE DEFINITION TO [User];

The official docs only mention the need for `SELECT, ALTER ANY EXTERNAL MIRROR`.
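To double-check what the Fabric principal actually holds, a query along these lines works (a sketch; [User] is whatever database user Fabric connects as, matching the GRANT above):

-- List the explicit permissions granted to the Fabric principal in this database.
SELECT pr.name AS principal_name,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name = N'User';   -- replace with the actual principal name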

 

This does not fix a database that already has corrupted metadata, but it prevents the reproduction steps I mention in the first post from generating a metadata corruption in the first place.

 

I believe MS is planning to push a fix for this before the end of the month, but I don't know whether that will be just an update to the documentation or a code fix that makes the "ANY EXTERNAL MIRROR" permission sufficient.


12 REPLIES
I can confirm that this access right solves my problem when using a "Service Principal" identity as registered in Entra against the workspace, i.e. the identity used when registering the connection for the mirror must pre-exist in SQL with the relevant login, user and grants. You cannot create logins using the preview browser-based query editor for Azure SQL Database; you must use SQL Server Management Studio.

Assuming the workspace name is XXX and you are using Azure SQL Database (NOT Managed Instance):

 

1. Use Management Studio to execute:

CREATE LOGIN [XXX] FROM EXTERNAL PROVIDER;



2. Switch to the target Azure SQL Database and create the user:

CREATE USER [XXX] FOR LOGIN [XXX];

The documentation mentions a server role [##MS_ServerStateReader##] to which the login should be added, but this does not make sense in the context of Azure SQL Database, where you cannot access master.


3. Grant the rights as specified in the documentation.

GRANT SELECT, ALTER ANY EXTERNAL MIRROR, VIEW PERFORMANCE DEFINITION TO [XXX];
GO
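
As a sanity check before moving on to the connection setup, something like the following (a sketch; XXX is the workspace/service principal name used above) confirms the user exists and was created from the external provider:

-- Confirm the Entra-based user exists in the target database.
SELECT name, type_desc, authentication_type_desc
FROM sys.database_principals
WHERE name = N'XXX';   -- expect type_desc = EXTERNAL_USER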


Within Entra you will have to find the app registration for the workspace and generate a secret. Use that secret, the Entra tenant ID, and the client ID of XXX (from Entra) to set up the Service Principal-based authenticated connection when registering the mirror connection.

I managed to stop and start the replication with impunity, in many variations, without causing any corruption to the target Azure SQL Database metadata.

My only caveat is that I am working in a Trial Capacity that I cannot pause, so I could not test @nlucero's compute-pause scenario!

v-ssriganesh
Community Support

Hi @nlucero,
Thank you for reaching out to the Microsoft Fabric Forum Community.

After thoroughly reviewing the details you provided, here’s how you can troubleshoot this without needing to restore your source database:

  • Go to the mirrored database in the Fabric portal and use the Monitor Replication section. Look for outdated timestamps or error alerts on your tables. This will confirm if replication is stalled.
  • In the Azure Portal, ensure the System Assigned Managed Identity (SAMI) for your Azure SQL logical server is enabled (check under Identity settings). Then, in Fabric, go to the mirrored database item, select Manage Permissions, and confirm the SAMI has Read and Write access.
  • Check that your Azure SQL Database allows connections from Azure services (see Networking settings in the Azure Portal). If you're using a private endpoint, you may need to set up a virtual network data gateway in Fabric to maintain a stable connection.

If the error persists, you might need to delete the mirrored database item in Fabric (this won’t affect your source database) and create a new one using the same connection details. This can often resolve metadata issues without touching the source database.

If this information is helpful, please “Accept as solution” and give a "kudos" to assist other community members in resolving similar issues more efficiently.
Thank you.

I was able to isolate the issue and the specific scenario in which it occurs:
 
If I have a working Fabric SQL Mirror connected to a source Azure SQL DB and I then pause and resume the Fabric compute, the Fabric mirror will report that it is still replicating, but it is not. If I "Stop Replication" and then "Start Replication" on the mirror in the Fabric UI after restarting the Fabric compute, the metadata will become corrupted.
 
However, if after restarting the compute I instead go to "Configure Replication" in the Fabric Mirrored DB, change nothing, but then "Apply Changes", it will successfully resume the replication.
 
So the problem occurs specifically in the following sequence of events:
  1. Start with a Fabric Mirror that is successfully replicating.
  2. Stop the Fabric compute that the mirror depends on.
  3. Restart the Fabric compute.
  4. "Stop Replication" on the mirror.
  5. "Start Replication" on the mirror.
  6. This always produces a metadata corruption in the Azure SQL DB.
 
The problem also intermittently occurs when:
  1. Start with a Fabric Mirror that is successfully replicating.
  2. Stop the replication (keeping the Fabric compute on).
  3. Start the replication again.
  4. This intermittently produces a metadata corruption in the Azure SQL DB.
 
And to avoid the metadata corruption:
  1. Start with a Fabric Mirror that is successfully replicating.
  2. Stop the Fabric compute that the mirror depends on.
  3. Restart the Fabric compute.
  4. In the Fabric Mirror UI, click "Configure replication". Change nothing, then click "Apply changes".
  5. The replication will resume as expected.
 
Even though I now know how to avoid this issue (namely, by reconfiguring the replication instead of ever starting or stopping it), I still want to emphasize that this is a significant issue for anyone putting a production workload on Fabric because if the metadata becomes corrupted for any reason, it seems to require an Azure SQL restore from backup to resolve it. Creating a new mirrored DB on top of an Azure SQL DB that has already experienced metadata corruption does not work; it will not replicate once corrupted. So, I still need to know if there is a way to recover or reset an Azure SQL DB with corrupted replication metadata without restoring the DB from backup, because restoring from a backup means downtime and lost data.
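
For anyone hitting the same "says it is replicating but isn't" symptom, a possible way to look at the change feed from the source database side is via the documented change feed monitoring objects (a sketch; not confirmed in this thread to surface this particular corruption):

-- Run in the source Azure SQL Database being mirrored.
-- Overall change feed configuration and state:
EXEC sys.sp_help_change_feed;

-- Recent log scan sessions; a stalled feed stops producing new rows here:
SELECT * FROM sys.dm_change_feed_log_scan_sessions;

-- Errors raised by the change feed:
SELECT * FROM sys.dm_change_feed_errors;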

Hi @nlucero,

We sincerely regret the inconvenience this issue has caused and appreciate your detailed investigation, especially your valuable workaround using the reconfiguration approach. Given that the metadata corruption resides in the Azure SQL Database and requires a reset beyond standard portal options, we recommend raising a Microsoft support ticket for further assistance. You can create a support ticket using the link below:
https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket

Please include the following details in your ticket to help the support team:

  • The full error message (Msg 22710... Cannot enable fabric link on the database because the metadata tables are corrupted).
  • The ArtifactId (352bb680-e0cc-4927-85d9-333b0b391c78 from your original post).
  • The specific sequence triggering the issue (e.g., compute pause > stop/start replication).
  • Your finding that reconfiguration avoids corruption.

If this guidance helps, please “Accept as Solution” and drop a “kudos” to make it easier for other community members to find. We hope this resolves the issue for you.

Thank you.

Hi @nlucero,
Could you please confirm if the issue has been resolved after raising a support case? If a solution has been found, it would be greatly appreciated if you could share your insights with the community. This would be helpful for other members who may encounter similar issues.

Thank you for your understanding and assistance.

I raised a support case several days ago. Support confirmed that other users had reported a similar error and the product team is looking into it. A member of the product team was able to manually uncorrupt the metadata in my Azure SQL DB, but I am still awaiting information about how/if I can do that for myself. And, in the interim, a different Azure SQL DB has experienced metadata corruption in a slightly different scenario.

 

It has taken many days and I've received almost no useful information, despite following up with support almost daily. With the state of this technology and its support channel, I will likely need to find a different solution for this workload.

Hi @nlucero,

Thank you for the update, and I truly appreciate you taking the time to share your continued experience.

You've clearly gone above and beyond in your investigation, and your findings have already helped others in the community understand and avoid this issue. At this point, continuing to work through your existing support ticket is the appropriate channel for resolution.

In the meantime, I’ll continue monitoring this thread closely and will share any related community updates or documentation changes as they become available.

Thank you again for your detailed contributions; they're helping improve visibility into this important issue for the broader community.

If you feel any part of this discussion has been helpful, please consider clicking “Accept as Solution” so it can gain more visibility and assist others encountering similar challenges.
Thank you.

I have also raised a support ticket. Since what I was working on was just a PoC, I did not require a "then and there" fix. They have confirmed that the only way to fix it is by "fiddling" behind the scenes, and that end-users are in no position to solve the problem for themselves. I was assured that it has been escalated to the Product Team, with no commitment to an estimated fix date. The lack of "traceability" leaves me feeling uncomfortable, as the issue has not made its way onto the official "known issues" list. All that the support desk could provide was verbal assurance.

As it stands, I will not take Fabric Mirror anywhere near a production workload without a root cause fix!



I brought it up with the PG members (pointing them to this thread) and at least they seem to be acknowledging the issue now. Didn't sound like an easy fix though.

Just to confirm, I have experienced the exact same problem and managed to replicate the scenario using a Fabric Trial. Based on this risk we would not consider taking the technology to production. A simple start and stop of the replication should not have such a high impact on the source. For a platform delivered as a managed service, this is truly alarming.

My steps were simply to:

1. Stop the Fabric mirror

2. Start the Fabric mirror


Since it was a Fabric Trial I could find no way to pause the capacity. 

 

DBCC checks and DMVs could not identify any problems or corruption in the source Azure SQL Database. Dynamic management views are probably fruitless anyway, since the link itself cannot be established.

The exact same error : "Code: SqlChangeFeedError, Type: UserError, Message: Cannot enable fabric link on the database because the metadata tables are corrupted. ArtifactId: e21c6419-bdca-497c-8f00-96ad0c262608"

 

Destroying the mirror and trying to recreate it from scratch has absolutely no effect on the outcome, which reinforces the suspicion that something fundamentally changes in the source database.

Thanks for the reply @v-ssriganesh - I worked through all of the steps above and confirmed they're not the root problem. Details below. But, in short, I am able to see the same error in the DB itself when I run:

-- Manually invoking the change feed enablement that mirroring relies on
-- reproduces the same error directly in the source database.
EXEC sys.sp_change_feed_enable_db
    @destination_type = 2;
GO


The full error message is: 

Msg 22710, Level 16, State 1, Procedure sys.sp_synapse_link_enable_db_internal, Line 415 [Batch Start Line 0]
Could not update the metadata. The failure occurred when executing the command 'SetTridentLink(Value = 1)'. The error/state returned was 22697/1: 'Cannot enable fabric link on the database because the metadata tables are corrupted.'. Use the action and error to determine the cause of the failure and resubmit the request.

So, it appears that the corrupted metadata tables are in my Azure SQL DB, independent of Fabric, and I don't know how to uncorrupt or reset them. Further, it will be a problem if the metadata tables become corrupted any time I stop replication and/or turn off my Fabric compute.

 

More Detail:

 

This is the view of the mirrored DB in my Fabric portal. Notice that even though I previously had tables successfully mirroring, the portal won't even list them when this error is thrown.

[Screenshot: mirrored database view in the Fabric portal, with no tables listed]

I went to "manage permissions" and confirmed that my SAMI is configured with Read/Write permissions on the db. I then went into the connection definition itself, removed the credentials, and then re-added them to confirm that the connection parameters are valid and the connection is active (it is).

 

Also, the database does allow connections from Azure services; I confirmed this. Additionally, Fabric is having no problem connecting to that Azure SQL DB in general: I opened a new Data Factory job and successfully confirmed a connection to the same DB using the same access credentials. So, the connection itself doesn't seem to be the problem. As summarized above, I'm getting the "corrupt metadata" error in the Azure SQL DB itself, even when not accessing from Fabric.
