
Digital Safety Content Report

The Digital Safety Content Report (DSCR) covers actions that Microsoft has taken in relation to child sexual exploitation and abuse imagery (CSEAI), grooming of children for sexual purposes, violent extremist and terrorist (VET) content, and non-consensual intimate imagery (NCII).

Digital technologies allow people across the globe to share information, news, and opinions that, together, span a broad range of human expression. Unfortunately, some people use online platforms and services to exploit the darkest sides of humanity, which diminishes both safety and the free exchange of ideas.

At Microsoft, we believe digital safety is a shared responsibility requiring a whole-of-society approach. This means that the private sector, academic researchers, civil society, and governmental and intergovernmental actors all work together to address challenges that are too complex – and too important – for any one group to tackle alone. 

For our part, we prohibit certain content and conduct on our services, and we enforce rules that we’ve set to help keep our customers safe. We use a combination of automated detection and human content moderation to remove violating content and suspend accounts. Additional information is available on Microsoft’s Digital Safety site.

The Microsoft Services Agreement includes a Code of Conduct that outlines what’s allowed and what’s prohibited when using a Microsoft account. Some services offer additional guidance, such as the Community Standards for Xbox, to show how the Code of Conduct applies on their services. Reporting violations of the Code of Conduct is critical to helping keep our online communities safe for everyone. More information on how to report content and conduct is included below.

Protecting children online

Practices

Microsoft has a long-standing commitment to child safety online. We develop tools and engage with a variety of stakeholders to help address this issue. As specified in our Code of Conduct and on our Digital Safety site, we prohibit any child sexual exploitation or abuse: content or activity that harms or threatens to harm a child through exploitation, trafficking, extortion, or endangerment. This includes sharing visual media that contains sexual content involving or sexualizing a child, as well as grooming children for sexual purposes.

Microsoft is a member of the WePROTECT Global Alliance, the multistakeholder organization fighting child sexual exploitation and abuse online. Microsoft also supports the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse.

Microsoft is a founding member of the Technology Coalition, the tech industry’s non-profit association to combat online child sexual exploitation and abuse. We also support and/or hold leadership and advisory roles with numerous other child safety organizations, including the Family Online Safety Institute, INHOPE, the Internet Watch Foundation, and the National Center for Missing and Exploited Children (NCMEC).


Processes and systems

Detection and removal of child sexual exploitation and abuse imagery (CSEAI)

We deploy tools to detect child sexual exploitation and abuse imagery (CSEAI), including hash-matching technology (e.g., PhotoDNA or MD5) and other forms of proactive detection. Microsoft developed PhotoDNA, a robust hash-matching technology, to help find duplicates of known child sexual exploitation and abuse imagery; we continue to make PhotoDNA freely available to qualified organizations, and we leverage it across Microsoft’s consumer services. In-product reporting is also available for services such as OneDrive, Skype, Xbox, and Bing, whereby users can report suspected child exploitation or other violating content.
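
To illustrate the basic idea of hash matching, the sketch below shows exact-duplicate detection with a cryptographic hash such as MD5. This is a hypothetical, minimal example: the hash set and helper names are ours, and PhotoDNA's own algorithm, which also matches resized or lightly altered copies, is proprietary and not shown here.

    import hashlib

    # Hypothetical set of MD5 digests of previously identified CSEAI,
    # e.g., ingested from a clearinghouse. Placeholder value only.
    KNOWN_BAD_MD5 = {
        "0" * 32,  # placeholder digest, not a real entry
    }

    def is_exact_duplicate(image_bytes: bytes) -> bool:
        """Exact-duplicate check: flags only byte-identical files.

        A cryptographic hash such as MD5 changes completely if even one
        byte of the file changes, which is why robust (perceptual)
        hashing such as PhotoDNA is needed to catch altered copies.
        """
        return hashlib.md5(image_bytes).hexdigest() in KNOWN_BAD_MD5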

As a U.S.-based company, Microsoft reports apparent CSEAI, including grooming, to NCMEC via the CyberTipline, as required by U.S. law. We take action on the account(s) associated with the content we have reported to NCMEC. Users have the opportunity to appeal these account actions by visiting the Moderation and enforcement webpage and using this Account Reinstatement webform.

Addressing AI-generated child sexual abuse risks

In 2024, Microsoft joined Thorn, All Tech Is Human, and other industry leaders in committing to Safety by Design Principles that guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM). These Principles – Develop, Deploy, and Maintain – align with Microsoft’s longstanding approach to responsible AI and online child safety. In its Responsible AI Transparency Report, Microsoft details how it builds and oversees AI systems responsibly. This Digital Safety Content Report outlines how we tackle child sexual abuse and exploitation risks, including addressing any AIG-CSAM on our services. For additional detail on our approach to AIG-CSAM, read this overview.


Outcomes – July through December 2024

During this period, Microsoft submitted 49,617 reports to NCMEC.

For our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – Microsoft actioned 53,982 pieces of content and 9,269 consumer accounts associated with CSEAI, including grooming of children for sexual purposes, during this period. Microsoft detected 99.64% of the actioned content through automated technologies; the remainder was reported to Microsoft by users or third parties. Of the accounts actioned, 4.49% were reinstated upon appeal and further review of the content.

Microsoft works to prevent CSEAI from entering the Bing search index by leveraging block lists of sites containing CSEAI identified by credible sources, and through PhotoDNA scanning of the index and of images users upload to Bing-hosted features such as visual search. During this reporting period, Microsoft actioned 226,811[1] pieces of content which were flagged[2] as apparent CSEAI through content moderation processes and reported to NCMEC, with 99.82% detected through PhotoDNA scanning and other proactive measures.

Note: Data in this report represents the period July-December 2024 and includes Microsoft hosted consumer services such as OneDrive, Outlook, Skype, Xbox, and Bing. Skype was retired on May 5, 2025, but was available during the period covered by this report. This report does not include data for LinkedIn or GitHub, which issue their own transparency reports.

  • [1] The volume of content actioned in this period is higher than in prior periods because of enhancements in data collection, which increased the volume of actionable content.
  • [2] Flagged content encompasses both automatically detected content and content reported by users.


FAQ

Questions about Child Sexual Exploitation and Abuse Imagery

  • Microsoft has in-product reporting for services such as OneDrive, Skype, Xbox, and Bing, whereby users can report suspected CSEAI or other types of content.
  • In 2009, Microsoft partnered with Dartmouth College to develop PhotoDNA, a technology that aids in finding and removing known images of child sexual exploitation and abuse.

    PhotoDNA creates a unique digital signature (known as a “hash”) of an image, which is then compared against signatures (hashes) of other photos to find copies of the same image. When matched with a database containing hashes of previously identified illegal child sexual abuse images, PhotoDNA helps detect, disrupt, and report the distribution of child sexual exploitation material. PhotoDNA is not facial recognition software and cannot be used to identify a person or an object in an image. A PhotoDNA hash is not reversible, meaning it cannot be used to recreate an image. An illustrative sketch of this robust-matching idea appears after this list.

    Microsoft has made PhotoDNA freely available to qualified organizations, including technology companies, law enforcement agencies, developers, and non-profit organizations.

    More information can be found on the PhotoDNA site.

  • As explained by the National Center for Missing & Exploited Children (NCMEC), the CyberTipline “is the nation’s centralized reporting system” through which “the public and electronic service providers can make reports of suspected online enticement of children for sexual acts, extra-familial child sexual molestation, child pornography, child sex tourism, child sex trafficking, unsolicited obscene materials sent to a child, misleading domain names, and misleading words or digital images on the internet.”

    As a U.S.-based company, Microsoft reports all apparent CSEAI to NCMEC, as required by U.S. law. According to NCMEC, staff review each tip and work to determine a potential location for the reported incident so that the report may be made available to the appropriate law enforcement agency across the globe. A CyberTip report to NCMEC can include one or multiple items.

  • Microsoft complies with global regulations to take action against child sexual exploitation and abuse content it discovers on its services. For example, pursuant to 18 USC 2258A, we report apparent child sexual exploitation content to the National Center for Missing and Exploited Children, which serves as a clearinghouse to notify law enforcement globally of suspected illegal child sexual exploitation content.
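
The sketch below illustrates, under stated assumptions, how robust hash matching differs from exact matching. A simple “average hash” stands in for a perceptual hash; it is not PhotoDNA, whose algorithm is proprietary. The key properties it demonstrates are that similar images yield hashes differing in only a few bits, and that the hash cannot be reversed into the image.

    # Illustrative perceptual ("average") hash; NOT PhotoDNA.
    # Similar images yield nearby hashes; a hash cannot be reversed
    # to reconstruct the original image.

    def average_hash(gray: list[list[int]]) -> int:
        """Hash a small grayscale image (e.g., 8x8 pixels, values 0-255)
        into a 64-bit integer: each bit records whether a pixel is
        brighter than the image's mean."""
        flat = [p for row in gray for p in row]
        mean = sum(flat) / len(flat)
        bits = 0
        for p in flat:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Count of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    def matches_known(h: int, known: set[int], threshold: int = 5) -> bool:
        """Treat an image as a likely copy if its hash is within a few
        bits of any previously identified hash. The threshold is a
        tunable assumption here, not a PhotoDNA parameter."""
        return any(hamming(h, k) <= threshold for k in known)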

Addressing violent extremist and terrorist content

Practices

At Microsoft, we recognize that we have an important role to play in helping to prevent violent extremists from exploiting digital platforms by addressing violent extremist and terrorist (VET) content on our hosted consumer services. As specified in our Code of Conduct and on our Digital Safety site, we prohibit content that promotes or glorifies violent extremists or terrorists, helps them to recruit, or encourages or enables their activities. We look to the United Nations Security Council’s Consolidated List to identify terrorists or terrorist groups. Violent extremists include people who embrace an ideology of violence or violent hatred towards another group.

Microsoft's approach to addressing VET content is consistent with our responsibility to manage our services in a way that respects fundamental values such as safety, privacy, and freedom of expression. We collaborate with multistakeholder partners—including the Global Internet Forum to Counter Terrorism (GIFCT), the Christchurch Call to Action, and the EU Internet Forum—to work collectively to eliminate VET content online.

Microsoft is a founding member of GIFCT and, in 2024, held the Chair of the GIFCT Operating Board. Through GIFCT, Microsoft participates in a range of activities, including GIFCT’s Incident Response processes. In the event GIFCT activates a Content Incident or Content Incident Protocol, Microsoft ingests related hashes from GIFCT’s hash-sharing database. This allows Microsoft to quickly become aware of, assess, and address content circulating on its hosted consumer services resulting from an offline terrorist or violent extremist event, consistent with Microsoft policies. For further information, reference GIFCT's annual transparency report, which includes information on the hash-sharing database.


Processes and systems

VET Content Prevention, Detection, and Enforcement

We review reports from users and third parties of potential VET content, take action on content, and, if necessary, take action on accounts associated with violations of our Code of Conduct. Users have the opportunity to appeal these account actions using this Account Reinstatement webform. In addition, Microsoft leverages hash-matching technology to address the reappearance of online content that has previously been identified as VET content in violation of Microsoft’s policies: hashes generated from user-generated content (UGC) are compared with hashes of reported (known) VET content, in a process called “hash matching.”
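
A minimal sketch of this workflow follows, assuming hypothetical names throughout (this is not Microsoft's actual pipeline): hashes ingested from a shared source, such as the GIFCT hash-sharing database described above, are compared against hashes of incoming UGC, and matches are queued for action under the platform's own policies.

    from dataclasses import dataclass, field

    @dataclass
    class ModerationQueue:
        """Holds hash-match results pending policy review; illustrative only."""
        actioned_content: list[str] = field(default_factory=list)
        accounts_for_review: set[str] = field(default_factory=set)

    def scan_ugc(ugc_items: list[tuple[str, str, str]],
                 known_vet_hashes: set[str],
                 queue: ModerationQueue) -> None:
        """ugc_items holds (content_id, account_id, content_hash) triples.

        A hash match alone does not decide the outcome: matched content
        is actioned per the platform's own content policies, and the
        associated account may be reviewed, with appeal available.
        """
        for content_id, account_id, content_hash in ugc_items:
            if content_hash in known_vet_hashes:
                queue.actioned_content.append(content_id)
                queue.accounts_for_review.add(account_id)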


Outcomes – July through December 2024

During this period, for our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – Microsoft actioned 599 pieces of content associated with VET. Microsoft detected 97.16% of the actioned content through automated technologies; the remainder was reported to Microsoft by users or third parties. Of the accounts actioned for VET content, none were reinstated upon appeal.

Note: Data in this report represents the period July-December 2024 and includes Microsoft hosted consumer services such as OneDrive, Outlook, Skype, and Xbox. Skype was retired on May 5, 2025, but was available during the period covered by this report. This report does not include data for LinkedIn or GitHub, which issue their own transparency reports.


FAQ

Questions about Violent Extremist and Terrorist Content

  • Microsoft leverages hashes to detect duplicates of known violent extremist and terrorist content on our hosted consumer services. Microsoft determines whether to action matched content according to our own content policies.

    For more information on the GIFCT hash-sharing database, including information on total number of hashes and breakdown by type, please refer to the annual GIFCT transparency report.

  • Our Bing search engine strives to be an unbiased information and action tool, presenting links to all relevant information available on the Internet. Therefore, we will remove links to VET-related content from Bing only when that takedown is required of search providers under local law. Government requests for content removal are reported as part of our Government Requests for Content Removal Report.

Non-consensual intimate imagery

Practices

Microsoft takes seriously the harm caused by the sharing of non-consensual intimate imagery. Sharing intimate images of another person without that person’s consent violates their personal privacy and dignity.

Microsoft prohibits the creation and distribution of non-consensual intimate imagery (NCII). Microsoft also prohibits content soliciting NCII or advocating for the production or redistribution of intimate imagery without the subject’s consent. This includes photorealistic NCII content that was created or altered using technology. An overview of Microsoft’s approach to intimate imagery can be found here.

In the period covered by this report, Microsoft commenced a new partnership with StopNCII.org to pilot a victim-centered approach in Bing, leveraging hashes from victim reports to detect duplicate content in Bing’s image search results. StopNCII.org is a free tool designed to support victims of NCII and enable action across multiple participating platforms. Find out more on their site: StopNCII.org


Processes and systems

NCII Prevention and Detection

Any member of the public can request the removal of a nude or sexually explicit image or video of themselves which has been shared on a Microsoft consumer service without their consent. Microsoft has a dedicated Report a Concern page that gives guidance on how to report digital safety harms like NCII and others. Once violating content is reviewed and confirmed, Microsoft removes reported links to photos and videos from search results in Bing globally and/or removes access to the content itself when shared on Microsoft hosted consumer services. This includes both real content and synthetic (“deepfake”) imagery.

Adults concerned about non-consensual sharing of their imagery can also report it to StopNCII.org.

Outcomes – July through December 2024

During this period, for our hosted consumer services – such as OneDrive, Outlook, Skype, and Xbox – and Bing, Microsoft received 933 requests for removal of non-consensual intimate imagery (NCII). Of those requests, 540 were actioned, accounting for 57.88% of the total requests received.
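
For clarity on how the rates in this report are derived, the snippet below reproduces this section's figure from the reported counts (the variable names are ours):

    # Derivation of the actioned-request rate reported above.
    requests_received = 933
    requests_actioned = 540

    actioned_rate = requests_actioned / requests_received
    print(f"{actioned_rate:.2%}")  # prints 57.88%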

Note: Numbers are aggregated across Bing and Microsoft hosted consumer services for which a content removal request was received during this reporting period. Skype was retired on May 5, 2025, but was available during the period covered by this report.


FAQ

Questions about non-consensual intimate imagery

  • Microsoft does not allow the sharing or creation of sexually intimate images of someone without their permission—also called non-consensual intimate imagery, or NCII. This includes photorealistic NCII content that was created or altered using technology. We do not allow NCII to be distributed on our services, nor do we allow any content that praises, supports, or requests NCII.

    In previous years, we have reported this as “non-consensual pornography.” However, we have updated this term to “non-consensual intimate imagery” to ensure that the language we use to refer to this type of violation is respectful to victims and reflects the intrusive and damaging nature of this type of content.

  • Microsoft takes a range of steps to address the risk that our services are misused to facilitate NCII harms, including tackling abusive AI-generated content. An overview is available in this blog: An update on our approach to tackling intimate image abuse - Microsoft On the Issues
  • Microsoft has a dedicated Report a Concern page that gives guidance on how to report digital safety harms like NCII and others.

General questions about this report

  • This report addresses Microsoft consumer services including (but not limited to) OneDrive, Outlook, Skype, Xbox and Bing. Xbox also publishes its own transparency report, outlining our approach to safety in gaming. This report does not include data representing LinkedIn or GitHub which issue their own transparency reports. Skype was retired on May 5, 2025, but data from Skype is included in the period covered by this report.
  • When we refer to “hosted consumer services,” we are talking about Microsoft services where Microsoft hosts content generated or uploaded by credentialed users (i.e., those logged into a Microsoft account). Examples of these services include OneDrive, Outlook, and Xbox.
  • For this report, “content actioned” refers to when we remove a piece of user-generated content, such as images and videos, from our services and/or block user access to a piece of user-generated content.

    For Bing, “content actioned” may also mean filtering or de-listing a URL from the search engine index.

  • For this report, “account actioned” refers to when we suspend or block access to an account or restrict access to content within the account.
  • “Proactive detection” refers to Microsoft-initiated flagging of content on our services, whether through automated or manual review.
  • “Accounts reinstated” refers to actioned accounts that were fully restored, including content and account access, upon appeal.
  • Hash-matching technology uses a mathematical algorithm to create a unique signature (known as a “hash”) for digital images and videos.