
09 October 2025

The month in 5 bytes

  • EU Data Act
  • European Commission issues guidance for AI incidents
  • U.S. and UK sign a new Technology Prosperity Deal
  • White House Memo on Science & Technology spending
  • China’s AI labeling rules now in effect

EU Data Act

The EU Data Act has been fully applicable since 12 September 2025, introducing fundamental changes for manufacturers of “connected products” and providers of related services. It creates new rights for users and, consequently, new obligations on businesses. Companies involved in producing or handling connected products must now take the following actions:

  • Data Inventory: Businesses should map which product and service data is generated, where it is stored, and how it can be accessed. Collaboration with relevant technical teams is essential.
  • Scoping Products and Services: Manufacturers of connected products and providers of related services should identify which offerings fall within the scope of the Data Act.
  • Data Licensing and Usage Rights: Manufacturers may only use product and service data under clear contractual terms. If not already in place, data licence agreements should be established with users, for example by updating existing terms and conditions or by adding a Data Act Addendum.
  • User Requests: Users can now demand access to “their” data and request onward sharing with third parties. Processes for handling these requests and assigning responsible roles within the organisation are critical.
  • Product Information: Before contract conclusion, clear details on data types, storage location, access options, and intended use must be provided. These transparency obligations apply to sellers, lessors, and providers of connected products and related services.

Providers of digital add-on services (apps, platforms, remote services) are also affected. They must disclose detailed information on data use and sharing, and ensure non-discriminatory access and transfer. The Act also covers providers of data processing services such as cloud computing and SaaS: switching barriers (“vendor lock-in”) must be removed, data portability ensured, and exit processes enabled both contractually and technically.

European Commission issues guidance for AI incidents

On 26 September 2025, the European Commission released draft guidance and a reporting template to support providers of high-risk AI systems in meeting their upcoming serious incident reporting obligations under Article 73 of the EU AI Act, which is applicable from August 2026. Under the AI Act, the provider is required to report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

The guidance aims to clarify how and when to report serious incidents, with a view to ensuring early risk detection, rapid intervention, and increased public trust in high-risk AI technologies. A serious incident is defined as one that directly or indirectly leads to:

  • Death or serious harm to health or safety;
  • Serious and irreversible disruption of critical infrastructure;
  • A breach of fundamental rights under EU law; or
  • Serious harm to property or the environment.

If a high-risk AI system is subject to the serious incident reporting obligation under both the AI Act and other existing sectoral regulations such as DORA, NIS2 and the Medical Devices Regulation, the reporting obligation under the AI Act will apply only to serious incidents involving infringements of fundamental rights. Other types of serious incidents, such as those involving safety risks or critical infrastructure disruptions, would need to be reported under the applicable sectoral regulations, to avoid duplicative reporting.

The European Commission has also introduced a streamlined, user-friendly reporting template for providers and authorised representatives. Stakeholders can submit feedback on the draft materials by 7 November 2025.

U.S. and UK sign a new Technology Prosperity Deal

On 18 September 2025, the U.S. and UK entered a new Technology Prosperity Deal accompanied by a string of headline-grabbing UK investment announcements by major U.S. tech companies. The deal, formalized through a Memorandum of Understanding (MoU), marks a strategic collaboration between the two governments aimed at fostering technological innovation and economic growth. While the MoU itself is light on binding commitments, it sets the stage for enhanced cooperation in key areas including artificial intelligence, quantum computing, civil nuclear energy, and frontier technologies. In particular, the deal outlines the two governments’ intention to collaborate on research initiatives, developing secure AI infrastructure, and creating the “workforce of the future” (however, there are no specific funding commitments in relation to these objectives).

It is likely that the investment commitments made alongside the deal will result in an uptick in investment and M&A activity in the UK digital infrastructure and wider tech industry. However, obstacles to attracting technology investment in the UK remain that the deal does not address, such as taxation, immigration requirements and regulatory uncertainty, all of which have been the subject of discussion in connection with the deal’s arrival.

White House Memo on Science & Technology spending

On 23 September 2025, the White House issued a Memorandum on 2027 Budget Priorities and Cross-Cutting Actions in support of long-term Research & Development, addressed to the heads of executive departments and government agencies to guide how they shape science and technology budgets and programs. The Memo sets out five headline priorities: (1) leadership in critical and emerging technologies (AI, quantum, semiconductors, advanced networks/computing, advanced manufacturing), (2) energy, (3) security, (4) health & biotechnology, and (5) space. It pairs these with five system-wide actions: (1) embed Gold Standard Science, (2) build a STEM workforce, (3) expand access to world-class research infrastructure, (4) revitalize the S&T ecosystem through partnerships, and (5) advance technology transfer, while focusing funding on high-value, mission-aligned R&D.

Overall, the Memo is a practical compass for where U.S. public investment, standards work, and policy attention will concentrate. As government agencies translate it into programs and guidance, significant ripple effects can be expected outside the U.S. as well, in particular for international research collaboration, supply chains (especially chips and quantum), and the operational norms that shape AI assurance, research security, and technology governance.

China’s AI labeling rules now in effect

On 1 September 2025, China’s latest regulatory instruments targeting AI-generated content took effect, marking a pivotal shift in the country’s approach to content authenticity. Under the Measures for Labeling AI-Generated Synthetic Content and national standard GB 45438–2025, providers and platforms must ensure that AI-generated text, images, audio, video, and virtual scenes are clearly marked. This includes both visible and embedded labels such as metadata and watermarks.

The new standard outlines detailed technical specifications, including the required placement and persistence of labels. Platforms are further tasked with detecting unlabelled content, issuing risk notifications, and enforcing content traceability across the distribution chain. These measures build on earlier “deep synthesis” regulations, closing compliance gaps and reinforcing accountability for the use and spread of generative media.
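As an illustration only, the dual-labeling approach described above (a visible, user-facing label plus an embedded, machine-readable one) might be sketched as follows. The field names, label wording, and hash-based traceability shown here are hypothetical, not the identifiers mandated by the Measures or GB 45438-2025.

```python
import hashlib
import json


def label_content(text: str, generator: str) -> dict:
    """Attach both an explicit label and an embedded metadata label to a
    piece of AI-generated text. All field names are illustrative only."""
    # Explicit label: visible to end users in the rendered content.
    visible = f"[AI-generated] {text}"
    # Implicit label: machine-readable metadata that platforms could parse
    # to detect AI-generated content and trace it across distribution.
    embedded = {
        "aigc": True,
        "generator": generator,
        "content_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {"content": visible, "metadata": embedded}


def serialize(labeled: dict) -> str:
    """Bundle content and metadata so the label travels with the content."""
    return json.dumps(labeled, ensure_ascii=False)
```

A downstream platform could then inspect the embedded metadata to flag unlabelled content or verify provenance, mirroring the detection and traceability duties the standard assigns to platforms.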

Together, the new measures create a structured compliance environment for generative content, integrating legal and technical controls to prevent misuse, misinformation, and loss of content provenance. Their enforcement is likely to converge with China’s broader regulatory strategy for online information ecosystems.