📢 From tomorrow, 2 August 2025, the EU’s rules for the most advanced AI models officially apply. This is a pivotal moment: not just for the #AIAct but for trust in AI, innovation, and safety across the European Union. The AI Office must now step up and show that it can ensure compliance, especially from the largest model providers, many of whom are based outside the EU. Every new model release will be a test of the industry’s commitment to safe innovation and of the AI Office’s ability to enforce the rules.

✅ Many #GPAI companies, including Mistral, Google, Anthropic, OpenAI and Microsoft, have already signed the new Code of Practice. To me that is a very good signal: the Code is doable, and it sets the minimum expectations for transparency and safety in Europe. At the same time, serious concerns remain, and I am still waiting for a reply to my letter of 14 July to the AI Office leadership. The Commission’s formal enforcement powers will not apply until August 2026. That is far too late if we want to start building credibility within Europe.

⚡️ That’s why I call on the AI Office to:
▪︎ Monitor compliance from day one
▪︎ Thoroughly check providers’ actions
▪︎ Engage proactively on each major model release
▪︎ Report back to the European Parliament at least every two months

The AI Act must be applied like any other law. The EU’s digital future depends on strong enforcement, so we cannot afford a weak start. The AI Office must earn the public’s trust from day one!
The immediate application of the new EU rules for advanced AI models, with the AI Office as supervisory authority, is an important step. The measures aim to strengthen transparency, safety, and trust in AI technologies within the EU. It is a positive sign that major companies have voluntarily signed the Code of Practice. However, the delay of formal enforcement powers until 2026 remains a critical weakness, as it could undermine the regulation’s credibility. The calls for comprehensive monitoring, proactive oversight, and regular reporting underline the purpose: protecting users and preventing a weak start for AI legislation.
Thanks for sharing, Axel. It will be interesting to see which chapters of the Code each developer has chosen to adhere to (or not!). I do hope the AI Office will monitor compliance from day one. As you say, another year is far too long when things are moving so fast.
Thanks, Axel Voss, for highlighting this development. With a majority of AI labs and foundation model developers signing on, this initiative has strong potential to become a crucial monitoring component in global AI governance and safety. As you noted, the AI Office will need to scale its team appropriately to ensure successful implementation while maintaining an environment that supports continued AI innovation.
A crucial reminder that AI governance can’t succeed without global coordination. Efforts to bridge regional frameworks like the EU AI Act with broader international collaboration on GPAI are timely and much needed.
Axel, do you think it is realistic to require US and Chinese companies to adhere to EU rules?