Why the AI Act Isn’t a Monster Under the Bed You Need to Fear
Once upon a time, in a galaxy that now feels far, far away, I was a young computer science student at Cambridge. I vividly remember sitting in a final-year AI lecture, listening to a professor tell us that AI hadn’t lived up to the hype, wasn’t all that special, and would never take over the world.
As someone raised on a steady diet of 2001: A Space Odyssey and Terminator 2: Judgment Day, that was both a relief and, if I'm honest, a bit of a disappointment.
Fast forward to today, and that early “AI is nothing special” mindset is hard to reconcile with how AI now dominates the headlines - and our lives. From the mundane (targeted ads, smart thermostats, face-grouped photo albums on our phones) to the consequential (self-driving cars, AI-assisted medical diagnoses) to the dystopian (autonomous weapons), AI is everywhere.
So yes - AI is special. Whether because of its black-box complexity, its rapidly expanding social impact, or the challenge of regulating it responsibly, it deserves that label. Some call AI's impact on our lives the "Fourth Industrial Revolution." Maybe they're right.
This makes it all the more surprising how often I hear the EU AI Act described as regulatory overreach or anti-innovation.
I’ll be the first to admit - it’s not perfect legislation. The legislative process didn’t anticipate, or adapt well to, the meteoric rise of general-purpose AI (GPAI) models, for one. But given its status as the world’s first comprehensive AI law, it’s actually pretty decent.
Yes, there are ambiguities around how it will be applied and enforced. But that’s normal. Every major regulatory framework begins with uncertainty - remember the early days of the GDPR? Guidance evolves. Industry standards emerge. Case law develops. This will be no different.
And for most organisations, the truth is this: compliance with the AI Act will not be that hard. Here’s why:
True, I’ve glossed over some nuances above, including the fine-tuning of models and the repurposing of existing AI systems. But the core message stands: for the average enterprise, compliance with the AI Act isn’t that onerous. I’m not being dismissive of the Act. Quite the opposite - I see it as critically important legislation that places the heaviest burden on those who rightly should bear it.
But let’s move past the regulatory fear-mongering. For most organisations, the AI Act will not prove a straitjacket on innovation or business efficiency. It’s a measured, mostly sensible framework that reflects both the seriousness of the technology and our responsibility to wield it wisely.
If the AI Act is a burden, it’s one we should welcome.