Why the AI Act Isn’t a Monster Under the Bed You Need to Fear

Once upon a time, in a galaxy that now feels far, far away, I was a young computer science student at Cambridge. I vividly remember sitting in a final-year AI lecture, listening to a professor tell us that AI hadn’t lived up to the hype, wasn’t all that special, and would never take over the world.

For someone raised on a steady diet of 2001: A Space Odyssey and T2: Judgment Day, that was both a relief - and, if I'm honest, a bit disappointing.

Fast forward to today, and that early “AI is nothing special” mindset is hard to reconcile with how AI now dominates the headlines - and our lives. From the mundane (targeted ads, smart thermostats, face-grouped photo albums on our phones) to the consequential (self-driving cars, AI-assisted medical diagnoses) to the dystopian (autonomous weapons), AI is everywhere.

So yes - AI is special. Whether because of its black-box complexity, its rapidly expanding social impact, or the challenge of regulating it responsibly, it deserves that label. Some refer to the impact AI has on our lives as the “Fourth Industrial Revolution.” Maybe they’re right.

This makes it all the more surprising how often I hear the EU AI Act described as regulatory overreach or anti-innovation.

I’ll be the first to admit - it’s not perfect legislation. The legislative process didn’t anticipate, or adapt well to, the meteoric rise of general-purpose AI (GPAI) models, for one. But given its status as the world’s first comprehensive AI law, it’s actually pretty decent.

Yes, there are ambiguities around how it will be applied and enforced. But that’s normal. Every major regulatory framework begins with uncertainty - remember the early days of the GDPR? Guidance evolves. Industry standards emerge. Case law develops. This will be no different.

And for most organisations, the truth is this: compliance with the AI Act will not be that hard. Here’s why:

  1. Prohibited AI? Don’t do it. If you’re developing or using prohibited AI, it’s a single rule: stop. Given the nature of what’s banned (manipulative or subliminal techniques, exploitation of vulnerabilities, and so on), you probably aren’t (or shouldn’t have been!) doing it anyway.
  2. AI with deception risk? Call it out. The Act categorises certain types of AI systems - like chatbots or generative AI - as having the potential to mislead, because people may be unaware they are exposed to AI or AI-generated content. In these circumstances, your duty is one of transparency. Mark it or disclose it. That’s it.
  3. High-risk AI system deployers? Follow the rules - but they’re reasonable. If you’re deploying a high-risk AI system, your duties are mostly common sense: follow the provider’s instructions, maintain human oversight, and give it sensible inputs, to name a few. It’s like watching your kids (human oversight) to make sure they use the toaster properly (in accordance with instructions) and don’t stick a fork in it (bad input) - it’s about using tools safely and as intended.
  4. High-risk AI system providers? Yes, more is expected - rightly so. If you’re building high-risk AI systems - for use in areas like recruitment, education, essential public services, or critical infrastructure - your responsibilities include data governance, risk management, conformity assessments, post-market monitoring, and incident reporting, among others. There may be additional sector-specific or product-specific requirements beyond the AI Act. But isn’t that as it should be? These are systems that can have serious effects on people. Much of what’s required should already be good practice - the extra steps needed for AI Act compliance needn’t be a massive leap.
  5. General-purpose AI models - the greatest burden will fall on the shoulders of a few. That leaves rules for providers of general-purpose AI models. Unless you’re developing GPAI models with systemic risk, the GPAI rules mostly concern preparing certain documentation (technical-, integration- and policy-related). Yes, producing these will take some internal coordination. In practice, though, few organisations have the skills or resources to develop their own GPAI models (let alone models with systemic risk), so most license them in instead - meaning GPAI model compliance responsibilities will largely fall on the shoulders of large AI vendors.

True, I’ve glossed over some nuances above, including fine-tuning of models and repurposing existing AI systems. But the core message stands: for the average enterprise, compliance with the AI Act isn’t that onerous. I’m not being dismissive of the Act. Quite the opposite - I see it as critically important legislation that places the heaviest burden on those who should rightly bear it.

But let’s move past the regulatory fear-mongering. For most organisations, the AI Act will not prove a straitjacket on innovation or business efficiency. It’s a measured, mostly sensible framework that reflects both the seriousness of the technology and our responsibility to wield it wisely.

If the AI Act is a burden, it’s one we should welcome.

Dominika Kupczyk

AI, Cybersecurity and Data Privacy Lawyer | FIP | AIGP | CIPP/E | Dual Qualified (E&W, NY)

The kid, the toaster and the fork are going into my illustrations bank - simple and pragmatic as always, Phil - thank you!

Eli Karine Navestad

Specialist Counsel at Thommessen

Hallelujah! 🙌🙌 I agree fully with this and I’m happy to see this view on LinkedIn. To be honest, I think our profession (lawyers) is part of the reason why the AI Act is perceived as a bigger monster than it really is. If we think of the requirements in the GDPR (transparency, fairness, legality, data minimization, accountability etc.), unfair marketing provisions, IPR protection and infringement regulations, anti-discrimination laws etc., most of the obligations in the AI Act arguably already apply to the use of AI, either explicitly or indirectly. 🤷‍♀️

Soomin Aga L.

AI Governance and Data Privacy in Healthcare | Future-Proofing Tech | AI Safety

Phil Lee I love it! Very well written. 👏

Candace Taylor-Gregg

Senior Legal Counsel, UK, Africa, Group Services at Travelex

As ever, Phil, you cut right through all the endless paragraphs being written about this subject to get straight to the point, and sensibly.

Great perspective, thanks Phil
