Ban It or Harness It? Smart Governance Shifts AI from Risk to Reward
Introducing the issue
Many firms are still asking whether to lock down AI tools, to let them run wild, or to steer a middle course. A recent dispute in Dutch universities captures the heart of that question and offers clues for business practice.
Pedagogy, the theory and practice of how teaching and learning happen, is at the core of their concerns. While the word comes from classrooms, the same logic applies whenever people need to build knowledge and skills at work.
What Dutch academics fear
An open letter signed by dozens of Dutch lecturers urges universities to ban generative-AI tools in coursework. The authors argue that ChatGPT “hinders learning and deskills critical thought”, diverting students from the slow work of grappling with novel problems and deep understanding. They insist that classrooms must stay “spaces where students form their own deeply considered opinions”.
Although the letter also cites environmental and labour worries, its core pedagogical claim is clear: using AI at the point of learning weakens the very thinking it is meant to enhance.
The MIT “cognitive debt” experiment
Research from MIT’s Media Lab appears to back that worry. In a study of 54 participants writing essays, those who used ChatGPT showed the weakest brain-connectivity patterns on EEG and produced the least original prose. When the AI was withdrawn, they struggled to regain earlier engagement, a phenomenon the authors call “cognitive debt”.
The message for both lecturers and line managers would seem to be straightforward: if people lean on a large language model to do the thinking for them, their own cognitive muscles atrophy.
The counter-evidence: a Nature meta-analysis
A broader view tells a different story. A meta-analysis of 51 experimental studies published in the Nature-portfolio journal Humanities and Social Sciences Communications finds that, when ChatGPT is used with clear learning designs, it delivers a large positive impact on learning performance and moderate gains in higher-order thinking. Benefits are strongest when the tool acts as an “intelligent tutor or learning partner” in problem-based tasks over four to eight weeks, supported by frameworks such as Bloom’s taxonomy.
In other words, structure and purpose turn the same technology from a shortcut into a scaffold.
Putting the pieces together
Taken together, these publications point to one conclusion: when AI is used as an unstructured shortcut to answers, we should expect diminished engagement and accumulating “cognitive debt”. When it is used within a planned learning design that demands explanation, critique and iteration, we should expect higher performance and stronger thinking.
The Dutch letter correctly spots the risk but assumes it is inevitable. The Nature findings show it is contingent on design and governance.
Lessons for business adoption
Businesses face the same split that universities do. Some organisations block public LLMs outright. Others allow a free-for-all. The early leaders occupy the usable middle ground:
Scaffolding
Define when and how people should rely on AI, and build prompts or workflows that force critical input rather than passive copying.
Governance
Set procurement standards, monitor data security and energy use, and require transparent reporting on model updates and performance.
Outcome tracking
Treat AI initiatives as learning experiments: measure quality, creativity and employee capability over time, and adjust the scaffolds accordingly.
Conclusion
The debate is not about whether generative AI is good or bad, but under what conditions it adds value without eroding human expertise. Universities, like businesses, can fall into two easy traps: blanket bans or a “wild-west” anything-goes culture. The organisations that will win are those that pair adoption with explicit scaffolding and firm governance, keeping people, not the model, at the centre of learning and work.