AI Still Hallucinates. Here's How to Reduce It
(I originally published this article on Reworked.)
AI hallucinations aren’t slowing adoption, but they highlight why strong data governance is essential to improving accuracy and trust.
When artificial intelligence (AI) first burst into the public sphere, hallucinations were seen by many in the business world as a minor, temporary glitch — something that could be solved through troubleshooting and iterative development.
But now, several years on from the public release of ChatGPT, new research from OpenAI shows that its models are producing more incorrect information than ever before. Its latest reasoning model, o4-mini, hallucinates almost half (48%) of the time, while its slightly older o3 model hallucinates at a rate of 33%, according to the company's own figures. These newer models provide more complex outputs than previous versions, yet they hallucinate at a much higher rate than their predecessors: OpenAI's o1 model, for example, hallucinates at a rate of 16%.
As AI products become better at complex reasoning, certain models seem to be getting worse at generating information that's error-free. Some view this as a new and destabilizing development for the future of AI, but in fact, AI experts have been aware of these issues for years, and high rates of hallucination have not deterred AI use whatsoever. Rather than causing us to reevaluate the future or purpose of AI, the high rates of hallucination in models like OpenAI's o4-mini only reinforce the importance of strong data governance as a major component of AI success.
How Strong Data Governance Improves AI Outcomes
AI use has recently reached an all-time high, despite how common it is for users to receive inaccurate information. ChatGPT now has 800 million weekly users, and according to McKinsey, more than 78% of organizations now use AI, the highest percentage yet recorded. That growth should debunk the idea that hallucinations are a real deterrent to adoption, and this data suggests that the relatively minor setbacks caused by hallucinations are outweighed by the benefits that come from jumps in reasoning ability.
Still, it pays to have AI outputs that are more accurate and relevant. Hallucinations haven't stopped, or even slowed, the rapid spread of AI, but adoption might be even broader, and practical business applications easier to find, if organizations could rein in hallucinations and improve the accuracy and relevance of AI outputs. That's why strong data governance is an important component of AI success.
It's long been understood that data quality affects AI outputs (that's the "garbage in, garbage out" principle), but as AI use matures, more organizations are realizing that data quality is only part of the picture, and that strong data governance, which goes beyond data quality, is the top priority.
A recent report from Precisely and Drexel University, for example, found that a lack of strong data governance is currently seen as the top challenge to AI implementation. That study also found a notable increase in the prevalence of data governance programs; 71% of organizations said that they had data governance policies and technology in 2024, up from 60% the year prior. Improved data quality is an important component of any data governance strategy, but data governance includes other business and security considerations related to AI, and that’s why it’s on the rise today.
Data governance defines the policies, roles and responsibilities that guide how data is managed, used and protected across an organization. While improving data quality is an objective of a governance strategy, a strong governance strategy also addresses issues such as compliance, accountability and strategic alignment with business goals. It ensures that data management practices align with organizational objectives and regulatory requirements, which not only improves the performance of AI, but also makes sure that AI tools can be used efficiently and securely within an organization.
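To make this concrete, here is a minimal sketch, in Python, of how governance metadata might gate what an AI assistant is allowed to use as context. Everything in it is illustrative: the SourceDocument fields, the governed_context helper and the one-year freshness window are assumptions made for the example, not features of any particular governance product.

```python
from dataclasses import dataclass
from datetime import date


# Hypothetical governance metadata attached to each source document.
# Real governance catalogs expose far richer schemas; these fields only
# illustrate the idea of policies, ownership, and review status.
@dataclass
class SourceDocument:
    title: str
    content: str
    owner: str           # accountable data steward ("" if unassigned)
    approved: bool       # has the document passed governance review?
    last_reviewed: date  # when a steward last confirmed it is current


def governed_context(docs: list[SourceDocument],
                     max_age_days: int = 365) -> list[SourceDocument]:
    """Keep only documents that are approved, owned, and recently reviewed.

    Grounding an AI assistant in this filtered set, rather than in the whole
    content estate, is one way a governance policy narrows the room for
    hallucination: the model only sees sources someone is accountable for.
    """
    today = date.today()
    return [
        doc for doc in docs
        if doc.approved
        and doc.owner
        and (today - doc.last_reviewed).days <= max_age_days
    ]
```

In practice, the same filter could be applied at indexing time, so unapproved or stale content never enters a retrieval index at all.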
3 Ways to Improve Data Governance and Reduce Hallucinations
AI is already widespread, but many organizations have work to do when it comes to shoring up data governance. Here are a few things you can start doing today to improve data governance and reduce the impacts of hallucinations:
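While the right steps depend on the organization, one low-effort place to begin is simply measuring where the gaps are. The sketch below, which reuses the hypothetical SourceDocument model from the earlier example, tallies content that has no accountable owner or has never passed review, so teams know where to focus first; the gap labels are illustrative, not a standard taxonomy.

```python
from collections import Counter


def governance_gap_report(docs):
    """Tally common governance gaps across a content estate.

    `docs` is assumed to be an iterable of SourceDocument-like objects
    (as in the earlier sketch); the labels below are illustrative.
    """
    gaps = Counter()
    for doc in docs:
        if not doc.owner:
            gaps["no accountable owner"] += 1
        if not doc.approved:
            gaps["never passed governance review"] += 1
    return gaps.most_common()
```

A report like this won't fix hallucinations by itself, but it turns "improve data governance" from an abstract goal into a prioritized backlog.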
Securing AI’s Future
Even though we haven't solved the hallucination "problem," AI is still more effective and widely used than ever before. As AI matures, it becomes ever more important to think strategically and broadly about the data that powers AI tools. It's not just an issue of data quality; to improve AI results, we have to go well beyond that and think holistically about the way we regulate and govern entire data estates.