AI Still Hallucinates. Here's How to Reduce It

(This article originally appeared on Reworked.)

AI hallucinations aren’t slowing adoption, but they highlight why strong data governance is essential to improving accuracy and trust.

When artificial intelligence (AI) first burst into the public sphere, hallucinations were seen by many in the business world as a minor, temporary glitch — something that could be solved through troubleshooting and iterative development.  

But now, several years on from the public release of ChatGPT, OpenAI’s own research shows that its models are producing more incorrect information than ever. Its latest reasoning model, o4-mini, hallucinates almost half (48%) of the time, while the slightly older o3 model hallucinates at a rate of 33%, according to the company’s figures. These models produce more complex outputs than earlier versions, yet hallucinate at a much higher rate than their predecessors: OpenAI’s o1 model, for example, hallucinates at a rate of just 16%.

As AI products become better at complex reasoning, certain models seem to be getting worse at generating error-free information. Some view this as a new and destabilizing development for the future of AI, but AI experts have been aware of these issues for years, and high rates of hallucination have not deterred AI use whatsoever. Rather than forcing a reevaluation of AI’s future or purpose, the high hallucination rates of models like OpenAI’s o4-mini only reinforce the importance of strong data governance as a major component of AI success.

How Strong Data Governance Improves AI Outcomes

AI use has recently reached an all-time high, despite how common it is for users to receive inaccurate information. ChatGPT now has 800 million weekly users, and according to McKinsey, 78% of organizations now use AI, the highest share yet recorded. This growth should debunk the idea that hallucinations are a real deterrent to use or adoption; the relatively minor setbacks they cause are outweighed by the benefits of jumps in reasoning ability.

Still, it pays to have AI outputs that are accurate and relevant. Hallucinations haven’t stopped, or even slowed, the rapid spread of AI, but adoption might go further, and practical business applications might be easier to find, if organizations could rein in hallucinations and improve the accuracy and relevance of AI outputs. That’s why strong data governance is an important component of AI success.

It’s long been understood that data quality affects AI outputs (the “garbage in, garbage out” principle), but as AI use matures, more organizations are realizing that data quality is only part of the picture, and that strong data governance, which goes well beyond data quality alone, is the top priority.

A recent report from Precisely and Drexel University, for example, found that a lack of strong data governance is currently seen as the top challenge to AI implementation. That study also found a notable increase in the prevalence of data governance programs; 71% of organizations said that they had data governance policies and technology in 2024, up from 60% the year prior. Improved data quality is an important component of any data governance strategy, but data governance includes other business and security considerations related to AI, and that’s why it’s on the rise today.

Data governance defines the policies, roles and responsibilities that guide how data is managed, used and protected across an organization. While improving data quality is an objective of a governance strategy, a strong governance strategy also addresses issues such as compliance, accountability and strategic alignment with business goals. It ensures that data management practices align with organizational objectives and regulatory requirements, which not only improves the performance of AI, but also makes sure that AI tools can be used efficiently and securely within an organization. 
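
To make that concrete, here is a minimal, hypothetical sketch in Python of what a machine-readable slice of such a policy might look like. The dataset names, fields and clearance rule are illustrative assumptions, not a standard or any product’s schema; real programs encode far richer policies in dedicated data catalog tools.

```python
from dataclasses import dataclass

# Hypothetical policy record; the fields and rule below are illustrative
# assumptions, not a standard or a specific product's schema.
@dataclass
class DatasetPolicy:
    name: str
    steward: str           # person accountable for quality and updates
    contains_pii: bool     # drives compliance handling
    approved_for_ai: bool  # has governance cleared this data for model use?

REGISTRY = [
    DatasetPolicy("crm_contacts", steward="j.doe", contains_pii=True, approved_for_ai=False),
    DatasetPolicy("product_docs", steward="a.smith", contains_pii=False, approved_for_ai=True),
]

def ai_ready(registry):
    """Return the datasets that governance policy clears for AI use."""
    return [d.name for d in registry if d.approved_for_ai and not d.contains_pii]

print(ai_ready(REGISTRY))  # ['product_docs']
```

Encoding even a thin policy layer like this makes accountability queryable: you can always answer who stewards a dataset and whether AI tools are cleared to consume it.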

3 Ways to Improve Data Governance and Reduce Hallucinations 

AI is already widespread, but many organizations have work to do when it comes to shoring up data governance. Here are a few things you can start doing today to improve data governance and reduce the impacts of hallucinations:

  1. Prioritize Data Quality: Establish robust frameworks and protocols to ensure that all data collected, processed and stored within the organization is consistent, accurate and relevant. This includes implementing automated validation tools, running regular audits and creating a culture of accountability among data users (see the sketch after this list). By improving data quality, organizations reduce hallucinations, make better decisions and increase the value generated from AI and analytics initiatives.
  2. Develop Clear Data Ownership and Accountability: Define roles and responsibilities explicitly for data management across all levels of the organization. Assign data stewards or custodians to oversee specific datasets, making it clear who is responsible for maintaining, securing and updating information. This approach improves accountability and prevents gaps or overlaps in governance. In addition, organizations should provide training and resources to employees to help them understand their roles in the governance process and the importance of adhering to established policies. Clear data ownership and accountability within an organization lead to better data, which reduces hallucinations.
  3. Continuously Refine: Keep data aligned with real-world scenarios and user preferences through ongoing review and feedback. While this won’t eliminate hallucinations, it helps AI systems produce responses that are not only accurate but also effectively tailored, improving relevance and user satisfaction over time.
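
As referenced in the first item above, here is a minimal sketch of the kind of automated validation check such a framework might run. The field names, rules and the 5% audit threshold are all illustrative assumptions; in practice, teams would wire a dedicated data-quality tool into their pipelines.

```python
# Minimal data-quality validation sketch. The field names, rules and the 5%
# audit threshold are illustrative assumptions, not a specific product's API.

def validate_record(record):
    """Return a list of rule violations for a single record."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    if "@" not in record.get("email", ""):
        errors.append("malformed email")
    if record.get("updated_year", 0) < 2023:
        errors.append("stale: last updated before 2023")
    return errors

def passes_audit(records, max_error_rate=0.05):
    """Check whether a dataset's error rate stays under a governance threshold."""
    failing = sum(1 for r in records if validate_record(r))
    rate = failing / max(len(records), 1)
    return rate <= max_error_rate, rate

records = [
    {"id": "1", "email": "a@example.com", "updated_year": 2024},
    {"id": "", "email": "not-an-email", "updated_year": 2019},
]
ok, rate = passes_audit(records)
print(f"passes audit: {ok} (error rate {rate:.0%})")  # passes audit: False (error rate 50%)
```

The threshold turns audits into a clear pass/fail governance gate rather than an ad hoc inspection, which makes accountability far easier to enforce.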

Securing AI’s Future 

Even though we haven’t solved the hallucination “problem,” AI is more effective and widely used than ever before. As AI matures, it becomes more important to think strategically and broadly about the data that powers AI tools. It’s not just an issue of data quality; to improve AI results, we have to go well beyond that and think holistically about how we regulate and govern entire data estates.

Comments

Rupert Breheny, Founder, Cobalt AI:

If Socrates himself could opine, "all I know is that I know nothing," AI too must develop the intellectual humility and self-doubt that give rise to critical thinking. The challenge ahead is designing AI that values honesty as much as eagerness to please.
Joshua B. Lee:

Spot on, Dux! AI hallucinations highlight the messy brilliance of innovation, and it’s progress, not perfection, that really counts. But here’s the twist: can we trust businesses to prioritize governance over just riding the AI hype?
Jessica Tremor, Demand Generation Manager at UNITED & STERLING:

Absolutely! AI adoption isn’t about perfection; it’s about trust, governance, and continuous improvement. Strong data governance ensures AI becomes not just powerful, but reliable and actionable over time. Excited to see how organizations apply these practices to unlock real value.
Bartosz Mikulski:

You can make it work anyway, even with an imperfect model. Just don't promise 100% correctness to anyone.

Daniel Anderson, Microsoft MVP, SharePoint & Copilot Strategist:

Dux, you touch on one of the most important points here: "building the systems of trust, accountability, and alignment." If you have that, and give AI the right and, more importantly, accurate context, hallucinations are barely there, if at all. I am experiencing that firsthand with clients and Copilot. Great insights yet again, buddy.
