AI Tools Demonstrate Hidden Risks of Imperfect Outputs
As AI use grows, so does the need for disclaimers, vetting AI-generated results, and avoiding expensive errors.
May 22, 2025

When it comes to deploying AI, some deployers are listening to their lawyers. Many documents created with AI now carry disclaimers like this one, which I found at the bottom of a document reviewed on the legal community platform Justia: “Some case metadata and case summaries were written with the help of AI, which can produce inaccuracies. You should read the full case before relying on it for legal research purposes.” At least this entity has the courage to announce and warn about its use of AI. Many documents and entities do not, including a radio station with which I’m familiar, which uses AI to produce weather reports signed off by “staff meteorologists” who are not real. The AI-generated forecasts are close to accurate, which I view as positive but not determinative, but the named staff meteorologists who provide the forecasts are presented as real people, which they are not.
Well-respected analyst and No Jitter contributor Blair Pleasant shared this gem from her local government’s website with me:
"Answers given by this chatbot, or any live agent(s) may contain errors or may be incomplete. The Judicial Council of California and the California courts (and their officers and personnel):
- Make no representations or warranties about the Virtual Customer Service Center, and are not responsible for any damage, loss, claim or liability arising out of your use of the Virtual Customer Service Center, or information provided by the chatbot or in the live chat.
- Do not warrant the Virtual Customer Service Center or any related materials will be error-free, or free of viruses or other harmful components."
With disclaimers like this, who needs chatbots? What’s clear is that at least some of the entities deploying AI tools are recognizing the vulnerabilities that use of those tools exposes. This is not a new argument for me to make: while AI-driven chatbots may come through with correct information much of the time, they’re not always right. And that can have consequences for any business that uses those chatbots in its operations.
As creative entities work feverishly to create AI apps that solve every problem, it’s dismaying to observe how those working under extreme pressure to create the next big thing can let business- or industry-critical items slip through the cracks, wreaking havoc on those who have relied upon AI-generated outcomes to make critical decisions.
As AI deployment increases, so will the number of errors. These will likely occur in every sector where AI tools are deployed. Few, however, are more visible than those in the legal profession. In mid-May, The Algorithm, MIT Technology Review’s AI newsletter, highlighted several recent instances of “AI hallucinations” in courtrooms across the country.
According to Science News Today, “In the world of artificial intelligence, a hallucination refers to when an AI model generates information that is not true, not supported by any data, or entirely fictional. These “hallucinations” may take the form of fake facts, invented quotes, incorrect citations, or completely fabricated people, places, or events … An AI hallucination occurs when [a large language] model produces a response that sounds plausible but is factually incorrect, logically flawed, or completely invented.”
By this definition, such information does not qualify as disinformation, because it isn’t created with intent to deceive; it’s simply a byproduct of the system answering the question.
The bad publicity, not to mention the legal outcomes, when lawyers get caught unknowingly relying on AI hallucinations can be staggering and potentially career-ending. Recently, The Algorithm even cited an instance involving a court filing produced by the AI company Anthropic. Clearly someone, or many someones, have taken their collective eyes off the ball.
In an even more recent example, the Chicago Sun-Times published a summer reading list on Sunday, May 18th, in which 10 of the 15 recommended books do not exist. How did they do it? With AI. The bigger questions are why, and why there was no review process to check the AI output. Sadly, this is not an outlier. And as confidence in media sources has decreased, “accidents” like this only make such sources that much less trustworthy.
How to Anticipate Hallucinations and AI Errors
For vendors and those acquiring AI tools, it’s critical to consider not only the potential downsides of product use but also the potential vulnerabilities that could result in litigation.
When an AI hallucination is exposed, the offending company, as well as everyone in the chain from creation through deployment, not only may lose customers and clients but is also at higher risk of litigation than it was before. It’s certainly also at higher risk of negative exposure and reputational harm.
With this in mind, I recommend several important steps.
First, understand what you’re getting. This is not as simple as it sounds. To mitigate risk, you need to know where it lies, and you simply can’t do that if you have only the barest working knowledge of the technology, how the AI model is trained, what data it’s trained on, and what feedback mechanisms exist for reporting errors and correcting factually wrong statements.
Second, create acceptable-use guidelines. These should be living, breathing documents that evolve as the technology and its uses do. Regularly scheduled reviews, as well as unannounced spot checks, are certainly recommended.
Third, test the system regularly and frequently so that you can be confident it is working as intended (a simple sketch of one such spot check appears after these recommendations). When problems occur, move quickly and nimbly to resolve them, minimizing repeated errors and exposure.
Finally, bring the legal team into the acquisition and ongoing maintenance process so that if something happens, they will be able to respond quickly and appropriately. Disclaimers essentially serve as warnings, but they are far from ironclad, and depending on how the tools are used, the exposure can be very real and costly.
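To make the testing recommendation a bit more concrete, here is a minimal sketch of an automated spot check, written in Python. Everything in it is a hypothetical placeholder: ask_chatbot() stands in for whatever API your chatbot vendor actually exposes, and the questions and required facts are invented for illustration. The idea is simply that a small, human-curated answer key, run on a schedule, can surface drift and hallucinations before a customer does.

```python
# Minimal sketch of a scheduled spot check for a deployed chatbot.
# ask_chatbot(), the questions, and the required facts are all hypothetical
# placeholders; swap in your real vendor API and your own answer key.

from dataclasses import dataclass


@dataclass
class SpotCheck:
    question: str
    required_facts: list[str]  # phrases every acceptable answer must contain


# A small, human-curated answer key. In practice this grows as errors are found.
CHECKS = [
    SpotCheck("What is the deadline to respond to a small claims filing?",
              ["30 days"]),
    SpotCheck("Is there a fee to use the virtual customer service center?",
              ["no fee"]),
]


def ask_chatbot(question: str) -> str:
    # Stand-in only: replace with a real call to the deployed chatbot's API.
    return "There is no fee to use the virtual customer service center."


def run_spot_checks() -> list[str]:
    """Return findings that a human reviewer should examine."""
    findings = []
    for check in CHECKS:
        answer = ask_chatbot(check.question).lower()
        missing = [fact for fact in check.required_facts
                   if fact.lower() not in answer]
        if missing:
            findings.append(f"{check.question!r} did not mention: {missing}")
    return findings


if __name__ == "__main__":
    for finding in run_spot_checks():
        print("REVIEW NEEDED:", finding)
```

A real deployment would of course use richer evaluation, such as semantic comparison and a human review queue, but even a crude check like this catches the most embarrassing failures early.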