Article from NY Times: More than two years after ChatGPT's introduction, organizations and individuals are using AI systems for an increasingly wide range of tasks. However, ensuring these systems provide accurate information remains an unsolved challenge. Surprisingly, the newest and most powerful "reasoning systems" from companies like OpenAI, Google, and Chinese startup DeepSeek are generating more errors rather than fewer. While their mathematical abilities have improved, their factual reliability has declined, with hallucination rates higher in certain tests.

The root of this problem lies in how modern AI systems function. They learn by analyzing enormous amounts of digital data and use mathematical probabilities to predict the best response, rather than following strict human-defined rules about truth. As Amr Awadallah, CEO of Vectara and former Google executive, explained: "Despite our best efforts, they will always hallucinate. That will never go away." This persistent limitation raises concerns about reliability as these systems become increasingly integrated into business operations and everyday tasks.

6 Practical Tips for Ensuring AI Accuracy
1) Always cross-check every key fact, name, number, quote, and date from AI-generated content against multiple reliable sources before accepting it as true (a small sketch of this follows below).
2) Be skeptical of implausible claims and consider switching tools if an AI consistently produces outlandish or suspicious information.
3) Use specialized fact-checking tools to efficiently verify claims without having to conduct extensive research yourself.
4) Consult subject matter experts for specialized topics where AI may lack nuanced understanding, especially in fields like medicine, law, or engineering.
5) Remember that AI tools cannot really distinguish truth from fiction and rely on training data that may be outdated or contain inaccuracies.
6) Always perform a final human review of AI-generated content to catch spelling errors, confusing wording, and any remaining factual inaccuracies.
https://lnkd.in/gqrXWtQZ
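Tips 1 and 6 are easier to act on when the details worth verifying are surfaced automatically. Here is a minimal, hypothetical Python sketch (the function name and regex patterns are illustrative, not from the article) that pulls numbers, years, and quoted passages out of an AI draft so a human reviewer knows exactly what to cross-check:

```python
import re

# Hypothetical helper: surface the details in an AI draft that most often
# need cross-checking - numbers, years, and quoted passages.
def extract_checkable_claims(text: str) -> dict:
    return {
        "numbers": re.findall(r"\d+(?:[.,]\d+)*%?", text),
        "years": re.findall(r"\b(?:19|20)\d{2}\b", text),
        "quotes": re.findall(r'"([^"]+)"', text),
    }

if __name__ == "__main__":
    draft = 'Revenue grew 42% in 2023, which the CEO called "our best year yet."'
    for kind, items in extract_checkable_claims(draft).items():
        for item in items:
            print(f"VERIFY [{kind}]: {item}")
```

A script like this does not verify anything itself; it simply produces a checklist for the human review step in tip 6.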
How to Ensure AI Accuracy
Explore top LinkedIn content from expert professionals.
-
Data Quality is a blocker to AI adoption. If you don't know what your core data means, who is using it, what they are using it for, and what "good" looks like, it is terrifying to take AI-based production dependencies on data that might change or disappear entirely. As data engineers, ensuring the accuracy and reliability of your data is non-negotiable. Specifically, effective data testing is your secret weapon for building and maintaining trust.

Want to improve data testing? Start by...
1. Understand what data assets exist and how they interact via data lineage.
2. Identify the data assets that bring the most value or carry the most risk.
3. Create a set of key tests that protect these data assets. (more below)
4. Establish an alerting protocol with an emphasis on avoiding alert fatigue.
5. Utilize continuous testing within your CI/CD pipelines with the above.

The CI/CD component is crucial, as automating your testing process can streamline operations, save time, and reduce errors. Some of the tests you should consider include (a minimal sketch of a few of these follows after this post):
- Data accuracy (e.g. null values, incorrect formats, and data drift)
- Data freshness
- Performance testing for efficiency (e.g. costly pipelines in the cloud)
- Security and compliance (e.g. GDPR) testing to protect your data
- Testing assumptions of business logic.

The other reason CI/CD testing is critical is that it informs data producers that something is going wrong BEFORE the changes have been made, in a proactive and preventative fashion, and it gives both the software engineer and the data engineer context about what changes are coming, what is being impacted, and what the expectations on both sides should be.

Data Quality Strategy is not just about the technology you use or the types of tests you put in place, but about the communication patterns between producers and consumers when failure events, or potential failure events, happen. Good luck!
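As a concrete illustration of a couple of the accuracy and freshness checks listed above, here is a minimal sketch written as pytest tests so they can run inside a CI/CD pipeline. The file and column names (orders.parquet, order_id, amount, updated_at) are hypothetical placeholders, not from the post:

```python
import pandas as pd

# Hypothetical example asset - swap in your own high-value tables and columns.
df = pd.read_parquet("orders.parquet")

def test_no_null_order_ids():
    # Data accuracy: key identifiers must never be null.
    assert df["order_id"].notna().all(), "Null order_id values found"

def test_amount_is_numeric_and_positive():
    # Data accuracy: catch incorrect formats and impossible values.
    amounts = pd.to_numeric(df["amount"], errors="coerce")
    assert amounts.notna().all(), "Non-numeric amount values found"
    assert (amounts >= 0).all(), "Negative amounts found"

def test_data_freshness():
    # Data freshness: the newest record should be recent enough to trust.
    latest = pd.to_datetime(df["updated_at"], utc=True).max()
    age = pd.Timestamp.now(tz="UTC") - latest
    assert age <= pd.Timedelta(days=1), f"Data is stale by {age}"
```

Run under pytest in the pipeline, a failing check blocks the change before it reaches consumers, which is exactly the proactive signal to data producers described above.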
-
LLMs are great for data processing, but using new techniques doesn't mean you get to abandon old best practices. The precision and accuracy of LLMs still need to be monitored and maintained, just like with any other AI model.

Tips for maintaining accuracy and precision with LLMs:
• Define within your team EXACTLY what the desired output looks like. Any area of ambiguity should be resolved with a concrete answer. Even if the business "doesn't care," you should define a behavior. Letting the LLM make these decisions for you leads to high-variance/low-precision models that are difficult to monitor.
• Understand that the most gorgeously written, seemingly clear and concise prompts can still produce trash. LLMs are not people and don't follow directions like people do. You have to test your prompts over and over and over, no matter how good they look.
• Make small prompt changes and carefully monitor each change. Changes should be version tracked and vetted by other developers.
• A small change in one part of the prompt can cause seemingly unrelated regressions (again, LLMs are not people). Regression tests are essential for EVERY change. Organize a list of test case inputs, including those that demonstrate previously fixed bugs, and test your prompt against them.
• Test cases should include "controls" where the prompt has historically performed well. Any change to the control output should be studied, and any incorrect change is a test failure.
• Regression tests should have a single documented bug and clearly defined success/failure metrics: "If the output contains A, then pass. If the output contains B, then fail." This makes it easy to quickly mark regression tests as pass/fail, and ideally to automate the process (a small sketch follows after this post). If a different failure/bug is noted, it should still be fixed, but separately, and pulled out into a separate test.

Any other tips for working with LLMs and data processing?
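To make the substring-based pass/fail rule concrete, here is a minimal regression-harness sketch. The run_prompt callable, the test cases, and the expected strings are all illustrative assumptions, not from the post:

```python
# Minimal prompt-regression harness, assuming a run_prompt(prompt, case_input)
# function that calls whatever LLM API you use. Cases and strings are illustrative.

TEST_CASES = [
    # Each case documents one bug (or one control) plus a substring pass/fail rule.
    {"name": "bug-123-currency", "input": "Total: 1.234,56 EUR",
     "must_contain": "1234.56", "must_not_contain": "1,234"},
    {"name": "control-simple-date", "input": "Due 3rd of March 2024",
     "must_contain": "2024-03-03", "must_not_contain": ""},
]

def run_regression(prompt: str, run_prompt) -> bool:
    all_passed = True
    for case in TEST_CASES:
        output = run_prompt(prompt, case["input"])
        passed = case["must_contain"] in output and (
            not case["must_not_contain"] or case["must_not_contain"] not in output
        )
        print(f"{case['name']}: {'PASS' if passed else 'FAIL'}")
        all_passed = all_passed and passed
    return all_passed
```

Because each case documents a single bug or control, a failure points at exactly one thing to investigate, and the whole suite can run automatically on every prompt change.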
-
Don't be afraid of hallucinations! It's usually an early question in most talks I give on GenAI: "But doesn't it hallucinate? How do you use a technology that makes things up?" It's a real issue, but it's a manageable one.

1. Decide what level of accuracy you really need in your GenAI application. For many applications it just needs to be better than a human, or good enough for a human first draft. It may not need to be perfect.
2. Control your inputs. If you do your "context engineering" well, you can point the model at the data you want it to use. Well-written prompts will also reduce the need for unwanted creativity!
3. Pick a "temperature". You can select a model setting that is more "creative" or one that sticks more narrowly to the facts. This adjusts the internal probabilities. The "higher temperature" results can often be more human-like and more interesting.
4. Cite your sources. RAG and other approaches allow you to be transparent about what the answers are based on, to give a degree of comfort to the user.
5. AI in the loop. You can build an AI "checker" to assess the quality of the output (a small sketch of points 3 and 5 follows after this post).
6. Human in the loop. You aren't going to just rely on the AI checker, of course!

In the course of a few months we've seen concern around hallucinations go from a "show stopper" to a "technical parameter to be managed" for many business applications. It's by no means a fully solved problem, but we are highly encouraged by the pace of progress.

#mckinseydigital #quantumblack #generativeai
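A minimal sketch of points 3 and 5, assuming the OpenAI Python SDK (the model name, prompts, and the SUPPORTED/UNSUPPORTED grading scheme are illustrative choices, not from the post):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat API works similarly

client = OpenAI()

def answer(question: str, context: str) -> str:
    # Points 2 and 3: ground the model in context you control and keep the
    # temperature low so it sticks closely to that context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0.1,
        messages=[{"role": "user",
                   "content": f"Answer using ONLY this context:\n{context}\n\nQuestion: {question}"}],
    )
    return resp.choices[0].message.content

def check(answer_text: str, context: str) -> str:
    # Point 5: a second "AI in the loop" call grades whether the answer is
    # supported by the context before a human (point 6) signs off.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0.0,
        messages=[{"role": "user",
                   "content": f"Context:\n{context}\n\nAnswer:\n{answer_text}\n\n"
                              "Is every claim in the answer supported by the context? Reply SUPPORTED or UNSUPPORTED."}],
    )
    return resp.choices[0].message.content
```

Raising the temperature in the first call trades factual caution for more creative, human-like output, which is exactly the dial described in point 3.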
-
“Garbage in, garbage out” is the reason that a lot of AI-generated text reads like boring, SEO-spam marketing copy. 😴😴😴

If you’re training your organization's self-hosted AI model, it’s probably because you want better, more reliable output for specific tasks. (Or it’s because you want more confidentiality than the general-use models offer. 🥸 But you’ll take advantage of the additional training capabilities, right?) So don’t let your in-house model fall into the same problem! Cull the garbage data; only feed it the good stuff.

Consider these three practices to ensure only high-quality data ends up in your organization’s LLM (a small filtering sketch follows after this post):
1️⃣ Establish Data Quality Standards: Define what “good” data looks like. Clear standards are a good defense against junk info.
2️⃣ Review Data Thoroughly: Your standard is meaningless if nobody uses it. Check that data meets your standards before using it for training.
3️⃣ Set a Cut-off Date: Your sales contracts from 3 years ago might not look anything like the ones you use today. If you’re training an LLM to generate proposals, don’t give it examples that don’t match your current practices!

With better data, your LLM will provide more reliable results with less revision needed.

#AI #machinelearning #fciso
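A minimal sketch of the three practices as a pre-training filter over a JSONL file. The field names, cut-off date, and length threshold are hypothetical stand-ins for whatever your own standards define as "good" data:

```python
from datetime import date
import json

# Hypothetical quality gate for fine-tuning data.
CUTOFF = date(2023, 1, 1)      # practice 3: drop examples older than this
MIN_LENGTH = 200               # practice 1: one example of a "good data" standard

def passes_quality_gate(example: dict) -> bool:
    created = date.fromisoformat(example["created"])
    text = example["text"].strip()
    return created >= CUTOFF and len(text) >= MIN_LENGTH

def filter_training_file(in_path: str, out_path: str) -> None:
    # Practice 2: every record is checked against the standard before training.
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            example = json.loads(line)
            if passes_quality_gate(example):
                dst.write(json.dumps(example) + "\n")
```

Automating the gate keeps the review step from being skipped when the training set grows beyond what anyone will read by hand.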