Truth & AI: What Can We Still Believe?

I want to talk about truth & AI - how do we know what to believe any more? 

There was a time when knowledge felt solid. It came from libraries, newspapers, classrooms, or trusted experts. You might disagree with interpretations, but there was a sense that information itself carried some stability. 

That era is fading fast. Today, truth feels more fragile. Images appear that no camera ever captured. Voices sound authentic, yet belong to no speaker. “Facts” spread at the speed of light, only to be contradicted within hours. The more information we have, the less certainty we seem to possess. 

This paradox is unsettling. Abundance should bring clarity; instead, it breeds confusion. If we are to lead organisations responsibly, if we are to govern transformations effectively, we must understand how truth is shaped - and reshaped - in the age of AI. 

This isn’t entirely new. Propaganda, political spin, and selective reporting have always existed. But AI accelerates, personalises, and industrialises distortion in ways that demand urgent attention. To grasp what is changing, we need to step back - first to today’s AI landscape, and then to the long history of contested truth.

AI Today - The New Frontier of Distortion

Artificial intelligence has taken information manipulation to a new dimension. Technologies that were once the domain of science fiction are now accessible from a laptop.

  • Synthetic media. Deepfakes can generate photorealistic video of anyone saying anything. Voice clones replicate tone, pitch, and accent so precisely that even family members may not detect the difference.
  • Automated misinformation. Large language models can produce thousands of articles, posts, or comments at scale. What once required teams of propagandists can now be orchestrated by a single operator.
  • Hyper-personalised persuasion. AI tailors evidence to the worldview of the recipient, reinforcing existing beliefs. A board director searching for cost savings may find “evidence” suggesting automation always delivers ROI; another may see curated “proof” that people-focused strategies outperform. Both are AI-sculpted realities.

The scale is unprecedented. MIT researchers (Vosoughi et al., 2018) found that false news spread roughly six times faster than true news on social platforms and was 70% more likely to be retweeted - and that was before generative AI. Controlled studies show AI-synthesised faces are indistinguishable from real faces, and are even judged more trustworthy, underscoring how plausibility alone can mislead.

Case studies are mounting:

  • During recent conflicts, AI-generated images of bombed cities circulated widely before being debunked - long after public opinion had shifted.
  • In the 2024 Indian elections, manipulated videos of candidates went viral on WhatsApp and Twitter within hours; credible reporting confirmed the deepfakes and the regulatory concern they provoked, though balanced coverage notes that their measurable overall impact is still being studied.
  • Corporate sectors are not immune: stock markets have reacted to AI-generated “news” stories that never occurred, such as fake reports of explosions or CEO resignations.
  • Voice cloning is hard to spot - listeners correctly identify deepfake speech only around 73% of the time (UCL, 2023), meaning roughly one in four fakes goes undetected.

For executives, the implication is stark: not everything placed before you can be assumed authentic. What arrives in a board pack, what circulates among stakeholders, what is presented as “evidence” may be partially synthetic. Verification is no longer optional; it is a governance requirement.

Before AI - Truth Has Always Been Contested

To believe this is entirely new would be naïve. Truth has always been contested, often shaped by those with the loudest voices or deepest pockets.

  • Yellow journalism. In the late 19th century, William Randolph Hearst’s newspapers sensationalised events, fuelling public sentiment that pushed the United States into the Spanish-American War. The slogan “you furnish the pictures, I’ll furnish the war” may be apocryphal, but the distortion was real.
  • Propaganda in war. During World War II, both Allied and Axis powers mastered radio, film, and posters to mobilise populations. The British “Keep Calm and Carry On” poster - printed in 1939, though barely displayed at the time - was not mere reassurance; it was psychological strategy.
  • Media ownership. In the modern era, figures like Rupert Murdoch have wielded immense influence, shaping political landscapes in the UK, US, and Australia. Agenda-setting research (McCombs & Shaw, 1972) shows that the media may not tell us what to think, but it powerfully tells us what to think about.

This is framing at scale. The way an issue is presented determines the way it is understood. AI has not invented distortion; it has industrialised it. Where once framing required editorial choices, now algorithms can generate millions of micro-frames, each targeted to reinforce specific viewpoints.

Executives should take note: the media lens has always been selective. AI simply makes the lens invisible.

Science as Anchor - and Its Limits

For decades, science was considered the anchor of truth: objective, testable, cumulative. Yet science itself is not free from distortion.

Consider some well-known reversals:

  • Five-a-day fruit and vegetables. This became a global health campaign in the 1990s. Later large studies suggest diminishing returns: the PURE study found benefits evident and near-maximal at about 3–4 servings/day (≈375–500 g), and a 2021 pooled analysis indicated a plateau around five servings/day.
  • Eggs and cholesterol. In the 1970s, eggs were vilified as a cause of heart disease. Subsequent meta-analyses (Rong et al., 2013) showed little evidence of significant risk.
  • Red meat. Long condemned, more recent guidelines (Johnston et al., 2019) argue risks are modest and context-dependent.
  • Mobile phones on planes. For years, passengers were told to switch off devices to prevent interference with aircraft systems. The precaution became accepted truth. Yet as evidence accumulated, regulators confirmed the risk was negligible.

Beyond nutrition, the replication crisis has shaken confidence in psychology, medicine, and economics. Studies once celebrated have failed to replicate when tested again (Open Science Collaboration, 2015). Nobel laureate Daniel Kahneman, whose book Thinking, Fast and Slow popularised behavioural economics, has himself acknowledged that some findings in the field may not be robust.

Why does this happen?

  • Funding bias. Studies funded by industry are more likely to favour sponsor interests. The tobacco industry famously promoted research to sow doubt about smoking harms.
  • Publication bias. Journals prefer positive results, discouraging replication studies or null findings.
  • Statistical fragility. Ioannidis (2005) argued that “most published research findings may in fact be false” due to small sample sizes, flexible study designs, and selective reporting.

Science evolves. But early findings often calcify into cultural norms long before consensus emerges. Executives must therefore ask: are we acting on robust evidence, or on the inertia of outdated science?

The Fragility of Qualitative Research

Numbers can mislead, but so can narratives. Qualitative research - interviews, focus groups, case studies - provides texture and meaning. Yet it is vulnerable to the fragility of memory and interpretation.

Sir Frederic Bartlett’s classic “War of the Ghosts” studies, reported in his 1932 book Remembering, showed how people reconstruct memories to fit cultural schemas. Over time, details were forgotten, altered, or rationalised to align with familiar patterns.

Elizabeth Loftus demonstrated how easily false memories can be implanted and recollections distorted. In her car-crash studies with Palmer (1974), a leading question - “how fast were the cars going when they smashed into each other?” - altered recall of speed, damage, even the presence of broken glass.

Daniel Schacter (1999) summarised these vulnerabilities as the “seven sins of memory”: transience, absent-mindedness, blocking, misattribution, suggestibility, bias, and persistence.

In organisational settings, these biases matter:

  • Steering committees rely on leaders’ recollections of prior programmes.
  • Transformation reviews often use interviews to understand what went wrong.
  • Cultural assessments hinge on narratives of employees recalling “what it felt like.”

Each is vulnerable to distortion. What executives receive is not unfiltered truth, but reconstructed stories.

Why Beliefs Are So Hard to Change

This leads us to a critical asymmetry: beliefs form easily, but they are extremely resistant to change.

Ross & Anderson (1982) described belief perseverance: people cling to initial beliefs even after the evidence behind them is discredited. Festinger’s (1957) theory of cognitive dissonance explains why: discarding a belief requires admitting we were wrong, which is psychologically costly.

The anchoring effect (Tversky & Kahneman, 1974) shows how initial numbers, even arbitrary ones, influence subsequent estimates. Once a cost forecast or project timeline is presented, it shapes expectations, making later corrections feel implausible.

Nyhan & Reifler (2010) documented a “backfire effect”: correcting misinformation can strengthen the original false belief, as individuals double down to protect their worldview - although later replications suggest such backfire is less common than first reported.

Executives see this daily:

  • A programme budget is presented optimistically. Later evidence shows overruns are inevitable. Yet boards resist acknowledging the change.
  • A “green” milestone report early in delivery convinces stakeholders of smooth progress. Even as issues mount, the belief in programme health persists.

Changing belief requires overwhelming, repeated, and trusted evidence. A single corrective slide rarely suffices.

AI Overlaid on Belief Systems

If beliefs are already sticky, AI compounds the problem.

Eli Pariser (2011) coined “filter bubble” to describe how algorithms deliver information aligned with our preferences. C. Thi Nguyen (2020) distinguishes between “echo chambers” (where contrary voices are discredited) and “epistemic bubbles” (where contrary voices are absent).

Generative AI intensifies both. Personalised feeds deliver narratives that fit our worldview. Bots reinforce our opinions. AI-generated “evidence” can even simulate the authority of experts.

Consider COVID-19. AI-boosted misinformation about vaccines spread rapidly, reinforced by bots amplifying certain narratives. Studies showed that exposure to repeated falsehoods increases perceived truth - the “illusory truth effect” (Hasher et al., 1977).

The result is belief ossification: false or distorted views become harder to shift, shielded by AI-generated reinforcement.

Evaluating Truth in the Age of AI

Executives must adapt. The challenge is not simply “is this real?” but “how distorted might this be?”

Practical responses are emerging:

  • Provenance technology. Watermarking and blockchain attestations can certify source authenticity; a minimal sketch of a hash-based provenance check follows this list.
  • Fact-checking consortia. Initiatives like the International Fact-Checking Network coordinate global verification.
  • OSINT practices. Open-source intelligence methods are increasingly used by journalists and analysts to cross-validate evidence.
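
The mechanics need not be exotic. Below is a minimal, illustrative sketch of a hash-based provenance check: it assumes a hypothetical internal register of attested SHA-256 digests for documents entering a board pack, and simply flags anything unregistered or altered. Real deployments would lean on standards such as C2PA content credentials or a vendor provenance service rather than a hand-rolled dictionary; the file name and digest here are placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical register of attested SHA-256 digests, e.g. exported from an
# internal evidence log or provenance service. Values below are placeholders.
ATTESTED_HASHES = {
    "q3_market_report.pdf": "9f2c" + "0" * 60,  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_provenance(path: Path) -> str:
    """Compare a board-pack file against the attested register."""
    expected = ATTESTED_HASHES.get(path.name)
    if expected is None:
        return "UNREGISTERED: no attestation on file - verify before relying on it"
    if sha256_of(path) == expected:
        return "MATCH: file is identical to its attested version"
    return "MISMATCH: file differs from its attested version - treat as suspect"

if __name__ == "__main__":
    sample = Path("q3_market_report.pdf")  # hypothetical board-pack attachment
    if sample.exists():
        print(check_provenance(sample))
    else:
        print("Demo file not present; point check_provenance at a real attachment.")
```

A check like this does not prove a document is truthful - only that it is the same artefact someone attested to. That distinction between integrity and accuracy is exactly the gap the governance frameworks below try to close.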

Governance frameworks are also advancing:

  • The NIST AI Risk Management Framework (2023) emphasises trustworthiness, accountability, and validation.
  • The EU AI Act (2024) requires transparency and disclosure for high-risk AI applications.
  • The OECD Principles on AI stress robustness and accountability in both design and use.

For executives, this means due diligence must extend beyond what data says to how data was shaped. Provenance, context, and distortion must be interrogated as part of decision-making.

Even the most advanced AI systems carry built-in limitations. Their “knowledge” is shaped by the data they were trained on - often a static baseline of public and freely available information. For example, recent models such as GPT-5 have training cut-offs in 2024, meaning that anything after that point must be retrieved from live search. This creates several compounding constraints:

  • Baseline gaps. AI models cannot “know” events, research, or regulatory changes after their last training update.
  • Search constraints. When reaching beyond the baseline, AI relies on external search engines, which themselves are shaped by proprietary algorithms that decide what to show - not always what is most accurate.
  • Hallucinations. AI can produce convincing but fabricated references, statistics, or case studies. These outputs are fluent and persuasive, but may have no basis in reality - making them more dangerous than obvious errors.
  • Bias inheritance. AI reflects the biases embedded in its training data. If the underlying sources over-represent certain geographies, demographics, or viewpoints, the outputs will mirror and sometimes amplify those imbalances.

The result is that AI responses should be treated as indicative, not definitive. They can guide enquiry and broaden perspective, but they cannot be assumed to represent authoritative fact.
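
One way to operationalise that scepticism is to spot-check the references an AI supplies before they reach a decision paper. The sketch below assumes the cited works carry DOIs and uses the public Crossref REST API to confirm each DOI actually exists; it is a minimal illustration rather than a full verification pipeline, the example DOIs are illustrative, and a resolving DOI still says nothing about whether the paper supports the claim attributed to it.

```python
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref's public REST API knows this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        resp = requests.get(
            url,
            timeout=timeout,
            headers={"User-Agent": "reference-spot-check/0.1"},
        )
    except requests.RequestException:
        return False  # network failure: treat as unverified, not as fabricated
    return resp.status_code == 200

# Hypothetical reference list extracted from an AI-generated briefing note.
claimed_references = [
    "10.1371/journal.pmed.0020124",  # Ioannidis (2005), cited earlier in this article
    "10.0000/made-up.2024.001",      # placeholder standing in for a fabricated citation
]

for doi in claimed_references:
    verdict = "found in Crossref" if doi_resolves(doi) else "NOT found - check manually"
    print(f"{doi}: {verdict}")
```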

For boards, the implication is clear: AI outputs require the same scrutiny as any other evidence stream. They may be useful, but they are not truth.

Executive Decision-Making in the Next Three Years

The implications are profound. By 2028, decision quality will not be judged solely by outcomes, but by the resilience of the process under distortion.

Scenarios illustrate the risk:

  • False acquisition. A multinational pursues an acquisition based on AI-generated “market sentiment.” Later, it emerges that much of the online buzz was synthetic, seeded by competitors.
  • Digital manipulation. A government steering group relies on citizen feedback about a digital service. Subsequent investigation shows bot farms inflated negative reviews to sway procurement.
  • Supply chain visibility. A global retailer bases procurement shifts on AI-scraped market data showing supplier reliability in Asia. Later analysis reveals much of the “evidence” was generated by automated content farms linked to the suppliers themselves.
  • ESG reporting. A listed company highlights positive sustainability sentiment analysis in its annual report. Months later, it emerges the dataset had been artificially boosted by bot-driven campaigns, undermining investor trust.
  • Cybersecurity readiness. A board reviews an external benchmarking report showing its cyber controls in the top quartile. The report is later found to rely on datasets partially corrupted by synthetic vulnerability data seeded by competitors.
  • Healthcare investment. A private equity fund commits capital to a health-tech firm based on strong patient testimonials and clinical trial reviews. Subsequent investigation finds that both reviews and supporting medical content were generated by AI, with minimal real-world validation.

Boards must adapt:

  • Independent validation as standard. Assurance providers will be tasked not just with financial audits, but with information provenance checks.
  • Scenario testing against misinformation. Risk committees will model not just cyberattacks, but disinformation campaigns.
  • Ethical AI oversight. Boards will extend governance to include not only how they use AI, but how they guard against AI-shaped inputs.
  • Continuous evidence monitoring. Instead of treating data as static, boards will need mechanisms to track whether evidence remains valid over time. A fact accepted in January may be contradicted by March - decisions must account for decay in reliability.
  • Diverse information channels. Boards should mandate triangulation of evidence across independent sources, ensuring no single dataset, report, or AI output dominates strategic choices.
  • AI literacy at the board level. Directors must be trained to understand how generative AI distorts or amplifies information, so they can interrogate assumptions with informed scepticism rather than blind trust.
  • Red-teaming for information integrity. Just as cyber red teams test technical defences, boards should commission deliberate challenges to the credibility of their information inputs - stress-testing decisions against the risk of distortion.

This intersects with transformation governance. Milestone risk, already under-acknowledged, will now include information integrity risk. Steering groups must ask: are we basing our decisions on undistorted evidence, or on narratives subtly shaped by AI?

Truth is not disappearing. It is becoming harder to discern.

Humans crave certainty. We want facts to be solid, truth to be stable. But truth has always been contested. Science evolves, media frames, memory distorts. AI has not invented this - it has magnified it, accelerated it, and hidden it beneath layers of plausibility.

Leaders cannot afford paralysis. The task is not to retreat into cynicism or despair, but to cultivate discernment:

  • To question more deeply.
  • To validate more rigorously.
  • To remain alert to distortion, without being paralysed by doubt.

Truth is not disappearing. It is becoming harder to discern. And in the next three years, the executives who thrive will be those who accept this reality - and lead with both scepticism and courage.

Jamie Rollings

Senior Program Manager | Digital Transformation | Banking | Financial Crime Platforms | PMP® | MBA | SAFe Scrum Master | AI learner!

2mo

Good article Adam, very thought provoking especially when considering what we are already experiencing when we consider the narratives pushed on us by once trusted media outlets alongside the previously accepted facts from government bodies that we assume are impartial and looking out for us. The rise of the AI era certainly adds to the information confusion and I fully agree that on a work level it’s crucial to be even more thorough and challenging in our acceptance of facts that feed decision making and actions, although I’m not sure we are preparing ourselves properly right now!

Daniel Steyn

Building better through constant improvement

2mo

When you talk about the actions boards and leaders should take, it seems to me that the sort of quintessential leadership most associated with corporate leadership - decisive, certain, confident, forge-ahead, arrogant even - is not a good fit in this landscape. What is needed is curiosity, self-analysing, open minded and open to persuasion and the ability to evaluate information without ego, while maintaining principles and perspective. Thank you for this very concise yet comprehensive breakdown of a complex and fascinating topic!

Adam, this article is urgent, clear, and a warning that we need to evaluate what's true and real. I think too many leaders are skimming over this. Truth was never perfect. These days it feels like we’re walking on quicksand. A lot of what looks solid in the moment collapses under the simplest fact check. Not just fake videos or dodgy articles. It extends to the foundations on which we base our decisions. If the inputs are manipulated, the outputs will follow. Truth is a governance issue now, not a philosophical one. AI doesn’t replace judgment. Anyone can spin up tools. Anyone can generate noise. Leaders own the risk, and they need to pause long enough to ask: “Where did this come from? Who shaped it? What am I missing?” Treat discernment like a muscle. Question everything, assume narrative distortion, cross-check relentlessly, and never outsource your thinking. Thanks for putting this on the table Adam. It’s a conversation we need to keep alive.

Dr Denis Cauvier

Speaker/Trainer/ 28X Best Selling Author helping companies recruit, develop & retain exceptional people. CEO Whisperer. Real Leaders Magazine 2025 Top Executive Coach

2mo

In the immortal words of Marshall McLuhan, a well done, medium is rare!
