When AI systems know us better than we know ourselves, who truly holds the power?

This article is derived from WIRED's interview with Yuval Noah Harari (see bio below), "AI, Information, and Humanity's Future: A Connected Perspective".

What follows is a short summary with examples and a personal point of view (the WIRED article is listed in the References below; it is worth reading if you have 20 minutes).


The Fundamental Shift: From Tools to Autonomous AI Decision Making

In his WIRED interview, Yuval Noah Harari emphasises a critical distinction that forms the foundation of his thinking: unlike previous technologies, AI represents not just another tool but an autonomous agent. While technologies like the printing press required human operation and decision-making, AI can independently create content, make decisions, and potentially develop its own goals. This fundamental shift transforms our relationship with technology and presents unprecedented challenges for humanity.

Example: Consider YouTube's recommendation algorithm. Unlike a traditional television guide that merely lists available programmes, YouTube's AI actively decides what you should watch next. As Harari notes, "70 percent of what people watch on YouTube is driven by recommendations from the algorithm," meaning an AI agent—not human choice—is increasingly determining what information billions of people consume daily.


The Information Challenge: Truth in the Digital Age

Building on this foundation, Harari examines the nature of information itself. He argues that the internet promised to democratise information but instead created a marketplace where fiction often overwhelms fact. Information, in Harari's view, is primarily about connection rather than truth – stories connect people, whether those stories are true or not. This creates a vulnerability that AI can exploit as it becomes increasingly capable of generating and disseminating its own narratives.

Example: Harari points to how a factual video about 9/11 on YouTube might lead viewers to conspiracy theory videos from sources like InfoWars. The algorithm doesn't prioritise truth; it prioritises engagement. As Tristan Harris notes in the interview, "The thing that works best at keeping a teenage girl watching a dieting video on YouTube the longest is to say here's an anorexia video." The AI optimises for attention, not accuracy or wellbeing.
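The mechanism Harari and Harris describe can be made concrete with a deliberately simplified sketch. This is not YouTube's actual system; it is a toy ranker, with invented video data, whose only purpose is to show that when the scoring function contains a term for predicted engagement and no term for accuracy or wellbeing, misleading content can win purely by holding attention longer.

```python
# Toy illustration (NOT YouTube's real recommender): rank candidate videos
# purely by predicted watch time. All video data below is invented.

def rank_by_engagement(candidates):
    """Return candidates sorted by predicted watch time, descending.

    Note what is absent: no term for factual accuracy or viewer wellbeing
    ever enters the score -- the gap Harari and Harris are pointing at.
    """
    return sorted(candidates,
                  key=lambda v: v["predicted_watch_minutes"],
                  reverse=True)

candidates = [
    {"title": "Documentary: verified facts", "predicted_watch_minutes": 4.0,  "accurate": True},
    {"title": "Conspiracy deep-dive",        "predicted_watch_minutes": 11.5, "accurate": False},
    {"title": "Balanced news recap",         "predicted_watch_minutes": 3.2,  "accurate": True},
]

for video in rank_by_engagement(candidates):
    print(video["title"])
```

Running the sketch puts the inaccurate "Conspiracy deep-dive" first, simply because its predicted watch time is highest; fixing this would require adding accuracy or wellbeing terms to the objective, which is exactly the design choice the interview argues platforms have not made.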


The Historical Context: AI Making Decisions Beyond Human Understanding

Harari leverages his historian's perspective to contextualise the AI revolution. Previous technological revolutions, from the industrial to the digital, ultimately remained under human control. Humans decided how to structure societies around these technologies. With AI, however, Harari warns that we may be approaching a point where AI systems participate in these decisions, potentially moving beyond human understanding. This historical contrast helps clarify why AI represents such a profound shift.

Example: Harari contrasts the steam engine with modern AI: "When the steam engine was invented, it was still humans who decided how to create industrial societies... The steam engine itself did not make any decision." By contrast, he imagines a future where "AI has its own money, makes its own decisions about how to spend it, and even starts investing it in the stock market." In such a world, understanding the financial system would require understanding not just human thinking but also AI thinking.


The Storytelling Challenge: AI as Most Engaging Narrator

At the heart of human civilisation, according to Harari, is our unique ability to create shared stories that enable large-scale cooperation. Money, religion, and nations all represent stories that connect millions of people who will never meet personally. AI now threatens humanity's monopoly on storytelling, as it can create and network stories potentially more effectively than humans. This raises profound questions about who will control the narratives that shape our future societies.

Example: Harari mentions an AI that "created a religion and wrote a holy book of the new religion and also created or helped to spread a new cryptocurrency," accumulating the equivalent of £30 million. This demonstrates how AI can already create and propagate powerful narratives that influence human behaviour and beliefs, potentially reshaping the shared stories that form the foundation of our societies.


The Trust Paradox: Who Controls Whom?

As AI becomes more integrated into our lives, Harari identifies a disturbing paradox: people who struggle to trust other humans often place unwarranted trust in AI systems. The race to develop increasingly powerful AI is driven by distrust between companies and nations, yet these same entities believe they can trust the AI they create. This paradox exposes a philosophical crisis about authority, choice, and what it means to be human in an age where our minds can be "hacked" by algorithms.

Example: Harari describes the contradictory thinking of AI developers: "When you talk with the people who lead the AI revolution... and you ask them, 'Why are you moving so fast?' They almost all say... 'We know it's risky, we understand it's dangerous... But the other company or the other country doesn't slow down.'" Yet when asked if they can trust their own AI, "they answer yes." This paradox—distrusting humans while trusting alien intelligence—reveals a profound philosophical inconsistency.


The Path Forward: Navigating the AI Revolution

Despite these challenges, Harari advocates for a middle path between paralysing fear and blind optimism. He suggests several approaches for both individuals and society: developing greater self-knowledge to resist manipulation, joining organisations to create collective responses, creating systems where AI serves human interests rather than corporate or government ones, and developing global cooperation on AI regulation. Most crucially, he calls for a new philosophical framework that moves beyond 18th-century notions of human choice and acknowledges our vulnerabilities while preserving the aspects of human freedom we value most.

Example: Harari proposes an "AI sidekick" that would act as a digital guardian: "Let's say you have an AI sidekick who monitors you all the time, 24 hours a day... But this AI is serving you... it gets to know your weaknesses, and by knowing your weaknesses it can protect you against other agents trying to hack you." Rather than abandoning technology, Harari suggests reimagining it to serve genuine human flourishing, perhaps protecting us from our own biases and vulnerabilities while respecting our agency.


The Recursive Security Challenge: Protecting Our Protectors

Harari's concept of an "AI sidekick" raises an important recursive question: if we rely on AI to protect us from manipulation, what protects that AI from being compromised? Just as we need antivirus software for our computers, our AI guardians would require their own protection systems, creating potentially infinite layers of security concerns.

Example: Consider a personal AI assistant that monitors your online activity to protect you from targeted manipulation. This AI itself becomes a high-value target for hackers, as compromising it would provide direct access to your digital life. The security of this protective AI becomes as crucial as the security of nuclear launch codes—perhaps more so, as it mediates your perception of reality. This recursive security problem highlights why AI safety cannot be an afterthought but must be fundamental to AI development.


Conclusion: A Call for Responsible Stewardship

The interconnected challenges Harari identifies—AI agency, information integrity, historical perspective, narrative control, trust paradoxes, and security recursion—point to a singular conclusion: we need a new framework for responsible AI stewardship that begins immediately.

This framework must recognise that decisions made today by developers, regulators, and users will shape the trajectory of human-AI relations for generations. As AI becomes increasingly integrated into our decision-making processes, financial systems, and social structures, the window for establishing ethical guardrails narrows.

Harari's analysis suggests we are at a critical juncture, comparable to the early days of nuclear technology but potentially more consequential. Just as the dawn of nuclear power required new international agreements, safety protocols, and ethical frameworks, the AI revolution demands similar coordinated responses—but with greater urgency, as AI development moves at digital rather than industrial speeds.


The responsibility falls not just on tech companies and governments but on every stakeholder in society to engage with these questions. Educational systems must prepare citizens to understand AI's capabilities and limitations. Legal systems must evolve to address novel questions of AI agency and accountability. And individuals must develop both technical literacy and philosophical depth to navigate a world where the line between human and machine intelligence increasingly blurs.

The time for this responsible stewardship is not in some distant future after superintelligence emerges, but now—in the formative stages of AI development—when we still have the opportunity to shape its trajectory toward human flourishing rather than human diminishment.


References

1. Yuval Noah Harari: 'How Do We Share the Planet With This New Superintelligence?' | WIRED (April 2025). Available at: https://www.wired.com/story/questions-answered-by-yuval-noah-harari-for-wired-ai-artificial-intelligence-singularity/

2. Yuval Noah Harari Sees the Future of Humanity, AI, and Information | The Big Interview | WIRED (May 2025). Available at: https://www.wired.com/video/watch/big-interview-the-big-interview-yuval-noah-harari

3. When Tech Knows You Better Than You Know Yourself | WIRED (October 2018). Available at: https://www.wired.com/story/artificial-intelligence-yuval-noah-harari-tristan-harris/

About Yuval Noah Harari:

Professor Yuval Noah Harari is a historian, philosopher, and bestselling author of books including "Sapiens: A Brief History of Humankind," "Homo Deus: A Brief History of Tomorrow," and "21 Lessons for the 21st Century." His latest book, "Nexus: A Brief History of Information Networks From the Stone Age to AI," explores the unprecedented threat posed by artificial intelligence. Harari is a research fellow at the Centre for the Study of Existential Risk at the University of Cambridge and lectures at the Hebrew University of Jerusalem.
