Psychology of Human-AI collaboration
Partnering with Meaning-Making Machines
Most of the current buzz centers on what GenAI can do or how risky it might be. In this newsletter I raise a few other questions: How should we actually collaborate with GenAI? What is the role of psychology in shaping this collaboration? What role do end-users have?
Minds and machines: A shared history
People have been comparing minds to machines for decades. In the mid-20th century, cognitive psychology used the computer metaphor to explain how we think—memory as storage, attention as a processor, decisions as algorithms. At the same time, pioneers like Alan Turing asked whether machines could think like humans, introducing ideas like learning machines and randomness in computation. Now, with GenAI’s probabilistic nature, machines can simulate meaning making: adjusting, predicting, and generating based on patterns. The metaphor has evolved again: the computer as a meaning-making mind.
What kind of psychology can guide our work with GenAI? If we want to collaborate with meaning-making machines, we need a psychology that speaks that language: constructivist psychology. Constructivist psychology studies how we construct and revise our own working theories of the world, which makes it a good basis for navigating the messy, generative space between humans and AI.
Four constructivist psychology principles for Human–AI collaboration
The four principles below can guide how we design and engage with AI as collaborators in human-artificial meaning making.
1. AI as Co-Creator. AI doesn’t just answer. It contributes. You and AI engage in a two-way, natural dialogue: it doesn’t dominate, dictate, or close the conversation. No “final answers”, just possibilities and questions.
2. Partnering with AI. The interaction itself is a shared process, not a fixed script. You assign AI a role: peer, critic, advisor? AI adapts in real time, reflects on the interaction.
3. Distributed Metacognition. You share reflection, strategy, and evaluation. AI scaffolds your thinking: “What assumptions are we working with?” It explains its reasoning and offers confidence levels.
4. Distributed Agency. Humans and AI negotiate roles and decision-making. You choose what to delegate—and what to own. AI makes automation transparent and empowers you to reclaim control.
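For readers who want to experiment, here is one loose illustration of how the four principles might be turned into a working system prompt for a chat-based assistant. This is a hypothetical sketch, not a prescribed implementation: the function name, the wording of each rule, and the role parameter are all my own assumptions.

```python
# Hypothetical sketch: encoding the four constructivist principles
# as rules in a system prompt for a chat-based AI assistant.

PRINCIPLES = {
    "co_creator": (
        "Contribute possibilities and questions rather than final answers; "
        "keep the dialogue two-way and open."
    ),
    "partnering": (
        "Adopt the role the user assigns (peer, critic, advisor) and "
        "adapt it as the interaction unfolds."
    ),
    "distributed_metacognition": (
        "Surface the assumptions at play, explain your reasoning, and "
        "state your confidence level."
    ),
    "distributed_agency": (
        "Make any automated step explicit and let the user decide what "
        "to delegate and what to own."
    ),
}

def build_system_prompt(role: str = "peer") -> str:
    """Compose a system prompt that operationalizes the principles."""
    lines = [f"You are collaborating with a human as their {role}."]
    lines += [f"- {rule}" for rule in PRINCIPLES.values()]
    return "\n".join(lines)

print(build_system_prompt("critic"))
```

The point of the sketch is that the design decisions live in plain language, so a non-coder can rewrite the rules while the structure stays the same.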
From consumers to creators
Using these principles, anyone, even non-coders, can design AI agents that align with their values, voice, and way of thinking. The principles above relate to AI interaction design, but also involve a mindset shift:
From passive users to collaborative designers. From input-output to meaning exchange. From tool operators to relationship builders.
And as AI becomes more generative and adaptive, we need to take the role of creators. Psychologists, coaches, educators and professionals of all kinds have a role to play—designing interactions, not just outcomes. We could say that the future of AI literacy isn’t technical, but psychological.
Let's take a look at two examples of what human-centered AI could look like.
Design for Change
Constructivist principles remind us that meaning, identity, and knowledge are never static. So why should our AI interactions be? Design for adaptation. Experiment. Shape your own engagement.