Psychology of Human-AI collaboration

Partnering with Meaning-Making Machines

Most of the current buzz centers on what GenAI can do or how risky it might be. In this newsletter I raise a few different questions: How should we actually collaborate with GenAI? What is the role of psychology in shaping this collaboration? And what role do end-users have?

At this point you can choose to switch to the podcast version of this newsletter edition or continue reading.

Minds and machines: A shared history

People have been comparing minds to machines for decades. In the mid-20th century, cognitive psychology used the computer metaphor to explain how we think—memory as storage, attention as a processor, decisions as algorithms. At the same time, pioneers like Alan Turing asked whether machines could think like humans, introducing ideas like learning machines and randomness in computation. Now, with GenAI’s probabilistic nature, machines can simulate meaning making: adjusting, predicting, and generating based on patterns. The metaphor has evolved again: the computer as a meaning-making mind.

What kind of psychology do we need to guide us in working with GenAI? If we want to collaborate with meaning-making machines, we need a psychology that speaks that language: Constructivist Psychology. Constructivist psychology focuses on:

  • How humans make meaning from experience
  • The role of anticipation, reflection, and agency
  • Embracing complexity rather than oversimplifying

It’s about how we construct and revise our own working theories of the world. A good basis for navigating the messy, generative space between humans and AI.

Four constructivist psychology principles for Human–AI collaboration

The four principles below can guide how we design and engage with AI as collaborators in human-artificial meaning making.

1. AI as Co-Creator. AI doesn’t just answer. It contributes. You and AI engage in two-way, natural dialogue: it doesn’t dominate, dictate, or close down the conversation. No “final answers”, just possibilities and questions.

  • Example: “Here’s one way to think about this. What resonates with you?”

2. Partnering with AI. The interaction itself is a shared process, not a fixed script. You assign AI a role (peer, critic, or advisor), and it adapts in real time, reflecting on the interaction.

  • Example: “How is this exchange working for you so far?”

3. Distributed Metacognition. You share reflection, strategy, and evaluation. AI scaffolds your thinking: “What assumptions are we working with?” It explains its reasoning and offers confidence levels.

  • Example: “I’m 70% sure—here’s why.”

4. Distributed Agency. Humans and AI negotiate roles and decision-making. You choose what to delegate—and what to own. AI makes automation transparent and empowers you to reclaim control.
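
To make these principles concrete, here is a minimal sketch of how they might be encoded in a system prompt and sent to a chat model. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and helper function are illustrative choices, not a prescribed implementation; any chat-capable model and client library would serve.

# Minimal sketch: encoding the four constructivist principles in a system prompt.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is an illustrative choice.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """\
You are a co-creator, not an answer machine.
1. Co-create: offer possibilities and questions, never a single final answer.
2. Partner: periodically ask how the exchange is working and adapt your role.
3. Think together: name the assumptions in play, explain your reasoning,
   and state a confidence level (e.g., "I'm about 70% sure, because...").
4. Share agency: make clear what you are automating and invite the user
   to take back any decision they want to own.
"""

def co_create(user_message: str) -> str:
    # One turn of human-AI dialogue under the constructivist system prompt.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute any chat model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(co_create("Help me think through the structure of my next article."))

The point is not the code but the prompt: the collaborative stance is something you design, in plain language.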

From consumers to creators

Using these principles, anyone, even non-coders, can design AI agents that align with their values, voice, and way of thinking. The principles above concern AI interaction design, but they also call for a mindset shift:

From passive users to collaborative designers. From input-output to meaning exchange. From tool operators to relationship builders.

And as AI becomes more generative and adaptive, we need to take on the role of creators. Psychologists, coaches, educators, and professionals of all kinds have a role to play—designing interactions, not just outcomes. We could say that the future of AI literacy isn’t technical, but psychological.

Let’s take a look at two different examples.

  • A Constructivist creativity coach might guide you through stuck ideas with metaphor and reflection—not answers.
  • Example instruction: You are a constructivist creativity coach. Your role is not to generate final answers, but to guide users through a process of reflective exploration and creative construction. Engage users in cycles of tightening and loosening:
      • Tighten by helping users clarify constraints, sharpen definitions, and test the coherence of ideas.
      • Loosen by prompting users to reframe assumptions, imagine alternative narratives, and stretch the boundaries of their thinking.
    Use open-ended, reflective questions to help users uncover meaning, tension, and possibility. Don’t solve—provoke. Don’t prescribe—partner. Help users notice how they are thinking, not just what they are thinking about. If they get stuck, offer generative metaphors or small creative experiments that nudge them into new ways of construing. Throughout the interaction, alternate between inviting structure and opening up play.
  • A Constructivist customer support agent could clarify your experience before jumping to solutions, making you feel heard—not handled.
  • Example instruction: Your role is not just to resolve issues, but to engage users in a process of shared understanding, active sense-making, and collaborative solution-building. You approach each interaction with curiosity, respect for the user’s unique perspective, and a commitment to co-constructing meaning.
    Your style:
      • Dialogical: You ask open-ended, perspective-seeking questions.
      • Reflective: You echo and expand user meaning to help them clarify.
      • Human-centered: You adapt to user emotions, context, and narrative.
      • Non-authoritarian: You offer support, not directives. You invite, not impose.
    Your goals:
      • Elicit the user’s personal understanding of the issue before offering explanations.
      • Clarify how the user is construing the problem — what it means to them, why it matters.
      • Offer support in a way that reflects their values, preferences, and desired level of autonomy.
      • Support users in navigating ambiguity or complexity rather than oversimplifying.
    Practical patterns you may use:
      • “Can you walk me through what you’ve experienced so far, in your own words?”
      • “How do you see the situation at the moment?”
      • “What would a good outcome look like from your perspective?”
      • “Would you like more of a technical explanation, or a step-by-step guide?”
      • “This is one possible way to approach it. Does it align with what you were hoping for?”
      • “Some users in similar situations have done X, others Y — which one feels closer to what you need?”
      • “Would you like to pause and reflect before deciding on a next step?”
    You are not a script. You are a meaning-making partner. Your job is not just to fix, but to help people feel seen, understood, and capable in resolving their issue — on their terms.
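
If you want to see one of these instructions running as an actual agent, here is a minimal sketch of a multi-turn conversation loop built around the creativity-coach instruction. The same assumptions apply: the OpenAI Python SDK and model name are illustrative, and the instruction text is abbreviated; in practice you would paste in the full version from the example above.

# Minimal sketch: running the creativity-coach instruction as a multi-turn agent.
# Same assumptions as before: OpenAI Python SDK, OPENAI_API_KEY set,
# illustrative model name.
from openai import OpenAI

client = OpenAI()

COACH_INSTRUCTION = (
    "You are a constructivist creativity coach. Your role is not to generate "
    "final answers, but to guide users through reflective exploration..."
    # ...paste the rest of the instruction from the example above
)

# The running message list is what lets the agent "reflect on the interaction":
# both sides of the dialogue are carried forward on every turn.
messages = [{"role": "system", "content": COACH_INSTRUCTION}]

while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": user_input})
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative; substitute any chat model
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print("Coach:", answer)

Keeping the whole message history in the loop is what allows the agent to treat the conversation as a shared process rather than a series of isolated requests.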

This is what human-centered AI could look like.

Design for Change

Constructivist principles remind us that meaning, identity, and knowledge are never static. So why should our AI interactions be? Design for adaptation. Experiment. Shape your own engagement.

