#40 - AI Didn’t Free You – It Exposed You
The first sensation isn’t always empowerment. It can be exposure.

There’s a moment – maybe the first time a vague business concept becomes a detailed plan or a casual doodle becomes a rendered image – when you realize that the story you’ve told yourself about why you aren’t doing the thing no longer holds. “I don’t know how” once felt like a sensible alibi. Now it feels hollow. The practical limitations that protected our ambitions from scrutiny are thinning, and what remains is something more intimate: will.

This isn’t a simple productivity tale. Capability has become ambient, woven through browsers, documents, and workflows. Yet most workers still say they rarely use AI in their jobs, and many feel more worried than hopeful about its role. If the means are available, inaction starts to look less like circumstance and more like choice. And choice is heavy.


A culture of rising expectations

When prototyping compresses from months to minutes, expectations inflate. Benchmarks silently harden – one-person teams are asked to perform like small companies, and small companies like large ones. As capability rises, so too does a quieter cost: the psychological tax of expectation. This is the paradox of empowerment, and it generates its own pressure.

Psychologists call it the dark side of choice: more options do not always mean more freedom. In one well-known study, shoppers facing 24 jam flavors were less likely to buy than those offered six. Too much possibility can feel less like liberation and more like paralysis.

Excuses once shielded us from that pressure. Now they’ve evaporated, and the pressure lands more squarely on individuals. If you’re not using the tools, is that a structural limitation, a deliberate choice, or avoidance? Whatever the reason, the absence of excuses makes the question sharper, both in the workplace and within ourselves.


Accountability’s strange new geometry

When something goes right with intelligent tools, credit diffuses; when it goes wrong, blame concentrates. Madeleine Elish has called this the moral crumple zone: humans absorb responsibility for failures in complex automated systems, even when their actual control is limited.

There’s another irony here, captured decades earlier by Lisanne Bainbridge: the more routine work a system handles, the more humans are left with only the rare, high-stakes – and often most consequential – interventions for which they are least prepared. The end of excuses does not mean the end of uncertainty; it means a new kind of responsibility.


Philosophical lenses for a post-excuse world

Philosophy has long wrestled with the tension between freedom and responsibility. Generative AI doesn’t invent that tension, but it magnifies it. Old obstacles fall away, and what remains is the sharper question of how we live with choices that are undeniably ours. A few perspectives offer guidance.

Existentialism. Jean-Paul Sartre argued that we are “condemned to be free”; we choose, even in constraint. Generative AI multiplies feasible choices and therefore multiplies responsibility. The friction that remains – putting your name to the work, risking judgment – is the friction of authenticity. This is magnified by cultural shifts toward perfectionism, which research shows has been steadily rising across generations (Curran & Hill, 2019). The easier it is to begin, the more we worry about not being flawless when we do.

Stoicism. If the sphere of control expands instrumentally (you can do more), the counsel is not to do everything but to clarify what is yours to do. That invites constraints as virtues: rituals, checklists, decision rules that protect attention from infinite possibility. The paradox of our era is that wisdom looks like adding friction back where the world removed it.

Self-Determination Theory. Deci and Ryan remind us that intrinsic motivation rests on autonomy, competence, and relatedness. Tools that feign competence without building felt competence can undermine motivation; workflows that isolate us from peers can drain relatedness. The way we use AI must support, not erode, those three needs.

Together, these lenses converge on a simple truth: capability alone does not create meaning. It is our willingness to choose, to focus, and to connect that transforms possibility into something worth doing. AI may strip away our excuses, but it cannot supply our reasons.


Living without excuses

Why delay when the path is clear? Because clarity about means doesn’t resolve conflict about ends.

Procrastination often tracks with task aversion and fear of exposure. Piers Steel describes it not as poor time management but as a failure of self-regulation – one that thrives precisely when friction is gone but self-doubt remains. Choice overload adds another twist: even healthy ambition can sputter when spread across too many plausible projects. The rational response to abundance is selective excellence. The irrational response is to try everything and finish nothing, or to wait for a perfect signal that never arrives.

Meanwhile, social comparison grows louder. As Leon Festinger observed, we calibrate ourselves by others around us, but when the field shifts quickly, those comparisons can destabilize more than they guide. AI accelerates this dynamic, making it harder to hide behind excuses when others are visibly producing more, faster.

If excuses are fading, we need something to replace them. A few practices, tested by research and common sense:

  • Shrink the unit of action. Move from mission statements to commitments you can keep this week. Small wins build felt competence, which sustains motivation.
  • Design “good friction.” Insert checks where it counts: a second pass, a source review, a user test. Bainbridge warned that automation tempts us to neglect rare but crucial judgment calls; ritualized friction keeps humans in shape for those moments.
  • Name the true blocker. Keep an “excuse ledger.” Each time you stall, write the reason. Is it external (tool-solvable) or internal (value conflict, fear)? If internal, either narrow the task until it’s non-threatening or consciously decline it.
  • Practice honorable “no’s.” In a culture that equates abundance with obligation, you’ll need clean refusals. Without excuses, the sentence becomes: “I’m choosing not to pursue this because it doesn’t meet my criteria.” Choice, stated plainly, is lighter than avoidance.
  • Protect relatedness. Sherry Turkle’s work reminds us that technology can simulate connection while draining the stuff that builds resilience: conversations, mentorship, belonging. Guard time for those deliberately.

Without excuses, we are left with choice, and the responsibility to design our lives in ways that make those choices deliberate, sustainable, and aligned with what we value most.


Closing: freedom with form

If generative AI has a psychological headline, it’s this: the tools make action cheap and avoidance expensive. They reveal, with awkward clarity, whether we actually want the things we say we want.

Living well in this landscape requires a quiet combination of courage and constraint. Courage to choose publicly; constraint to choose less. Selective excellence beats frantic ubiquity. Add rituals where the world removed friction. Replace evasive excuses with explicit preferences, and give yourself permission to admit, “I don’t want this enough to do it,” which is not failure but alignment.

The end of excuses is not the end of compassion for ourselves or others. It is the beginning of a clearer story – one where our actions match our values, and our tools extend our judgment rather than replace it. Capability has advanced. Our standards can, too.


References

  • Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
  • Curran, T., & Hill, A. P. (2019). Perfectionism is increasing over time: A meta-analysis of birth cohort differences from 1989 to 2016. Psychological Bulletin, 145(4), 410–429.
  • Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11(4), 227–268.
  • Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60.
  • Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140.
  • Iyengar, S. S., & Lepper, M. R. (2000). When choice is demotivating: Can one desire too much of a good thing? Journal of Personality and Social Psychology, 79(6), 995–1006.
  • OECD. (2025a). Bridging the AI skills gap: Is training keeping up? Paris: OECD.
  • OECD. (2025b). Emerging divides in the transition to artificial intelligence (Regional Development Papers No. 147). Paris: OECD.
  • Pew Research Center. (2025). Workers’ views of AI use in the workplace. Washington, DC: Pew Research Center.
  • Schwartz, B. (2004). The paradox of choice: Why more is less. New York, NY: Ecco/HarperCollins.
  • Stanford Institute for Human-Centered AI. (2025). AI Index Report 2025. Stanford, CA: Stanford University.
  • Steel, P. (2007). The nature of procrastination: A meta-analytic and theoretical review of quintessential self-regulatory failure. Psychological Bulletin, 133(1), 65–94.
  • Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York, NY: Basic Books.
