#40 - AI Didn’t Free You – It Exposed You
The first sensation isn’t always empowerment. It can be exposure.
There’s a moment – maybe the first time a vague business concept becomes a detailed plan or a casual doodle becomes a rendered image – when you realize that the story you’ve told yourself about why you aren’t doing the thing no longer holds. “I don’t know how” once felt like a sensible alibi. Now it feels hollow. The practical limitations that protected our ambitions from scrutiny are thinning, and what remains is something more intimate: will.
This isn’t a simple productivity tale. Capability has become ambient, woven through browsers, documents, and workflows. Yet most workers still say they rarely use AI in their jobs, and many feel more worried than hopeful about its role. If the means are available, inaction starts to look less like circumstance and more like choice. And choice is heavy.
A culture of rising expectations
When prototyping compresses from months to minutes, expectations inflate. Benchmarks silently harden – one-person teams are asked to perform like small companies, and small companies like large ones. As capability rises, so too does a quieter cost: the psychological tax of expectation. This is the paradox of empowerment, and it generates its own pressure.
Psychologists call it the dark side of choice: more options do not always mean more freedom. In one well-known study, shoppers facing 24 jam flavors were less likely to buy than those offered six. Too much possibility can feel less like liberation and more like paralysis.
Excuses once shielded us from that pressure. Now they’ve evaporated, and the pressure lands more squarely on individuals. If you’re not using the tools, is that a structural limitation, a deliberate choice, or avoidance? Whatever the reason, the absence of excuses makes the question sharper, both in the workplace and within ourselves.
Accountability’s strange new geometry
When something goes right with intelligent tools, credit diffuses; when it goes wrong, blame concentrates. Madeleine Elish has called this the moral crumple zone: humans absorb responsibility for failures in complex automated systems – even when their actual control was limited.
There’s another irony here, captured decades earlier in Lisanne Bainbridge’s work on the ironies of automation: the more routine work a system handles, the less practiced humans become at the rare, high-stakes interventions that remain – precisely the moments when their judgment matters most. The end of excuses does not mean the end of uncertainty; it means a new kind of responsibility.
Philosophical lenses for a post-excuse world
Philosophy has long wrestled with the tension between freedom and responsibility. Generative AI doesn’t invent that tension, but it magnifies it. Old obstacles fall away, and what remains is the sharper question of how we live with choices that are undeniably ours. A few perspectives offer guidance.
Existentialism. Jean-Paul Sartre argued that we are “condemned to be free”; we choose, even in constraint. Generative AI multiplies feasible choices and therefore multiplies responsibility. The friction that remains – putting your name to the work, risking judgment – is the friction of authenticity. This is magnified by cultural shifts toward perfectionism, which research shows has been steadily rising across generations (Curran & Hill, 2019). The easier it is to begin, the more we worry about not being flawless when we do.
Stoicism. If the sphere of control expands instrumentally (you can do more), the counsel is not to do everything but to clarify what is yours to do. That invites constraints as virtues: rituals, checklists, decision rules that protect attention from infinite possibility. The paradox of our era is that wisdom looks like adding friction back where the world removed it.
Self-Determination Theory. Deci and Ryan remind us that intrinsic motivation rests on autonomy, competence, and relatedness. Tools that feign competence without building felt competence can undermine motivation; workflows that isolate us from peers can drain relatedness. The way we use AI must support, not erode, those three needs.
Together, these lenses converge on a simple truth: capability alone does not create meaning. It is our willingness to choose, to focus, and to connect that transforms possibility into something worth doing. AI may strip away our excuses, but it cannot supply our reasons.
Living without excuses
Why delay when the path is clear? Because clarity about means doesn’t resolve conflict about ends.
Procrastination often tracks with task aversion and fear of exposure. Piers Steel describes it not as poor time management but as a failure of self-regulation, which thrives precisely when friction is gone but self-doubt remains. Choice overload adds another twist. Even healthy ambition can sputter when multiplied across too many plausible projects. The rational response to abundance is selective excellence. The irrational response is to try everything and finish nothing, or to wait for a perfect signal that never arrives.
Meanwhile, social comparison grows louder. As Leon Festinger observed, we calibrate ourselves by others around us, but when the field shifts quickly, those comparisons can destabilize more than they guide. AI accelerates this dynamic, making it harder to hide behind excuses when others are visibly producing more, faster.
If excuses are fading, we need something to replace them. A few practices, supported by research and common sense, can fill that role: choose fewer projects and finish them, reintroduce deliberate friction through rituals, checklists, and decision rules, and swap vague excuses for explicit preferences.
Without excuses, we are left with choice, and the responsibility to design our lives in ways that make those choices deliberate, sustainable, and aligned with what we value most.
Closing: freedom with form
If generative AI has a psychological headline, it’s this: the tools make action cheap and avoidance expensive. They reveal, with awkward clarity, whether we actually want the things we say we want.
Living well in this landscape requires a quiet combination of courage and constraint. Courage to choose publicly; constraint to choose less. Selective excellence beats frantic ubiquity. Add rituals where the world removed friction. Replace evasive excuses with explicit preferences, and give yourself permission to admit, “I don’t want this enough to do it,” which is not failure but alignment.
The end of excuses is not the end of compassion for ourselves or others. It is the beginning of a clearer story – one where our actions match our values, and our tools extend our judgment rather than replace it. Capability has advanced. Our standards can, too.