The Alarm and the Affirmation

Opinion · 6 min read · Published 2026-02-12 · AI Primer
Tags: AI Careers, AI Narratives, Critical Analysis
Two pieces went viral this week about AI and what it means for your career. Together, they tell you a lot — just not quite what either author intended.

Matt Shumer, who runs an AI startup, wrote a long, personal warning. AI is improving faster than you think. His own job has already been transformed. Yours is next. He compares the moment to COVID in February 2020: you think it's overblown, but you're wrong. Act now.

Tatiana Tsiguleva wrote a response. Not a rebuttal — a continuation, as she put it. The technology is real, yes, but where Shumer sees threat, she sees possibility. New jobs are forming. The cost of making things is collapsing. The bottleneck shifts to the original idea. Aristotle imagined this. Maybe it's freedom, not doom.

I'd recommend reading both. Shumer's piece is the better account of what the technology can do right now. Tsiguleva's is the better essay about what it might mean. Together they cover a lot of ground.

But I keep coming back to what neither piece does, and I think it's the thing most people actually need.

Shumer tells you to feel urgency. Tsiguleva tells you to feel possibility. Both are essentially prescribing the correct emotional response to AI: Shumer's is fear tempered by action, Tsiguleva's is hope tempered by realism. Both are thoughtful people writing in good faith.

Neither gives you a framework for making a specific decision about your specific situation.

And that, I think, is the gap that matters most right now. Not "should I be scared or excited?" but "what does this actually mean for my team's workflow in Q3?" Not "is AI coming for my job?" but "which parts of my work are most affected, on what timeline, and what should I be learning first?" Not "how should I feel?" but "how should I think?"

Shumer writes:

I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears.

Tsiguleva writes:

If a machine can do your job, what was your job actually worth to you?

Both are interesting provocations. But if you're a finance director, or an HR leader, or a solicitor, or a marketing manager — if you're someone with a team, a budget, and decisions to make this quarter — neither question is quite the right one. The right question is more boring and more useful: what can these tools reliably do in my domain today, where are they heading in the next twelve months, and what's my plan for integrating them thoughtfully?

That's not as shareable. It doesn't go viral. But it's what actually helps.

There's a pattern in how technology discourse works. A new capability arrives. Someone sounds the alarm. Someone else provides the counter-narrative. The discourse becomes a debate between "this changes everything" and "this changes everything, but in a good way." People pick a side based on temperament. And the practical middle — the part where you actually figure out what to do — gets squeezed out because it's less dramatic than either pole.

We're watching that happen in real time with AI. The alarmed camp and the optimistic camp are both growing. The "let's think about this carefully, by role, by industry, with evidence" camp is strangely underpopulated.

Tsiguleva's list of new AI-created jobs is a good example. "Clinical AI translators," "AI trust engineers," "AI workflow architects" — these all sound plausible. Some of them are real. But listing emerging job titles as a counterweight to Shumer's displacement warnings is the hopeful version of the same rhetorical move: anecdote and extrapolation in place of analysis. It's reassuring to read. It's not a plan.

Similarly, her advice to "reduce the noise" and "find the gold inside yourself" is warm and well-meaning. But a mid-career professional with a mortgage, two children, and a job that might look very different in two years needs more than introspection. They need to understand what's actually changing in their field, how quickly, and what the practical response looks like.

That's not a criticism of either writer. They're doing what essays do: shaping narrative, provoking thought, stirring emotion. That has value.

But what most professionals need right now isn't a better narrative. It's a clearer picture. One that's specific to what they do, honest about uncertainty, and practical enough to act on this month — not just emotionally compelling enough to share this week.

That's the work we're trying to do here. Less dramatic than either piece. More useful, we hope, than both.

Stay current weekly

Get new commentary and weekly AI updates in the AI Primer Briefing.