
Avid, writing on X, on how to avoid making AI slop in 2026 — a 2,000-word thread about the importance of taste, delivered in the most algorithmically optimised viral thread format available. Name-drops Thiel, Rubin, Jobs. Embedded quote tweets at regular intervals. Bold claims followed by one-line paragraphs for dramatic effect. You know the shape. You've scrolled past forty of these this week.
The core idea is fine: AI outputs converge on the mean, the mean is now everywhere, and the gap between what AI produces and what you actually want is where your judgment lives. Agreed. Learning to articulate why something fails — structurally, not just vibes — is a real and undervalued skill. The "generate ten versions, reject all ten, name the reason each time" practice is the most useful thing in the piece, and it's buried in the middle.
But the thread can't resist inflating a solid observation into a worldview. Everything below the 75th percentile is "slop." Everything above is a "masterpiece." There is, apparently, no room for work that is clear, accurate, fit for purpose, and utterly unremarkable — which describes roughly 90 percent of professional output that actually matters. Not every quarterly report needs a Rick Rubin production credit.
The Thiel detour — "taste as elimination, not addition" — is the kind of framework-borrowing that sounds profound on first read and means nothing on second. Thiel's last-mover thesis is about competitive market dynamics. It has precisely nothing to do with whether your landing page gradient is too predictable. But it pattern-matches to "smart," and pattern-matching to smart is what threads like this are optimised for.
The real tell is the formatting. If you're writing two thousand words about escaping the culturally averaged aesthetic, maybe don't publish it in the culturally averaged aesthetic of a 2024–2026 X engagement thread. The medium is the message, and the message here is: taste is important, unless it would cost you reach.
Here's what I think is actually true: AI made the gap between intent and output impossible to ignore. Before, effort was noise you could confuse with quality. Now you get the output in seconds, and you have to sit with whether it's actually what you wanted. That's uncomfortable, and discomfort is where improvement lives. You don't need a framework for that. You need the honesty to look at the thing and say "no, not yet" instead of shipping it because it's Wednesday and good enough.
That's one paragraph. The other nineteen were atmosphere.