The Best Evidence in This Piece Isn't the Argument
Source: Nicolas Bustamante on X

Nicolas Bustamante's LLMs Eat Scaffolding for Breakfast is circulating, and it deserves a closer read than most people will give it.
The Codex system prompt comparison is the piece's best moment — and it's the one thing you'll actually remember. o3's prompt: 310 lines. GPT-5's: 104 lines. That's not an assertion about model capability; it's a measurement of it. The old prompt had to explain how to behave as a coding agent. The new one assumes the model already knows. Most writing about "models getting smarter" stays vague because the author doesn't have anything concrete to point at. This does.
The argument built around that evidence — that complexity is now a liability, that the best AI code is close to the model and light on abstraction — holds up, and the organisational psychology section earns its place. Resume-driven development as a force preventing teams from deleting infrastructure they no longer need is a real phenomenon that doesn't get named often enough.
Where it stops earning its confidence is the "Four Constants" section. Three of the four — context windows, speed, cost — are measurable trends with numbers attached. Then comes "Intelligence: There is no Wall," which opens with the same declarative tone and immediately pivots to "every job can be done and will be done by AI." That's not a constant. It's a prediction dressed as an observation, and the move is so smooth most readers won't notice the gear change.
The scaffolding argument works. The eschatology is free-riding on it.