Your AI assistant isn't forgetting things. It never knew them.

Commentary · 2 min read · Published 2026-03-18 · AI Primer

Source: Simon Willison

Tags: AI and Software · Developer Tools · AI Adoption

Simon Willison published a clean explainer on how coding agents work this week. Worth two minutes of your time if you use any of these tools professionally.

One thing it gets right that almost no other explainer does: the statelessness point. Every time you send a message, the model starts from scratch. What looks like memory is the software replaying your entire conversation to the model before it generates a single word of its reply. That's not a footnote — it reframes a lot of the confusing behaviour these tools produce, and it's clarifying to have it stated plainly.
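The mechanic is easy to see in code. Below is a minimal sketch of a stateless chat loop — the model here is a stub function I've made up for illustration, but real chat APIs work the same way: every request carries the full message history, because the model retains nothing between calls.

```python
# Minimal sketch of a stateless chat loop. `stateless_model` is a
# hypothetical stand-in for an LLM API call: it can only "remember"
# what appears in the messages list it receives on this request.

def stateless_model(messages):
    """Stub model: reports how many user messages it can see right now."""
    seen = [m["content"] for m in messages if m["role"] == "user"]
    return f"I can see {len(seen)} user message(s) in this request."

history = []  # the client software, not the model, owns the conversation

for user_text in ["My name is Ada.", "What's my name?"]:
    history.append({"role": "user", "content": user_text})
    reply = stateless_model(history)  # the FULL history is resent each turn
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Drop the `history` list and the "memory" vanishes: a fresh call containing only the second question would show the model one user message, not two. That client-side replay is the entire illusion.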

What it doesn't earn: the closing line. Willison writes that a good tool loop is "a great deal more work" than the simple mechanics suggest, then stops. That's the sentence the whole piece is building toward — the gap between the clean architecture and the messy reality of agents compounding errors across dozens of tool calls before you notice anything is wrong. He names it and immediately moves on. For a guide aimed at helping people make better decisions about using these tools, that's the escape hatch taken at exactly the wrong moment.

The mechanics are accurate. The failure modes are elsewhere in the guide, presumably. If you work with these tools and don't have a developer's background, read the statelessness section and the system prompt section, then go ask your vendor what's in their hidden instructions.

That question tends to produce interesting silences.
