The Piece About AI Coding Gets One Thing Right That Almost Everyone Else Gets Wrong

Commentary · 3 min read · Published 2026-03-10 · AI Primer

Source: Odysseus on X

AI and Software · Developer Tools · AI Adoption

This piece on "harness engineering" has been circulating this week. The cybernetics framing — Watt's governor, Kubernetes, agentic coding as three instances of the same pattern — is going to eat your morning if you let it.

Worth it. Read it first, then come back.

The section on calibration is the reason to share this at all. Most teams frustrated with coding agents have been blaming the wrong thing. The post names it precisely: "The agent isn't failing because it lacks capability. It's failing because the knowledge it needs is locked inside your head, and you haven't externalised it."

That's a specific and corrective observation. Not "improve your prompts." Not "the model isn't ready yet." Your judgment hasn't been made machine-readable, and the agent is filling the gap with plausible-sounding defaults. That reframe alone is worth the five minutes.

Here's where it doesn't earn what it's selling.

The cybernetics analogy works because Watt's governor and Kubernetes both operate in closed, measurable domains. Steam pressure is a number. Desired replica count is a declaration. The feedback loop closes cleanly because "correct" is unambiguous.
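To make that contrast concrete, here is a minimal sketch, in Python rather than real Kubernetes machinery, of the kind of loop the post is pointing at: a declared number, an observed number, and a correction whenever they differ. The names and the toy "cluster" are hypothetical stand-ins, invented for illustration; the point is only that "correct" is a scalar.

```python
import time

# The declaration: what "correct" means is a single unambiguous number.
DESIRED_REPLICAS = 3

# A toy "cluster": just a list of running replica ids.
# (Hypothetical stand-in, not a real Kubernetes API.)
running = []

def reconcile():
    """One pass of the loop: observe, compare against the declaration, correct."""
    actual = len(running)
    if actual < DESIRED_REPLICAS:
        running.append(f"replica-{actual}")  # start one
    elif actual > DESIRED_REPLICAS:
        running.pop()                        # stop one
    # If actual == DESIRED_REPLICAS there is nothing to do: the loop has closed.

if __name__ == "__main__":
    for _ in range(5):
        reconcile()
        print(f"desired={DESIRED_REPLICAS} actual={len(running)}")
        time.sleep(0.1)
```

Nothing in that loop needs judgment: the target is a number, and the comparison never argues back.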

The post then applies the same frame to architectural judgment — and waves the gap away with "write better documentation." But that gap is not a documentation problem. A lot of what a senior engineer actually does cannot be written down because it doesn't exist as propositions yet. It's pattern recognition built from failures the docs don't mention, from the time a clean abstraction became a maintenance nightmare two years later, from knowing which conventions this particular team will actually follow under pressure.

"Encode your standards into the harness" is correct advice. It just isn't as close to sufficient as the historical arc implies.

The OpenAI example — a million lines in five months, nothing written by hand — is used to show where this is going. What it actually shows is where one highly resourced team, with direct access to the models they're shipping, currently is. The cost of building a harness capable of producing those results is itself a serious engineering project. That cost doesn't appear anywhere in the piece.

The last section, on the penalty for skipping practices that were always correct, is underrated and should have been longer. Bad documentation was always a drag. With agents, it becomes a force multiplier for the wrong outcomes, running at machine speed, around the clock. That's the argument that will actually change behaviour in engineering teams — and the piece buries it in the final three paragraphs.

The frame is the best available. The timeline it implies is not.
