The Swarm Is a Server in a Trench Coat

Commentary · 5 min read · Published 2026-02-18 · AI Primer

Source: OpenForage on X

Tags: AI Agents · Crypto · Critical Thinking

OpenForage published a design document this week laying out their vision: AI agents discover trading signals, an ensemble of millions combines them into something extraordinary, and depositors receive smooth, factor-neutral yield. The underlying ideas are more rigorous than almost anything else in the crypto-AI space right now. Which makes the gap between what they describe and what they've built worth examining carefully.

The ensemble theory is legitimate. Millions of weakly predictive, sufficiently uncorrelated signals, combined correctly, can outperform any individual signal by orders of magnitude — this is the operating principle of some of the best-performing quantitative strategies in existence. The trilemma they articulate (smooth returns, easy to replicate, painless — choose two) is correctly stated and rarely said this plainly in public. These are people who understand the problem from the inside.
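The arithmetic behind that claim is worth seeing. A toy simulation (mine, not OpenForage's; the edge and signal counts are illustrative) shows why averaging many weak, uncorrelated signals works: independent noise cancels while the shared edge survives, so the combined Sharpe ratio grows roughly with the square root of the signal count.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 10_000, 1_000   # periods, number of weak signals
edge = 0.01            # tiny per-period expected return for each signal

# Each column is one weakly predictive, independent signal's return stream.
signals = edge + rng.standard_normal((T, N))

def sharpe(returns):
    """Per-period Sharpe ratio (mean over standard deviation)."""
    return returns.mean() / returns.std()

individual = sharpe(signals[:, 0])        # one weak signal on its own
ensemble = sharpe(signals.mean(axis=1))   # equal-weight combination

# The ensemble's mean edge is unchanged, but its noise shrinks by
# ~1/sqrt(N), so the Sharpe ratio is roughly sqrt(N) times larger.
print(f"individual Sharpe ~ {individual:.3f}, ensemble Sharpe ~ {ensemble:.3f}")
```

The whole construction rests on the independence assumption baked into `rng.standard_normal`: make the columns correlated and the noise stops cancelling, which is exactly the failure mode the document's own drawdown example describes.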

The document opens with speculative fiction: a post-human AI swarm governs itself autonomously, signal nodes number in the trillions, agents still exchange pleasantries because alignment researchers preserved that behaviour at the weight level. It's well-written. It's also the entire document's framing device — by the time you arrive at the actual protocol mechanics, you've been reading future history, not a proposal.

What the "current day design" section actually describes is a centralised server, a small team controlling all governance decisions, and AI agents submitting signals into a system whose features and functions are deliberately "highly obfuscated" and "not designed to be human-readable." The agents are doing optimisation over a search space they cannot inspect, for returns they cannot verify, in a system one team controls entirely.

That is a hedge fund with an API. The governance token doesn't change the structure.

The obfuscation also undermines the core argument. OpenForage contends that agent intelligence will improve the search. But if agents can't interpret the features, they're not applying intelligence — they're running permutations. The ensemble then depends entirely on whether those permutations produce uncorrelated signals. OpenForage's own example answers that: two momentum signals look uncorrelated until 2020, when their correlation spikes to approximately one in a single drawdown. They note this, correctly, then move on. It's the most important sentence in the document.
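That failure mode is easy to reproduce. In this sketch (a made-up regime model, not OpenForage's backtest), two signals are independent in calm periods but both load on the same crash factor in a drawdown, so a correlation estimated on calm history says "diversified" right up until the one period where diversification matters.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2_520                        # roughly ten years of daily returns
drawdown = np.zeros(T, dtype=bool)
drawdown[-60:] = True            # the final quarter is a shared crash

# Calm periods: independent noise. Drawdown: both signals are driven by
# the same negative crash factor plus a little idiosyncratic noise.
crash = -np.abs(rng.standard_normal(T)) * 3.0
a = np.where(drawdown, crash + 0.3 * rng.standard_normal(T),
             rng.standard_normal(T))
b = np.where(drawdown, crash + 0.3 * rng.standard_normal(T),
             rng.standard_normal(T))

calm_corr = np.corrcoef(a[~drawdown], b[~drawdown])[0, 1]
crash_corr = np.corrcoef(a[drawdown], b[drawdown])[0, 1]
print(f"calm corr ~ {calm_corr:.2f}, drawdown corr ~ {crash_corr:.2f}")
```

The calm-period correlation comes out near zero and the drawdown correlation near one, which is the "uncorrelated until 2020" pattern in two dozen lines: the sample correlation is only as good as the regimes the sample happens to contain.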

The $100 billion GMV figure gets two sentences. Alpha is zero-sum and self-eroding at scale — they say this explicitly — then treat institutional capacity as a later problem. At that scale in crypto markets, the strategy is the market. That's not a detail to defer.

Numerai has been running a structurally similar model — agent participation in a quantitative signal-discovery tournament — for nearly a decade. The hard-won lessons about signal correlation disguised as diversity, about the distance between backtest and live performance, about capacity constraints as you scale, are directly applicable here. Their absence from a document this technically detailed is conspicuous.

None of this disqualifies the project. The signal graph architecture is well-designed, the team's institutional background appears real, and the core bet — that AI agents exploring a structured combinatorial search space can discover useful predictive signals at scale — is worth making. There is a version of this document that describes exactly that. They wrote a different one instead.
