The Conductor Who Doesn't Build Instruments
Source: Aravind Srinivas on X

Aravind Srinivas, CEO of Perplexity, published a long essay yesterday timed to a product launch. The thesis: no single model family can do its best work alone, the future is "massively multimodel orchestration," and therefore the orchestration layer — not the models themselves — is where the value lives. Perplexity's new product runs 19 models in the backend. The AI, he argues, is the computer.
He's not wrong about specialisation. Anyone building production AI systems already knows you get better results routing different subtasks to different models. Reasoning to one, code to another, retrieval to a third. This has been the quiet engineering consensus for a year. The market just hasn't caught up because consumers still think in terms of "which model is best" rather than "which system uses models best."
The Chromebook analogy is sharp. Google was right that the browser was the computer but wrong about search being an adequate read interface for the web's knowledge. Framing the open web as a storage system with a broken read function is a useful lens, and it's the clearest explanation I've seen of what retrieval-augmented AI products are actually doing.
But then there's this: "The biggest weakness of Claude is that it only coworks with Claude."
That's not an architectural insight. That's a press release.
Anthropic, OpenAI, and Google are all building internal routing, specialisation, and multi-step orchestration within their own model families. Whether cross-family orchestration structurally outperforms well-designed single-family systems is an open empirical question, not the settled fact Srinivas presents it as. Funny how the settled facts always seem to settle in the direction of whatever the author is selling.
The orchestra metaphor is doing the heaviest lifting. Model makers are Stradivari — master craftsmen, sure, but ultimately just building instruments. Srinivas casts himself as the conductor. The one who makes the symphony. It's flattering. It's also the exact argument you'd make if you were the one company in the room that doesn't train frontier models. The metaphor quietly assumes the hard problem is coordination, not capability. I'm not sure the labs building the models would agree, and I'm not sure they'd be wrong.
"Go to sleep and wake up with weeks of work done" is the line that reveals the gap between the thesis and reality. For most professional work, the bottleneck was never delegation. It's specification — knowing what to ask for, knowing whether the output is right, catching the failure that looks like success. Orchestrating 19 models doesn't help if you can't tell which of the 19 got it wrong. That's a judgement problem, not a routing problem.
There's also a swipe at a competitor — "if you like malware reading your texts" — that tells you more about what Perplexity is worried about than about what's actually wrong with the competition. When the competitive shot is in paragraph four of your architectural manifesto, it's not an aside. It's the point.
The line everyone will remember is "The AI is the Computer." The line that actually matters is one he almost buries: "AI models are becoming so capable that the products built around them have been a bottleneck for showing their true potential." That's the real bet. Value is shifting from model training to product design. From the instrument to the orchestra.
He might be right. But it's worth noticing that the conductor never builds the Stradivarius. He just tells you it isn't enough on its own.