Your AI Supplier Is Also Your Competitor, and They Have a Permanent Cost Advantage Over You
Source: Sarah Sachs

Sarah Sachs, who leads AI business at Notion, published a piece on navigating the AI inference market that's worth your time — mostly because she has visibility into pricing dynamics that almost nobody talks about publicly. The framing is "here's how applied AI companies can maintain leverage with frontier labs." The reality is more specific than that.
The sharpest move in the piece is a distinction that deserves to become standard vocabulary. Sachs separates AI tasks where frontier capability genuinely unlocks new value — autonomous agents, deep research, large-scale analysis — from tasks where intelligence has already saturated the solution: triaging an inbox, summarising notes, changing a database field. Then she points out that the market prices both identically. That's not an observation you'll find in most AI cost discussions, which fixate on the rate card and miss the structural problem entirely. A more powerful model doesn't make email triage meaningfully better. But you're paying for one anyway, because labs have deprecated the mid-tier models that used to handle it and funnelled everyone toward top-of-market pricing.
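The saturated-versus-frontier distinction maps naturally onto model routing. A minimal sketch of the idea, with all model names and per-million-token prices hypothetical, showing why the disappearance of a mid-tier option forces saturated tasks onto frontier pricing:

```python
# Hypothetical per-million-token prices; the point is the ratio, not the numbers.
PRICING = {
    "frontier-large": 15.00,   # frontier-capability tier
    "workhorse-small": 0.50,   # the mid-tier of the kind labs have deprecated
}

# Tasks where intelligence has saturated vs. tasks that need frontier capability.
SATURATED_TASKS = {"inbox_triage", "note_summary", "db_field_update"}

def pick_model(task: str) -> str:
    """Route saturated tasks to the cheap tier, everything else to frontier."""
    return "workhorse-small" if task in SATURATED_TASKS else "frontier-large"

def cost_per_mtok(task: str) -> float:
    """Effective rate paid for a task under this routing."""
    return PRICING[pick_model(task)]
```

When the mid-tier model is deprecated, `pick_model` degenerates into always returning the frontier tier, and every saturated task inherits top-of-market pricing. That is the structural problem the rate card hides.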
Now, the part that doesn't hold up.
The piece takes its title from the "Fortune 5 Million", meaning the vast majority of companies that can't afford dedicated AI procurement teams. But the actual playbook Sachs describes requires Notion-scale infrastructure to execute. Evaluate every major model on a cost-per-capability-per-second basis. Switch providers every two to three weeks. Offer detailed eval scorecards as a bargaining chip that labs actually want. Maintain multi-provider optionality so you can credibly walk away from any single contract.
A mid-market SaaS company with a few thousand users doesn't have the traffic volume to make that optionality economical. A five-person agency doesn't have the engineering capacity to build switching infrastructure. The advice to "preserve optionality" is correct in principle, but the cost of maintaining that optionality — reserved traffic per model, operational overhead, evaluation pipelines — is itself a significant barrier. The piece doesn't reckon with this. It describes Notion's position and frames it as generally available strategy.
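The economics can be made concrete with back-of-envelope arithmetic. All numbers below are hypothetical; the only claim is the shape of the curve, where a fixed monthly overhead of keeping a second provider warm amortises to noise at high volume and dwarfs the inference bill at low volume:

```python
def optionality_surcharge_per_mtok(fixed_monthly_overhead: float,
                                   monthly_mtok: float) -> float:
    """Per-million-token surcharge of maintaining a second provider:
    eval pipelines, reserved traffic, ops time, amortised over volume."""
    return fixed_monthly_overhead / monthly_mtok

# Hypothetical: $20k/month of engineering and reserved-traffic overhead.
OVERHEAD = 20_000.0
high_volume = optionality_surcharge_per_mtok(OVERHEAD, monthly_mtok=50_000)  # $0.40/Mtok
low_volume  = optionality_surcharge_per_mtok(OVERHEAD, monthly_mtok=100)     # $200/Mtok
```

At high volume the surcharge is pocket change next to frontier rates; at a few thousand users' worth of traffic it can exceed the inference spend it is meant to discipline. That asymmetry is what "preserve optionality" glosses over.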
There's also a line buried in the open-weight section that deserves far more attention than it gets. Sachs notes that smaller models "often error and retry enough times that the effective cost ends up comparable to running a larger model in the first place." She flags it, then moves on to the optimistic case for open-weight as a long-term lever. But if you're making procurement decisions this quarter, the timeline for "not yet reliable" is the whole question — and it's left unanswered.
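Sachs's retry observation is ordinary expected-cost arithmetic: if failures are retried until success and you pay for every attempt, the effective cost is the sticker price divided by the success rate. A sketch with hypothetical prices and success rates, not figures from the piece:

```python
def effective_cost(price_per_call: float, success_rate: float) -> float:
    """Expected total spend when failed calls are retried until success
    (geometric number of attempts, paying for every try)."""
    return price_per_call / success_rate

# Hypothetical: a small open-weight model at 1/5 the sticker price but a
# 25% end-to-end task success rate, vs. a large model succeeding 90% of the time.
small = effective_cost(price_per_call=0.20, success_rate=0.25)  # 0.80
large = effective_cost(price_per_call=1.00, success_rate=0.90)  # ~1.11
```

A 5x sticker discount collapses to near-parity once reliability is priced in, which is exactly why the unanswered "not yet reliable" timeline matters for this quarter's procurement decisions.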
Read the piece for the saturated-versus-frontier distinction and the Datadog/AWS analogy. Skip the parts that assume you have Notion's negotiating position. Most of us don't.