The Bottleneck Relay Race
Source: @Tesla_Teslaway on X

A thread from @Tesla_Teslaway maps the sequence of infrastructure bottlenecks that have constrained AI scaling — from compute to memory bandwidth to HBM supply to advanced packaging to power — and argues that tracking these "walls" is the key to identifying which companies hold the next "golden key."
The thread's own framing: "We must understand this cycle to predict the next bottleneck. If HBM4 solves the bandwidth issue, the bottleneck will immediately slide to Power or Interconnects. By tracking these 'walls,' we can identify which companies will hold the next 'golden key.'"
The core metaphor here — AI infrastructure as a relay race of sequential bottlenecks — is useful. Compute gave way to memory bandwidth, which gave way to HBM supply, which gave way to advanced packaging, which is now giving way to power. Each solved constraint just reveals the next one. That framing alone makes the piece worth reading.
But the piece can't resist the leap from "here's how to think about infrastructure constraints" to "here's how to invest based on them" — and that's where it falls apart. Markets price in known bottlenecks fast. Knowing that CoWoS packaging is tight in H1 2026 doesn't give you an edge; it gives you a narrative that feels like an edge. Different thing entirely.
Two specific quibbles. The CPU-to-GPU transition is marked "Resolved" as though GPUs won a war and everyone went home. Google's TPUs, Amazon's Trainium, and a dozen startups building custom silicon would disagree. And the claim that AI data centre demand "could exceed 100GW" by 2026 is — let's be generous — aspirational. Most serious estimates put total US data centre demand at roughly a third of that by the late 2020s.
The piece is a good map drawn at too low a resolution to navigate by. Useful for orientation. Not useful for decisions.