AI Chokepoints: Chips, Power Reshape Global Race
Frontier AI is shifting from easily diffused software to physical chokepoints in chips, helium, HBM/DRAM, and power delivery, concentrating capability in a few geographies, led by the US.
2026 Supply Chain Crises Hit AI Hardware Hard
AI production faces an immediate "RAMageddon": structural DRAM and HBM shortages exacerbated by a helium crisis tied to the Iran War. Helium is essential for more than 20 semiconductor fabrication steps, and Qatar, which supplies 34% of the world's helium, is blocked by the closure of the Strait of Hormuz. The Ras Laffan facility alone provides 30-33% of world helium. South Korea, which sources 64.7% of its helium from Qatar and produces over 60% of global memory through Samsung and SK Hynix, is hit hardest. The disruption amplifies HBM bottlenecks (the 3D-stacked DRAM "skyscrapers" used in Nvidia AI GPUs), driving prices up as silicon wafer capacity is reallocated from consumer uses to AI. TSMC remains the core GPU bottleneck, but helium scarcity slows everything upstream.
Datacenter buildouts halved in Q4 2025, per Wood Mackenzie: of 241GW in disclosed capacity, only 33% is under active development. Factors include community opposition, speculative mega-projects, and grid limits. Paul Kedrosky's charts show a sharp slowdown; Ed Zitron calls out AI industry hype as reality bites.
In chip news, ARM breaks its 35-year IP-only model with the AGI CPU, selling physical chips to Meta, OpenAI, SAP, and Cloudflare. Designed for agentic AI orchestration (systems that reason and act autonomously), it was announced March 24, 2026, at the ARM Everywhere keynote.
"The Iran War has created a choke point in the supply of helium... used in more than 20 steps of semiconductor fabrication." — Nathan Warren, Exponential View.
Physical and Institutional Constraints Override Software Diffusion
The past decade's AI progress relied on fast-diffusing inputs: algorithms, papers, open source, and talent. Microsoft's AI Diffusion Report shows AI spreading more slowly than the internet or mobile but faster than many other technologies, until now. Frontier AI now hinges on data centers that convert electricity into compute at scale, bound by unevenly distributed chips, power, capital, and institutions.
The Harvard Belfer Center's National AI Capability Index decomposes capability into compute, data, algorithms, human capital, resources, regulation, and performance, revealing US dominance and uneven global spread. Chokepoints make frontier capability geographically concentrated wherever silicon, power, finance, and politics align.
"For much of the past decade, AI progress appeared to be driven by ideas that diffused easily across borders. That model no longer holds. Today, frontier artificial intelligence is constrained by geopolitical chokepoints: access to advanced chips, the ability to deliver large amounts of electricity quickly, and the capital and institutions required to build and operate massive data centers."
Software efficiency keeps improving (OpenAI reports a 10x compute reduction from 2012 to 2022; Epoch AI finds the compute needed for a given LLM capability halves every eight months, outpacing Moore's Law), but those gains diffuse globally through papers, frameworks, and talent. Efficiency is therefore no chokepoint: everyone advances together. Instead it triggers a Jevons paradox: efficiency lowers the cost of intelligence, fueling more compute spending (Bain and the IMF describe a "resource race" that outpaces efficiency gains).
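The two cited rates compound very differently, which is the arithmetic behind the Jevons argument. A minimal sketch, assuming the 8-month halving from Epoch AI and treating Moore's Law as a 24-month halving of hardware cost per FLOP (the 24-month figure is an assumption, not from the text):

```python
def halvings(months: float, halving_period_months: float) -> float:
    """Number of halvings of required compute over a time span."""
    return months / halving_period_months

# Epoch AI (as cited): compute for a fixed LLM capability halves every 8 months.
llm_halving_months = 8
# Assumed Moore's Law proxy: hardware cost per FLOP halves every 24 months.
moore_halving_months = 24

years = 4
months = years * 12

algo_gain = 2 ** halvings(months, llm_halving_months)   # algorithmic cost reduction
hw_gain = 2 ** halvings(months, moore_halving_months)   # hardware cost reduction

print(f"Over {years} years: {algo_gain:.0f}x from algorithms, {hw_gain:.0f}x from hardware")
# Over 4 years: 64x from algorithms, 4x from hardware
```

At these rates, software makes a fixed capability roughly 64x cheaper over four years while hardware contributes about 4x; under Jevons-paradox dynamics, that cost collapse historically drives total compute spending up, not down.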
Scaling laws persist: the Epoch Capabilities Index shows that added compute still yields frontier gains. DeepSeek (China) is the stress test: algorithmic wins spur more spending, not less (Epoch: algorithmic progress likely increases compute demand).
Training compute for top models grows exponentially per Epoch AI trends.
Power Emerges as Ultimate Scaling Limiter
With chips secured, electricity dictates growth. Data centers consumed 415 TWh in 2024 (1.5% of global electricity); the IEA projects 700-1,700 TWh by 2035, a roughly 1.7x to 4x increase driven by sustained AI loads. Frontier clusters draw 100-500MW continuously, comparable to heavy industry or a mid-size city (a 300MW site consumes about 2.6 TWh per year).
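These figures reduce to simple power-to-energy arithmetic. A minimal sketch checking the 300MW example and the implied global total (all inputs are figures from the text; the helper function is illustrative):

```python
HOURS_PER_YEAR = 8760  # 24 hours * 365 days

def annual_twh(megawatts: float, utilization: float = 1.0) -> float:
    """Annual energy (TWh) for a site drawing `megawatts` at the given utilization."""
    return megawatts * utilization * HOURS_PER_YEAR / 1e6  # MWh -> TWh

# A 300MW frontier cluster running continuously:
site_twh = annual_twh(300)
print(f"300 MW site: {site_twh:.2f} TWh/year")  # 2.63 TWh/year, matching the ~2.6 cited

# The 2024 figures (415 TWh = 1.5% of global electricity) imply a global total:
global_twh = 415 / 0.015
print(f"Implied 2024 global electricity: {global_twh:,.0f} TWh")
```

The same helper makes the gigawatt-scale campuses concrete: a 1GW site at full utilization is about 8.8 TWh per year, on the order of a small country's consumption.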
2025-2026 projects scale to gigawatts: xAI's Colossus, OpenAI's Stargate, and Meta's Hyperion campuses span urban-scale land, per Epoch AI's maps. Goldman Sachs forecasts a 165% jump in data center power demand by 2030, driven by AI.
Cheaper solar and batteries help, but timing is the killer: operators need megawatts now, on AI product cycles, not utility timelines. The binding constraints are permitting, grid interconnection queues, and transmission. Modular solar plus storage wins on speed (faster to deploy than thermal plants or grid upgrades), prioritizing velocity over price. "Delays matter more than electricity prices. A year without power means a year without training runs."
The US holds an edge through faster permitting and modular technology; ironically, AI is accelerating the energy transition.

"Electricity becomes the principal variable that determines how large AI systems can grow."
Implications: Concentrated AI Power Reshapes Geopolitics
Chips, power, and capital concentrate frontier AI in the US (which leads the Belfer index), despite Chinese algorithmic pushes like DeepSeek. Export-controlled chips can be stockpiled; power cannot. Software accelerates everyone, but physical inputs decide the leaders.
"Software efficiency continues to improve, but it accelerates competition rather than leveling it. As a result, frontier AI capability is becoming geographically concentrated."
Epoch AI's power projections show frontier training demand going from near zero to gigawatt scale by 2026-27. Bain argues insatiable compute demand can only be met at scale; the IMF frames it as an AI-led resource race.
Key Takeaways
- Monitor helium/HBM supply: Qatar disruptions (30%+ of global helium) hit fabs hardest; diversify or stockpile for AI GPU builds.
- Factor power timelines into infra plans: Prioritize sites with fast permitting/modular solar+batteries over cheap long-term energy.
- Expect Jevons-driven compute explosion: Efficiency gains mean more spending—budget for 2-3x power by 2030.
- Bet on concentrated leaders: US wins short-term via institutions/power; track xAI/OpenAI/Meta campuses.
- Agentic AI hardware shift: ARM's AGI CPU signals orchestration chips for autonomous agents—prototype with early access.
- Avoid hype slowdowns: Datacenter pipelines halved; validate the 33% active-development figure before scaling commitments.
- Stress-test like DeepSeek: Use efficiency gains to spend more compute, not less; push frontiers despite costs.
- Geopolitics matters: The Iran War and Strait of Hormuz closure show non-China risks; model supply chains end-to-end.