The AI buildout, status check: what TSM and ASML just confirmed

The question two months ago, after the Q4 2025 hyperscaler reports walked AI capex guidance up to roughly $600B, was whether the supplier layer would actually see those orders. Capex revisions are announcements. Revenue is shipments. The two are separated by quarters of order propagation, and a hyperscaler can revise capex without a single contract changing. The April 2026 reports from TSMC — the foundry chokepoint — and ASML — the lithography monopoly — are the first quarter where the supplier layer was forced to translate the hyperscaler announcements into real numbers. They confirmed the cycle. They also made clear that the structural ceiling on AI compute through this decade has not moved.

A clean signal is concordance. Both companies walked their FY2026 revenue guidance higher after a single quarter of execution. TSMC moved from "close to 30%" growth (Q4 2025 call) to "above 30%" (Q1 2026). ASML lifted the EUR 34-39B range to EUR 36-40B, and CFO Roger Dassen simultaneously reversed the non-EUV outlook — in Q4 2025 he had described it as "expected to be about flattish"; in Q1 2026 he said "in fact, an increase of demand there as well. So increased revenue on the non-EUV business." Both companies are unusually disciplined about not walking annual guidance after one quarter. That both did, in the same direction, in the same reporting period, is the cross-company confirmation that the hyperscaler capex revisions actually converted into orders moving through the supply chain. If only one had walked, you could attribute it to company-specific dynamics. With both walking, the simplest explanation is that the demand signal is propagating.

The bottleneck signal lives in the physical book, not the P&L. TSMC is running N3 above 100% utilization by 3Q-4Q 2026 per SemiAnalysis, has pulled Arizona Fab 2 forward to 2H 2027 HVM, has reassigned Japan Fab 2 from mature nodes to N3, has pushed 2026 capex to the $56B high end (37% above 2025's $40.9B), and C.C. Wei volunteered on the Q1 2026 call that a second Arizona land parcel has been purchased — and followed up on the full commit with an unusually candid "I'm also very nervous about it." That is what fab capacity running into physical limits looks like when a new build takes 18 months of construction before a single wafer moves. ASML's physical book tells the same story at a different place on the timeline. CEO Christophe Fouquet on the Q1 2026 call: "customers have increased their expected short- and medium-term demand for our products." Q4 2025 net bookings hit a record EUR 13.2B (EUR 7.4B of which was EUV), driving FY2025 total bookings to EUR 28.0B — up roughly 48% over FY2024's EUR 18.9B. Roger Dassen on visibility: "we are actually narrowing the window and also increasing the window." Both layers are tight. Foundry capacity is the binding constraint now, because leading-edge fabs cannot be brought online inside the demand window. EUV tool output looks like the binding constraint over the rest of the decade, on a production curve ASML cannot accelerate at will, and no amount of fab capex relaxes it.

The next thing the data reveals is that the buildout is not a single-engine machine. ASML's non-EUV outlook upgrade — flipped from "flat" to "increasing" — is almost certainly memory-driven. Three of ASML's five strategic customers are memory makers: Samsung Memory, SK Hynix, and Micron. Two are Korean. Memory vendors stopped expanding capacity in 2022-2023 when DRAM and NAND prices collapsed, and three to four years of under-investment created a memory shortage that AI demand is now amplifying rather than causing. DRAM contract prices jumped 90-95% QoQ in Q1 2026 per TrendForce, roughly tripling from recent lows. AI is projected to absorb a rapidly growing share of DRAM wafer capacity through 2027 per SemiAnalysis. Every byte of HBM destroys roughly three bytes of commodity DRAM supply because HBM yields roughly one-third the bits per unit of wafer area that standard DDR5 does, forcing memory vendors to expand total capacity faster than HBM volume alone would imply. The memory chain has its own structural pressure that doesn't transmit through TSMC, and ASML's order book reflects both engines. This is why ASML's revenue growth is more durable than a pure logic-chain comparison with TSMC suggests.
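The wafer-area penalty can be made concrete with a few lines of arithmetic. This is a sketch, not vendor data: the 3x bits-per-wafer penalty is the ratio quoted above, and the wafer budget and bit yields are illustrative units chosen to make the ratio visible.

```python
# Illustrative units, not real bit densities: the only load-bearing
# assumption is that HBM yields ~1/3 the bits per wafer that DDR5 does.
DDR5_BITS_PER_WAFER = 3.0
HBM_BITS_PER_WAFER = 1.0

def bit_output(total_wafers: float, hbm_wafers: float) -> tuple[float, float]:
    """Split a fixed wafer budget between HBM and DDR5; return (hbm, ddr5) bits."""
    ddr5_wafers = total_wafers - hbm_wafers
    return hbm_wafers * HBM_BITS_PER_WAFER, ddr5_wafers * DDR5_BITS_PER_WAFER

# Fixed capacity of 100 wafers, before and after shifting 20 wafers to HBM:
before_hbm, before_ddr5 = bit_output(100, 0)   # (0, 300)
after_hbm, after_ddr5 = bit_output(100, 20)    # (20, 240)

# Each HBM bit gained costs three commodity bits of potential supply,
# so total bit output shrinks unless total wafer capacity expands:
commodity_bits_lost = before_ddr5 - after_ddr5  # 60
hbm_bits_gained = after_hbm - before_hbm        # 20
print(commodity_bits_lost / hbm_bits_gained)    # -> 3.0
```

The 100 wafers produce 300 bit-units when all commodity, 260 after the HBM shift; that 13% shrinkage in total output at constant capacity is why HBM demand forces capacity expansion beyond HBM's own volume.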

The structural ceiling on AI compute itself has not moved. SemiAnalysis CEO Dylan Patel, on the Dwarkesh podcast, on ASML's annual EUV production: "Currently, they can make about 70. Next year, they'll get to 80. Even under very aggressive supply chain expansion, they only get to a little bit over 100 by the end of the decade." Stacking that onto the cumulative installed base: "TSMC and the entire ecosystem have something like 250 to 300 EUV tools already. Then you stack on 70 this year, 80 next year, growing to 100 by 2030. You're at 700 EUV tools by the end of the decade." At Patel's ratio of 3.5 EUV tools per gigawatt of AI compute — roughly 2 million EUV wafer passes per GW of Rubin-class compute, divided by the roughly 590,000 wafer passes per year a single EUV tool delivers at ~75 wafers/hr and 90% uptime — 700 tools gets you "200 gigawatts worth of AI chips, assuming it's all allocated to AI, which it's not." Sam Altman's stated ambition to build "a gigawatt of new capacity per week" — about 52 GW per year at full run rate — implies absorbing roughly a quarter of that theoretical maximum for OpenAI alone. TSMC and ASML cannot add enough leading-edge capacity to fully meet demand over the next two years under any reasonable scenario. The April disclosures didn't change the ceiling. They confirmed that the demand pressing against it has not relented. The corroborating data outside the supplier deck points the same way: H100 one-year rental prices climbed roughly 40% from October 2025 ($1.70/hr) to March 2026 ($2.35/hr) per SemiAnalysis's GPU Pricing Index and were still rising 15-20% month-over-month; on-demand GPU capacity sold out across every major Neocloud; and Anthropic's run-rate revenue climbed from $9B at year-end 2025 to $19B by March and $30B by April.
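The ceiling arithmetic above can be reproduced in a few lines. A minimal sketch using only the figures quoted from Patel; the throughput, uptime, wafer-pass, and tool-count numbers are his estimates, not ASML disclosures.

```python
# All inputs are the figures quoted from Dylan Patel above.
HOURS_PER_YEAR = 24 * 365
WAFERS_PER_HOUR = 75          # single EUV tool throughput
UPTIME = 0.90
PASSES_PER_GW = 2_000_000     # EUV wafer passes per GW of Rubin-class compute
TOOLS_BY_2030 = 700           # cumulative installed base estimate

# Annual wafer passes from one tool, then tools needed per gigawatt:
passes_per_tool = WAFERS_PER_HOUR * HOURS_PER_YEAR * UPTIME  # ~591K/yr
tools_per_gw = PASSES_PER_GW / passes_per_tool               # ~3.4, quoted as 3.5

# Installed base -> theoretical AI compute ceiling (if fully allocated to AI):
ceiling_gw = TOOLS_BY_2030 / tools_per_gw                    # ~207 GW, quoted as ~200

# Altman's "a gigawatt per week" (~52 GW/yr) against that ceiling:
openai_share = 52 / ceiling_gw                               # ~25% for one buyer

print(round(tools_per_gw, 1), round(ceiling_gw), round(openai_share, 2))
# prints: 3.4 207 0.25
```

One buyer's stated ambition consuming a quarter of the decade's theoretical maximum is the scale mismatch the paragraph describes; the tool production curve, not fab capex, sets the denominator.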

Pulled together, the April 2026 picture: the AI buildout is in mid-acceleration, the supply chain has confirmed the demand signal that hyperscaler capex announcements set in motion last quarter, the bottleneck has sharpened first at the foundry layer where physical capacity is bumping against utilization ceilings, the memory chain provides additional structural demand independent of further hyperscaler revisions, and the EUV ceiling on tool production keeps the supply-demand imbalance in place for at least the next two to three years. The supplier layer just told you the cycle is real. The leading indicators told you that two months ago.

But the real status update on the AI buildout lives at the source, not at the supplier. TSMC and ASML translated the Q4 2025 hyperscaler capex revisions into real numbers a quarter after the fact — confirmation, not news. Alphabet, Microsoft, Meta, and Amazon report Q1 2026 in the last week of April and the first week of May, and those four prints are the actual status update. They tell you which of three worlds you're in. If 2026 capex guidance walks higher again, the trajectory is still accelerating and the supplier tightness we just described gets worse before it gets better. If guidance holds flat, the April supplier numbers are the peak echo of a cycle already locked in — real, but no longer expanding. If any of the four cuts, TSMC and ASML's April prints become the last good data point before the narrative turns. The cycle now runs on the hyperscaler P&L, not the chipmakers' shipment logs. That is where to watch.

And each outcome carries a specific equity stake. Sustained high capex keeps the bottlenecks intact, and with them the supply-chain valuations built on top. A capex slowdown, whenever it arrives, would ease the bottlenecks in sequence over a few quarters and meaningfully reprice every name in the chain that is currently valued as if the tightness were permanent. TSMC, ASML, the memory pack, the advanced-packaging layer, the equipment makers — a contingent valuation component runs through all of them. The first hyperscaler cut is where the repricing starts.