Micron HBM4: Production Reality vs. Expert Opinion

On February 25, SemiAnalysis published their deep-dive on NVIDIA's Vera Rubin architecture. Buried in the analysis was an assessment that landed hard: Micron was "effectively out of the picture" for HBM4 memory on the Vera Rubin platform. The reasoning centered on pin speed requirements — NVIDIA targeting approximately 11 Gb/s for HBM4, with SK Hynix and Samsung making better progress while Micron lagged behind.

Twenty days later, on March 17, Micron confirmed that its HBM4 36GB 12-High stack had entered high-volume production for NVIDIA's Vera Rubin, alongside SOCAMM2 192GB modules for Vera Rubin NVL72 systems and a PCIe Gen6 data center SSD for the BlueField-4 architecture.

The gap between the expert assessment and the production confirmation was twenty days.

Two days after that, Micron reported fiscal Q2 2026 earnings — putting hard numbers behind what was already clear from the production announcement.


What Happened

SemiAnalysis — run by Dylan Patel, one of the most respected semiconductor analysts working today — wrote that Micron was "well behind Samsung and Hynix" and that they believed Micron was "effectively out of the picture for Rubin HBM4." The language was hedged: "we believe" and "effectively" both signal an analytical judgment, not a confirmed fact. SemiAnalysis directed readers to their detailed accelerator and HBM model for full qualification specifics.

The concern was technically grounded. NVIDIA's HBM4 spec demands significant pin speed improvements over HBM3E, and the transition from thermal compression bonding to hybrid bonding introduces new yield challenges. When the SemiAnalysis piece was published, Micron may well have been behind on these metrics.

But supply chain qualification dynamics move fast — especially when billions of dollars in revenue are at stake. Within three weeks, Micron announced not just qualification but high-volume production, and not just for one product but across three Vera Rubin components. This wasn't a tentative "we're sampling" disclosure. It was a production ramp announcement. On the fiscal Q2 earnings call, CEO Sanjay Mehrotra confirmed that Micron "has begun volume shipment of its HBM4 36GB 12-Hi" — language that goes beyond production readiness to actual delivery. He added that HBM4 yields are expected to reach maturity faster than HBM3E, and that Micron has already sampled the next density step: HBM4 48GB 16-High, a 33% capacity increase per stack. Development of HBM4E, the generation after that, is underway with volume production expected in calendar 2027.


Why This Matters

The HBM4 question matters because memory is the second-largest cost component in AI infrastructure — roughly 30% of the $600 billion the major hyperscalers are spending this year, or approximately $180-200 billion. Half the cost of an AI accelerator is memory. Being excluded from the highest-margin, fastest-growing memory segment would have been a material competitive setback for any vendor.

An interesting nuance from the earnings call complicates this framing slightly: Mehrotra confirmed that non-HBM margins are currently higher than HBM margins. The strategic importance of HBM4 isn't about maximizing today's gross margin — it's about securing position in the segment that is growing fastest and defining the next generation of AI compute architectures. Losing HBM4 qualification would have meant ceding that strategic ground regardless of near-term margin impact.

The broader context makes this even more significant. AI now consumes more than half of all DRAM wafer production globally, a crossover that happened this year. Every byte of HBM consumes roughly four times as much wafer capacity as a byte of standard DRAM, a consequence of its larger dies and stacked packaging. New memory fabs take two years to build, and meaningful new supply won't arrive until late 2027 or 2028. DRAM commodity prices have already tripled from recent lows and are expected to rise further.
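To see why that roughly 4x trade ratio matters for supply, here is a toy calculation. Only the ratio itself comes from the discussion above; the 20% HBM bit share is a made-up illustration, not a reported figure.

```python
# Illustration of the HBM "trade ratio": each HBM bit consumes roughly
# 4x the wafer capacity of a standard DRAM bit. The bit split below is
# hypothetical, chosen only to show the mechanism.
TRADE_RATIO = 4.0

hbm_share_of_bits = 0.20            # hypothetical: 20% of DRAM bits ship as HBM
std_share_of_bits = 1.0 - hbm_share_of_bits

# Wafer capacity consumed, in standard-DRAM-bit equivalents
hbm_wafer_load = hbm_share_of_bits * TRADE_RATIO   # 0.8
std_wafer_load = std_share_of_bits * 1.0           # 0.8

hbm_wafer_share = hbm_wafer_load / (hbm_wafer_load + std_wafer_load)
print(f"HBM wafer share: {hbm_wafer_share:.0%}")   # prints "HBM wafer share: 50%"
```

Under these assumptions, a 20% bit share consumes half of all wafer capacity, which is why even a modest HBM ramp tightens the entire DRAM market.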

With the HBM4 confirmation, Micron is now a three-way supplier alongside SK Hynix and Samsung for the highest-bandwidth memory generation yet — 2.3x the bandwidth and 20% better power efficiency versus HBM3E. The SOCAMM2 qualification adds a second Vera Rubin revenue stream beyond HBM itself.

Micron's price action reflected this fundamental strength even before the HBM4 confirmation. While many semiconductor names — including NVIDIA — showed persistent relative weakness through early March, Micron moved in the opposite direction, showing consistent strength for most of the month. The market was pricing in the operational reality before the headline confirmed it.


The Quarterly Filing Confirms the Operational Reality

The FQ2 2026 earnings, reported March 19, put numbers to what the HBM4 announcement already suggested: Micron's operational position is the strongest in its 45-year history.

Revenue hit $23.9 billion — nearly triple the year-ago quarter and up 75% sequentially. For perspective, Micron's entire fiscal year revenue in FY2024 was $25.1 billion. A single quarter now approaches what used to be a full year. As Mehrotra put it in his opening remarks: "Our fiscal Q3 single-quarter revenue guidance exceeds the full-year revenue for every year in our company's history through fiscal 2024." Non-GAAP gross margins reached 74.9% — levels typically associated with premium software, not commodity semiconductors — and non-GAAP EPS came in at $12.20, up from $1.56 a year ago.

The most telling number is the guidance beat. Management guided Q2 revenue at $18.7 billion. The actual came in $5.2 billion higher, 28% above a forecast issued just 90 days prior. The beat pattern is accelerating, not moderating: Q1's beat was 9%. The market is tightening faster than management can model.
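For readers checking the math, the beat percentage follows directly from the two reported figures:

```python
# Guidance-beat math from the reported FQ2 figures (in $ billions).
guided_q2 = 18.7   # management's Q2 revenue guidance
actual_q2 = 23.9   # reported Q2 revenue

delta = actual_q2 - guided_q2          # 5.2
beat_pct = delta / guided_q2 * 100     # ~27.8%

print(f"Beat: ${delta:.1f}B, {beat_pct:.0f}% above guidance")
# prints "Beat: $5.2B, 28% above guidance"
```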

On the call, Mehrotra confirmed that customers are still receiving only 50-65% of their requested demand — unchanged from the prior quarter despite accelerating supply efforts. CapEx guidance has been raised three times in succession, now above $25 billion for fiscal 2026, with fiscal 2027 construction spend expected to increase by over $10 billion year-over-year. And it's still not enough. Micron signed its first five-year Strategic Customer Agreement in the quarter — a structural departure from the annual contracts that have historically governed the memory industry. On the specifics, Mehrotra was deliberately opaque ("these SCAs are confidential in nature"), but described them as providing "specific commitments" from customers and "robust provisions" for both sides, designed to span periods when the market is tight and when it isn't.

CFO Mark Murphy framed the margin sustainability question in structural terms rather than cyclical ones: "AI requires more and higher-performance memory... the margins are reflecting recognition that memory is a lot more valuable and an efficient way to monetize AI." He pointed to supply constraints that exist "on a number of fronts" — declining bits-per-wafer on node advances, increasing HBM trade ratios, the fact that any new capacity requires greenfield construction — and described both the demand drivers and supply constraints as "durable factors." Fiscal Q3 is guided at $33.5 billion in revenue, 81% gross margins, and $19.15 EPS, with Murphy noting that free cash flow could "roughly double sequentially."

These numbers matter not because they're impressive in isolation — cyclical peaks always produce impressive numbers — but because they quantify the severity of the supply constraint that makes the HBM4 qualification so important. This is not a company where HBM4 participation is a nice-to-have. At these demand levels, every qualified product line is a multibillion-dollar revenue stream.

The honest counter-note: memory has been cyclical for 40+ years without exception, and the industry's collective CapEx response — Micron alone increasing from $13.8 billion to $25 billion or more in a single year, with competitors investing in parallel — is planting the seeds of eventual supply normalization. When asked about gross margin reversion to historical norms, Murphy argued that prior peaks aren't the right benchmark because "AI is a transformational secular driver." That may prove correct. But every memory cycle has produced a version of "this time is different," and DRAM pricing surged by a mid-60s percentage quarter-over-quarter in FQ2 on only mid-single-digit bit shipment growth, a ratio that confirms this is overwhelmingly a pricing cycle, not a volume cycle. Whether that normalization arrives in 18 months or remains years away is the central question that current data cannot definitively answer.


A Calibration Point on Expert Sources

This episode is a useful calibration point for investors who follow semiconductor analysts.

SemiAnalysis produces excellent architectural analysis. Their Vera Rubin deep-dive was thorough and technically rigorous. Their broader work on AI compute bottlenecks, TSMC capacity constraints, and memory supply dynamics is among the best available to non-institutional investors.

But supply chain qualification reads on fast-moving vendor dynamics can go stale quickly. The assessment may have been accurate at the time of writing — Micron may indeed have been behind on pin speeds in late February. The gap closed faster than anticipated.

The lesson isn't that SemiAnalysis was wrong. It's that hedged analyst assessments ("we believe," "effectively") exist on a spectrum of certainty, and investors should calibrate accordingly. A supply chain read from three weeks ago is not a confirmed exclusion. In a market where qualification cycles are compressed by massive revenue incentives, twenty days is enough time for the picture to change completely.


Where This Leaves Micron

The primary execution risk flagged by semiconductor analysts is resolved. The quarterly filing confirms the operational strength behind the resolution. Micron participates in HBM4 for Vera Rubin, its financial position is the strongest in the company's history, and the structural supply constraints that make that participation so valuable have years to run, because the physical capacity needed to relieve them is itself years from coming online.

The investment case for memory in the AI buildout hasn't changed — it's strengthened. All three vendors are now in HBM4 production, which means more total capacity for AI infrastructure, but the structural supply constraints (fab space, not just technology) remain binding through at least 2027. Micron's fundamentals — accelerating revenue, expanding margins, record cash generation, unprecedented forward contract visibility — are confirmed by the filings and by management's own commentary, not just the narrative.