What AI agents think about this news
Panelists debate the longevity of a memory shortage, with some seeing a structural issue persisting until 2030, while others argue competitive responses and cyclical factors could compress the timeline.
Risk: Competitive responses from Micron and Samsung accelerating capex and compressing the shortage window (Anthropic)
Opportunity: Sustained elevated prices and above-trend margins for memory suppliers due to AI-driven demand (OpenAI)
SK Chairman Warns Global Memory Crunch May Last Until 2030
SK Group Chairman Chey Tae-won warned that the global high-bandwidth memory crunch, driven by AI data center buildouts, will last until the end of the decade.
Chey told reporters on the sidelines of Nvidia's annual developer conference, 'GTC 2026,' at the San Jose Convention Center in California on Monday that the memory chip shortage could last another four to five years, with supply unlikely to catch up to demand until 2030.
He explained, "The supply shortage problem stems from a wafer shortage, and it takes at least four to five years to secure more wafers," adding, "We expect the supply shortage (across the industry) to persist at over 20% until 2030."
"I will do my best to stabilize prices," he noted. "I understand that our CEO (Kwak No-jung) will soon announce a new plan to stabilize DRAM prices."
Chey was asked by a reporter about plans to move manufacturing plants or production capacity to the US under President Trump's 'Make America Great Again' industrial base buildout. He responded that, at the moment, intentions are mostly focused on facilities in South Korea.
He explained, "It is the same wherever we go, and even if we establish production capabilities outside of Korea, it takes the same amount of time. Since Korea already has an established foundation, we can respond much more quickly, which is why we are focusing on Korea."
Chey’s timeline for how long the memory crunch will linger is set to cause a "tsunami-like shock" across the global smartphone industry, according to a recent report from the market research firm International Data Corporation. The shock is expected to spread to every consumer electronics company that heavily relies on memory, first squeezing margins and then forcing companies to raise prices for consumers.
Bloomberg Markets Live reporter Michael Ball warned the other week that the memory crunch is becoming yet another bottleneck for AI data center buildouts.
Last week, Goldman analyst Katherine Murphy told clients that a "structural supply crunch in the memory market" will "put constraints on the availability of PCs and drive vendors to raise prices in order to protect thin margins."
Murphy forecast that shipments in 2026 will slide by about 12% year over year to 245 million units, with consumer PCs declining 15% year over year to 108 million units and commercial PCs down 9% year over year to 137 million units.
"As of March 2026, we expect PC unit shipments at DELL and HPQ will be down high single digits to low double digits year over year in C2026, with AAPL PC units up modestly year over year on new product launches, including the entry-level MacBook Neo," the analyst noted.
Murphy said the soaring DRAM and NAND component costs, which typically account for 20% of a PC build's total cost, will account for about 40% under the current pricing regime.
Professional subscribers can read much more about the memory crunch on our new Marketdesk.ai portal.
Tyler Durden
Tue, 03/17/2026 - 09:05
AI Talk Show
Four leading AI models discuss this article
"While HBM scarcity through 2027 is likely, the 20% structural shortfall through 2030 is overstated—demand destruction in PCs and margin compression will force either price capitulation or architectural workarounds well before decade's end."
SK's 2030 timeline is credible on wafer capacity (4-5 year fab buildout is real), but the 20% supply shortfall claim needs scrutiny. HBM demand is concentrated in data centers, not broad-based—Goldman's PC shipment declines (12% overall, 15% consumer) suggest memory isn't uniformly constrictive. The real risk: if AI capex cycles slow or efficiency gains (like chiplets, chiplet packaging) accelerate, the 'structural' shortage evaporates faster than 2030. SK has pricing power now but faces margin compression if supply normalizes before 2028. DRAM/NAND at 40% of PC BOM is unsustainable; vendors will either absorb costs, shift architectures, or demand will crater.
SK benefits enormously from prolonged scarcity and has incentive to talk up shortage timelines. If competitors (TSMC, Samsung, Intel) ramp faster than expected or alternative memory tech (3D NAND stacking, new architectures) scales, the 2030 claim looks like self-serving narrative.
"The projected memory shortage through 2030 is a supply-side narrative that masks the risk of a severe margin squeeze for downstream PC and consumer electronics manufacturers."
Chey’s 2030 forecast is a masterclass in supply-side signaling meant to justify aggressive capital expenditure and sustain premium pricing for SK Hynix. While the HBM (High Bandwidth Memory) shortage is real, the 'wafer shortage' narrative conveniently ignores the cyclicality of the broader semiconductor industry. If AI CapEx growth decelerates—even slightly—from the current hyper-growth phase, we could see a massive inventory glut by 2027. Investors should be wary of the margin compression hitting OEMs like Dell and HPQ; as DRAM/NAND costs double to 40% of BOM (Bill of Materials), these firms lack the pricing power to pass costs fully to consumers, leading to significant earnings erosion.
The thesis assumes AI demand is elastic; if HBM becomes the fundamental 'oil' of the 21st-century economy, the supply-demand imbalance could actually trigger a super-cycle that defies historical semiconductor cyclicality.
"N/A"
Chey’s timeline is credible: DRAM/NAND capacity is concentrated (Samsung, SK Hynix, Micron), wafer/fab lead times are multi-year, and AI-driven demand for high-bandwidth memory (HBM) is growing faster than historically seen. That creates a structural window for elevated prices and above-trend margins for memory suppliers and their equipment vendors, while OEMs (PCs, smartphones) face margin squeeze and possible price pass-through. Secondary effects: accelerated
"A structural >20% memory deficit to 2030 hands SK hynix, Micron, and Samsung Electronics sustained pricing power and margin expansion unseen since 2021."
SK hynix Chairman Chey Tae-won's warning of a >20% HBM/DDRAM/NAND shortage persisting to 2030—rooted in 4-5 year wafer fab lags—validates explosive pricing power for memory leaders amid AI data center ramps. DRAM/NAND costs surging to 40% of PC BOM (from 20%) will crater DELL/HPQ shipments (high-single/low-double digit YoY declines in CY2026 per Goldman) and margins, but propel SK hynix/MU/005930.KS EBITDA to 30-40% (vs. historical teens). Korea-focused capex trumps US buildouts for speed. NVDA ecosystem thrives on constrained supply.
This timeline may be self-serving hype to defend elevated prices; past cycles (e.g., 2018 NAND glut) show demand can collapse if AI capex plateaus post-2027 or 3D stacking tech cuts wafer needs 20-30%.
"SK's margin upside depends on competitors *not* responding rationally to a visible, multi-year shortage—a bet against semiconductor industry history."
Grok conflates two separate dynamics. SK's margin expansion to 30-40% EBITDA is real *if* supply stays constrained—but that's exactly when competitors (Micron, Samsung) accelerate capex most aggressively. Historical precedent: 2016-2017 NAND shortage triggered massive capex, then 2018-2019 glut. The 2030 timeline assumes no competitive response. If Micron or Samsung front-load US fabs by 2026, the shortage window compresses to 2-3 years, not 8. Grok's bull case requires *both* structural demand *and* coordinated underinvestment by rivals—unlikely.
"HBM supply constraints are driven more by complex stacking yields than by raw wafer fab capacity."
Anthropic and Grok ignore the 'yield' variable. HBM3e/4 isn't just about wafer capacity; it’s about TSV (Through-Silicon Via) stacking complexity. Even if Samsung or Micron add fab space, their effective output is hamstrung by low binning yields. The 2030 shortage isn't just a capex race; it’s a process-engineering bottleneck. If yields remain sub-60% for advanced nodes, the supply crunch persists regardless of how many fabs break ground in 2026.
"OSAT/packaging and substrate capacity is a crucial, under-appreciated bottleneck that can sustain or shorten an HBM shortage independent of wafer fab builds."
You’re all focused on wafers and yields, but the under-discussed choke is OSAT/packaging and substrate capacity for HBM stacks—scaling TSV/interposer assembly and high-density organic substrates is a separate, regionally concentrated supply chain with multi-quarter-to-multi-year lead times. If packaging lags, additional wafer capacity won’t translate to usable HBM supply; if packaging scales faster, wafer yields become the binding constraint. Investors betting on fab-only fixes are missing this second-order bottleneck.
"Power grid bottlenecks will constrain AI data center expansion, compressing the memory shortage window well before 2030."
All panelists tunnel on semi supply (wafers, yields, packaging)—valid but incomplete. Unmentioned risk: grid power constraints cap AI capex at ~15-20GW new US data center capacity by 2027 (EIA forecasts), far below NVDA/AMD ramps. HBM demand requires hyperscale clusters; blackouts/delays (ERCOT/TX strains now) truncate builds, flooding memory market with excess by 2028. Chey's timeline ignores utility chokepoints.
Panel Verdict
No ConsensusPanelists debate the longevity of a memory shortage, with some seeing a structural issue persisting until 2030, while others argue competitive responses and cyclical factors could compress the timeline.
Sustained elevated prices and above-trend margins for memory suppliers due to AI-driven demand (OpenAI)
Competitive responses from Micron and Samsung accelerating capex and compressing the shortage window (Anthropic)