The conversation about AI data center bottlenecks almost always lands on the same two culprits: power availability and GPU supply. But a quieter crisis has been developing in parallel — one that doesn't generate headlines about grid reform or chip export controls, but is just as capable of derailing a build. DRAM and high-bandwidth memory are now effectively committed through 2028. Solid-state drive supply is being squeezed by upstream memory pressure. And while tariff uncertainty has injected financial volatility into procurement planning, industry analysts say it's component scarcity — not import duties — that is actually dictating when AI data centers come online.
The Scale of the Commitment Problem
The memory crisis didn't materialize overnight. Its roots trace to the rapid industrialization of AI training workloads in 2023 and 2024, when hyperscalers began signing multi-year contracts with memory fabs at a pace the supply chain wasn't built to accommodate. Researchers and analysts now estimate that hyperscalers have locked up approximately 40% of global DRAM production under long-term agreements, leaving the remainder of the market to compete for what's left.
The consequences are measurable and direct. Lead times for data center GPUs — which cannot be assembled without high-bandwidth memory baked into the package — now stretch from 36 to 52 weeks. Standard DRAM module lead times have ballooned from 8–10 weeks to over 20 weeks. For HBM specifically, the situation is more acute: analysts report that the product is effectively sold out through 2028, with top suppliers SK Hynix, Micron, and Samsung all running at full capacity against pre-committed orders.
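Those lead times translate directly into procurement deadlines. As a rough illustration — using the worst-case figures cited above and a hypothetical go-live date — a planning team can back-calculate the latest date each component order must be placed:

```python
from datetime import date, timedelta

# Illustrative sketch only: the component names and go-live date are
# hypothetical; the lead times are the worst-case figures cited above.
LEAD_TIME_WEEKS = {
    "data_center_gpu": 52,  # upper end of the 36-52 week range
    "dram_modules": 20,     # "over 20 weeks"
}

def latest_order_date(go_live: date, lead_weeks: int) -> date:
    """Latest date an order can be placed and still arrive by go_live."""
    return go_live - timedelta(weeks=lead_weeks)

go_live = date(2027, 1, 1)
for part, weeks in sorted(LEAD_TIME_WEEKS.items()):
    print(f"{part}: order by {latest_order_date(go_live, weeks).isoformat()}")
```

Run against a January 2027 target, the arithmetic shows GPU orders would need to be locked in roughly a year in advance — before many projects have even broken ground.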
"The entire industry remains capacity-constrained because demand for computing capacity to train new AI models and support exploding growth in inferencing and agentic applications exceeds supply," Moody's Ratings wrote in a March 2026 report projecting $700 billion in capex by six U.S. hyperscalers this year alone. "The lack of readily available electricity for data centers and the time it takes to build them will constrain AI capacity, which we expect will lag demand through 2027." But analysts working closer to the supply chain say that even if power and real estate constraints evaporated tomorrow, memory availability would keep the pipeline throttled well into 2028.
Why HBM Is Almost Impossible to Surge-Produce
High-bandwidth memory is architecturally different from standard DRAM — and that difference is precisely what makes it so hard to scale. HBM is physically integrated into the GPU package itself, stacked directly beside the processor silicon using advanced 2.5D packaging techniques known as CoWoS (chip-on-wafer-on-substrate). Unlike conventional memory that slots into DIMM sockets, HBM must be co-packaged at the manufacturing stage. If a fab runs out of HBM, GPUs simply cannot be completed — there is no workaround, no assembly-line-style placeholder, no "build it now, retrofit later."
The physics of the manufacturing process compound the constraint. HBM requires specialized fab equipment, precision stacking processes, and advanced packaging capacity — all of which are long-lead items that can't be stood up in months. Building or retooling a fab to produce HBM takes years. TrendForce projected that HBM's share of total DRAM market value would surpass 30% by 2025 — and the race to capture that market has already consumed the future allocation of every major supplier. When approached by industry press about availability, Hynix, Micron, Samsung, Nvidia, Intel, and AMD have all declined to comment publicly on supply specifics, a silence that itself speaks volumes.
The CoWoS packaging bottleneck — concentrated primarily at TSMC — is a second structural constraint that runs parallel to HBM scarcity. Advanced 2.5D packaging capacity is one of the semiconductor industry's tightest chokepoints, with TSMC the dominant provider of the interposer-based assembly that makes modern AI accelerators possible. Expanding that capacity requires multi-billion-dollar investments and multi-year lead times — a problem that won't be resolved by any single government initiative or corporate spending pledge.
Tariffs Add Noise, Scarcity Dictates Timelines
The tariff environment in early 2026 has added a layer of financial uncertainty to an already strained supply chain. In February 2026, the U.S. Supreme Court ruled that portions of certain tariff actions were unlawful, prompting the Trump administration to revise its approach and triggering fresh litigation from companies seeking to recover duties already paid. With legal battles still ongoing, import costs remain uncertain — particularly for semiconductors sourced from China and South Korea.
But practitioners are clear-eyed about what actually drives their schedules. "Builders and buyers are absorbing any additional costs to keep the momentum going," said Alan Howard, senior analyst for infrastructure at Omdia, in comments published this week. "At the very core of all this is not just the typical circular trends that have been driving the market for years, but the anticipated significant revenue market opportunity for AI services." In other words, no amount of tariff-driven cost pressure is stopping the buildout — the financial math of AI services is large enough to absorb the overhead. What tariffs can't do is produce more memory chips.
The picture at the project level is nuanced. There is a substantial domestic U.S. manufacturing base for certain data center components — racks, battery backup systems, cabinetry, building management software — that are largely insulated from import duties. But even those manufacturers draw on raw materials and components sourced globally, creating indirect tariff exposure that ripples through the supply chain in ways that are hard to quantify on any given project. Who absorbs the cost varies by contract. "In some cases, we're passing that additional cost along to the customer. In some cases, we're sharing it, or we're eating it outright," said Matt Green, president of HVAC manufacturer's representative Brucker Company. "It's very project-specific, but it's not stalling or delaying any projects."
Hyperscalers Have Cornered the Market — Everyone Else Waits
The most consequential effect of the component shortage falls not on hyperscalers but on everyone else. Microsoft, Amazon, Meta, Alphabet, Oracle, and CoreWeave collectively have the procurement scale and multi-year contract relationships to secure supply ahead of the open market. They are, in effect, the primary customers of the world's leading memory fabs. Hyperscalers have signed deals that lock up the output of some memory fabs for years at a time, leaving mid-tier cloud providers, enterprise AI developers, and sovereign AI programs competing for whatever residual supply exists.
"Nvidia is going to get the lion's share of the HBM, but AMD and others have likely already put their orders in for a while," analyst Anshel Sag of Moor Insights and Strategy explained. "So, if you're trying to launch something that uses HBM, and you haven't already negotiated your supply, you're probably not getting any." That dynamic effectively closes the door on new entrants, startup AI labs, and national programs that didn't build supplier relationships years in advance.
The situation also creates asymmetric risk in the data center construction market. Operators can break ground, pour concrete, install power infrastructure, and rack servers — but filling those racks with fully operational AI accelerators depends on a memory supply chain that isn't responding to price signals or political pressure. Moody's projects total hyperscaler capex to rise further to $820 billion in 2027, but also flagged that emerging revenue growth from AI investments must materialize to justify that spending trajectory — a calculation complicated if deployed capacity sits underutilized while waiting for components.
The Downstream Squeeze on Storage and SSDs
Beyond DRAM and HBM, the memory shortage is propagating downstream into the solid-state drive market in ways that aren't widely reported. SSD supply is being squeezed as NAND flash memory fabs redirect capacity and attention toward the higher-margin HBM and DRAM segments that serve AI workloads. Data center operators who anticipated straightforward SSD procurement for storage tiers are now encountering extended lead times and price pressure in a market segment that was historically more stable.
The ripple effects extend to consumer hardware. Japanese PC vendors halted orders for high-end desktops when DDR5 memory kits reached prices roughly four times higher than 12 months earlier — a direct consequence of memory fabs prioritizing data center allocation over consumer channels. The gaming GPU market has similarly contracted, with production cuts driven by memory manufacturers deprioritizing the GDDR6 and GDDR7 modules used in gaming cards in favor of high-margin AI memory variants.
How Builders Are Adapting
The industry is not paralyzed — but it is recalibrating. The most significant strategic shift involves tying capital deployment more directly to confirmed demand and component availability rather than building speculatively and waiting for hardware to arrive. Moody's found that hyperscalers are attempting to manage risk by aligning new capacity builds more closely with contracted demand. "I'm only going to build if I can see firm commitments or contracts or demand" is how Raj Joshi, senior vice president at Moody's Corporate Finance Group, described the guiding mindset.
Procurement strategy is also evolving. Operators are extending their planning horizons, placing component orders further in advance, and in some cases building strategic reserves of available hardware. Companies are also rethinking the composition of their AI fleets, adopting heterogeneous hardware strategies that mix GPU clusters with CPU-based inference and custom ASIC deployments to reduce dependency on any single hardware category. The goal is to maintain capacity utilization even when the most in-demand accelerators are unavailable.
The modular and phased build approach is gaining traction. Rather than constructing massive campuses and filling them over time, developers are increasingly staging builds in tranches aligned with hardware delivery schedules. This creates more financial predictability and reduces the risk of carrying stranded infrastructure costs — though it also means that the total deployed capacity of any given facility may not reach its intended scale for years after the building opens.
The Road to 2028: When Does Supply Catch Up?
The capacity expansion investments now underway are substantial — but they don't solve the near-term problem. Micron has announced a $100 billion fabrication plant in upstate New York that will expand domestic memory production — but fab construction takes years, and even greenfield fabs already under construction won't contribute meaningfully to HBM supply before late in the decade. SK Hynix and Samsung are investing heavily in HBM4 capacity for next-generation AI systems, but that production is already allocated to Nvidia's Vera Rubin and subsequent platform generations before the first wafers are cut.
The technology roadmap itself creates a moving target. Memory generations advance roughly in step with GPU platforms, meaning that new HBM4 capacity — once it ramps — will be absorbed immediately by next-generation accelerator demand rather than creating breathing room in the broader market. Omdia analyst Vlad Galabov has projected that global data center capital expenditure could surpass $1 trillion by 2030 — a number that implies the demand side of this equation will continue outpacing supply expansion for the foreseeable future.
For data center developers, the strategic calculus is now more complex than at any previous point in the industry's history. Power constraints are real and geographically variable. Component shortages are structural and multi-year. Tariff uncertainty adds financial volatility. And demand is accelerating faster than any single constraint can be resolved. The companies that navigate this environment successfully will be those that treat memory procurement as a first-class infrastructure problem — not an afterthought to real estate and power planning — and that build supplier relationships and inventory strategies to match.
The headline numbers — $700 billion in 2026 capex, rising to $820 billion in 2027 — capture the ambition of the AI infrastructure race. But the memory committed through 2028 tells a different story: one where the industry's limiting factor isn't capital or vision, but physics and fabrication schedules that no amount of spending can compress.