The five largest U.S. cloud and AI infrastructure companies have committed to spending between $660 billion and $690 billion on capital expenditures in 2026, roughly 50 percent more than the already historic $443 billion they deployed in 2025. But a central, humbling irony now haunts the entire buildout: Microsoft, the company that ignited the AI arms race through its OpenAI partnership, is sitting on $80 billion in unfulfilled Azure orders because it cannot find enough electricity to power the GPUs already warehoused and waiting. The most expensive infrastructure sprint in corporate history has run headlong into the physical limits of the power grid.
The Numbers Behind the Sprint
The scale of the 2026 capex commitment is difficult to contextualize. Analysis from Futurum Group puts the combined figure at $660–690 billion across five hyperscalers: Amazon at $200 billion (the largest single-year corporate investment commitment in recorded history, which blew past Wall Street's consensus estimate of $146 billion); Alphabet at $175–185 billion; Meta at $115–135 billion; Microsoft tracking toward $120 billion or more; and Oracle targeting $50 billion. The Stargate project — the OpenAI, SoftBank, and Oracle joint venture — adds an additional $500 billion ambition over four years, with initial construction already underway in Texas.
In a single year, aggregate annual AI infrastructure spending from the five largest U.S. technology companies jumped from roughly $443 billion in 2025 to a projected $660–690 billion in 2026. Every hyperscaler on the earnings call circuit has offered the same diagnosis: their markets are supply-constrained, not demand-constrained. The customers are there. The revenue is available. The bottleneck is physical.
Microsoft's $80 Billion Paradox
No company better illustrates the collision between ambition and infrastructure than Microsoft. CEO Satya Nadella has acknowledged publicly that GPUs are sitting in Microsoft's inventory, idle, because the company lacks sufficient electricity to bring them online. The $80 billion Azure backlog is not a demand problem. It is a physics problem.
Power transformer lead times — the unglamorous chokepoint in this entire story — have stretched to 128 weeks across the industry. A data center that breaks ground today cannot receive the transformers needed to energize it until well into 2028. Meanwhile, utility interconnection queues in the most desirable data center markets have grown so long that some projects are waiting three to five years for grid access. The result is a paradox the semiconductor industry would recognize: the bottleneck in AI is not chips. It is the infrastructure required to run them.
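The 2028 arithmetic is easy to check. A minimal sketch (the groundbreaking date here is an illustrative assumption, not a reported figure):

```python
from datetime import date, timedelta

TRANSFORMER_LEAD_WEEKS = 128  # industry-wide lead time cited above

def earliest_energization(groundbreaking: date) -> date:
    """Earliest date a new project could receive its power transformers."""
    return groundbreaking + timedelta(weeks=TRANSFORMER_LEAD_WEEKS)

# A project breaking ground at the start of 2026 cannot be energized
# until mid-2028, consistent with "well into 2028" above.
print(earliest_energization(date(2026, 1, 1)))  # → 2028-06-15
```

Push the groundbreaking date later, and the energization date slides one-for-one; the queue, not construction speed, sets the floor.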
Amazon's situation is financially starker. The company's $200 billion capex commitment is projected to produce negative free cash flow of $17 to $28 billion in 2026 — an extraordinary figure for a company that spent the previous decade generating cash at scale. Alphabet's free cash flow is expected to plummet roughly 90 percent to approximately $8.2 billion. The five hyperscalers now collectively hold more debt than cash for the first time, having issued over $121 billion in bonds in 2025 alone. The AI infrastructure bet is being financed, in no small part, with borrowed money.
The Geography of Constraints
The power crisis is not evenly distributed. Northern Virginia — the world's largest data center market, home to the densest concentration of cloud infrastructure on the planet — has effectively hit a wall. Severe grid congestion and multi-year utility interconnection queues have slowed new large-scale projects in Ashburn and the surrounding Loudoun County corridor, pushing development into adjacent counties and neighboring states.
The beneficiary is Texas. JLL's year-end 2025 North America data center report found that Texas alone accounts for 6.5 gigawatts of capacity currently under construction, supporting projections that the state could overtake Virginia as the largest global data center market by 2030. The combination of available land, a deregulated electricity market whose grid operator, ERCOT, sits largely outside federal jurisdiction because it does not cross state lines, and relatively permissive zoning has made the state a hyperscaler magnet. Oracle's Stargate campus, Meta's multi-campus expansion, and several Amazon Web Services campuses are all under active development in the Dallas-Fort Worth corridor.
JLL's analysis also found that nearly two-thirds of new data center capacity is now being built outside established hubs like Northern Virginia and Silicon Valley — a structural shift reflecting not just land costs but the reality that new power is increasingly only available in places where it hasn't been fully claimed yet. Tennessee, Indiana, and the broader Midwest are emerging as the new frontier. Companies like Cloverleaf Infrastructure — a Seattle-based startup that secures power and land for data centers — have dispatched teams to scout Wisconsin farmland and secure utility agreements in places previously invisible to the data center industry.
The Nuclear Bet
The most dramatic response to the power crisis has been a turn toward nuclear energy. Microsoft signed a long-term power purchase agreement with Constellation Energy to restart the Three Mile Island nuclear plant in Pennsylvania, rechristened the Crane Clean Energy Center, with commercial power expected to flow beginning in 2027. The deal represents the most direct expression yet of hyperscalers' willingness to make decade-scale bets on energy supply to unlock compute capacity. Amazon has separately announced nuclear power deals in Pennsylvania, anchoring a new data center adjacent to the Susquehanna nuclear facility.
The nuclear pivot is simultaneously a practical solution and a public signal. Hyperscalers need clean, always-on baseload power that can scale to hundreds of megawatts per campus, a description that fits nuclear far better than wind or solar, which require storage and backup infrastructure. But it also signals to regulators, ratepayers, and politicians that the industry takes seriously the Trump administration's demand that AI companies cover their own energy costs. In March 2026, seven major tech companies, including Microsoft, Meta, Amazon, and Google, formally agreed to absorb data center power generation costs rather than pass them to ratepayers: a politically significant commitment in states where utility boards are increasingly scrutinizing the relationship between AI expansion and residential electricity bills.
The International Energy Agency projects that global data center electricity consumption will more than double, from roughly 415 TWh today to 945 TWh by 2030. That figure assumes efficiency improvements continue at roughly their current pace. If AI workloads scale faster than efficiency gains, a real possibility given that each new generation of large language model consumes substantially more compute than the last, the actual number could be higher.
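The growth rate implied by those two endpoints can be worked out directly. A quick check, treating the window as 2024–2030 (the six-year span is an assumption for illustration):

```python
# IEA projection cited above: ~415 TWh today to ~945 TWh by 2030.
start_twh, end_twh = 415, 945
years = 6  # assumed 2024-2030 window

# Compound annual growth rate implied by the two endpoints
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 14.7% per year, before any upside from faster AI adoption
```

A sustained ~15 percent annual growth rate for an entire infrastructure class is exactly the kind of curve that collides with 128-week transformer queues.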
The Global Dimension
The infrastructure race is not confined to North America. Google has secured government approval to develop a new data center on 600 acres in Visakhapatnam, India, as part of its $15 billion America-India Connect initiative, which aims to build subsea fiber routes and data center capacity that would position India as a hub for AI-driven trade across four continents. India's Yotta Data Services has separately announced a $2 billion deployment of 20,000 Nvidia Blackwell Ultra chips at its Greater Noida campus, one of the largest single-site GPU deployments outside North America.
The North American data center M&A market is also moving at unprecedented velocity. S&P Global Market Intelligence data shows 113 completed transactions in 2025, with total deal value exceeding $69 billion — including a $40 billion acquisition of Aligned Data Centers by a consortium of strategic and financial sponsors. Vacancy rates have held at approximately 1 percent for two consecutive years, reflecting a market where supply is being consumed almost as fast as it can be built.
The Sustainability Question
Whether the revenue can justify the investment is the question hanging over the entire sector. OpenAI ended 2025 with approximately $20 billion in annual recurring revenue, a threefold increase in a single year. Anthropic surpassed $9 billion in annual run rate by January 2026, up from roughly $1 billion at the close of 2024. The demand signal is unambiguous. The math question is whether the compounding of AI revenue can keep pace with infrastructure costs that are themselves compounding — and whether the 128-week transformer queue, the multi-year utility interconnection backlog, and the geographic relocation of entire data center markets can resolve fast enough to prevent the supply constraint from becoming a structural brake on AI deployment.
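That math question can be made concrete with a deliberately crude compounding model. Every input here is an illustrative assumption, not a forecast: a combined frontier-lab run rate of about $29 billion (the OpenAI and Anthropic figures above) tripling annually, against capex of about $690 billion growing 50 percent a year (the 2025-to-2026 ratio):

```python
# Illustrative only: how many years until AI revenue overtakes annual capex
# if both keep compounding at their recent (assumed) rates?
revenue_bn, capex_bn = 29.0, 690.0   # starting points drawn from the article
rev_growth, capex_growth = 3.0, 1.5  # assumed annual multipliers

years = 0
while revenue_bn < capex_bn:
    revenue_bn *= rev_growth
    capex_bn *= capex_growth
    years += 1

print(years)  # → 5 under these assumptions
```

Growth multiples this high obviously cannot persist indefinitely; the sketch only shows why bulls and bears can look at the same numbers and reach opposite conclusions, depending entirely on how long they believe the revenue curve keeps compounding.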
For now, the consensus view inside the hyperscalers is that the risk of underbuilding vastly outweighs the risk of overbuilding. Every major cloud provider reports supply-constrained revenues, meaning they are leaving money on the table today because they cannot provision capacity fast enough. In that context, a $690 billion capex bet looks less like irrational exuberance and more like a rational response to a visible demand signal — constrained only by the stubborn physics of electricity generation, transmission, and the long lead times of the hardware required to deliver it.
The AI infrastructure sprint is real. The wall of power is real. The companies that figure out how to close the gap between them first will define the next decade of cloud computing.