Nvidia's $216B Record Year: Blackwell Dominates, Vera Rubin Samples Ship, $700B Capex Wave Continues


Nvidia just reported the most consequential quarter in semiconductor history. Q4 FY2026 revenue hit $68.1 billion — up 73% year over year — capping a record fiscal year of $216 billion in total sales. But the numbers are only part of the story. With first-generation Vera Rubin samples now shipping to customers and hyperscalers budgeting nearly $700 billion in AI infrastructure capex this year alone, Jensen Huang's thesis has never been more audacious — or more credible.

The Numbers That Silenced the Bears

Going into Wednesday's earnings call, Wall Street was braced for trouble. DeepSeek's low-cost model demonstrations had rattled investors earlier in the year, sparking a brief selloff on fears that AI compute demand might plateau. Those fears evaporated the moment Nvidia's CFO Colette Kress opened her prepared remarks.

Q4 FY2026 revenue of $68.13 billion came in roughly $2 billion ahead of analyst consensus. GAAP net income nearly doubled year over year to $43 billion — $1.76 per diluted share versus 89 cents in the prior-year quarter. Non-GAAP gross margin clocked in at 75.2%, recovering from 73.4% in Q3 and landing squarely in the "mid-70s" target range that management had promised going into fiscal 2027.

The Q1 FY2027 guidance of $78 billion — a potential 14% sequential jump — was the number that truly stunned. Analysts had been modeling $72.6 billion. The beat before the beat.
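Taken together, the headline figures are internally consistent; a quick back-of-envelope check using only the numbers reported above (dollars in billions, except EPS):

```python
# Headline figures from the release, as reported above ($B except EPS).
q4_revenue = 68.13          # Q4 FY2026 revenue
yoy_growth = 0.73           # reported 73% year-over-year growth
eps_now, eps_prior = 1.76, 0.89
q1_guide, q1_consensus = 78.0, 72.6

implied_prior_q4 = q4_revenue / (1 + yoy_growth)   # year-ago quarter, ~$39.4B
eps_multiple = eps_now / eps_prior                 # ~1.98x, i.e. "nearly doubled"
sequential_jump = q1_guide / q4_revenue - 1        # mid-teens sequential growth
guide_vs_consensus = q1_guide / q1_consensus - 1   # ~7.4% above Street models

print(f"{implied_prior_q4:.1f} {eps_multiple:.2f} "
      f"{sequential_jump:.1%} {guide_vs_consensus:.1%}")
```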

Data center revenue, which now accounts for over 91% of Nvidia's total sales, reached $62.3 billion for the quarter — up 75% from a year earlier and up 22% sequentially. Networking alone contributed $10.98 billion, a 263% year-over-year increase driven by NVLink and Spectrum-X Ethernet adoption. Supply-related commitments nearly doubled from $50.3 billion to $95.2 billion in a single quarter, signaling that Nvidia has locked in extraordinary demand visibility.
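The mix and growth claims in that paragraph can be cross-checked the same way from the stated figures (dollars in billions):

```python
total_revenue = 68.13
data_center = 62.3
networking = 10.98
networking_yoy = 2.63                    # the reported 263% YoY increase
commitments_now, commitments_prior = 95.2, 50.3

dc_share = data_center / total_revenue                        # ~91.4%, "over 91%"
implied_prior_networking = networking / (1 + networking_yoy)  # ~$3.0B a year ago
commitment_multiple = commitments_now / commitments_prior     # ~1.89x, "nearly doubled"

print(f"{dc_share:.1%} {implied_prior_networking:.2f} {commitment_multiple:.2f}")
```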

Blackwell's Ramp: Supply-Constrained, Not Demand-Constrained

The Grace Blackwell platform — a rack-scale system combining 72 Blackwell GPUs with 36 Grace CPUs in a single liquid-cooled unit — has become the dominant workhorse of the hyperscaler AI build-out. Meta, Google, Microsoft, Amazon, and OpenAI are all deploying it at scale. Hyperscalers collectively represented just over 50% of Nvidia's data center revenue this quarter, the company confirmed.

The critical constraint has never been demand. It has been supply. Memory, specifically high-bandwidth memory (HBM), remains the primary bottleneck as manufacturers scramble to keep pace. Nvidia has responded with what Kress, in her written commentary, described as "strategically secured inventory and capacity to meet demand beyond the next several quarters." The company expects gaming GPU launches to be affected by these memory constraints in Q1 FY2027 and beyond, as AI accelerators take priority across the supply chain.

The $500 billion in committed Blackwell and Rubin revenue that management first disclosed last fall is on track to be exceeded. Kress confirmed Wednesday that Nvidia expects "sequential revenue growth throughout calendar 2026, exceeding what was included in the $500 billion Blackwell and Rubin revenue opportunity." That is a remarkable statement for a company already measuring quarterly revenue in the tens of billions.

Vera Rubin Arrives — 10x More Performance Per Watt

The forward-looking narrative from this earnings cycle belongs to Vera Rubin, Nvidia's next-generation rack-scale AI system. Kress delivered a striking disclosure: "We shipped our first Vera Rubin samples to customers earlier this week, and we remain on track to commence production shipments in the second half of the year."

What makes Vera Rubin significant isn't just raw performance — it's efficiency. The system is built around 72 Rubin GPUs and 36 Vera CPUs, with core silicon manufactured on TSMC's advanced process nodes and HBM4 memory. Nvidia claims 10 times more performance per watt versus Grace Blackwell. At a moment when data center power consumption has become a geopolitical flashpoint — driving nuclear power deals, grid capacity concerns, and land acquisition wars — a 10x efficiency gain is not incremental. It is transformational.
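To make the efficiency claim concrete: at a fixed facility power budget, a 10x performance-per-watt gain means either 10x the throughput from the same site, or the same throughput from a tenth of the power. The site size and power price below are illustrative assumptions, not figures from the release:

```python
PERF_PER_WATT_GAIN = 10      # Nvidia's claimed Rubin-vs-Grace-Blackwell gain
SITE_POWER_MW = 100          # hypothetical Blackwell deployment size (assumption)
POWER_PRICE_PER_MWH = 80.0   # assumed industrial electricity price in dollars
HOURS_PER_YEAR = 8760

# Power a Rubin deployment would need to match the same throughput.
rubin_power_mw = SITE_POWER_MW / PERF_PER_WATT_GAIN

# Annual energy bill at full utilization, before and after the transition.
blackwell_cost = SITE_POWER_MW * HOURS_PER_YEAR * POWER_PRICE_PER_MWH
rubin_cost = rubin_power_mw * HOURS_PER_YEAR * POWER_PRICE_PER_MWH

print(f"{rubin_power_mw:.0f} MW, ${blackwell_cost/1e6:.1f}M -> ${rubin_cost/1e6:.2f}M")
```

Under these assumptions, the same workload's annual energy bill drops from roughly $70M to $7M, which is why a claim like this matters as much for grid capacity as for benchmarks.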

The system is assembled from more than 1.3 million components sourced from over 80 suppliers across 20-plus countries. Meta has already announced plans to deploy Vera Rubin in its data centers by 2027. OpenAI and other top-tier customers are also in line. For context, Grace Blackwell was announced at GTC 2024 and changed the game for what a single compute cluster could do. Vera Rubin represents the same magnitude of leap — just two years later.

Jensen Huang's Thesis: "Compute Is Revenue"

Perhaps the most important moment of Wednesday's earnings call wasn't a number. It was a philosophical statement from Huang that crystallized the bull case for AI infrastructure spending in a single phrase.

"We have now seen the inflection of agentic AI," Huang told analysts. "In this new world of AI, compute is revenues. Without compute, there's no way to generate tokens. Without tokens, there's no way to grow revenues."

The argument is elegant and, if correct, self-fulfilling. Cloud providers generate revenue from AI workloads by processing tokens — the discrete units of AI output. More compute capacity enables more token generation, which enables more revenue. Every dollar of capex spent on Nvidia GPUs is, in this framing, an investment directly tied to future revenue generation rather than conventional infrastructure overhead.
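A minimal sketch of that framing, with every parameter (fleet size, per-GPU token throughput, token price) a hypothetical assumption rather than a disclosed figure:

```python
def implied_token_revenue(gpus: int, tokens_per_gpu_per_sec: float,
                          price_per_million_tokens: float, hours: float) -> float:
    """Revenue implied by Huang's chain: compute -> tokens -> revenue."""
    tokens = gpus * tokens_per_gpu_per_sec * hours * 3600
    return tokens / 1e6 * price_per_million_tokens

# Hypothetical cluster: 10,000 GPUs at 500 tokens/s each, $2 per million
# tokens, running one month (720 hours) at full utilization.
monthly = implied_token_revenue(10_000, 500, 2.0, 720)
print(f"${monthly / 1e6:.1f}M")
```

In this toy model, revenue is linear in deployed compute, which is precisely why capex reads as a revenue investment in the hyperscalers' framing.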

It's a thesis the hyperscalers appear to have fully internalized. Meta, which spent $72 billion on capital expenditures in 2025, is projecting up to $135 billion in 2026. Google has signaled up to $185 billion — more than double its prior-year figure. Combined capex across the five major hyperscalers is approaching $700 billion for calendar 2026, with Nvidia positioned as the primary beneficiary of that spend.

The DeepSeek Factor: Demand Amplifier, Not Killer

One question hung over the call like a shadow: what does the rise of efficient Chinese AI models mean for Nvidia's demand outlook? DeepSeek's V3 model, which demonstrated near-frontier performance at a fraction of the typical training cost, had briefly rattled markets in January. Then, this week, Reuters and Digitimes reported that DeepSeek had withheld its upcoming V4 model from U.S. chipmakers including Nvidia and AMD, a break from industry norms that signals deepening U.S.-China AI hardware tensions.

Huang addressed the efficiency question directly, and his answer was counterintuitive: greater AI efficiency drives more demand, not less. When models become cheaper to run, the total addressable market expands. More applications become economically viable. More queries are processed. More compute is consumed in aggregate. The effect is similar to what happened with fuel-efficient cars — they didn't shrink fuel consumption globally; they enabled more people to drive more miles.
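That efficiency-drives-demand argument is the classic Jevons-paradox effect, and it reduces to one line: if per-token cost falls by a factor c and token demand has price elasticity e, total tokens scale as c**e and total compute spend scales as c**(e-1), so spend rises exactly when e > 1. A tiny sketch (the elasticity values are illustrative assumptions):

```python
def spend_multiplier(cost_drop: float, elasticity: float) -> float:
    """Change in total compute spend when per-token cost falls by `cost_drop`x.

    Tokens demanded scale as cost_drop**elasticity (constant-elasticity
    demand); spend is those tokens times the new, lower unit cost.
    """
    return cost_drop ** (elasticity - 1)

# A 10x cost drop, under three assumed demand elasticities:
for e in (0.8, 1.0, 1.3):
    print(f"elasticity {e}: spend x{spend_multiplier(10, e):.2f}")
```

Huang's answer amounts to a bet that elasticity for AI output is well above 1, so cheaper tokens grow, rather than shrink, aggregate GPU demand.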

On the China revenue question, Nvidia was explicit: the company is not including any data center revenue from China in its Q1 guidance. The geopolitical constraints are real and have materially reduced a once-significant revenue stream. But the rest of the world — particularly sovereign AI deployments across Europe, the Middle East, and Southeast Asia — is absorbing that slack and more.

The Competitive Landscape: AMD, Broadcom, and Custom Silicon

Nvidia's dominance is not without challengers. AMD's MI350 and upcoming MI400 series continue to attract hyperscaler adoption, particularly as a second-source alternative that reduces supply risk. Broadcom and Marvell are winning large custom ASIC contracts from hyperscalers building their own inference chips — Google's TPUs, Amazon's Trainium, Meta's MTIA platform.

These are real competitive headwinds. But Nvidia's moat isn't just silicon: it's the CUDA software ecosystem, the NVLink networking fabric, and the rack-scale integration that competitors have yet to replicate at scale. That networking revenue alone grew 263% year over year suggests the full-stack approach is resonating.

Nvidia also announced plans to manufacture up to $500 billion of AI infrastructure domestically in the U.S. through 2029, including Blackwell GPUs at TSMC's Arizona fabs. That manufacturing localization play, while years in the making, positions the company favorably against export-control risk and aligns with the current administration's domestic production priorities.

What It Means for the AI Hardware Ecosystem

Nvidia's Q4 results don't just reflect one company's performance; they serve as a real-time barometer for the entire AI hardware ecosystem. The $62.3 billion data center quarter confirms that the AI infrastructure build-out is not slowing. The Vera Rubin sample shipments mark the beginning of a transition cycle that will reshape data center architecture by 2027. And the $700 billion capex commitment from hyperscalers establishes a floor for AI hardware demand that the broader supplier ecosystem will spend years working to meet.

The unanswered question remains: at what point does capacity outrun near-term application demand? Even Nvidia's bulls acknowledge that the ratio of compute capacity to useful AI applications will eventually need to balance. But with agentic AI workflows, multimodal reasoning, and trillion-parameter models all expanding the computational frontier simultaneously, that reckoning appears further away than skeptics expected.

For now, the AI hardware supercycle is running at full throttle — and Nvidia is holding the wheel.


Nvidia (NVDA) reported Q4 FY2026 results on February 26, 2026. Revenue figures and forward guidance are sourced from the company's official earnings release and CFO commentary. All financial data current as of the date of publication.
