Nvidia just committed $4 billion to two photonics companies — Lumentum and Coherent — and the move signals something bigger than a supply chain deal. For the first time, Jensen Huang is publicly identifying optical interconnects as the binding constraint on where AI goes next. Not GPUs. Not memory. Light.
The Announcement
On March 2, 2026, Nvidia unveiled Spectrum-X Photonics and Quantum-X Photonics — a new class of co-packaged optics (CPO) networking switches designed to connect AI factories at million-GPU scale — and simultaneously announced a $2 billion investment in each of Lumentum and Coherent. The investments include multibillion-dollar purchase commitments, future capacity rights, and access to advanced laser and optical networking products from both firms.
The market responded immediately. Lumentum shares closed nearly 12% higher; Coherent jumped 15%. Nvidia itself added roughly 3% on the day. These are not minor reactions to routine supply agreements — they reflect investor recognition that Nvidia has just identified, and is now financing, the next foundational bottleneck in AI infrastructure.
Jensen Huang was unambiguous in the framing: "AI factories are a new class of data centers with extreme scale, and networking infrastructure must be reinvented to keep pace. By integrating silicon photonics directly into switches, NVIDIA is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories."
Why Copper Can't Scale to a Million GPUs
To understand why this matters, it helps to understand the physics of the problem Nvidia is solving. Today's AI training clusters connect GPUs through a combination of copper cables and pluggable optical transceivers. At the scale of tens of thousands of GPUs — clusters like xAI's Colossus — this architecture works. But the energy and cost economics collapse catastrophically as you scale further.
As Huang explained during GTC 2025 when he first unveiled the photonic switch architecture: a cluster of one million GPUs, connected using conventional Mach-Zehnder pluggable transceivers, would require six transceivers per GPU — six million transceivers total, consuming 180 megawatts just in interconnect power. At $6,000 per GPU in transceiver cost alone, the economics of million-GPU factories become untenable before a single compute cycle runs.
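Huang's figures are internally consistent, and the per-unit numbers they imply can be checked with simple arithmetic. A sketch — the per-transceiver values below are derived from the totals quoted above, not independently sourced:

```python
# Back-of-envelope check of the pluggable-transceiver math from GTC 2025.
# All inputs are the article's quoted figures; per-unit values are derived.

gpus = 1_000_000              # a million-GPU AI factory
transceivers_per_gpu = 6      # conventional Mach-Zehnder pluggables
cost_per_gpu_usd = 6_000      # transceiver cost attributed to each GPU
total_power_mw = 180          # interconnect power for the whole cluster

total_transceivers = gpus * transceivers_per_gpu                   # 6,000,000
watts_per_transceiver = total_power_mw * 1e6 / total_transceivers  # 30 W each
total_cost_usd = gpus * cost_per_gpu_usd                           # $6 billion

print(f"{total_transceivers:,} transceivers")
print(f"{watts_per_transceiver:.0f} W per transceiver")
print(f"${total_cost_usd / 1e9:.0f}B in transceivers before any compute runs")
```

The implied 30 W per transceiver and $6 billion in optics alone — before a single GPU is powered on — is the economic cliff that co-packaged optics is meant to remove.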
Nvidia's silicon photonics solution — built on TSMC's COUPE (Compact Universal Photonic Engine) process, which stacks 220 million transistors atop 1,000 photonic integrated circuits — replaces those six discrete transceivers per GPU with co-packaged optics integrated directly into the switch fabric. The result: 3.5x lower power consumption, 4x fewer lasers, 63x better signal integrity, and 10x greater network resiliency at scale, according to Nvidia's own specifications.
The new Spectrum-X Photonics Ethernet switches support configurations of up to 512 ports of 800 Gb/s — 409.6 Tb/s of aggregate throughput in a single switch. The Quantum-X Photonics InfiniBand switches offer 144 ports of 800 Gb/s in a liquid-cooled design. Both platforms are built on 1.6 Tb/s micro ring modulator (MRM) technology — a fundamentally different optical architecture from the incumbent Mach-Zehnder designs it replaces.
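The headline throughput numbers compose directly from the per-port figures, and Nvidia's claimed 3.5x power reduction can be applied to the same million-GPU baseline for a rough sense of scale. A sketch — applying the 3.5x factor to the 180 MW pluggable baseline is an illustration, not a vendor specification:

```python
# Aggregate-throughput and power arithmetic from the figures quoted above.
# The 3.5x power factor is Nvidia's own claim; extrapolating it against the
# 180 MW pluggable baseline is illustrative, not a published spec.

port_speed_gbps = 800

spectrum_x_ports = 512        # Spectrum-X Photonics Ethernet
quantum_x_ports = 144         # Quantum-X Photonics InfiniBand

spectrum_x_tbps = spectrum_x_ports * port_speed_gbps / 1_000   # 409.6 Tb/s
quantum_x_tbps = quantum_x_ports * port_speed_gbps / 1_000     # 115.2 Tb/s

pluggable_power_mw = 180      # million-GPU baseline from the GTC 2025 math
cpo_power_mw = pluggable_power_mw / 3.5                        # ~51 MW

print(f"Spectrum-X Photonics: {spectrum_x_tbps:.1f} Tb/s aggregate")
print(f"Quantum-X Photonics:  {quantum_x_tbps:.1f} Tb/s aggregate")
print(f"Interconnect power with CPO: ~{cpo_power_mw:.0f} MW vs {pluggable_power_mw} MW")
```

On these numbers, moving the interconnect layer to CPO recovers on the order of 130 MW at million-GPU scale — roughly the entire power budget of a large conventional data center.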
Indium Phosphide: The New CoWoS Constraint
Nvidia's simultaneous investment in two competing photonics suppliers tells the deeper story. This is not a partnership — it is deliberate supply chain diversification in the face of a material constraint that Nvidia's analysts have identified as a multi-year bottleneck.
The constraint is indium phosphide (InP) — the compound semiconductor substrate required for the continuous-wave lasers that power 800G and 1.6T optical transceivers and CPO engines. Unlike silicon, InP cannot be manufactured at commodity scale. The number of firms capable of performing epitaxial InP growth is small, yields are inherently lower, and throughput cannot be ramped quickly. According to Futurum Research analyst Brendan Burke, current transceiver demand already exceeds InP supply by a factor of two — a dynamic that directly mirrors the CoWoS advanced packaging bottleneck of 2023–2024 that throttled Nvidia's H100 production ramp for nearly two years.
Nvidia has navigated these structural constraints before. It pre-paid for CoWoS capacity at TSMC. It cultivated parallel HBM relationships with SK Hynix, Samsung, and Micron simultaneously to prevent any single supplier from becoming the ceiling on its ramp. It is now applying exactly the same playbook to photonics: by investing $2 billion in both Lumentum and Coherent — longtime rivals in the optical components space — Nvidia is funding competing manufacturing buildouts and securing future capacity rights across two distinct supply chains before the bottleneck becomes acute.
Lumentum CEO Michael Hurlston confirmed the company will invest in a new fabrication facility to increase capacity. Coherent CEO Jim Anderson described the deal as an expansion of a 20-year supplier relationship across multiple product families, now elevated to strategic investment level. Both companies will also support U.S.-based manufacturing buildouts — a geopolitically significant element given the current environment around semiconductor supply chain onshoring.
Scale-Up vs. Scale-Out: The Two Photonics Problems
The transition to photonics in AI data centers actually encompasses two distinct technical challenges, operating at different timescales and using different optical architectures. Nvidia's announcement addresses one explicitly and gestures toward the other.
Scale-out networking — the connections between GPU clusters across the broader data center fabric — is where Nvidia's Spectrum-X and Quantum-X photonic switches live. This is the infrastructure connecting racks to each other, the "east-west" traffic of a hyperscale AI factory. Nvidia has been moving this layer to CPO for Ethernet and InfiniBand fabrics, with the new switches expected in 2026 from leading infrastructure vendors.
Scale-up networking — the NVLink connections between GPUs within a single node or rack — is a harder problem that Nvidia has not yet announced a photonics solution for. As HPCwire noted, Nvidia's co-packaged optics adoption currently covers InfiniBand and Ethernet switches for scale-out clusters, while NVLink — its proprietary high-bandwidth interconnect for scale-up systems — remains on copper. That is likely the next frontier, and the capacity rights Nvidia has secured through the Lumentum and Coherent deals are probably positioned to address it when the time comes.
The Competitive Landscape Just Shifted
Nvidia is not moving into a vacuum. The photonics arms race across the broader AI infrastructure ecosystem has been accelerating for over a year, and Nvidia's $4 billion commitment is partly a response to moves already in progress across its competitive landscape.
In December 2025, Marvell announced a $3.25 billion acquisition of Celestial AI — a semiconductor startup developing photonic fabric interconnects specifically for scale-up XPU connectivity. That deal, expected to close by the end of March 2026, positions Marvell's custom ASIC business — already serving Apple, Google, Microsoft, and Amazon — with an optical scale-up connectivity play that directly competes with where NVLink must eventually go.
Meanwhile, Meta's $60 billion AMD chip procurement deal — the largest AI hardware procurement in history — is itself part of the pressure on Nvidia to accelerate its photonics roadmap. AMD's competing MI450 Instinct GPUs will need their own interconnect fabric at scale. Every hyperscaler that builds a photonics-native cluster architecture independent of Nvidia becomes a potential forcing function for the entire industry to accelerate the copper-to-optics transition.
Google, which presented its own CPO developments at Hot Chips 2025, and Intel, with its data center photonics program, are additional vectors. The optical interconnect race is now a full-spectrum, industry-wide competition, not a niche technology transition.
What Jensen Huang Is Really Saying
Nvidia's communications around these investments are worth reading carefully. Huang's language — "gigawatt-scale AI factories," "million-GPU AI factories," "shattering the old limitations" — is not hyperbole for its own sake. It is Huang doing what he has consistently done over the past five years: naming the next bottleneck in advance of the market recognizing it, and positioning Nvidia to own the solution before others realize it is a problem.
The GTC 2025 keynote introduced the Spectrum-X and Quantum-X photonic switch concepts. The March 2026 investment announcements convert that conceptual announcement into a funded supply chain. The pattern is identical to how Nvidia handled the transition from PCIe to NVLink, from GDDR to HBM, and from conventional packaging to CoWoS. Identify the constraint. Invest upstream. Control the supply chain. Deliver the product when the market is ready to pay for it.
Gilad Shainer, Nvidia's senior vice president of networking, put the strategic framing plainly: the Lumentum and Coherent collaborations on lasers and silicon photonics will enable the next generation of "million-scale AI." That phrase — million-scale AI — is the product specification for whatever Nvidia's roadmap calls for in 2027 and beyond. Today's Blackwell NVL72 racks connect 72 GPUs in a shared memory domain. Vera Rubin scales that further. The generation after Vera Rubin requires optical interconnects to exist at all.
Timeline and Market Impact
Nvidia expects Quantum-X Photonics InfiniBand switches to be available later in 2026, with Spectrum-X Photonics Ethernet switches following from major infrastructure and system vendors. The photonics ecosystem Nvidia has assembled — including TSMC, Corning, Foxconn, Fabrinet, SPIL, SENKO, Sumitomo Electric Industries, and TFC Communication alongside Lumentum and Coherent — is already in place. This is not a research program. It is a production ramp.
For the broader data center industry, the implications extend beyond Nvidia's own product line. Every hyperscaler building AI factories at scale will need to plan their networking infrastructure around the optical transition. Power consumption at the networking layer — which currently accounts for a meaningful fraction of data center energy budgets — will decline sharply as CPO adoption scales. That changes the economics of AI factory construction in ways that favor denser, more power-constrained facilities and reduce one of the primary arguments against building in markets with constrained grid access.
The photonics transition also reshapes the competitive dynamics for optical component suppliers. For years, Lumentum and Coherent competed primarily in datacom and telecom transceiver markets. Being folded into Nvidia's supply chain at $2 billion apiece — with purchase commitments that guarantee multi-year revenue visibility — transforms both companies' strategic positioning and removes the business model uncertainty that has historically made optical component stocks volatile.
The Pattern Repeats
In retrospect, every major transition in AI hardware has followed the same arc: a compute breakthrough creates demand that exposes the next layer of infrastructure as the binding constraint. GPUs exposed memory bandwidth. Memory bandwidth exposed interconnect speed. Interconnect speed is now exposing the limits of copper and conventional transceivers. Each time, Nvidia has been the first company to write the check that moves the constraint.
The $4 billion committed to Lumentum and Coherent is not the largest investment Nvidia has ever made in a supply chain transition. But it may be the most strategically significant — because optical interconnects are the last major physical layer of the AI factory stack that Nvidia does not yet own end to end. When the Spectrum-X and Quantum-X photonic switches ship at scale, that will change.
The copper era in AI data center networking is ending. Nvidia just decided when and how.