Anthropic's Court Filings Expose AI's Accounting Shell Game: $10 Billion Spent, $5 Billion Earned


The Anthropic-Pentagon lawsuit was supposed to be a story about AI safety and government overreach. It has turned into something more revealing: the first time a major frontier AI lab has been forced, under oath, to tell the world what the business actually looks like. The numbers in those court filings — $10 billion spent, $5 billion earned — are shaking an industry that had grown comfortable with a different vocabulary entirely.

What the Court Documents Actually Say

In a declaration filed with the U.S. District Court for the Northern District of California, Anthropic Chief Financial Officer Krishna Rao disclosed that the company's lifetime revenue has exceeded "$5 billion to date." Reuters Breakingviews reported that the same filing disclosed that Anthropic has spent more than $10 billion on training models and serving responses to users, all to generate that $5 billion in cumulative revenue.

The ratio is stark: for every dollar Anthropic has earned since its founding in 2021, it has spent approximately two dollars building and running the systems that generate that revenue. The company has raised over $40 billion in venture capital and strategic investment to bridge that gap, most recently closing a $30 billion Series G round at a $380 billion post-money valuation in February. That valuation now sits alongside lifetime GAAP revenue of five billion dollars — an eye-watering multiple by any conventional measure.

Rao's filing was submitted as part of Anthropic's emergency motion seeking a restraining order against the Trump administration's supply chain risk designation, which the company argues is causing concrete and irreparable harm to its commercial relationships. The legal battle has proven more financially illuminating than most quarterly earnings calls.

The Run-Rate Revenue Problem

The $5 billion figure sits in uncomfortable proximity to a very different set of numbers Anthropic has publicized in recent months. As recently as the end of February, the company was touting a "run-rate revenue" of $19 billion. In mid-February, at the time of the Series G announcement, the figure was $14 billion. The gap between $5 billion in actual GAAP revenue and $19 billion in run-rate claims is not fraud — but it is a window into how Silicon Valley has learned to count money in ways that require careful translation.

Reuters Breakingviews explained the methodology: Anthropic's run-rate is calculated by taking the last 28 days of consumption-based revenue from enterprise customers and multiplying by 13, then adding annualized subscription revenue. This captures a snapshot of recent momentum and extrapolates it forward as if the current pace were guaranteed to continue indefinitely. The $5 billion GAAP figure, by contrast, represents every actual dollar the company received from 2023 through December 2025 — not a projection, but recorded sales.
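The methodology reduces to a one-line formula. The sketch below uses the structure Reuters Breakingviews described; the dollar inputs are illustrative placeholders, not figures from the filing.

```python
def run_rate_revenue(last_28_days_consumption: float,
                     annual_subscription: float) -> float:
    """Annualize a trailing-28-day consumption snapshot (13 periods
    covers ~364 days) and add already-annualized subscription revenue."""
    return last_28_days_consumption * 13 + annual_subscription

# Illustrative inputs only (not Anthropic's actual split):
# $1.2B of consumption in the trailing 28 days, $3.4B of annualized subscriptions
print(run_rate_revenue(1.2e9, 3.4e9))  # 19000000000.0 -- a "$19 billion run-rate"
```

Note what the formula does not contain: any history. A single strong 28-day window sets the headline number regardless of what the preceding year looked like, which is exactly why it can sit so far above cumulative GAAP revenue.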

The gap between the two figures narrows when you account for the hockey-stick nature of Anthropic's growth: by all accounts, the large majority of that $5 billion in actual revenue was generated in the final months of 2025. The run-rate methodology is not inherently dishonest; it is designed to signal trajectory, not history. But the practice makes it trivially easy to quote numbers that sound like recurring annual revenue when they are better understood as annualized snapshots of a business that is growing fast and, to date, has not earned nearly as much as it has spent.

What It Costs to Run a Frontier AI Lab

The $10 billion in training and inference costs is not a surprise to anyone who has watched the economics of large language model development up close. It is, however, bracing to see it confirmed in a legal filing. Training a single frontier model at Anthropic's scale requires thousands of Nvidia H100 or H200 GPUs running continuously for weeks or months at a time, at cloud compute costs that typically run between $2 and $5 per GPU-hour. A single serious training run for a model the scale of Claude 3.7 Sonnet can cost hundreds of millions of dollars. Multiply across multiple generations of models, safety research runs, fine-tuning workloads, and the actual serving of inference requests to millions of enterprise users, and $10 billion becomes a defensible — even conservative — accounting.
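The arithmetic behind a "hundreds of millions per run" estimate is simple to reproduce. Every input below is an assumption chosen for illustration (cluster size, run length, blended rate); none comes from the court filings.

```python
# Back-of-envelope cost of one frontier training run.
gpus = 16_000            # assumed H100/H200-class accelerators in the cluster
hours = 90 * 24          # assumed ~90 days of continuous training
usd_per_gpu_hour = 3.50  # midpoint of the $2-$5 cloud-rate range cited above

training_cost = gpus * hours * usd_per_gpu_hour
print(f"${training_cost / 1e6:.0f}M per run")  # prints "$121M per run"
```

Stack several model generations, failed and exploratory runs, safety research, and fine-tuning on top of a figure like that, and a cumulative $10 billion stops looking surprising.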

The inference side of the equation is where the math gets particularly unforgiving. Unlike training, which is a one-time capital expense per model generation, inference is ongoing: every query sent to Claude costs compute. Enterprise customers are billed on consumption, but the margin structure depends heavily on how efficiently the company can route requests, how aggressively it can quantize and distill models to reduce per-token costs, and whether hardware costs continue their historical trend of falling over time. Anthropic, like all frontier labs, is betting that unit economics improve fast enough to justify the scale of investment it has already made.
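The margin structure on consumption billing comes down to the spread between what a customer pays per token and what those tokens cost to serve. A toy version, with entirely hypothetical prices and costs:

```python
# Hypothetical per-token serving economics -- numbers are invented for
# illustration and are not Anthropic's actual pricing or costs.
price_per_mtok = 15.00    # billed per million output tokens
compute_per_mtok = 6.00   # blended GPU cost to serve those tokens

gross_margin = (price_per_mtok - compute_per_mtok) / price_per_mtok
print(f"{gross_margin:.0%}")  # prints "60%"
```

Quantization, distillation, and cheaper hardware all work on the `compute_per_mtok` line; open-source competition works on the `price_per_mtok` line. The bet is that the first falls faster than the second.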

The Immediate Financial Damage From the Federal Ban

The revenue disclosures in the court filing were not abstract financial history — they were submitted in support of an emergency injunction request. Rao's declaration and a separate filing from Chief Commercial Officer Paul Smith document specific deals already disrupted by the Trump administration's supply chain risk designation.

Smith's declaration describes a customer that paused discussions on a $15 million contract after Anthropic was labeled a supply chain risk. Two financial services companies refused to finalize agreements worth a combined $80 million unless they received broad cancellation rights — provisions that would not have been requested, and likely not granted, prior to the designation. A major Anthropic investor disclosed that the Department of Defense had directly contacted portfolio companies about their use of Claude, creating a chilling effect across the company's client base.

Rao put the aggregate exposure bluntly: "Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars." For a company whose lifetime GAAP revenue currently stands at $5 billion, losing multiple billions in a single year is not an abstraction — it is a threat to the investment thesis that justifies the $380 billion valuation.

OpenAI's Own Precarious Math

Anthropic is not alone in operating with economics that require careful examination. OpenAI, the dominant player in the consumer AI market, reported $20 billion in annual recurring revenue at the end of December 2025 — a figure that uses a different definition than Anthropic's run-rate, specifically capturing subscription revenue rather than metered consumption. The company has since surpassed $25 billion in annualized revenue as of late February, according to The Information, a 25 percent increase in roughly two months.

Those headline numbers obscure a different kind of challenge. HSBC analysts estimated that OpenAI may require approximately $207 billion in additional financing by 2030 to sustain its current trajectory — even after its $110 billion funding round announced earlier this year. That round carries conditions that complicate the headline figure: Amazon's $50 billion commitment begins with an initial disbursement of approximately $15 billion, with the remainder contingent on OpenAI either completing a public offering or reaching artificial general intelligence, whichever comes first. SoftBank, expected to contribute an additional $30 billion, is itself navigating a negative credit outlook from S&P Global while exploring a bridge loan of up to $40 billion.

The competition dynamic compounds the problem for both companies. Open-source models — including Meta's Llama 4 series and AI2's Olmo Hybrid — are compressing the pricing that commercial frontier labs can sustain. As capable open models proliferate, enterprise customers gain negotiating leverage, and the per-token revenue that closed-source labs can command faces structural downward pressure.

The IPO Escape Valve

Both Anthropic and OpenAI are reportedly weighing initial public offerings as early as this year, according to the Financial Times and Reuters. The IPO timeline is widely understood as the mechanism by which private investors monetize their positions and public markets absorb the ongoing capital requirements of scaling frontier AI.

The Anthropic court filings have complicated that calculus. Public investors, required to make buy decisions on the basis of publicly available financial information, will now have access to the GAAP revenue figure — $5 billion lifetime — as context for a valuation that implies the company is worth 76 times that number. The run-rate methodology will still be presented and will still be relevant, but the gap between accounting measures will be visible in a way it has not been in private fundraising rounds, where investors receive information under confidentiality and can be more easily guided toward the metrics that favor the bull case.

The broader question hanging over both companies — and over the $650 billion in AI infrastructure spending that Big Tech plans to deploy this year — is whether the economics of frontier AI ever close. The industry's answer is that models will improve faster than costs, that new use cases will unlock revenue that doesn't yet exist, and that the history of transformative technology platforms supports patience. The court documents suggest that thesis is being tested under less favorable conditions than the industry would prefer.

What the Filings Mean for the Industry

The involuntary financial disclosure embedded in Anthropic's lawsuit may end up mattering more to the AI industry than the lawsuit itself. For the first time, a major frontier lab has put lifetime GAAP revenue and total compute expenditure into a public legal document, forcing a comparison that the industry's PR operations would normally prevent.

The numbers do not mean Anthropic is failing. A company that has generated $5 billion in cumulative revenue in roughly two years of serious commercial operation, is growing at rates that routinely shock analysts, and commands enterprise contracts spanning financial services, healthcare, and government is not a company in crisis. It is, rather, a company that has spent significantly more building its capabilities than it has yet recovered through commercial sales — and that is asking investors, customers, and now federal courts to trust that the trajectory closes the gap before the capital does.

The question for the broader AI market is how many frontier labs can make that argument simultaneously, against a backdrop of rising open-source competition, compressed pricing, and a macroeconomic environment that has grown less tolerant of infinite patience for profitability. The Anthropic court filings did not answer that question. But they finally put the terms of the problem on the record.

This article is part of TTN's ongoing series covering the Anthropic-Pentagon dispute and the economics of frontier AI. Previous coverage: Anthropic Takes the Pentagon to Court: Inside the AI Safety Lawsuit of the Decade.
