Anthropic went to court on Monday, filing two separate federal lawsuits to block the Pentagon from enforcing an extraordinary supply chain risk designation that the AI company says is unconstitutional, unprecedented, and already causing hundreds of millions of dollars in irreparable harm. The filings mark the sharpest escalation yet in a two-week saga that has exposed the deepest fault lines in the relationship between Silicon Valley AI labs and the U.S. military — and set up a legal showdown that could determine who gets to set the rules for how artificial intelligence is used in war.
Two Lawsuits, Two Courts, One Constitutional Argument
The first complaint, filed in the U.S. District Court for the Northern District of California, asks a federal judge to vacate the Pentagon's supply chain risk designation and grant an immediate stay of its enforcement while the case unfolds. The second, filed in the U.S. Court of Appeals for the D.C. Circuit, seeks formal review of the Defense Department's determination under the National Defense Authorization Act provisions governing supply chain risk management.
Together, the filings attack the designation on two fronts at once: pressing for review of the administrative determination in D.C. while pursuing injunctive relief in California. Legal experts note this dual-court approach is unusual and signals that Anthropic is not treating this as a routine government contract dispute. The company wants the designation stopped quickly, and it is betting that the constitutional arguments are strong enough to win an emergency stay before the six-month phase-out order takes full effect.
What Anthropic Is Actually Claiming
"These actions are unprecedented and unlawful," the California complaint states. "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." The filing argues that the supply chain risk designation — historically reserved for foreign adversaries like Huawei and ZTE — is being weaponized here not because Anthropic poses a genuine security threat, but because the company refused to endorse government policy positions on AI weapons and surveillance.
The complaint specifically cites First Amendment free speech violations and Fifth Amendment due process violations. Anthropic argues that the designation was issued without any formal notice, without an opportunity to respond, and without the procedural safeguards that normally accompany actions of this magnitude. It also contends the action amounts to unconstitutional viewpoint discrimination — the government punishing a private company for the views it expressed about how its technology should and should not be used.
The financial stakes are spelled out bluntly in the filing: "Anthropic's contracts with the federal government are already being canceled. Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term." The company adds that absent judicial relief, those harms "will only compound in the weeks and months ahead."
The Saga That Brought Us Here
To understand how one of the world's most valuable AI companies ended up suing its own government, it helps to trace the timeline. The Pentagon and Anthropic had been in increasingly contentious talks since January, centered on whether Anthropic's usage policies — embedded in Claude's model guidelines — were compatible with the military's demand for "full flexibility in using AI for any lawful use."
Anthropic had signed a $200 million agreement with the Department of Defense under the Biden administration, one of several major AI lab contracts the Pentagon struck as it moved rapidly to incorporate commercial AI into military operations, including active combat in the Iran conflict. But the terms embedded in that agreement reflected a different political moment — one in which the Defense Department accepted certain restrictions on autonomous lethal use as standard practice.
By February 2026, the Trump administration's approach had hardened. Defense Secretary Pete Hegseth demanded Anthropic remove two specific guardrails from its Claude deployments: restrictions on using Claude in autonomous weapons systems capable of killing without human authorization, and restrictions on using Claude for domestic mass surveillance of American citizens. Anthropic refused. CEO Dario Amodei flew to Washington to meet personally with Hegseth, but no deal was reached.
On February 27, President Trump directed all federal agencies to "immediately cease" use of Anthropic's technology. Days later, on March 5, the Pentagon formally issued the supply chain risk designation, the first time that label has ever been applied to an American company. Defense contractors including Lockheed Martin have already begun removing Anthropic AI from their platforms, and the State and Treasury departments have stepped away from Claude deployments.
The Core Disagreement: Red Lines on Autonomy and Surveillance
At the heart of this conflict is a question that the entire AI industry has been quietly avoiding: who sets the ethical guardrails for AI deployed in warfare, and what happens when a company's answer differs from the government's?
Anthropic's position is grounded in technical as well as ethical arguments. The company has said publicly that even the best current AI models are not reliable enough for fully autonomous weapons decisions — that the error rates and failure modes of today's systems make human-in-the-loop oversight not merely preferable but necessary to avoid catastrophic outcomes. On domestic surveillance, Anthropic has drawn a harder line, calling the use of AI for mass surveillance of American citizens a fundamental violation of rights that the company will not enable regardless of government pressure.
The Pentagon's counterargument is that it is a sovereign government entity operating under U.S. law, and that a private technology company has no authority to constrain military decision-making. Hegseth said the restrictions could "endanger American lives" by limiting tactical flexibility in active combat operations. The Pentagon insists that it, not Anthropic, determines the legal bounds of defense operations.
What makes this fight genuinely novel is that Anthropic embedded its restrictions not just in contractual terms but in the model's underlying behavior — Claude's responses are shaped by training that reflects these safety boundaries. The Pentagon's demand wasn't just for a contract modification; it was, in effect, a demand for a different model.
The Political Dimension
The feud has been fueled as much by politics as by policy. Trump posted on Truth Social in late February: "WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about."
Hegseth publicly accused Anthropic of "arrogance and betrayal" and demanded that any company doing business with the U.S. government cut all ties with the AI lab — a coordinated economic pressure campaign unusual in its breadth and speed.
The situation was further complicated by a leaked internal memo from Amodei, written the day after the February 27 announcement and published by tech outlet The Information the following Wednesday. In it, Amodei wrote that Pentagon officials didn't like Anthropic in part because "we haven't given dictator-style praise to Trump." Amodei later publicly apologized for the characterization, saying it was written in private frustration and did not reflect his considered views. But the memo's leak deepened the administration's hostility and gave Hegseth additional rhetorical ammunition.
Privately, however, Anthropic has kept the door open. Amodei has reportedly reopened informal negotiations with DoD officials in the days since the formal designation, and the company has been careful to state publicly that the lawsuits are not intended to force the government to work with Anthropic — only to stop the designation from functioning as economic punishment. "The lawsuit doesn't preclude re-opening negotiations," a company official told Reuters.
The OpenAI Contrast — and Its Consequences
The contrast with OpenAI has become unavoidable. While Anthropic held its ground, OpenAI struck its own deal with the Department of Defense shortly after Anthropic rejected the Pentagon's terms. The announcement triggered significant internal backlash at OpenAI — most visibly when Caitlin Kalinowski, the company's head of robotics, resigned over concerns that guardrails on surveillance and lethal autonomy were rushed through without adequate deliberation. OpenAI is now managing both a reputational crisis among safety-focused researchers and a messy attempt to amend the contract's terms.
Meanwhile, Anthropic has seen an unexpected upside: Claude usage has surged among developers and enterprise customers who view Anthropic's refusal to bend as a credibility signal. The company's $350 billion valuation, which seemed exposed by the Pentagon fallout, may be more resilient than the initial headlines suggested — at least among the commercial customer base that represents the bulk of its revenue.
A Precedent That Reaches Every AI Lab
Whatever the courts decide, this case will set precedent. No AI company has ever been designated a supply chain risk by the U.S. government. No AI company has ever sued to reverse such a designation. And no court has yet ruled on whether the First Amendment protects a tech company's right to embed safety restrictions into its own AI models — restrictions the government then demands be removed.
The outcome will matter far beyond Anthropic. Every AI lab with government contracts — Google, Microsoft, Meta, xAI — is watching. If the designation holds, it signals that any AI company that maintains safety constraints the government dislikes could face the same treatment. If the courts strike it down, it establishes that there are constitutional limits on how the executive branch can pressure private AI companies to modify their products.
The Pentagon signed contracts worth up to $200 million each with major AI labs, treating commercial AI as an essential and integrated component of U.S. military capability. The assumption embedded in that strategy — that commercial AI companies would align their products entirely with government requirements — is now being tested in federal court. The answer will come from judges, not from the market.
This is the latest installment in TTN's ongoing coverage of the Anthropic-Pentagon standoff. See also: Pentagon Makes Anthropic Blacklist Official — While Secretly Restarting Talks, OpenAI's Robotics Chief Walks Out Over Pentagon Deal, and Anthropic Takes Pentagon to Court: Trump Bans Federal AI Use as Constitutional Showdown Begins.