On Thursday, March 5, the Pentagon made it official: Anthropic is now formally designated a "supply chain risk to national security," the first time that label — historically reserved for foreign adversaries — has ever been applied to an American company. Hours later, it emerged that secret negotiations between Anthropic CEO Dario Amodei and a senior Pentagon official had quietly resumed. The contradiction captures everything that's wrong, and everything that's genuinely unprecedented, about the most consequential AI policy crisis in U.S. history.
The Formal Designation: A Legal and Historical First
For more than a week, Defense Secretary Pete Hegseth's "supply chain risk" designation of Anthropic existed mostly as a social media declaration — a post on X, a verbal statement, something between a threat and a policy. On Thursday, it became something else: an official, documented government action.
"DoW officially informed Anthropic leadership the company and its products are deemed a supply chain risk, effective immediately," a Pentagon official told Bloomberg on Thursday.
That formalization matters enormously. A formal supply chain risk designation obligates defense vendors and contractors to certify that they are not using Anthropic's models in their work with the Pentagon. It creates legal exposure for companies that continue to do so. And it puts the Trump administration on record in a way that social media posts do not — making any reversal, or any legal challenge, a real matter of administrative law rather than a political spat.
Herbert Lin, a senior research scholar at Stanford University's Center for International Security and Cooperation, told CNBC that the designation remains deeply unusual: "Anthropic is the only American company ever to be publicly named a supply chain risk, as the designation has traditionally been used against foreign adversaries." That fact alone — that a homegrown AI safety company has been categorized alongside Chinese telecom firms and Russian software vendors — marks a genuine rupture in how the U.S. government has historically defined national security threats.
President Trump reinforced the point with characteristic bluntness in a Politico interview on Thursday. "Well, I fired Anthropic. Anthropic is in trouble because I fired them like dogs, because they shouldn't have done that," Trump told Politico.
The Other Shoe: Talks Are Back On
Here is where Thursday became genuinely surreal. At nearly the same moment the Pentagon was formalizing its blacklist, the Financial Times and Bloomberg both reported that negotiations between Anthropic and the Department of Defense had quietly resumed. Amodei has been in direct contact with Emil Michael, the undersecretary of defense for research and engineering, according to those reports.
Michael, a former Uber executive, and Amodei have a complicated personal history. The New York Times previously reported that the two "strongly dislike one another" — a dynamic that, by multiple accounts, has made what might have been a routine contract negotiation feel like a personal vendetta on both sides. Reuters has described the broader dispute as "an ego and diplomacy problem" as much as a substantive policy clash.
That the formal blacklisting and the diplomatic re-engagement happened on the same day is not, in the end, so surprising. The Trump administration has consistently used extreme public pressure as a negotiating tool, and the formal designation gives Hegseth and Michael something concrete to bargain with — or against. Amodei has said Anthropic will sue over the designation. A formal legal fight, unlike a verbal standoff, has timelines, costs, and outcomes that both sides have incentive to avoid.
Whether that calculation will produce a deal remains entirely unclear.
The Leaked Memo: Amodei vs. Altman Goes Public
One of the more damaging sub-plots of the week emerged when an internal message Amodei sent to Anthropic employees was leaked to the press. In it, Amodei called OpenAI CEO Sam Altman "mendacious" and described OpenAI's Pentagon deal — announced the same evening Anthropic was first blacklisted — as "safety theater."
The accusation landed with force. Altman had positioned OpenAI's agreement as a principled compromise: the company secured three explicit "red lines" — no use of its technology for mass domestic surveillance, no use to direct autonomous weapons, no use for high-stakes automated social scoring — while agreeing to deploy its models in classified environments. But critics, including the MIT Technology Review, argued that the deal's protections largely rely on existing law rather than independent contractual enforcement — meaning the Pentagon retains broad discretion for any use that doesn't break current statutes. "We have essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use," the Technology Review concluded.
Altman compounded the tension in an internal memo of his own, also obtained by the press, in which he acknowledged that OpenAI would ultimately have "no control over how the military used OpenAI's technology." The disclosure undercut his public framing significantly.
On Thursday, Amodei walked back his internal message publicly. "I also want to apologize directly for a post internal to the company that was leaked to the press yesterday," he wrote in a statement. "It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation." The framing — a six-day-old memo suddenly "out of date" — signals that the back-channel talks may be more serious than either side has publicly acknowledged.
Who Is Actually Cutting Off Claude
While executives trade statements, the ground-level impact of the blacklist is already playing out across the defense industrial base. The effects are significant, though more complicated than a clean break.
Alexander Harstrick, managing partner at J2 Ventures — which backs defense-focused startups — told CNBC that 10 of his firm's portfolio companies working with the Department of Defense "have backed off their use of Claude for defense use cases and are in active processes to replace the service." The move is largely preemptive: companies are not waiting for a formal legal process before protecting themselves from potential contract liability.
Lockheed Martin is expected to remove Anthropic's technology from its supply chains, Reuters reported Tuesday. Both the State Department and Treasury Department have begun severing ties with Anthropic's products, the heads of both agencies confirmed.
But the designation is not a universal kill switch. Microsoft's legal team studied the supply chain risk designation and concluded that Claude "can remain available" to its customers as a commercial product, because the designation — in its current form — applies only to companies' use of Claude specifically as part of DoD contracts. Commercial use and government civilian use remain legally distinct. Anthropic itself noted in a blog post that Defense Secretary Hegseth, under applicable federal statute, may lack the unilateral authority to restrict companies from working with Anthropic at all.
The legal picture, in short, is deeply unsettled. And the military is still using Claude. Anthropic's models remain embedded in Palantir's Maven system, the AI-driven intelligence platform that has been central to U.S. and Israeli operations against Iran — operations that began just hours after the initial blacklisting. Stanford's Lin flagged the obvious contradiction to CNBC: if Anthropic's technology poses a genuine supply chain risk to national security, why is it still active in live military operations with a six-month phaseout window?
The Paradox: Anthropic's Revenue Is at an All-Time High
Whatever the political and legal calculus, one data point stands apart from the rest: Anthropic is, by most measures, winning in the market even as it loses in Washington.
Since the public clash began, Claude has topped the Apple App Store. Anthropic's annualized revenue pace has surged to $19 billion — up from $14 billion just weeks earlier, according to Reuters. Supporters of the company have chalked admiring messages outside its San Francisco headquarters; one read, "God loves Anthropic."
The revenue surge matters for a specific reason: Anthropic's most recent fundraising round, which valued the company at approximately $60 billion, was reported by Axios to be in jeopardy due to the Pentagon standoff. Investors have been alarmed, with some reaching out directly to the Trump administration to attempt de-escalation. But if Anthropic's commercial trajectory continues to accelerate — consumer downloads, developer adoption, enterprise contracts — the $60 billion round may look less fragile than the political drama suggests.
The company's position also attracted unusual institutional support. A group of retired defense officials, policy leaders, and executives wrote to Congress on Thursday, defending Anthropic and calling the Trump administration's supply chain designation a "dangerous precedent." Separately, nearly 500 employees from OpenAI and Google signed an open letter titled "We Will Not Be Divided," calling out what they saw as the Pentagon's attempt to fracture the AI industry by threatening companies into compliance one by one.
What Happens Next — Five Questions With No Clear Answers
As of this writing, the Anthropic-Pentagon standoff sits in a genuinely open-ended state. Several key questions remain unresolved, and their answers will determine whether this week's drama is a turning point or merely another escalation in a months-long trench war.
Will the formal designation hold legally? Anthropic has signaled it will sue. Its own blog post cited federal statute suggesting Hegseth lacks the unilateral authority to execute the designation as structured. A court challenge could freeze enforcement while the case winds through the system — potentially for years.
Can back-channel talks produce a deal? Reports of resumed negotiations are the most encouraging sign in weeks. But the same personal antagonism between Amodei and Michael that escalated the original standoff remains in place. Whether both sides can subordinate ego to pragmatism is, by multiple accounts, the central variable.
If Claude is a national security risk, why is it still in use? The continued deployment of Anthropic's technology in active military operations — including live strikes on Iran via the Palantir/Maven system — is the clearest evidence that the supply chain risk designation is as much political as operational. A genuinely dangerous technology would not receive a six-month phaseout window during active combat.
Does OpenAI's deal actually hold its red lines? The MIT Technology Review and legal analysts have flagged that OpenAI's contract provides protection against illegal uses of AI — but relies on existing law to define what's illegal. Since laws around autonomous weapons and domestic surveillance are actively contested, those protections may prove weaker than Altman's public messaging suggests. If anything, Anthropic's hardline stance has set a higher benchmark that competitors may eventually be held to.
What does "phaseout" actually mean in practice? President Trump ordered a six-month phaseout period for agencies like the DoD. But with Anthropic's models embedded in critical operational systems like Maven, the practical complexity of removing them mid-conflict is enormous. Six months may prove to be enough time for a deal — or it may simply be the runway for a legal and political resolution to take shape.
The Anthropic-Pentagon standoff has never been purely about one company's contract. It has always been about who sets the terms for how the most powerful AI systems in history are deployed in warfare and law enforcement — a question that democracies have barely begun to answer. Thursday's formal blacklisting, paired with its immediate contradiction of secret resumed talks, suggests neither side is quite ready to let the other win. For now, the standoff continues — with a $19 billion revenue surge, a formal government designation, and live military operations all happening simultaneously.