OpenAI Steps Into the Pentagon as Anthropic Is Blacklisted

[Image: Pentagon building at dusk with holographic AI network overlays]

In the span of a single Friday, the Pentagon's AI landscape was redrawn. Anthropic — the safety-focused lab that had deployed Claude across the Defense Department's classified networks — was labeled a national security risk and banned from all federal contracts. Hours later, Sam Altman announced OpenAI had signed a classified AI deal with the Pentagon. The twist: he claimed OpenAI secured the exact same guardrails Anthropic had been denied. Here's what happened, what it means, and why the story is far from over.

A 24-Hour Power Play

On Friday, February 28, the Trump administration moved with unusual speed. Defense Secretary Pete Hegseth invoked his authority to designate Anthropic a "Supply-Chain Risk to National Security" — a label historically reserved for foreign adversaries like Huawei and ZTE. President Trump then posted on Truth Social, directing every federal agency to "IMMEDIATELY CEASE" all use of Anthropic's technology, with a six-month phase-out window.

The trigger was months in the making. Anthropic had been negotiating terms with the Pentagon since its $200 million contract was first signed in 2024. The company wanted two explicit guarantees written into its contract: that Claude would not be used for mass domestic surveillance of Americans, and that it would not be deployed in fully autonomous weapons systems with no human in the decision loop.

The Pentagon's position was equally firm: it must retain the right to use contracted AI for "any lawful use." Defense officials repeatedly cited national security necessity and what they characterized as the Pentagon's existing policy prohibitions as sufficient protection. Anthropic's insistence on contract-level language was cast by officials close to Hegseth as an attempt by a "woke" AI company to impose its terms on the U.S. military.

Talks collapsed. The designations followed within hours. And then Sam Altman walked in.

OpenAI's Deal — and the Contradiction It Creates

Late Friday night, Altman posted on X that OpenAI had reached an agreement with the Department of Defense to deploy its models on classified government networks. In the post, he wrote: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

On the surface, this looked identical to what Anthropic had demanded — and been denied. The apparent contradiction generated immediate scrutiny across the tech and policy community.

The distinction appears to lie in contract architecture. Anthropic sought explicit contractual prohibitions on specific use cases, language it wanted embedded in the DoD agreement itself. OpenAI reportedly took a different path: it agreed that the Pentagon could use its technology for "any lawful purpose," while enforcing its prohibitions on autonomous weapons and mass surveillance through a layered approach: technical safeguards, OpenAI personnel with classified clearances, cloud-only deployment rather than on-premises DoD hardware, and OpenAI's retained discretion over its own safety stack.

In a separate statement published Saturday, OpenAI said its agreement "has more guardrails than any previous agreement for classified AI deployment, including Anthropic's" — and added a third red line: the prohibition of OpenAI technology for "high-stakes automated decisions," such as social credit-style systems.

Why Did the Pentagon Accept OpenAI's Terms but Reject Anthropic's?

This is the central question that remains unanswered — and may not be fully answerable from public information alone.

Some analysts suggest the distinction is primarily political. Anthropic's CEO Dario Amodei had made the dispute publicly prominent over months, writing essays about the "illegitimacy" of using AI for domestic surveillance and publicly calling autonomous weapons a democratic threat. In the Trump administration's framing, this positioned Anthropic as an adversarial actor — one imposing its political values on the military.

By contrast, Altman played a different game. Even as he publicly sympathized with Anthropic's position and OpenAI employees signed open letters in support of Amodei, Altman kept back-channel diplomacy alive. He framed OpenAI's deal not as a concession to Pentagon pressure but as a proof of concept — demonstrating that safety guardrails and military utility are compatible, and inviting other companies to accept the same terms.

The structure of the agreements may also matter. The "lawful use" framing gave the Pentagon the rhetorical and legal flexibility it needed, while the multi-layered technical and personnel controls gave OpenAI the operational leverage to enforce its red lines without relying solely on contractual language. Whether those controls are actually robust — and whether they would hold under operational pressure — remains an open question.

The Supply Chain Risk Designation: What It Really Means

The "Supply-Chain Risk to National Security" designation is the most consequential — and legally contested — element of this story.

Hegseth's interpretation of that designation goes well beyond canceling Anthropic's own Pentagon contract. In a post on X, he stated: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."

If that interpretation stands, the damage to Anthropic could be catastrophic. The company — which recently closed a $30 billion venture capital round valuing it at $380 billion and is reportedly preparing for an IPO — counts among its enterprise customers many firms that also do business with the U.S. military. Amazon, Google, and Nvidia have all made major investments in Anthropic. Under a broad reading of Hegseth's directive, those relationships could be legally compromised.

Legal experts immediately challenged the scope of Hegseth's interpretation. Peter Harrell, a former National Security Council official who worked on supply chain risk policy, and other legal analysts noted that "supply chain risk" designations have historically been used in narrow, targeted ways against foreign technology providers, not domestic American companies. Applying one to Anthropic, they argue, would be an unprecedented expansion of executive power over private commercial technology companies.

Anthropic has signaled it will fight back. The company's public statement said it had "not yet received direct communication" from either the Pentagon or Trump, and declared: "We will challenge any supply chain risk designation in court. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government."

The Autonomous Weapons Debate Moves to Center Stage

Beneath the political theater, a critical technical and ethical debate is playing out in real time: at what point, if any, should AI be allowed to make lethal decisions without direct human authorization?

The Pentagon's existing policy — DoD Directive 3000.09, last revised in 2023 — requires that autonomous weapons systems maintain "appropriate levels of human judgment over the use of force." That language is deliberately vague, and military planners have argued that in high-tempo combat environments, the human oversight requirement may need to be interpreted flexibly. The rise of drone swarms, AI-guided missile systems, and autonomous battlefield logistics has made the question more urgent than ever.

Amodei's position, stated plainly, is that "today, frontier AI systems are simply not reliable enough to power fully autonomous weapons." This is not merely an ethical claim — it reflects a technical reality that the AI safety community has documented extensively: current large language models can hallucinate, misclassify, and fail unpredictably under adversarial conditions. Deploying them in lethal autonomous systems without human override capability represents a risk that most AI researchers consider unacceptable.

Altman's framing attempts to thread this needle by arguing that the DoD's existing law and policy already prohibit the most dangerous autonomous weapons uses, making his contractual red lines redundant — and therefore easier for the Pentagon to accept without feeling it is ceding operational authority.

Industry-Wide Implications: A Template or a Pressure Campaign?

Altman's closing move in his public statement was pointed: he asked the Pentagon to "offer these same terms to all AI companies," calling it a framework that "everyone should be willing to accept." It was both an olive branch and a challenge, implicitly directed at Anthropic and framing its refusal of the Pentagon's terms as an unnecessary escalation.

Meanwhile, nearly 500 OpenAI and Google employees had signed an open letter titled "We will not be divided," explicitly warning that the Pentagon was "trying to divide each company with fear that the other will give in." Whether Altman's deal represents a strategic win for AI safety principles or a fragmentation of the industry's united front will depend heavily on what the classified terms actually say — and whether OpenAI's technical safeguards can withstand the pressures of real military deployment.

The Anthropic situation has also prompted broader concern about the use of national security mechanisms as a commercial and political cudgel. If the Trump administration can designate a domestic AI company a "supply chain risk" for declining to remove contractual guardrails, the precedent it sets for every technology company doing business with the federal government is significant — and chilling.

What Happens Next

The immediate legal battle will be critical. Anthropic's challenge to the supply chain risk designation is expected to move quickly given the commercial stakes. Courts will need to rule on whether Hegseth's broad interpretation of the designation is legally sound, and whether applying it to a domestic company over a contract dispute violates Anthropic's rights.

Google is also reportedly in negotiations with the Pentagon under similar terms — and the outcome of those talks will indicate whether OpenAI's deal represents a genuine framework or a one-time political arrangement. If Google secures the same terms, it strengthens the case that Altman's approach was the right one. If Google cannot, it raises serious questions about whether the deal reflected merit or simply who was in favor at a given political moment.

The Pentagon's classified AI deployment is also not going away. Whatever the legal outcome for Anthropic, the military's push to integrate AI into intelligence analysis, logistics, targeting support, and autonomous systems will continue to accelerate. The question is not whether AI will be embedded in defense operations — it's whether the companies building it will have any meaningful say in how it is used.

That question, once largely theoretical, has become the defining challenge of AI's militarization. And it's only getting started.
