OpenAI's "Sloppy" Pentagon Deal: How Biden-Era AI Contracts Nearly Paralyzed U.S. Military Operations

A senior Pentagon official revealed on Tuesday that AI service contracts negotiated under the Biden administration contained operational restrictions so broad they could have frozen military commands mid-operation — including the ability to plan and execute combat strikes. The disclosure arrives as OpenAI scrambles to amend what CEO Sam Altman now calls a "rushed," "opportunistic and sloppy" Pentagon deal, struck in the hours after Anthropic was blacklisted from all federal government business. Together, the revelations expose a structural crisis at the intersection of commercial AI and national security: the rules governing how AI can be used in war are being written — and rewritten — by corporate contracts, not Congress.

The "Holy Cow" Moment at the Pentagon

Emil Michael, the Under Secretary of Defense for Research and Engineering and one of the Trump administration's most aggressive technology reformers, described a moment of alarm he experienced while auditing legacy AI contracts inherited from the prior administration. Speaking at the American Dynamism Summit in Washington on Tuesday, Michael said the contracts contained restrictions so far-reaching they had the potential to disable AI tools at the worst possible moment.

"I had a 'holy, holy cow' moment," Michael told the summit. "There were things ... you couldn't plan an operation ... if it would potentially lead to kinetics" — the military term for explosive force. He described dozens of restrictions embedded in agreements governing commands responsible for air operations over Iran, China, and South America. Under those terms, if an operator violated the service agreement, the AI model could theoretically "just stop in the middle of an operation."

Michael declined to name the specific AI provider whose contracts he was reviewing. But the context was unmistakable: Anthropic's Claude was, until very recently, the only commercial AI model deployed on the Defense Department's classified networks. The company had negotiated a landmark deal to make Claude available on classified systems, and it was under that same framework that restrictions on autonomous weapons and combat planning had been embedded in the service terms.

What the Biden-Era Contracts Actually Said

The Biden administration's approach to commercial AI in defense was shaped by two competing impulses: urgency to integrate advanced AI capabilities, and caution about AI use in lethal decision-making. The result, according to Michael, was contractual language that tried to thread an impossible needle — and failed in both directions.

Anthropic's terms of service, which carry over into government agreements, have historically prohibited using Claude for activities likely to cause significant harm — a category that AI safety advocates argue includes lethal autonomous weapons systems. Anthropic had specifically sought to codify those restrictions in its Pentagon contract negotiations, demanding explicit prohibitions on its AI being used for domestic mass surveillance or to power autonomous weapons systems capable of making kill decisions without direct human oversight.

From the military's operational perspective, those restrictions created a liability nightmare. If an AI tool embedded in classified command systems could theoretically "turn off" mid-mission because an operation crossed a line in a commercial service agreement, that's not just a contractual problem — it's an operational risk that commanders cannot accept. Michael's public disclosure suggests the Pentagon had quietly identified this flaw and was already treating it as a national security issue before the public confrontation with Anthropic broke into the open.
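To make that failure mode concrete, consider how provider-side policy enforcement is typically structured: an automated gate sits between the operator and the model and can refuse service whenever a request matches a prohibited-use category. The sketch below is a minimal illustration of that structure only — the category names, classifier, and model call are invented for this example and do not describe any vendor's actual enforcement pipeline.

```python
# Minimal illustrative sketch of a provider-side usage-policy gate.
# All names here (categories, classifier, model call) are hypothetical;
# no vendor's real enforcement pipeline is public.

PROHIBITED_CATEGORIES = {
    "kinetic_targeting",           # planning strikes ("kinetics")
    "autonomous_weapons_control",
    "domestic_mass_surveillance",
}


class PolicyViolation(Exception):
    """Raised when a request matches a prohibited-use category."""


def classify_request(prompt: str) -> set[str]:
    """Stand-in for a provider-side classifier that tags each request
    with zero or more usage-policy categories."""
    tags: set[str] = set()
    if "strike package" in prompt.lower():
        tags.add("kinetic_targeting")
    return tags


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for the underlying model API call."""
    return f"completion for: {prompt}"


def guarded_completion(prompt: str) -> str:
    """Serve the request unless it crosses a policy line.

    The structural point: the gate lives on the provider's side of
    the contract, so the provider — not the operator — decides
    whether the tool keeps working once an operation approaches a
    prohibited category.
    """
    violations = classify_request(prompt) & PROHIBITED_CATEGORIES
    if violations:
        raise PolicyViolation(f"request refused: {sorted(violations)}")
    return call_model(prompt)
```

In this arrangement, a command that drifts toward a prohibited category mid-mission gets a refusal rather than an answer — precisely the "just stop in the middle of an operation" scenario Michael described.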

OpenAI's Friday Night Deal — and the Monday Morning Regrets

The chain of events that culminated in this week's disclosures accelerated dramatically on February 27. Following the breakdown of Pentagon-Anthropic negotiations, President Donald Trump directed all federal agencies to cease using Anthropic's technology, and Defense Secretary Pete Hegseth designated the company a "supply-chain risk." Within hours — and crucially, just before U.S.-coordinated strikes on Iranian infrastructure — OpenAI announced it had struck its own deal with the Defense Department.

The timing generated immediate and intense backlash. Earlier that same week, Altman had told OpenAI employees in an internal memo that the company shared Anthropic's "red lines" on surveillance and autonomous weapons. Many read the Friday night announcement as OpenAI capitalizing on Anthropic's vulnerability to grab lucrative government contracts — exactly the kind of competitive opportunism that critics said undermined the broader AI safety community's credibility.

By Monday, the blowback had become untenable. Altman published what he described as a repost of an internal memo on X, acknowledging the misstep directly. "We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy," he wrote. "We shouldn't have rushed."

What the Amended Deal Actually Covers

To address the backlash, OpenAI announced it was renegotiating the contract to include new language specifically addressing domestic surveillance. According to Altman, the updated terms state that OpenAI's AI systems shall not be "intentionally used for domestic surveillance of U.S. persons and nationals" — consistent with the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.

Katrina Mulligan, OpenAI's head of national security partnerships and a former senior official at the Pentagon, NSC, and DOJ, added that Defense Intelligence Components — including the National Security Agency, National Geospatial-Intelligence Agency, and Defense Intelligence Agency — would be barred from using OpenAI's services under the current agreement. Any use by those agencies would require a separate contract modification.
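Structurally, a carve-out like this works as a default-deny rule evaluated before any request is served: the named components are blocked unless a separate contract modification exists. The sketch below is a hypothetical rendering of that rule; the identifiers and function are assumptions for illustration, not language from the agreement.

```python
# Hypothetical rendering of an agency-level carve-out: the listed
# Defense Intelligence Components are denied by default, and access
# requires a separate contract modification. Names are illustrative.

BARRED_COMPONENTS = {"NSA", "NGA", "DIA"}


def is_authorized(component: str, has_contract_mod: bool = False) -> bool:
    """Deny barred components unless a contract modification applies."""
    if component in BARRED_COMPONENTS:
        return has_contract_mod
    return True


assert is_authorized("DIA") is False
assert is_authorized("DIA", has_contract_mod=True) is True
```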

The revised deal also adds explicit restrictions covering commercially purchased data — cell phone location records, fitness app data, and similar sources that have historically existed in a legal gray area. According to reporting in The Atlantic, this was precisely the kind of guarantee Anthropic had demanded and the Pentagon had refused to provide.

But legal experts were quick to note that surveillance was only half the problem. Charlie Bullock, a senior research fellow at the Institute for Law & AI, said on X: "This seems like a significant improvement over the previous language with respect to surveillance, and I'm glad to see it. It does not address autonomous weapons concerns, nor does it claim to."

The Autonomous Weapons Gap

That gap is the most significant unresolved issue in the entire crisis. Anthropic had insisted on two hard limits in its Pentagon negotiations: a prohibition on mass surveillance of American citizens, and a prohibition on its technology being incorporated into autonomous weapons systems — those capable of deciding to strike targets without direct human oversight.

OpenAI's amended deal addresses the first. It is silent on the second.

That silence matters enormously. The U.S. military is currently accelerating its use of AI across targeting, threat assessment, logistics, and decision support systems. The Army is revamping its entire electronic warfare acquisition system to incorporate commercially sourced AI. The Air Force's air operations centers — the very commands Michael cited as operating under the disputed Biden-era contracts — are integrating AI into everything from mission planning to real-time battle management.

Anthropic's demand for an autonomous weapons prohibition was not an abstract policy position. It was a direct response to evidence that its AI was being used in contexts it had not explicitly authorized — including, reportedly, the planning of the January raid that captured former Venezuelan President Nicolás Maduro. By refusing those terms, the Pentagon signaled that it intends to reserve the right to deploy AI in targeting and operational contexts with minimal contractual restriction.

Industry Fracture and the Ethics Credibility Problem

The episode has split the AI industry along fault lines that had previously remained latent. Anthropic's public refusal to capitulate to Pentagon terms generated significant consumer goodwill — Claude shot to the top of Apple's App Store charts as users switched from ChatGPT in protest. OpenAI reportedly saw a 295% surge in ChatGPT uninstalls in the days following the Pentagon deal. Chalk graffiti criticizing the decision appeared on the sidewalk outside OpenAI's San Francisco headquarters.

Inside OpenAI, the dissent was equally visible. Aidan McLaughlin, a research scientist at the company, posted publicly on X that he personally did not think "this deal was worth it" — a statement that drew nearly 500,000 views. Multiple OpenAI employees signed an open letter supporting Anthropic's stance.

Jonathan Iwry, a fellow at the Accountable AI Lab at the Wharton School of the University of Pennsylvania, offered perhaps the sharpest critique of what OpenAI's move revealed about the AI industry's collective resolve. "What is particularly disappointing is that the rest of the AI industry failed to come to Anthropic's support," Iwry told Fortune. "If these companies were serious about their commitment to safe and responsible AI (on which some of them built their reputations), they could have closed ranks and stood together against the Pentagon on behalf of the public. Instead, they let the administration play them off against one another as market competitors."

In his Monday memo, Altman said he had told Pentagon officials that Anthropic should not be designated a supply-chain risk, and that he hoped the Defense Department would offer Anthropic "the same terms we've agreed to." Whether those terms will be sufficient — or whether Anthropic's two-week-old legal challenge will produce a different resolution — remains an open question.

What This Means for AI Governance in Defense

The week's events have made one thing unmistakably clear: the United States currently has no coherent, enforceable legal framework governing the use of commercial AI in military operations. The rules are being written in private contract negotiations between AI companies and the Department of Defense — without public transparency, without congressional oversight, and with enforcement mechanisms that are entirely voluntary.

Analysts across the political spectrum agree that this situation is untenable, even if they disagree on the solution. Conservatives like Emil Michael argue the problem is that Biden-era contracts let AI companies impose civilian ethical constraints on warfighting tools, creating operational vulnerabilities. AI safety researchers argue the problem is that the government is deploying AI in lethal contexts without any meaningful guardrails whatsoever.

What neither side disputes is that the current framework — in which a company's terms of service are the primary constraint on how AI is used in combat — is legally fragile, strategically incoherent, and likely to produce worse outcomes than either a strong regulatory framework or an explicit policy decision to operate without restrictions.

The Pentagon's concurrent push to publish open-source software stacks for next-generation military networks suggests the Defense Department's longer-term play may be to reduce its dependence on commercial AI vendors entirely — building government-owned or open-source AI infrastructure that is not subject to any private company's terms of service. That path would take years and enormous investment. In the meantime, the contractual free-for-all continues.

The Week That Changed Military AI Forever

What happened between February 27 and March 4, 2026, will be studied in national security policy courses for years. In less than a week, the U.S. government banned its leading AI vendor, replaced it with a competitor under rushed terms, watched that competitor publicly recant and renegotiate, and heard a senior defense official reveal that the previous AI contracts had contained provisions that could have stopped military operations cold.

The parties at the center of this crisis — Anthropic, OpenAI, and the Pentagon — are all improvising in real time. Anthropic is fighting its blacklisting in federal court. OpenAI is rewriting a contract it signed four days ago. The Pentagon is navigating a geopolitical crisis with AI tools operating under legal terms that its own officials describe as a threat to mission success.

The one institution that could resolve this with lasting legitimacy — Congress — has said nothing.
