A Judge Blocked the Pentagon's Anthropic Ban. Then a Data Leak Showed What Anthropic Is Really Building.


In a week that should have been a straightforward legal victory, Anthropic found itself fighting on two fronts at once. On one, a federal judge ruled in its favor, issuing a preliminary injunction to block the Trump administration's unprecedented designation of the AI company as a national security supply chain risk. On the other, the company faced sharp questions about its own internal security after a misconfigured content management system left nearly 3,000 unpublished files publicly accessible, including a draft blog post revealing its most powerful unreleased AI model. The model, known internally as "Claude Mythos" or "Capybara," was characterized in the leaked draft as "far ahead of any other AI model in cyber capabilities" and as something Anthropic itself believes could "far outpace the efforts of defenders" in cybersecurity. For an organization whose entire dispute with the Pentagon was premised on responsible AI development, the timing was awkward.

The Ruling: What Judge Lin Decided — and Why It Matters

On March 27–28, Judge Rita F. Lin of the Northern District of California issued a preliminary injunction blocking the Pentagon's supply chain risk designation against Anthropic while the case proceeds toward a final ruling. The injunction takes effect seven days from the order, around April 4, 2026.

The ruling was pointed. "The Department of War's records show that it designated Anthropic as a supply chain risk because of its 'hostile manner through the press,'" Judge Lin wrote in the order. "Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

Lin had telegraphed her skepticism at the March 24 hearing, pressing the government with pointed hypotheticals from the bench, including whether a military contractor providing toilet paper to the Pentagon could be fired for using Anthropic. To the government's representative arguing that the DoD was merely exercising procurement discretion, Lin responded: "I see the question in this case as being… whether the government violated the law when it went beyond that." When the Department of War's counsel argued that Defense Secretary Hegseth hadn't really meant his public post banning all contractors from working with Anthropic, Lin was unimpressed: "You're standing here saying, 'We said it but we didn't really mean it.'"

Anthropic spokesperson Danielle Cohen issued a statement after the ruling: "We're grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

The preliminary injunction does not resolve the case — a final verdict is "weeks or months" away. But a preliminary finding that Anthropic is "likely to succeed on the merits" is a significant legal signal, and for the 100+ enterprise customers who had sought guidance about their exposure, it provides immediate operational clarity.

How We Got Here: The Stakes and the Scale

The dispute's timeline runs through some of the most consequential AI policy decisions of 2026. In January, Defense Secretary Pete Hegseth issued a memo demanding "any lawful use" language in all AI procurement contracts, effectively requiring AI providers to give up any say over how the military deployed their models. Anthropic declined. On February 27, the Department of War designated Anthropic a "supply chain risk," a classification normally reserved for companies tied to foreign adversaries, such as Huawei or Kaspersky. It was the first known use of the designation against a domestic American company.

The same day, OpenAI announced it had taken the DoD contract Anthropic walked away from. Sam Altman posted on X that the deal included the same restrictions Anthropic had insisted on, covering "domestic mass surveillance and human responsibility for the use of force," raising immediate questions about why Anthropic was blacklisted for terms OpenAI was credited with accepting.

The financial consequences of the designation were severe. Court filings cited "hundreds of millions to billions" in potential 2026 revenue at risk. The General Services Administration terminated Anthropic's OneGov contract, which had made Claude available across all three branches of the federal government. Treasury and State reportedly ceased use as well, and Trump ordered all federal agencies to stop using Anthropic technology entirely.

Anthropic filed suit shortly after, arguing the designation violated both the First and Fifth Amendments — penalizing the company for its publicly stated safety principles (protected speech) and depriving it of property without due process.

What the Injunction Does — and Doesn't Do

The preliminary injunction temporarily lifts the legal cloud over Anthropic and signals to enterprise customers that use of Claude is not legally compromised. It blocks the DoD from enforcing the supply chain designation while the litigation proceeds.

What it does not do is reinstate any cancelled government contracts or reverse the order that agencies cease using Anthropic technology. The GSA's OneGov cancellation is a separate action with its own legal path. The injunction is targeted at the supply chain designation mechanism specifically — the precedent-setting element of the dispute.

Microsoft had already concluded in early March that Anthropic products could remain available to its customers — other than the Department of War — through platforms including M365, GitHub Copilot, and Microsoft's AI Foundry. That position holds following the injunction, providing continuity for the millions of enterprise users accessing Claude through Microsoft's ecosystem.

For enterprise customers outside direct DoD work, the practical outcome of the injunction is that the legal ambiguity that had chilled some procurement decisions is lifted, at least temporarily. Whether the underlying litigation ultimately succeeds or fails, the preliminary finding gives organizations cover to continue Claude deployments while the case proceeds.

The Senate Piles On: Congress Tries to Codify Anthropic's Red Lines

As the court battle reached its preliminary-injunction milestone, Anthropic's fight expanded to Congress. Senator Adam Schiff (D-CA) announced he is working on legislation to codify the very principles Anthropic was blacklisted for holding: ensuring that humans make the ultimate decisions on matters of life and death in military AI applications.

Senator Elissa Slotkin (D-MI) went further, introducing the AI Guardrails Act, which would bar the Department of Defense from using AI to detonate nuclear weapons, deploy lethal autonomous weapons without human intervention, or conduct domestic mass surveillance. The bill includes a notification mechanism allowing the Defense Secretary to inform Congress under "extraordinary circumstances," an acknowledgment that absolute rules create operational brittleness while still creating political accountability for any exceptions.

Schiff framed the legislation in direct terms: "Whenever a technology has the capability of taking a human life, there needs to be a human operator in the chain of command. We don't want to delegate that kind of responsibility over life and death to an algorithm."

Neither bill has a realistic path through a Republican-controlled Senate in the near term. Their function is primarily political positioning and precedent-setting: establishing that a bipartisan coalition stretching from the left (Schiff, Slotkin, Sanders) to the right (Josh Hawley) has found grounds to oppose unconstrained AI in military applications. The pressure they generate runs in parallel with the courtroom fight rather than through legislation alone.

The Mythos Leak: Anthropic's Other Problem

While the court was ruling in its favor, Anthropic was managing a separate crisis. Fortune reported on March 26 that approximately 3,000 unpublished assets from Anthropic's content management system had been left publicly accessible, an exposure independently verified by Cambridge University researcher Alexandre Pauwels and LayerX Security's Roy Paz. Among the exposed files was a draft blog post announcing a new AI model.

The model in question goes by two names: Claude Mythos and Capybara. Per the leaked draft (reported in full by Fortune), it represents a new tier above Opus — Anthropic's current flagship — and is described as "by far the most powerful AI model we've ever developed" with "a step change" in reasoning, coding, and cybersecurity capabilities relative to its predecessor.

The cybersecurity warning embedded in the leaked draft is the most striking element. According to Fortune's reporting, the draft stated directly that Claude Mythos is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Anthropic's planned rollout strategy, per the draft, was to initially release the model only to organizations focused on cyber defense — giving defenders a deliberate head start before broader availability.

Anthropic confirmed the model's existence to Fortune after being contacted about the leak. The company's spokesperson acknowledged: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it."

The leak mechanism was procedural rather than adversarial: no one broke in. Anthropic's CMS stored both published and unpublished assets in a centrally accessible system that required no authentication by default; assets were public unless explicitly set to private, and Anthropic failed to restrict access to a significant volume of draft content. After Fortune informed the company on March 26, Anthropic secured the exposure the same day. The company characterized the incident as "an issue with one of our external CMS tools" resulting from "human error in CMS configuration" and emphasized that "core infrastructure, AI systems, customer data, or security architecture" were not involved.
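To make that failure mode concrete, here is a minimal sketch of a public-by-default asset API alongside an audit script that flags unpublished assets readable without credentials. Everything in it is hypothetical: the endpoint, response shape, and `private` flag are invented for illustration, since the actual CMS involved has not been publicly identified.

```python
import requests  # third-party HTTP client (pip install requests)

# Hypothetical headless-CMS asset API. The vendor, endpoint, and field
# names are invented for illustration; reporting on the incident does
# not identify the CMS Anthropic used.
CMS_API = "https://cms.example.com/api/assets"

def audit_public_drafts() -> list[str]:
    """Return URLs of draft assets readable without authentication.

    In a public-by-default system, every asset is served to anonymous
    readers unless someone explicitly flips it to private. A draft that
    no one remembered to flag ends up world-readable even though it was
    never formally published -- the pattern described in the incident.
    """
    # Deliberately send no Authorization header: we are probing what an
    # anonymous visitor can see.
    resp = requests.get(CMS_API, params={"status": "draft"}, timeout=10)
    resp.raise_for_status()
    return [
        asset["url"]
        for asset in resp.json()["assets"]
        if not asset.get("private", False)  # absent flag == public
    ]

if __name__ == "__main__":
    for url in audit_public_drafts():
        print("exposed draft:", url)
```

The remediation is the mirror image of the bug: make private the default and require an explicit publish step, so a forgotten flag fails closed rather than open.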

The Narrative Tension: Safety Advocate, Security Liability?

The symmetry here is uncomfortable for Anthropic's brand positioning. The company staked its legal fight — and its entire market identity — on being the AI lab that refuses to compromise safety for commercial expediency. The court validated that positioning, at least at the preliminary injunction stage.

The Mythos leak complicates that narrative in two ways. First, it reveals that the same organization warning about unprecedented cybersecurity risks in its own AI models left nearly 3,000 files openly accessible online, including drafts that described those very risks in alarming terms. The security hygiene lapse is not catastrophic, but it is incongruous with the company's public posture.

Second, the leaked draft makes clear that Anthropic has crossed a capability threshold it itself regards as dangerous enough to require deliberate, staged rollout. The company is doing the right thing by sequencing the release carefully. But the existence of a model it describes as capable of enabling cyberattacks at a scale that "outpaces defenders" will inevitably feed into ongoing policy debates about whether AI labs can self-regulate effectively — debates that Anthropic has, until now, been winning on the strength of its stated principles.

There is also a specific irony worth noting: one stated rationale in the Pentagon's supply chain risk designation was concern that Anthropic might "disable or preemptively alter its model during warfighting." The government's concern, in that narrow sense, was not entirely unfounded; Anthropic is in fact exercising deliberate control over its most capable model's deployment. The legal question, which Judge Lin has now answered at the preliminary stage, is whether the government can punish a company for exercising that control in ways the government disagrees with. On that question, the court sided clearly with Anthropic.

What Happens Next

The final ruling in Anthropic PBC v. U.S. Department of War is weeks to months away. The preliminary injunction is a significant win but not a resolution — the government can appeal, and the underlying contractual and constitutional questions have not been fully adjudicated.

For enterprise customers and partners, the most likely near-term scenario is operational normalcy. The DoD and Anthropic remain in an arm's-length standoff: no government contracts reinstated, but the supply chain designation blocked from further enforcement during the litigation.

On the Claude Mythos front, expect a staged public announcement with extensive safety framing when Anthropic is ready. Given the sensitivity of the cybersecurity capabilities described in the leaked draft, the rollout will almost certainly involve close coordination with government and enterprise security customers before any general availability; the company has confirmed the model is already in the hands of early-access customers.

The Schiff and Slotkin bills face long odds in the current Congress, but they are doing something important: creating a formal legislative record that the principles Anthropic was blacklisted for holding have bipartisan political support. If the final ruling in this case goes Anthropic's way — or if the government's appeal fails — that record becomes part of the precedent that shapes how the next government handles AI procurement disputes.

The bigger risk to watch is not the resolution of this specific case but how its outcome could be read. If the designation is not fully struck down in the final judgment, the precedent that a government can blacklist a domestic AI company for its stated safety principles, preliminary injunction or not, would have chilling effects far beyond Anthropic. Every AI lab with public safety commitments would face the question of whether those commitments could someday be used against them. That is a question the industry cannot afford to leave unanswered.
