When Defense Secretary Pete Hegseth declared Anthropic a national security "supply chain risk" on February 28, he did more than end a $200 million contract dispute — he rewrote the rules of engagement for every AI company that wants to work with the U.S. military. Within hours, OpenAI announced a deal on its own terms. Within days, a $32 million startup run by ex-Marine commanders was suddenly the most interesting company in Silicon Valley. The Anthropic saga isn't a story about one company losing a contract. It's a story about who controls the guardrails on AI in war — and whether those guardrails will exist at all.
OpenAI's Rapid Pivot
The timing was precise enough to raise eyebrows. On the same day President Trump ordered every federal agency to stop using Anthropic's products, OpenAI CEO Sam Altman announced a new agreement with the Pentagon for deploying advanced AI systems in classified environments. Altman framed the deal as a model of restraint. "We think our agreement has more guardrails than any previous agreement," the company wrote in a blog post, while asking the government to make similar terms available to all AI companies.
Critics, however, noted that OpenAI's guardrails are considerably softer than the terms Anthropic refused to relinquish. Analysis of the contract language indicates OpenAI's protections against mass surveillance and fully autonomous weapons deployment are present but less prescriptive than Anthropic's, wording that critics argue gives the Pentagon far more operational flexibility than Anthropic was willing to grant. OpenAI, which has a corporate partnership with The Atlantic, saw ChatGPT uninstall rates spike 295 percent on February 28, the same day the deal became public, a signal of just how sharply the AI safety community reacted.
The personnel fallout inside OpenAI was immediate. Caitlin Kalinowski, the company's head of robotics, resigned over the deal, publicly warning that surveillance and lethal autonomy guardrails had been rushed through without adequate deliberation. Her departure prompted a wider conversation about whether OpenAI's leadership had shifted from its founding safety-first ethos toward a posture that prioritizes federal revenue.
Smack Technologies: War AI Without Apology
While legacy AI labs pick their way through political and ethical minefields, a new category of company is emerging with a fundamentally different philosophy. Smack Technologies, founded by former Marine Corps special operations commanders, announced a $32 million funding round this week to build AI models trained exclusively for military operations: no civilian product, no general-purpose chatbot, and no hedged corporate statements about autonomous weapons.
CEO Andy Markoff, a former commander in U.S. Marine Forces Special Operations Command with combat deployments in Iraq and Afghanistan, is blunt about the premise. "When you serve in the military, you take an oath you're going to serve honorably, lawfully, in accordance with the rules of war," Markoff told Wired. "To me, the people who deploy the technology and make sure it is used ethically need to be in a uniform." The implication is direct: the ethical framework should come from within the military, not be externally imposed by a San Francisco AI company.
Smack's models are trained with a method similar to the one DeepMind used for AlphaGo: running the models through thousands of war game scenarios while expert military analysts provide reward signals that tell the model whether its chosen strategy would succeed. Unlike general-purpose LLMs, which are strong at summarizing reports but untrained on physical-world constraints, Smack's models are being purpose-built for mission planning, target prioritization, and tactical coordination. Markoff says current-generation LLMs like Claude are "absolutely not capable of target identification," and his company intends to change that.
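Neither Smack nor the reporting on it discloses implementation details, but the loop Markoff describes maps onto textbook reinforcement learning from expert feedback. The sketch below is a minimal, purely illustrative toy under that assumption: a linear policy proposes strategies for randomly generated scenarios, a stand-in scoring function plays the analyst's role, and a REINFORCE-style update shifts the policy toward rewarded strategies. Every specific here (the feature set, the strategy list, the scoring rule) is hypothetical, not anything the company has disclosed.

```python
# Illustrative-only sketch of RL from expert reward signals, as described
# above. Nothing here reflects Smack's actual models; the scenario
# features, strategy set, and scoring rule are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4  # hypothetical scenario features, e.g. terrain, threat density
STRATEGIES = ["flank", "hold", "strike", "withdraw"]

# Linear policy: scenario features -> softmax distribution over strategies.
weights = np.zeros((N_FEATURES, len(STRATEGIES)))

def policy(features: np.ndarray) -> np.ndarray:
    """Probability of each strategy for one simulated scenario."""
    logits = features @ weights
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

def expert_reward(features: np.ndarray, action: int) -> float:
    """Stand-in for an analyst's success score. A real pipeline would use
    human experts or a learned reward model, not a fixed rule."""
    preferred = 2 if features[1] < 0.5 else 1  # "strike" only at low threat
    return 1.0 if action == preferred else 0.1

LEARNING_RATE = 0.5
for _ in range(2000):
    features = rng.random(N_FEATURES)              # sample a scenario
    probs = policy(features)
    action = rng.choice(len(STRATEGIES), p=probs)  # propose a strategy
    reward = expert_reward(features, action)       # expert scores it

    # REINFORCE update: gradient of log-prob is onehot(action) - probs,
    # scaled by reward, so rewarded strategies gain probability mass.
    grad = -probs
    grad[action] += 1.0
    weights += LEARNING_RATE * reward * np.outer(features, grad)

# After training, low-threat scenarios should lean toward "strike".
test = np.array([0.5, 0.2, 0.5, 0.5])
print(dict(zip(STRATEGIES, policy(test).round(2))))
```

The structure, not the scale, is the point: a production system would swap the fixed scorer for analyst ratings or a learned reward model, and the linear policy for a large model, which is roughly the relationship between this toy and whatever the AlphaGo comparison actually implies.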
The $32 million raise positions Smack alongside a growing cohort of defense-focused AI startups — Anduril, Shield AI, Epirus — that have benefited from the breakdown between frontier AI labs and the Pentagon. Where those earlier companies built hardware and autonomous platforms, Smack is going after the intelligence and decision layer: the AI that tells other systems what to do.
The Hegseth Doctrine
To understand why the Anthropic dispute escalated so quickly into a federal ban, it's necessary to understand what changed at the Pentagon in January 2026. The Department of Defense — formally rebranded the Department of War by the Trump administration — issued a new AI strategy memo declaring that all AI contracts must allow the government to use the technology for "any lawful use," without restrictions imposed by the vendor.
The memo set the Pentagon on a direct collision course with Anthropic's two firm red lines: no use of Claude in fully autonomous weapons that remove human decision-making authority from targeting and firing, and no mass domestic surveillance of U.S. citizens. Hegseth's position was categorical. "America's warfighters will never be held hostage by the ideological whims of Big Tech," Hegseth posted on social media after the Friday deadline passed. "This decision is final."
The doctrine Hegseth is operationalizing reflects a broader strategic view: that AI safety constraints imposed by private companies amount to a de facto veto over military operations. From this perspective, allowing a vendor to prohibit certain uses of AI is functionally equivalent to letting a rifle manufacturer dictate rules of engagement. Whether that analogy holds — and whether it's legally sound — is now a question for the federal courts.
The Supply Chain Weapon
The legal mechanism Hegseth chose to enforce his position is aggressive and, according to Anthropic's attorneys, unprecedented. The supply chain risk designation, typically reserved for companies tied to foreign adversaries such as Huawei, requires every company doing business with the Pentagon to certify that it doesn't use Anthropic's models. That single designation could strip Anthropic of hundreds of millions of dollars in annual revenue, not just from direct government contracts but from software companies that embed Claude into services sold to federal agencies.
Defense contractors, including Lockheed Martin, have reportedly begun evaluating alternatives following the designation. Microsoft, which offers Claude through its Azure platform, confirmed that Claude remains available to its customers, with the exception of the Defense Department. The General Services Administration terminated Anthropic's "OneGov" contract outright, ending Claude's availability across all three branches of the federal government.
The tech industry reacted swiftly. A major tech industry group wrote to Hegseth expressing concern over the designation, noting that it sets a dangerous precedent: any company that refuses to accommodate a government demand — regardless of ethical or technical basis — could be branded a national security threat. The letter stopped short of defending Anthropic's specific positions but warned that the precedent could chill participation by safety-conscious companies across the entire defense industrial base.
What the Courts Will Decide
Anthropic filed two federal actions on March 9: a complaint in a California district court and a petition in the D.C. Circuit Court of Appeals. The dual filings target the designation on three fronts: First Amendment retaliation (the government cannot punish a company for expressing opinions about AI limitations), procedural violations (the supply chain risk statute requires a risk assessment, notification, and congressional notice, none of which occurred), and executive overreach (Trump's order directing all agencies to stop using Anthropic exceeded the authority granted by Congress).
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," Anthropic's complaint reads. Anthropic is seeking a temporary restraining order to halt enforcement of the designation while the case proceeds, with a proposed hearing as early as Friday, March 13. If the court grants interim relief, the legal battle could drag on for months — with Anthropic continuing to serve government clients in a legal gray zone as the case unfolds.
Constitutional law scholars note that the First Amendment argument is novel but not frivolous. Courts have previously treated the government's use of procurement power to punish protected speech as a viable legal theory, though they have generally given executive agencies broad deference on national security grounds. The procedural argument, that Hegseth skipped the statutory steps, may be the stronger near-term hook for a restraining order.
The AI Battlefield, Beyond the Courtroom
While lawyers argue the constitutional questions in San Francisco, AI is already shaping combat in the Middle East in ways that would have seemed speculative two years ago. The Maven Smart System, the Pentagon's primary AI targeting platform, uses LLMs including Claude for intelligence analysis, target prioritization, and battle simulations, according to reports from the Washington Post and Nature. Even as Trump's ban took effect, the system kept running under a six-month transition window that Hegseth confirmed would preserve operational continuity.
The conflict in Iran has thrust these systems into a real-world trial. Iran's deployment of thousands of low-cost Shahed drones has created precisely the kind of high-tempo, data-saturated targeting environment that AI decision-support systems were designed for, and it has exposed the limits of human-speed analysis against machine-speed threats. The pressure to deploy AI more aggressively, with less human oversight, is not abstract. It is being generated by the pace of actual warfare.
Political scientist Michael Horowitz at the University of Pennsylvania frames the stakes clearly. "The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent," Horowitz told Nature. Diplomats meeting in Geneva this week — at the UN Convention on Certain Conventional Weapons — are attempting to negotiate international frameworks for lethal autonomous weapons, but the pace of technological and contractual change in Washington is outrunning any multilateral process.
A New Landscape, Permanently
Whether Anthropic wins or loses in court, the landscape it returns to will look different from the one it left. OpenAI now holds a Pentagon contract without Anthropic's hard constraints. Smack Technologies is building models that may eventually surpass civilian LLMs at the specific task of warfare planning. Defense contractors are accelerating their evaluation of alternative AI providers. And the Hegseth Doctrine — "any lawful use, no restrictions" — is now the stated policy of the U.S. Department of War.
The question that remains is whether Anthropic's stand will, over time, prove to be a commercial catastrophe or a reputational asset. The 295 percent spike in ChatGPT uninstalls on the day OpenAI announced its Pentagon deal suggests a meaningful segment of users cares deeply about these questions. The Atlantic's analysis argues that Anthropic's ethical stand may ultimately pay off in customer trust, even as it costs the company federal contracts.
For the defense industrial base, the calculus is more immediate. Companies that build on AI foundations now have to choose between access to federal contracts and alignment with safety-forward AI vendors. Some will choose OpenAI's accommodation. Others may quietly shift to purpose-built military providers like Smack. A few may hold the line on AI safety commitments and accept the commercial consequences. The Pentagon's AI market is not just being redistributed — it is being fundamentally restructured, along fault lines that were invisible six weeks ago.