The Phantom Has Landed: AI Humanoid Soldiers Are Now Being Tested in a Real War

Image: A sleek black armored military humanoid robot standing in an industrial facility under dramatic overhead lighting, on steel grating.

Two Phantom MK-1 humanoid robots arrived in Ukraine in February — not as a publicity stunt, but for real operational evaluation in an active war zone. The machines were built by Foundation, a U.S. defense startup whose co-founders include a Marine Corps veteran with over 300 combat missions, and they carry $24 million in research contracts across the U.S. Army, Navy, and Air Force. This is the first confirmed deployment of a humanoid robot to a live battlefield. It won't be the last.

What Is the Phantom MK-1?

The Phantom MK-1 is not a repurposed industrial robot wearing camouflage. It was designed from the start for defense applications — a distinction its makers emphasize at every turn. Encased in jet-black armored plating with a tinted visor where a human face would be, it's built to wield, in the words of Foundation co-founder Mike LeBlanc, "any kind of weapon that a human can." In demonstrations, it has been shown handling a revolver, a pistol, a shotgun, and a replica M-16 rifle.

LeBlanc, who completed multiple tours in Iraq and Afghanistan with the Marine Corps, co-founded Foundation alongside CEO Sankaet Pathak with a specific philosophical premise: that sending humanoid robots into combat zones instead of soldiers isn't just a technological upgrade — it's a moral imperative. The company's pitch to the Pentagon, and increasingly to the public, is that a sufficiently capable robot soldier eliminates the human cost of war without sacrificing battlefield effectiveness.

Prior to its Ukraine deployment, the Phantom was already being tested in factories and dockyards in Atlanta and Singapore — the same kind of industrial hardening that humanoid platforms from Boston Dynamics, Figure, and others have pursued. But Foundation's declared mission, per a TIME investigation published March 9, has always been military-first, commercial-second.

Ukraine: The First Battlefield Deployment of a Humanoid Soldier

Ukraine's front lines have become the world's most intensive proving ground for autonomous military technology. The country now launches up to 9,000 drones per day, has normalized semi-autonomous target acquisition systems, and has fundamentally inverted the traditional relationship between human combatants and machines. As LeBlanc told TIME after his visit: "It's a complete robot war, where the robot is the primary fighter and the humans are in support."

Into this environment, Foundation sent two Phantoms in February 2026. The stated purpose is frontline reconnaissance support — gathering visual intelligence in environments too dangerous for human soldiers. But Foundation is also preparing the platform for potential direct combat deployment, pending Pentagon authorization. The company is in parallel discussions with the Marine Corps about training Phantoms for breaching operations — specifically, placing explosives on doors to help troops enter fortified sites without exposing human lives.

What makes the Ukrainian deployment significant beyond its novelty is the feedback loop it creates. Ukraine has compressed military technology development cycles from years into months. AI-guided first-person-view drones that would have taken three to five years to field in peacetime are being prototyped, deployed, and iterated on the front lines within weeks. Foundation is betting that the same compression applies to humanoid systems — that the data gathered from active operational testing in Ukraine will produce a faster, more capable Phantom than any amount of controlled testing stateside could.

Pentagon Contracts and the Path to Frontline Deployment

Foundation's military relationship predates Ukraine. The company holds research contracts totaling $24 million with the U.S. Army, Navy, and Air Force, including an SBIR Phase III contract — the government's mechanism for officially graduating a startup into an approved military vendor. That status gives Foundation a procurement pathway that most defense startups spend a decade trying to unlock.

The Pentagon spokesperson quoted in the TIME investigation confirmed that the Defense Department "continues to explore the development of militarized humanoid prototypes designed to operate alongside warfighters in complex, high-risk environments." That language is carefully calibrated — it stops short of endorsing autonomous lethal engagement but explicitly validates the research direction.

Current U.S. military doctrine requires a human in the decision loop for any lethal engagement — a standard sometimes called "meaningful human control." The Phantom, as designed, operates within this constraint. Foundation insists the robot will only engage targets with a human green light. But as TTN has previously reported, the Pentagon's $153 billion autonomous systems spending plan is building infrastructure that could eventually enable fully autonomous engagement at scale — a posture that critics argue makes the "human in the loop" requirement increasingly nominal in practice.

Separately, Foundation is in discussions with the Department of Homeland Security about deploying Phantoms along the U.S.-Mexico border for patrol functions. The border application signals the dual-use trajectory that defense analysts have long warned about: a system justified for overseas warfighting that eventually migrates into domestic law enforcement and surveillance contexts.

A Global Humanoid Arms Race

The United States is not developing humanoid soldiers in a vacuum. Both Russia and China have active programs aimed at fielding armed robotic platforms, and the race has acquired the features of a classic security dilemma: each side's defensive investments become the other side's threat assessment, accelerating development across the board.

China's advances in dual-use robotics — including commercial humanoid platforms from companies like Unitree, which gained global attention after its viral martial arts demonstrations — provide direct technological foundations for military applications. Russia has pursued armed ground robots since at least 2016 with its Uran-9 system, and has since updated those programs to incorporate AI-guided targeting capabilities. "A humanoid-soldier arms race is already happening," Foundation CEO Sankaet Pathak told TIME.

The competitive dynamic has a specific accelerant: Eric Trump is both an investor in and the newly appointed chief strategic adviser at Foundation, per the TIME investigation. That connection creates a direct line between Foundation's product roadmap and an administration that has already demonstrated its willingness to strip AI guardrails in national security contexts — most visibly by blacklisting Anthropic after the AI safety company refused to allow its technology to be used for autonomous weapons or domestic mass surveillance.

The Human Control Problem

The central tension in humanoid soldier development is not technical. It's about where on the autonomy spectrum a machine is permitted to operate, and who decides.

Current Pentagon protocols mandate human authorization for lethal engagement. But that requirement is already being stress-tested on Ukraine's front lines, where AI-powered drones are autonomously assessing and engaging targets as Russian electronic jamming makes remote human control unreliable. The human-in-the-loop doctrine didn't collapse by policy decision — it eroded under battlefield conditions. Critics argue the same process will play out with humanoid systems once they're deployed at scale.

"The appeal of automating things and having humans out of the loop is extremely high," says Jennifer Kavanagh, director of military analysis at the think tank Defense Priorities. "The lack of transparency between the two sides of any conflict creates additional concerns." Her core argument: once both sides suspect the other of operating autonomous lethal systems, the incentive to maintain human control evaporates — because any delay in the decision loop is a tactical disadvantage.

Bonnie Docherty, a lecturer at the International Human Rights Clinic at Harvard Law School, frames the issue as one of degree. "Autonomy is a spectrum," she told TIME. "Technology is moving rapidly towards full autonomy. And there are serious concerns when life-and-death decisions are delegated to a machine." The international community's best attempt at a legal framework — the UN's 2026 deadline for autonomous weapons governance — has produced little binding agreement, leaving the field open to competing national programs with minimal coordination.

The concern isn't hypothetical: well-documented algorithmic biases in AI facial recognition and target classification systems remain unresolved. Deploying those systems in high-stress, low-visibility combat environments — the exact conditions where Phantom is being tested in Ukraine — creates failure modes with lethal consequences and murky accountability chains.

The Deterrence Argument — and Its Limits

LeBlanc's deterrence thesis is the most ambitious claim in Foundation's pitch deck. His argument: if every major power fields giant armies of humanoid robots, the escalatory calculus in any conflict changes fundamentally. Without human casualties to galvanize public opposition, wars become harder to start — just as nuclear deterrence theoretically prevents large-scale conflict between nuclear powers.

The counterargument, voiced by multiple analysts, is structurally opposite. If wars don't cost human lives on the attacking side, the political barriers to initiating conflict drop dramatically. A conflict that would have triggered domestic opposition if 10,000 soldiers died might encounter no such brake if 10,000 robots are lost instead. The lowered threshold for engagement could produce more frequent conflicts, not fewer — a proliferation of smaller wars enabled by the very technology meant to prevent them.

The Trump administration's posture reinforces this concern. By terminating Anthropic's federal contracts — specifically because the company's terms prohibited autonomous lethal weapon use without human involvement — the White House signaled that even baseline safeguards around autonomous engagement are negotiable at the policy level. As TTN has reported, Claude AI was used via Palantir for targeting and battlefield simulation in joint U.S.-Israel Iran strikes, hours after Trump's federal ban on Anthropic was announced. The gap between stated policy and operational reality is already wide.

What the Phantom's Ukraine deployment confirms is that humanoid soldier technology has crossed from theoretical to operational. The machines are on the ground. The contracts are signed. The arms race is live. The governance frameworks meant to constrain this technology are years behind the deployment curve — and closing the gap will require a level of international coordination that, so far, no party has demonstrated the will to pursue.