For the first time in U.S. military history, a sitting combatant commander has publicly confirmed that artificial intelligence is embedded in active combat operations — not as a logistics tool or a training aid, but as a live participant in the targeting cycle. Admiral Brad Cooper, head of U.S. Central Command, said on Wednesday that AI systems are helping U.S. forces identify and engage more than 5,500 targets inside Iran during Operation Epic Fury. Congress immediately demanded guardrails. And somewhere in Ukraine, two humanoid combat robots named Phantom are watching the war unfold.
The Official Confirmation Washington Had Tried to Avoid
The language was carefully chosen but unambiguous. "Our warfighters are leveraging a variety of advanced AI tools," Admiral Cooper said in a video message released Wednesday. "These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react."
Cooper was careful to add that "humans will always make final decisions on what to shoot and what not to shoot and when to shoot." But the qualifier — buried at the end of a sentence celebrating AI's speed advantage — did little to quiet critics. The confirmation marked a significant shift from the ambiguity that had characterized the Pentagon's public posture since Operation Epic Fury began on February 28.
The U.S. military has been relying on Palantir's Maven Smart System and Anthropic's Claude throughout the campaign, according to reporting by The Washington Post. Maven ingests imagery, signals intelligence, and open-source data to surface targeting options at machine speed. Claude, embedded within Maven, processes natural-language queries from analysts and helps synthesize intelligence assessments. Neither system autonomously fires a weapon, but both are now formally acknowledged parts of the operational cycle that results in strikes.
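None of the public reporting describes how that human checkpoint is enforced in software. But the pattern Cooper describes, machine-generated recommendations that carry no authority until a named human signs off, is a standard approval-gate design. A minimal sketch of that pattern, with every name hypothetical and no claim to resemble Maven's actual internals:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class Recommendation:
    """A machine-generated option; it carries no authority by itself."""
    target_id: str
    confidence: float            # model confidence, not ground truth
    sources: list[str]           # provenance of the supporting intelligence
    decision: Decision = Decision.PENDING
    reviewer: str | None = None
    decided_at: datetime | None = None


def approve(rec: Recommendation, reviewer: str) -> Recommendation:
    """Only a human reviewer moves a recommendation out of PENDING;
    the audit fields record who decided and when."""
    rec.decision = Decision.APPROVED
    rec.reviewer = reviewer
    rec.decided_at = datetime.now(timezone.utc)
    return rec


def execute(rec: Recommendation) -> None:
    """Downstream action refuses anything lacking a recorded human decision."""
    if rec.decision is not Decision.APPROVED or rec.reviewer is None:
        raise PermissionError("no human authorization on record")
    ...  # hand off to whatever system acts on the approved recommendation
```

The critics' argument, taken up below, is that a gate like this is only as meaningful as the time the reviewer actually spends at it.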
The Scale: 5,500 Targets in Twelve Days
The raw numbers from CENTCOM are staggering. In the twelve days since the campaign launched, U.S. forces have hit more than 5,500 targets across Iran — drone and ballistic missile production sites, command-and-control infrastructure, air defense systems, military communications nodes, and naval assets. Iranian ballistic missile launches have dropped 90 percent from the opening of the campaign; drone attacks are down 83 percent, according to Gen. Dan Caine, chairman of the Joint Chiefs of Staff.
That tempo — roughly 460 strikes per day — is only achievable with AI-assisted targeting. Traditional intelligence processing timelines run hours to days. Cooper's own framing made the dependency explicit: AI "can turn processes that used to take hours and sometimes even days into seconds."
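The arithmetic behind that figure is simple division, but the interval it implies is worth making explicit, since it frames everything Congress is now asking about. A quick check using only the numbers CENTCOM released:

```python
targets, days = 5_500, 12     # CENTCOM's figures for the campaign to date

per_day = targets / days
print(f"{per_day:.0f} strikes per day")                # ~458, "roughly 460"

interval = 24 * 3600 / per_day
print(f"one strike every {interval:.0f} seconds on average")   # ~189
```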
Operation Epic Fury has also served as the combat debut for several next-generation weapons systems. LUCAS drones — loitering unmanned aerial systems capable of autonomous target acquisition — made their first operational appearance in the campaign. So did the Precision Strike Missile, a long-range GPS and inertial-navigation weapon designed to prosecute targets identified through AI-assisted analysis. These are not science fiction systems. They are in production, deployed, and firing now.
The Phantom MK-1: When Humanoid Robots Reach the Front
While CENTCOM's AI confirmation dominated Wednesday's news cycle, a quieter development may prove more historically significant. In February, two Phantom MK-1 humanoid robots developed by the San Francisco company Foundation were deployed to Ukraine for frontline reconnaissance support, according to Time. The Phantom is the world's first humanoid robot explicitly designed for defense applications — and it can carry a rifle.
Foundation was co-founded by Mike LeBlanc, a 14-year Marine Corps veteran with multiple combat tours. LeBlanc's pitch is straightforward: "We think there's a moral imperative to put these robots into war instead of soldiers." The company holds Phantom research contracts worth a combined $24 million with the U.S. Army, Navy, and Air Force, including an SBIR Phase III designation that effectively makes it an approved military vendor. The Marine Corps is preparing to run Phantoms through its "methods of entry" course, training the robots to breach doors.
The Pentagon confirmed it "continues to explore the development of militarized humanoid prototypes designed to operate alongside warfighters in complex, high-risk environments." Foundation is also in discussions with the Department of Homeland Security about potential patrol functions along the southern border.
The Phantom's advocates frame it as an obvious extension of existing autonomous systems, a logical next step from armed drones. Set against the alternative of deploying teenagers into contested environments, a robot that doesn't experience fear, fatigue, radiation sickness, or moral injury represents a genuinely different risk calculus. LeBlanc's longer-term thesis: armies of humanoid robots could create a deterrence dynamic analogous to nuclear weapons, where the sheer scale of automated force projection reduces incentives for escalation.
Critics find that argument alarming rather than reassuring. "It's a slippery slope," Jennifer Kavanagh, director of military analysis at the Washington-based think tank Defense Priorities, told Time. "The appeal of automating things and having humans out of the loop is extremely high. The lack of transparency between the two sides of any conflict creates additional concerns." The concern is not merely philosophical: AI-powered drones operating in Ukraine are already demonstrating that autonomous engagement can emerge from operational necessity when radio jamming degrades human control.
Congress Moves for Oversight — With Urgency It Rarely Shows
On Capitol Hill, the official confirmation of AI in combat operations triggered a swift bipartisan response. Representative Jill Tokuda of Hawaii, a member of the House Armed Services Committee, told NBC News that "we need a full, impartial review to determine if AI has already harmed or jeopardized lives in the war with Iran," adding that "human judgment must remain at the center of life-or-death decisions."
Representative Sara Jacobs of California was blunter: "AI tools aren't 100% reliable — they can fail in subtle ways and yet operators continue to over-trust them." Jacobs called for strict guardrails and mandatory human authorization for all lethal force decisions.
The oversight push comes against the backdrop of mounting civilian casualty concerns. The bombing of a school in southern Iran on March 9, which killed more than 170 people — predominantly children — has prompted calls for an independent investigation. Iran's Red Crescent Society said the campaign has damaged nearly 20,000 civilian buildings and 77 healthcare facilities. While the connection between AI-assisted targeting and these incidents remains unproven, the correlation is politically potent and legally significant.
The Pentagon's AI Strategy for the Department of War, published in January 2026 under Defense Secretary Pete Hegseth, explicitly called for putting "artificial intelligence at the heart" of American combat operations. That document is now serving as Exhibit A for congressional critics who argue Hegseth's doctrine traded accountability for speed.
A $30 Billion Market Built on Moral Ambiguity
Whatever the ethical and legal outcome of Operation Epic Fury, the commercial trajectory of military AI is essentially locked in. The global AI defense market was valued at approximately $9.13 billion in 2025 and is projected to reach $29.48 billion by 2035 — a 12.5 percent compound annual growth rate driven by autonomous surveillance, predictive analytics, and AI-powered command systems. Operation Epic Fury is not slowing that trajectory; it's validating it.
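Those endpoint figures and the quoted growth rate are mutually consistent, which is worth verifying since market projections are often misquoted. A quick check:

```python
start, end, years = 9.13, 29.48, 10    # USD billions, 2025 -> 2035

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")    # ~12.4%, matching the quoted 12.5% within rounding
```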
The companies positioned to benefit are not small startups. Palantir's Maven system is already the backbone of CENTCOM's targeting infrastructure. OpenAI stepped into the Pentagon's orbit after Anthropic was designated a supply chain risk for refusing to remove guardrails against lethal autonomy. Defense-specific AI firms like Anduril, Shield AI, and Kratos have raised billions on the thesis that every military function — logistics, ISR, targeting, electronic warfare — will eventually run on AI. The U.S. Army's Robots First initiative is already deploying autonomous systems in logistics and reconnaissance roles that were human-staffed two years ago.
China's government warned Wednesday against "the unrestricted application of AI by the military," arguing that "giving algorithms the power to determine life and death not only erodes ethical restraints and accountability in wars" but threatens global stability. The warning is likely to land with limited impact in Washington — but it signals that the AI arms race dynamic has reached a point where major powers are now explicitly negotiating the boundaries of autonomous warfare in public, rather than behind closed doors.
The Slippery Slope Is Now a Cliff Edge
The central tension of Operation Epic Fury's AI story is straightforward to state and nearly impossible to resolve. The Pentagon insists humans remain in the loop on every lethal decision. But the operational tempo enabled by AI — hundreds of strikes per day, intelligence cycles measured in seconds — creates structural pressure to reduce human dwell time per target. At some point, "human in the loop" becomes a rubber stamp on a machine recommendation generated faster than any human can meaningfully verify.
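How quickly that pressure bites can be made concrete with a back-of-the-envelope calculation. The strike tempo is CENTCOM's; the staffing and surge numbers below are purely illustrative assumptions, not reported facts:

```python
strikes_per_day = 460          # the campaign's observed tempo
reviewers_on_shift = 10        # hypothetical watch-floor staffing (assumed)
day_seconds = 24 * 3600

# Average wall-clock budget per strike decision across the reviewer pool:
budget = day_seconds * reviewers_on_shift / strikes_per_day
print(f"{budget / 60:.0f} minutes per decision on average")       # ~31

# Strikes cluster; if half the day's strikes land in a two-hour surge
# window (assumed), the per-decision budget collapses:
surge_budget = 2 * 3600 * reviewers_on_shift / (strikes_per_day / 2)
print(f"{surge_budget / 60:.1f} minutes per decision in the surge")  # ~5.2
```

Five minutes to weigh a machine recommendation against collateral-damage estimates is a very different proposition from the hours-to-days timelines Cooper himself cited.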
Ukraine's battlefield has already shown where that pressure leads. Russian radio jamming has forced Ukrainian drone operators to deploy units capable of autonomous terminal guidance — AI targeting that kicks in precisely when human control is severed. The U.S. military is watching. So is every other defense establishment with resources to pursue the same capability.
Foundation's Phantom MK-1 is the clearest embodiment of this trajectory. It is not yet autonomous in combat; it currently requires human authorization for any engagement. But it is designed for a future where that authorization may be difficult to provide, where the communications window to a remote operator may be jammed or simply too slow. The engineers building it are planning for that future even as they publicly commit, for now, to keeping a human in control of every engagement.
Admiral Cooper's video message on Wednesday marked a milestone: the first time a sitting U.S. combatant commander explicitly credited AI with operational outcomes in a live war. The statement was careful, calibrated, and clearly intended to project confidence. What it could not project — because no one currently knows — is where the human in the loop will be standing when the next campaign begins, and how many seconds they will have to decide.