In a dramatic escalation of tensions in the Middle East, the United States launched a pre-dawn aerial strike on Tehran on Saturday, killing Iran's Supreme Leader Ayatollah Ali Khamenei and several top officials, according to U.S. military sources. The operation, dubbed Operation Epic Fury, involved 100 fighter jets and marked a bold use of artificial intelligence in modern warfare, with reports indicating that AI systems from companies like Anthropic and OpenAI played key roles in planning and execution.
The strike occurred around 1 a.m. local time, catching Iranian leaders off guard amid the near-total internet blackout the Iranian government has maintained for several months. According to a Sunday report in The Wall Street Journal, citing sources familiar with the matter, Anthropic's Claude AI was embedded in military command centers for intelligence assessments, target identification, and battle scenario simulations during the operation. While details remain classified, the use of Claude, until recently the only AI system cleared to handle classified information, highlights the deepening integration of AI into U.S. defense strategies.
The events unfolded against a backdrop of heated disputes between the Pentagon and AI firms over contracts and ethical boundaries. Just a day earlier, on Friday, Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk, threatening penalties for defense contractors engaging in "any commercial business" with the company. The move came after President Donald Trump posted on Truth Social at 3:47 p.m. that day, urging Anthropic to "get their act together" during a phase-out period or face "major civil and criminal consequences." White House observers interpreted Trump's statement as a negotiation tactic, noting his history of using social media for public pressure followed by private deals.
Anthropic responded Friday by asserting that supply-chain risk laws apply only to Claude's use within the Defense Department and not beyond. Industry insiders expressed confusion over the scope of Hegseth's directive, with one policy expert, speaking anonymously, telling The Appleton Times that no one is clear on what "any commercial business" entails or what punishments might attach to non-defense contracts. Hegseth's decision, which he described as "final," is one the defense secretary can make unilaterally, without presidential approval or public announcement, adding to the uncertainty rippling through the tech sector.
Parallel to the Anthropic tensions, OpenAI announced a new contract with the Pentagon last week, touted by CEO Sam Altman as including firm red lines against mass surveillance and autonomous lethal weapons. In a company blog post, OpenAI shared excerpts from the agreement, stating that "The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities," referencing laws like the Fourth Amendment and the Foreign Intelligence Surveillance Act. However, critics questioned the safeguards' effectiveness, pointing to post-9/11 expansions of surveillance that operated within similar legal frameworks, including mass domestic spying programs.
One legal analyst noted that "unconstrained monitoring" is not a recognized term in the cited statutes, raising doubts about the contract's robustness. OpenAI emphasized compliance with existing directives requiring a "defined foreign intelligence purpose," but the agreement's reliance on pre-existing laws has not quelled concerns among privacy advocates. As one Verge reporter observed, U.S. intelligence agencies have historically pushed boundaries under these very authorities, conducting invasive operations both domestically and abroad.
The Iran strike has thrust these AI-Pentagon dynamics into the spotlight, with unconfirmed reports suggesting Claude's tools enabled the operation's precision despite Iran's digital isolation. Hamza Chaudhry, AI and National Security lead at the nonpartisan Future of Life Institute, described the conflict as a harbinger of dyadic automated warfare: "two AI systems effectively talking to each other through the medium of kinetic action, each optimizing and responding faster than human decision-makers can follow."
Chaudhry warned of broader risks to global stability, particularly nuclear deterrence. In a statement to The Verge, he said, "Recent analyses of the 2025 India-Pakistan and Iran-Israel conflicts found that AI renders second-strike forces more transparent and thus more vulnerable, and that while nuclear arsenals still impose a ceiling on all-out war, AI lowers the floor for sub-threshold aggression and compresses political reaction time." He added that adversaries might respond by expanding arsenals or adopting launch-on-warning postures, threatening "arms race stability" and eroding international governance frameworks for emerging technologies.
Iran, no stranger to AI in conflict, has deployed AI-assisted missiles in recent engagements, according to defense analysts. The U.S. operation's success underscores America's edge, but experts like Chaudhry caution that the technologies enabling such strikes—advanced mapping, tracking, and simulation—are making nuclear assets more vulnerable worldwide. "This is not a hypothetical future problem," Chaudhry emphasized. "The technologies that made Operation Epic Fury possible are the same technologies that are slowly making nuclear deterrence more fragile. We have no international governance framework that addresses this adequately."
Behind the scenes, negotiations have been fraught. Emil Michael, a former Uber chief business officer and current Pentagon CTO, has led talks with both Anthropic and OpenAI but has drawn criticism for inflammatory social media posts. On X (formerly Twitter), Michael has repeatedly attacked Anthropic CEO Dario Amodei, calling him a "liar [with] a God-complex" in late-night posts that have outpaced his commentary on the Iran strike itself. Meanwhile, State Department Under Secretary Jeremy Lewin has publicly argued that private companies should not dictate terms to the government, a view echoed by some Pentagon officials who contend that the elected government holds ultimate authority over how the technology is used.
The Anthropic designation initially seemed tied to national security concerns, but the Iran events suggest it was more about leverage than risk. Sources indicated the conflict was never truly about Anthropic's security posture, especially given that Claude remained integral to military operations, including the Tehran strike itself, even after the designation was announced. OpenAI's deal, while celebrated by Altman, has similarly faced scrutiny, with some viewing it as insufficient against potential misuse in intelligence gathering.
Beyond immediate military applications, the weekend's chaos intersects with ongoing debates about AI's societal impact. Last week, a sold-out taping of The Hopkins Forum's "Open to Debate" series in Washington, D.C., featured Andrew Yang and MIT economist Simon Johnson arguing that, without safeguards, AI could cause widespread job loss and potentially societal upheaval. Opposing them, Facebook co-founder Chris Hughes and data scientist Rumman Chowdhury contended that AI could augment human work and enhance life, though both sides agreed corporate greed might steer it toward dystopian outcomes.
As the dust settles, questions linger about the Pentagon's AI dependencies. With Hegseth's ruling throwing the tech industry into disarray, companies like Anthropic may pursue legal challenges, while OpenAI's contract sets a precedent for future deals. Trump administration officials have not commented further, but the episode illustrates Trump's pattern of using public threats to force concessions, as seen in past tariff disputes and foreign policy maneuvers.
The implications extend far beyond this strike. As AI blurs the line between Washington's culture wars and real ones, experts are calling for urgent international dialogue on governance. Without it, Chaudhry and others warn, sub-threshold aggressions could proliferate, compressing decision times and heightening escalation risks. For now, the U.S. celebrates a tactical victory, but the fusion of AI and warfare promises a volatile new era in global security.
In Tehran, the aftermath includes mourning for Khamenei and vows of retaliation from remaining Iranian leaders, though their communications remain hampered by the blackout. U.S. allies have expressed mixed support, praising the precision while questioning the legality of targeting a head of state. As investigations into the AI roles continue, the world watches how Washington balances innovation with oversight in an increasingly automated age.
