The Scoop
In the days after the deadly US missile strike on an Iranian elementary school, one prime suspect emerged: artificial intelligence. Critics were eager to pin the more than 150 deaths on AI after reports on the use of Claude in the capture of Venezuela’s Nicolás Maduro — and a public spat between the Pentagon and Anthropic — kicked off a global debate over the growing role of the technology in warfighting.
The reality, according to former military officials and people familiar with aspects of the bombing campaign in Iran, is that humans — specifically the thousands of people who gather intelligence and analyze satellite photos to build massive target lists ahead of potential conflicts with foreign adversaries — are to blame for the deaths at Shajareh Tayyebeh Elementary School, a tragedy that now shadows the US and Israeli war on Iran.
The error was one that AI would be unlikely to make: US officials failed to recognize subtle changes in satellite imagery, while human intelligence analysts missed publicly available information about a school located inside the Revolutionary Guard compound (or failed to add it to the database used for targeting). AI has its notorious failings, from hallucinations to sycophancy, but it is also able to take in far more information than current, human-led systems — and a deeper look at satellite imagery or, simply, an internet search could have forestalled the disaster. Even a scan of Iranian business listings turned up the school, according to Reuters.
“Finding the right balance between humans and machines will be a crucial component of future training,” said Jack Shanahan, a senior fellow at the Center for a New American Security and a 36-year veteran of the US military, where he served as the founding director of Project Maven, the Pentagon’s AI program.
Know More
New military technology is enabling a larger volume of targeting than in previous conflicts, at much higher speeds, by pulling together satellite imagery, radar, human intelligence, and other data feeds. But right now, AI is mostly designed to help the military identify unknown or moving targets, as in the case of Maduro, rather than static, already-identified military targets.
In Iran, the Sayyid al-Shuhada Islamic Revolutionary Guard Corps complex in Minab would have been designated a so-called deliberate target selected in advance using intelligence and satellite imagery, according to interviews with former military leaders familiar with the way the US selects conflict targets.
It’s likely the compound would have been added to a target list the US has compiled over decades, which is maintained by the US Central Command and subject to scrutiny by military lawyers. Had the target been identified as a school, it would have been removed from the target list, military experts said.
In the 24 to 48 hours ahead of the strike, human reviewers would have also looked over targets to confirm their legitimacy. Had they noticed any anomalies, based on common practice, they would have flagged the target for further review by computer vision technology that can parse the pixel-by-pixel minutiae of an image and compare satellite images, looking for any possible changes. It’s unclear whether any such review was requested, military experts said.
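The change detection described above can be sketched in a few lines. This is a simplified illustration, not the Pentagon's actual tooling: the arrays stand in for two co-registered grayscale satellite images, and the `changed_fraction` function and its threshold are hypothetical stand-ins for far more sophisticated computer vision pipelines.

```python
import numpy as np

def changed_fraction(before, after, threshold=30):
    """Flag pixels whose brightness shifted by more than `threshold`
    between two co-registered grayscale images, and return the
    fraction of the scene that changed."""
    diff = np.abs(before.astype(int) - after.astype(int))
    return (diff > threshold).mean()

# Toy example: a 100x100 scene where a 20x20 block (a new wall,
# say, or a separate entrance) brightens sharply between passes.
before = np.full((100, 100), 100, dtype=np.uint8)
after = before.copy()
after[40:60, 40:60] = 200

print(changed_fraction(before, after))  # 0.04 → 4% of pixels changed
```

Even a crude comparison like this would surface the kind of new construction reviewers reportedly missed; real systems must also handle lighting, seasonal, and sensor differences between passes.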
There was a fair amount of public information showing a school located on the base, as well as satellite imagery that revealed new walls and a separate entrance had been added. But without comparing current satellite photos to older ones, human reviewers with outdated information would have had little reason to doubt that the area was a military target.
Step Back
The core Pentagon automation technology for targeting is Palantir’s Maven Smart System, which allows military personnel to plan strikes by clicking, dragging, and dropping in a single program, condensing hours or days of work into minutes. Built atop Palantir’s popular Gotham platform, which has been employed by law enforcement and private sector companies, it also sucks in bits of data from large language models like Anthropic’s Claude, which quickly scans and summarizes relevant documents.
The Maven Smart System can even suggest possible targets and help personnel rank priorities in how to hit the target, from the fuel required to the munitions used. (Selecting targets functions like the life-and-death version of placing a takeout order. The Palantir system is a kind of DoorDash for deadly missile strikes, replacing printed menus with suggestions from the system — recommending the sites it thinks you’d like to hit, rather than the food you’re probably craving.)
Anthropic and other AI companies have publicly grappled with worries that the military will eventually try to give too much power to large language models, asking them to make targeting decisions. But the technology is a long way from that capability, and neither the military nor Palantir has given any indication of a desire to take humans out of the “kill chain” altogether.
“There will be a risk of becoming overly reliant on machine outputs and short-circuiting critical human review,” Shanahan said.
In the future, Palantir expects the role of large language models to increase in importance, according to people with knowledge of the company’s ambitions. For instance, the models could be used for more of the planning process in military operations, suggesting strategies or even making autonomous targeting decisions.
But they’re not yet capable of those tasks. The more immediate worry is whether human reviewers can keep up as AI compiles more targets more quickly than ever before, or whether they will succumb to pressure to approve decisions without the time needed to vet them for potential danger to civilians.
Reed’s view
The debate over the use of AI in the military mirrors, in some ways, the one over autonomous driving. Horrific accidents do occur — just as they would with human drivers — but, on balance, self-driving cars take a more cautious approach to driving and cause far fewer fatal accidents.
Still, single incidents can change the public’s view on the technology. A single accident led to Uber’s entire autonomous driving program shutting down, for instance. General Motors’ self-driving unit, Cruise, faced a similar incident that ended its robotaxi service in San Francisco.
The tragedy at the Shajareh Tayyebeh school led to an immediate rush to blame AI for the fatal missile attack. And while the initial speculation turned out to be wrong, it does suggest that civilian deaths stemming from automation will be viewed differently than those caused by humans, even if autonomous weapons prove safer in the aggregate.
The other important factor here is that autonomy increases the possible volume of military strikes. So, while humans may be less precise, automation could in the grimmest case make deadly errors at scale.
Notable
- Last week, more than 120 Democratic members of Congress sent a letter to Defense Secretary Pete Hegseth questioning the extent of the use of AI during strikes in Iran, and whether the Pentagon was planning to investigate the attack on the Minab school as a war crime.