Exclusive / Pentagon’s Anthropic feud deepened after tense exchange over missile attacks

Reed Albergotti
Tech Editor, Semafor
Feb 24, 2026, 8:45pm EST

The Scoop

Even before Tuesday’s high-stakes meeting between Pentagon chief Pete Hegseth and Anthropic over how the US government can use the AI company’s technology, the relationship between the two had hit a breaking point.

In a previously unreported exchange in early December, Under Secretary of War for Research and Engineering Emil Michael was outraged by Anthropic CEO Dario Amodei’s answer to a hypothetical question: If the US were under attack – with hypersonic missiles hurtling toward US soil – and Anthropic’s AI models could thwart the missiles, would the company refuse to help its country due to Anthropic’s prohibition on using its tech in conjunction with autonomous weapons?

According to people familiar with the administration's account, Amodei responded that the Pentagon should, in the midst of the attack, reach out and check with Anthropic. Sources familiar with Anthropic's view, however, say the company offered to make a missile defense carveout for otherwise prohibited weapons.

An Anthropic spokesman called the accusation that Amodei suggested defense officials seek Anthropic’s permission to intercept missiles “patently false.” He added that “every iteration of our proposed contract language would enable our models to support missile defense and similar uses.”

Months later, in Tuesday's meeting with the Pentagon, Amodei reiterated that Claude could be used to automate missile defense, underscoring that, from Anthropic's perspective, the company is willing to make reasonable concessions to its usage policies to ensure national security.

The conversation highlights just how much the relationship between the AI startup and the US government had already deteriorated before culminating in an ultimatum on Tuesday: The Pentagon gave Anthropic until 5:01 pm on Friday to agree to make its AI models available for "all lawful uses" or face retribution.

If the company chooses not to comply, a senior Pentagon official said Hegseth would invoke the Defense Production Act to compel Anthropic to do so anyway. Anthropic would also be labeled a “supply chain risk,” the person said, meaning its products couldn’t be used by the US government, even via third parties, potentially crippling Anthropic’s other enterprise business prospects.

The official told Semafor that other frontier AI companies are close to an agreement allowing them to be used in classified settings. Grok, the model made by Elon Musk’s xAI, is already on board, the official said. Musk didn’t immediately respond to a request for comment. Google and OpenAI did not immediately respond to requests for comment.

So far, Anthropic’s Claude is one of the few AI models available for classified use, via Amazon’s top-secret cloud and Palantir’s Artificial Intelligence Platform.

Anthropic has been in a tense battle with the Pentagon over its AI models since at least January, when Anthropic’s technology was employed by Palantir during the capture of Venezuelan leader Nicolas Maduro, according to people familiar with the matter.

After an Anthropic employee inquired with Palantir about Claude’s role in the raid, a Palantir senior executive, who was alarmed by Anthropic’s seeming disapproval, notified the Pentagon, Semafor reported earlier. An Anthropic spokesman denied that the company expressed concern.

Anthropic has so far refused to sign off on the “all lawful uses” clause the Pentagon is requesting. It will not allow the US government to use its models for surveillance of US citizens or for autonomous weapons.

For its part, the Pentagon has said it will only use the models for legal purposes. “This has nothing to do with surveillance and autonomous weapons being used,” the Pentagon official told Semafor. “You can’t lead tactical ops by exception.”

The official added that the Pentagon has a responsibility to follow all US laws, regardless of Anthropic’s usage policy. “The Pentagon has only given out lawful orders.”

Reed’s view

The most likely conclusion of this saga is that the Pentagon forces Anthropic to comply by invoking the Defense Production Act. In some ways, that outcome gives both sides what they want: The Pentagon no longer has restrictions on how it can use Claude, and Anthropic gets to save face even if it ends up complying with the Pentagon’s demands.

Anthropic might still bring a lawsuit as a result, arguing the situation does not warrant the use of the DPA, or that it’s not applicable to computer software.

The case echoes what happened when Apple fought the FBI’s attempt to force it to unlock an iPhone belonging to a suspect in the San Bernardino terrorist attack.

Apple never backed down, and the FBI found a way to unlock the phone anyway, exploiting vulnerabilities that already existed in Apple’s software and effectively gaining the back door that law enforcement had sought in the first place.

But just as the Apple incident drew attention to how easily governments can break into iPhones, the Pentagon’s move could remind international customers of just how much power the US government has over its homegrown tech giants.

Notable

The deeper problem in this standoff between the Pentagon and Anthropic isn’t who is right, but that this negotiation is happening at all, Lawfare argued: “The terms governing how the military uses the most transformative technology of the century are being set through bilateral haggling between a defense secretary and a startup CEO” when Congress should be setting those rules.
