Hours after Pentagon bans Anthropic, OpenAI strikes defense deal

Reed Albergotti
Tech Editor, Semafor
Feb 27, 2026, 11:27pm EST
Technology
Reuters/Dado Ruvic

The News

Defense Secretary Pete Hegseth dropped the hammer on Anthropic Friday for denying the military "unobstructed" access to its AI models. Hours later, rival OpenAI endorsed the Pentagon's plans and urged competitors to follow suit.

Hegseth said the government would designate Anthropic models a “supply chain risk,” which he said means no entity that does business with the US military can conduct commercial business with Anthropic.

The designation, which Anthropic will fight in court, could become a serious problem for the startup, which earns its revenue through enterprise software sales to companies that may now, or someday, want to work with the military in some capacity.

Anthropic has received an outpouring of goodwill from supporters in the tech industry who celebrate the company’s decision to stand by its morals. Specifically, Anthropic refuses to allow its models to be used for the mass surveillance of Americans. And, citing technical shortcomings in its models, Anthropic prohibits the use of the tech for autonomous weapons.


But the dustup goes much deeper than those two prohibitions. Hegseth’s harsh punishment is the culmination of a long, slow slide that began with a political disagreement.


Know More

Anthropic’s relationship with the Trump administration has been strained since last year, when the company lobbied against a provision in the “Big Beautiful Bill” that would have preempted state AI regulation, Semafor first reported.

Anthropic then butted heads with the Pentagon and national security agencies over company policies prohibiting surveillance and autonomous weapons, an issue that bubbled up in December, when CEO Dario Amodei met with Emil Michael, a former tech executive who now serves as chief technology officer for the military, Semafor first reported.


Hegseth hit back in January in a speech announcing the US government’s new Genai.mil initiative, referring to AI models that “won’t allow you to fight wars,” Semafor first reported.

By contrast, OpenAI has been savvier at navigating Washington and allowed its AI models to be used by the DoD’s Genai.mil for “all lawful uses,” after months of internal deliberations, Semafor first reported.

OpenAI was comfortable with the lack of restrictions because so many safeguards were already built into its models, according to people familiar with the matter.


By threading the needle, OpenAI found a way to placate both the Pentagon and its own employees, many of whom are skeptical of AI use in the military.

On Friday night, OpenAI CEO Sam Altman said the company had reached an agreement with the Pentagon to deploy ChatGPT on classified networks, offering an alternative to Claude. Altman said OpenAI also prohibits domestic surveillance and autonomous weapons. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he said on X.

Both Anthropic and OpenAI declined to provide further comment.

It’s unclear whether the military actually wanted to use Anthropic models for autonomous weapons or domestic surveillance, which is illegal for the military to conduct.

Anthropic has been most focused on applying its technology to cybersecurity, and its Claude models have been among the only advanced AI models available for use in classified operations.

The Pentagon plans to keep using Anthropic models for up to six months while it looks for alternatives.


Reed’s view

Anthropic’s predicament is a self-inflicted wound. There is nothing inherent in Anthropic’s policies that makes it incompatible with the DoD. Since domestic surveillance by the Pentagon is not legal anyway, it would have already been prohibited in an “all lawful uses” contract.

And Anthropic isn’t actually opposed to using AI to deploy autonomous weapons. Rather, the company maintains that the technology isn’t ready yet to be deployed responsibly in certain contexts.

This amounts to more of a user manual issue than a policy one. It’s akin to when contractors sell fighter jets to the military and outline the capabilities and limitations of those weapons.

Anthropic’s other option would have been to bake safeguards directly into the model. The Pentagon doesn’t seem to have any problem with that practice, in part because this is all new and the guidelines and policies are still being hashed out.


Room for Disagreement

There’s a potential silver lining for Anthropic. This whole thing could blow over (in fact, that’s likely), and Anthropic will have burnished its brand.

Similar to the time Apple took on the FBI by refusing to unlock an iPhone belonging to a terrorist suspect, Anthropic will likely be remembered as a principled and moral actor. That could, in the long run, actually help its enterprise sales.
