
Exclusive / Defense Secretary Pete Hegseth jabs Anthropic over safety policies

Reed Albergotti
Tech Editor, Semafor
Jan 16, 2026, 11:31am EST
Technology
Pete Hegseth. Evelyn Hockstein/Reuters

The Scoop

When Defense Secretary Pete Hegseth announced the Pentagon was adding Grok to its list of generative AI providers, he railed against AI models that “won’t allow you to fight wars.”

Hegseth wasn’t just riffing, a person familiar with his thinking said: He was specifically referring to Anthropic, the AI startup that spun out of OpenAI in an attempt to build safer AI technology.

In recent weeks, tension has built up between Anthropic and the military, according to two people with knowledge of the matter, as the Trump administration attempts to more quickly adopt new warfighting technology, including the most advanced AI models.

From Anthropic’s perspective, the company has a responsibility to ensure its models are not pushed beyond their capabilities, particularly in military actions that could have lethal consequences.


From the military’s point of view, though, Anthropic shouldn’t attempt to make the final call on how, exactly, its models are used in warfare. Those decisions, it says, should be left to the military, as with any other technology or weapon the Pentagon purchases.

A Defense Department official, speaking on background, said it would only deploy AI models that are “free from ideological constraints that limit lawful military applications. Our warfighters need to have access to the models that provide decision superiority in the battlefield.” Anthropic declined to comment.


Step Back

In December, the Pentagon launched a new platform called Genai.mil, a portal offering a specialized version of Google’s Gemini frontier model.


On Monday, when Hegseth unveiled a comprehensive AI acceleration strategy, he announced that xAI’s Grok would be added to the list.

But as the document circulated, AI safety advocates, including some at Anthropic, became concerned about language that seemed to gloss over the topic of guardrails. “We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment,” the policy said. “The Department must also utilize models free from usage policy constraints that may limit lawful military applications.”

Some at Anthropic fear the military could rely too heavily on the models or place too much trust in them before they are ready to be deployed on the battlefield, potentially leading to deadly mistakes.

An administration official told Semafor that Anthropic’s desire to control the specific military use cases of its technology goes too far.


Know More

It isn’t the first time Anthropic has found itself at odds with the administration. Last year, as Semafor first reported, the company clashed with the White House over its support for state-level AI regulation, opposing a measure that would have preempted state AI rules.


The rift widened when Anthropic barred its models from being used for certain law enforcement activities.


Reed’s view

Anthropic’s policies prohibit the use of its technology to develop weapons. But there is little doubt AI will play a larger role in lethal military force.

That’s not going to be easy for many AI researchers to accept. It’s especially difficult for Anthropic, which has built its company around creating a more responsible version of the technology. The reality is, technologists have never been able to fully control how their inventions are used; the alternative, shaky as it may sometimes seem, is to rely on the rule of law to hold people accountable.

And autonomous weapons aren’t what many AI safety researchers are fundamentally worried about. As AI safety advocate Nate Soares told me, if an all-powerful superintelligence decides that it wants to wipe out humanity, it won’t need conventional weapons.

And perhaps there’s reason for optimism, even about war: Autonomy on the battlefield could ultimately take more humans out of harm’s way.


Notable

  • The DoD’s new AI strategy resembles its 2023 Biden-era predecessor, which also called for quick adoption of frontier AI models for military purposes, Patrick Tucker writes in Defense One.
  • Public AI models may not be suited for military use because their civilian-focused guardrails — especially those discouraging violence — conflict with the Pentagon’s mission to plan for lethal force, Mieke Eoyang, deputy assistant secretary of Defense for cyber policy during the Biden administration, told Politico.