Exclusive / Palantir partnership is at heart of Anthropic, Pentagon rift

Reed Albergotti
Tech Editor, Semafor
Feb 17, 2026, 5:30pm EST

The Scoop

The brewing conflict between the US military and Anthropic escalated over the weekend, when senior administration officials told Axios they were considering banning the Silicon Valley startup’s models from military use.

But the roots of the conflict, which began in early January, are in the changing nature of the software stacks used by the Pentagon. As AI models become more powerful and general purpose, the same underlying models that power consumer chatbots could one day make life-and-death decisions on the battlefield, raising new ethical and technical questions.

Anthropic’s Claude is one of the few “frontier” large language models available for classified use by the US government, offered through Amazon’s Top Secret cloud and through Palantir’s Artificial Intelligence Platform. That is how the chatbot ended up appearing on the screens of officials who were monitoring the seizure of then-Venezuelan President Nicolás Maduro.

The raid, condemned by many Democrats as lawless, came amid a resurgence of activism in Silicon Valley around the use of its products by the US government. Palantir has faced pressure in the UK and Europe over the use of its tools by immigration officials.


“The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” said Chief Pentagon Spokesman Sean Parnell in a statement to Semafor.

Soon after the Maduro raid, during a regular check-in that Palantir holds with Anthropic, an Anthropic official discussed the operation with a Palantir senior executive, who gathered from the exchange that the AI startup disapproved of its technology being used for that purpose.

The Palantir executive was alarmed by the implication of Anthropic’s inquiry that the company might resist the use of its technology in a US military operation, and reported the conversation back to the Pentagon, a senior Defense Department official said.


That exchange led to a rupture in Anthropic’s relationship with the Pentagon, according to several people briefed on the matter. Semafor previously reported that on January 12, Defense Secretary Pete Hegseth jabbed Anthropic in a speech announcing the Pentagon’s new genai.mil platform, which allows Pentagon officials to use AI models from Google, OpenAI, and xAI for nonclassified purposes.

“We will not employ AI models that won’t allow you to fight wars,” Hegseth said, in a veiled reference to Anthropic.

An Anthropic spokesman called the account of the exchange between the company and Palantir “false.” The spokesman said the company has not “discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”

“Anthropic is committed to using frontier AI in support of US national security. That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers. Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy,” the spokesman said.


Anthropic has not agreed to sign an “all lawful uses” contract with the Pentagon, which would allow Claude’s use without any restrictions. Anthropic wants carve-outs prohibiting certain surveillance and autonomous-weapons uses, according to people familiar with the matter.

Since then, the relationship between Anthropic and the Pentagon has deteriorated, according to people familiar with the matter. The Defense Department official told Semafor that the military is beginning to lose trust in Anthropic, viewing its models as a possible “supply chain risk,” and is making hazy threats about barring subcontractors, such as Palantir, from using them.

An official designation like that, which would be a rare move by the Pentagon, could scare even private sector customers away from Anthropic and threaten its business prospects just as the company prepares for an initial public offering later this year.

Behind the scenes, the two sides are still negotiating terms for a contract. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right,” the Anthropic spokesman said.


Step Back

Anthropic is an outlier among Silicon Valley AI companies because it has, on several occasions, appeared to sacrifice its own economic interests for its ideals. Palantir takes the opposite view: As a matter of principle, it doesn’t try to control how the US government uses its technology. It is facing a backlash from the other direction in Europe and the US over the use of its data-mining software by ICE.

In May, Semafor reported that Anthropic upset the Trump administration by lobbying against a law that would have preempted states’ ability to pass AI regulation. Anthropic has also pushed for AI regulation bills in various US states, even as the White House has tried to avoid a “patchwork” of laws that could slow innovation.

Critics have accused the company’s AI researcher, co-founder, and CEO, Dario Amodei, of a cynical attempt to block competition. White House AI czar David Sacks called it a “sophisticated regulatory capture strategy based on fear-mongering,” a charge Anthropic denies.

Anthropic again landed on the wrong side of White House officials in September, Semafor previously reported, when it declined requests from contractors working with federal law enforcement agencies, refusing to make an exception to allow its AI tools to be used for some tasks, including surveillance of US citizens.

Now, military planners worry that they might rely on a company like Palantir, only to find out during an operation that they can no longer use its software because the operation would violate a restriction imposed by one of its AI suppliers.

Military officials who spoke to Semafor said they believe suppliers have no role in dictating how the Pentagon uses technology during military operations.


Reed’s view

First of all, it’s important to keep in mind that Claude played no meaningful role in seizing Maduro — not for ethical reasons, but because the technology isn’t good enough yet to warrant so much concern.

I’ve talked with people at Palantir about how, exactly, the company uses frontier language models like Claude, and the truth is they usually constitute about 10% to 20% of any given customized software application. We remain some distance away from the Joint Chiefs’ creating a “General Claude” chatbot to run the military for them.

Palantir develops bespoke software solutions for specific purposes and then often uses language models to make those programs easier to use, or for specific non-deterministic data-mining tasks that can aid strategic analysis.

More broadly, the present debates reflect the resurgence of a paternalistic view in Silicon Valley that only technologists can understand and guide the use of their creations. In contrast, the maker of a fighter jet might tell the military that, over a certain G force, the plane will become unreliable. They don’t seek a guarantee from the military that no pilot will ever push the plane beyond its limits.

When technologists get overly invested in how their technology is used by governments, it can go terribly wrong — most notably, when Elon Musk reportedly cut off Starlink access in the middle of a Ukrainian military operation because he feared it would lead to an escalation in the war.

But ultimately, Anthropic needs to decide whether it wants to be a supplier of technology to the US military or not. And if the answer is anything but “yes,” then it shouldn’t be surprising if the US starts looking for other suppliers. The Pentagon’s threats, however, remain unusual and out of proportion with the challenge, and create a novel situation in which companies would effectively be forced to work with the government.
