
The Scoop
Anthropic is in the midst of a splashy media tour in Washington, but its refusal to allow its models to be used for some law enforcement purposes has deepened hostility to the company inside the Trump administration, two senior officials told Semafor.
Anthropic recently declined requests from contractors working with federal law enforcement agencies because it would not make an exception to its usage policy for certain tasks, including surveillance of US citizens, said the officials, who spoke to Semafor on the condition of anonymity.
The tensions come at a moment when Donald Trump’s White House has championed American AI companies as patriotic bulwarks of global competition — and expects those companies to repay that loyalty. The officials said they worried that Anthropic was selectively enforcing its policies based on politics and using vague terminology that allows its rules to be interpreted broadly.
For instance, Anthropic currently limits how the FBI, Secret Service and Immigration and Customs Enforcement can use its AI models because those agencies conduct surveillance, which is prohibited by Anthropic’s usage policy.
One of the officials said Anthropic’s position amounts to making a moral judgment about how law enforcement agencies do their jobs.
The policy doesn’t specifically define what it means by “domestic surveillance” in a law enforcement context and appears to be using the term broadly, creating room for interpretation.
Other AI model providers also restrict surveillance, but they offer more specific examples and often include carveouts for law enforcement activities. OpenAI’s policy, for instance, prohibits “unauthorized monitoring of individuals,” language that leaves room for lawful monitoring by law enforcement.
Anthropic declined to comment.
Know More
Anthropic’s decision to limit how law enforcement and national security agencies use its models has turned into a headache for some private contractors that work with those agencies.
That’s because in some cases, Anthropic’s Claude models — available through Amazon Web Services’ GovCloud — are the only top-tier models cleared for top-secret security situations, the officials said.
Anthropic has a service aimed specifically at national security customers and struck a deal with the federal government to offer its services to government agencies for a $1 fee.
Anthropic also does work with the US Department of Defense, although its policies still prohibit the use of its models for making weapons.
Anthropic has clashed with Trump administration officials in the past, when it opposed a proposed law that would have blocked US states from passing their own AI regulations.

Reed’s view
Anthropic’s kerfuffle with law enforcement contractors — coming as larger rivals like Google have quietly backed off attempts to enforce progressive politics through their corporate policies — raises a difficult question for the most powerful companies in Silicon Valley: How much should software providers be able to control how their products are used, particularly once they are sold into government agencies?
For instance, an agency could pay for a subscription or negotiate a pay-per-use contract with an AI provider, only to find out that it is prohibited from using the AI model in certain ways, limiting its value.
Traditional software isn’t like that. Once a government agency has access to Microsoft Office, it doesn’t have to worry about whether it is using Excel to keep track of weapons or pencils.
It has also generally been frowned upon in the government contracting world for vendors to “pick and choose” how their software can be used. Over the past decade, though, activist employees have often pressed their employers to stay out of the defense industry in particular.
The tensions between Anthropic and the White House are part of a broader battle pitting the AI “safety” movement, which has allies inside the independent AI startup, against many of Anthropic’s rivals and the Republican administration, both of which prefer to move faster.
One thing protecting Anthropic in all this is that its models perform extremely well. Eventually, though, its politics could end up hurting its government business.