
Anthropic eases AI safety restrictions to avoid slowing development

Feb 25, 2026, 6:47am EST

Anthropic dropped its pledge to pause development if the capability of its AI models far outstripped safety research.

The previous “Responsible Scaling Policy” relied on a binary trigger: if an AI model reached a certain capability threshold, the company would stop training it unless it could be confident the model was safe. The new system allows more flexible responses rather than “tying [ourselves] to the mast,” a board member wrote.

Part of the reasoning, one executive told TIME, was that it didn’t make sense to “make unilateral commitments … if competitors are blazing ahead.” The CEOs of Anthropic and Google DeepMind recently expressed similar concerns, suggesting that no single AI company could slow development on its own without regulatory intervention.
