
Reed’s view
The cold war between Anthropic and the White House spilled into public view this week when White House AI czar David Sacks attacked Anthropic co-founder Jack Clark on X, accusing him of running a “sophisticated regulatory capture strategy based on fear-mongering.”
The core claim: The “AI safety” conversation is a mere commercial tactic by one of the four main US players in the AI race.
That’s a convenient line for the accelerationists in the White House and at some rival companies, but my own assessment, based on many years of weighing corporate BS, is that people like Clark and Anthropic CEO Dario Amodei aren’t faking it. This is a real philosophical divide about the nature of the technology, one that has evolved since the debate burst out of subreddits three years ago.
A concept I find helpful in this context is Moravec’s paradox: the observation that tasks humans find hard, like high-level reasoning, can be easy for machines, while tasks humans find trivially easy can stump them. Even as AI makes huge leaps in performance with more data and more compute, contributing to scientific discovery, transforming Hollywood, and automating corporate workflows, it continues to stumble on simple tasks. Those basic mistakes mean that while AI might help us cure cancer, it will still require human supervision for most tasks for the foreseeable future.
From Sacks’ point of view, this is a “Goldilocks” situation, in which all the predictions of AI destroying humanity and replacing most human workers were wrong, and we’re instead seeing gradual improvements and a competitive marketplace primed to enable a wave of innovation.
But from Clark’s vantage point (up close to some of the top researchers in the world), Sacks’ assessment is premature. As its capabilities grow, AI may eventually learn to improve itself, compounding the pace of progress. “This technology really is more akin to something grown than something made,” Clark writes in his essay. “We are growing extremely powerful systems that we do not fully understand.”
It’s possible that models built on today’s architectures will keep improving with scale while making little progress toward the reliability those simple tasks demand. In that case, scale alone won’t finish the job, and researchers would focus their brainpower on the last hurdles standing in the way of the AI holy grail: predictability and interpretability. Clear those, and you’d have a fundamentally different, and much safer, kind of AI than we have today.
If not, companies like Anthropic will have to decide whether to keep spending billions on “growing” something they can’t fully control and don’t understand.

Room for Disagreement
In a recent video debate published on Substack, AI safety commentator Liron Shapira agreed that employees inside Anthropic are genuinely concerned about AI alignment, but argued that building AI anyway makes the company’s safety mission hypocritical, because Anthropic benefits from framing safety as a tractable problem. “The fact that Anthropic exists and they’re still building AI — they’re arguably the biggest offenders at tractability washing because if they’re building AI, that makes it okay for anybody to build AI,” he said.
Rob Miles, the YouTuber whom Shapira was debating, said Anthropic needs a different approach to government: the company is well positioned to handle the safety problem, he argued, but “it would be better if none of us were progressing towards” superintelligence.

Notable
- “I don’t know exactly how AI is going to shake out; I don’t think anybody does,” Federal Reserve Governor Stephen Miran said Thursday at Semafor’s World Economy Summit, adding that anyone who claims to is “a hubristic fool.”
- The issue of chatbot safety, particularly when it comes to protecting children, has drawn scrutiny from a swath of lawmakers across the political spectrum, Semafor’s Morgan Chalfant writes. Some have shown interest in pursuing federal action, but it’s too early to say what shape that might take.