
Reed’s view
Which is worse: the unintended consequences of technology — or the unintended consequences of regulation?
That’s the question raised by California’s new AI safety legislation, which comes up for a vote this week. The bill, which mandates reporting, is a more modest step than the more punitive safety measure Gov. Gavin Newsom vetoed last year. But AI companies oppose it anyway, seeing it as the first step onto a slippery slope of onerous regulation. The White House, I’m told, sees it as another reason to push through a federal block on state AI regulation.
And with the housing crisis in the news, the companies are drawing a particular analogy. OpenAI warned lawmakers they could be creating the “CEQA of AI innovation.” Signed into law by then-Gov. Ronald Reagan in 1970, the California Environmental Quality Act seemed like a simple requirement to ensure development doesn’t harm the environment. Instead, it hobbled the state’s ability to do much of anything without long, drawn-out legal battles, often paid for by taxpayers.
Housing is one reason anyone in tech will tell you the industry thrives in California in spite of its regulations, not because of them. Now the state is trying to dig out of that hole. A game of chicken is underway: How far can lawmakers go before the state’s biggest source of tax revenue begins to flee? Legislators should realize that a simple law meant to avert potential catastrophic risks looks like an existential risk to those who oppose it.
Room for Disagreement
Anthropic came out in support of the bill, saying in a blog post that it would simply formalize the processes it and other AI companies already follow. The transparency requirements will improve safety across frontier models, Anthropic added, and without them, companies may face incentives to prioritize growth over safety and transparency.

Notable
- SB53, while a positive effort to revive the vetoed SB1047, “penalizes the absence of paperwork, not tangible harm, creating a compliance minefield without clear standards,” and still misses the mark on ensuring safety, a policy analyst for the tech industry trade group Chamber of Progress argued.
- Cybersecurity experts are concerned about the GOP-led initiative in Congress to block state-level AI regulation, arguing states currently handle sensitive data and face rising threats from AI-powered attacks, The Wall Street Journal reported.