Semafor Logo

Feb 23, 2024, 12:38pm EST

California leads in U.S. efforts to rein in AI

Liz Hafalia/The San Francisco Chronicle via Getty Images

The Scene

California has often played mop-up on tech regulations in the U.S., passing laws on privacy and net neutrality years after federal bills failed to pass. Now it may regulate AI before D.C. even takes a shot.

States in general have been more proactive than Washington, with more than 400 bills addressing artificial intelligence. But California is one of the few that have come up with comprehensive proposals as opposed to focusing on specific areas, like deepfakes.

Last week, Microsoft and Workday endorsed one of those plans, which focuses on automated tools that make decisions like employment screenings or credit approvals.


Prominent California Senator Scott Wiener has another that sets safety standards for the largest, most powerful AI models and also establishes a public cloud computing resource for smaller startups, academics, and others that don’t have the same means as larger companies to develop the technology.

In the edited conversation below, we spoke to Wiener, who represents the Bay Area, about his rationale for regulating a technology that has barely gotten off the ground, and how the AI boom is affecting San Francisco.


The View From Scott Wiener

Q: Your AI bill would require makers of very large, powerful AI models to comply with strict regulations aimed at making them safer. But these companies are already limiting models because of worries they will be held legally liable. So why do we need the regulation?


A: We don’t exactly know what the companies are doing. We know there are labs that take it very seriously and labs that may take it a bit less seriously. I’m not going to name names, but there was one large corporation where there was some concern that the safety protocols they put into place were actually not adequate. The reality is that you can’t just trust people to do what you want them to do; you need to have a standard. And that’s what this is about. Self-regulation doesn’t always work, and having clear, consistent safety standards for all labs developing these incredibly large, powerful models is a good thing. It’s good for the labs, too, because then they know what’s required of them.

Q: Do you think liability plays a role here? With Section 230, social media companies were allowed to do what they wanted without fear of litigation. Most people I talk to think that Section 230 would not apply to the outputs of large language models. Does that give you any comfort?

A: In our bill, we empower the [state] attorney general to pursue remedies and accountability if someone violates the law. I do think there are common law legal remedies that may be available right now if someone creates a large, powerful model that does something terrible and causes significant harm. What we’re trying to do is not to supplant or replace any of that, but to say this is what’s expected of you in terms of doing due diligence to evaluate the safety of your model and mitigate any safety risks that you detect.


Q: The White House issued an AI executive order, but it’s anybody’s guess whether there will be any federal AI legislation. Is this another situation where California leads the way and becomes the de facto federal regulator?

A: I wouldn’t characterize it as de facto federal regulations. I would characterize it as California protecting California residents. And, of course, because of the size of our economy and the role that we play in technology, we definitely set a standard for other states and hopefully, eventually for the federal government. But unfortunately, I don’t have enormous confidence that Congress will act and pass a strong AI safety law.

You’re right, the federal government failed around net neutrality, so we stepped in. The federal government failed around data privacy, so we stepped in. And here, I wouldn’t call it a complete failure because I give President Biden enormous credit for spending a lot of energy and political capital to try to formulate safety standards. But an executive order has its limits. And unless Congress enacts strong safety standards into statute, then we can’t have complete confidence that these standards are binding. And that’s why we need to act at the state level.

Q: You mentioned California residents. There’s so much energy right now in the Bay Area around AI. How do you balance that with the need to regulate the technology causing all this excitement?

A: As a San Franciscan and as a representative of San Francisco, I’m incredibly proud that San Francisco yet again is on the cutting edge of transformative technology. People love to beat San Francisco up, but the reality remains, San Francisco is an intensely creative, innovative, and amazing place. We’re showing it yet again with AI. I want San Francisco and California to lead on AI.

We need to be clear that AI innovation is not inconsistent with AI safety, that the two go hand in hand. In fact, if we have a lack of safety, that will dramatically undermine public confidence in AI, and that’s not good for AI innovation. AI is so exciting. There are so many great things that are happening and will happen that will make the world a better place. Let’s just be safe about it. Let’s take basic steps to at least understand what the risks are and then mitigate those risks.

Q: One group that has sprouted up in the last year is the Effective Accelerationist movement [which believes in unrestricted tech development]. Some adherents are on your side when it comes to your housing initiatives. Do you talk with them?

A: I know a lot of people in AI with a lot of different perspectives. This is a very carefully and meticulously crafted bill. It’s always going to be, as with every bill, a work in progress, and we continue to welcome constructive feedback. But we spent nearly a year meeting with people with all sorts of perspectives. We made quite a few changes to the bill before we introduced it in response to constructive feedback from folks in the AI space, including people who may tend a little bit more towards the accelerationist end of things.

Since we rolled it out, I was bracing myself because I didn’t know exactly what to expect, especially representing the beating heart of AI innovation in San Francisco. I thought maybe people would start yelling at me, and that has not happened. I’ve gotten a lot of positive feedback. We’ve gotten some constructive critiques, which I love because it can make the bill better. And there are some people who I think are more on the accelerationist side of things, and I assumed they would just hate the bill, and they don’t. They may have some feedback on it, but I’ve been pleasantly surprised.

Q: So you’re not getting disinvited to dinner parties because you’re regulating AI?

A: More people seem to be mad at me for my speed governor bill requiring new cars to have hard speed limit caps.

Q: So literal acceleration rather than effective acceleration?

A: Exactly.

Q: For CalCompute, which would give compute resources to researchers and startups working in AI, do you have a number of Nvidia GPUs in mind?

A: A lot of people are excited about CalCompute. We’ve had other models like this nationally and in other states, but they’ve been very focused on researchers and academic research, which is great. I believe this will be the first time we’ve created a program like this for people who are just simply building AI models, giving people access at low cost or no cost to the compute they need to build these large models, which are quite expensive. And we want to make sure that we’re not closing off innovation and competition around large language models.

Q: And you want to see that compute power used by California-based startups, versus other states?

A: Yes. California and the Bay Area, especially San Francisco, are already a magnet for people who want to be on the cutting edge of AI. We’re seeing that people, including those who left during the pandemic, are now coming back. And CalCompute will cement California’s status as the capital of AI.

Q: Will these data centers in CalCompute have to be green and run on renewable energy? That may need to include nuclear power.

A: It can be. California is producing so much solar in particular, but we’re moving towards wind, too. We have invested dramatically in clean energy storage. Unfortunately, we’ve had some hiccups. The California Public Utilities Commission seems determined, at times, to undermine our solar industry. But we’re working on that as well. California needs to stay on the leading edge of clean energy. That absolutely includes some of the new nuclear technologies that are incredibly promising.

Q: Do you mean fusion, or are you talking about smaller, modular reactors?

A: Obviously fusion is like the Holy Grail. That’s a total game changer, but I’m talking about the nuclear generators that we’re seeing in other parts of the world, that don’t present the risks of some of the old mega nuclear plants and that are completely clean.

Q: Of the companies building these big foundation models, OpenAI is probably the biggest ascendant player now. Have you talked with Sam Altman much, and what’s that relationship like?

A: I’ve known them for quite some time. We’ve spoken with a number of the major labs, including OpenAI and Anthropic, and we’ve interacted with the large tech companies as well. So we’ve definitely cast a wide net in terms of getting feedback, and OpenAI and Anthropic are doing incredibly exciting work. We really value their insights and feedback.

Q: Have you talked with them specifically about the regulation?

A: Yes, about the bill specifically.

Q: Are they supportive of it?

A: We’ve had very collaborative and thoughtful discussions with them. I really don’t want to speak for them in terms of their take, but they’ve been very helpful conversations.

Q: AI is booming here, but the incident in which a Waymo was set on fire raises the question of whether the city is hospitable to entrepreneurs.

A: We went down this road about a year ago when [Cash App founder] Bob Lee was murdered, and there was this tidal wave of characterizations of safety in San Francisco. It must have been a homeless person, it’s unsafe on the streets, and so forth. And it turned out that he was murdered by the brother of someone that he was with, one of the oldest stories in human existence, and had nothing to do with the safety of San Francisco. So I’m very hesitant to characterize or read into what one incident can represent to the world. We do not have an epidemic of people burning cars in San Francisco.

Q: You came out here in the mid-90s, when the first dot-com boom was in full swing. Are there similarities to the recent AI craze?

A: It feels very different. I came in ’97, right as the dot-com boom was just starting to take off. The first day I was out here, I went on a Friday night thinking I’m going to spend a day or two trying to find an apartment. On Saturday morning, I went to an open house for just a regular apartment, nothing special, and there was a line down the block to get in, and people were trying to bribe the landlord. That was dot-com. And we’re not seeing that now, thankfully. Rents are still way higher than I wish they were, but they’re not off the charts. So it’s a different vibe today.

But I think in some sense, it’s a more sustainable vibe. Right now, we’re recovering from the pandemic. Downtown is going to need a lot of help, and it’s going to take many years for a full recovery. But new activity is happening. We have small businesses that are struggling, but we also see new small businesses opening, so it’s not as big of a boom-bust as we’ve seen in the past. It’s a little more slow and steady.

Q: You know a lot of people working in AI. Are they giving you any input on how it can be used in political campaigns?

A: Some people I know have said, ‘Hey, have you thought about doing this, have you thought about doing that.’ And we’ve had some meetings just to talk at a high level with folks in the AI world about what the future might look like. There are some scary things around AI and campaigns, some of the fake stuff, forgeries, fake voices, and deepfakes. But there are also some really promising efficiencies. It’s too soon to say exactly what direction it’s going to go in terms of campaigns, but it could be a really powerful tool.

Q: Somebody pitched me a chatbot that was in a politician’s voice. Like a robocall, but you could actually converse with it. That seems out there to me.

A: I find that to be a little creepy. I guess if you’re fully disclosing it from the beginning, that it’s not actually you. I think a lot of people would find that creepy.

Q: But it is a way to get your policies out there instead of just going to your website and bullet points.

A: Assuming that it didn’t say something terrible. ChatGPT is pretty amazing and impressive, but there are times where it will respond with something that’s really off. Nothing’s perfect and so that would be really unfortunate if it then, on my behalf, responded with something terrible.

Q: A political gaffe made by AI.

A: Of course, I would never make a gaffe. Human beings don’t make gaffes.
