February 23, 2024

Technology

 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

While most of the conversation around the regulation of AI in the U.S. has focused on the White House executive order, it might be California that once again beats Washington to the passage of actual legislation.

This has happened before. When Congress balked on privacy legislation, California stepped in with a landmark bill.

But that was 2018. The economic boom stemming from social media companies had subsided, and that industry had already consolidated. Venture capital money and tech talent were flowing out of the state in search of cheaper housing and more opportunities.

The AI boom is just beginning. San Francisco is finally starting to recover from the pandemic, and an influx of AI talent and money represents its best chance to fully bounce back.

Regulating a technology that is still in its infancy, and that represents massive potential revenue for the state, is a tightrope act for California state senator Scott Wiener, who recently introduced a comprehensive AI bill, one of the few in the country.

I spoke to him about that delicate balance and how he sees AI regulation affecting the trajectory of the technology, and its economic impact on his home city of San Francisco and his home state.

Move Fast/Break Things

➚ MOVE FAST: Data deals. Google has agreed to pay Reddit $60 million a year to use the social media platform’s data to train AI models in one of the biggest AI licensing deals to date, Reuters reported. It shows how valuable good training data is to creators of very large language models.

➘ BREAK THINGS: No data. AT&T left thousands of customers in the dark Thursday when they lost service for hours. The company blamed an “incorrect process” used to expand the network, and service was eventually restored. But the outage raised questions about the resilience of U.S. communication networks.

Artificial Flavor

The launch of ChatGPT set off alarm bells among teachers worried about how kids could use it to cheat in school. But the nonprofit online education outfit Khan Academy has found it to be a useful teaching tool and developed Khanmigo, an in-house tutor built on OpenAI’s model. The Washington Post’s Josh Tyrangiel tested it out and found it was the “first AI software I’m excited for my kids to use.”

Khan Academy fine-tuned OpenAI’s model, training it on its own lesson plans and sample problems, which made it less prone to hallucinations, including in math. When Tyrangiel gave the wrong answer to an algebra problem, Khanmigo offered suggestions to help him think through the issue, but didn’t provide the solution: “Hmm, not quite. Remember, we want to isolate Z on one side of the equation. To do this, we should first try to get rid of the ‘+8’ on the left side. What operation could we use to do that?”
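Khan Academy hasn’t published the details of that work, but for readers curious what fine-tuning looks like in practice, here is a minimal sketch using OpenAI’s public fine-tuning API. The file name, base model, and example tutoring data below are illustrative assumptions, not Khanmigo’s actual setup.

# Hypothetical sketch: fine-tuning an OpenAI chat model on tutoring examples.
# The file name, base model, and training data are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each line of lessons.jsonl is one chat-format training example, e.g.:
# {"messages": [{"role": "system", "content": "You are a patient math tutor."},
#               {"role": "user", "content": "Solve z + 8 = 15 for z."},
#               {"role": "assistant", "content": "What operation would remove the +8 from the left side?"}]}
training_file = client.files.create(
    file=open("lessons.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a tunable base model; the resulting custom model
# can then be called like any other chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)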

And when he asked a question about race relations in U.S. history, it didn’t shy away and offered a thoughtful response. Separately, an education official in Indiana testing the software said it helped teachers provide a more personalized level of education to students of varying levels in a single class. It’s an example of how AI is moving past the hype phase and into more practical uses.

Q&A

Scott Wiener is a state senator in California.

Q: Your AI bill would require makers of very large, powerful AI models to comply with strict regulations aimed at making them safer. But these companies are already limiting models because of worries they will be held legally liable. So why do we need the regulation?

A: We don’t exactly know what the companies are doing. We know there are labs that take it very seriously and labs that may take it a bit less seriously. I’m not going to name names, but there was one large corporation where there was some concern that the safety protocols they put into place were actually not adequate. The reality is that you can’t just trust people to do what you want them to do; you need to have a standard. And that’s what this is about. Self-regulation doesn’t always work, and having clear, consistent safety standards for all labs developing these incredibly large, powerful models is a good thing. It’s good for the labs, too, because then they know what’s required of them.

Q: The White House issued an AI executive order, but it’s anybody’s guess whether there will be any federal AI legislation. Is this another situation where California leads the way and becomes the de facto federal regulator?

A: I wouldn’t characterize it as de facto federal regulations. I would characterize it as California protecting California residents. And, of course, because of the size of our economy and the role that we play in technology, we definitely set a standard for other states and hopefully, eventually for the federal government. But unfortunately, I don’t have enormous confidence that Congress will act and pass a strong AI safety law.

You’re right, the federal government failed around net neutrality, so we stepped in. The federal government failed around data privacy, so we stepped in. And here, I wouldn’t call it a complete failure because I give President Biden enormous credit for spending a lot of energy and political capital to try to formulate safety standards. But an executive order has its limits. And unless Congress enacts strong safety standards into statute, then we can’t have complete confidence that these standards are binding. And that’s why we need to act at the state level.


Q: One group that has sprouted up in the last year is the Effective Accelerationist movement [which believes in unrestricted tech development]. Some adherents are on your side when it comes to your housing initiatives. Do you talk with them?

A: I know a lot of people in AI with a lot of different perspectives. This is a very carefully and meticulously crafted bill. It’s always going to be, as with every bill, a work in progress, and we continue to welcome constructive feedback. But we spent nearly a year meeting with people with all sorts of perspectives. We made quite a few changes to the bill before we introduced it in response to constructive feedback from folks in the AI space, including people who may tend a little bit more towards the accelerationist end of things.

Since we rolled it out, I was bracing myself because I didn’t know exactly what to expect, especially representing the beating heart of AI innovation in San Francisco. I thought maybe people would start yelling at me, and that has not happened. I’ve gotten a lot of positive feedback. We’ve gotten some constructive critiques, which I love because it can make the bill better. And there are some people who I think are more on the accelerationist side of things, and I assumed they would just hate the bill, and they don’t. They may have some feedback on it, but I’ve been pleasantly surprised.

Q: So you’re not getting disinvited to dinner parties because you’re regulating AI?

A: More people seem to be mad at me for my speed governor bill requiring new cars to have hard speed limit caps.

Q: So literal acceleration rather than effective acceleration?

A: Exactly.

Read here for the rest of the conversation, including what the AI carrots are in Wiener's bill.  →

Semafor Stat

$2 trillion

The market cap that Nvidia hit Friday morning, a milestone that highlights the chipmaker’s astonishing rise in the supercharged, post-LLM world.

Obsessions

Google had to pause the image generation feature of its new Gemini AI model yesterday. But it wasn’t due to a technical glitch or a server error. Rather, Google’s effort to avoid controversial outputs created a controversy of its own. When asked to create images of “white people,” the tool often refused, claiming that doing so could perpetuate stereotypes.

It’s an embarrassing incident for Google, but it reflects a deeper issue with AI safety: it’s often more about brand safety than actual safety. Long before ChatGPT, tech companies learned the hard way that chatbots can go off the rails and be made to say horrible things.

Now that large language models have become a critical part of the tech stack, avoiding embarrassment is even more crucial. Lawmakers in the U.S. and Europe see bias as a major risk for AI models and any slip-up could create regulatory headaches.

One problem is that even intelligent humans have a difficult time navigating this minefield and would prefer to avoid topics that might upset someone. We can’t expect better-than-human nuance from AI chatbots that don’t actually comprehend the meaning of their outputs.

Cutting chatbots off from certain topics makes them less useful as consumer products. Companies may be faced with a very difficult choice: accept that chatbots might offend people, or severely limit their capabilities.
