In today’s edition, we look at findings by Georgetown University that show a lopsided balance between AI safety research and AI research overall.
Semafor Technology
April 3, 2024
 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

AI safety is a buzzword in tech right now, and I recently wrote about how the definition of the term is expanding to include more things. The top AI companies say they are devoted to the topic, and on a long drive over the weekend, I heard AI safety mentioned more than a dozen times in Sam Altman’s interview with Lex Fridman. Altman even said OpenAI would, at some point in the future, be mostly devoted to AI safety.

So when Georgetown University reached out about a recent study on AI safety, one of its findings really stood out: AI safety research, while growing, makes up only 2% of global research on the technology overall.

If you know anything about AI, you know that being outgunned really matters. To research these issues, you need access to millions if not billions of dollars just to build the computers capable of training and running these models. The disparity is a reflection of that and also an indicator of how inadequate safety research is today.

But more importantly, the lack of understanding of AI safety should worry lawmakers who are trying to pass comprehensive legislation on the technology. It’s difficult to admit that we don’t know what we don’t know. But that does seem to be where we are.

Advocating for more taxpayer funds to go into academic grants for AI safety research isn’t the kind of thing that wins elections, but it’s probably what we need here — more than new laws and executive orders. Read below for more on the research.

Move Fast/Break Things
Pier Marco Tacca/Getty Images

➚ MOVE FAST: New tricks. Training manuals can be annoying and inefficient. The owner of Pizza Hut, KFC, and Taco Bell wants to address that problem with an internal, AI-powered app to help workers remember how to make new menu items or set the right oven temperature. It could do a better — and faster — job helping employees in real time.

➘ BREAK THINGS: Old playbooks. The chair of the U.S. Federal Communications Commission indicated yesterday that the agency would vote to restore net neutrality rules at its next meeting this month. That would reverse the Trump administration’s rollback. But if the former president wins in November, whatever is approved later in April will likely be undone again.

Reed Albergotti

AI safety research doesn’t meet the hype

THE SCOOP

In public policy conversations about artificial intelligence, “safety research” is one of the biggest topics that has helped drive new regulations around the world.

But according to a new study, there appears to be more talk about safety than hard data.

AI safety accounts for only 2% of overall AI research, according to a new study conducted by Georgetown University’s Emerging Technology Observatory that was shared exclusively with Semafor.

Georgetown found that American scholarly institutions and companies are the biggest contributors to AI safety research, but that output pales in comparison to the volume of overall AI research, raising questions about public and private sector priorities.

Of the 172,621 AI research papers published by American authors between 2017 and 2021, only 5% were on safety. For China, the imbalance was even starker, with only 1% of published research focusing on AI safety.

Nevertheless, studies on the topic are on the rise globally, with AI safety research papers more than quadrupling between 2017 and 2022.
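As a back-of-the-envelope check on those figures (the paper count and share are quoted from the study above; the derived total is my own arithmetic, not a number from the report):

```python
# Figures quoted from the Georgetown study; the derived count is simple arithmetic.
us_total_papers = 172_621   # US-authored AI research papers, 2017-2021
us_safety_share = 0.05      # 5% of those focused on safety

us_safety_papers = round(us_total_papers * us_safety_share)
print(f"Roughly {us_safety_papers:,} US safety papers over five years")  # ~8,631
```

Even with safety publications quadrupling globally over the period, that works out to only a few thousand US safety papers a year against a much larger body of general AI research.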

Unsplash/Mohamed Nohassi

Check out Reed's view on why the U.S. government should put its money where its mouth is on AI safety.  →

Quotable
“Apple asked us not to do it.”

The Daily Show host Jon Stewart to Federal Trade Commission Chair Lina Khan during Monday night’s episode. Stewart said he wanted to have Khan on a podcast and the iPhone maker told him, “please, don’t talk to her.”

What We’re Tracking
Ann Wang/Reuters/File Photo

The 7.4 magnitude earthquake in Taiwan is expected to deliver a $60 million hit to TSMC, a blow to a chipmaking industry that is already straining to meet demand. Unfortunately, this is not an anomaly.

The industry that makes the brains of nearly every electronic device is particularly vulnerable to natural disasters. That’s in part because there are so few locations where the devices are made. And it’s also due to the extremely sensitive nature of making chips.

The processes that rely on atomic-level precision can’t withstand even the slightest vibrations or variations in humidity, and they take weeks to complete. So it’s likely TSMC will need to write off a batch of its most advanced chips that were mid-process during the earthquake, when it was forced to evacuate facilities.

Here are some other examples of natural disasters that affected global chip production. Japan’s chipmakers, some of the biggest in the world, were knocked offline during the 2011 tsunami that roiled the country. In 2019, Japan was again hit when a massive typhoon suspended work at chip plants.

In 2020-2021, Taiwan grappled with another natural disaster: a drought that forced the country to ration water, disrupting water-intensive chipmakers just as the pandemic had already increased demand for electronics. And semiconductor manufacturers in Texas were forced to shut down plants in 2021 after a winter storm exposed the state’s outdated energy infrastructure.

You can’t change the precarious process of chipmaking, but if efforts in the U.S. and elsewhere succeed in diversifying manufacturing, they will likely reduce the overall impact when any one plant is taken down. On the other hand, extreme weather events are on the rise because of climate change, and it appears no geography is immune to the effects.

Obsessions
Reuters/Carlos Barria

The nice thing about the AI wave is that, unlike Web 2.0, you are not the product. Instead of making their offerings entirely free, companies like OpenAI, Microsoft, and Perplexity have found success in charging subscriptions.

But over the past week or so, AI companies have been flirting with free models, raising the specter of a return to the user-as-product monetization era. OpenAI just started letting people prompt ChatGPT for free, without even logging in. And Perplexity told AdWeek it would soon start selling ads.

My view is that this is a blip on the radar that will last only as long as foundation models remain in this adolescent stage, where they’re kind of amazing but also kind of useless for anything important. As foundation models get better and more useful, the current ad-based internet economy could crumble. But AI companies probably want to grab as much revenue as they can now to extend their runway.
