Semafor Technology
March 31, 2023
 
Reed Albergotti

Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Louise Matsakis and me that gives an inside look at the struggle for the future of the tech industry.

The rapid and unexpected jump in AI capabilities has created a full-blown panic in every corner of society. While there are definitely things to be concerned about, the discourse has been confusing and, at times, alarmist.

It’s wonderful that we’re having this debate now, at the beginning stages of development. That didn’t happen with internet 1.0 or 2.0. It took decades for society to wake up to the pitfalls of upending the way information is distributed.

AI is potentially even more transformative than the internet, but the dangers are too diverse to lump into one “we should pause AI development” message. Below, we’ll break down several of the main categories of concern.

And in some ways, AI is scary because it will force us to confront very old and very human issues, instead of just kicking the can down the road. I hope reading this will make you less panicked and able to think more clearly about how humanity should respond to this new technology.

Housekeeping: This newsletter will send from a new email address starting next week. To make sure we make it to your inbox, add technology@semafor.com to your contacts.

Are you enjoying Semafor Tech? Help us spread the word!

Move Fast/Break Things

➚ MOVE FAST: New reality. Advanced artificial intelligence has moved to the forefront of the economy and national conversation in the U.S., with everyone from Senator Mitt Romney to late-night talk show host Stephen Colbert weighing in. And there’s no going back.

➘ BREAK THINGS: Virtual reality. After it was widely reported that Apple would launch its virtual reality headset in June, breathing new life into the industry, new reports suggest the device might be delayed again.

Semafor Stat

Percentage of Americans who reported using a “buy now, pay later” service, according to a survey published by Consumer Reports in August, up from just 18% the previous January. Earlier this week, Apple launched its own BNPL product, dubbed Apple Pay Later, which will compete with offerings from firms like Klarna, Afterpay, and Affirm.

Reed Albergotti and Louise Matsakis

The AI factions of Silicon Valley

THE NEWS

The debate over the potential harms of artificial intelligence has divided the tech industry into various factions.

Elon Musk is among those who signed an open letter calling for a six-month moratorium on training models more powerful than GPT-4. AI safety evangelist Eliezer Yudkowsky wrote in TIME magazine that the dangers of AI might require destroying “a rogue datacenter by airstrike.”

Notably absent from the open letter were the leaders of the biggest players in commercializing the technology, from OpenAI to Google to Microsoft.


KNOW MORE

There are several groups in the tech industry with different, and sometimes competing, concerns about how AI is being developed. Some AI researchers want to race forward as quickly as possible, while others urge caution. Some focus on the social problems AI might exacerbate; others want to stop progress indefinitely.

In other words, the “AI is dangerous” message can’t be lumped into one category. Here’s a breakdown of five broad areas of criticism.

Killer robots: If you’ve read or watched science fiction, you’ve probably thought about this. It’s the plot of movies and shows like The Terminator, The Matrix, and Westworld. Some of the off-the-wall responses people have elicited from AI chatbots have put adherents of the killer robot theory, such as Yudkowsky, on high alert.

Job loss: We’ve already seen AI replace humans in recent years. In manufacturing, advanced robotics have taken jobs. Checkout cashiers, financial analysts, and customer service reps have also been displaced. But if recent breakthroughs are any indication of what is to come, the next 10 to 20 years could accelerate the speed of disruption.

Bias: This is another AI problem that is not new. In areas ranging from facial recognition and criminal sentencing to recruiting and healthcare, AI algorithms have worked against underrepresented groups and perpetuated, or even amplified, human stereotypes and biases. As AI becomes more sophisticated and widespread, there is little evidence to suggest that these problems have been adequately addressed.

Privacy: To get a sense of how AI can supercharge authoritarian repression, look at Xinjiang, a region of China populated by Uyghurs and other Muslim minorities. The area has become a laboratory for the Chinese Communist Party’s AI-powered surveillance systems, which can track almost every aspect of people’s lives.

National security: There are two sides to this one. On the one hand, China and the U.S. are competing to develop AI, including for military uses. On the other hand, critics have questioned the morality of autonomous weapons that would leave life and death decisions to computers.

There are also concerns that AI could accelerate the rate of computer hacks and flood the internet with disinformation, from fake news to realistic-looking videos depicting things that didn’t actually happen.

REED’S VIEW

If you believe we’re on the cusp of artificial general intelligence, or AGI, and that self-aware computers will soon take over as the dominant species on Earth, then all other concerns really become secondary. There isn’t any specific evidence that this will happen. The theory relies on the basic idea that breakthroughs in AI are happening fast and that we will eventually get there.

But preventing the development of AI effectively means halting technological progress. AI is part of nearly every industry now, from biotech to farming. Economic growth depends in part on advancing AI. Stopping it in its tracks could mean a different kind of dystopia, even if there are no killer robots.

We need massive investment, probably from the government in the form of academic research funding, to study theoretical ways to keep AI contained to its intended uses. There are burgeoning fields such as mechanistic interpretability, the study of why AI models do what they do. Breakthroughs in that area would help detect possible deception in AI models, which would be a sign that we might be losing control.
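
To make that concrete, here is a minimal, purely illustrative sketch of one common interpretability technique: training a linear “probe” to test whether a concept is readable from a model’s internal activations. The activations and labels below are random stand-ins rather than outputs of a real model; in actual research they would be captured from a specific layer of the network under study.

    # Illustrative only: train a linear probe on (stand-in) hidden activations
    # to test whether a concept is linearly readable from them.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-ins for activations captured from one layer of a real model:
    # 200 examples, each a 64-dimensional activation vector, plus toy
    # binary labels for the concept being probed (e.g. "statement is true").
    activations = torch.randn(200, 64)
    labels = torch.randint(0, 2, (200,)).float()

    probe = nn.Linear(64, 1)  # the probe is just one linear layer
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-2)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(500):
        optimizer.zero_grad()
        loss = loss_fn(probe(activations).squeeze(-1), labels)
        loss.backward()
        optimizer.step()

    # If a probe classifies activations well above chance, the concept is
    # represented at that layer -- one clue to "why the model does what it does."
    preds = (probe(activations).squeeze(-1) > 0).float()
    print(f"probe accuracy: {(preds == labels).float().mean().item():.2f}")

Detecting something like deception would be far harder, but the shape of the work is the same: open the model up and test what its internal states actually encode.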

Almost every other concern is tied to older, underlying issues in society. Job displacement is a good example.

For more than 30 years, new technologies and globalization have caused people to lose their jobs at incredibly high rates. Remember Michael Moore’s Roger & Me, the 1989 documentary about Flint, Michigan that shed light on the social destruction resulting from widespread job displacement? AI could turn entire countries into Flint.

But attempts to ban the use of AI will likely be as successful as the Luddites were at destroying textile machinery in the early 1800s.

We probably need social programs to help people transition. That means universal healthcare, better unemployment benefits, more access to childcare, and affordable housing. All of those things are politically divisive in the U.S., which makes this country exceptionally vulnerable to disruption from AI.

The national security area is perhaps the most complex. When nuclear weapons were created, people understandably worried about the annihilation of the human race. Over time, humanity has learned to live with the possibility that one wrong move could end all life in an instant. It would be wise for nation states to create norms and treaties around the use of autonomous weapons now, rather than waiting for the AI equivalent of Hiroshima.

AI will happen whether we like it or not. Whether it becomes a tool used to fight climate change and disease or a destabilizing force that threatens our institutions is not a function of the technology itself. Instead, it relies on society’s willingness to confront social and political challenges that are decades old.


LOUISE’S VIEW

The most extreme arguments about the dangers of artificial intelligence are coming from a small group of people, a large percentage of whom belong to the same well-monied subculture: effective altruism. Proponents of the ideology, which has grown popular in tech circles, say they want to use “evidence and reason” to help others as much as possible. Many effective altruists, or EAs, have concluded that the best way to do that is to prevent AI from wiping out humanity in the future.

Tesla CEO Elon Musk, PayPal co-founder Peter Thiel, and Facebook co-founder Dustin Moskovitz are just some of the wealthy tech figures who have poured large sums of money into EA-affiliated “AI Safety” initiatives in recent years. In 2015, for example, Musk donated $10 million to the Future of Life Institute, which organized the open letter and was co-founded by Jaan Tallinn, another wealthy technologist, who helped develop Skype.

Because they provide the funding, wealthy tech donors have had an outsized impact on the way AI and its potential downsides are discussed in Silicon Valley and other corridors of power. In their telling, AI is an urgent threat to humanity, but don’t worry — catastrophe can be avoided if EA-aligned companies and nonprofits have the power to steer how it is developed.

Under that culture of “AI Safety,” artificial intelligence research has become more opaque, and research labs like OpenAI are publishing less information about how their models work, making it difficult for third parties to study them. Many current harms of this technology, such as the poor working conditions of data labelers, the proliferation of non-consensual deepfake porn, or the ongoing use of stolen artwork to train AI systems, have gotten relatively little attention.

AI will almost certainly have major impacts on society, but to mitigate the damage, the world needs to look beyond the wealthy tech figures and EA-community experts who have largely dominated the conversation thus far. OpenAI CEO Sam Altman has said himself that he wants the governance of “AGI to be widely and fairly shared.” It’s time to take him up on that idea.

ROOM FOR DISAGREEMENT

Slowing down the progress of artificial intelligence might not be such a bad idea, Casey Newton argued in Platformer. “I don’t know if AI will ultimately wreak the havoc that some alarmists are now predicting,” he wrote. “But I believe those harms are more likely to come to pass if the industry keeps moving at full speed.”

NOTABLE

  • Princeton computer science researchers Sayash Kapoor and Arvind Narayanan argued on Substack that the open letter from the Future of Life Institute “presents a speculative, futuristic risk,” and “distracts from the real issues and makes it harder to address them.”
  • New York Times opinion writer David Wallace-Wells tried to get to the bottom of why AI researchers are working on a technology many believe will destroy humanity, concluding that catastrophic thinking has permeated many corners of society.
Watchdogs

A group of strange bedfellows came out against legislation that would allow the U.S. government to ban TikTok, including conservative pundit Tucker Carlson, Sen. Rand Paul (R-KY), and Rep. Alexandria Ocasio-Cortez (D-NY). “If Republicans want to continuously lose elections for generations, they should pass this bill to ban TikTok, a social media app used by 150 million people, primarily young Americans,” Paul said during a Senate speech. “Have faith that our desire for freedom is strong enough to survive a few dance videos.”

The pushback suggests that the idea of fully banning TikTok might not be as popular as it seemed just last week, when the social media platform’s CEO endured over five hours of brutal questioning before Congress. The Biden administration will likely do something about TikTok to address national security concerns posed by its Chinese ownership, but what form that will take remains anyone’s guess.

Louise

One Good Text

Justin Hotard leads Hewlett Packard Enterprise’s high performance computing and artificial intelligence business groups.

China Window

The CEO of Midjourney purposely blocked users from creating satirical images of Chinese leader Xi Jinping to prevent the AI image generator from being censored in the People’s Republic. “Political satire in China is pretty not-okay,” David Holz wrote in Discord messages unearthed by The Washington Post. He added that “the ability for people in China to use this tech is more important than your ability to generate satire.”

Holz certainly isn’t the first tech entrepreneur to make concessions in exchange for access to Chinese consumers. But absent any regulations, companies like Midjourney are in an incredibly powerful position to unilaterally decide how AI tools should be used. Earlier this week, for instance, Midjourney decided to suspend free trials after several fake images made with its art generator went viral, like one of the pope in a white puffer coat.

Louise

Ahem

We noted last week that ex-U.S. President Donald Trump’s warning about his impending arrest was good for business, leading to a stock bump for the company that hopes to acquire the parent of his social media network, Truth Social. After a Manhattan grand jury indicted Trump yesterday in a case involving alleged hush money payments, shares of Digital World Acquisition Corp. rose again, this time 9% after the market closed.

The bet is people will flock to Truth Social to hear the former commander-in-chief’s complaints and narration of his legal proceedings. But other social networks can take to heart that if there are pictures of Trump’s orange face in an orange jumpsuit, they will be shared across all platforms.

Enthusiasms

Researchers at the California Institute of Technology are working on a smart bandage aimed at patients with chronic wounds that won’t heal. The bandage can monitor injuries with sensors and administer medications, such as antibiotics, at the right time. It can also deliver mild electrical stimulation, helping wounds heal faster. The tech, funded by agencies like the National Institutes of Health and the National Science Foundation, has only been tested in mice so far, but could be coming to humans soon.

Reed

How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends, and colleagues to get those network effects rolling.

Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise
