Mar 31, 2023, 12:25pm EDT
tech

The AI factions of Silicon Valley

Elon Musk. Getty Images/Justin Sullivan

The News

The debate over the potential harms of artificial intelligence has divided the tech industry into various factions.

Elon Musk is among those who signed an open letter calling for a six-month moratorium on training models more powerful than GPT-4. AI safety evangelist Eliezer Yudkowsky wrote in TIME magazine that the dangers of AI might require destroying “a rogue datacenter by airstrike.”

Notably absent from the open letter were the leaders of the biggest players in commercializing the technology, from OpenAI to Google to Microsoft.

Know More

There are several groups in the tech industry with different, and sometimes competing, concerns about how AI is being developed. There are AI researchers who want to race forward as quickly as possible. Others want to be cautious. Some focus on the social issues AI might exacerbate. Others want to stop progress indefinitely.

In other words, the “AI is dangerous” message can’t be lumped into one category. Here’s a breakdown of five broad areas of criticism.

Killer robots: If you’ve read or watched science fiction, you’ve probably thought about this. It’s the plot of movies and shows like The Terminator, The Matrix, and Westworld. Some of the off-the-wall responses people have elicited from AI chatbots have put adherents of the killer-robot theory, such as Yudkowsky, on high alert.

Job loss: We’ve already seen AI replace humans in recent years. In manufacturing, advanced robotics have taken jobs. Checkout cashiers, financial analysts, and customer service reps have also been displaced. But if recent breakthroughs are any indication of what is to come, the next 10 to 20 years could accelerate the speed of disruption.

Bias: This is another AI problem that is not new. In areas ranging from facial recognition to criminal sentencing, recruiting, and healthcare, AI algorithms have worked against underrepresented groups and perpetuated or even amplified human stereotypes and biases. As AI becomes more sophisticated and widespread, there is little evidence that these problems have been adequately addressed.

Privacy: To get a sense of how AI can supercharge authoritarian repression, look at Xinjiang, a region of China populated by Uyghurs and other Muslim minorities. The area has become a laboratory for the Chinese Communist Party’s AI-powered surveillance systems, which can track almost every aspect of people’s lives.

National security: There are two sides to this one. On the one hand, China and the U.S. are competing to develop AI, including for military uses. On the other hand, critics have questioned the morality of autonomous weapons that would leave life-and-death decisions to computers.

There are also concerns that AI could accelerate the rate of computer hacks and flood the internet with disinformation, from fake news to realistic-looking videos depicting things that didn’t actually happen.

Reed’s view

If you believe we’re on the cusp of artificial general intelligence, or AGI, and that self-aware computers will soon take over as the dominant species on Earth, then all other concerns really become secondary. There isn’t any specific evidence that this will happen. The theory relies on the basic idea that breakthroughs in AI are happening fast and that we will eventually get there.

But preventing the development of AI effectively means halting technological progress. AI is part of nearly every industry now, from biotech to farming. Economic growth depends in part on advancing AI. Stopping it in its tracks could mean a different kind of dystopia, even if there are no killer robots.

We need massive investment, probably by the government in the form of academic research, to study the theoretical ways we can keep AI contained to its intended uses. There are burgeoning fields such as mechanistic interpretability, which seeks to understand why AI models do what they do. Breakthroughs in that area would help detect possible deception in AI models, which would be a sign that we might be losing control.

Almost every other concern is tied to other, underlying issues in society. Job displacement is a great example.

For more than 30 years, new technologies and globalization have caused people to lose their jobs at incredibly high rates. Remember Michael Moore’s Roger & Me, the 1989 documentary about Flint, Michigan, that shed light on the social destruction resulting from widespread job displacement? AI could turn entire countries into Flint.

But attempts to ban the use of AI will likely be about as successful as the Luddites were at halting the spread of textile machinery in the early 1800s.

We probably need social programs to help people transition. That means universal healthcare, better unemployment benefits, more access to childcare, and affordable housing. All of those things are politically divisive in the U.S., which makes this country exceptionally vulnerable to disruption from AI.

The national security area is perhaps the most complex. When nuclear weapons were created, people understandably worried about the annihilation of the human race. Over time, humanity has learned to live with the possibility that one wrong move could end all life in an instant. It would be wise for nation states to create norms and treaties around the use of autonomous weapons now, rather than waiting for the AI equivalent of Hiroshima.

AI will happen whether we like it or not. Whether it becomes a tool used to fight climate change and disease or a destabilizing force that threatens our institutions is not a function of the technology itself. Instead, it relies on society’s willingness to confront social and political challenges that are decades old.

Louise’s view

The most extreme arguments about the dangers of artificial intelligence are coming from a small group of people, a large percentage of whom belong to the same well-moneyed subculture: effective altruism. Proponents of the ideology, which has grown popular in tech circles, say they want to use “evidence and reason” to help others as much as possible. Many effective altruists, or EAs, have concluded that the best way to do that is to prevent AI from wiping out humanity in the future.

Tesla CEO Elon Musk, PayPal co-founder Peter Thiel, and Facebook co-founder Dustin Moskovitz are just some of the wealthy tech figures who have poured large sums of money into EA-affiliated “AI Safety” initiatives in recent years. In 2015, for example, Musk donated $10 million to the Future of Life Institute, which organized the open letter and was itself co-founded by Jaan Tallinn, another wealthy technologist who helped develop Skype.

Because they provide the funding, wealthy tech donors have had an outsized impact on the way AI and its potential downsides are discussed in Silicon Valley and other corridors of power. In their telling, AI is an urgent threat to humanity, but don’t worry — catastrophe can be avoided if EA-aligned companies and nonprofits have the power to steer how it is developed.

Under that culture of “AI Safety,” artificial intelligence research has become more opaque, and research labs like OpenAI are publishing less information about how their models work, making it difficult for third parties to study them. Meanwhile, many current harms of this technology, such as the poor working conditions of data labelers, the proliferation of non-consensual deepfake porn, and the ongoing use of stolen artwork to train AI systems, have gotten relatively little attention.

AI will almost certainly have major impacts on society, but to mitigate the damage, the world needs to look beyond the wealthy tech figures and EA-community experts who have largely dominated the conversation thus far. OpenAI CEO Sam Altman has said himself that he wants the governance of “AGI to be widely and fairly shared.” It’s time to take him up on that idea.

Room for Disagreement

Slowing down the progress of artificial intelligence might not be such a bad idea, Casey Newton argued in Platformer. “I don’t know if AI will ultimately wreak the havoc that some alarmists are now predicting,” he wrote. “But I believe those harms are more likely to come to pass if the industry keeps moving at full speed.”

Notable

  • Princeton computer science researchers Sayash Kapoor and Arvind Narayanan argued on Substack that the open letter from the Future of Life Institute “presents a speculative, futuristic risk,” and “distracts from the real issues and makes it harder to address them.”
  • New York Times opinion writer David Wallace-Wells tried to get to the bottom of why AI researchers are working on a technology many believe will destroy humanity, concluding that catastrophic thinking has permeated many corners of society.