Jan 4, 2024, 7:29am EST

Semafor Signals


Researchers think there’s a 5% chance AI could wipe out humanity

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. REUTERS/Aly Song

The News

A survey of artificial-intelligence researchers found that a majority thought there was a 5% chance the tech could pose an existential threat to humanity.

Nearly 58% of more than 2,700 researchers agreed that AI could trigger catastrophic consequences — though they disagreed widely about the nature of the risks.

The survey also found that researchers think AI is likely to hit major milestones, such as generating music indistinguishable from human-created music, sooner than previously believed.


SIGNALS

Semafor Signals: Global insights on today's biggest stories.

The threat is likely more philosophical than physical

Source: Scientific American

AI has rapidly become an “alien” intelligence, and fears that the tech could behave in a way that doesn’t align with the values of its creators have proliferated. But the technology isn’t advanced enough yet to actually bring about the catastrophic consequences that many AI-skeptics predict, Nir Eisikovits, a professor of philosophy at the University of Massachusetts Boston, argued in July. AI systems don’t have the capacity to make complex, multilayered decisions yet, Eisikovits noted, and “it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.” In his view, the threat is less about the possibility of AI making world-altering decisions, and more about how few decisions humans will need to make in their own lives: An “increasingly uncritical embrace of it … means the gradual erosion of some of humans’ most important skills,” he wrote.

Focus on the real-world harms AI causes first

Source: Nature

The threat of AI wiping out humanity makes for catchy headlines, but the tech is already causing societal harms, an editorial in Nature argued. Biased decision-making and job elimination are present-day concerns, the journal noted, as is the misuse of facial recognition technology by autocratic governments. “Fearmongering narratives about existential risks are not constructive. Serious discussion about actual risks, and action to contain them, are,” the editorial argued.

It might be time to hit the brakes on developing AI

Source: Time

AI with human-level intelligence is on the horizon, and it’s theoretically possible that future AI systems could develop other AI, meaning that the tech could make its own advancements. The result would be a “superintelligence” that humans can’t control, argued Otto Barten, director of the Existential Risk Observatory, and Joep Meindertsma, the founder of advocacy group PauseAI. At the moment, the competitive nature of AI labs means that tech companies are incentivized to create new products all the time, possibly setting aside ethical considerations and taking risks in order to do so. Humans “have historically not been very good at predicting the future externalities of new technologies,” the authors wrote.
