Semafor Signals

GPT-4 only makes it slightly easier to create a bioweapon, OpenAI says

Insights from Gizmodo, Georgetown University, and Science|Business

Feb 1, 2024, 5:42pm EST
Tech / North America
An illustration showing the ChatGPT smartphone app surrounded by other AI apps, June 6, 2023.
AFP via Getty Images/Olivier Morin

The News

OpenAI set out to determine how useful its tools would be to malicious actors, and the results were good news, sort of: GPT-4, its most powerful AI model, poses “at most” a slight risk in helping someone create a biological threat.

The artificial intelligence research company said Wednesday that it created a “tripwire” to indicate whether access to its large language model (LLM) would make it easier for someone to obtain information useful for committing an act of bioterrorism, such as engineering and spreading a deadly virus. The research is part of OpenAI’s broader “Preparedness Framework,” which the company hopes will safeguard against AI-fueled threats.


SIGNALS


OpenAI could be downplaying concerns in response to US lawmakers

Sources: Gizmodo, Roll Call, Reuters

“Facing pressure from policymakers, OpenAI would like to ease our concerns that its large language models barely help at all in creating bioweapons,” Gizmodo wrote of the company’s study. “But hey, what’s a few percentage points when the outcome is, oh I don’t know, the end of humanity?” the outlet joked. Last year, a group of scientists at Harvard and MIT conducted a study suggesting that “capable future” LLM-powered chatbots could “help malicious actors cause pandemics unless properly safeguarded.” 

As AI has grown in popularity, U.S. lawmakers have called for urgent protections against AI-fueled threats, which have been the subject of intense debate over the last year. Some leading industry executives, including OpenAI CEO Sam Altman, have appeared to agree with Congress on the need to regulate AI. In testimony before a Senate committee last year, Altman said, “I think if this technology goes wrong, it can go quite wrong,” adding, “We want to work with the government to prevent that from happening.”

Biorisk is already out there

Sources: OpenAI, Georgetown University

One of the OpenAI project’s main takeaways was that biorisk information is relatively easy to find, even without help from AI. “Online resources and databases have more dangerous content than we realized,” the report found, though bioterrorism remains rare. A December study from Georgetown University’s Center for Security and Emerging Technology drew the same conclusion: “Biorisk is already possible without AI, even for non-experts.” However, AI executives have warned that some AI tools can fill gaps left by non-AI sources in developing bioweapons. “Certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of expertise,” AI company Anthropic’s chief executive Dario Amodei said. “We found that today’s AI tools can fill in some of these steps.”

AI tools could amplify bioterrorism threats, but also counter them

Sources: Science|Business, Axios

“If you look at individual AI tools in isolation, it’s possible that you could underestimate the risk that they pose,” said a microbiology researcher at Birmingham University. But if a chatbot is integrated with a biological design tool and a robotics platform, she warned, the threat could increase greatly. On the other hand, researchers say AI could also help develop antibodies for viruses, blunting the risk of a successful bioterrorism operation.
