Semafor Signals

US agents shut down huge Russian AI bot farm as fears over misinformation grow

Insights from MIT Technology Review, The Guardian, and The Atlantic

Updated Jul 10, 2024, 9:34am EDT
North America

The News

US authorities revealed they had foiled an artificial intelligence-powered Russian disinformation network that ran nearly 1,000 accounts on Elon Musk’s social media platform, X.

The “bot farm” was created by an editor at Russian state-owned media outlet RT, and funded by the Russian security service, according to court documents.


The accounts tended to pose as US citizens and posted pro-Russia messaging, including claims that Ukraine is part of Russia. X suspended the accounts, and the FBI said the bot farm was intended to “undermine our partners in Ukraine.”


SIGNALS


AI is increasing the spread of propaganda and disinformation

Sources: MIT Technology Review, The Washington Post

Artificial intelligence is increasingly being used at the governmental level, by autocratic and democratic countries alike, both to manipulate public opinion and to censor certain types of content, the MIT Technology Review noted. In Venezuela, for example, a fake English-language news outlet promoted messages favorable to the government in 2023, and China has made chatbots that don’t respond to questions related to the 1989 Tiananmen Square massacre. The FBI’s takedown of the Russian bot farm may be the first successful foiling of an AI misinformation operation of this kind, but it is unlikely to be the last, The Washington Post reported. “This isn’t even the tip of the iceberg,” a researcher told the outlet. “This is the drip of the iceberg.”

US may be unprepared for AI disinformation in November

Source: The Guardian

Worries about disinformation are running high in the US ahead of November’s presidential vote, especially given the role disinformation played in the 2016 election, and US regulations may not be up to date enough to curb the rise of AI-powered influence campaigns, The Guardian noted. After voters in New Hampshire received phone calls earlier this year from an AI voice clone of Biden discouraging them from voting, the state banned calls that use AI audio, but some damage had no doubt already been done. Congress is working on how to regulate the technology too, but any measures would likely not come into force in time for the election, leaving voters to fend for themselves.

AI has made ‘post-truth’ a reality

Sources: The Atlantic, The Hill

The rise of AI has deepened distrust in an information environment that has already made telling fact from fiction difficult, The Atlantic wrote. The technology makes it easier than ever to generate content designed to amplify biases and misgivings, and to “collect evidence that sustains a particular worldview and build a made-up world around cognitive biases on any political or pop-culture issue.” A professor and computer scientist argued in 2019 that, with the rise of AI, perception would no longer equal reality, and that teaching people to be skeptical of content they encounter online would become increasingly important. That is now the case, The Atlantic argued: We are living in a “post-truth” world.
