Updated Mar 17, 2024, 7:45pm EDT

NewsGuard launches suite of AI anti-misinfo tools


The Scoop

A fact-checking outlet is trying to teach artificial intelligence not to unwittingly spread fake pictures, videos, and other hoaxes about Joe Biden and Donald Trump in the lead-up to the 2024 election.

In an announcement first shared with Semafor, NewsGuard said this month it is rolling out a new line of services aimed at countering election-related AI-generated false information, images, video, and audio. The services are meant to prevent foreign governments, political actors, and internet trolls from using AI tools to spread false information that could sway the outcome of elections around the world this year.

“The malign actors have really begun to perfect the art of abusing not only the open internet, but also AI,” NewsGuard CEO Gordon Crovitz told Semafor. “We’re now living in an AI enhanced internet, right? And the malign actors are producing more content, more cheaply, more targeted, and more divisive and more persuasive.”


Know More

Launched in 2018, NewsGuard employs dozens of journalists who have rated over 30,000 news sources according to a series of criteria correlated with journalistic standards, including whether the outlets correct their errors or separate news from opinion. Over the last several years, the organization has shifted its focus toward halting the spread of misinformation by AI. It has announced partnerships with tech companies including Microsoft, which licensed NewsGuard’s products to help train the new Bing, a search engine powered by the same software as ChatGPT.

The organization said it is stepping up its election misinformation “fingerprinting”: a continuously updated feed of known false claims designed to help AI models detect and avoid inadvertently sharing false information, and to help those models “detect prompts and responses that might convey the misinformation.”

As part of the push, NewsGuard said it is ramping up its risk-testing and “red-teaming” efforts, drawing on its knowledge of the tactics and motivations of malicious actors to ensure that AI text, image, video, and audio generators can’t be prompted to “circumvent guardrails and exploit AI systems” meant to prevent misinformation. The company also wants machines to understand more basic facts: for example, it is working to incorporate the dates and times of elections, drawn from official government websites, into the models it works with, so that voters get correct information on specifics like polling locations.


Earlier this year, the organization announced the creation of its 2024 Elections Misinformation Tracking Center, which tracks myths and falsehoods spreading online.


The View From the Right

Fact-checking operations like NewsGuard have angered conservatives, who complain that the group unfairly rates their sites as less than reliable. In December, Texas Attorney General Ken Paxton and the conservative media organizations the Daily Wire and the Federalist sued the State Department, alleging that by paying a $25,000 licensing fee to NewsGuard, the agency was funding technology that censored right-leaning news outlets.

NewsGuard said in a statement shared with Reuters that its work for the State Department’s Global Engagement Center was a minuscule part of its business, and was limited to “tracking false claims made in state-sponsored media outlets in Russia, China, and Venezuela.”


Notable

  • Over the last year, NewsGuard has been sounding the alarm that AI has let agenda-driven actors mass-produce false and highly misleading news. An investigation found that from May to December of last year, there was a 1,000% increase in fake articles about elections mimicking real news.
  • A report out late last month from the new data-heavy news nonprofit Proof found that language models like ChatGPT frequently spit out false or misleading answers to questions about elections. They struggled most with granular questions about how and when to vote; as the authors noted, “specificity is the enemy of accuracy.” (Or as one expert put it after a model gave him a list of fake polling place addresses: “This is hot garbage.”)