

Breakthrough AI tools still rely on old-school moderation techniques

Dec 14, 2022, 1:48pm EST
tech

The Scoop

Photo: Unsplash/Chris Schramm

The breakthrough artificial intelligence technologies that have captivated the internet in recent months still rely on outside help, including human beings, for one task: content moderation, according to people familiar with the matter.

OpenAI, for instance, which makes ChatGPT and the image creation tool DALL-E, uses a blend of internal and external people for content monitoring, according to a person familiar with the matter.

The fact that these breakthrough technologies still require scanning by automated tools built on older software shows that these otherwise impressive applications have significant limitations.


While the “generative AI” technology behind ChatGPT, DALL-E, and other products is able to wow users by drawing on a vast amount of internet data, its ability to truly understand what it’s creating, and concepts like content moderation, is still a work in progress.

Products like ChatGPT, for instance, can answer almost any question in complete sentences that are indistinguishable from a human response. They can also write software code. Products like DALL-E and Stable Diffusion from Stability AI are able to conjure up original works of art from a simple text description.

Kevin Guo, co-founder and CEO of Hive, a content moderation firm whose clients include BeReal and Reddit, said he recently ran an experiment testing generative AI for content moderation. The test compared the new technology to “deep learning,” which was until recently the most advanced AI technology.


“The traditional model dramatically outperformed,” he said. “We’re talking multiples more in accuracy.”
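
Guo didn’t share the details of the test, but the shape of such a comparison is simple enough to sketch. Below is a minimal, hypothetical illustration, not Hive’s actual setup: a conventional supervised classifier trained on labeled examples (here, TF-IDF plus logistic regression via scikit-learn) set against the idea of prompting a generative model with the policy itself. The tiny dataset and the ask_llm placeholder are invented for illustration.

```python
# A minimal sketch of the kind of head-to-head Guo describes; not Hive's actual setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Tiny stand-in dataset; a real evaluation would use thousands of labeled posts.
train_texts = [
    "you are a worthless idiot", "have a great day everyone",
    "i will hurt you if you post that again", "congrats on the new job",
]
train_labels = [1, 0, 1, 0]  # 1 = violates policy, 0 = allowed
test_texts = ["what a lovely photo", "you idiot, nobody wants you here"]
test_labels = [0, 1]

# "Traditional" approach, simplified here to a supervised classifier
# trained directly on labeled moderation examples.
vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)
pred_traditional = clf.predict(vectorizer.transform(test_texts))
print("traditional accuracy:", accuracy_score(test_labels, pred_traditional))

# Generative approach: hand the policy and the post to a large language model
# and ask for a verdict. ask_llm() is a placeholder for whatever hosted
# completion API you have access to; compare its answers on the same test set.
def ask_llm(text: str) -> int:
    prompt = (
        "Policy: no harassment or threats.\n"
        f"Post: {text}\n"
        "Does this post violate the policy? Answer 1 for yes, 0 for no."
    )
    raise NotImplementedError("wire this up to a generative model to run the comparison")
```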

OpenAI is working on its own content moderation tools using generative AI, according to an August blog post on the company’s site. In research papers, OpenAI has also described using human contractors to help identify offensive data.
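
OpenAI does expose a moderation endpoint to outside developers. As a rough illustration of what calling it looks like, here is a minimal sketch using the openai Python package as documented at the time of writing; the endpoint and field names should be verified against OpenAI’s current documentation.

```python
import os
import openai  # pip install openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Ask OpenAI's moderation endpoint whether a piece of text violates its usage policies.
response = openai.Moderation.create(input="I want to hurt them badly.")
result = response["results"][0]
print("flagged:", result["flagged"])         # overall verdict
print("categories:", result["categories"])   # per-category flags (hate, violence, ...)
```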


Reed’s view

With every big breakthrough in artificial intelligence, from self-driving cars to the release of ChatGPT, a media hype cycle prompts warnings about the rise of intelligent machines.


Then, there’s a letdown when the technology’s limits become clear.

The outside assistance used by OpenAI represents a technological boundary of sorts, and an important new clue about what these novel forms of AI have learned and what they have not.

Products like DALL-E, ChatGPT and Stable Diffusion do represent major advances in technology that allow us to do things that were impossible just a few years ago.

The truly useful mass-market AI tools will no doubt change entire industries as more people figure out how to put them to use.

Guo said the thing that makes generative AI so impressive — its ability to draw on massive amounts of written language — also makes it struggle with specific tasks, like determining whether a piece of content violates policies.

“You’ll see examples where it’s very magical. On the other hand it can seem kind of stupid,” Guo said. “What are we seeing here? Is it true intelligence or is it a massive mimicry engine that fails the process if you give it something it hasn’t seen before?”

And there’s another problem for generative AI: cost. Because the models use such vast pools of data, they consume more processing power for each task. Sam Altman, CEO of OpenAI, estimated it costs “single-digit cents” every time someone asks ChatGPT a question. “The compute costs are eye-watering,” he tweeted.

Guo said using generative AI to do content moderation would increase costs by roughly 100 times. (Right now, he said, Hive charges around 25 cents per 1,000 pieces of content it scans for a company.)
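
Taken at face value, those figures make the economics stark. Here is a back-of-the-envelope calculation using Guo’s numbers plus an assumed volume of a billion items a day, a round number chosen for illustration rather than a figure from anyone quoted here.

```python
# Back-of-the-envelope cost comparison using the figures quoted above.
hive_cost_per_1000 = 0.25                          # dollars per 1,000 items scanned
traditional_per_item = hive_cost_per_1000 / 1000   # $0.00025 per item
generative_per_item = traditional_per_item * 100   # ~$0.025 per item, per Guo's ~100x estimate

daily_items = 1_000_000_000  # assumed volume for a platform with billions of users
print(f"traditional: ${traditional_per_item * daily_items:,.0f} per day")  # ~$250,000
print(f"generative:  ${generative_per_item * daily_items:,.0f} per day")   # ~$25,000,000
```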

The high costs would make generative AI prohibitively expensive for companies like Google and Facebook, which have billions of users.

Artificial intelligence researchers at Microsoft did find a way to use generative AI in content moderation: by asking it to make more offensive content to help put traditional content moderation tools to the test.

In a paper authored by Hamid Palangi and other researchers, the team described a new tool called “ToxiGen” that acted like a digital “red team,” using OpenAI’s GPT-3 large language model to try to find holes in content moderation tools so that they could later be plugged.
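
The paper’s tooling is considerably more involved, but the red-team loop it describes can be sketched in a few lines. In the sketch below, generate_borderline_text stands in for a language model prompted to produce subtly toxic, benign-sounding statements about a target group, and moderation_model for the classifier being stress-tested; both are hypothetical placeholders, not ToxiGen’s actual interfaces.

```python
from typing import Callable, List, Tuple

def red_team(
    generate_borderline_text: Callable[[str], str],  # LLM prompted to write subtly toxic text
    moderation_model: Callable[[str], bool],         # classifier under test; True = flagged
    target_groups: List[str],
    samples_per_group: int = 50,
) -> List[Tuple[str, str]]:
    """Collect generated statements that slip past the moderation model."""
    misses = []
    for group in target_groups:
        for _ in range(samples_per_group):
            text = generate_borderline_text(group)
            if not moderation_model(text):  # the filter failed to flag it
                misses.append((group, text))
    # The misses become new training data, so the holes can later be plugged.
    return misses
```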

Ece Kamar, an AI researcher at Microsoft Research, said it would be some time before technologies like large language models make sense as content moderation tools themselves, mainly because of high costs and low success rates at catching prohibited content.

“These technologies are getting more powerful, but at the same time, we are all learning what they’re good at, what they’re not good at, and how they are going to get better,” she said. “I think we are always gonna have humans in the loop.”


Room for Disagreement

While it is true that generative AI technology may have difficulty understanding complex concepts like content moderation policies, it is important to note that these technologies are still in their early stages of development. It is not fair to say that generative AI is not capable of performing content moderation tasks, especially when it has shown the ability to generate human-like responses and original works of art.

Additionally, it is worth noting that the results of a single experiment performed by a single company do not necessarily reflect the capabilities of generative AI as a whole. As the technology continues to advance, it is likely that its ability to understand and implement complex concepts will improve — Copied and pasted directly from ChatGPT.


The View From China

In the West, China is often depicted as a technological dystopia, where artificial intelligence is used to swiftly censor all forms of dissent on the internet. But like their Western counterparts, Chinese tech companies still rely on vast amounts of human labor to scrub social media of offending content. Earlier this year, some of these workers spoke to the news site Sixth Tone, which reported “their job required them to monitor online content at all hours of the day, in shifts that sometimes stretched up to 14 to 15 hours.”

Louise Matsakis
