

Updated Feb 27, 2024, 11:26pm EST
Google CEO calls AI tool’s controversial responses ‘completely unacceptable’

Reuters/Gonzalo Fuentes

The Scoop

Google CEO Sundar Pichai addressed the company’s Gemini controversy Tuesday evening, calling the AI app’s problematic responses around race unacceptable and vowing to make structural changes to fix the problem.

Google suspended its Gemini image creation tool last week after it generated embarrassing and offensive results, in some cases declining to depict white people, or depicting women and people of color when prompted to create images of Vikings, Nazis, and the Pope.

The controversy spiraled when Gemini was found to be creating questionable text responses, such as equating Elon Musk’s influence on society with Adolf Hitler’s.

Those responses drew sharp criticism, especially from conservatives, who accused Google of an anti-white bias.

Most companies offering AI tools like Gemini build guardrails to mitigate abuse and avoid bias, partly in response to earlier failures. Image generation tools from companies like OpenAI, for instance, were criticized for predominantly depicting white people in professional roles and Black people in stereotypical ones.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said.

Pichai said the company has already made progress in fixing Gemini’s guardrails. “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” he said.

Google confirmed the memo, and the full note from Pichai is below.

The View From Sundar Pichai

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models, e.g. our 1 million-token long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.

Reed’s view

The Gemini controversy has provided fodder for critics on the right, who often accuse tech companies of liberal bias.

But it isn’t really about bias. Google made technical errors in tuning Gemini’s behavior: the problem lies not in the underlying models themselves, but in the software guardrails that sit on top of them.
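To illustrate the distinction between a model and the guardrails layered on top of it, here is a minimal, purely hypothetical sketch. Gemini's actual mechanism isn't public; every name and rule below is invented. The point is that a rewriting layer can skew outputs even when the underlying model is untouched:

```python
import re

def diversity_guardrail(prompt: str) -> str:
    """Hypothetical guardrail: naively append diversity instructions
    to any prompt that mentions people."""
    if re.search(r"\b(person|people|vikings?|pope)\b", prompt, re.IGNORECASE):
        # An over-broad rule like this fires even for historically specific
        # prompts, producing the kind of anachronistic images described above.
        return prompt + ", depicted as a diverse range of ethnicities and genders"
    return prompt

def generate_image(prompt: str) -> str:
    """Stand-in for the real model call; returns the prompt the model
    would actually receive after the guardrail rewrites it."""
    return f"<image for: {diversity_guardrail(prompt)}>"

# The guardrail alters the request before the model ever sees it:
print(generate_image("a Viking ship crew"))
```

Fixing this kind of bug means changing the rewriting rules, not retraining the model, which is why such failures are comparatively quick to patch.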

This is a challenge facing every company building consumer AI products — not just Google.

Based on my understanding of this saga, nobody at Google set out to force Gemini to depict the Pope as a woman or Vikings as Black people, nor did anyone want it to draw a moral equivalence between Musk and Hitler. This was an attempt at reducing bias that went awry.

If anything, this debacle shows how fast Google is moving. Like all large companies, Google has slowed down over the years. But the generative AI race has forced it to speed up product development.

The Gemini mistakes are a fixable, technical problem and Pichai’s note to staff Tuesday night shows that the company is working on it.

But the reputational problems this raises may not be so easy to fix.
