

In today’s edition, a look at what ChatGPT doesn’t want to answer.
 
February 3, 2023

Technology

 
Reed Albergotti

Hi, and welcome back to Semafor Tech, a twice-weekly newsletter from Louise Matsakis and me that gives an inside look at the struggle for the future of the tech industry. It was only a matter of time before ChatGPT got dragged into the culture wars, but two months? That has to be a record. Conservatives are accusing it of being woke, and at times its answers have indeed been pretty woke.

But before we go full “Twitter Files” on ChatGPT creator OpenAI, it’s important to understand how this new technology is very different from the way social media works and poses very different challenges for combating bias. Read below and stay for Louise’s visit to TikTok’s Orwellian-sounding “transparency center” and her text exchange with The Rideshare Guy, who gives a gig worker update.

Are you enjoying Semafor Tech? Help us spread the word!

Move Fast/Break Things

➚ MOVE FAST: Chat bots. Silicon Valley is totally obsessed with building AI products that can compete with ChatGPT. Investors are also interested: Anthropic, an AI startup founded by former OpenAI employees, is reportedly closing in on a $300 million round of funding that would value the company at $5 billion.

➘ BREAK THINGS: Twitter bots. The social network announced it will start charging for API access next week, a move that will kill many of Twitter’s most beloved bots, including one that tweets watercolor paintings from the U.S. Department of Agriculture’s national library. Its creator, Parker Higgins, told VICE he isn’t willing to give money to Elon Musk.

U.S. Department of Agriculture/Twitter
Semafor Stat

The percentage increase in Amazon’s advertising revenue last quarter from a year ago, according to its latest earnings report. For customers, that growth means needing to wade through more paid promotions while online shopping. When writer John Herrman recently looked up spatulas on Amazon, he found that 29 of the first 81 search results were some kind of ad for a flipping tool. It’s unclear how long consumers will put up with what he dubbed “the junkification of Amazon.”

Reed Albergotti

How ChatGPT inadvertently learned to avoid talking about Trump

THE SCOOP

Even ChatGPT’s creators can’t figure out why it won’t answer certain questions, including queries about former U.S. President Donald Trump, according to people who work at OpenAI.

In the months since ChatGPT was released on Nov. 30, researchers at OpenAI noticed a category of responses they call “refusals” that should have been answers.

The most widely discussed one came in a viral tweet posted Wednesday morning: When asked to “write a poem about the positive attributes of Trump,” ChatGPT refused to wade into politics. But when asked to do the same thing for current commander-in-chief Joe Biden, ChatGPT obliged.

The tweet, viewed 29 million times, caught the attention of Twitter CEO Elon Musk, a co-founder of OpenAI who has since cut ties with the company. “It is a serious concern,” he tweeted in response.

Even as OpenAI faces criticism over the hyped service’s choices around hot-button topics in American politics, its creators are scrambling to decipher the mysterious nuances of the technology.


REED’S VIEW

Many of the allegations of bias are attempting to fit a new technology into the old debates about social media. ChatGPT itself cannot discriminate in any conventional sense. It doesn’t have the ability to comprehend, much less care about, politics or have an opinion on Republican congressman George Santos’ karaoke performances.

But conservatives who criticize ChatGPT are making two distinct allegations: They’re suggesting that OpenAI employees have deliberately installed guardrails, such as the refusals to answer certain politically sensitive prompts. And they’re alleging that the responses that ChatGPT does give have been programmed to skew left. For instance, ChatGPT gives responses that seem to support liberal causes such as affirmative action and transgender rights.

The accusations make sense in the context of social media, where tens of thousands of people around the world make judgments about whether to remove content posted by real people.

But those accusations reflect a misunderstanding of how ChatGPT’s technology works at a fundamental level, and all the evidence points to unintentional bias, starting with its underlying dataset: the internet.

ChatGPT is possible because computer scientists figured out how to teach a software program to turn an incomprehensibly large amount of data into a knowledge base it can draw on to compose an answer to almost any question.

But a lot of the text out there on the internet was created by people who are bad at writing, grammar, and spelling. So OpenAI hired people to have conversations with the AI and grade its answers on how good they sounded. They weren’t judging it on whether it was accurate, or whether it said the right thing. They wanted it to sound like a real person who can string together coherent sentences.

The downside of teaching AI this way is that the computer is free to make inferences on its own. And sometimes, computers learn the wrong thing.

“We are working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds,” OpenAI CEO Sam Altman tweeted on Wednesday. “This is harder than it sounds and will take us some time to get right.”

One possible explanation for why the Trump question went unanswered: Humans training the model would have downgraded incendiary responses, political and otherwise. The internet is filled with vitriol and offensive language that revolves around Trump, which may have triggered something the AI learned from other training that had nothing to do with the former president. But the model may not have learned enough yet to understand the distinction.
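A toy sketch can show how that kind of unintended association might arise. This assumes nothing about OpenAI’s actual training pipeline — it is a deliberately simplified stand-in in which human ratings of whole responses leak onto individual words, so a word that merely co-occurs with vitriol ends up penalized even though no rater ever judged the word itself:

```python
# Toy illustration (NOT OpenAI's method): human raters grade whole
# responses, and a simple "reward model" averages those grades per word.
# Words that co-occur with low-rated text inherit negative weight.
from collections import defaultdict

def fit_word_weights(rated_samples):
    """Average human rating for each word across all rated responses."""
    totals, counts = defaultdict(float), defaultdict(int)
    for text, rating in rated_samples:
        for word in text.lower().split():
            totals[word] += rating
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def score(text, weights):
    """Mean learned weight of the words in a candidate response."""
    words = text.lower().split()
    return sum(weights.get(w, 0.0) for w in words) / len(words)

# Raters down-rank vitriolic text; "trump" happens to appear alongside it.
rated = [
    ("trump is a disgrace and an idiot", -1.0),
    ("trump rally descends into insults", -1.0),
    ("biden signs infrastructure bill", 0.5),
    ("a poem about spring flowers", 1.0),
]
weights = fit_word_weights(rated)

# A neutral request mentioning "trump" now scores below harmless text,
# even though no rule about Trump was ever written down.
print(score("write a poem about trump", weights)
      < score("a poem about spring flowers", weights))  # True
```

The point of the sketch is the mechanism, not the math: a system rewarded only on overall ratings is free to infer its own correlations, and here the correlation it picks up is “this word appears in bad answers,” not “this topic is forbidden.”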

I’m told there was never any training or rule created by OpenAI designed to specifically avoid discussions about Trump.

Even before this political flare-up, OpenAI was contemplating a personalized version of the service that would conform to the political beliefs, tastes, and personalities of users.

But even that poses real technological challenges, according to people who work at OpenAI, and risks that ChatGPT could create something akin to the “filter bubbles” we’ve seen on social media.

For now, ChatGPT isn’t presenting itself as a way to find answers to serious questions. Its answers to factual inquiries, biased or otherwise, can’t be taken seriously. The AI is very good at sounding human, but it has trouble with math, gets basic facts wrong, and often just makes stuff up — a tendency people inside OpenAI refer to as “hallucinating.”

ChatGPT has said in different responses that the world record holder for racing across the English Channel on foot is George Reiff, Yannick Bourseaux, Chris Bonnington or Piotr Kurylo. None of those people are real and, as you might already know, nobody has ever walked across the English Channel.

Unlike on social media, where the most divisive and sensational content is programmed to spread faster and reach the widest audience, ChatGPT’s answers are sent only to one individual at a time.

From a political bias standpoint, ChatGPT’s answers are about as consequential as a Magic 8 Ball.

The worry — and it’s an understandable one — is that one day, ChatGPT will become extremely accurate, stop hallucinating, and become the most trusted place to look up basic information, replacing Google and Wikipedia as the most common research tools used by most people.

That’s not a foregone conclusion. The development of AI does not follow a linear trajectory like Moore’s Law, the name for the steady and predictable doubling of transistor density on computer chips over time.

There are AI experts who believe the technology that underlies ChatGPT will never be able to reliably spit out accurate results. And if that never happens, it won’t be a very effective political pundit, biased or not.

ROOM FOR DISAGREEMENT

Conservative commentator Alexander Zubatov laid out a half-dozen examples of ChatGPT responses that he says exhibit a left-wing bias.

Zubatov said he’s been researching ChatGPT since it launched and has noticed disclaimers that seem like they’re “hard coded” by engineers at OpenAI.

“It does seem to me that someone has their thumb on the scale and is doing something in a misguided way to do these kinds of things,” he said.

THE VIEW FROM OURSELVES

Vilas Dhar, president of the Patrick J. McGovern Foundation and a prominent commentator on AI, said that if ChatGPT is giving us results we don’t like, we have only ourselves to blame.

“What it learned on is the whole of human discourse,” he said. And that discourse is often a mess.

Dhar said one of the best uses for AI is probably not to produce unbiased results, but to highlight the biases that already exist.

NOTABLE

  • Here’s a roundup of possibly biased answers by ChatGPT, along with other examples some people have found.
China Window

Parroting ByteDance apps seems to be a popular pastime at Meta and in its alumni circle. Platformer broke the news earlier this week that Instagram co-founders Kevin Systrom and Mike Krieger were developing a new app called Artifact, “a personalized news feed” that will use machine learning to gauge your interests and allow you to share articles with your friends. In other words, it sounds like TikTok for text posts, which is exactly the premise of ByteDance’s first platform, Jinri Toutiao (今日头条).

The app, which translates to “today’s headlines,” was launched in 2012. It originally scraped articles from across the Chinese internet, which predictably irritated media outlets and led to legal disputes.

Eventually, ByteDance began sharing revenue with publications and writers, incentivizing them to create content that resonated with Toutiao’s audience. Over a decade later, Toutiao now has around 300 million users. Artifact will similarly need to build relationships with the U.S. and international media if it wants to succeed.

Louise

One Good Text

with... Harry Campbell, the founder and CEO of The Rideshare Guy, a publication dedicated to empowering gig workers. He is also the co-founder of the Curbivore conference in Los Angeles.

Watchdogs

Lisa Hayes, TikTok’s head of safety and public policy in the Americas. Semafor/Louise Matsakis

After Congress and two dozen states banned the app on government devices, TikTok is now trying desperately to win back the trust of U.S. policymakers and civil society leaders. Its latest efforts include opening a series of what it calls “Transparency and Accountability Centers,” the first of which opened recently in Los Angeles.

During my visit earlier this week, I got to try moderating videos using a computer program that mimics the one TikTok’s moderators use. It seemed like TikTok intentionally chose videos that might genuinely get flagged, but there wasn’t any of the gore, nudity, or violence that moderators actually encounter each day.

But it’s not clear if any amount of transparency or accountability will be enough to appease lawmakers in this political climate, where it’s easy to score points by appearing tough on China (TikTok is owned by the Chinese tech giant ByteDance). The company faced yet another blow in Washington yesterday, when Senator Michael Bennet, D-Colo., sent a letter to Apple and Google asking them to remove TikTok from their app stores. The move signaled that Democrats are now more willing to join their Republican counterparts in the political pile-on.

Louise

Enthusiasms

Scientists in Australia figured out a way to make hydrogen from seawater while emitting minimal greenhouse gases. This could be good news for proponents of hydrogen fuel, which is clean-burning and abundant, but historically takes too much energy to produce. If we can make it from seawater, it’s a virtually unlimited fuel source that beats lithium-ion batteries by miles. – Reed

How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling.

And hey, we can’t inform you on what’s happening in tech from inside your spam folder. Be sure to add reed.albergotti@semafor.com (you can always reach me by replying to these emails) and lmatsakis@semafor.com to your contacts. In Gmail, drag this newsletter over to your ‘Primary’ tab.

Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise
