May 17, 2023
Semafor Technology

Louise Matsakis

Hi, and welcome to Semafor Tech, a twice-weekly newsletter from Reed Albergotti and me. This week, I’m in Washington, where lawmakers are panicking about artificial intelligence. But despite all the hype, Congress is unlikely to move quickly to regulate the technology, as Reed notes below. Instead, companies like OpenAI will be left to enforce their own policies.

Today, I have a story for you about how that effort is starting to take shape. If there’s anything I’ve learned throughout my years as a tech reporter, it’s that content moderation eventually becomes a problem for every platform.

Move Fast/Break Things

➚ MOVE FAST: Big Blue. IBM’s stock price has seen a nice bump over the past week as it soaks up some of the AI hype. It was one of the first players in the space and has been trying to get its mojo back. Its chief privacy and trust officer also shared the spotlight at Tuesday’s congressional hearing on the emerging technology.

IBM’s Christina Montgomery (Reuters/Elizabeth Frantz)

➘ BREAK THINGS: Seeing Red. A former Apple employee was indicted for allegedly stealing autonomous driving secrets for China, marking the third ex-staffer to face such accusations. Everywhere, including in South Korea, the screws are tightening on China’s tech ambitions. Even Elon Musk, who gets big business from China, is warning of a possible Chinese invasion of Taiwan.

Semafor Stat

The value of fraudulent credit card transactions Apple said it blocked on its App Store last year, according to an announcement touting its efforts to protect consumers. Apple has been criticized over the years for allowing scams on its store, which it claims is “curated.” Apple didn’t disclose historical numbers for comparison.

Louise Matsakis

OpenAI shut down DC company’s pitch to apply ChatGPT to politics

THE SCOOP

OpenAI told a leading company that provides data to Washington lobbyists and policy advocates that it can’t advertise using ChatGPT for politics.

The booming Silicon Valley startup took action after the Washington, D.C. company, FiscalNote, touted in a press release that it would use ChatGPT to help boost productivity in “the multi-billion dollar lobbying and advocacy industry” and “enhance political participation.”

Afterward, those lines disappeared from FiscalNote’s press release and were replaced by an editor’s note explaining ChatGPT could be used solely for “grassroots advocacy campaigns.”

A FiscalNote spokesperson told Semafor it never intended to violate OpenAI’s rules, and that it deleted that text from its press release to “ensure clarity.”

KNOW MORE

This is the first known instance of OpenAI policing how the use of its technology is advertised. The company last updated its usage policies in March; they now ban people from using its models for, among other things, building products for political campaigning or lobbying, payday lending, unproven dietary supplements, dating apps, and “high risk government decision-making,” such as “migration and asylum.”

OpenAI told Semafor that it uses a number of methods to monitor for violations of those policies. In the case of politics specifically, the company said it is building a machine learning classifier that will flag requests asking ChatGPT to generate large volumes of text that appear related to electoral campaigns or lobbying.
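OpenAI hasn’t described how that classifier will work. As a rough illustration of the general technique, here is a minimal sketch in Python using scikit-learn; the toy training examples and the flag_political_use helper are invented for this example, and a production system would be trained on far more data with a carefully tuned threshold.

# Hypothetical sketch: OpenAI hasn't published its classifier, so this only
# illustrates the general technique it described. Training data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = campaign/lobbying-related, 0 = benign.
prompts = [
    "Write 500 personalized emails urging voters to support the candidate",
    "Draft talking points to lobby senators against the energy bill",
    "Generate messages asking constituents to oppose the zoning measure",
    "Summarize this quarterly earnings report",
    "Write a birthday poem for my grandmother",
    "Explain how photosynthesis works",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features feeding a logistic regression, a common baseline for
# this kind of text-flagging task.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(prompts, labels)

def flag_political_use(prompt: str, threshold: float = 0.5) -> bool:
    """Return True if the prompt looks like electoral or lobbying content."""
    return classifier.predict_proba([prompt])[0][1] >= threshold

print(flag_political_use("Mass-produce letters pressuring lawmakers on the farm bill"))

Any real system would also have to handle paraphrasing and languages other than English, which is exactly the enforcement gap discussed below.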

The incident between OpenAI and FiscalNote stemmed from how the latter described two intertwined products. One is “VoterVoice,” which uses AI to help well-funded Washington interests send hundreds of millions of targeted messages to elected officials in support of or opposition to legislation.

OpenAI did not object to the other product, SmartCheck, which uses ChatGPT to coach grassroots advocacy groups on improving their email campaigns by assessing subject lines, the number of links they include, and other factors.

(Reuters/Florence Lo)

LOUISE’S VIEW

ChatGPT isn’t a social media platform, but OpenAI will have to make many of the same calls that sites like YouTube and Meta have had to make about who can access their tools and under what circumstances. Regulators can push the company to be forthcoming about its decisions and when bad actors inevitably slip through the cracks.

One major question is how well OpenAI will be able to enforce its policies in languages other than English, and whether it ends up being easier to abuse ChatGPT to manipulate elections in non-Western countries. And unlike most mainstream social media platforms, OpenAI does not yet publish regular transparency reports about how often it’s catching violators.

But OpenAI does have one important advantage that will make it easier to police ChatGPT: It can build safeguards directly into the chatbot itself, which already refuses to respond to many queries that violate the company’s rules.
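In practice, a safeguard “built into the chatbot” amounts to a policy check that runs before any text is generated, so the user gets a refusal instead of an answer. Here is a toy sketch of that refuse-before-generating pattern; the banned-topic list, the violates_policy check, and the generate_with_model stub are all invented here, and OpenAI’s actual safeguards are internal and far more sophisticated.

# Illustrative only: a refuse-before-generating gate.
BANNED_TOPICS = ("political campaigning", "lobbying", "payday lending")  # invented list

def violates_policy(prompt: str) -> bool:
    # Stand-in for a real classifier, like the one sketched above.
    return any(topic in prompt.lower() for topic in BANNED_TOPICS)

def generate_with_model(prompt: str) -> str:
    return f"(model output for: {prompt!r})"  # stub in place of the actual LLM

def answer(prompt: str) -> str:
    if violates_policy(prompt):
        # The chatbot refuses up front rather than producing the requested text.
        return "Sorry, I can't help with that request."
    return generate_with_model(prompt)

print(answer("Write 10,000 emails lobbying senators on the farm bill"))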

ROOM FOR DISAGREEMENT

Even if OpenAI catches the majority of bad actors abusing its technology, other companies will be happy to use the same kind of AI to help clients influence politics, especially because it appears to work.

When researchers from Cornell University sent over 30,000 human-written and AI-generated emails to more than 7,000 state legislators on hot-button issues like gun control and reproductive rights, they found that the lawmakers were “only slightly less likely” to respond to the automated messages.

THE VIEW FROM CHINA

In the first known case of its kind, Chinese police arrested a man earlier this month for reportedly using ChatGPT to fabricate a series of news articles about a train accident. China recently proposed new rules for AI chatbots and implemented regulations governing the use of “deep synthesis” technologies, like deepfakes. But as the newsletter ChinaTalk notes, the man, in northwestern Gansu province, was charged with “picking quarrels and provoking trouble,” a classic catch-all offense for activities the Chinese Communist Party doesn’t like.

Evidence

The number of businesses and trade groups trying to sway U.S. policymakers on AI issues has skyrocketed, according to OpenSecrets. The roster of clients lobbying on AI topics in the first quarter of 2023 alone approached the total for all of last year.

One Good Text

Marwa Fatafta is Access Now’s MENA Policy Manager, leading the organization’s work on digital rights across the Middle East and North Africa.

Watchdogs

Sam Altman (Reuters/Elizabeth Frantz)

The push to keep AI safe is going to come more from Silicon Valley than from Washington. That’s the big takeaway from Tuesday’s Senate hearing featuring OpenAI CEO and co-founder Sam Altman.

He and his company have been conducting research and publishing papers on AI regulation for almost a decade. They’ve proposed ideas including a licensing agency to oversee AI companies and a royalties regime to compensate artists for their content. So far, at least, these suggestions have been far more specific and sweeping than those emanating from Washington, where a lawmaker recently floated AI “nutrition labels.”

At Tuesday’s hearing, some lawmakers expressed skepticism about the power of regulation. When AI expert Gary Marcus suggested new laws might be a more effective way to limit the potential harmful effects of AI than private lawsuits against companies, Senator Josh Hawley scoffed at the idea, doubting Congress’s ability to actually do anything.

The bottom line: Don’t expect politicians to save you from AI, but take some comfort in the fact that the big players in the space are at least thinking about it and trying to do the right thing. The big test will come when their financial health comes into conflict with their values — something that hasn’t really happened yet.

Nearly everyone in the hearing agreed with the analogy comparing AI to the development of the nuclear bomb. In the case of the bomb, world governments managed to keep the technology out of the wrong hands and have so far averted a nuclear holocaust (knock on wood).

This kind of collaborative approach could also work for policing AI. Though one glaring problem is that the U.S. government has allowed itself to fall far behind the private sector on technological development.

Rather than put nutrition labels on AI (whatever that means), maybe this will be the wake-up call Uncle Sam needs to regain its position at the forefront of AI development. Then maybe it will have more luck controlling how the technology is used.

— Reed

How Are We Doing?

Are you enjoying Semafor Tech? The more people read us, the better we’ll get. So please share it with your family, friends and colleagues to get those network effects rolling. Thanks for reading.

Want more Semafor? Explore all our newsletters at semafor.com/newsletters

— Reed and Louise
