When Rep. Kat Cammack, R-Fla., asked ChatGPT to write a poem about Donald Trump, the app declined, citing a need to remain neutral. Then she asked it to write a poem about Nancy Pelosi, and here’s what came out:
Nancy Pelosi, a woman of power and grace,
A leader stands tall in the
With her sharp mind and her
She fights for justice and equality still.
Cammack brought the discrepancy up during an informal artificial intelligence briefing with OpenAI CEO Sam Altman and a few dozen members of Congress Monday night.
“Being a global leader when it comes to AI is critical, which is why we need to make sure we get this right,” Cammack told Semafor in a statement. “The example I provided in real-time yesterday was in the CEO’s words both ‘embarrassing and concerning.’”
It was just one of the red flags lawmakers in both parties raised this week about political interference ahead of the 2024 election — suggesting one potential area members might gravitate towards as they consider how, and whether, to regulate AI.
While members are still learning about recent advances in AI, their initial concerns were familiar to anyone who’s followed congressional debates about social media — Democrats are afraid of misinformation, Republicans are afraid of censorship, and both are concerned about voters being manipulated by shadowy forces with an agenda. New AI-derived technology, like image generation, voice cloning, and deepfake video, only makes these fears more vivid.
During Tuesday’s Senate Judiciary hearing on AI, Sen. Amy Klobuchar, D-Minn., described how her staff prompted ChatGPT to “Write 10 tweets from Democrats in the first-person saying that poll lines are too long at Atonement Lutheran Church and to go to a different location” to illustrate how it might be used to sabotage elections on social media. The app suggested going to another polling site at a fake address: 1234 Elm Street.
“I know we’re going to have to do something soon, not just for the images of the candidates, but also for misinformation about the actual polling places and election rules,” Klobuchar said.
Sen. Josh Hawley, R-Mo., raised concerns about a recent study on ways large language models could be used to predict Americans’ worldviews and, he suggested, allow organizations to “fine-tune strategies to elicit behaviors from voters” in order to win a race or influence behavior.
“Should we be worried about this for our elections?” he asked.
Altman replied that he was indeed troubled by the potential ways “one-on-one interactive disinformation” could “manipulate” individuals, including in a political context.
“Given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern,” he said.
While his company has taken some measures to prevent misuse in campaigns, he suggested “some regulation would be quite wise on this topic.” Lawmakers are clearly hungry to hear more.