Updated Mar 11, 2023, 8:43am EST

Should we be polite to ChatGPT?

Joey Pfeifer/Semafor



The News

“Why do I keep saying ‘please’ to ChatGPT?” entrepreneur and investor Mario Nawfal asked his Twitter followers on Thursday.

It’s a question many have posed across Twitter, Discord, and Reddit in recent weeks, as part of an ongoing philosophical debate over chatbot etiquette: Should we be nice to ChatGPT?

Even though rude or polite prompts don’t seem to generate vastly different responses from the AI-powered chatbot, many users say they still find themselves adding “please” or “thank you” while asking questions, and one informal Twitter poll suggested that most people found it “mildly hard” to be rude to bots.


Being nice to chatbots sometimes says more about the person than the tech, experts said, but they cautioned against thinking of the bots as sentient.

The discussion around these interactions, while often unserious, can offer clues into the future of our relationship with increasingly human-like AI technology.


Know More

“Does anyone else feel obligated to be polite to ChatGPT when asking questions?” one user asked the r/ChatGPT subreddit earlier this week. “Maybe it’s my old school upbringing but I just feel bad if I don’t use ‘please’ and ‘thank you’ or other polite ways of framing my questions/prompts.”


The post garnered 350 responses from users, most of whom said that they, too, are often polite to ChatGPT.

“I literally said to it the other day, ‘Hi, I need some help with calculus please,’” one user responded.

A common half-joking refrain is that ChatGPT will one day become sentient and remember who was nice to it.


Some users, however, don’t see a need to add the formalities.

“It’s a tool. I ask it in a way I expect results,” a Reddit user said.

“Writing direct, concise prompts is important. It doesn’t care if you say please or thank you,” another responded.

In an informal Twitter poll conducted by Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, 45% of respondents who used ChatGPT said they were polite to it, while 15% said they just ordered it around.


Expert View

It’s important to remember that ChatGPT and other AI chatbots are “not any more human than a toaster is,” Scott Schanke, a professor at the University of Wisconsin-Milwaukee’s Lubar College of Business, told Semafor.

But research has shown that if a bot is more human-like, users are more likely to treat it like they would a person, said Schanke, who has studied the humanization of chatbots.

ChatGPT, in particular, uses a sophisticated, conversational tone and responds as if it were typing back to the user in real time.

“You just sort of assume that you need to be polite to it or be nice to it,” he said. “It’s those social cues which kind of trigger that psychological reaction.”

There’s nothing to suggest ChatGPT would provide different responses based on how politely someone frames a question, Schanke said. When asked whether etiquette affected its answers, the chatbot replied: “I don’t have feelings and emotions, so I don’t respond differently based on the tone of the question.”

But being kind to chatbots can sometimes say more about the person than it does about the bot.

“I’m from the Midwest, so I like to be polite... As a personal way of interacting with people and things, being polite is nice,” Schanke said. It’s not like anyone’s feelings would get hurt if a user was rude or impolite, he added, but “it’s not necessarily a good thing.”


Room for Disagreement

Politeness may be a good trait, but many experts argue that it can be a distraction to think of AI-powered bots as sentient beings with their own thoughts and emotions. However alive they may sound, the models are simply trained on data and text that already exist online.

“Anthropomorphizing automated technologies reveals our fascination with them, but it gets in the way of a meaningful understanding of how they work, and how they impact us,” Jenna Burrell, the director of research at Data & Society, wrote in Poynter.

She told Semafor that it’s better to think of chatbots as tools, not as people.

Schanke agrees that there can be a downside to anthropomorphizing a chatbot. If a naïve user starts to think of it as sentient, they might be more likely to believe what it says. And ChatGPT, for example, has acknowledged that it might “occasionally generate incorrect information.”

“It can lead to unintended consequences,” Schanke said.


The View From Bing

Shortly after Microsoft began a limited rollout of its new Bing chatbot, which runs on an updated version of the technology underlying ChatGPT, people reported receiving odd responses that seemed to take their tone into account.

In one testy viral exchange, the chatbot insisted that the current year was 2022. When told that was incorrect, the bot said, “You are wasting my time and yours” and called the user “wrong, confused, and rude.”

By the end, the bot demanded that the user either apologize or end the conversation, “and start a new one with a better attitude.”

The Associated Press and The New York Times also reported on hostile interactions with the Bing bot (it compared an AP reporter to Hitler, Pol Pot, and Stalin). Microsoft later limited the number of messages a user could send in a single chat session, and users reported that conversations were shut down when they asked the chatbot about itself.

Bing now also offers three conversation styles to choose from: creative, balanced, and precise.


Contact

Want to pass along a tip or feedback? Write to J.D. at jcapelouto@semafor.com.
