OpenAI on Tuesday announced a version of ChatGPT for teens, as tech companies face growing pressure to protect minors who use chatbots.
If ChatGPT detects an under-18 user, it will automatically direct them to an age-appropriate variant of the bot that blocks inappropriate content and, in some cases, can contact law enforcement. US regulators are probing AI companies over child safety, and a lawsuit filed Tuesday against Character AI became the third high-profile case accusing a chatbot of contributing to a teen's suicide.
The scrutiny over AI companionship “stems from a basic contradiction,” MIT Technology Review wrote: “Companies have built chatbots to act like caring humans, but they’ve postponed developing the standards and accountability we demand of real caregivers.”