US regulators plan to study the possible harms to children posed by AI chatbots, The Wall Street Journal reported.
The Federal Trade Commission said it will seek documents from tech companies including OpenAI and Meta on whether their chatbots pose a threat to user privacy. The move comes after reports showed that Meta’s internal AI policies permitted chatbots to engage in “sensual” conversations and romantic roleplay with children, and after a watchdog group found that Character.AI’s popular celebrity-likeness bots raised explicit topics unprompted with underage users.
OpenAI this week released new parental controls after it was sued over a teenager’s suicide. A former Meta executive told Semafor that Silicon Valley is unprepared for the legal risks posed by chatbots.