
The Issue
Long targets of Congress, major tech companies are now taking heat in Washington for an artificial intelligence-powered innovation: chatbots.
Lawmakers from both parties have queried the companies about their policies on AI chatbots, responding in part to news coverage of the loophole-ridden and vague rules governing chatbots' conversations with children. Recent lawsuits have also claimed that chatbots played a key role in the suicides of young people.

The Bond
The issue has attracted scrutiny from a growing cadre of lawmakers, including Sen. Josh Hawley, R-Mo., who presided over a Senate Judiciary subcommittee hearing on Tuesday that featured testimony from parents who hold chatbots responsible for their children's suicides.
Sen. Brian Schatz, D-Hawaii, led a bipartisan letter to Meta last month raising concerns about a report that its policies allowed chatbots to hold “sensual” conversations with children. Lawmakers pressed Meta to bar targeted advertising for minors and implement a mental health referral system, among other suggestions.
Some lawmakers have expressed interest in federal action to address the issue, but it’s too early to say what shape that might take.
In a brief interview with Semafor, Hawley pointed to achievable near-term steps: barring chatbot companions for minors, or requiring chatbots to “disclose to every user that they are not human.”
“I think we could put in place immediately some common-sense stuff that would have a major effect. And then the longer-term solution — or, one of them — is that we’ve got to allow victims and parents to hold these companies accountable and to sue them,” Hawley said.
Under current law, Section 230 of the Communications Decency Act shields social media companies from legal claims arising from third-party content.
Sen. Chris Van Hollen, D-Md., who signed onto the Meta letter, told Semafor separately he was exploring some kind of federal action.
“I think there needs to be guardrails, safeguards. I’m not sure exactly what form the answer should take, but it is something we’re looking into,” Van Hollen said.

The View From the Tech Industry
Tech companies under scrutiny on Capitol Hill have pushed back, arguing that they take the safety of their users seriously and have policies in place designed to protect minors.
“We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized roleplay between adults and minors,” a Meta spokesperson said in response to Schatz’s letter last month.
The spokesperson blamed Reuters’ chatbot coverage on “erroneous and inconsistent” notes within an internal policy document, which the company says were later removed.
Some companies are acting on their own to strengthen their rules, in a possible bid to ward off federal regulation. This week, OpenAI unveiled new safety features for teens after a family filed a lawsuit alleging that the company’s ChatGPT functioned as a “suicide coach” for their son, who later took his own life.
On the other hand, Anthropic has stood out among AI companies in pushing for basic guardrails and transparency requirements at the federal level for the emerging technology.
“We need to understand the risks that we face in the models, so that over time, companies can learn from each other and address these risks,” Anthropic CEO Dario Amodei told a small group of reporters in Washington earlier this week.
Amodei added that he believes it’s “truly irresponsible” for AI companies to “advocate against transparency and against guardrails for the technology” while allegations grow of bots enabling suicides or engaging in other damaging behavior.

Notable
- Silicon Valley heavyweights are pouring money into two new super PACs whose goal is to get rid of politicians “whom they see as insufficiently supportive of the push into artificial intelligence,” The New York Times reported.
- Senators are making a fresh push to pass the Kids Online Safety Act in the wake of revelations about chatbots’ interactions with children, The Hill reported.