A bipartisan group of senators is calling on leaders in the artificial intelligence industry to commit to disclosing more information publicly about how they assess risk, including possible harms to children.
The group, led by Sens. Brian Schatz, D-Hawaii, and Katie Britt, R-Ala., sent letters Thursday to eight tech companies that are working on leading-edge AI models. The senators wrote that companies have been inconsistent in their transparency practices, including how much information they publicly disclose and when.
“In the past few years, reports have emerged about chatbots that have engaged in suicidal fantasies with children, drafted suicide notes, and provided specific instructions on self-harm,” the senators wrote.
“These incidents have exposed how companies can fail to adequately evaluate models for possible use cases and inadequately disclose known risks associated with chatbot use,” they wrote.
The letters are a sign of the stepped-up scrutiny AI is getting in Congress, especially in the wake of teen suicides that families have blamed partly on AI chatbots. Two senators introduced legislation in October to ban companies from offering AI chatbots to minors entirely, and there was bipartisan backlash last month after the industry sought federal help to pre-empt state efforts to regulate AI.
The letters ask the AI companies to agree to 11 commitments related to safety and transparency, including researching the long-term psychological impact of AI chatbots, disclosing whether companies use chatbot data for targeted advertising and collaborating with external experts on safety evaluations.
“If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks,” the senators wrote.
Senators sent the letters to Anthropic, Character.AI, Google, Luka, Meta, Microsoft, OpenAI and xAI.
In response to a request for comment on the letters, Anthropic and Meta pointed to their transparency websites. Microsoft declined to comment. Character.AI said in a statement: “We welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space. We are reviewing the letter from Senator Schatz’s office and look forward to engaging with them on it.” The other companies did not immediately respond to requests for comment.
The letters come amid rising questions around AI transparency. A study released Tuesday by researchers at four universities found that industry transparency had declined since 2024, “with companies diverging greatly in the extent to which they are transparent.”
There have been other attempts to get the AI industry to coalesce around uniform standards for safety and transparency. Some tech companies — including five of the eight companies to receive the senators’ letter — have signed on to at least part of the European Union’s General-Purpose AI Code of Practice, published in July.
And in September, California Gov. Gavin Newsom, a Democrat, signed first-of-its-kind legislation in the United States requiring AI companies to fulfill various transparency requirements and report AI-related safety incidents. The law, known as SB 53, is backed by civil penalties for noncompliance, to be enforced by the state attorney general’s office.
Faced with safety risks, at least one company has recently scaled back its offerings. Character.AI said in October that it would bar people younger than 18 from using its open-ended chatbot, citing concerns about teen safety. The company is being sued by a Florida mom whose son died by suicide after he used the chatbot; it has denied the allegations in the lawsuit.
Major insurance companies are also expressing concern about the risks of generative AI and are asking U.S. regulators for permission to exclude certain AI-related liabilities from corporate policies, the Financial Times reported last month.
