Sens. Maggie Hassan, D-N.H., and Josh Hawley, R-Mo., sent letters to America’s leading artificial intelligence companies Thursday morning demanding information about how they are handling AI-enabled scams.
The letters, sent to Anthropic, Google, Meta, Microsoft, OpenAI, Perplexity and xAI, ask the companies to share details about their efforts to prevent scammers from using their services.
Among other requests, the letters ask about the companies’ systems for tracking potential scams or fraud carried out on their platforms, their measures for authenticating users and how they already cooperate with the government to combat fraud.
“With advancements in AI, scams will continue to grow in sophistication, frequency, and impact,” the senators wrote in the letter. “In the early phases of a scam, criminals can use generative AI to quickly identify and then collect details on their targets, enabling them to create tailor-made scams.”

“Once armed with information like addresses, account numbers, and birthdates, bad actors can more realistically impersonate a victim’s bank, a government office, or even a member of their own family in an attempt to gain control of accounts or induce fraudulent payments,” they continued.
In a statement Thursday, OpenAI spokesperson Kate Waters wrote: “Scammers are not inventing new crimes with AI, they are scaling old ones faster. That is why OpenAI focuses on disruption at scale by detecting patterns of scam behavior, enforcing against accounts used by scammers hundreds of thousands of times a month, and sharing threat intelligence with partners and law enforcement.”
“At the same time, people use ChatGPT millions of times a month to spot scams and help keep themselves safe. AI is a tool scammers try to exploit, but it is also one of the most powerful tools we have to stop them,” she added.
Many experts say generative AI is enabling a new wave of fraud, as free-to-use AI tools allow nonexperts to create more and higher-quality fraudulent documents, phone calls and websites than ever before. AI-enabled fraud is surging in venues ranging from business expenses and the court system to social media platforms.
Before today’s AI era, a scammer who wanted a website that mimicked an authentic, trusted site would have had to hire a web developer or possess the technical skills to build it. Now, many consumer-facing AI tools allow users to create realistic, interactive websites in a matter of minutes using basic English instructions.
These scams often target elderly Americans, who lose several billion dollars to fraud every year, but the problem is broader: a report last year from Deloitte found that generative AI could push American fraud losses to $40 billion by 2027.
The senators additionally asked the companies to detail their fraud-prevention strategies and investments and to describe the steps they take to prevent the release of sensitive customer information. They asked for responses by Jan. 14.
Hassan has made scams a focus of her tenure as the highest-ranking Democrat on the Joint Economic Committee, which brings together members of the House and Senate to address economic issues.

In late September, Hawley, who has zeroed in on AI policy in recent months, introduced a new bill that would classify AI systems as products, opening up AI companies to liability claims from users when their AI systems cause harm.
Thursday’s letters add to growing federal efforts focused on preventing AI-enabled fraud. A bill recently proposed by Reps. Ted Lieu, D-Calif., and Neal Dunn, R-Fla., would update fraud penalties and criminal definitions to account for the rise of AI.

