FIRST ON NBC NEWS
Artificial intelligence

Google says attackers used 100,000+ prompts to try to clone AI chatbot Gemini


Google says private companies and researchers are trying to copy Gemini’s capabilities by repeatedly prompting it at scale.
A Google office in Dublin, Ireland. A distillation campaign prompted Gemini more than 100,000 times before Google identified what was happening and adjusted its AI to better protect itself, the report said. Artur Widak / NurPhoto via Getty Images file

Google says its flagship artificial intelligence chatbot, Gemini, has been inundated by “commercially motivated” actors who are trying to clone it by repeatedly prompting it, sometimes with thousands of different queries — including one campaign that prompted Gemini more than 100,000 times.

In a report published Thursday, Google said it has increasingly come under “distillation attacks,” or repeated questions designed to get a chatbot to reveal its inner workings. Google described the activity as “model extraction,” in which would-be copycats probe the system for the patterns and logic that make it work. The attackers appear to want to use the information to build or bolster their own AI, it said.
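The mechanics of such a campaign can be illustrated with a short sketch. This is not Google's account of the attackers' tooling, only a minimal illustration of the model-extraction idea the report describes: query the target model at scale, record its answers, and keep the prompt-response pairs as training data for a copycat "student" model. The `teacher_model` function here is a hypothetical stand-in; a real campaign would call the target chatbot's public API.

```python
def teacher_model(prompt: str) -> str:
    # Hypothetical stand-in for the target chatbot.
    # In a real extraction campaign this would be an API call to the target.
    return f"answer to: {prompt}"

def collect_distillation_data(prompts):
    """Query the target once per prompt and keep (prompt, answer) pairs.

    These pairs become supervised training data for a "student" model
    that learns to imitate the target's behavior.
    """
    return [(p, teacher_model(p)) for p in prompts]

# An extraction campaign varies prompts widely to cover the target's behavior;
# the one described in the report used more than 100,000 of them.
prompts = [f"explain how you would reason about case {i}" for i in range(5)]
dataset = collect_distillation_data(prompts)
```

The resulting `dataset` would then be used to fine-tune the attacker's own model, which is why repeated high-volume prompting, rather than any single query, is the telltale signature.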

The company believes the culprits are mostly private companies or researchers looking to gain a competitive advantage. A spokesperson told NBC News that Google believes the attacks have come from around the world but declined to share additional details about what was known about the suspects.

The scope of the attacks on Gemini indicates that such attacks most likely are, or soon will be, common against smaller companies’ custom AI tools as well, said John Hultquist, the chief analyst of Google’s Threat Intelligence Group.

“We’re going to be the canary in the coal mine for far more incidents,” Hultquist said. He declined to name suspects.

The company considers distillation to be intellectual property theft, it said.

Tech companies have spent billions of dollars racing to develop their AI chatbots, or large language models, and consider the inner workings of their top models to be extremely valuable proprietary information.

Even though they have mechanisms to try to identify distillation attacks and block the people behind them, major LLMs are inherently vulnerable to distillation because they are open to anyone on the internet.

OpenAI, the company behind ChatGPT, accused its Chinese rival DeepSeek last year of conducting distillation attacks to improve its models.

Many of the attacks were crafted to tease out the algorithms that help Gemini “reason,” or decide how to process information, Google said.

Hultquist said that as more companies design their own custom LLMs trained on potentially sensitive data, they become vulnerable to similar attacks.

“Let’s say your LLM has been trained on 100 years of secret thinking of the way you trade. Theoretically, you could distill some of that,” he said.

