Chinese AI companies 'distilled' Claude to improve own models, Anthropic says


DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, the U.S. company said in a blog post.
Anthropic, creator of Claude, warned that illicitly distilled models lacked necessary safeguards, creating significant national security risks. Gabby Jones / Bloomberg via Getty Images

Three Chinese artificial intelligence companies used Claude to improperly obtain capabilities to improve their own models, the chatbot’s creator Anthropic said in a blog post Monday while also making a case for export controls on chips.

The announcement follows a memo by OpenAI earlier this month, when the startup warned U.S. lawmakers that Chinese AI firm DeepSeek is targeting the ChatGPT maker and the nation’s leading AI companies to replicate models and use them for its own training.

DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, in violation of Anthropic’s terms of service and regional access restrictions, the company said.

They used a technique called “distillation,” which involves training a less capable model on the outputs of a stronger one, Anthropic said.

“These campaigns are growing in intensity and sophistication,” Anthropic wrote in the post. “The window to act is narrow, and the threat extends beyond any single company or region.”

Anthropic warned that illicitly distilled models lacked necessary safeguards, creating significant national security risks. If these models are open-sourced, the risk multiplies as capabilities spread freely beyond any single government’s control.

Anthropic, which raised $30 billion in its latest funding round and is now valued at $380 billion, said that distillation attacks support the case for export controls: Chip access restrictions reduce both direct model training capabilities and the extent of improper distillation.

DeepSeek’s operation targeted reasoning capabilities across diverse tasks and the creation of censorship-safe alternatives to policy-sensitive queries, while Moonshot aimed at agentic reasoning and tool use, as well as coding and data analysis, Anthropic said.

MiniMax targeted agentic coding, tool use and orchestration, Anthropic said, adding that it detected the campaign while it was still active, before MiniMax released the model it was training.

“When we released a new model during MiniMax’s active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system,” the blog post said.

DeepSeek, Moonshot and MiniMax did not immediately respond to requests for comment.

