Anthropic reveals how Chinese AI companies are trying to steal LLM technology

Anthropic accuses three Chinese artificial intelligence companies of running "industrial-scale campaigns" to copy its technology using distillation attacks. Anthropic says these companies created 24,000 fake accounts to hide their efforts.
In a blog post detailing the attacks, Anthropic named three AI firms, including DeepSeek, maker of the popular DeepSeek AI models. Anthropic clearly framed the attacks as a matter of national security.
“We identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illegally exploit Claude’s capabilities to develop their models,” the blog post read. “These labs generated more than 16 million transactions with Claude using 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
In January, OpenAI also accused DeepSeek of engaging in distillation attacks, effectively stealing its technology.
At the time, many people reacted not with sympathy but with sarcasm, as OpenAI and other AI companies have long claimed they have every right to train their models on copyrighted works without permission or payment. AI industry advocates often argue they have no choice but to train on copyrighted works because Chinese competitors are sure to ignore copyright laws anyway.
“You can’t expect to have a successful AI program where every single article, book, or anything else you’ve read or studied, you have to pay for,” said President Donald Trump at an AI event in July 2025. “When someone reads a book or an article, they’ve gained a lot of knowledge. That doesn’t mean you’re violating copyright laws or that you have to pay content providers.” He added, “China does not.”
That puts US AI companies in the awkward position of claiming their intellectual property is off-limits for model training while engaging in much the same behavior themselves.
What is a distillation attack?
Distillation is a common training method for large language models; however, it can also be used to effectively reverse-engineer a competitor's model. In distillation, AI researchers repeatedly query a large "teacher" model and train a smaller "student" model on its responses, transferring the teacher's capabilities without repeating its expensive training.
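The core idea can be sketched in a few lines. The following is a minimal illustration, not any lab's actual method: the student is trained to minimize the divergence between its output distribution and the teacher's softened outputs. All numbers and model names here are hypothetical.

```python
import math

# Minimal sketch of knowledge distillation (hypothetical numbers, no real models):
# a "student" is trained to match the softened output distribution of a "teacher".

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions --
    the quantity the student minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Conceptually, the training loop is: query the teacher (e.g. via an API),
# record its outputs, and adjust the student's parameters to lower this loss.
teacher = [3.2, 1.1, 0.3]        # hypothetical logits from one teacher query
student_early = [0.1, 2.0, 1.5]  # untrained student: high loss
student_late = [3.0, 1.0, 0.4]   # well-distilled student: low loss

print(distillation_loss(teacher, student_early))
print(distillation_loss(teacher, student_late))
```

In practice the student is updated by gradient descent over millions of such teacher queries, which is why Anthropic's figure of 16 million transactions is consistent with a distillation effort rather than ordinary usage.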
“Distillation is a widely used and legitimate training method. For example, frontier AI labs often distill their own models to make smaller, cheaper versions for their customers. But distillation can also be used illegitimately: competitors can use it to extract capabilities from another lab's models in a fraction of the time, and at a fraction of the cost, of developing them independently.”
Chinese companies have a reputation for ignoring intellectual property agreements and copyright laws, and for reverse-engineering technology from Western companies. However, while Anthropic claims the distillation attacks violated its terms of service, it is unclear whether they violated any international laws, or what remedy Anthropic has beyond suspending the offending accounts.
To prevent attacks like this, Anthropic called for cooperation between AI companies, government agencies, and other stakeholders.
AI companies like Anthropic, xAI, Meta, and OpenAI are among the biggest corporate spenders in history, pouring tens of billions of dollars into AI infrastructure, data centers, and research and development. If competing foreign AI companies can cheaply recreate their LLM technology using distillation, they will clearly have an advantage over their US competitors.
“These campaigns are growing in intensity and sophistication,” the blog post reads. “The window for action is narrow, and the threat extends beyond any single company or region. Addressing it will require swift, concerted action among industry players, policymakers, and the global AI community.”
Mashable reached out to Anthropic with questions about the distillation attack, and we’ll update this article when we hear back.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.



