Anthropic exposes how Chinese AI firms try to steal LLM tech

Anthropic accuses three Chinese artificial intelligence companies of “industrial-scale campaigns” aimed at “illegally extracting” its technology using distillation attacks. Anthropic claims these companies created 24,000 fraudulent accounts to hide their efforts.
In a blog post detailing the attacks, Anthropic named three AI companies, including DeepSeek, the maker of the popular DeepSeek AI models. Anthropic explicitly framed the attack as a matter of national security.
“We have identified industrial-scale campaigns by three AI labs (DeepSeek, Moonshot and MiniMax) to illegally extract Claude’s capabilities to improve their own models,” the blog reads. “These labs generated more than 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”
In January, OpenAI also accused DeepSeek of engaging in distillation attacks, stealing its technology.
At the time, many people reacted not with sympathy but with mockery, since OpenAI and other AI companies have claimed an absolute right to train their models on copyrighted works without permission or payment. AI industry supporters typically argue that they have no choice but to train on copyrighted works, because Chinese competitors will ignore copyright laws anyway.
“You can’t be expected to have a successful AI program when you’re expected to pay for every article, book or anything you read or study,” President Donald Trump said at an AI event in July 2025. “When someone reads a book or article, they gain great knowledge. That doesn’t mean you’re violating copyright laws or that you have to make deals with every content provider.” He also added: “China doesn’t do it.”
This puts AI companies in the awkward position of insisting that their own intellectual property is off-limits for training rival models, while engaging in similar behavior themselves.
What are distillation attacks?
Distillation is a common training technique for large language models; however, it can also be used to effectively reverse-engineer aspects of a rival's technology. In distillation, AI researchers run variations of the same prompt repeatedly to see how a particular model responds, then use those responses as training data for another model.
As Anthropic's blog post explains: “Distillation is a legitimate and widely used training method. For example, cutting-edge AI labs routinely distill their own models to create smaller, cheaper versions for their clients. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time and at a fraction of the cost it would take to develop them independently.”
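To make the mechanics concrete, here is a minimal, hypothetical sketch of how a distillation pipeline works in practice: a "teacher" model is queried with many prompts, and the prompt/response pairs are saved as training data for a smaller "student" model. The function and file names below are illustrative placeholders, not code from Anthropic or any of the companies named.

```python
# Minimal distillation sketch: collect a teacher model's responses to many
# prompts, then save the prompt/response pairs as supervised training data
# for a student model. All names here are illustrative placeholders.

import json


def query_teacher(prompt: str) -> str:
    """Placeholder for an API call to the teacher model's chat endpoint."""
    # In a real pipeline this would send `prompt` to the teacher model and
    # return its generated text; here it just returns a stub string.
    return f"<teacher response to: {prompt}>"


def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Pair each prompt with the teacher's output to form training examples."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]


if __name__ == "__main__":
    # Researchers run many variations of similar prompts to map out the
    # teacher's behavior; the resulting pairs become the student's dataset.
    prompts = [f"Explain topic {i} step by step." for i in range(5)]
    dataset = build_distillation_set(prompts)
    with open("distill_data.jsonl", "w") as f:
        for example in dataset:
            f.write(json.dumps(example) + "\n")
```

Done at the scale Anthropic describes, with millions of exchanges across thousands of accounts, this kind of pipeline lets a competitor approximate a model's capabilities without doing the underlying research.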
Chinese companies have a reputation for blatantly ignoring intellectual property treaties and copyright laws, as well as reverse-engineering technologies from Western companies. However, while Anthropic claims that the discovered distillation attacks violated its terms of service, it is not clear that they violated any international laws, nor what recourse Anthropic has beyond suspending violating accounts.
To prevent such attacks, Anthropic called for cooperation between AI companies, government agencies and other stakeholders.
AI companies like Anthropic, xAI, Meta and OpenAI are in the midst of one of the biggest spending booms ever seen, with tens of billions of dollars invested in AI infrastructure, data centers and research and development. If rival foreign AI companies could cheaply recreate their LLM technology using distillation, they would clearly have an advantage over their American rivals.
“These campaigns are gaining in intensity and sophistication,” the blog post continues. “The window for action is narrow and the threat extends beyond a single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global AI community.”
Mashable reached out to Anthropic to ask about the distillation attacks, and we will update this article if we receive a response.
Disclosure: Ziff Davis, the parent company of Mashable, filed a lawsuit in April 2025 against OpenAI, alleging that it had violated Ziff Davis’ copyrights in the training and operation of its AI systems.