
Anthropic Accuses Three Chinese AI Labs of Large-Scale Claude Distillation

Anthropic’s Allegations

Anthropic has publicly accused three Chinese artificial‑intelligence companies, DeepSeek, Moonshot AI and MiniMax, of creating more than 24,000 fraudulent accounts to interact with its Claude model. The firms allegedly generated over 16 million exchanges with Claude using a technique known as “distillation,” in which a smaller model is trained to replicate the outputs of a larger one. According to Anthropic, the attacks targeted Claude’s most distinctive capabilities, including agentic reasoning, tool use and coding.

Scale of the Distillation Effort

Anthropic reports that DeepSeek conducted more than 150,000 exchanges focused on foundational logic, alignment and censorship‑safe alternatives to policy‑sensitive queries. Moonshot AI is said to have produced over 3.4 million interactions aimed at agentic reasoning, tool use, coding, data analysis, computer‑use agents and computer vision. MiniMax allegedly carried out 13 million exchanges targeting agentic coding, tool use and orchestration, and is said to have directed roughly half of that traffic at the newest Claude release to siphon off its capabilities.

Implications for U.S. Export Controls

The company ties these distillation attacks to the ongoing debate over U.S. export controls on advanced AI chips. Anthropic notes that extraction at this scale “requires access to advanced chips,” arguing that limiting chip sales to China would curb both direct model training and large‑scale distillation. The firm calls for a coordinated response from the AI industry, cloud providers and policymakers to build defenses that make distillation attacks harder to carry out and easier to detect.

Security and National‑Security Concerns

Anthropic warns that models built through illicit distillation may lack safeguards designed to prevent misuse, such as the development of bioweapons, malicious cyber activities, disinformation campaigns and mass surveillance. The company suggests that the proliferation of such unprotected models could amplify national‑security risks, especially if authoritarian governments gain access to advanced AI capabilities without the protective measures embedded in frontier models.

Industry Reaction

TechCrunch reached out to DeepSeek, MiniMax and Moonshot AI for comment. The allegations come amid broader industry discussions about the balance between open‑source innovation and the protection of proprietary AI advancements.


Source: TechCrunch