Anthropic Accuses Three Chinese AI Labs of Distillation Attacks on Claude

Anthropic Raises Alarm Over Distillation Attacks

Anthropic, the creator of the Claude chatbot, has publicly accused three Chinese artificial-intelligence companies—DeepSeek, Moonshot and MiniMax—of running what it describes as "industrial-scale campaigns" to illicitly extract Claude's capabilities. The company characterizes these activities as "distillation attacks," in which a less capable model is trained on the outputs of a more powerful one to replicate its abilities.

According to Anthropic's statement, the three firms used approximately 24,000 fraudulent accounts to generate more than 16 million exchanges with Claude. By training on Claude's outputs, the companies could shortcut the development of their own AI models and potentially bypass the safeguards built into Claude.

Anthropic said it attributed each campaign to the specific firms with "high confidence" by analyzing IP address correlations, request metadata and other infrastructure indicators. The company also consulted other industry players who have observed similar behavior.

The allegation follows a similar claim OpenAI made last year, when it reported rival firms distilling its models and responded by banning suspected accounts. Anthropic said it will upgrade its systems to make distillation attacks harder to execute and easier to detect.

While Anthropic points to these alleged abuses, it faces a separate lawsuit from music publishers who allege the company used illegal copies of songs to train Claude. Anthropic did not address the lawsuit in its statement.

Source: Engadget
