
Anthropic Acknowledges Accidental Leak of Claude Code Source via NPM Package

Accidental exposure of Claude Code source

Anthropic disclosed that a packaging error by an employee resulted in the Claude Code source code being unintentionally released through a source map (.map) file included in the tool’s npm package. The map file referenced an unobfuscated TypeScript source archive stored in Anthropic’s Cloudflare R2 bucket. This archive contained approximately 1,900 TypeScript files and more than 500,000 lines of code, providing a comprehensive view of the AI coding assistant’s internal libraries and built‑in tools.

Response and impact

Anthropic issued a statement confirming that the leak did not involve any sensitive customer data or credentials. The company characterized the event as a human‑error packaging issue rather than a malicious breach. To mitigate future risks, Anthropic said it is rolling out additional measures aimed at preventing similar packaging mistakes.

Following the discovery, the leaked files were quickly mirrored on GitHub, where they accumulated thousands of forks. The rapid replication highlighted the high interest in Claude Code and the speed at which the developer community responds to such exposures.

Context of recent security concerns

The leak occurred amid a series of recent security discussions surrounding Claude Code. In the preceding weeks, researchers reported multiple vulnerabilities, including a Chrome extension flaw that allowed zero‑click attacks and a set of three issues dubbed “Cloudy Day” that formed a complete attack chain for data exfiltration. Another vulnerability, referred to as “ShadowPrompt,” was also highlighted for its potential to expose sensitive information.

These security incidents have coincided with growing demand for Claude Code, prompting Anthropic to adjust usage limits during peak periods. The company announced temporary throttling of session limits for free, Pro, and Max subscriptions to manage load, while weekly limits remained unchanged.

Industry reaction

The developer and security communities reacted swiftly, discussing the leak and the broader implications for AI tool security on platforms such as Reddit and X. Commentators noted the tension between rapid feature rollout and the need for robust security practices. While some users expressed concern over the exposure, others focused on the broader conversation about responsible AI development and the importance of safeguarding code assets.

Anthropic’s acknowledgment of the incident and its commitment to corrective actions underscore the challenges faced by AI‑focused companies in balancing innovation speed with security diligence.


Source: TechRadar
