On Tuesday, AI upstart Anthropic confirmed that internal code for its popular AI coding assistant, Claude Code, was accidentally exposed due to human error.
In a statement, the company said that no sensitive customer data or credentials were compromised, describing the incident as a “release packaging issue” rather than a security breach, and added that measures were being rolled out to prevent anything similar happening in future.
The leak was discovered after version 2.1.88 of the Claude Code npm package was released with a source map file, giving developers access to nearly 2,000 TypeScript files and over 512,000 lines of code. The release has since been pulled from npm, but not before the code spread on GitHub, racking up 84,000 stars and 82,000 forks.
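To see why a stray source map is so damaging: a `.js.map` file is plain JSON, and when a bundle is built with `sourcesContent` embedded, the map carries the full original TypeScript alongside the compiled JavaScript. The sketch below shows how trivially those originals can be dumped back to disk; the file name `cli.js.map` and the helper are illustrative, not taken from the actual package.

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Dump the original sources embedded in a JS source map (Source Map v3).

    If the map was generated with `sourcesContent`, each entry in that
    array is the verbatim original file for the matching `sources` path.
    """
    source_map = json.loads(Path(map_path).read_text())
    out = Path(out_dir)
    written = []
    for name, content in zip(source_map.get("sources", []),
                             source_map.get("sourcesContent") or []):
        if content is None:
            continue  # this entry was built without embedded source
        dest = out / name.lstrip("./")  # strip relative-path prefixes
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

No decompilation is needed; recovering “nearly 2,000 TypeScript files” from a shipped map is a matter of reading JSON, which is why source maps should be stripped from production npm releases.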
Why This Matters
The leak exposes Claude Code’s inner workings:
- Self-healing memory architecture to overcome context limits.
- A multi-agent system for orchestrating complex tasks.
- Tools for file reads, bash execution, and API orchestration.
- KAIROS mode: a feature that lets Claude Code run in the background, fixing errors and sending push notifications without human input.
- Dream mode: a constant background thinker for idea iteration.
- Undercover Mode: stealth contributions to open-source repos without revealing internal info.
- Anti-distillation controls: fake tool definitions to poison competitor training attempts.
For competitors and attackers, this is a goldmine. Security firm Straiker warns that the leak makes it much easier to craft persistent backdoors and bypass safeguards.
Supply Chain Nightmare
The most pressing concern is the fallout for users. Anyone who updated Claude Code via npm on 31 March 2026 may have pulled a trojanised version of the HTTP client containing a cross-platform remote access trojan. Users are advised to downgrade to a safe version immediately.
Attackers are already capitalising on the leak, weaponising the exposed data to target developers who try to compile the leaked code.
Anthropic’s Week From Hell
This is Anthropic’s second major slip in a week. Last week, details of an upcoming AI model were left exposed on its CMS. The company insists the model is the “most capable we’ve built to date,” but clearly, internal security needs urgent attention.
Developers: stay alert. The Claude Code leak isn’t just an embarrassment; it’s a warning about what can happen when a major AI player slips.
📧 info@istormsolutions.co.uk
📞 01789 608708