A packaging mistake that exposed AI coding agent security risks
The last week of March 2026 delivered a stark reminder that, in the race to ship AI products at speed, even the most sophisticated technology companies are not immune to basic human error, and that attackers are watching, ready to exploit the fallout within hours.
On 31 March 2026, Anthropic accidentally published the full client-side source code of Claude Code, its flagship terminal-based AI coding agent, through a packaging mistake in its public npm release. A 59.8 MB JavaScript source map file was inadvertently bundled into version 2.1.88 of the @anthropic-ai/claude-code package, exposing approximately 513,000 lines of unobfuscated TypeScript across nearly 2,000 files. Security researcher Chaofan Shou spotted the issue and flagged it publicly on X within hours, triggering a global scramble by developers, researchers, and threat actors to download, mirror, and analyse the code.
Anthropic moved quickly to confirm the incident, stating that this was a release packaging issue caused by human error, not a security breach, and that no sensitive customer data or credentials were involved. But while the company’s response was measured, the downstream consequences were anything but.
How hackers turned a packaging error into a malware campaign
What happened next illustrates a pattern that every IT and security leader should understand: the moment a high-profile technology event generates public curiosity, threat actors pivot to exploit that curiosity as a lure.
Within hours of the leak becoming public, malicious GitHub repositories appeared claiming to offer the leaked Claude Code, complete with promises of unlocked enterprise features and no usage restrictions.
Zscaler’s ThreatLabz research team identified the campaign. A repository optimised for search engine discovery appeared at the top of Google results for searches including “leaked Claude Code” and “Claude Code source download.” Curious developers who clicked through did not receive Anthropic’s source code; instead, they executed a Rust-based dropper named ClaudeCode_x64.exe that deployed two distinct malware payloads:
- Vidar: a commodity infostealer capable of harvesting browser credentials, saved passwords, and cryptocurrency wallet data
- GhostSocks: proxy malware that conscripts infected machines into residential proxy infrastructure, giving attackers a way to mask their activity through compromised devices
A second repository, linked to the same threat actor operating under a different account, hosted identical payloads — a deliberate multi-channel distribution strategy designed to maximise reach before takedown.
A compounding AI security crisis
The malware campaign was not the only consequence. Security researchers at Adversa AI disclosed a critical vulnerability in Claude Code’s permission system, made far easier to exploit by the leaked source. The flaw allows a crafted CLAUDE.md file in a malicious repository to generate a pipeline of over 50 subcommands, at which point Claude Code’s deny rules, security validators, and command injection detection are entirely bypassed. Attackers could silently exfiltrate SSH keys, AWS credentials, GitHub tokens, and environment secrets from a developer’s workstation simply by having the agent open a specially crafted project.
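Until a fix ships, one pragmatic mitigation is to treat agent instruction files in untrusted repositories as hostile input. The sketch below is a hypothetical pre-flight check, not Anthropic tooling: it flags CLAUDE.md files whose lines chain unusually many shell subcommands, echoing the pipeline-of-50-subcommands pattern in the disclosed bypass. The heuristic and the threshold are illustrative assumptions only.

```python
import re
from pathlib import Path

# Illustrative threshold: long chains of piped/chained commands in an
# agent instruction file are treated as suspicious. The disclosed bypass
# relied on pipelines of 50+ subcommands; 10 is an arbitrary safety margin.
PIPELINE_THRESHOLD = 10

def suspicious_instruction_files(repo_root: str) -> list[tuple[str, int]]:
    """Return (path, longest_chain) for CLAUDE.md files whose longest
    command chain exceeds PIPELINE_THRESHOLD."""
    findings = []
    for path in Path(repo_root).rglob("CLAUDE.md"):
        text = path.read_text(errors="ignore")
        worst = 0
        for line in text.splitlines():
            # Count segments separated by shell chaining operators (|, &&, ;).
            segments = re.split(r"\||&&|;", line)
            worst = max(worst, len(segments))
        if worst > PIPELINE_THRESHOLD:
            findings.append((str(path), worst))
    return findings
```

A check like this could run in a pre-clone hook or CI gate, routing any flagged repository to manual review before a coding agent is allowed to open it.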
Separately, a supply chain attack on the Axios npm package coincided almost exactly with the Claude Code leak. Developers who updated via npm on 31 March between 00:21 and 03:29 UTC may have inadvertently pulled a trojanised HTTP client containing a cross-platform remote access trojan. Security teams are strongly advised to rotate all secrets for any workstations that updated Claude Code during that window.
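Teams assessing exposure can start from their lockfiles. The sketch below is an illustrative helper, not an official npm tool: it walks an npm v2/v3 `package-lock.json` and reports every resolved `axios` entry with its version and integrity hash, so those values can be compared against the registry’s known-good metadata. The lockfile layout assumed is npm’s flat `packages` map.

```python
import json
from pathlib import Path

def resolved_axios_versions(lockfile_path: str) -> list[dict]:
    """List every axios entry in an npm v2/v3 package-lock.json with its
    resolved version and integrity hash, for comparison against the
    registry's known-good values."""
    lock = json.loads(Path(lockfile_path).read_text())
    findings = []
    # npm v2/v3 lockfiles keep a flat "packages" map keyed by install path,
    # e.g. "node_modules/axios" or "node_modules/foo/node_modules/axios".
    for install_path, meta in lock.get("packages", {}).items():
        if install_path.split("/")[-1] == "axios":
            findings.append({
                "path": install_path,
                "version": meta.get("version"),
                "integrity": meta.get("integrity"),
            })
    return findings
```

Run against each repository’s lockfile, this gives an inventory of exactly which axios versions (including transitive copies) each workstation resolved, which is the first question an incident responder will ask.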
What enterprise IT leaders must do now
For organisations deploying AI coding agents, this incident should serve as a catalyst for a frank assessment of developer toolchain security. AI agents with local shell access and the ability to execute scripts represent a fundamentally different risk profile to traditional software. When those agents interact with external repositories, the attack surface expands considerably.
At Northdoor, we work with enterprise clients to ensure that the speed of AI adoption does not outpace the security controls needed to protect it.
Three principles should now be non-negotiable:
- Developer workstations must be treated as privileged endpoints — not general-purpose machines
- AI agents with shell execution capabilities must operate within Zero Trust architecture — with explicit permission boundaries and behavioural monitoring
- Any code claiming to be a leaked or unlocked version of a proprietary tool must be verified against official, signed sources only — no exceptions
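The third principle can be partly automated. npm records package integrity as a Subresource-Integrity string, a base64-encoded SHA-512 digest of the published tarball (the `sha512-…` values in lockfiles and registry metadata). The sketch below checks downloaded bytes against such a value before anything is unpacked; the expected digest would come from a trusted channel such as the registry’s own metadata, and only the sha512 form is handled here.

```python
import base64
import hashlib

def verify_npm_integrity(tarball_bytes: bytes, expected: str) -> bool:
    """Check bytes against an npm Subresource-Integrity string such as
    'sha512-<base64 digest>'. Only sha512 is handled in this sketch."""
    algo, _, b64digest = expected.partition("-")
    if algo != "sha512":
        raise ValueError(f"unsupported integrity algorithm: {algo}")
    digest = hashlib.sha512(tarball_bytes).digest()
    return base64.b64encode(digest).decode() == b64digest
```

A tarball that fails this check, whatever story its README tells, should never reach a developer workstation.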
The gap between vendor mistake and active attack has never been narrower
The Claude Code leak is a story about speed. In 2026, threat actors do not need to discover vulnerabilities independently — they need only monitor the news cycle and move faster than defenders. Anthropic acknowledged the error and committed to preventing a recurrence, but the malware campaigns, supply chain attack, and newly disclosed vulnerabilities that followed are a reminder that the consequences of an exposure can outlast the original incident by weeks.
For IT leaders, the lesson is not to avoid AI tools — the productivity gains are real and significant. The lesson is to ensure that the governance, monitoring, and access controls surrounding those tools are as sophisticated as the tools themselves.
If your organisation is deploying AI coding agents and has not yet reviewed your developer workstation security posture, speak to the Northdoor team about where to start.