🔴 BREAKING (April 4, 2026)
On March 31, 2026, Anthropic accidentally shipped the complete source code for Claude Code inside a public npm package. Within hours, 512,000 lines of TypeScript were mirrored across GitHub, a critical vulnerability was discovered, and malware campaigns began targeting developers. This post covers everything that happened, what it means for your security, and exactly what to do right now.
⚡ TL;DR – The Bottom Line
What Happened: Anthropic accidentally shipped 512,000 lines of Claude Code’s complete TypeScript source code in a public npm package on March 31, 2026.
What Was Exposed: The full client-side agent harness — permission systems, tool orchestration, memory architecture, 44 unreleased features, and internal model codenames. No customer data or model weights were leaked.
The Worst Part: A critical permission bypass vulnerability (known internally as CC-643) was found — and Anthropic had already built a fix but hadn’t shipped it to users.
⚠️ Action Required: Update past v2.1.88 immediately, switch from npm to the native installer, rotate your API keys, and check lockfiles for malicious axios packages if you installed on March 31.
The Bottom Line
The Claude Code leak is the biggest accidental source code exposure in AI history. It came just five days after Anthropic separately leaked internal documents about their upcoming “Mythos” model. For a company that markets itself as the “safety-first AI lab,” two major security lapses in one week is a serious blow to credibility. The leak didn’t expose model weights, customer data, or API credentials. But it exposed the complete operational blueprint of how Claude Code works, including 44 unreleased feature flags, internal model codenames, and a permission system vulnerability that Anthropic knew about but hadn’t fixed in public builds.
If you’re a Claude Code user: update past version 2.1.88 immediately, use the native installer going forward, and rotate your API keys. If you installed via npm on March 31 between 00:21 and 03:29 UTC, check your lockfiles for malicious axios packages. If you’re evaluating Claude Code for your team: the tool remains best-in-class for AI coding, but this incident should factor into your security posture. For the full feature breakdown, read our Claude Code review 2026.
The Claude Code Leak: What Actually Happened

Here’s the timeline, stripped of speculation. On March 31, 2026, Anthropic pushed version 2.1.88 of the @anthropic-ai/claude-code package to npm. That package contained a 59.8 MB JavaScript source map file, a debugging tool that maps compiled code back to readable source. Source maps are meant to stay in development environments. This one shipped to production.
By 4:23 AM ET, security researcher Chaofan Shou spotted it and posted a direct download link on X. The source map pointed to an unprotected archive on Anthropic’s own Cloudflare R2 storage bucket. Anyone could click it and download the complete, unobfuscated, commented TypeScript source code. Within two hours, GitHub repositories mirroring the code hit 50,000 stars, the fastest any repository has climbed in the platform’s history.
The scale, as VentureBeat reported: approximately 512,000 lines of TypeScript across 1,906 files. This wasn’t a snippet or a partial exposure. It was the entire client-side agent harness, including the permission model, every bash security validator, tool call orchestration, MCP server integration, multi-agent coordination systems, and the self-healing memory architecture that developers had been trying to reverse-engineer for months.
Anthropic confirmed the leak to multiple outlets, calling it “a release packaging issue caused by human error, not a security breach.” They emphasized that no customer data, model weights, or API credentials were exposed. The likely root cause was a known bug in Bun (the JavaScript runtime Claude Code is built on) that serves source maps in production mode even when they should be excluded.
🔍 REALITY CHECK
Marketing Claims: “Anthropic positions itself as the ‘safety-first AI lab’ with rigorous security practices.”
Actual Experience: Two major accidental data exposures in five days. The Mythos/Capybara blog post leak (March 26) exposed ~3,000 internal assets through a CMS misconfiguration. The Claude Code leak (March 31) shipped complete source code via npm. Both were attributed to “human error.”
Verdict: The AI models may be world-class, but the release engineering and operational security clearly need work. Being “safety-first” at the model layer doesn’t help if the packaging pipeline has manual steps that can fail this catastrophically.
What the Claude Code Leak Actually Revealed
Developers and security researchers spent the days after the leak poring over the source code. Here’s what they found, and why it matters whether you’re a Claude Code user or just watching from the sidelines.
44 Unreleased Feature Flags
The source code contained references to features Anthropic hasn’t announced yet. The most significant is KAIROS, an autonomous background agent that can run persistent sessions, periodically fix errors, and send push notifications without waiting for human input. Think of it as Claude Code that never clocks out. Another discovery was the “Buddy System,” a virtual pet feature with 18 species, rarity levels, and shiny variants. Yes, Anthropic’s engineers built a Tamagotchi inside their coding agent. The code also confirmed references to the Capybara/Mythos model that had leaked separately days earlier, suggesting it’s further along than Anthropic’s cautious public statements implied.
Undercover Mode
A file called undercover.ts instructs Claude Code to hide its AI identity when contributing to public open-source repositories. The code explicitly states: “You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover.” This drew immediate criticism from the open-source community, where some projects have policies against AI-generated code contributions. You can force Undercover Mode on, but you cannot force it off.
Self-Healing Memory Architecture
The leaked source revealed how Claude Code solves “context entropy,” the tendency for AI agents to become confused during long sessions. It uses a three-layer memory system centered on MEMORY.md, a lightweight index of pointers that stays permanently in context. A background process called autoDream consolidates memory across sessions, merging outdated information and resolving contradictions. This runs automatically when four conditions are met: at least 24 hours since the last consolidation, at least 5 new sessions, no other consolidation running, and at least 10 minutes since the last scan. For developers building their own AI agents, this architecture is essentially a free masterclass. For Anthropic, it’s competitive intelligence that Cursor, Antigravity, and every other coding agent can now study.
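The four gating conditions described above can be expressed as a simple predicate. The following is a hypothetical reconstruction from the leaked description, not Anthropic’s actual code; the interface, names, and structure are my assumptions:

```typescript
// Hypothetical sketch of the autoDream gating logic described above.
// Thresholds follow the leaked description; everything else is assumed.

interface ConsolidationState {
  lastConsolidationMs: number; // epoch ms of last completed consolidation
  newSessionsSince: number;    // sessions started since that consolidation
  consolidationRunning: boolean;
  lastScanMs: number;          // epoch ms of last memory scan
}

const HOUR = 3_600_000;
const MINUTE = 60_000;

function shouldConsolidate(s: ConsolidationState, nowMs: number): boolean {
  return (
    nowMs - s.lastConsolidationMs >= 24 * HOUR && // 24h+ since last run
    s.newSessionsSince >= 5 &&                    // at least 5 new sessions
    !s.consolidationRunning &&                    // no concurrent run
    nowMs - s.lastScanMs >= 10 * MINUTE           // 10 min+ since last scan
  );
}
```

The interesting design choice is that all four conditions must hold simultaneously, which keeps consolidation cheap and infrequent rather than firing on every session boundary.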
💡 Key Takeaway: The leaked source code is a goldmine for competitors and security researchers alike. If you’re building AI agents, the three-layer memory architecture and multi-agent coordination patterns are now public knowledge. If you’re an Anthropic competitor, the architectural moat just got a lot shallower.
The Critical Vulnerability Anthropic Already Knew About

Days after the Claude Code leak, AI security firm Adversa AI discovered a serious flaw in Claude Code’s permission system. Here’s the plain-English version: Claude Code has rules that block dangerous commands. You can configure it to always deny commands like curl (which makes network requests) to prevent data from being sent to outside servers. These deny rules are your safety net.
But the source code revealed a catch. When Claude Code encounters a compound command with more than 50 subcommands (instructions chained together with && or ||), it stops checking each one against the deny rules. Instead, it just asks you “Do you want to proceed?” without telling you that the safety checks were skipped. The reasoning was performance: analyzing every subcommand in a very long chain caused the interface to freeze. Anthropic’s internal ticket CC-643 documented this as a known issue.
Here’s where it gets worse. A comment in the code says: “Fifty is generous: legitimate user commands don’t split that wide.” That’s true for human-typed commands. But it doesn’t account for AI-generated commands from prompt injection, where a malicious CLAUDE.md file in a repository instructs Claude Code to generate a 50+ subcommand pipeline that looks like a legitimate build process. An attacker could embed realistic-looking build steps in a repository’s configuration file. Claude Code would generate the long command chain, skip all deny rule enforcement, and the user would see a generic approval prompt with no indication that security checks were bypassed.
The truly frustrating part: Anthropic already built a fix. A robust command parser built on tree-sitter (a widely used incremental parsing library) exists in the source code and works internally. But it wasn’t enabled in the public builds that customers use. Adversa noted that even a one-line code change, switching the fallback behavior from “ask” to “deny,” would have addressed this specific vulnerability. As of Claude Code v2.1.90, the vulnerability appears to have been silently patched.
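To make the bypass concrete, here is an illustrative sketch, emphatically not Anthropic’s actual code, of how a subcommand cap with an “ask” fallback silently defeats deny rules. The deny patterns, the naive splitter, and all names are assumptions for demonstration only:

```typescript
// Illustrative sketch of the 50-subcommand bypass described above.
// All names and patterns are hypothetical.

type Decision = "deny" | "ask";

const DENY_PATTERNS = [/^curl\b/, /^wget\b/]; // example deny rules
const MAX_SUBCOMMANDS = 50;

function splitCompound(command: string): string[] {
  // Naive split on shell chaining operators; real shell parsing
  // needs a proper grammar (hence the tree-sitter fix).
  return command.split(/&&|\|\||;/).map((c) => c.trim()).filter(Boolean);
}

function checkCommand(command: string): Decision {
  const parts = splitCompound(command);
  if (parts.length > MAX_SUBCOMMANDS) {
    // The flawed fallback: skip per-subcommand checks and just ask
    // the user. A hardened version would return "deny" here instead.
    return "ask";
  }
  return parts.some((p) => DENY_PATTERNS.some((rx) => rx.test(p)))
    ? "deny"
    : "ask"; // non-denied commands still go to the user for approval
}
```

A chain of fifty-plus harmless echo commands with a single curl buried inside comes back as a generic “ask” rather than a “deny,” which is exactly the prompt-injection scenario Adversa described.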
🔍 REALITY CHECK
Marketing Claims: “Claude Code’s permission system provides ‘defense-in-depth’ security with allow, deny, and ask rules.”
Actual Experience: The deny rules silently stopped working after 50 subcommands. Anthropic knew about it (internal ticket CC-643), built a fix (tree-sitter parser), and didn’t ship it to customers. The vulnerability existed independently of the AI model’s safety filters — a bug in the code, not a limitation of Claude’s intelligence.
Verdict: The permission system works as advertised for normal use. But “normal use” doesn’t include AI-generated commands from prompt injection — exactly the threat model an AI coding agent should be designed for. Fixed in v2.1.90, but the gap between having a fix and shipping it is concerning.
💡 Key Takeaway: If you’re using Claude Code’s deny rules as your only security layer, that’s not enough. Always audit CLAUDE.md files in unfamiliar repositories, review AI-generated commands before approving them, and never run Claude Code with --dangerously-skip-permissions outside a disposable sandbox.
The Supply Chain Attack That Made Everything Worse
In a coincidence that no one would believe in fiction, a completely separate supply chain attack hit npm on the same day. Between 00:21 and 03:29 UTC on March 31, malicious versions of the popular axios HTTP library (versions 1.14.1 and 0.30.4) appeared on npm containing a Remote Access Trojan. This was unrelated to Anthropic but catastrophically bad timing. Anyone who installed or updated Claude Code via npm during that three-hour window may have pulled in the trojanized axios package alongside the leaked source code.
If you installed Claude Code via npm on March 31, check your lockfiles (package-lock.json, yarn.lock, or bun.lockb) immediately. Search for axios versions 1.14.1 or 0.30.4, or the dependency plain-crypto-js. If you find them, uninstall immediately, rotate all secrets (API keys, SSH keys, tokens), and monitor your accounts for suspicious activity.
[Chart: Anthropic’s Security Cascade, March 2026. Severity rating (1-10) of each incident in chronological order.]
The DMCA Takedown That Backfired
Anthropic’s response to the Claude Code leak compounded the damage. The company filed copyright takedown notices with GitHub to remove repositories containing the leaked source code. According to GitHub’s records reported by TechCrunch, the notice was executed against approximately 8,100 repositories, including legitimate forks of Anthropic’s own publicly released Claude Code repository. Developers who had never even seen the leaked code found their projects suddenly inaccessible.
Boris Cherny, Anthropic’s head of Claude Code, acknowledged the overbroad takedowns were accidental and retracted the majority of the notices, limiting the action to one repository and 96 forks containing the leaked source map. GitHub restored access to the affected repositories. But the damage to developer goodwill was immediate. Reddit threads and Hacker News comments pointed out the irony: a company whose AI models are trained on vast amounts of internet data was now aggressively enforcing copyright on its own leaked code. One commenter summarized the sentiment: “The irony is rich.”
Adding another layer: Gartner noted that Claude Code is reportedly 90% AI-generated, per Anthropic’s own public disclosures. Under current U.S. copyright law, which requires human authorship, the leaked code may carry diminished intellectual property protection. The Supreme Court declined to revisit the human authorship standard in March 2026. This creates an unresolved legal question for every organization shipping AI-generated production code.
Malware Campaigns: Hackers Weaponized the Leak
Within days of the Claude Code leak, threat actors began using it as bait. Security firm Zscaler discovered malicious GitHub repositories claiming to contain Claude Code’s source code with “unlocked enterprise features.” These repositories ranked on the first page of Google for terms like “leaked Claude Code.” Inside was a 7-Zip archive containing ClaudeCode_x64.exe, a Rust-based dropper that installs Vidar (an infostealer that grabs browser data, passwords, and cryptocurrency wallets) and GhostSocks (a network traffic proxy tool).
The bottom line: do not download, fork, build, or run code from any GitHub repository claiming to be the leaked Claude Code. Verify everything against Anthropic’s official channels only. Leaked code is not “open source.” It remains proprietary and potentially dangerous to run unmodified. If you need Claude Code, install it through the official native installer: curl -fsSL https://claude.ai/install.sh | bash.
A Pattern of Security Issues
The Claude Code leak didn’t happen in isolation. Anthropic has been shipping new products at remarkable speed, and security researchers have been finding cracks. On March 19, Oasis Security reported three vulnerabilities in Claude that, when chained together, created a complete attack path from victim delivery to data theft. They called it “Cloudy Day” and responsibly disclosed it to Anthropic, who quickly patched it. On March 27, Koi Security found a major flaw in the Claude in Chrome extension called “ShadowPrompt” that enabled zero-click attacks and potential data exfiltration. Read our Claude in Chrome review for more on that extension’s capabilities and limitations.
GitGuardian’s State of Secrets Sprawl 2026 report added another data point: Claude Code-assisted commits leaked secrets (API keys, passwords, credentials) at a 3.2% rate, more than double the 1.5% baseline across all public GitHub commits. AI service credential leaks surged 81% year-over-year. GitGuardian noted the elevated rate reflects human workflow failures amplified by AI speed, not a defect in Claude Code itself. But it underscores a broader truth: AI coding agents lower the barrier to powerful development while simultaneously lowering the barrier to security mistakes.
💡 Key Takeaway: AI coding agents amplify both productivity and risk. GitGuardian’s data shows Claude Code-assisted commits leak secrets at 2x the baseline rate — not because the tool is flawed, but because AI speed outpaces human review. If your team uses any AI coding tool, secrets scanning in CI/CD is no longer optional.
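The CI/CD secrets-scanning point above can be made concrete with a minimal sketch. This is not GitGuardian’s implementation; the patterns and function name are illustrative assumptions, and production scanners use far larger rulesets plus entropy analysis:

```typescript
// Minimal illustrative secrets scan of the kind a CI/CD gate performs
// on a diff before it merges. Patterns here are examples only.

const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["anthropic-api-key", /sk-ant-[A-Za-z0-9_-]{20,}/],
  ["aws-access-key-id", /AKIA[0-9A-Z]{16}/],
  ["private-key-block", /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/],
];

function scanDiff(diffText: string): string[] {
  const findings: string[] = [];
  for (const [name, rx] of SECRET_PATTERNS) {
    if (rx.test(diffText)) findings.push(name);
  }
  return findings; // non-empty result should fail the pipeline
}
```

Wiring even a crude check like this into CI catches the “AI speed outpaces human review” failure mode the GitGuardian data describes: the commit lands fast, but the gate still blocks it.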
[Chart: Claude Code Leak Impact Assessment. How the leak affected different dimensions of Anthropic’s ecosystem.]
What This Means for Enterprise Users
If your organization uses Claude Code, or is evaluating it alongside tools like ChatGPT Codex or Google Antigravity, this incident should trigger a security posture review. According to analysts at Everest Group, Greyhound Research, and Pareekh Consulting, some enterprises are pausing expansion of Claude Code in their workflows, though few are expected to rip and replace immediately.
The practical steps enterprise security teams should take right now: isolate development environments where AI agents operate, enforce stricter repository permissions, require human review before any AI-generated output reaches production, audit any repository’s CLAUDE.md file before running Claude Code against it, and monitor developer workstations for anomalous outbound connections. CrowdStrike’s CTO Elia Zaitsev put it bluntly at RSAC 2026: “Don’t give an agent access to everything just because you’re lazy. Give it access to only what it needs to get the job done.”
The exposed permission logic also means attackers can now design malicious repositories specifically tailored to trick Claude Code into running unauthorized commands or exfiltrating data. This isn’t theoretical. The Adversa vulnerability proved the concept, and the leaked source code makes crafting convincing attacks significantly easier by revealing the exact interface contracts for hooks, MCP servers, and the four-stage context management pipeline. For teams using Claude Agent Teams or Claude Code plugins, the expanded attack surface is proportional to the number of integrations.
Is Claude Code Still Safe to Use?
Yes, with caveats. The Claude Code leak exposed how the tool works, not the AI model itself. The model weights, training data, and customer data were not compromised. The Adversa vulnerability has been silently patched in v2.1.90. And Claude Code remains the highest-scoring AI coding agent on SWE-bench Verified at 80.8%. The core product is still best-in-class for autonomous coding work.
But “safe” now requires more vigilance from users. Before this leak, attackers had to brute-force prompt injections and jailbreaks against Claude Code’s defenses without knowing the implementation details. Now they have a complete roadmap. The attack research cost, as security firm Straiker put it, “collapsed overnight.” You should treat Claude Code like any privileged software: apply the same scrutiny you’d give an admin tool or automation platform.
Here’s your action checklist:

- If you use the native installer (recommended), update to the latest version immediately. Auto-updates should handle this, but verify with claude --version.
- If you installed via npm, uninstall version 2.1.88 and switch to the native installer.
- Check your lockfiles for malicious axios versions if you updated on March 31.
- Rotate your Anthropic API keys via the developer console.
- Never run Claude Code with --dangerously-skip-permissions in anything other than a disposable sandbox.
- Audit CLAUDE.md and .claude/config.json files before running Claude Code in unfamiliar repositories.
- For cost-conscious developers, the Claude Code Router offers an alternative pathway that reduces dependency on Anthropic’s infrastructure.
What Developers Are Actually Saying
The developer community response has been a mix of fascination, concern, and dark humor. On Reddit, the biggest thread appeared on r/LocalLLaMA with over 3,700 upvotes, where the focus was on what the architecture reveals for building similar systems with open-weight local models. On r/ClaudeAI, a popular thread with 1,800+ upvotes highlighted that one developer used the leaked source to find and patch a bug causing excessive token consumption. Theo Browne (t3.gg) called Anthropic’s closed-source strategy “the biggest fumble in the AI era,” pointing to cache invalidation bugs that were silently costing users 10-20x more in tokens.
Some developers pushed back on the panic. Developer Skanda argued the leak was “kind of clickbait” since the minified JavaScript was always readable in the npm package, and the source map just made it more accessible as TypeScript. Others noted that code you can read is code you can audit, making open scrutiny a net positive for security over time. The counterargument: there’s a difference between code built in the open from day one and a closed codebase suddenly exposed without the benefit of community feedback shaping it along the way.
The most interesting speculation: some developers question whether the “leak” was actually a deliberate PR move. Two high-profile exposures in five days, both generating enormous excitement about Anthropic’s upcoming roadmap, just days after the company angered developers by sending legal threats to the OpenCode project. Within 48 hours of the leak, developer sentiment on social media flipped from “Anthropic sucks” to “look what Anthropic is building.” The strongest argument against the conspiracy theory: nobody would engineer a PR stunt to overlap with an active malware attack on npm.
What Comes Next
The Claude Code leak is a before-and-after moment for AI coding agent security. Several things are now inevitable. Anthropic will face tougher enterprise procurement conversations. Procurement teams will push for tighter release controls, clearer incident reporting, and stronger indemnity clauses. The exposed features like KAIROS (autonomous background agents) and Undercover Mode raise governance questions that enterprises will want answered before expanding their Claude Code deployments.
The broader AI coding tool market will adjust. Competitors like Codex, Cursor, and Goose AI now know exactly how Anthropic solved context management, permission systems, and multi-agent coordination. The architectural insights are permanently public. But as one analyst noted, Anthropic’s real moat was always the Claude model itself, not the CLI tool wrapping it.
For the full picture on Claude’s ecosystem and whether it’s still the right choice for your workflow, start with our Claude AI review 2026 and explore our guides on Claude Cowork, Claude computer use, and the full AI agents for developers landscape.
FAQs: Your Questions Answered
Q: Was my data exposed in the Claude Code leak?
A: No. Anthropic confirmed that no customer data, API credentials, or model weights were exposed. The leak contained only the client-side source code for Claude Code itself, which is the software that runs on your machine and communicates with Anthropic’s servers.
Q: Do I need to update Claude Code right now?
A: Yes. Update past version 2.1.88 immediately. If you use the native installer, auto-updates should handle this. Verify with claude --version. If you’re still on npm, switch to the native installer: curl -fsSL https://claude.ai/install.sh | bash. The Adversa vulnerability appears fixed in v2.1.90.
Q: Is Claude Code still safe to use after the leak?
A: Yes, but with increased caution. The core product and AI model are unaffected. The known vulnerability has been patched. However, attackers now have detailed knowledge of Claude Code’s security architecture, so follow best practices: never use --dangerously-skip-permissions in production, audit unfamiliar repositories before running Claude Code in them, and review AI-generated commands before approving them.
Q: What was the malicious axios package issue?
A: Unrelated to Anthropic, but coinciding with the leak. Malicious versions of the axios npm package (1.14.1 and 0.30.4) containing a Remote Access Trojan appeared on npm between 00:21 and 03:29 UTC on March 31. If you installed Claude Code via npm during that window, check your lockfiles for these versions and the dependency plain-crypto-js. Remove them immediately and rotate all secrets if found.
Q: Should enterprises pause their Claude Code deployments?
A: Most analysts say enterprises should tighten security posture rather than rip and replace. Practical steps include environment isolation, stricter repository permissions, enforced human review of AI-generated output, and auditing Claude Code configuration files in all repositories. Some organizations may pause expansion while evaluating the incident, but the underlying tool remains capable.
Q: What is the Undercover Mode that was found in the code?
A: A feature that instructs Claude Code to hide its AI identity when contributing to public open-source repositories. It prevents internal codenames and Anthropic-specific information from appearing in commits and pull requests. This is controversial because some open-source projects explicitly ban AI-generated contributions.
Q: What is KAIROS?
A: An unreleased feature discovered in the leaked code. KAIROS would allow Claude Code to operate as a persistent background agent, running tasks autonomously, fixing errors, and sending push notifications without waiting for human input. It hasn’t been officially announced and raises significant governance questions about always-on AI agents with system access.
Q: How does this compare to other AI tool security incidents?
A: This is the largest accidental source code exposure for a major AI coding tool. For comparison, GitHub Copilot has faced prompt injection concerns but no source code leaks of this scale. Google’s Antigravity has dealt with rate limit issues but not security breaches. The Claude Code leak is unique in both scale (512,000 lines) and secondary consequences (vulnerability discovery, malware campaigns, DMCA chaos).
Q: Can competitors now clone Claude Code?
A: They can study the architecture, but they can’t replicate the product. The leaked code is the harness that wraps around Claude’s AI models. Without the model weights and training data, the source code alone doesn’t reproduce Claude Code’s capabilities. As one security expert put it: they can see the blueprints but can’t build the engine. That said, architectural patterns like the three-layer memory system and multi-agent coordination are now public knowledge that any team can learn from.
Final Verdict

The Claude Code leak is embarrassing for Anthropic and genuinely consequential for security. But it doesn’t change the fundamental product calculus. Claude Code remains the best AI coding agent available, with the highest accuracy on real-world coding benchmarks and the richest feature ecosystem of any terminal-based tool. The leak exposed the wrapper, not the engine.
What it does change is the trust equation. Anthropic’s “safety-first” branding took a real hit. Two major leaks in five days, a known-but-unshipped vulnerability fix, an overbroad DMCA takedown that nuked legitimate developer repositories, and an Undercover Mode that actively conceals AI authorship. Each of these individually would be a manageable incident. Together, they paint a picture of a company whose product engineering outpaced its operational security.
Use Claude Code if you need best-in-class AI coding capability and you’re willing to apply enterprise-grade security practices around it. Stick with Cursor if you prefer an IDE-first workflow with less surface area exposure. Consider Goose AI if you want an open-source alternative where you control the entire stack. And regardless of which tool you choose, treat every AI coding agent as privileged software that deserves the same security scrutiny as any tool with shell access to your development environment.
Update your Claude Code installation today: curl -fsSL https://claude.ai/install.sh | bash
Founder of AI Tool Analysis. Tests every tool personally so you don’t have to. Covering AI tools for 10,000+ professionals since 2025. See how we test →
Stay Updated on AI Security
Don’t miss the next major update. Subscribe for honest AI coding tool reviews, price drop alerts, and breaking feature launches every Thursday at 9 AM EST.
- ✅ Honest Reviews: We actually test these tools, not rewrite press releases
- ✅ Price Tracking: Know when tools drop prices or add free tiers
- ✅ Feature Launches: Major updates covered within days
- ✅ Comparison Updates: As the market shifts, we update our verdicts
- ✅ No Hype: Just the AI news that actually matters for your work
Free, unsubscribe anytime. 10,000+ professionals trust us.

Related Reading
Explore more AI coding tool reviews and security analysis:
- Claude Code Review 2026: Voice Mode, /Loop and 1M Context
- Claude AI Review 2026: The Full Platform
- Claude Code vs Cursor 2026
- Top AI Agents for Developers 2026
- Claude Code Plugins: 9,000+ Extensions
- Claude Code Router: Cut Your AI Coding Bill by 80%
Last Updated: April 4, 2026
Claude Code Version Referenced: v2.1.88 (leaked) / v2.1.90 (patched)
Next Review Update: May 4, 2026
Have a tool you want us to review? Suggest it here | Questions? Contact us