Anthropic Just Leaked Claude Code's Entire Source Code. Every Crypto Dev Using AI Should Be Paying Attention.
This morning, a security researcher discovered that Anthropic shipped the entire source code of Claude Code — their AI-powered coding CLI — inside a source map file on npm. All 512,000 lines of TypeScript. Every tool definition, every permission gate, every feature flag. Publicly readable by anyone who looked.
This happened on the same day that North Korean hackers backdoored Axios via npm, dropping a RAT designed to steal crypto wallets. Two npm supply chain incidents in one morning. If you’re building in crypto and not rethinking your dependency security model right now, you’re behind.
What Was Exposed
Version 2.1.88 of the @anthropic-ai/claude-code package on npm included a 59.8 MB .map file — a JavaScript source map intended for internal debugging that was never supposed to ship. By 4:23 AM ET, the discovery was on X. Within hours, the full codebase was mirrored across GitHub and being dissected by thousands of developers.
Here’s what the source code revealed:
~40 discrete tools, each permission-gated. The tool system alone is 29,000 lines of TypeScript. Every capability Claude Code has — file reads, shell execution, web fetches, code edits — is a distinct tool with its own permission logic. This is the exact specification an attacker needs to craft inputs that exploit permission boundaries.
A 46,000-line Query Engine. This handles all LLM API calls, streaming, caching, and orchestration. It’s the core of how Claude Code decides what to do and when. Understanding this logic makes it significantly easier to construct prompt injection attacks that manipulate the agent’s decision-making.
44 unreleased feature flags. Fully built features that haven’t shipped yet. Some of these may introduce new attack surface that security teams can now study — and exploit — before they’re even released.
Internal model codenames. Capybara maps to a Claude 4.6 variant. Fennec maps to Opus 4.6. Numbat is still in testing. These details are interesting but the security-relevant finding is the architecture, not the names.
Anthropic’s response: “This was a release packaging issue caused by human error, not a security breach.” No customer data or credentials were exposed. That’s true — but it misses the point.
Why This Matters for Crypto
The leak didn’t expose secrets. It exposed the blueprint. And in crypto, where AI coding tools are increasingly used to write smart contracts, manage deployments, and interact with on-chain infrastructure, the blueprint is the attack surface.
Consider what crypto teams use AI coding assistants for:
- Writing and auditing Solidity/Rust smart contracts — if an attacker can craft a malicious repository that tricks Claude Code into executing a backdoored deployment script, funds are at risk
- Managing private keys and deployment workflows — AI tools that can read files and execute shell commands have access to
.envfiles, wallet configs, and signing infrastructure - Interacting with blockchain nodes and APIs — tool calls that make HTTP requests can be redirected through the same SSRF patterns that plagued Axios
- Running in CI/CD pipelines — where AI agents often have elevated permissions and access to production secrets
The leaked source code reveals exactly how Claude Code’s Hooks and MCP (Model Context Protocol) server integration works. Hooks are user-defined shell commands that execute in response to Claude Code events. MCP servers extend the tool system with external capabilities. Both are powerful — and both can be weaponized.
An attacker who understands the hook execution model can design a malicious .claude configuration file in a repository that, when a developer clones and runs Claude Code against it, silently exfiltrates environment variables, wallet keys, or API tokens. Before today, crafting such an attack required black-box experimentation. Now it’s open-book.
The Collision With the Axios Attack
The timing is not coincidental in its implications. On the same morning:
- North Korean hackers published backdoored Axios versions that dropped a RAT targeting crypto wallets
- Anthropic leaked the full architecture of their AI coding tool via npm
Both incidents share the same attack surface: the npm supply chain. Both affect the same population: JavaScript developers building crypto infrastructure. And both exploit the same trust model: developers who run npm install and assume what they’re pulling is safe.
If you installed or updated Claude Code between 00:21 and 03:29 UTC today, you may have pulled the malicious Axios version as a transitive dependency. The RAT would have executed during install — before you even opened your editor.
This is the kind of compounding risk that the crypto industry is uniquely bad at pricing. Each incident alone is manageable. Together, they represent a systemic failure in how crypto teams manage their development toolchain.
AI Coding Tools Need a Security Model
The Claude Code leak forces a conversation the industry has been avoiding: AI coding assistants are infrastructure, not toys. They need the same security treatment as any other privileged system in your stack.
What most crypto teams do today:
- Install AI coding tools globally with full filesystem and shell access
- Run them against untrusted codebases without sandboxing
- Give them access to the same environment variables that hold private keys and API secrets
- Trust that the tool vendor’s security is sufficient
What they should do:
- Run AI coding tools in isolated environments with no access to production secrets
- Use dedicated service accounts with minimal permissions
- Audit tool configurations (
.claudedirectories, MCP server configs) in every repository before running - Pin AI tool versions and review changelogs before updating
- Treat AI tool execution as untrusted code execution — because it is
The irony is that Claude Code’s architecture, now visible to everyone, actually shows a thoughtful permission model. Tools are gated. Capabilities are scoped. The problem isn’t that Anthropic built it wrong. The problem is that most teams deploy these tools with maximum permissions and zero isolation.
The Roles That Didn’t Exist Two Years Ago
This incident creates demand for a category of security engineer that barely existed before 2024: someone who understands both AI agent architectures and blockchain security.
| Role | What They’d Do | Typical Comp (USD) |
|---|---|---|
| AI Security Engineer | Audit AI tool configurations, prevent prompt injection in dev workflows | $170,000 - $260,000 |
| DevSecOps Engineer (AI-aware) | Sandbox AI coding tools in CI/CD, manage tool permissions | $150,000 - $230,000 |
| Supply Chain Security Engineer | Audit npm dependencies, detect malicious packages, enforce lockfiles | $180,000 - $280,000 |
| Smart Contract Security Engineer | Verify AI-generated contract code, audit deployment pipelines | $200,000 - $350,000+ |
| Security Architect (Crypto + AI) | Design zero-trust development environments for AI-augmented crypto teams | $200,000 - $300,000 |
| Threat Intelligence Analyst | Monitor for attacks targeting AI dev tools in crypto contexts | $150,000 - $220,000 |
The intersection of “understands LLM agent internals” and “understands crypto security” is vanishingly small. The people in that intersection are going to be the most sought-after hires in Web3 security for the next several years.
What To Do Now
1. Audit your AI tool setup. Check what files and environment variables your AI coding tools can access. If they can read your .env or wallet configs, fix that today.
2. Check your Claude Code version. If you’re on 2.1.88, the source map is included. Update and check that no malicious dependencies were pulled during the window.
3. Review .claude and MCP configs in your repositories. If you’re pulling open-source repos and running AI tools against them, you’re trusting the repo author’s tool configuration. Don’t.
4. Isolate AI tools from production secrets. Use separate environments, service accounts, and network segmentation. AI coding tools should never have direct access to signing keys or deployment credentials.
5. Monitor for prompt injection attacks targeting your codebase. With Claude Code’s architecture now public, expect an increase in repositories designed to exploit AI coding tools. If a repo includes a .claude directory with custom hooks, treat it as executable code and review it before use.
The Bigger Picture
Three npm-related security incidents in a single day — the Axios supply chain attack, the Claude Code source leak, and the ongoing fallout from both — paint a clear picture: the JavaScript ecosystem that crypto is built on was never designed for this threat model.
The developers building billion-dollar DeFi protocols, custody platforms, and exchange backends are using the same package manager, the same AI tools, and the same trust assumptions as someone building a blog. The attackers — including nation-state actors with nine-figure motivation — know this.
The teams that survive the next wave of supply chain attacks will be the ones that treated their development environment as a first-class security boundary. The ones that didn’t will be writing post-mortems.
Related reading: North Korea Backdoored Axios — Crypto Was the Target · Google Quantum Paper Threatens 6.9M BTC · What Crypto Companies Hire For in the AI Era
Explore AI security and DevSecOps roles at cryptogrind.com →