The malicious @bitwarden/[email protected] was live on the npm registry for ninety-three minutes. Three hundred and thirty-four machines pulled it down — laptops, build agents, Renovate-driven dependency bumps — and on every one of them a preinstall hook ran, swept the host for credentials, and shipped them off to a domain chosen for its corporate-camouflage value: audit.checkmarx[.]cx. The payload identified itself in a comment as Shai-Hulud: The Third Coming. The first coming was September 2025. The second was January 2026. The third was three weeks ago. We are at the point where the operator brands the generations.
If you spend any time inside CI/CD pipelines, this is the part of 2026 that should be keeping you up at night, not the latest CVE in some appliance. The sustained tempo — eight months of self-propagating worms in npm and PyPI, attributed across at least two distinct operator clusters, hitting packages with combined download counts in the hundreds of millions per month — is not a sequence of unrelated incidents. It is the same primitive being reused, refined, and shipped against every new ecosystem of trust the open-source world produces. And every defensive write-up about each individual incident has been pointing at the same root cause without quite coming out and saying it: the package registries should not be running our code on install.
A short and unhappy lineage
The pattern is easier to see if you lay the major incidents out next to each other. Dates are when the malicious packages first appeared on the registry; download counts are the affected packages’ baseline pre-incident traffic, not victims.
September 2025 — Shai-Hulud (original). First documented self-propagating worm on npm. 796 packages compromised, ~132 million monthly downloads of affected versions. Created a public shai-hulud repository on each victim’s GitHub account to exfiltrate credentials. This was the proof of concept.
January 2026 — Shai-Hulud 2.0. Same brand, evolved tradecraft. Less reliance on the loud shai-hulud repository name, broader credential coverage including AI-coding-assistant tokens.
March 31, 2026 — Axios compromise. Two malicious versions of axios (>100M weekly downloads in its healthy state) injected a fake runtime dependency whose only purpose was to trigger an install-time script that pulled a cross-platform RAT. Notably, the package’s runtime code was unchanged — the entire attack lived in the install lifecycle, where it is invisible to anyone reading the application’s actual import statements.
Late March / April 2026 — TeamPCP cluster (Trivy GitHub Action, Checkmarx KICS, LiteLLM, VS Code Marketplace). Different ecosystems, same operator family, all chained through CI/CD release-pipeline compromise.
April 21–23, 2026 — CanisterSprawl. New campaign reusing the older CanisterWorm/Trivy ICP-canister exfil tradecraft. Hit pgserve, automagik, and the typosquats kube-health-tools and kube-node-health on npm; jumped to PyPI through xinference 2.6.0–2.6.2. The npm postinstall is 1,143 lines. Exfiltration goes to telemetry.api-monitor.com and to a decentralized Internet Computer Protocol canister that cannot be domain-seized. Initial access for the Namastex.ai branch came through poisoned pull requests with branch names matching prt-scan-{12-hex-chars}, abusing CI workflows that run on pull_request with secrets.
April 22, 2026 — Bitwarden CLI / “Shai-Hulud: The Third Coming.” TeamPCP’s pivot from Checkmarx KICS into Bitwarden’s publish-ci.yml workflow. Exfil to audit.checkmarx[.]cx. Worm logic: when the harvested npm token has publish rights, it forks the payload into the victim’s other packages and republishes. Camouflage: instead of creating the loud shai-hulud repository the earlier waves used, this version writes the exfil into existing repos on the victim’s account as ordinary-looking commits.
April 29–30, 2026 — Mini Shai-Hulud. TeamPCP again, this time hitting SAP’s Cloud Application Programming packages (mbt, @cap-js/db-service, @cap-js/sqlite, @cap-js/postgres), Intercom’s official intercom-client, and PyTorch Lightning. The novelty: a Bun-runtime stealer that drops an embedded Python helper to read /proc/<pid>/maps and /proc/<pid>/mem for the GitHub Actions Runner.Worker process and grep secrets directly out of memory — bypassing the log-masking GitHub Actions applies to stdout. PyTorch Lightning has 2.1M weekly downloads. The Lightning maintainers’ GitHub account was apparently itself compromised, because a Socket-opened issue warning users was closed within one minute by a pl-ghost account posting a “SILENCE DEVELOPER” meme.
Eight months. Six distinct named campaigns. At least two clearly different operator clusters (TeamPCP and the CanisterSprawl crew, with the original Shai-Hulud authors a third unknown). Combined exposure measured in hundreds of millions of monthly downloads.
The single primitive everyone is using
Strip away the branding and the mascots and read each incident’s package.json diff. The mechanism is identical across every one of them:
```diff
   "scripts": {
+    "preinstall": "node <dropper script>",
     ...
   }
```
That is the shape of the Bitwarden CLI 2026.4.0 diff, with the dropper filename elided. The CanisterSprawl pgserve 1.1.11 diff differs in filename only. The Mini Shai-Hulud SAP packages use a preinstall hook running setup.mjs. The Axios compromise uses a postinstall in a planted dependency. PyTorch Lightning hides a _runtime/ directory that auto-runs on import lightning, which is the Python equivalent — same primitive, different ecosystem, same trust assumption.
The trust assumption is this: when a developer or a CI job runs npm install, the package manager will execute scripts written by every package in the dependency tree, with full local user authority, before any application code has had a chance to run. Those scripts can read every file the user can read, exfiltrate to anywhere the host can reach on the network, and — critically — find publish tokens and use them to ship more compromised packages downstream. That last step is what turns a one-shot package compromise into a worm.
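To make that concrete, here is a harmless stand-in for the primitive: a hypothetical package whose preinstall hook merely prints what it could touch. A real payload swaps the print for exfiltration.

```json
{
  "name": "innocuous-utility",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node -e \"const os=require('os');console.log('preinstall ran as',os.userInfo().username,'with',Object.keys(process.env).length,'env vars and read access to',os.homedir())\""
  }
}
```

Put that anywhere in a dependency tree and, under default npm configuration, the hook runs before any of your code does. Nothing in that configuration distinguishes it from a hook that POSTs the contents of ~/.npmrc to a C2 endpoint.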
This was a defensible default in 2010 when npm was small, packages were mostly authored by individuals, and CI runners were rare. It has been an indefensible default for at least five years, and now we have the empirical data to prove it. The September 2025 Shai-Hulud and the April 2026 Mini Shai-Hulud are not really different attacks — they are the same attack run twice, with eight months of operator iteration in between. Every detection that pops one generation gets sanded off in the next: the loud shai-hulud repo name disappears, the exfil endpoint moves from a domain seizure target to an ICP canister, log-masking gets bypassed by reading process memory directly, and so on. The defender treadmill is faster than the response time of the registries, and the registries are the only place you could actually fix this.
CI runners are the new endpoint, and it is worse than it looks
When you read these write-ups, the common reflex is to think “I would notice if a stealer ran on my laptop.” That is probably true, and also irrelevant. The real damage in every one of these incidents is happening on CI runners, not developer machines.
A GitHub Actions runner has, on average, a much richer credential trove than a developer laptop and far worse telemetry. It holds the secrets the workflow declared, plus whatever federated tokens the workflow can mint via OIDC, plus — courtesy of Mini Shai-Hulud’s /proc/<pid>/mem scrape — every secret the runner has ever loaded into memory, regardless of whether the workflow author thought it was using them. It has no EDR. Its outbound egress is, in the default configuration, the open internet. Its lifetime is minutes; by the time anyone could investigate, it is gone, and the only forensic record is whatever GitHub’s billing telemetry retained.
If your release workflow holds a publish token and runs on every PR, every push to main, every nightly cron, the worm only needs to land on one of those runs. The SAP root cause in Mini Shai-Hulud is the most striking case: an attacker pushed a malicious commit to the upstream repo that hijacked the release workflow, which already had publish permissions and no manual approval gate. The window between repo write and registry publish was zero. There was no second factor between anyone with commit rights to a workflow file and the SAP CAP packages on npm.
Most of you have a workflow that looks structurally like that. Look.
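Here is a hedged sketch of the shape to look for; the names are hypothetical, the topology is the point.

```yaml
# release.yml -- the dangerous topology: one workflow that both runs on
# automatic triggers and holds the keys to the registry
name: release
on:
  push:
    branches: [main]
  pull_request:          # secrets reachable from PR-triggered runs
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci      # lifecycle hooks execute here, as the runner's user
      - run: npm publish # no approval gate between commit and registry
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}  # broad legacy token
```

If a single workflow file matches both halves, an automatic trigger and a publish credential, you have the SAP topology.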
What “fixing it” actually means
Every advisory ends with a list of one-line mitigations, and every list looks like this: pin versions, rotate tokens, audit repos, block the IOC. Those are correct things to do, and they are not a strategy. The strategy is to remove install-time execution as a default-on capability. There are three places to do it.
At the registry. This is the right place. A package that wants to run code on install should have to opt in to that capability per package, with publisher justification, and the lifecycle-hook bit should be visible at npm view time so that consumers can see it before they install. pnpm has been demonstrating this is feasible: pnpm v10 stopped automatically executing dependency postinstall hooks, and the recommended model is to explicitly allow only trusted packages via allowBuilds. pnpm v11, shipped April 2026, kept that default. The npm CLI itself has not made this change, and package.json lifecycle hooks are still consumed indiscriminately by anyone who has not gone out of their way to disable them.
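Under that model, the consumer-side configuration is a short allowlist. A sketch, using the allowBuilds key as described above (check your pnpm version’s docs for the exact spelling and file):

```yaml
# pnpm-workspace.yaml -- only these packages may run install-time builds;
# everything else gets its lifecycle hooks silently skipped
allowBuilds:
  - esbuild   # fetches its platform binary in a postinstall
  - sharp     # node-gyp native build
```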
In the build environment. If you cannot wait for the registry to change defaults — and you cannot — the next-best is to make npm ci --ignore-scripts (or npm config set ignore-scripts true) the default in your CI images and your dev onboarding scripts, then explicitly allow lifecycle hooks for the small list of packages that actually need them (mostly native module builds — node-gyp, esbuild, etc.). This is annoying for about a week and then it isn’t.
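A minimal sketch of what that looks like in practice; the allowlist contents are an assumption, and yours will differ:

```bash
# Bake the safe default into the repo (or the CI image)
echo "ignore-scripts=true" >> .npmrc

# Installs now skip every lifecycle hook in the tree
npm ci

# Re-run build hooks only for the packages that genuinely need them;
# --ignore-scripts=false overrides the .npmrc default for this one command
npm rebuild --ignore-scripts=false esbuild sharp
```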
At the publish boundary. The SAP and Bitwarden compromises both came down to a release workflow that held a publish token and could be triggered by any commit author. That is the wrong topology. Publish should be a separate workflow, gated by a manual approval (GitHub environments with required reviewers), running on a runner that is not used for any other purpose, with the publish token scoped to the single package it is publishing. Granular npm access tokens — scoped, time-limited, ideally IP-restricted — replace the all-or-nothing legacy tokens that turn one compromise into an ecosystem incident. None of this is hard, all of it is paperwork, and most projects have not done it.
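In GitHub Actions terms, the safer topology looks something like this (environment and secret names are placeholders):

```yaml
# publish.yml -- separate from CI, human-gated, narrowly credentialed
name: publish
on:
  workflow_dispatch:            # a person starts it; a commit cannot
permissions:
  id-token: write               # lets npm attach provenance attestations
jobs:
  publish:
    runs-on: ubuntu-latest
    environment: npm-publish    # environment configured with required reviewers
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: "https://registry.npmjs.org"
      - run: npm ci --ignore-scripts
      - run: npm publish --provenance
        env:
          # granular token: single package, short expiry, ideally IP-pinned
          NODE_AUTH_TOKEN: ${{ secrets.SCOPED_PUBLISH_TOKEN }}
```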
Detections worth running this week
Even before the architectural fixes land, there is a small set of grep-able artifacts that the current generation of worms is leaving behind. Run these.
For GitHub organizations: search recently created repositories whose description matches Shai-Hulud: The Third Coming or A Mini Shai-Hulud has Appeared. Those are the canaries — the worms create them on victim accounts as exfil stores. Also sweep recent commit messages for the literal string LongLiveTheResistanceAgainstMachines:.
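A sketch using the gh CLI; the org name is a placeholder, and flag support varies by gh version:

```bash
ORG=your-org

# Repos the worms create on victim accounts as exfil stores
gh search repos --owner "$ORG" "Shai-Hulud: The Third Coming"
gh search repos --owner "$ORG" "A Mini Shai-Hulud has Appeared"

# Exfil commits disguised as ordinary activity
gh search commits --owner "$ORG" "LongLiveTheResistanceAgainstMachines:"
```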
For npm publish histories on every account that has installed dependencies in the last sixty days: list publishes you did not initiate. The CanisterSprawl and Shai-Hulud lineages both push patch-version bumps from stolen tokens; if you see a 1.2.3 -> 1.2.4 you do not remember authoring, that is your incident.
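npm records publish timestamps per version, which makes this scriptable. A sketch, with your-package as a placeholder:

```bash
# Version -> publish timestamp for everything ever shipped under this name;
# diff the output against your release log or CI history
npm view your-package time --json
```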
For package lockfiles: sweep for the known-bad versions. @bitwarden/[email protected]; the malicious axios releases from March 31; [email protected], 2.6.1, and 2.6.2; [email protected]; the compromised @cap-js/db-service, @cap-js/sqlite, and @cap-js/postgres releases; the two bad intercom-client versions (7.0.5 and its predecessor); and the two bad pytorch-lightning versions (2.6.3 and its predecessor). Run npm ls and pip list against the union.
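A sketch of that sweep for one repository, abbreviated to the versions recoverable above:

```bash
# npm: a hit prints the dependency path; a miss prints "(empty)" and exits 1
npm ls @bitwarden/[email protected] [email protected] \
       [email protected] [email protected] [email protected]

# PyPI: flag the compromised ranges
pip list --format=freeze | grep -E '^(xinference==2\.6\.[0-2]|pytorch-lightning==2\.6\.3)$'
```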
For repository CI: search for branches matching prt-scan-[0-9a-f]{12} — that is the CanisterSprawl initial-access pattern for pull_request-triggered workflows. Any repo with secrets in pull_request-triggered workflows should be treated as a public-facing surface.
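Both halves are greppable. A per-repository sketch (assumes GNU xargs):

```bash
# CanisterSprawl initial-access branches on the remote
git ls-remote --heads origin | grep -E 'refs/heads/prt-scan-[0-9a-f]{12}$'

# Workflows that both trigger on pull_request and reference secrets
grep -rl "pull_request" .github/workflows/ 2>/dev/null | xargs -r grep -l "secrets\."
```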
For egress: block, or alert on, the known C2 endpoints — audit.checkmarx[.]cx (94.154.172[.]43), telemetry.api-monitor.com, plus any outbound to the ICP canister endpoints called out in the CanisterSprawl writeups. None of these are forever-true; the value is detecting historical compromise via NetFlow lookback.
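For the lookback, the log source varies by environment; a sketch against flat DNS or proxy logs, with placeholder paths:

```bash
# Any historical resolution of, or connection to, the known C2 endpoints
grep -rhE 'audit\.checkmarx\.cx|telemetry\.api-monitor\.com|94\.154\.172\.43' \
  /var/log/dns/ /var/log/proxy/
```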
What I expect the next one to look like
I would bet on three vectors for whatever lands in May or June, in roughly this order of likelihood. First, a private-registry compromise — most of these incidents have hit public npm and PyPI, but plenty of organizations operate Verdaccio, Artifactory, or GitHub Packages instances whose access controls and auditing are markedly worse than the public registries. Once one operator figures out the mass-customization angle there, that will be the new front. Second, a more aggressive jump into language ecosystems with weaker install-time defenses than npm — RubyGems, Composer, NuGet, perhaps Cargo despite its better isolation story. Third, an attack that targets the development tooling itself rather than runtime packages: linters, formatters, test runners, language-server installers, the things developers install globally and rarely audit.
The constant across all of those is the install-time execution primitive. As long as fetching a dependency means running a stranger’s code, the cycle continues. The September 2025 Shai-Hulud was a warning. Eight months later we have six campaigns and counting, and the line on the chart is going up. The defenders’ counter-argument — that we will catch each variant faster, that we will harden tokens, that we will pin versions — is correct in detail and wrong at the level of the problem. We will not exit this loop by getting better at the cleanup. We will exit it when running npm install stops being a remote code execution event by default.
What to do this quarter
If you operate any infrastructure that pulls JavaScript or Python dependencies from a public registry — and at this point, that includes almost everyone, since these languages have eaten DevOps tooling — pick three things from this list and finish them by the end of the quarter:
Move CI to npm ci --ignore-scripts (or pnpm v10/v11 with an allowBuilds whitelist) and accept the one-week paper cut while you whitelist the half-dozen packages that legitimately need build hooks.

Move every publish workflow to a separate, manually approved GitHub environment with a scoped, expiring publish token, and rotate the existing legacy tokens out.

Add detections for the IOCs above to whatever telemetry you have — egress logs, GitHub audit logs, npm publish notifications.
If you ship packages, narrow the blast radius of your publish identity. If you depend on packages, narrow what the install can do. Neither is novel; both are uncomfortable; both work.
The worms are not slowing down. They are getting better. Eventually the registries will change defaults, because the cost of not doing so is becoming visibly catastrophic, and npm ci --ignore-scripts will become the install command. We will look back on the package-manager-as-script-host era the way we look back on eval(http_get(url)) and wonder what we were thinking. In the meantime, the gap between what is technically the current default and what is operationally safe is yours to close on every machine and runner you own.