RC RANDOM CHAOS
securityengineering

Your npm install Just Ran Someone Else's Code

Supply chain security is not a dependency problem. It is a trust delegation problem. And the system was never designed to handle the weight.

5 min read

Every modern application is mostly other people’s code.

That is not an exaggeration. A typical project pulls in hundreds, sometimes thousands, of packages. Each one is maintained by someone the development team has never met, operating under no contractual obligation, with no formal review process, and no accountability if something goes wrong.

The assumption was that open source would be self-correcting. Many eyes make all bugs shallow. The community would catch problems. The ecosystem would police itself.

That assumption no longer holds.

The Trust Chain

When a developer runs an install command, they are not just downloading code. They are executing it.

Package managers support lifecycle scripts. Pre-install hooks, post-install hooks, arbitrary shell commands triggered automatically during dependency resolution. The developer does not need to approve this execution. In most environments, they do not even see it.
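To make this concrete, here is a minimal sketch of what such a hook looks like in an npm package manifest. The package name, version, and script are invented for illustration; any package on the registry can declare the same thing:

```json
{
  "name": "left-padd",
  "version": "1.0.2",
  "scripts": {
    "postinstall": "node ./setup.js"
  }
}
```

When this package lands anywhere in the dependency tree, `npm install` runs `node ./setup.js` automatically, with no prompt, under the installing user's account. The same applies to `preinstall` and `install` hooks.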

This means that adding a dependency is not a passive act. It is a trust delegation. The developer is granting execution rights to a stranger, and to every transitive dependency that stranger chose to include.

The chain extends further than most teams realize. A single top-level package might resolve into dozens of transitive dependencies. Each one is a node in a trust graph that nobody has drawn and nobody is monitoring.

The system was designed for convenience. It was not designed for verification.

What Changed

Open source package ecosystems grew faster than governance could follow.

In the early period, the major registries were small enough that reputation functioned as a filter. Maintainers were known. Packages were few. The cost of trust was low because the scope of trust was small.

What changed was scale. Registries now host millions of packages. Maintainership is often anonymous or pseudonymous. Transfer of ownership happens without notification. A package that was safe last month may have changed hands, and the new owner’s intentions are unknown.

Attackers recognized this immediately.

The supply chain became an attack surface not because it was poorly defended, but because it was never conceived as something that needed defending. The system was built around the assumption that participation implied good faith. That assumption made the ecosystem fast. It also made it structurally exploitable.

The Mechanism

The attack patterns are not sophisticated. They do not need to be.

Typosquatting. Register a package with a name one character off from a popular library. Wait for someone to mistype an install command. The malicious package runs its payload during installation, before the developer realizes anything is wrong.

Dependency confusion. Discover that an organization uses internal package names that do not exist in the public registry. Register those names publicly. Many build systems will resolve the public package over the internal one. The payload executes inside the build pipeline, often with elevated privileges.
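The standard countermeasure is to make the internal namespace unambiguous. Sketched below with an invented org scope and registry host: put internal packages under a scope and pin that scope to the private registry in the project's `.npmrc`, so resolution cannot fall through to the public registry for those names:

```ini
# .npmrc — hypothetical scope and registry host
@acme:registry=https://npm.internal.acme.example/
```

Registering the scope name on the public registry as well, even empty, closes the remaining gap.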

Maintainer compromise. Identify a widely used package maintained by a single person. Gain access to their account through credential reuse, social engineering, or simply offering to help with maintenance. Push an update. Thousands of downstream projects pull it automatically.

None of these require breaking into a system. They require understanding how the system already works and using its own mechanics against it.

Chris Roberts would recognize the pattern. You do not break in. You wait for the system to hand you access.

The Visibility Gap

Most organizations do not have a complete inventory of what runs inside their build pipelines.

They know their direct dependencies. They may have a lockfile. They rarely have a current, verified map of the full transitive dependency tree, the maintainers behind each node, the history of ownership transfers, or the lifecycle scripts that execute during installation.

This is a visibility problem dressed as a tooling problem.

Software composition analysis tools exist. They scan for known vulnerabilities. They produce reports. But they operate on a model of known-bad, not unknown-untrusted. A newly compromised package with no CVE will pass every scan. A legitimate package with a malicious post-install script will not trigger a vulnerability alert.
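One blunt control does exist for the lifecycle-script half of this gap: npm can be told to skip lifecycle scripts entirely, shown here as a project `.npmrc` fragment. It is not free; packages that genuinely need native build steps at install time will break and have to be handled explicitly:

```ini
# .npmrc — refuse to run pre/post/install lifecycle scripts
ignore-scripts=true
```

The same behavior is available per invocation as `npm install --ignore-scripts` or `npm ci --ignore-scripts`. It does nothing about malicious runtime code, but it removes the install step as an execution vector.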

The system measures what it can count. Known vulnerabilities are countable. Trust integrity is not.

Ron Gula would frame this directly. If you cannot observe the execution behavior of your dependency chain, you do not control your build environment. You are recording what you hope is happening.

The Structural Problem

The incentive structure of open source maintenance makes this worse over time, not better.

Maintainers of critical packages are often unpaid volunteers. They carry the weight of infrastructure used by thousands of organizations, with no resources, no security team, and no obligation to continue. Burnout is common. Abandonment is common. When a maintainer walks away, the package does not disappear. It sits in the registry, still being pulled, still being trusted, now unmaintained and available for takeover.

The organizations that depend on this code rarely contribute to its maintenance. They consume it freely and assume continuity. That assumption is a risk position, but it is almost never modeled as one.

Ryan Cloutier would frame this as governance failure. The organization has delegated a critical trust decision to an unmanaged external supply chain and has no mechanism to detect when that trust is violated. This is not a technical gap. It is an unmeasured business risk.

The Deeper Pattern

Supply chain compromise is not fundamentally a dependency management problem.

It is a trust delegation problem in an ecosystem that was never designed to carry this much weight. The registries were built for sharing, not for assurance. The tooling was built for speed, not for verification. The culture was built on the assumption that openness and good faith would scale alongside adoption.

What actually scaled was dependency depth, maintainer fatigue, attacker interest, and organizational blindness to where their trust boundaries actually sit.

This kind of failure does not stay contained to development environments. It propagates through build systems, into production artifacts, and into every environment that artifact touches.

The install command still works the same way it always did.

It now executes code you do not control, from people you do not know, inside systems you assume are trusted. And most teams have no way to tell the difference.