
I run them inside a sandbox.

The npm ecosystem is so big that one can never discard it for frontend development.


> auditing your dependencies is

How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

What if these sources include binary packages?

The popular JavaScript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

Can anyone even review it in a month? And they publish a new update weekly.


> The popular JavaScript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

You’re looking at the number of dependents. The React package has no dependencies.

Asides:

> Do you read the source of every single package before doing a `brew update` or `npm update`?

Yes, some combination of doing that or delegating it to trusted parties is required. (The difficulty should inform dependency choices.)

> What if these sources include binary packages?

Reproducible builds, or don’t use those packages.


> You’re looking at the number of dependents. The React package has no dependencies.

Indeed.

My apologies for misinterpreting the link that I posted.

Consider "devDependencies" here

https://github.com/facebook/react/blob/main/package.json

As far as I know, these 100+ dev dependencies are installed by default. Yes, you can probably avoid installing them, but that will likely break something during the build process, and most people just stick to the defaults anyway.

> Reproducible builds, or don’t use those packages.

A lot of builds are not reproducible/hermetic. Even GitHub Actions is not reproducible: https://nesbitt.io/2025/12/06/github-actions-package-manager...

Most frontend frameworks are not reproducible either.

> don’t use those packages.

And do what?


> As far as I know, these 100+ dev dependencies are installed by default.

devDependencies should only be installed if you're developing the React library itself. They won't be installed if you just depend on React.


> They won't be installed if you just depend on React.

Please correct me if I am wrong, here's my understanding.

"npm install installs both dependencies and dev-dependencies unless NODE_ENV is set to production."


It does not recursively install dev-dependencies.

> It does not recursively install dev-dependencies.

So, these ~100 [direct] dev dependencies are installed by anyone who does `npm install react`, right?


No. They’re only installed if you git clone react and npm install inside your clone.

They are only installed for the topmost package (the one you are working on); npm does not recurse through all your dependencies and install their devDependencies.


> ~100 [direct]

When you do `npm install react` the direct dependency is `react`. All of react's dependencies are indirect.


Run `npm install react` and see how many packages it says it added. (One.)
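
Something like this (exact wording varies by npm version):

  $ npm install react
  added 1 package, and audited 2 packages in 1s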

If you're trying to audit React, don't you either need to audit its build artifacts rather than its source, or audit those dev dependencies too?

> And do what?

Keep on keepin on


The best tool for your median software-producing organization, who can’t just hire a team of engineers to do this, is update embargoes. You block updating packages until they’ve been on the registry for a month or whatever by default, allowing explicit exceptions if needed. It would protect you from all the major supply-chain attacks that have been caught in the wild.
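
If I recall the flag correctly, npm's `before` config can approximate this by ignoring anything published after a given date (the date below is just an example):

  # resolve only package versions published before this date
  npm install --before=2025-11-01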

> The popular JavaScript React framework has 15K direct and 2K indirect dependencies - https://deps.dev/npm/react/19.2.3

You’re looking at dependents. The core React package has no dependencies.


In security-sensitive code, you take dependencies sparingly, audit them, and lock to the version you audited and then only take updates on a rigid schedule (with time for new audits baked in) or under emergency conditions only.
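
A minimal sketch of the locking part with npm (the version number is purely illustrative):

  # record the exact audited version instead of a semver range
  npm install react@19.0.0 --save-exact

  # later builds install exactly what the lockfile recorded
  npm ci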

Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.

Here's an example dependency policy for something security critical: https://github.com/tock/tock/blob/master/doc/ExternalDepende...

Practically, unless your code is extremely security sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at the problem, and these days you can ask AI to audit it as well.

You always have to ask: what are the odds I do something dumb and introduce a security bug vs what are the odds I pull a dependency with a security bug. If there's already "battle hardened" code out there, it's usually lower risk to take the dep than do it yourself.

This whole thing is not a science, you have to look at it case-by-case.


If that is really the case (I don't know the numbers for React), projects with sane security criteria would either jump only between versions that have passed a complete verification process (think industry certifications), or decide that such an enormous number of dependencies renders the framework an undesirable tool and simply avoid it. What's not serious is living the life and blindly incorporating 15-17K dependencies because YOLO.

(So yes, I'm stating that the 99% of JS devs who _do_ precisely that are not being serious. At the same time, I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either.)


> How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?

There are several ways to do this. What you mentioned is the brute-force method of security auditing, which may be impractical, as you allude to. There are also tools designed to catch security bugs in source code; while they will never be perfect, they should significantly reduce the manual effort required.

Another obvious approach is to crowdsource the verification. This can be achieved through security advisory databases like Rust's rustsec [1] service. Rust has tools that can use the data from rustsec to do the audit (cargo-audit). There's even a way to embed the dependency tree information in the target binary. Similar tools must exist for other languages too.
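
The Rust flow looks roughly like this:

  cargo install cargo-audit       # one-time setup
  cargo audit                     # check Cargo.lock against the rustsec database
  cargo install cargo-auditable   # embeds the dependency list in release binaries
  cargo auditable build --release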

> What if these sources include binary packages?

Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.
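
The check itself is simple once builds are reproducible (file names here are hypothetical):

  # rebuild the artifact from the published source, then compare digests;
  # a reproducible build must match the vendor binary bit for bit
  sha256sum vendor-provided.tar.gz locally-rebuilt.tar.gz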

[1] https://rustsec.org/


More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space? Are you looking at the version of the package they manage? People often freak out about the number of packages in such ecosystems but what matters a lot more is how many different people are in your dependency tree, who they are, and how they operate.

(The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that frequently update to the latest version. Hang back at least a few days so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build because of its lockfiles: you need to run an update manually.)
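
The npm analogue of that discipline, as a rough sketch:

  npm ci       # install exactly what the lockfile records; no silent upgrades
  npm update   # updating is a deliberate, separate step on your own schedule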


> More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space?

That doesn't necessarily help you in the case of supply chain attacks. A large proportion of them are spread through compromised credentials. So even if the author of a package is reputable, you may still get malware through that package.


Normally you would only review the diff from a previous version. But yes, it's not really practical for small companies or individuals at the moment. Larger companies do exactly this.

We need better tooling to enable crowdsourcing and make it accessible for everyone.


> Larger companies do exactly this.

Someone committed malicious code to Amazon Q Developer.

AWS published a malicious version of their own extension.

https://aws.amazon.com/security/security-bulletins/AWS-2025-...


Show me how you will escape a Docker sandbox.

This is a well understood and well documented subject. Do your own research.

Start here to help give you ideas for what to research:

https://linuxsecurity.com/features/what-is-a-container-escap...


This kind of response isn't helpful. He's right to ask about the motivations for the claim that containers in general are "not a sandbox" when the design of containers/namespaces/etc. looks like it should support using these things to make a sandbox. He's right to be confused!

If you look at the interface contract, both containers and VMs ought to be about equally secure! Nobody is an idiot for reading about the two concepts and arriving at this conclusion.

What you should have written is something about your belief that the inter-container, intra-kernel attack surface is larger than the intra-hypervisor, inter-kernel attack surface, and so it's less likely that someone will screw up implementing a hypervisor in a way that opens a security hole. I wouldn't agree with this position, but it would at least be defensible.

Instead, you pulled out the tired old "educate yourself" trope. You compounded the error with the weaselly "are considered" passive-voice construction that lets you present the superior security of VMs as a law of nature instead of your personal opinion.

In general, there's a lot of alpha in questioning supposedly established "facts" presented this way.


> This is a well understood and well documented subject. Do your own research.

Anything, including the Linux kernel, can be broken with such security vulnerabilities.

This is not a weakness in the design of containers. `npm install`, on the other hand, is broken by design (due to post-install scripts).


> This is not a weakness in the design of containers.

Partially correct.

Many container escapes also happen because the security of the underlying host, the container runtime, or the container itself was poorly or inconsistently implemented. This creates gaps that allow escapes from the container. There is a much larger potential for mistakes, creating a much larger attack surface. This is in addition to kernel vulnerabilities.

While you can implement effective hardening across all the layers, the potential for misconfiguration is still there, so there is still a large attack surface.

While a virtual host can be escaped from, the attack surface is much smaller, leaving less room for potential escapes.

This is why containers are considered riskier for a sandbox than a virtual host. Which one you use, and why, really should depend on your use case and threat model.

Sad to say, a disappointing number of people don't put much hardening into their container environments, including production k8s clusters. So it's much easier to say that a virtual host is better for sandboxing than containers, because people are less likely to get it wrong.


> Many container escapes are also because the security of the underlying host, container runtime, or container itself was poorly or inconsistently implemented.

Sure, so running `npm install` inside the container is no worse than `npm install` on my machine. And in most cases, it is much better.


Containers provide more isolation than running without them. That was never in question in our conversation.

Escaping a properly set up container is a kernel 0day. Due to how large the kernel attack surface is, such 0days are generally believed to exist. Unless you are a high value target, a container sandbox will likely be sufficient for your needs. If cloud service providers discounted this possibility then a 0day could be burned to attack them at scale.

Also, you can use the runsc (gVisor) runtime for Docker; if you are careful not to expose vulnerable protocols to the container, nothing will escape it with that runtime.
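
Assuming runsc is installed and registered as a Docker runtime, usage is a single flag:

  # syscalls are handled by gVisor's user-space kernel instead of
  # hitting the host kernel directly
  docker run --rm --runtime=runsc alpine uname -a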


You start with the assumption of a "properly set up container". Also, I believe you are oversimplifying the attack surface.

A container escape can be caused by combinations of breakdowns in several layers:

- Kernel implementation - aka, a bug. It's rare, but it happens

- Kernel compile time options selected - This has become more rare, but it can happen

- Host OS misconfiguration - Can be a contributing factor to enabling escapes

- Container runtime vulnerability - A vulnerability in the runtime itself

- Container runtime misconfiguration - Was the runtime configured properly?

- Individual container runtime misconfiguration - Was the individual container configured to run securely?

- Individual container build - What's in the container, and what can be leveraged to attack the host

- Running container attack surface - What is the running container's exposed attack surface?

The last two are included for completeness, but in the case of the original article (running untrusted Python code), they are irrelevant.

My point is that you must consider the system as a whole to assess its overall attack surface and risk of compromise. There is a lot more that can go wrong to enable a container escape than you implied.

There are some people who are knowledgeable enough to ensure their containers are hardened at every level of the attack surface. Even then, how many are diligent enough to maintain that attention to detail every time? How many automate their configurations?

Most default configurations are not hardened as a compromise to enable usability. Most people who build containers do not consider hardening every possible attack surface. Many don't even know the basics. Most companies don't do a good job hardening their shared container environments - often as a compromise to be "faster".

So yeah, a properly set up container is hard to escape.

Not all containers are set up properly - I'd argue most are not.


> Escaping a properly set up container is a kernel 0day.

No, it is not. In fact, many of the container escapes we see are caused by bugs in the container runtimes themselves, which differ considerably across implementations. CVE-2025-31133 was published a couple of months ago and had nothing at all to do with the kernel - just like many container escapes don't.


If a runtime is vulnerable then it didn't "set up a container properly".

Containers are a kernel technology for isolating and restricting resources for a process and its descendants. Once set up correctly, any escape is a kernel 0day.

For anyone who wants to understand what a container is, I would recommend bubblewrap: https://github.com/containers/bubblewrap (this is also what Flatpak happens to use).

It should not take long to realize that you can set it up in ways that are secure and ways which allow the process inside to reach out in undesired ways. As runtimes go, it's as simple as it gets.


Note CVE-2025-31133 requires one of: (1) a persistent container, or (2) an attacker-controlled image. That means that as long as you always use "docker run" on known images (as opposed to "docker start"), you cannot be exploited via that bug even if the service itself is compromised.

I am not saying that you should never update the OS, but a lot of those container escapes have severe restrictions and may not apply to your specific config.


Note this lists three vulnerabilities as examples: CVE-2016-5195 (Dirty COW), CVE-2019-5736 (host runc overwrite) and CVE-2022-0185 (fs_context heap overflow).

Out of those, only the first one is actually exploitable in common setups.

CVE-2019-5736 requires either an attacker-controlled image or `docker exec`. This is not likely to be the case in the "untrusted Python" use case, nor in many Docker setups.

CVE-2022-0185 is blocked by the seccomp filter in default installs, so as long as you don't give your containers the --privileged flag, you are OK. (And if you do give that flag, escape is trivial without any vulnerabilities.)
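
The difference is easy to demonstrate (the image is just an example):

  # default: Docker's seccomp profile and capability drops apply
  docker run --rm -it alpine sh

  # --privileged removes those protections; no kernel bug is needed
  # to escape a container started this way
  docker run --rm -it --privileged alpine sh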


The burden of proof lies with the person making empirically unfalsifiable claims.

Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".

That is to say, Docker is typically a security win because you get things like seccomp and user/DAC isolation "for free". That's great. That's a win. Typically exploitation requires a way to get execution in the environment plus a privilege escalation. The combination of those two things may be considered sufficient.

It is not sufficient for "I'm explicitly giving an attacker execution rights in this environment" because you remove the cost of "get execution in the environment" and the full burden is on the kernel, which is not very expensive to exploit.


> Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".

Docker is better for running arbitrary code compared to the direct `npm install <random-package>` that's common these days.

I moved to a Dockerized sandbox[1], and I feel much better now against such malicious packages.
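
The core of the idea is a throwaway container that can only see the project directory; a minimal sketch with a stock Node image (my tool adds more than this):

  # npm and any install scripts run inside the container, not on the host
  docker run --rm -v "$PWD":/work -w /work node:22 npm install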

  1 - https://github.com/ashishb/amazing-sandbox

It's better than nothing, obviously. But I don't consider `npm install <random-package>` to be equivalent to "RCE as a service", although it's somewhat close. I definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.

I also implemented `insanitybit/cargo-sandbox` using Docker but that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.

You can and should feel better about running commands like that in a container, as I said - seccomp and DAC are security boundaries. I wouldn't say "you should feel good enough to run an open SSH server and publish it for anyone to use".


> `npm install <random-package>` to be equivalent to "RCE as a service"

It is literally that. When you write "npm install foo", npm will proceed to install the package called "foo" and then run its installation scripts. It's as if you'd run curl | bash. That npm install script can do literally anything your shell in your terminal can do.

It's not "somewhat close" to RCE. It is literally, exactly, fully, completely RCE delivered as a god damn service to which you connect over the internet.


I'm familiar with how build scripts work. As mentioned, I build insanitybit/cargo-sandbox exactly to deal with malicious build scripts.

The reason I consider it different from "I'm opening SSH to the public, anyone can run a shell" is because the attack typically has to either be through a random package, which significantly reduces exposure, or through a compromised package, which requires an additional attack. Basically, somewhere along the way, something else had to go wrong if `npm install <x>` gives an attacker code execution, whereas "I'm giving a shell to the public" involves nothing else going wrong.

Running a command yourself that may include code you don't expect is not, to me, the same as arbitrary code execution. It often implies it but I don't consider those to be identical.

You can disagree with whether or not this meaningfully changes things (I don't feel strongly about it), but then I'd just point to "I don't think it's a sufficient barrier for either threat model but it's still an improvement".

That isn't to downplay the situation at all. Once again,

> that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.


> definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.

That's not the main attack vector. The attack vector is some random dependency that is used by a lot of popular packages, which you `npm install` indirectly.


That doesn't change what I said. It definitely doesn't change what I said about docker as a security boundary.

Again, it's great to run `npm` in a container. I do that too because it's the lowest effort solution I have available.


That should definitely improve.

Right now, you are pretty much locked into the theme (and its version) when you set up your website for the first time.


Yeah. That's one flip side.

hugo-PaperMod, the most popular Hugo theme, doesn't support the latest 10 releases of Hugo.

So everyone using it is locked into an old Hugo version (e.g., pinned via Docker).


Why not put the whole site behind a CDN?

> The swap bypassed our policy because the deny rule was bound to a specific file path, not the file itself or the workspace root.

This policy is stupid. I mount the directory read-only inside the container, which makes such a swap impossible (barring a security hole in the container itself).
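
Assuming Docker, the relevant bit is the `:ro` mount flag (the image name is illustrative):

  # the agent can read the workspace but cannot swap or modify files in it
  docker run --rm -v "$PWD":/workspace:ro my-agent-image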


> Has anyone given it a try?

Yes. I don't think this will persist caches & configs outside of the current dir, for example, the global npm/yarn/uv/cargo caches or even the Claude Code/Codex/Gemini configs.

I ended up writing my own wrapper around Docker to do this. If interested, you can see the link in my previous comments. I don't want to post the same link again & again.


I had the same setup that I posted about a few months back[1], and then I migrated all of it into a single tool[2] for ease of use.

  1 - https://news.ycombinator.com/item?id=45766478
  2 - http://github.com/ashishb/amazing-sandbox

