What you describe sounds a lot like Diátaxis[1], a strategy for writing and organizing technical documentation. It sorts docs into one of four categories: tutorials, explanations, how-tos, and references.
Category is derived from a fairly simple heuristic: whether the content informs action or cognition, and whether the content serves the reader’s application or acquisition of a skill[2]. I’m a fan and it’s simple enough that most anyone can learn it in an afternoon.
Unless my understanding of how IPv6 works is flawed, I don’t think your assertion is true in practice. One of the big benefits of IPv6 is that addresses are plentiful and fairly disposable. Getting a /48 block and configuring a router to assign from the block is pretty straightforward.
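For the curious, a minimal sketch of the router side using radvd, with the documentation prefix 2001:db8::/48 standing in for whatever block you're actually delegated:

    $ cat /etc/radvd.conf
    interface eth0
    {
        AdvSendAdvert on;
        # Advertise one /64 carved out of the delegated /48; hosts on
        # the LAN then autoconfigure their own addresses via SLAAC.
        prefix 2001:db8:0:1::/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };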
I’m also unsure if IPv4 really gets you the privacy advantages you think it does. Your IP address is a data point, but the contents of your TCP/HTTP traffic, your browser JS runtime, and your ISP are typically the more reliable ways to identify you individually.
Other responses address how you could go about this, but I'd just like to note that you touch on the core problem of security as a domain: At the end of the day, it's a problem of figuring out who to trust, how much to trust them, and when those assessments need to change.
To use your example: Any cybersecurity firm or practitioner worth their salt should be *very* explicit about the scope of their assessment.
- That scope should exhaustively detail what was and wasn't tested.
- There should be proof of the work product, and an intelligible summary of why, how, and when an assessment was done.
- They should give you what you need to have confidence in *your understanding of* your security posture, as well as evidence that you *have* a security posture you can prove with facts and data.
Anybody who tells you not to worry and to just take their word for something should be viewed with extreme skepticism. It is a completely unacceptable frame of mind when you're legally and ethically responsible for things you're stewarding for other people.
It makes Google multiple millions by improving ad delivery and conversion within Gmail. It probably also helps Google land big corporate or public contracts, but last I checked most of the money was made via ads in Gmail's free tier.
If you're using the container to manage stuff on the host, it'll likely need to be a process running as root. I think the most common form of this is Docker-in-Docker style setups where a container is orchestrating other containers directly through the Docker socket.
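A sketch of what that looks like in practice (the socket path is Docker's default; the image and command are just for illustration):

    # Hand the container the host's Docker socket; `docker` inside it now
    # drives the host daemon, which is effectively root on the host.
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
        docker:cli docker ps   # lists the *host's* containers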
While this is true, the general security stance on this is: Docker is not a security boundary. You should not treat it like one. It will only give you _process level_ isolation. If you want something with better security guarantees, you can use a full VM (KVM/QEMU), something like gVisor[1] to limit the attack surface of a containerized process, or something like Firecracker[2] which is designed for multi-tenancy.
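For instance, assuming gVisor's runsc has been installed and registered as a Docker runtime per its docs, opting a container into it is a single flag:

    # Syscalls are intercepted by gVisor's user-space kernel (the Sentry)
    # instead of going straight to the host kernel.
    docker run --rm --runtime=runsc alpine uname -a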
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
I hear the "Docker is not a security boundary." mantra all the time, and IIRC it was the official stance of the Docker project a long time ago, but is this really true?
Of course, if you have a kernel exploit you'd be able to break out (this is what gVisor mitigates to some extent), and nothing seems to really protect against rowhammer/memory-timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be overly wide volume bindings (e.g. something that allows access to the Docker control socket from inside the container, or an obviously stupid mount like mounting your root filesystem inside the container).
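Spelled out, those two foot-guns look something like this (illustrative commands, not anything you should run on a box you care about):

    # Control-socket exposure: anything in the container can drive the host daemon.
    docker run --rm -v /var/run/docker.sock:/var/run/docker.sock alpine ls -l /var/run/docker.sock
    # Root mount: the entire host filesystem, writable from inside the container.
    docker run --rm -v /:/host alpine ls /host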
Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
> Virtual machines are treated as a security boundary despite the fact that with enough R&D they are not. Hosting minecraft servers in virtual machines is fine, but not a great idea if they’re cohosted on a machine that has billions of dollars in crypto or military secrets.
While I generally agree with the technical argument, I fail to see the threat model here. Is it that some external threat would have prior knowledge that an important target is in close proximity to a less hardened one? It doesn't seem viable to me for nation states to spend the expensive R&D to compromise hobbyist-adjacent services in a hope that they can discover more valuable data on the host hypervisor.
Once such expensive malware is deployed, there's a huge risk that all the R&D money is spent on potentially just reconnaissance.
I think you’re missing the point, which was that high-value targets adjacent to soft targets make escapes a legitimate goal, but in low-value scenarios VM escapes aren’t worth the R&D.
And not without cause. We've been pitching docker as a security improvement for well over a decade now. And it is a security improvement, just not as much as many evangelists implied.
Not 99%. Many people run a hypervisor and then a VM just for Docker.
Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).
Agreed - this is actually pretty common in the Proxmox realm of hosters. I segment container nodes using LXC, and in some specific cases I'll use a VM.
Not only does it allow me to partition the host for workloads, but it also gives me security boundaries. While there may be a slight performance hit, the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and lets me kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).
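A hypothetical sketch of that kill-and-reprovision flow with Proxmox's pct; the VMID, template, and options are placeholders for whatever your environment actually uses:

    # Tear down the old container and stamp out a fresh one from a template.
    pct stop 101 && pct destroy 101
    pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --hostname svc01 --memory 1024 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp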
Hear, hear. I also have ADHD, though I couldn’t use stimulant medications due to bad reactions to them, but I’ve had success with non-stimulant medications (Strattera, aka atomoxetine [1]).
A big thing I struggled with prior to medical treatment, and that I don’t often hear discussed in connection with ADHD, was rejection sensitivity.
For those unfamiliar: imagine a time someone said something that hurt your feelings or caused a strong emotional reaction.
Now imagine that as a routine emotional response to day-to-day interactions. Feeling intensely sad, irritated, insulted, etc., to extents completely out of proportion to whatever was said or even implied.
It’s brutal. It contributes to a lot of depression and social anxiety for folks with ADHD. It doesn’t matter if you’re aware of the response being disproportionate—you get to go on that emotional roller coaster whenever somebody says they don’t care for your favorite food, accidentally cut you off in a conversation, or the day just turns out differently than you were expecting.
Medical treatment makes a huge difference. In my particular case it was the difference between feeling like I had the emotional regulation of a toddler and no longer needing to constantly question every emotion I felt before responding to whatever I was reacting to.
Stimulant medications didn’t work for me, but they do this for most people with ADHD (more effectively, too!), and like alterom it saddens me whenever FUD like this crops up.
Rejection sensitivity may be the reason I detest to-do lists. The lists inevitably languish and slowly turn into a perpetual reminder of who I haven't become, i.e. a rejection from past-me.
If I were to put on my security hat, things like this give me shivers. It's one thing if you control the script and specified the dependencies. For any other use-case, you're trusting the script author to not install python dependencies that could be hiding all manner of defects or malicious intent.
This isn't a knock against uv, but more a criticism of dynamic dependency resolution. I'd feel much better about this if uv had a way to whitelist specific dependencies/dependency versions.
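For context, this is the kind of inline metadata (PEP 723) in question; pinning exact versions there narrows what gets resolved, but that's the script author's choice, not the consumer's, and transitive dependencies still float unless locked separately. A sketch (file name and pin are just examples):

    $ cat tool.py
    # /// script
    # requires-python = ">=3.12"
    # dependencies = [
    #     "requests==2.32.3",
    # ]
    # ///
    import requests
    print(requests.get("https://example.com").status_code)
    $ uv run tool.py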
If you’re executing a script from an untrusted source, you should be examining it anyway. If it fails to execute because you haven’t installed the correct dependencies, that’s an inconvenience, not a lucky security benefit. You can write a reverse shell in Python with no dependencies and just a few lines of code.
It's a stretch from "executing a script with a build user" or "from a validated distro immutable package" to "allowing something to download evergreen code and install files everywhere on the system".
I've used Tiger/SAINT/SATAN/COPS in the distant past. But I think they're somewhat obsoleted by modern packaging and security measures like AppArmor and SELinux, not to mention Docker and similar isolators.
Most people like their distro to vet these things. uv et al. had a reason to exist when Python 2 and 3 were a mess; I think that time is way behind us. pip is mostly for installing libraries, and even that is mostly already done by the distros.
Sorry, I was half asleep! I meant that you can easily look at the code in the script and audit what it does – you can just run `cat` on it and you’re done!
But it’s much harder to inspect what the imports are going to do and be sure they’re free of any unsavory behavior.
If that’s your concern, you should be auditing the script and the dependencies anyway, whether they’re in a lock file or in the script. It’s just as easy to put malicious stuff in a requirements.txt.
There's a completely irrational knee-jerk reaction to curl|sh. Do you trust the source or not? People who gripe about this will think nothing of downloading a tarball and running "make install", or downloading an executable and installing it in /usr/local/bin.
I will happily copy-paste this from any source I trust, for the same reason I'll happily install their software any other way.
It really depends on the use case. A one-off install on a laptop that I don't use for anything that gets close to production - fine by me.
For anything that I want to depend on, I prefer stronger auditability to ease of install. I get it: theoretically you can do the exact same thing with curl|sh as with a git download, inspecting dependencies, installing from source, and so on. But in reality, I'm lazy (and, per another thread, a 70s hippie) and would like to nip any temptation to cut corners in the bud.
I hate that curl $SOMETHING | sh has become normalized. One does not _have_ to blindly pipe something to a shell. It's quite possible to pull the script in a manner that allows examination. That Homebrew also endorses this behaviour doesn't make it any less of a risky abdication of administrative agency.
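For example (URL is a stand-in), the fetch-inspect-run version is barely more work:

    curl -fsSL https://example.com/install.sh -o install.sh
    less install.sh        # read it, grep it, diff it against a prior copy
    sh install.sh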
But then I'm a weirdo that takes personal offense at tools hijacking my rc / PATH, and keep things like homebrew at arm's length, explicitly calling shellenv when I need to use it.
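Concretely, that means nothing in my rc beyond running the following when I actually want brew (path assumes the Apple Silicon default prefix):

    # Homebrew only touches PATH and friends when explicitly invoked:
    eval "$(/opt/homebrew/bin/brew shellenv)"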
It’s not an unreasonable take given historic behavior. Rather than decrying the cynicism, what steps can we take to ensure companies like Tesla/Waymo/etc are held accountable and incentivized to prioritize safety?
Do we need harsher fines? Should we give auto regulators the same teeth the FAA used to have during accident investigations?
Genuinely curious to see how addressing reasonable concerns in these areas can be done.
Why isn't allowing people to sue when they get hurt and general bad PR around safety enough? Did you see what happened to Boeing's stock price after those 737 crashes?
I’d counter that with the Equifax breach, where their stock price rose when it became clear they weren’t being fined into oblivion. Suing is also generally only a realistic option if you have money for a lawyer.
Right. We have a precedent for how to have a ridiculously safe transportation system: accidents are investigated by the NTSB, and every accident is treated as an opportunity to make sure that particular failure never happens again.
I agree. But Google has gone in that direction long ago: ads are now harder to distinguish from genuine search results. In many cases, the organic results are buried so deep that they don’t even appear in the first visible section of the page anymore.
Google could also have allowed invisible pay-for-placement without marking it as an ad. Presumably they didn't do that because undermining the perceived trustworthiness of their search results would have been a net loss. I wonder if chat will go in that same direction or not.
Pretty sure it's illegal to present an advertisement and not label it as such in some form.
But as with everything, as new technologies emerge, you can devise legal loopholes that don't totally apply to you and probably need regulation before it's decided that "yeah, actually, that does apply to me".
1. https://diataxis.fr/
2. https://diataxis.fr/compass/