A consequence of universal healthcare that people don't talk about much is that it turns unhealthy citizens from an individual cost into more of a collective one. So it makes sense that countries with universal healthcare regulate in favor of their citizens as opposed to their food industry, because they're paying for the consequences more directly.
Not that this affects the political calculus (where perception may as well be reality), but the cost burden specific to universal healthcare is actually opposite this intuition.
Things like obesity, smoking, and alcoholism all kill you before you can get too old. Healthy citizens end up using far more of the far more expensive end-of-life care, to the point where it outweighs the extra healthcare the unhealthy citizens use in their youth.
This (French) study [0], published in 2023 on data from 2019, calculates that the costs of legal drugs such as tobacco and alcohol, including the higher healthcare spending during the lives of smokers and drinkers, are still higher than the offsetting gains: tax revenue, unspent pensions, and the healthcare costs of the extra life-years a healthy person would have lived.
This is both an argument in favor of universal healthcare, and my favorite argument for why the US should not implement it without first changing a whole array of perverse incentives.
Indeed, I would caution pretty much everyone else in the world (except maybe Asians, but even then) to be circumspect when taunting Americans about their obesity rates. Germany, to use an example from this discussion, has seen its obesity rate climb steadily for decades. This doesn't seem like a US-specific problem, or one that Europe has a good answer for.
Europe is just lagging behind. There's not that much difference between the US and Europe. Europe just has more history and culture which makes the changes less extreme.
I would blame how Austria, a very small country, is organized into 9 provinces that actually have their own budget and can pass their own laws on some topics.
Rail service is funded at the federal level, so there's less arguing about who pays for what. Bus service, however, is managed by regional transport associations funded by the provinces. This creates disincentives for cross-province bus routes because no single province wants to pay more than its 'fair' share for a service that primarily benefits voters in another province.
Similar dynamics play out at the city/province level. Take Linz, the provincial capital of Upper Austria: the city has had a social democratic (SPÖ) mayor continuously since 1945, while the province has had a conservative (ÖVP) governor for exactly the same period of 80 years. This disincentivizes the province government from helping to fund public transport within or into the city, because it's a win for social democratic city voters, while the more conservative rural voters would rather take the car anyway since they often can't do the whole trip by public transport.
Arguably the reason for the excellent public transport in the city of Vienna is that they are also their own province. Their mayor/governor, who has been a social democrat as well for the last 80 years, always controls both levels of funding.
I think as AI gets smarter, defenders should start assembling systems the way NixOS does.
Defenders should not have to engage in a costly and error-prone search for the truth about what's actually deployed.
Systems should be composed from building blocks, the security of which can be audited largely independently, verifiably linking all of the source code, patches etc to some form of hardware attestation of the running system.
I think having an accurate, auditable and updatable description of systems in the field like that would be a significant and necessary improvement for defenders.
I'm working on automating software packaging with Nix as one missing piece of the puzzle to make that approach more accessible:
https://github.com/mschwaig/vibenix
(I'm also looking for ways to get paid for working on that puzzle.)
Nix makes everything else so hard that I've seen problems with production configuration persist well beyond when they should have been fixed, because slow evaluations made the cycle time for figuring out the fix just too long.
In fact, figuring out what any given Nix config is actually doing is just about impossible, and then you've still got to work out what the config it deploys actually does.
Yes, the cycle times are bad and some ecosystems and tasks are a real pain still.
I also agree with you when it comes to the task of auditing every line of Nix code that factors into a given system. Nix doesn't really make things easier there.
The benefit I'm seeing really comes from composition making it easier to share and direct auditing effort.
All of the tricky code that's hard to audit should be relied on and audited by lots of people, while as a result the actual recipe to put together some specific package or service should be easier to audit.
Additionally, I think looking at diffs that represent changes to the system vs reasoning about the effects of changes made through imperative commands that can affect arbitrary parts of the system has similar efficiency gains.
You are describing a proper dependency/code hierarchy.
The merging of attribute sets/modules into a full NixosConfiguration makes this easy. You have one company- or product-wide module with a bunch of stuff in it, and many specialized modules with small individual settings, e.g. per customer.
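A minimal sketch of that layering (all attribute names and settings here are hypothetical):

```nix
# flake.nix excerpt: the module system merges a shared baseline
# with a small per-customer module into one NixosConfiguration.
nixosConfigurations.acme = nixpkgs.lib.nixosSystem {
  system = "x86_64-linux";
  modules = [
    # company/product-wide module carrying the bulk of the settings
    ({ ... }: {
      services.openssh.enable = true;
      networking.firewall.enable = true;
    })
    # specialized module: small individual settings for one customer
    ({ ... }: {
      networking.hostName = "acme-prod";
    })
  ];
};
```

The module system deep-merges these attribute sets, so the customer module only needs to state what differs from the baseline.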
Sure, building a complete binary/service/container/NixOS system can still take plenty of time, but if that is your only target to test with, you'd have that cost with any naive build system. And Nix isn't one of them.
I think that's the real issue here: modularizing your software/systems and testing modules as independently as possible. You could write test Nix modules with a bunch of assertions and have them evaluated at build time. You could build a foundation service and hot-plug different configurations/data, built with Nix, into it for testing. You could make test results Nix derivations so they don't get rerun when nothing changed.
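For example, an assertion-only test module could look roughly like this (the option names are just illustrative); the checks fail at evaluation time, before anything is built or deployed:

```nix
# Hypothetical NixOS test module: invariants checked during evaluation.
{ config, lib, ... }: {
  assertions = [
    {
      assertion = config.networking.firewall.enable;
      message = "the firewall must stay enabled on deployed machines";
    }
    {
      # implication: if sshd is enabled, root login must be off
      assertion = !config.services.openssh.enable
        || config.services.openssh.settings.PermitRootLogin == "no";
      message = "sshd must not permit root login";
    }
  ];
}
```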
Nix is slow, yes. But it only comes around and bites you if you don't structure your code to tame all that redundant work. Consider how slow e.g. make is, and how much of a non-issue that is for make in practice.
I think for actual Nix adoption, focusing on the cycle time first would bring the biggest benefit by far, because then everything else speeds up. It's a bit like the philosophy behind Go: if the cycle is quick, you iterate faster, keep focus, and end up more productive. This is not quite the same, but it is analogous.
That said, I fully agree with your basic tenet about how systems should be composed. First make it work, but make deployment conditional on verified security and only then start focusing on performance. That's the right order and right now we do things backward, we focus on the happy and performant path and security is - at best - an afterthought.
If you make a conventional AI agent do packaging and configuration tasks, it has to do one imperative step after the other. While it can forget, it can't really undo the effects of what it already did.
If you purpose-build these tools to work with Nix, the big-picture view of how these functional units of composition can affect each other is much more constrained.
At the same time within one unit of composition, you can iterate over a whole imperative multi-step process in one go, because you're always rerunning the whole step in a fresh sandbox.
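As a sketch, the unit of composition the agent iterates on is something like a single derivation; every retry replays the whole imperative script in a fresh sandbox, so failed attempts leave no state behind (the package and its build steps here are made up):

```nix
pkgs.stdenv.mkDerivation {
  pname = "example-tool";
  version = "0.1.0";
  src = ./.;
  # the entire multi-step imperative build reruns from scratch
  # in a clean sandbox on every rebuild
  buildPhase = ''
    make
  '';
  installPhase = ''
    mkdir -p $out/bin
    cp example-tool $out/bin/
  '';
}
```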
LLMs and Nix work together really well in that way.
From a security perspective I am far more worried about AI getting cheaper than smarter. Seems like a tool that will be used to make attacking any possible surface more efficient at scale.
Sure, but we can also use AI for cheap automated "red team" penetration tests. There are already several startups building those products. I don't think either side will gain a major advantage.
This could also make things worse. With more machines being identical, the same security hole reliably shows up everywhere (albeit not necessarily at the same time). Sometimes heterogeneity impedes attackers.
I mentioned another alternative to adding flake-specific metadata to data structures that are transferred over the network, as part of the signed traces or otherwise, in a comment on that PR Eelco linked.
It's keeping flake-specific data locally, to guarantee that it matches how the user ended up with the data, not how the builder produced it. I think otherwise from the user POV such data could again look misleading.
Good point. It is misleading if different flakes end up producing the same derivation, and we don't want to re-sign our build trace entry to account for that (which would amplify reads into writes). A separate indirection for this eval-to-store layer of accounting sounds good.
I work with Nix a lot, and I had never seen `__findFile`.
It's kind of crazy how much there is to know about Nix. I wish there was a bit less surface area to the language. On the other hand it's really interesting how much specialized knowledge there is in the community around various topics. Some people package things, some people write library code, some write glue code that wraps other build tools, some write VM-based tests, some write generators that transform store paths into things like container images, some just manage their dot files, some are experts for how we deal with some specific proprietary ecosystem like cuda, some write infra code or tools around the Nix code, some work on one of the Nix implementations.
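For anyone else who hadn't seen it: `__findFile` is what angle-bracket paths desugar to, so shadowing it changes how they resolve. A toy sketch (the `./vendor` directory and `mylib` name are invented):

```nix
# <nixpkgs> is roughly sugar for: __findFile __nixPath "nixpkgs"
let
  # resolve all angle-bracket paths inside a local ./vendor directory
  __findFile = searchPath: name: ./vendor + "/${name}";
in
  import <mylib>   # now imports ./vendor/mylib
```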
As someone who runs NixOS on my home machine, I feel this. I have an okay handle on day-to-day operations on my machine, but a lot of it still feels like magic to me, in both a good and a scary sense. I'm still looking for great resources to go deep on Nix and really grok it.
As a current PhD student (working on Nix stuff) let me take this opportunity to congratulate you on your successful PhD defense and publicly thank you for your writing. That you write and what you write are inspiring.
It saves you from escaping stuff inside of multiline-strings by using meaningful whitespace.
What I did not like so much about CCL is that it leaves a bunch of stuff underspecified.
You can make lists and comments with it, but YOU have to decide how.
That whole post reads like a parody: making fun of other config formats, only to go full-blown category theory to explain that one can merge config files. It's also weird to use key-value maps as the basic type instead of lists, and then have to build lists out of maps rather than the other way around. And who wants to configure the comment symbol? Schema validation isn't in either, but hey, we can define data types (which are app-specific anyway?).
I wrote a paper about how I think trust should work for software dependencies.
It very much builds on the hash-based cache lookup mechanism this paper calls constructive traces (in contrast to what they call deep constructive traces) to eliminate transitive trust relationships.
I want to try to become an independent researcher, when the funding for my PhD position runs out.
My idea for financing this is finding a few companies who pay a retainer fee to not only get direct easy access to my expertise when they need it, but are also interested in the results of the kind of work I'm doing when they don't need anything specific from me.
I work on supply chain security with systems like Nix, and recently put up a first version of a website: https://groundry.org/