The FreeBSD Foundation is based in Boulder, Colorado, USA.
OpenBSD is based in Calgary, Alberta, Canada.
NetBSD is a non-profit based out of Delaware, USA.
I am not sure exactly what you mean by "US-focused" though. I do not think the US government has much direct influence in practice. Both governance and engineering contributions in BSD are highly distributed internationally.
That said, FreeBSD in particular has quite a lot of corporate contribution. Netflix is a heavy user of and contributor to FreeBSD for example. And the recent $750,000 laptop push in FreeBSD is being driven by Quantum Leap Research out of Virginia.
The fact that the BSD systems have less corporate reliance does not necessarily offer more protection though. There is less corporate "control" simply because the BSD systems are less important economically.
You could fork Linux anytime you like and your fork would then have as little corporate control as NetBSD. And just like NetBSD, not taking US corporate contributions would mean less engineering investment overall and potentially having to do more yourself.
I mean, it would probably be easier for the EU or China to fork Linux than it would be for them to migrate to OpenBSD if they wanted independence from US exposure.
> You can, but the firmware that is needed to run it is American
This thinking is part of the reason for the momentum behind RISC-V and LoongArch.
RISC-V is a lot like Linux in that it benefits from international cooperation and innovation while offering the ability to seize control if needed.
But you are correct that even an open ISA does not protect you from a proprietary hardware implementation at the chip or firmware level that you still do not control. This requires additional open standards.
Bigger picture, it means "domestic" chip design and fabrication capabilities. The world is just starting to wrap its mind around this. But again, RISC-V is really helping here. There are emerging RISC-V chip capabilities in Europe and even in places like India for example. It is easy to laugh off these efforts as non-competitive. But not only will many of them find niches where they are economically feasible, they offer an important backstop to geopolitical risk and the flexibility of at least enough domestic capability to keep the lights on if needed. Building and rolling out a RISC-V ecosystem will take years or decades. But once there, it can be pivoted to or maintained on any RISC-V chip. As long as you have the ability to produce some kind of RISC-V chip, this ecosystem can never really be taken away from you.
And RISC-V offers the same kind of international collaboration that allows both pooling of efforts and protection from reliance on any one actor or region that could become a political threat.
RISC-V understands its role in this regard. It too was an "American" technology, but RISC-V International was set up in Switzerland for a reason.
I think this is objectively true. The Linux Foundation is also US based. We saw this when Russian contributors were banned from the kernel to comply with US sanctions.
The big difference of course is that relying on Linux does not have to mean relying on US corporations. At the level of a nation-state, and certainly at the level of a larger political collective like the EU, control can always be taken back if political interests diverge or if risks mount. Linux could be forked and maintained out of Europe, Asia, or elsewhere if needed. And technology could even continue to be pulled from the US version if desired.
Above, I mean the kernel. But the "distro" level offers another level of control. A distro maintained outside of the US offers a lot of local control and isolation from the risks of US control. The kernel used in this distro does not have to be fully forked to be audited, to remove anything concerning, or to add in whatever is desired. And the same is true of all other software included in the distro.
While maintaining a distro is a lot of work, it can be done at the scale of an individual or a small team. It can be done with a trivial number of resources at the nation-state level. In some ways, it is crazy that more countries do not have their own distro, even if it starts as little more than a "spin" of some mainstream distro. As political tensions mount, this may become a more normal "national security" step to take. Being ready to pivot and isolate from the US is more important than actually doing it. If all your government and military infrastructure is based on a distro you control, you can then pivot quickly if you need to. And there are customization and standardization benefits of having a regionally focused distro beyond national security.
Distros cannot realistically work without hardware support. Hardware is designed in America. The licensing for the software to use the hardware is controlled by the United States.
I mean, I can write a kernel right now with all the computer systems theory implemented, but without the architecture specs, the firmware, etc., this is completely useless.
Licensing can be ignored. Specs can be stolen. You think China cares about enforcing American copyright in the slightest? One way other countries can retaliate against American tariffs and invasions is to start ignoring American copyright and IP laws.
Many corporations are free-riding on the Open Source they use. As most of us are, honestly.
But I think people cynically underestimate the value of the contributions corporations do make and fail to understand just how much of the software we enjoy is only possible due to corporate funding.
Igalia may be a good example, as most of us are not even familiar with them. But the Linux distro that I use comes from them, the Servo browser is being driven by them, and many other projects benefit from their contributions.
Ok, so you agree with him except where he says “in a VM” because you say you can also do it “in a container”.
Of course, you both leave out that you could do it “on real hardware”.
But none of this matters. The real point is that you have to compile on an old distro. If he left out “in a VM”, you would have had nothing to correct.
I'm not disagreeing that glibc symbol versioning could be better. I raised it because this is probably one of the few valid use cases for containers where they would have a large advantage over a heavyweight VM.
But it's like complaining that you might need a VM or container to compile your software for Win16 or Win32s. Nobody is using those anymore. Nor really old Linux distributions. And if they do, they're not really going to complain about having to use a VM or container.
As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task. Go figure.
> But they'll happily take a paycheck for writing a bunch of shit Go/Ruby/PHP code that runs on Linux 24/7 without downtime - not because of the quality of their code, but due to the reliability of the platform at _that_ particular task.
But does the lack of a stable ABI have any (negative) effect on the reliability of the platform?
Only for people who want to use it as a desktop replacement for Windows or macOS, I guess? There are no end of people complaining they can't get their wifi or sound card or trackpad working on (insert-obscure-Linux-distribution-here).
Like many others, I have Linux servers with 2000-3000 days of uptime. So I'm going to say no, it doesn't, not really.
> As a C/C++ programmer, the thing I notice is ... the people who complain about this most loudly are the web dev crowd who don't speak C/C++, when some ancient game doesn't work on their obscure Arch/Gentoo/Ubuntu distribution and they don't know how to fix it. Boo hoo.
You must really be behind the times. Arch and Gentoo users wouldn't complain because an old game doesn't run. In fact, the exact opposite would happen. It's not implausible for an Arch or Gentoo user to end up compiling their code against a five-hour-old release of glibc and thereby maximize glibc incompatibility with every other distribution.
Glibc strives for backwards (but not forwards) compatibility, so barring exceptions (which are extremely rare, but nobody is perfect), using a newer glibc than what something was built for does not cause any issues; only using an older glibc would.
I mean, Chimera Linux is pretty LLVM native.