From the web site: “ container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.”
I remember in 1997 when I could boot Linux off a 1.44MB floppy and get a fully functioning Linux environment, network support included, in a blitz. If 130MB is considered “lean”, what happened to our Unix principles of minimalism and clean design?
They bumped up against modern expectations of plug-and-play. You'll see that it's the kernel which makes up most of it; the rest is quite small. If you compile a kernel for your own hardware, you can slim that down by a _huge_ amount. I've managed to go from around 130MB to 25MB on one machine.
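As a rough sketch of that workflow (assuming a recent kernel source tree; `tinyconfig` has been a real make target since kernel 3.17), you start from the smallest possible config and re-enable only what your hardware needs:

```shell
# Run inside a kernel source tree. Illustrative only; the exact
# options you re-enable depend entirely on your machine.
make tinyconfig                 # smallest possible starting config
make menuconfig                 # add back just the drivers you need
make -j"$(nproc)" bzImage
ls -lh arch/x86/boot/bzImage    # compare against your distro's kernel
```

`make localmodconfig` is another useful starting point: it keeps only the modules currently loaded on the running system.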
There’s a lot of neat stuff you can do if you build your own kernel.
On my old laptop I had a small EFI partition with a 30MB “linux.efi” kernel. The included initramfs had busybox, wpa_supplicant, elinks, and gcc5. The idea was that I could switch OSes entirely without making boot disks or worrying about an unbootable computer. You can statically link all the modules and firmware too, which is convenient IMHO.
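For reference, the relevant kernel options for this kind of self-contained image look roughly like this (these are real Kconfig symbols; the firmware filename is just an example):

```
CONFIG_EFI_STUB=y                            # kernel bootable directly by UEFI
CONFIG_INITRAMFS_SOURCE="initramfs.list"     # embed the initramfs in the image
CONFIG_EXTRA_FIRMWARE="rtl_nic/rtl8168g-2.fw" # build firmware blobs into the kernel
CONFIG_EXTRA_FIRMWARE_DIR="firmware"         # where those blobs live at build time
```

With `CONFIG_EFI_STUB` and an embedded initramfs, the single `linux.efi` file is all the firmware needs to find.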
OpenWrt still fits on routers with 4MB of ROM space because of this approach. The x86 flavor, for example, doesn't ship keyboard and mouse drivers by default.
It allows me to boot my kernel and initramfs directly from the SPI flash chip that stores my laptop's BIOS (coreboot), which is very useful.
My goal is to decrypt my disk from code stored on the motherboard (and physically write-protected) but I'm not there yet because fitting everything on 7.6MB is not easy. I may try using a bigger chip, or use the SPI-kernel to check the signatures of decryption code stored on the HDD.
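The signature-checking idea can be sketched like this (everything here is illustrative and not from the actual setup; a real version would use proper signatures rather than a bare hash, and the pinned value would be baked into the write-protected SPI image, not computed at boot):

```shell
#!/bin/sh
# Sketch: gate execution of an on-disk payload on a pinned SHA-256 hash.
payload=/tmp/unlock-demo.sh                      # hypothetical on-HDD payload
printf 'echo unlocked\n' > "$payload"

# Pretend this value was baked into the ROM image at build time:
pinned=$(sha256sum "$payload" | cut -d' ' -f1)

# At boot, hash what is actually on the disk and compare:
actual=$(sha256sum "$payload" | cut -d' ' -f1)
if [ "$actual" = "$pinned" ]; then
    result=$(sh "$payload")                      # run only if it matches
else
    result="refused"
fi
echo "$result"
```

This keeps only the verifier in the 7.6MB flash; the bulkier decryption code can live on the HDD, since tampering with it would break the hash check.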
Why not just go the more conventional route of a small stage-0 non-Linux loader stored in ROM which can validate an entire kernel stored on disk? If your goal is solely a trusted boot chain, it makes more sense to keep the trust root (initial bootloader and verification) as small and as auditable as possible, right? Even the Linux kernel seems like a pretty big attack surface to keep in ROM.
> I remember in 1997 when I could boot Linux off a 1.44MB floppy
I installed Slackware in the early '90s. The kernel was on one floppy, then the barest rootfs was on a 2nd and 3rd floppy. I believe the whole install spanned thirteen 1.44MB floppies. That's not so different in overall size from, say, present-day OpenWrt, which can still fit into 4MB of ROM if needed.
Ah yes, Slackware disk sets. I actually got the installer on a CD-ROM from Walnut Creek but had to resort to floppies because Slack didn’t recognise my optical drive (in those days it was attached via an ISA-mounted Sound Blaster card, not the motherboard itself!).
The distro I was referring to above was LOAF (and tomsrtbt), Linux On A Floppy.
Let's not forget we now have "native apps" which are actually web browsers bundled with a full Node.js instance and a local database, eating up several hundred megabytes for seemingly trivial tasks.
Did you ever use the QNX demo disk? A bootable Unix-like OS with a GUI on a single 1.44MB floppy. I agree it's unrealistic these days, but it was an impressive statement piece.
I remember that even in 1997, because of space constraints, you needed one boot disk containing the kernel, somewhat tailored to your system (SCSI/non-SCSI, networking), plus a root disk, and the running system was pretty minimal.
I'm running a normal installation; the largest package (nearly 500MB) is linux-vanilla. The included drivers are what's hogging most of the space. This goes back to the plug-and-play point someone else mentioned: a lot of devices are supported.
I use it in containers (LXC through Proxmox VE), and my base install with a "functioning Linux environment even with network support in a blitz" comes to around 8MB; since the kernel comes from the host, I simply save on that end. Works very well.
Depends on the distro. Changing compilers or compiler flags and any dependent base libraries has a big impact on kernel size, and distros can tune kernel configs for size.
Fitting Linux on a floppy back in the day was about 80% tuning the kernel, then playing with BusyBox size, and finally things like compression or even repacking objects to align better. (It also helped if you had a floppy that could cheat its way up to 1.68MB, a little-known hack that traded reliability for space.)