Nanos – A Unikernel (nanos.org)
224 points by Alifatisk on March 13, 2024 | hide | past | favorite | 130 comments


> Does this work under Kubernetes?

> Yes, but we caution users to evaluate if you really need kubernetes. Chances are you don't and you will experience severe performance and security problems if you choose to run under k8s. If you still find you must here are instructions for running Nanos under k8s.

https://nanos.org/faq

Pretty interesting FAQ! I hope to see more HN discussion about this page!


> security problems if you choose to run under k8s

I'm here wondering, what security problems could this have? https://docs.ops.city/ops/k8s doesn't elaborate and just says

> Security Warning

> Running unikernels under kubernetes diminishes some of their security benefits.


I can speak to this. Containers, and by extension k8s, break a well-known security boundary that has existed for a very long time: whether you are using a real (hardware) server or a virtual machine in the cloud, if you pop that instance/server, generally speaking you only have access to that server. Sure, you might find a db config with connection details if you landed on, say, a web app host, but in general you still have to work to start popping the next N servers.

That's not the case when you are running in k8s and the last container breakout was just announced ~1 month ago: https://github.com/opencontainers/runc/security/advisories/G... .

At the end of the day it is simply not a security boundary. It can solve other problems but not security ones.


pure speculation: I imagine it's just "Well, now there will be Linux involved" where previously you'd just be relying on Nanos.


> Yes, but we caution users to evaluate if you really need kubernetes. Chances are you don't

That's an odd, perhaps presumptuous, claim to make considering this isn't even in the same orchestration/scheduling space as Kubernetes. Especially without mentioning alternatives.

From later in the FAQ:

> The complexity that comes with kubernetes is that it requires you to re-invent all the layers of a cloud platform that already exists. If you run a vanilla linux instance on AWS you get out of the box: networking, storage, security, routing, etc all for free.

They might not like the abstractions, but to say you have to "re-invent" all those layers is reaching.


I believe kubernetes is a little misunderstood: kubernetes is not software, and is not an abstraction on top of the cloud. It's an attempt at creating a standard-ish API for the cloud.

It's currently being implemented as an abstraction on top of existing systems, because cloud providers have to leverage what they have and because its API doesn't yet cover all the functionality. But gradually it has become a managed service that can provision databases, etc., and in another 5-10 years most cloud interactions will happen through it.


Mandatory reference to MirageOS, a similar unikernel built on top of Xen with a focus on security, written in OCaml:

https://mirage.io/

I hope to encounter some unikernels in production sometime.


I guess the main issue is that with Kubernetes and chiseled containers we are quite close to the same idea, even if a full kernel might be underneath, and thus it is quite hard to gain adoption.


This is the paradox of software as it is today: we are good at adding crap on top, we are bad at replacing what's underneath.

Now, in cloud native we have decomposed applications - logging is now separate, so is storage, etc.

It should be possible to change the stack underneath without anyone noticing, or at least with minimal disruption to operations.


I joke that in the end Andrew Tanenbaum got the upper hand over Linus, because the supposed performance benefit of a monolithic kernel over microkernels is meaningless in a Kubernetes cluster, or distributions like SUSE Linux Enterprise Micro and Fedora CoreOS.


I certainly don't speak for the nanos project, but in my mind this was a key reason for even doing this work: that we could write an adaptation layer that would hoist the world as we know it off the sandpile and put it someplace else where things made more sense.

It's a shipping container, an adaptation jig, an invitation to rethink the world underneath without insisting that anyone rewrite anything.


In certain areas, the additional isolation that virtualization brings might be worth it, but in general I agree with this


Plus the fact that mirage apps have to be written in OCaml.


A good thing as far as security and reliability are concerned, if you ask me. Not so much if you are aiming at real-time guarantees, of course.


Yeah, but MirageOS isn't the only unikernel in town, there are others, even with partial POSIX support.


Done is the enemy of better.


devs: there's too much complexity! security is impossible!

also devs: let's add just one more layer on top of linux -> docker -> k8s

godspeed to the nanos team for trying to simplify the stack


This seems designed to run inside virtual machines, so there's a similar flavour. But I guess if you are running containers inside VMs you could subtract one layer by reimplementing your application as an OS component.


A unikernel application is a VM. It's like a container, but without needing docker + linux to run on.

I was running a web application written in Ruby, distributed in a container, running in Docker on Linux in a VM. That could become a unikernel running directly instead of the VM. Saves quite some layers, I'd say :)


What do you mean by "directly"? I'm not familiar with Nanos internals, but after skimming their FAQ[0] it seems that Nanos is a kind of VM that can't run on bare metal and still requires a hypervisor (presumably Linux, unless your CTO plays a lot of golf with MS salespeople):

> Nanos is a single process operating system designed to run as a virtual machine and has no support to run on hardware.

[0]: https://nanos.org/faq


It seems you are right. The OPS documentation mentions that you can deploy it on bare metal, but this recent comment [0] contradicts that.

    right now we don't have any plans to support bare metal
    installs like this as that would imply a bunch of other
    mgmt related tooling that would not be present 
    (eg: start/stop the server, configure networking, 
    deploy a new one, access rights, etc.) it also breaks 
    the assumptions we have that it is only being deployed 
    as a vm which means having to support a ton of random 
    hardware drivers, nanos is intended to always be ran on
    top of a hypervisor of some kind - whether it's public
    cloud or something under your own control 
    (eg: proxmox/vsphere/etc.)


It seems like they make some distinction between true bare metal and somewhat bare metal, which is highly confusing.

___

[0]: https://github.com/nanovms/ops/issues/1522


It's bare metal in the sense that it's self-bootstrapping but the "metal" it supports is only a paravirtualized system. This is what they mean when they say that they don't want to support tons of random hardware drivers: they've written support for KVM paravirt devices (which are nearly universally available on VMs), and that allows the kernel to run on most hosting providers.


So (if I understand correctly):

It minimizes the software stack (and with that, the attack surface) that the application sits on, inside a VM.

It does not (nor is it expected to) help to minimize said application.

And it does not minimize the software stack that runs the VM.


Exactly. It's an adapter that provides a short path between the application's expectations and what the VM provides.


Unikernels and virtualization are orthogonal; you can run, e.g., on-prem appliances with unikernels on bare iron. NetApp ONTAP sounds like it is/was like this at some point.


I like that I can get more performance, but I don't necessarily want an anemic environment for when things go wrong. Production machines are often pretty barebones to begin with, making investigating issues often feel like working through a keyhole.


UKL (https://github.com/unikernelLinux/) is a sort of best of both worlds, because it's still ordinary Linux so you can run a userspace.


I'd be curious how much performance this might bring vs, say, a Linux kernel with an application loaded direct from initramfs. It's not terribly complicated to build a single-purpose initramfs.

There's a lot that an operating system brings to the table, that casual users may not be aware of. They tend to have a bunch in them not because "why not" but because it's providing value.


When I look at the documentation, apart from syslog there really isn't much. Administration, deployment and testing must be hell.


It's strange to me that Alpine Linux won over Unikernels. I guess it makes sense, once you add logging, observability, printf-debugging (just kidding) to your unikernel, you basically have a full OS anyway.


It's just familiarity and path-dependence. People expect everything to work like it does on their MacBook and if there's a bit of overhead to doing that, oh well.


> logging, observability, printf-debugging (just kidding) to your unikernel, you basically have a full OS anyway.

I mean -- isn't that all the good stuff?

Like -- I boot this thing and I don't really have a filesystem in the way we know it. First of all, that's confusing (I say this as a ZFS devotee). Second, I have no idea why this is acting crazy in production, or in testing, compared to the Linux version? Etc., etc.


> I don't really have a filesystem in the way we know it.

This is a rather common misconception. Nanos (and many other unikernel projects) have filesystems. Most of the applications we target are webapp servers, databases, etc. They all want to, at the very least, write to a tmp file and more commonly want to load lots of files. What they don't have is an interactive userland.


> I mean -- isn't that all the good stuff?

Logging and observability are valuable. But running a full multiuser OS with kernel and userland for your one process, adding extra context switches and what have you to everything you do, just for the rare occasions where you log in and run a few commands, seems crazy to me. As long as it can output logs/traces/etc. to a collector (which is what I'd want to do anyway, no-one wants to have to SSH to each separate instance to look at log files on the local filesystem) and there's a way to attach a debugger (e.g. the Java style where your debugger connects to it on a given port), I don't see any advantage from having e.g. a filesystem per se. Likewise being able to run it locally the same way as production is important, but that doesn't have to mean running it as a Linux process - people are happy running Docker images for local dev, running a unikernel in a VM isn't a lot different from that.


Exactly the same way as Kubernetes has a filesystem, and you log just the same way.


Might be because not having a shell inside the "container" severely limits the use of your previously-acquired observability and debugging skills.


I've typically used full Ubuntu or some other base image... it's tremendously useful to do apt update / apt install some other tool.


Same, I do that and then just wipe/reset the container when I've fixed in my app whatever was wrong.


It's not in the same category, Alpine is just a regular Linux distribution on the userspace side, and you have a traditional big generic kernel with your app running in userspace.


Well docker based stuff seems to have a lot more of an ecosystem on the orchestration side of things.

If I want to have a bunch of instances scaled up and wired together network wise and defined in a nice yaml file, kubernetes makes this easy and replicable and I just have to push a container image somewhere like GHCR and have it all rolled out with CI.

You can deploy almost the same system on top of the managed kubernetes offerings from any of the big cloud providers, so other than working out a few differences generally your config will work on anything.

If I want to do this with unikernels I'm lost, would I do it with AWS services like EC2 and auto scaling groups or something? How do I get it all wired together and configured? Perhaps terraform but I'm not sure.

It's all about the maturity of the glue and the architectures, not to mention training and familiarity. Does (mystery unikernels solution) have the equivalent of custom controllers somewhere in that stack?

K8s provides an abstraction layer over resources and lets you define your system holistically.

If this exists in a mature and flexible state for unikernels I'm all ears.

What real advantage is there to using unikernels over alpine linux + k8s (or whatever) orchestration?


> If I want to do this with unikernels I'm lost, would I do it with AWS services like EC2 and auto scaling groups or something?

This is a key point that many, many people struggle with until they actually push their unikernels out to the cloud. It is one thing to run ops locally, but try doing an 'ops image create -t aws' followed by an 'ops instance create -t aws' and then you'll understand how we manage to push a lot of the scheduling work back onto the cloud provider itself.


Yeah I've popped my image onto AWS and got it running in an instance, I figure the next port of call is the terraform part of the docs to see how you'd put something out in production.

It would be cool to write some sort of book like, ops for k8s people, where it takes a lot of the common concepts in k8s and explains how you'd do them with this system.

ops seems like a decent CLI to get things done, but as far as I can see it's not really an orchestration system in the same way as k8s? Perhaps I misunderstood.


You are correct that ops does not provide any sort of orchestration framework today, but it's also not necessarily needed in a lot of cases because the end artifacts are vms not containers.

There is a terraform provider that we ended up writing for one company but only because they were a (large) hashicorp shop and nothing went to prod unless it was in tf: https://docs.ops.city/ops/terraform This doesn't really do much though. It's only there to check a box and fit into an existing paradigm.

A lot of the orchestration bits that k8s provides exist because when it was initially adopted there wasn't a great way to attach networks/storage/etc. to containers like people had on the clouds with VMs. OPS just re-uses the same primitives you'll find on every single cloud out there. If you need more advanced functionality like auto-scaling, that can also utilize the cloud's ASGs.


Yeah I guess a lot of the orchestration can be effectively handled by terraform, I get what you mean regarding cloud primitives. This is my next step to seeing how this can work in a production environment. Instead of an ingress you can use ELB, for instance.

I guess this means your environment is very "native" to a specific cloud, however, and not as easily transferable.

One thing I did like about k8s is that it's easy to self-host with microk8s but tbh, I think in production you'd be using a cloud provider's managed k8s (e.g. AKS) anyway.

Looking at how to attach/configure a ElastiCache, RDS, or some EFS to a project also.

What's the best way to runtime-configure an image when you create it as an instance? I couldn't see any options on ops instance create for environment variables or anything like that, but this also seems weird - i.e., you wouldn't give a VM environment variables, but with a normal EC2 instance there's an agent that handles this sort of thing. Would you have to bake the configuration into the image? Also, how does one output the ELF for the image? Is it just the file [image name] in ~/.ops/images?

Or is it the case that terraform should build a correctly pre-configured image as part of the apply process?

Always good to have other options, of course! I do appreciate the minimalism. This is exciting :)

The terraform docs example uses GCP, perhaps if I figure out how to do it on AWS I could PR the docs with an example.

At that stage where my questions lead to more questions.



I'm hoping @jart does something like this with Redbean[1], just because it's a nicely encapsulated Lua + SQLite framework in itself.

[1]: https://news.ycombinator.com/item?id=31764521


Ian has been at this for years. He's incredibly smart and persistent. It's not gained a lot of traction from my take on it, but I keep a close eye on it because ultimately I think he has the right idea. Time will tell.


Forgot to mention this, but https://nanos.org is also related to https://nanovms.com (to deploy unikernels) and ops.city (which handles the package distributions), so it's like a whole ecosystem.

I wonder why Alpine linux won over this though?


Some things in [1] might provide answers. To be fair, from a cursory look at [2], many of the concerns from [1] appear to be addressed--or at least works in progress--in [2]. I also enjoyed [3] on this topic.

[1] https://bcantrill.dtrace.org/2016/01/22/unikernels-are-unfit... [2] https://docs.ops.city/ops [3] https://oxide.computer/podcasts/oxide-and-friends/1184552


Because Linux containers solve most of the problems to a satisfying degree, and are much simpler to develop.


So I had to look this up, but Alpine has been under development since 2005. Nanos has only been in development since 2018.


Nanos yes, but unikernels no. I think it started in 2013? But your point still stands.


After some brief research, I haven't found any notable emphasis on formal verification in any well-known unikernel (imperfectly defined by online searching, LLM exploration, and focusing on languages like Haskell and Rust that tend to be more interested in correctness). Did I miss anything? Even the Wikipedia pages lack any mention of verification.

The closest I've found was a Reddit article titled "Stardust Oxide: I wrote a unikernel in Rust for my bachelors dissertation" with this comment: "My impression, after investigating Singularity for my own undergrad: No. They did bytecode verification against a manifest after codegen, in their own intermediate language that all installed programs had to use. For the Rust compiler I imagine you're thinking of doing certification against the module sources? That's not going to provide the same level of verification."


No idea if any others are closer to that ideal, but MirageOS unikernel (OCaml) offers nods in that direction:

"Possible to formally verify critical components" https://www.cs.williams.edu/~cs432/osco/01-whit.pdf

"As part of our mission to build robust and secure systems, we strive to support technology that roots its design in the field of formal methods" has a TCP stack manually derived from a formal model https://tarides.com/blog/2024-01-24-mirageos-designing-a-mor...


Thanks. / My interest is centered around AI safety. It seems clear that any learning system* that can improve its own code must be run in a provably secure virtual environment (at the very least).

* by learning system I include any system that learns from experience


I am a bit confused, there are three sites:

* https://nanos.org/ The core technology?

* https://nanovms.com/ A company providing services and offerings around said technology?

* https://ops.city/ The orchestration?

And I am not sure what "thing" I am using. Is there some disambiguation? I know OPS is the orchestration CLI, but I am confused at the difference between Nanos and NanoVMs. What should I call the section of my README that deals with this tech? Currently I've gone with Nanos/OPS, but I am confused.


I think NanoVMs is the "umbrella" for Nanos & OPS. NanoVMs runs the unikernel and offers products that help you manage them.


I really struggle with unikernels.

Great idea, and love the security context, but I feel like you lose so much.

Logging for example, is it all syslog? How do you manage them? What if you need them in splunk, how does that work?

I love the idea, but can’t figure out how to use them in practice.


We do encourage prod deployments to export logging that they care about over syslog (https://docs.ops.city/ops/klibs#syslog) or something else (https://docs.ops.city/ops/klibs#cloudwatch-logging) yes. Any non-trivial large deployment is going to be exporting to some other source anyways.
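
For anyone curious what that looks like in practice, the linked klibs docs describe enabling syslog export through the ops config. A sketch along these lines (the exact keys and the server address here are approximations of the linked docs, so double-check them there):

```json
{
  "RunConfig": {
    "Klibs": ["syslog"]
  },
  "ManifestPassthrough": {
    "syslog": {
      "server": "10.0.0.1"
    }
  }
}
```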


The same way as you do logging on Kubernetes.

Unlike Kubernetes, unikernel images are way slimmer and faster.


I experimented a little with NanoVMs. I wanted to deploy them in AWS ARM64 instances to test if it was possible. It took a bit of effort, but I think it's worth it.

There is something nice about them as your mental map is reduced to the minimum amount of components, your app - amalgamated with the unikernel - and the virtualization platform of your choice.

I need to test support for volumes, and if there is interest I might do a write-up about it.


Very stupid question:

How do I package my own applications?

Say I have a binary “helloctl” that I want to package, where do I read about how to do this?

Took a look at the docs, and it's neat with details about Nanos, but there's no info about how to use my own binaries, except to install ops and run the sample Node.js app.

I could of course install ops and try to look around for other pkg commands, but a basic "get started" should show me how to package my own stuff. Or maybe I am just too lazy :)



Is there anything that isn't go?


The ops-examples GitHub has many examples in many popular languages.


Yeh - we have a small language list here:

https://github.com/nanovms/ops-examples

If you're looking for a particular piece of software search on the repo first:

https://repo.ops.city/

If you don't find it and need help creating one ping me or open an issue in ops.


Would unikernels be feasible for online judge platforms (e.g. codeforces, leetcode)? I was thinking something along the lines of: spawning a unikernel for each submission -> running a single file program -> streaming the output back to a server, all in <3s and potentially with multiple submissions running in parallel. I'm fairly new to this, so I'm not sure if that would be overkill.


Yes, it would be overkill.

Online judge platforms handle a very small subset of problems. You can sandbox submissions with no network or filesystem access, and no syscalls except the few like read/write/select.


Yes, if your goal is to use unikernels. But if your goal is to pick the best solution for this application, there isn't an obvious advantage for unikernels.


Why wouldn’t you use a standard container based approach like Docker?


The last time I tried, it was pretty slow to fire up, especially when I tried to run multiple submissions at once. It's also even more overkill for my use case; I currently use isolate (https://github.com/ioi/isolate) which is just a wrapper around cgroups/namespaces, and it's been a lot faster.

Sidenote: I'm not really looking to replace it, I was just asking out of curiosity since this is my first time hearing of unikernels


> I currently use isolate (https://github.com/ioi/isolate) which is just a wrapper around cgroups/namespaces, and it's been a lot faster.

Yes. This is the fastest you can get.

If you want it safer, add PR_SET_SECCOMP _in addition_ to it, but that would be a custom solution.


> Our latest benchmarks show that Nanos serves static content almost twice as fast as Linux

What about compared to low-latency and real-time linux kernels?


"real-time" when it comes to kernels doesn't necessarily imply "fast" and Nanos only targets virtualized environments as well. RT is more for situations like "If I hit the brakes I want the car to stop". Nanos' sweet spot is more for cloud workloads.

Having said that with enough interest I think we could look at project such as https://projectacrn.org/ but it's not really a focus at the moment and would probably become a 'flavor'.

As for scheduling itself though, we recently added support for UMCG: https://nanovms.com/dev/tutorials/user-mode-concurrency-grou... .


RT linux kernels would probably have lower throughput.


With a chart showing 1.579019x the throughput, a more appropriate claim would be "60% faster than Linux".


Nanos's small size and Linux binary compatibility make it potentially a great candidate for:

  - pedagogy 
  - systems language research
  - CPU architecture research
  - embedded systems development where a linux process model is useful
  - OS research (assuming anyone still does that)
  - systems with high performance or low variance needs
  - HPC


Why do you think OS research is something many might not be doing? Genuine question.


I thought I was going to do OS research. There was a consensus at the time in the community, and at least one famous white paper that I can't find now (Pike? Rashid?), that explained it was dead. I kind of agree and kind of don't. I think we have more exploring to do on what the right structure is, but I'm not sure I'd call it research. I think we really missed an opportunity to lay down a new foundation with cloud services. Distribution and security are still two pretty massive holes, and people are looking at them.


The IRC link doesn't go anywhere except back to the homepage. Is the IRC channel hosted on libera/oftc/efnet?


We need to remove that. We did have a channel on freenode a while back but got rid of it.

Outside of gh discussions there is also https://forums.nanovms.com/. We made a decision a while ago to follow Zig's lead here and have no 'official' community space (https://github.com/ziglang/zig?tab=readme-ov-file#community) instead letting people form their own spaces.


Am I the only one that got excited to see an IRC link?


An 'official' community space is a blessing; I think Zig took the wrong direction here.

I can understand why project maintainers don't want to carry the burden of gardening a space, but Zig is not doing itself any favors here. Instead, I would suggest maintainers open a "vacancy" for an accessible and privacy-friendly space, and if it fits the criteria, bless it. Or just start one yourself and hand it over.

Why? From a quick glance, most of the Zig communities are on Discord. That means all folks who value privacy would not participate. And even if you are OK with losing out on that demographic, you still have a highly fractured information space. Although in the case of Discord, "information space" is too generous; "information black hole" is a more apt term, because none of it gets indexed by search engines. One cannot do without a bit of leadership to make a project successful.

... and props for your project!


Zig also has an IRC channel on libera (#zig) that is moderated by Andrew Kelley.[1]

[1] https://github.com/ziglang/zig/wiki/Community


> all folks that value privacy would not participate

How much privacy do you need to talk about a programming language?

Official Discords are great!


No, they aren't. They are silos with proprietary protocols and no proper libre/open-source clients. Also, Discord the company can send any of your data to /dev/null anytime.


Chat is fairly transient by nature, anything that is not should be put into reusable documentation anyway.


This looks interesting! One thing I like about docker is being able to inherit from other images, so for instance in their webserver example, I could inherit from a simple webserver Dockerfile and then my local code would be only the static files I wish to serve.

Is there a way to inherit from config files?


You mean like the packages to run the unikernel? They can be shared here https://repo.ops.city


More like being able to extend existing "recipes" for a unikernel, like you can with Dockerfiles. So you can have a Dockerfile that creates a general solution (e.g. nginx) and extend it to provide your exact implementation (nginx + your public files copied over into the right place).


The way this works is that you take a package (like nginx or node or what-have-you) and then you use any of the configuration vars like 'File' or 'Dirs' or 'MapDirs' and the like to create the filesystem with what you want on it: https://docs.ops.city/ops/configuration#files .

You can then deploy this as-is with a simple config.json or you can create your own new package (for instance if it's something like node and you want to share it with your team).
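
A sketch of what that config.json might look like for the nginx case (the file paths here are made up for illustration; see the configuration docs linked above for the exact semantics of Files/Dirs/MapDirs):

```json
{
  "Files": ["nginx.conf"],
  "Dirs": ["public"],
  "MapDirs": {
    "/home/me/site/*": "/usr/share/nginx/html"
  }
}
```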


I did it!! https://repo.ops.city/v2/packages/radiosilence/nano-web/show

Well chuffed. I couldn't get ops pkg push to work at all, and instead had to manually upload a tar.gz, as no matter what I did it wouldn't find my local image (though it would find it if I did ops pkg load -l nano-web_0.0.1).

Is there some sort of Discord or IRC to ask dumb questions instead of spamming up HN?

Either way I've made an issue:

https://github.com/nanovms/ops/issues/1594


Edit: I've made a single character fix for the bug causing the issue (regex matching)


Ah, cool! I'll look at bundling my tiny webserver as its own package :)

https://github.com/radiosilence/nano-web


Just watched the demo on running a Rails app on Nanos, so fascinating! I wonder how much the performance gain is.


Faster than the speed of light, eh?


Yeah, measuring computation speed in meters per second is like measuring the weight of software dependencies in kilograms



Reminds me of the famous Bill Gates quote: “Measuring software development progress by lines of code is like measuring aircraft building progress by weight”


Amazing work. The code looks very neat too. I love projects that simplify software.


If the OS can run only one single program, what is the difference to a service running as, let's say, Linux + /bin/init?


At the end of the day you'd still be running linux. So what are some key differences between Linux and Nanos (excluding the millions of lines of code, userland arguments)?

There is still quite a lot of code in your average Linux kernel that is not easy to ifdef out. For instance, the capability of running many different programs by many different users means you have a scheduler that has to support that. You have to protect those different programs from being screwed with by other 'users' or other programs (e.g.: just because I'm able to own one program doesn't give me immediate access to someone else's on the same machine if it's owned by a different user), even though a lot of companies have largely walked away from that concept.

It goes a lot deeper than that, though. Shared memory, IPC amongst processes, and semaphores all touch that. Then we start talking about users and their ptys and managing that entire environment. Then you have to start looking at /sys and /proc and /dev and all the other places that get stuffed with all sorts of things.

It really truly is very different because it started with a completely different architecture and deployment model in design.


The unikernel still has vastly reduced overall system surface area, for both good and ill. Fewer security concerns, less random package manager nonsense, fewer things that need to be monitored in case they randomly explode, and all the rest of it… but it's also harder to get a shell or connect a debugger.


You don't have anything in the kernel which isn't needed to run the program.


Secure... curl ... |sh.

Closes tab



That is what it does at this moment. Will it do that 8 seconds from now? There is no signature checking.

Distributing unsigned software, then asking users to blindly execute it, is simply irresponsible.


> That is what it does at this moment. Will it do that 8 seconds from now?

In that case, you can simply stick to the commit hash you read the source code from.

https://raw.githubusercontent.com/nanovms/ops/0b7e8bb9e56767...


Again. Who is to say that this will not change or be MITMed?

Download first, review, then execute.

Or even better, download, verify signatures from multiple reputable people, optionally review, then execute.


> Who is to say that this will not change

Isn't the commit and its hash immutable?


There are a ton of different download options https://ops.city/downloads (versus just compiling from source yourself if you're super paranoid), including signed/notarized/stapled builds from companies such as Apple.


1. You are using MD5 hashes, which are trivial to create collisions for.

2. None of your artifacts or build scripts are signed by you (third-party signatures from Apple are pointless).

3. The builds show no evidence of being reproducible, or of anyone having reproduced and countersigned them.

Compare that to, say, the Bitcoin or Monero release processes, where multiple people build and sign every release, so it is easy to trust there were no single points of failure.
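A minimal sketch of that multi-builder idea (the artifact, digests, and builder count here are all simulated): accept a release only when enough independent builders attest the same digest of the artifact.

```shell
# Accept a release only if at least 2 independent builders attested the
# same sha256 digest of the artifact (attestations here are simulated).
printf 'unikernel image bytes' > release.img
digest=$(sha256sum release.img | cut -d' ' -f1)
attestations="$digest $digest deadbeef"   # builders A and B match, C does not
count=0
for a in $attestations; do
  [ "$a" = "$digest" ] && count=$((count + 1))
done
if [ "$count" -ge 2 ]; then verdict="accepted"; else verdict="rejected"; fi
echo "release $verdict"
rm release.img
```

Real processes (e.g. reproducible-build attestations) replace the string comparison with detached signatures over the digest, but the quorum logic is the same.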


Tangent: I have an earnest interest in learning about methods for installing software obtained from the internet which would be any more secure than this.


Signed reproducible builds are the bare minimum.

See Arch Linux, Debian, Guix, Stagex


Can you run vim plus an SBCL Lisp server on top, I wonder?


This is pretty nice and can lead to modular unikernels. However, there are some applications we need to patch to remove OS-specific code.


This is true. There are quite a lot of applications that you can just run out of the box, but I'll give you two cases where that won't be the case and patches are required:

1) In many interpreted languages it is common to have convenience commands that shell out to call a script, which shells out to call another script, and eight layers later you get to the actual real command. When I'm creating a package for an application that does this, I usually have to figure out what env vars are being set, what paths are being changed, and so forth. This is probably a super easy thing for whoever made the software to begin with, but not so easy for someone who just wants to use it. So the solution is to make the original author aware that it might go into a unikernel environment or, far less probably, convince them that a better method would be to not do this to begin with.

2) In older software (specifically, I'm looking at the mid-to-late 90s), in a time before threads, commodity SMP machines, and the cloud, it was pretty common to write software that used multiple processes to do many things. Postgres is the most common example I use here (keep in mind postgres is descended from Ingres from the early 80s, and Dr. Stonebraker is now on his tenth? twentieth? database venture, DBOS (https://www.dbos.dev/), which definitely has ideas that we are very keen about).

Anyway, that's not really the case today with Go, Rust, Java, etc. For apps like this we will, from time to time, port them. That's exactly what we did with postgres to make it multi-threaded instead: https://repo.ops.city/v2/packages/francescolavra/postgres/16...

I think there is a lot of opportunity out there for individuals to come in and create newer versions of software like this and get some really awesome benefits while maintaining more or less the same parity.


Would love an AWS deployment guide!



Hetzner cloud?


seconded


Thanks!


I so very much want this idea but coded in Rust. Alas, I'll have to try this out.


Rust + WebAssembly unikernel, https://news.ycombinator.com/item?id=37982137


"OPS is explicitly built to be able to run standalone static binaries such as Go and C"

presumably Rust apps will work on this too then
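For instance, a sketch of what that might look like (assumes the ops CLI from ops.city is installed, the Rust musl target is available for static linking, and `myapp` is a placeholder crate name):

```shell
# Build a statically linked Rust binary and boot it as a unikernel with ops.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
ops run target/x86_64-unknown-linux-musl/release/myapp
```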


Yes, Nanos is completely language agnostic. We've had rust customers (not just users) for a few years now.


Why Rust?


Partly because it's an ecosystem that I'd like to see grow. More people using it means that the other Rust-based projects I use will be better too.

Partly because it is the "carbon fiber" of programming languages. Its use alone is something you can market to other engineers.

Partly because it's very ergonomic and I enjoy working with it. I'd much rather dive into esoteric Rust code than esoteric C code.

Partly because code written in Rust will have fewer errors than code written in C. It isn't just memory safety; it's all of the type theory and best practices we figured out in the last 50 years.

I understand why it was written in C. It's an easier language to work with and they probably needed something out quickly. But coding in Rust is a sign (not proof, but a sign) that a program is of quality make.


For this type of unikernel project C makes sense. I’m a fan of both C and Rust. I like that Rust prevents typical code safety problems, but I like that C hardly changes over time whereas younger languages like Rust are constantly changing. It’s plenty possible to write correct, clean, memory-safe, and understandable code in C, especially if verified by extensive fuzz testing and code scanning.


It’s really small. I know C doesn’t translate directly to Rust, but it would be a good template for someone doing that. You need the state in an OS, and using threads as memory containers is a bad idea here. But you could certainly do the array thing or Arc everything, steal the interrupt notion from the embedded Rust community, and write a different async runtime, and that would get you most of the way.



