It's not really about infrastructure but yes kernels and firmwares have to do a lot of stuff the compiler can't verify as safe, eg writing to a magic memory address you obtained from the datasheet that enables some feature of the chip. And that will need to happen in unsafe code blocks. I wouldn't call that a problem but it is a reality.
Are you one of the authors? Concerning the "infrastructure": Rust assumes a runtime; the standard library assumes that a stack exists, that a heap exists, and that main() is called by an OS. In a kernel, none of this is true. And the borrow checker cannot reason about things like DMA controllers mutating memory the CPU believes it owns, memory-mapped I/O where a "read" has side effects (violating functional purity), context switches that require saving register state to arbitrary memory locations, or interrupt handlers that violate the call-stack model. That's what I mean by "infrastructure". It's essentially the same issue with every programming language to some degree, but for Rust it is important to understand that the "safety guarantees" don't apply to all parts of an operating system, even if it is written in Rust.
I am a maintainer. I think what you're referring to is the problem where `std` is actually a veneer on C. For example, when Rust allocates memory on an x86-class desktop, it actually invokes a C library (jemalloc, or whatever the OS is using), and networking is all built on top of libc. Thus a bunch of nice things like threads, time, filesystems, and allocators are all actually the same old C libraries that everything else uses, underneath a patina of Rust.
In Xous, considerable effort went into building the entire `std` in Rust as well, so no C compilers are required to build the OS, including `std`. You can see some of the bindings here in the Rust fork that we maintain: https://github.com/betrusted-io/rust/tree/1.92.0-xous/librar...
Thus to boot the OS, a few lines of assembly are required to set up the stack pointer and some default exception-handler state; from there we jump into Rust and stay in Rust. Even the bootloaders are written in Rust, using the same small-assembly-shim-then-jump-to-Rust trick.
Xous is a Tier-3 Rust OS, so we are listed as a recognized Rust target. We build and host the binaries for our `std` library, which native rustc knows how to link against.
Thanks, interesting. My concern was less about which language implements std, but rather about the semantic mismatch between Rust's ownership model and hardware behavior (e.g. DMA aliasing, MMIO side effects). So I was curious what work-around you found; do you e.g. use wrapper types with VolatileCell, or just raw pointers?
Hmm, I think I see what you're asking. I'm not sure this exactly answers your question, but at least for aliasing: because we have virtual memory, all pages have to be whitelisted to be valid.
Thus "natural" aliases (say due to a decoder that doesn't decode all the address bits) can't be mapped because the OS would not accept the aliases as valid pages. This is handled through a mechanism that goes through the SVD description of the SoC (SVD is basically an XML file that lists every register and memory region) and derives the list of valid pages. The OS loader then marks those as the set of mappable pages; any attempt to map a page outside that list will lead to a page fault. One nice thing about the SoC RTL being open source is that this entire process of extracting these pages is scripted and extracted from the SoC's source code, so while there can be code bugs, at least human error is eliminated from that process.
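As a sketch of the idea (hypothetical names; the real pipeline is scripted against the SoC's SVD/RTL), deriving the mappable-page set from a list of regions looks something like:

```rust
use std::collections::BTreeSet;

// Hedged sketch, not Xous code: given (base, size) regions as listed in an
// SVD description, derive the set of 4 KiB page addresses the loader may
// mark as mappable. Any attempt to map a page outside this set would fault.
fn mappable_pages(regions: &[(u64, u64)]) -> BTreeSet<u64> {
    const PAGE: u64 = 4096;
    let mut pages = BTreeSet::new();
    for &(base, size) in regions {
        if size == 0 {
            continue;
        }
        let first = base / PAGE; // page index containing the first byte
        let last = (base + size + PAGE - 1) / PAGE; // one past the last byte's page
        for p in first..last {
            pages.insert(p * PAGE);
        }
    }
    pages
}
```

Because an alias address falls outside every listed region, it never lands in the set, so the loader has no way to hand it out.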
DMA devices in their own right can have "god mode" access to memory, because they operate on physical memory addresses and lack page translation tables. To that end, the preferred DMA mechanism in hardware has an "allow list" of windows that can be enabled as DMA targets; on reset the list is empty and nothing can be DMA'd, so the OS has to configure it correctly. This is not a Rust-level thing, it's just a driver-level hack. Not all the DMA-capable peripherals have this safety mechanism though; some of the IP blocks are just a big gun with no safety, and you're free to point it at your toes.
However, if you set up a DMA transfer and then read from the target later on, you're in unsafe territory; Rust can't reason about that. So structures that are intended as DMA targets are coded in a peculiar way: accesses are marked as unsafe and use the read_volatile() method on the raw pointer type to force the compiler not to optimize out the read for any reason. Furthermore, fence instructions are put around these reads, and a cache flush is required to ensure the correct data is pulled in.
This complexity is baked into a wrapper struct we create specifically to handle dangerous interactions like this.
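A minimal hosted sketch of the shape of such a wrapper (illustrative names, not the actual Xous struct; a real driver would also flush/invalidate the cache around the access):

```rust
use std::ptr;
use std::sync::atomic::{fence, Ordering};

/// Hypothetical wrapper around a buffer that hardware (e.g. a DMA engine)
/// may mutate behind the compiler's back.
struct DmaBuffer {
    base: *mut u32,
    len: usize,
}

impl DmaBuffer {
    /// Volatile read bracketed by fences: the compiler may not elide,
    /// cache, or reorder this access. On real hardware a cache
    /// invalidate would also be needed before the read.
    fn read_word(&self, idx: usize) -> u32 {
        assert!(idx < self.len);
        fence(Ordering::SeqCst);
        let v = unsafe { ptr::read_volatile(self.base.add(idx)) };
        fence(Ordering::SeqCst);
        v
    }
}
```

Callers only ever see the safe `read_word` API, so the unsafety stays contained in one place.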
Thanks, that's exactly what I was asking about. So if I understand correctly: for the hardware interface layer (DMA, MMIO), you're essentially writing disciplined C-style code in unsafe blocks with volatile reads and manual memory barriers, then wrapping it to contain the unsafety. That's pragmatic.
I was looking for information about Xous's raw IPC performance to get an impression of how it performs compared to e.g. the L4 family, especially L4Re and seL4. A comparison to QNX would also be very interesting. Are there any "cycles-per-IPC" benchmarks for Xous available somewhere? What are your plans/goals in this regard?
standard library assumes a stack exists, a heap exists, and that main() is called
A small assembly stub can set up the stack and heap and call main(); from then on you can run Rust code. The other topics you mention are definitely legitimate concerns that require discipline from the programmer, because Rust won't automatically handle them, but the result will still be safer than C.
Rust's safety model only applies to code you write in your program, and there's a lot that's unsafe (cannot be verified by the compiler) about writing a kernel or a firmware, agreed. You could have similar problems when doing FFI as well.
The Rust runtime will, at a minimum, set up the stack pointer, zero out the .bss, and fill in the .data section. You're right in that a heap is optional, but Rust will get very cranky if you don't set up the .data or .bss sections.
I filter for false positives with language like this:
For each bug you find, write a failing test. Run the test to make sure it fails. If it passes, try 1-3 times to fix the test. If you can't get it to work, delete the test and move on to the next bug.
It's not perfect; you still get some non-bugs where the test fails because its premises are wrong. E.g., recently I tossed out some tests that were asserting they could index a list at `foo.len()` instead of `foo.len() - 1`. But I've found a bunch of bugs this way too.
You will have an error rate of less than or equal to 1%. You can't average two measurements and get a result with a higher error rate than the worst of the original measurements had.
You wouldn't be well served by averaging a measurement with a 1% error and a measurement with a 90% error, but you will still have less than or equal to 90% error in the result.
If the errors are correlated, you could end up with a 1% error still. The degenerate case of this is averaging a measurement with itself. This is something clocks are especially prone to; if you do not inertially isolate them, they will sync up [1]. But that still doesn't result in a greater error.
You could introduce more error if you encountered precision issues. Eg, you used `(A+B)/2` instead of `A/2 + B/2`; because floating point has less precision for higher numbers, the former will introduce more rounding error. But that's not a function of the clocks, that's a numerics bug. (And this is normally encountered when averaging many measurements rather than two.)
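The degenerate form of that numerics bug is easy to demonstrate near the top of the f64 range (a made-up illustration, not from the original discussion):

```rust
// (a + b) / 2 overflows to infinity when a + b exceeds f64::MAX,
// while a / 2 + b / 2 stays finite (and exact, for these inputs).
fn midpoint_naive(a: f64, b: f64) -> f64 {
    (a + b) / 2.0
}

fn midpoint_stable(a: f64, b: f64) -> f64 {
    a / 2.0 + b / 2.0
}
```

The same rearrangement also reduces accumulated rounding error in the ordinary, non-overflowing case the comment describes.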
There are different ways to define error but this is true whether you consider it to be MSE or variance.
This is not how continuous probabilities work. The probability that a clock is exactly right is zero; hence there is always some error in a measurement of time. Adding additional clocks will always cause the error to be less than or equal to the maximum error.
> You don't want folks to be able to iterate each object by incrementing the id
If you have a lot of public or semi-public data that you don't want people to page through, then I suppose this is true. But it's important to note that separate natural and primary keys are not a replacement for authorization. Random keys may mitigate an IDOR vulnerability, but authorization is the correct solution. A sufficiently long and securely generated random token can serve both as an ID and as an authorization credential, like sharing a Google Doc with "anyone who has a link," but those requirements are important.
To complete the circle, now that we have winnowed the space down to these options, we would normalize them and end up with 0.16 / (0.16 + 0.16) = 0.5 = 50% in both cases.
The reason I'm not putting % signs on there is that, until we normalize, those are measures and not probabilities. What that means is that an event which has a 16% chance of happening in the entire universe of possibilities has an "area" or "volume" (the strictly correct term being measure) of 0.16. Once we zoom in to a smaller subset of events, it no longer has a probability of 16%, but the measure remains unchanged.
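The renormalization step itself is just a rescale so the surviving measures sum to 1 (a tiny sketch of the arithmetic above):

```rust
// Rescale measures over the surviving subset of events so they sum to 1,
// turning them back into probabilities.
fn normalize(measures: &[f64]) -> Vec<f64> {
    let total: f64 = measures.iter().sum();
    measures.iter().map(|m| m / total).collect()
}
```

With the two 0.16 measures from above, this yields 0.5 for each option.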
In this previous comment I gave a longer explanation of the intuition behind measure theory and linked to some resources on YouTube.
I think it is different in the continuous case though, because you can average two (reasonably accurate) chronometers and get a better measurement. But we can't average true and false, at least not in the context of this problem definition.
But the chronometers will sync with each other if you don't store them apart, which would result in correlated noise that an average won't fix.
The saying probably assumes that each chronometer has a certain small probability of malfunctioning, resulting in a significant error (basically a fat-tailed error distribution). With three chronometers, you can use a robust estimator of the true value (consensus value or median). With two, there's no robust estimator and if you use the mean, you have twice the probability of being significantly wrong (though only by half as much).
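A quick numeric illustration of why the median is robust where the mean isn't (made-up readings):

```rust
// The mean is dragged arbitrarily far by one failed instrument;
// the median of three stays with the two readings that agree.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

fn median(xs: &[f64]) -> f64 {
    let mut v = xs.to_vec();
    v.sort_by(|a, b| a.partial_cmp(b).unwrap());
    v[v.len() / 2] // odd-length lists only, in this sketch
}
```

With two good chronometers near 100.0 and one failed unit reading 250.0, the median reports about 100 while the mean reports about 150.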
You can definitely average two relatively accurate chronometers, but if you only have two it's difficult to tell whether one is way fast or way slow.
In a perfect world they drift less than a minute per day and you’re relatively close to the time with an average or just by picking one and knowing that you don’t have massive time skew.
I believe this saying was first made about compasses, which also had mechanical failures; having three lets you know which one failed. The same goes for mechanical watches, which can fail in inconsistent ways: slow one day and fast the next. And the same goes for a compass that is wildly off. With only two, how do you know which one is off?
> In a perfect world they drift less than a minute per day...
A minute per day would be far too much drift for navigation, wouldn't it?
From Wikipedia [1]:
> For every four seconds that the time source is in error, the east–west position may be off by up to just over one nautical mile as the angular speed of Earth is latitude dependent.
That makes me think a minute might be your budget for an entire voyage? But I don't know much about navigation. And it is beside the point; your argument isn't changed if we put in a different constant, so I only mention it out of interest.
> Having three lets you know which one failed.
I guess I hadn't considered the case where a watch stops for a minute and then continues ticking steadily; there you would want to discard the measurement from the faulty watch.
But if I just bring one watch, as the expression counsels, isn't that even worse? I don't even know it malfunctioned, and if it failed entirely I don't have any reference for the time at the port.
My interpretation had been that you look back and forth between the watches unable to make a decision, which doesn't matter if you always split the difference, but I see your point.
> A minute per day would be far too much drift for navigation, wouldn't it?
Even that was much better than the dead-reckoning they had to do in bluewater before working chronometers were invented. Your ship's "position" would be a triangle that might have sides ten miles long at lower latitudes.
I’ve never heard the "bring one or three" version; I’ve always just heard three. I think that exact saying implies that if you have two and one isn’t working you’ll go crazy, but if you have one you’ll be oblivious until it’s too late.
A well-serviced Rolex in 2026, with laser-cut gears, drifts +/- 15 seconds per day.
One with hand-filed gears is going to be +/- a minute on a good day, and that’s what early navigation was using. I have watches with hand-filed gears and they can be a bit rough.
Prior to that, it was dead reckoning: dragging a string every now and again to calculate speed, heading, and current, then guesstimating your location on a twice-daily basis.
Those two wildly inaccurate systems mapped most of the world for us.
Reading the comment thread here made me realize the idea seems to be that having two just means double the probability of one of them failing in some undetectable way. The resulting error magnitude is halved, but the probability of that error is doubled, so in expected value having two gains you nothing. Unlike with three, where the probability of undetectable failure and the error from partial failure are both reduced by the ability to make comparative measurements (e.g. pick the middle number, not the average).
Though not without significant errors, the most amusing to me being that islands had a tendency to multiply because different maps would be combined and the cartographer would mistake the same island on two maps as being separate islands due to errors. A weird case of aliasing I suppose.
The book “Longitude” is fascinating, and discusses the challenges prior to chronometers (many people died), as well as the rewards offered for a precise chronometer, the attempts, etc.
You could but you would lose the performance benefits you were seeking by encoding information into the ID. But you could also use a randomized, proprietary base64 alphabet rather than properly encrypting the ID.
XOR encryption is cheap and effective. Make the key the static string "IfYouCanReadThisYourCodeWillBreak" or something akin to that. That way, the key itself will serve as a final warning when (not if) the key gets cracked.
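For concreteness, the whole scheme is one line of real logic (a sketch; the key string is the one proposed above):

```rust
// XOR each byte of the id against a repeating static key. Applying the
// function twice restores the original, so the same code obfuscates and
// de-obfuscates. This is obfuscation, not real encryption.
fn xor_obfuscate(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter()
        .zip(key.iter().cycle())
        .map(|(d, k)| d ^ k)
        .collect()
}
```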
Symmetric encryption is computationally ~free, but most of them are conceptually complex. The purpose of encryption here isn't security, it's obfuscation in the service of dissuading people from depending on something they shouldn't, so using the absolutely simplest thing that could possibly work is a positive.
XOR with a fixed key is trivially recoverable, defeating the purpose. Speck is simple enough that a working implementation is included within the Wikipedia article, and most LLMs can one-shot it.
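To back that up, here is a hedged sketch of Speck32/64 written from the published description (round function: x = ((x ⋙ 7) + y) ⊕ k, then y = (y ⋘ 2) ⊕ x); for anything beyond toy obfuscation, use a vetted crypto crate instead.

```rust
// Speck32/64: 16-bit words, four-word key, 22 rounds, rotations 7 and 2.
// Unaudited sketch following the published specification.
fn speck32_64_encrypt(pt: (u16, u16), key: [u16; 4]) -> (u16, u16) {
    const ROUNDS: usize = 22;
    // Key schedule. The key array is (l2, l1, l0, k0), matching the
    // word ordering used in the designers' test vectors.
    let mut l = vec![key[2], key[1], key[0]];
    let mut k = vec![key[3]];
    for i in 0..ROUNDS - 1 {
        l.push(k[i].wrapping_add(l[i].rotate_right(7)) ^ i as u16);
        k.push(k[i].rotate_left(2) ^ l[i + 3]);
    }
    // Apply the round function to the two plaintext words.
    let (mut x, mut y) = pt;
    for &rk in &k {
        x = x.rotate_right(7).wrapping_add(y) ^ rk;
        y = y.rotate_left(2) ^ x;
    }
    (x, y)
}
```

Decryption is the same structure run backwards with the inverse round function, omitted here for brevity.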
Yes, XOR is a real and fundamental primitive in cryptography, but a cryptographer may view the scheme you described as violating Kerckhoffs's second principle of "secrecy in key only" (sometimes phrased, "if you don't pass in a key, it is encoding and not encryption"). You could view your obscure phrase as a key, or you could view it as a constant in a proprietary, obscure algorithm (which would make it an encoding). There's room for interpretation there.
Note that this is not a one-time pad because we are using the same key material many times.
But this is somewhat pedantic on my part, it's a distinction without a difference in this specific case where we don't actually need secrecy. (In most other cases there would be an important difference.)
Encoding a type name into an ID is never really something I've viewed as being about performance. Think of it more like an area code, it's an essential part of the identifier that tells you how to interpret the rest of it.
You could also say, if I tell you something is an opaque identifier, and you introspect it, it's your problem if your code breaks. I told you not to do that.
I take your hedging to mean you are probably self-diagnosing. It's worth talking to a doctor and getting the ball rolling on a formal diagnosis; ADHD is not the only diagnosis with those symptoms. For instance, bipolar disorder and autism spectrum disorder can present similarly. Again, not a doctor, take that with a grain of salt.
There are probably new tactics you can adopt from this thread, and they may help and are worth trying; advice which is actionable today is valuable. But if this is severe enough to disrupt your life, the best strategy is a combination of therapy, medication, and lifestyle changes (e.g. exercise).
Easier said than done, I know. I have my own issues I'm struggling with and I get it. I'm in the midst of trying that same three pronged approach.
Please also understand that these diagnoses do not all have the same consequences if left untreated. If you don't want to pursue formal diagnosis and treatment, that is your right, but I would urge you to investigate whether or not you are bipolar in any case. If you have your first manic episode and you don't understand that is what is happening, it could be dangerous. What you're describing sounds more like ADHD to me personally, but it is not inconsistent with hypomania either. Again, not a doctor, grain of salt.
Note that if you ever want to be a pilot, THINK VERY HARD BEFORE GETTING DIAGNOSED OR MEDICATED. This doesn't apply to most people, but it is the major gotcha on an otherwise straightforward decision.
/r/flying is full of people who wish they didn't have this in their medical record. The FAA is totally backwards about medical stuff and has a very dim view towards ADHD & associated meds.
I'm disappointed to acknowledge you have a point. Shame on the FAA for pushing people into the closet with this.
If one did want to become a pilot, I do think it would be critical to determine whether or not they were prone to manic episodes. That really could be very dangerous to a pilot and their crew, passengers, etc.
Also, from my 15 minutes of preliminary research, I don't think that applies to pilots of ultralights. So if your dream is simply to fly, it's still achievable.
Yes, you are correct. My point is that a lot of people who self diagnose as ADHD have a different disorder that causes executive function issues, and it's important to rule out bipolar because it has very different consequences. I don't care if someone with untreated ADHD or ASD flies a plane, but untreated bipolar disorder could actually be dangerous. (Not a doctor.)