
It's a very difficult hike. I did it in 2008, and between the ladders, knee-deep mud, timing your hikes with the tides, and general swampiness, 6-8 days is average. There are sections at the south end of the trail where it's not unusual to only get 8 km in a day.


The fastest known time is 9.5 hours, set by Matt Cecil in August 2014.


I've also worked in this space for a few years, and the number of HN-style overconfident "we can fix this in hardware like the old days, the computers are coming for us!" comments from people who don't understand the automotive industry or how cars are wired is pretty hilarious.

Something that should be noted for anyone who actually reads this is that the level of vulnerability is wildly different between automakers. No universal solution exists.


Yep - and not just between automakers, the security model varies wildly between different electrical architectures from the same manufacturer. Like any industry, there are hard problems, some of which are technically difficult, and some of which are self-inflicted from history/culture/insularity. No sector with any significant value or market competition has only the latter.


ASIL-critical inputs/outputs should not be encrypted, full stop. Do I really trust that the dinky economy-scale micro GM would pick is always going to hold up that encryption when I'm starting to drift off the road? Absolutely the hell not.

I worked in this space (auto RE, including keyless entry) for a while, and there's almost no way this would work at scale without a top-down platform redo for automakers.


> Do I really trust that the dinky economy-scale micro that GM would pick is always going to hold up that encryption when I'm starting to drift off road?

Is your concern that key management could leave the two sides disagreeing on keys? But that's equivalent to the sensors failing altogether, and that already has to be taken into account.

So yes, I would trust "that the dinky economy-scale micro that GM would pick is always going to hold up that encryption when I'm starting to drift off road" because I have to trust that the computers will handle sensor failure correctly.

That said, I'd only trust that if the crypto is sensible. Specifically, authenticated encryption is essential. Key exchange and pairing are important too. It needn't be complicated to set up: trust-on-first-use-after-reset (with reset being non-trivial to execute) should suffice.
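
To make the control-flow point concrete: if every safety-relevant frame is authenticated, a failed MAC check lands on exactly the same fallback path the system already needs for a dead sensor. The sketch below is illustrative only; `verify_and_decrypt` is a toy stand-in for a real AEAD primitive, and none of these names come from an actual automotive API.

```rust
// Hypothetical sketch: a verification failure is handled exactly like a
// failed sensor, which the system must already tolerate.

#[derive(Debug, PartialEq)]
enum SensorReading {
    Valid(i32), // authenticated sensor value
    Fault,      // sensor dead, frame corrupt, or MAC check failed
}

// Toy "MAC" for illustration: last byte must equal XOR of payload and key.
// A real system would use an AEAD cipher here, not this.
fn verify_and_decrypt(frame: &[u8], key: u8) -> Option<i32> {
    let (payload, mac) = frame.split_at(frame.len() - 1);
    let expected = payload.iter().fold(key, |acc, b| acc ^ b);
    if mac[0] == expected {
        Some(payload.iter().map(|&b| b as i32).sum())
    } else {
        None
    }
}

fn read_sensor(frame: &[u8], key: u8) -> SensorReading {
    match verify_and_decrypt(frame, key) {
        Some(v) => SensorReading::Valid(v),
        // Same fallback path as any other sensor failure.
        None => SensorReading::Fault,
    }
}
```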

> [...] there's almost no way this would work at scale without a top-down platform redo for automakers.

That's possible, but I doubt it.


Your arguments are almost identical to the ones greybeard embedded devs have against the Arduino. Yeah, it's expensive, uses an outdated micro (at least the AVR-based Arduinos), but it's effective because of its popularity. Basically a flywheel effect. Doesn't have to be good or optimal, just has to be flexible and have a big community.

I doubt anyone is using an off-the-shelf Pi with an SD card for an actual safety-critical deployment and expecting to get it certified. There are options like the Revolution Pi, which is half PLC and uses the Pi's Broadcom SoC for non-safety calculations. Some even support CODESYS.

I agree that most ARM embedded Linux SoCs can be absolute dumpster fires when it comes to peripheral documentation and poorly maintained device trees (looking at you, Texas Instruments!!!), but that's nothing new in embedded dev. Learning how each manufacturer/platform does hardware peripherals is half the battle.

So I agree that the Pi isn't always the best device for an application. Cost and power savings on an ESP32, better processing on your old laptop-turned-server, and so on. But the Pi does have excellent documentation, and it was lucky enough to gain enough traction to create an ecosystem that reduces the friction for beginners to just get something running, which was literally its original design intention.


I once encountered a hydroponic nutrient dosing system that was, no shit, an RPi 3+ with a custom HAT for the electrochemistry and actuation. These were sold to businesses running container farms and the like.

At the end of the day, it seemed like the manufacturer had the (good) idea to automate the dosing, but thought that all the standard industrial automation tactics (PLCs, ladder logic, HMIs, etc) were somehow overkill for the application.

Which meant that the end users had to write all the software to make it work with a standard industrial automation system anyway. It was super annoying.


Memfault is another option.


It takes some knowledge of the standard library of whatever language you're using (easy with C and Rust, harder with C++) to know which calls will try to allocate memory. Another method is to use a static buffer of bytes, designated in the linker script as your "heap", and have tasks allocate only when they start, never freeing that memory. Algorithmically, allocation is the easy part; reusing freed blocks is more difficult. So if your embedded allocator never frees and faults at OOM, and you allocate only at the start of the program, you can still be more confident in memory safety.
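
The allocate-at-start, never-free scheme described above is essentially a bump allocator. A minimal sketch (my own, for illustration): real firmware would place the buffer via the linker script, but here it's an owned array so the code runs on a host.

```rust
// A bump allocator over a fixed buffer: carve blocks out of one region
// and never free. Reuse is the hard part, so we simply don't do it.
struct BumpAllocator {
    buf: [u8; 1024],
    next: usize,
}

impl BumpAllocator {
    fn new() -> Self {
        BumpAllocator { buf: [0; 1024], next: 0 }
    }

    /// Returns the offset of a freshly reserved block, or None at OOM.
    /// `align` must be a power of two. There is deliberately no `free`.
    fn alloc(&mut self, size: usize, align: usize) -> Option<usize> {
        let start = (self.next + align - 1) & !(align - 1);
        let end = start.checked_add(size)?;
        if end > self.buf.len() {
            return None; // fault at OOM instead of fragmenting
        }
        self.next = end;
        Some(start)
    }
}
```

With this, every task grabs its blocks at startup; if the budget doesn't fit, you find out immediately rather than hours into a run.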


The problem with allocating is not algorithmic complexity; it's that the allocator may need to request new pages from the operating system, which is a somewhat expensive operation.

If you use a global allocator, when this happens is entirely non-deterministic. If you use a local allocator, at least you control when it happens, and you have guarantees on the asymptotic behaviour.
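
Even without a custom allocator, the same idea shows up as reserving capacity up front, so the expensive allocation (and any page faults behind it) happens at a moment you choose. A trivial sketch:

```rust
// Reserving up front moves the allocation to startup, where you control it.
fn preallocated_pipeline(n: usize) -> Vec<u64> {
    let mut samples = Vec::with_capacity(n); // one allocation, here and now
    for i in 0..n as u64 {
        samples.push(i); // never reallocates: capacity was reserved
    }
    samples
}
```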


I can probably chime in here. I have used Rust professionally for 2 years but I've been working with it as a hobby for 3. Experience predominantly in C/C++ embedded and systems environments.

I've used Rust for *nix-based IoT programs that ingest data from different sources (network I/O, serial, CAN, files, etc.), as well as for bare-metal embedded development. I've also made some dinky webservers as an exercise, and various random tooling.

It's a very, very flexible language, once you get over its quirks. I found myself using it to make tools that I would usually turn to Python for. I'm quite impressed with how "high level" the language feels despite being able to work with low-level concepts. Simple stuff like mapping iterators and the Option<T> type while working without an OS is a fantastic feeling.
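
A small illustration of that feel (toy numbers and names of my own choosing): everything here is core-library only, so it would compile under `#![no_std]` just as well.

```rust
// Option and iterator adapters on "bare metal" style data: parse a toy
// 12-bit ADC reading with no unwraps and no manual loops.
fn scale_adc(raw: Option<u16>) -> Option<u32> {
    raw.filter(|&v| v <= 4095)            // reject out-of-range samples
        .map(|v| (v as u32 * 3300) / 4095) // convert to millivolts
}

fn sum_valid(readings: &[Option<u16>]) -> u32 {
    readings.iter().copied().filter_map(scale_adc).sum()
}
```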

As most people will likely say, take Rust for a test drive with a basic project to decide for yourself. I would say to keep it simple and not go down the async route as your first foray, though.


Do you have any recommendations on libraries for parsing CAN with Rust (or are your tools OSS)? I'll be working on a project in a couple of months that'll need to pull data off CAN.


What the author fails to comment on is the sheer breadth of embedded applications; he focuses almost exclusively on IoT to prove a point about distributed systems (which he does quite well at a medium-high level). The analogy falls apart pretty quickly when you compare webdev to a safety-critical system with hard real-time guarantees; such a system may not have a viable logging framework due to its deployment location or for security reasons. From a 10,000-foot level I agree with the premise, but I think the title fails to convey the real topic: "similarities between embedded IoT and web development"


As you say, web dev is not like hard realtime development. It's not like resource-constrained development, either. Not many web servers are trying to run on 64K of RAM, or on an 8051.

Note that I'm not saying "easier" or "harder". It is different, though.


(author here)

This is a real difference, and one I want to expand on in my next article. You can fuck around much more in a web context than embedded, but you still often find out.

Doing web "well" requires thinking in terms of realtime and memory constraints. I don't want the latency of my microservice to jitter, nor do I want it to use random amounts of memory. Not only will I be able to provision smaller machines, but I will also guarantee better SLAs. Building your microservice just like you would a hard realtime constraint (guaranteed memory usage per request, preferably no allocations at all, time guarantees (response within 50 ms or I fail) helps make a robust, performant and resilient web application.


(author here) Actually, I think that applies very similarly. The same way you can't just log willy nilly in a web system, because you can easily overwhelm your logging infrastructure or introduce unwanted dependencies, you have to be mindful of instrumentation in low-level code. And the reasons you might not want to write to flash (you have none, you don't want to wear it out, security, or there's no way to read it back out anyway) have parallels in how you containerize and deploy your app (as a lambda? as a Docker container without persistent storage?)

The real-time aspect and hard resource constraint are indeed the fundamental difference. Some of these (static memory allocation) make a lot of sense when building microservices, for example, but it's much more squishy. I personally like building my microservices very similarly to my embedded systems: event driven, with different priorities to ensure time constraints, and bounded statically allocated memory and queues. Even in languages like golang, my allocations are usually done up front, memory ownership is very clearly delineated.
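
As one concrete version of "bounded, up-front queues" in a host-side service (a sketch of the pattern, not anyone's production code): a rendezvous-style bounded channel gives you backpressure instead of unbounded memory growth.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// sync_channel(N) is a queue with a hard capacity: the producer blocks
// (backpressure) instead of growing memory without bound.
fn bounded_pipeline() -> u64 {
    let (tx, rx) = sync_channel::<u64>(8);
    let producer = thread::spawn(move || {
        for i in 0..100 {
            tx.send(i).unwrap(); // blocks whenever the queue is full
        }
    });
    let total: u64 = rx.iter().sum(); // drains until tx is dropped
    producer.join().unwrap();
    total
}
```

The capacity (8 here) is the knob: it bounds the memory the queue can ever hold, exactly like a statically sized mailbox in firmware.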

While you might not have realtime in web systems, you very much want to avoid GC pauses and similar behaviour to avoid spiking your latency, as these things compound. The underlying concepts are a bit similar, at least in my head: I think "this needs to be O(1) in runtime and memory to be repeatable; if the constant factor is bad, we can work on that, but I don't need fast and elastic".

As for distributed systems, I actually consider an SPI / I2C / CAN system to be a distributed system, and a lot of patterns (retries, mailboxes, timeouts, promises, bounded queues, circuit breakers) make sense in both.
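
The retry pattern reads identically whether the flaky peer is an HTTP service or an I2C device. A minimal sketch (names and the fake bus are my own; real code would add backoff and a deadline):

```rust
// Retry a fallible operation a bounded number of times, returning the
// last error if every attempt fails. The operation could be an HTTP call
// or an I2C transfer; the pattern doesn't care.
fn with_retries<F>(mut op: F, max_attempts: u32) -> Result<u8, &'static str>
where
    F: FnMut() -> Result<u8, &'static str>,
{
    let mut last_err = "no attempts made";
    for _ in 0..max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => last_err = e, // real code: back off / check deadline here
        }
    }
    Err(last_err)
}
```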

I definitely plan to write more about these lower level details, and provide some code examples to make the parallels more evident.


Not to mention the amount of similarities between SkyTrain and BART with the automated system and custom parts. It didn't even strike me as odd that the trains didn't have drivers until a dump of snow hit the rails and someone had to open the front compartment of the train to access the manual controls to drive it into Columbia Station (why is it always Columbia?)


I've found Cargo more than up to the task of managing build configurations, and doesn't require monkeying around with CMake scripts or Makefiles. It was pointed out in another comment but you can gate features and crates based on the target you're compiling to. Cargo also supports custom build profiles so you can also pick and choose what you want even if it's all on the same target.
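
For example, target-gated dependencies and a custom profile look roughly like this in `Cargo.toml` (the crate choices here are illustrative of the syntax, not a recommendation):

```toml
[features]
default = []
std-io = ["dep:socket2"]   # only pulled in when the feature is enabled

[target.'cfg(target_os = "linux")'.dependencies]
socket2 = { version = "0.5", optional = true }

[target.'cfg(all(target_arch = "arm", target_os = "none"))'.dependencies]
cortex-m = "0.7"

# Custom profile: build with `cargo build --profile release-small`
[profile.release-small]
inherits = "release"
opt-level = "z"
lto = true
```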

Creating a heap in Rust on a Cortex-M is safe and cheap-ish with a crate supported by the rust-lang developers. Much easier than implementing your own free() method on a memory pool.

I think you would like RTIC. It's not a pre-emptive RTOS, but a way to manage context between ISRs without relying on some kind of module or global variable that can get corrupted by multiple accessors. Very minimal overhead compared to FreeRTOS.

