Hacker News | Youden's comments

I have 25Gbps from Init7 at home. My "router" is a Minisforum MS-01 with a second-hand Mellanox ConnectX-5, running VyOS.

My main home server is a Supermicro SYS-510D-4C-FN6P. It has dual 25Gbps ports onboard but also an Intel E810-XXVDA4T with another 4x25Gbps ports.

Both of them are perfectly capable of saturating their ports using stock forwarding on Linux, no DPDK, VPP, anything, without breaking a sweat. Both of them were substantially cheaper than the machine in the article.

Is there something I'm missing? Why does this workstation need a ~$1000 motherboard and a ~$1000 Xeon CPU? Those two components alone cost more than either of my computers and seem like severe overkill.


SCION is much slower than normal IP.

Huh?

"SCION OSS border router performance reached a ceiling of around 400k-500k packets per second, which is roughly equivalent to 5-6 Gbit/s at a 1500-byte MTU." vs. 1.4 M PPS for IP (on an older CPU) https://toonk.io/linux-kernel-and-measuring-network-throughp...

Ah. Thanks!

Your MS-01 routes line-rate 25Gbps in software with VyOS w/o kernel bypass? That's very surprising to me. At what packet sizes?

My understanding is that the setup needs to allow them to work on packet routing at those speeds, not just send/receive, to simulate SCION.

Ah, so they need to hold giant routing tables in memory and do lookups in them or something like that?

Does not look like it [1]. It appears to be a protocol that enumerates your exact path, interface by interface, on every data packet. So you can just blindly forward to the next hop written in the packet itself.

By my guess, a competent and efficient implementation should be able to run the routing logic at ~30-100 million packets per second per core. That would be ~300-1,000 Gb/s per core, so you would bottleneck on your memory bandwidth if you have even a single copy.

[1] https://www.ietf.org/archive/id/draft-dekater-scion-dataplan...
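To illustrate, the per-packet work described in the draft boils down to something like the following sketch. All names, field layouts and the MAC construction here are made up for illustration; the real wire format in the draft is different:

```python
import hashlib
import hmac

# Toy sketch of SCION-style hop-field forwarding: the packet carries its full
# path as a list of hop fields chosen by the sender; the router just reads the
# current one, verifies its MAC with a local key, and forwards out the named
# egress interface. No routing-table lookup is needed.
def forward(packet: dict, local_key: bytes) -> int:
    hop = packet["path"][packet["curr_hf"]]      # hop field for this router
    msg = f"{hop['ingress']}:{hop['egress']}".encode()
    mac = hmac.new(local_key, msg, hashlib.sha256).digest()[:6]
    if not hmac.compare_digest(mac, hop["mac"]): # per-hop MAC check
        raise ValueError("invalid hop field MAC")
    packet["curr_hf"] += 1                       # advance to the next hop
    return hop["egress"]                         # egress interface to use
```

A usage sketch: a sender (or rather, the control plane) builds the hop field with a MAC the router will accept, and the router simply advances through the path:

```python
key = b"router-secret"
hop = {"ingress": 1, "egress": 7,
       "mac": hmac.new(key, b"1:7", hashlib.sha256).digest()[:6]}
pkt = {"curr_hf": 0, "path": [hop]}
forward(pkt, key)  # returns 7, the egress interface
```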


Is this some MPLS-like thing?

Don't forget checking the MACs.

In defence of young people, it's "determined" by the people who actually go out and vote the same way a child "determines" what's for dinner when asked "would you like broccoli or brussels sprouts?"

American democracy is broken. Not in an abstract, hand-wavy feelings way but a hard, numerical, mathematical way. A two party system results in no real choice. First past the post results in a two party system. America uses first past the post. Therefore, American democracy gives voters no real choice.


Margins in recent elections have been thin enough that higher voter turnout among young generations could have easily changed the outcome.

Blaming broken democracy is just a cop out. Youth voter turnout for primary elections, where there are many candidates, is also low. More parties isn’t going to change anything.


You're missing the point. There were only two possible outcomes: Democrats or Republicans. Both were bad and unappealing. Both are too dependent on the status quo to serve as vehicles for real change (so primaries are pointless too).

"More parties", through elimination of first past the post, absolutely changes things. It allows you to vote for someone who truly represents you and your interests without "throwing away" your vote. That's impossible today.


Dems didn't really get primaries in 2024, so that certainly didn't help.

>"More parties", through elimination of first past the post, absolutely changes things.

Indeed. But that's the one single thing D's and R's can agree on not doing. It'll need to be done state by state to get any real leverage.


Well we are getting some "real change" now, so I guess the monkey's paw works.

There was a real choice in 2016, 2020 and 2024. Trump's opponent was very different from him every single time.

Democrats continue to offer up horrible candidates, and their idiotic primary system confirms those horrible candidates every 4 years. A slice of cheese could have beaten Trump, but somehow the DNC managed to offer up the most boring, milquetoast, unlikable, uncharismatic, centrist candidates they could find and beat him once out of three times. They're just kicking own-goals over and over, and they're not learning which direction to run down the field.

Biden was not centrist; he was left. He was a good president. I do not understand why Americans complain as if the choice between the two was so hard.

His policies were tempered, image-wise and often in substance, by his affinity for Joe Manchin alongside his disdain for Bernie Sanders. Balanced alongside the Middle Eastern foreign policy, he comes across as centrist despite the BBB.

Look up the Build Back Better Act that Biden proposed and tell me if you think that was centrist. It originally proposed extending the child tax credit (basically basic income for people with kids).

The Inflation Reduction Act, the negotiated pared-down version, was still the biggest climate bill in history.

He also attempted to cancel $10k to $20k of student debt per borrower, a progressive priority. That was blocked by the Supreme Court.

The list goes on.

If the electorate had given Biden a bigger majority in Congress he would have passed much more progressive legislation.


The self reinforcing prophecy of “somebody else’s job”.

It’s the job of politicians to pander to us, the good voter. Since they didn’t offer us something good, we didn’t vote, and that results in this current situation.

Politics is not my job; being aware of how politics works is not my job. My job is just to let them know they aren't good enough. It's because they aren't good enough that we ended up in this situation.


If you search "pi n150 3588" (without the quotes), Kagi, Google and DuckDuckGo all make it clear that "3588" means "RK3588" or "Rockchip RK3588".

Back in the old days we didn't have all these AI things and personalization to predict our intent, we had to put context in our queries :)


Yes, but they seem to be talking about a specific product?

    "(4GB was $70 now $110, next batch probably ~$150 by now if nothing improves)"
I think it's reasonable to ask, what's $70, now $110 and $150?

If you're quoting specific numbers for a specific product that you're claiming should be in the thread and then you refuse to link it...


I use GrapheneOS in Switzerland and have yet to find a bank or financial app that doesn't work. ZKB, UBS, Cembra, BEKB, SGKB, WIR, N26, Revolut, debiX+, SaxoTrader, Swisscard, various TWINT apps, YAPEAL and Yuh are all installed on my phone right now and all work. Most of them don't use the Play Integrity API at all and the few that do are satisfied with the minimal level that's satisfied by GrapheneOS.

The catch is that you need Google Play Services installed and for many, you need to disable GrapheneOS' "Secure App Spawning" feature, which often trips root detection heuristics.

I know many Russians living here and when sanctions came in, their accounts became unable to receive deposits until they provided evidence of a valid residence permit. Some have problems during permit renewals as well but overall, it's nothing like as bad as it is for Americans.


Are all of these apps only available through Google's Play Store repo, or are any of them available as an APK file directly from the bank's site? For some reason most companies only distribute their apps through Google's and Apple's repos. People shouldn't have to have an account with, and agree to the ToS of, a third-party US company just to download a banking app.

Why are Google Play Services required?

Genuine questions - I'm not from Switzerland and I don't have a Google account.


They're only available through Google Play. That's near-universal for commercial apps.

Google Play Services, among other things, is the main way to get notifications and location on Android, so any app that uses either of those things tends not to function if it's missing.


IIRC none of them is available from an alternative store

For what it's worth, this is entirely a carrier problem and has little to do with the technology.

Various people and the article have outlined some bad experiences but to give a contrasting example: Digital Republic, a local MVNO here in Switzerland, allows you to replace your eSIM by simply logging into their web portal with TOTP-based 2FA and clicking a button. No SMS, no contact with support, no reidentification.

In theory, all carriers could do this.
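For what it's worth, the TOTP check behind that kind of portal login is standard RFC 6238, nothing carrier-specific. A minimal sketch (key handling, base32 decoding and rate limiting omitted):

```python
import hashlib
import hmac
import struct
import time

# Minimal RFC 6238 TOTP: HMAC-SHA-1 over the 30-second time counter,
# dynamic truncation, then the last `digits` decimal digits.
def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    counter = (int(time.time()) if t is None else t) // step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server just computes the same code from its stored secret and compares (usually allowing one time step of clock skew in either direction).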


The flaw with the technology is that it is designed so that you need the co-operation of your carrier, when previously you did not. Indeed, in the first versions, moving a SIM profile could not even be initiated independently by the user but required them to contact support. Now there is the "device change" protocol, which can be triggered by an app on the phone, but I think it still requires the co-operation of carrier servers.


> Now there is the "device change" protocol which can be triggered by an app on the phone, but I think it still requires the co-operation of carrier servers.

And it won't work if your phone is broken, while a regular SIM could still easily be removed.


This is especially bad in the US, where the government doesn't like to force companies to implement consumer-friendly laws. It was such a great thing when GSM SIMs were introduced, to avoid the carrier lock that was so common in the early days of cell phones.


I only have experience with two carriers in NL and they’re the exact opposite.

No QR code, only an iOS app which needs to be installed on the phone using the plan. My mum was visiting from abroad once and I had to download the app on her phone — which required me to first log into the App Store with my Dutch account.

Another app that could have been a QR code.


Apple forcibly removed SIMs.


Inspired by the other (somewhat aggressive) replies, I looked into what the US promised exactly and unfortunately, it looks like there was never a promise to defend Ukraine.

The relevant document is the Budapest Memorandum [0]. Ukraine, Russia, the UK and the USA are signatories and essentially each agree to respect Ukraine's borders and sovereignty and not to engage in certain hostile acts.

However the only obligation in the event of a breach is that if nuclear weapons are used against Ukraine, or Ukraine is threatened by them, the signatories must seek immediate action from the UN Security Council.

I hate to say it but it looks like the US and UK are adhering to the agreement as-written. The problem as I see it is that Ukraine accepted the agreement without stronger security guarantees.

[0]: https://en.wikisource.org/wiki/Ukraine_Memorandum_on_Securit...


Maybe, but Bill Clinton recently:

>“We forced Ukraine to give up nuclear weapons, cruise missiles, and strategic bombers. We promised to protect Ukraine from Russia. We made Ukraine vulnerable. So yes, this is our war.”

He was the one who did the deal.


I haven't actually seen any evidence of such a promise though. From what I've read, the US explicitly avoided making any such commitments at the time.

Was he stating a fact, or was this perhaps some political rhetoric?


He should be the one to enforce it then.


Are you saying you want to put him back in charge?


He had decent approval ratings and was the last president to run an actual budget surplus rather than the huge deficits of late so you could do worse.


I mean, fair. I just don't think the person I responded to actually thought that part through.


Ah yes, let's not have consistent policy in the USA, and let's not keep our promises/guarantees. That will make America great.

If the President says 'this is what I negotiated', we should fulfill that agreement, not look for ways to get out of it or for legal loopholes (sure, the Ukrainians' agreement said one thing, but in the English version we put something less binding). I get that doing so wouldn't be the billionaire behavior we worship (how can I get the better end of the deal AND get out of whatever commitments I made?).


We can dig into the differences between "security guarantees" and "security assurances" and the precise requirements in the text, but ultimately the disarmament of Ukraine made a promise to the rest of the world about the possibility of nuclear disarmament. And the question is whether it's possible to have security from Russia unless a country has nuclear weapons.

A defended Ukraine promised the rest of the world that they could be a country and not need to build nuclear weapons. An abandoned Ukraine means that every country needs to have nuclear weapons or the world will stand aside as nuclear powers invade every other country.

It's quite clear which world Trump wants to live in. As soon as North Korea got the bomb, he started acting sycophantically and weak towards North Korea.

We are entering a far more dangerous world, and as far as defense spending goes a far more expensive world by not giving Ukraine the conventional weapons it needs to defend itself.


"However, this is not just a cyclical shortage driven by a mismatch in supply and demand, but a potentially permanent, strategic reallocation of the world’s silicon wafer capacity. [...] This is a zero-sum game: every wafer allocated to an HBM stack for an Nvidia GPU is a wafer denied to the LPDDR5X module of a mid-range smartphone or the SSD of a consumer laptop."

I wonder if this will result in writing more memory-efficient software? The trend for the last couple of decades has been that nearly all consumer software outside of gaming has moved to browsers or browser-based runtimes like Electron. There's been a vicious cycle of heavier software -> more RAM -> heavier software but if this RAM shortage is permanent, the cycle can't continue.

Apple and Google seemed to be working on local AI models as well. Will they have to scale that back due to lack of RAM on the devices? Or perhaps they think users will pay the premium for more RAM if it means they get AI?

Or is this all a temporary problem due to OpenAI's buying something like 40% of the wafers?


> I wonder if this will result in writing more memory-efficient software?

If the consumer market can't get cheap RAM anymore, the natural result is a pivot back to server-heavy technology (where all the RAM is anyway) with things like server-side rendering and thin clients. Developers are far too lazy to suddenly become efficient programmers and there's plenty of network bandwidth.


Developers would prefer to write good software, the challenge and the craftsmanship are a draw.

However, the customers do not care and will not pay more so the business cannot justify it most of the time.

Who will pay twice (or five times) as much for software written in C instead of Python? Not many.


Well this is patently false. For the past 3 decades, programmers have intentionally made choices which perform as poorly as the hardware will allow them. You can pretty much draw a parallel line with hardware advancement and the bloating of software.

It hasn't gotten 100x harder to display hypermedia than it was 20 years ago. Yet applications use 10x-100x more memory and CPU than they used to. That's not good software, that's lazy software.

I just loaded "aol.com" in Firefox private browsing. It transferred 25MB, the tab is using 307MB of RAM, and the javascript console shows about 100 errors. Back when I actually used AOL, that'd be nearly 10x more RAM than my system had, and would be one of the largest applications on my machine. Aside from the one video, the entire page is just formatted text and image thumbnails.


> You can pretty much draw a parallel line with hardware advancement and the bloating of software.

I do not think it is surprising that there is a Jevons paradox-like phenomenon with computer memory, and as with other instances of it, it does not necessarily follow that it must be the result of a corresponding decline in resource-usage efficiency.


This is by design. Rent your computer, don't buy! Use GeForce Now!


There is a small part of me that wonders if my $3000 computer is worth it when that could get me about 12 years of GeForce Now gaming with an up-to-date graphics card and processor at all times. But I like to tinker so I'll probably end up spending $10k or more by the end of those 12 years instead.


There's plenty of scope for local AI models to become more efficient, too. MoE doesn't need too much RAM: only the parameters for experts that are active at any given time truly need to be in memory, the rest can be in read-only storage and be fetched on demand. If you're doing CPU inference this can even be managed automatically by mmap, whereas loading params into VRAM must currently be managed as part of running an inference step. (This is where GPU drivers/shader languages/programming models could also see some improvement, TBH)
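The mmap approach can be sketched with numpy's memmap, which gives you exactly that OS-managed on-demand paging for CPU inference. The file layout, shapes and names here are all made up for illustration:

```python
import numpy as np

# Sketch of on-demand expert loading for MoE CPU inference: keep all expert
# weight matrices in one flat file and mmap it read-only, so only the experts
# the router actually selects get paged into RAM by the matmuls that touch them.
N_EXPERTS, D_IN, D_OUT = 8, 512, 512  # illustrative sizes

def make_expert_file(path: str) -> None:
    # In reality this would be the exported model weights.
    w = np.random.rand(N_EXPERTS, D_IN, D_OUT).astype(np.float32)
    w.tofile(path)

def moe_layer(path: str, x: np.ndarray, active: list[int]) -> np.ndarray:
    # mode="r": read-only mapping; untouched experts never leave disk.
    w = np.memmap(path, dtype=np.float32, mode="r",
                  shape=(N_EXPERTS, D_IN, D_OUT))
    # Only the selected experts' pages are faulted in by these reads.
    return sum(x @ w[e] for e in active) / len(active)
```

The trade-off, as the replies below note, is that a page fault per newly activated expert is far slower than having the weights resident, so this saves RAM at a real cost in tokens per second.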


But aren't the experts chosen on a token by token basis, which means bandwidth limitations?


Yes, and the direct conclusion from that: tl;dr, in theory the OP's explanation could mitigate RAM usage; in practice, it's worse.

(Source: I maintain an app integrated with llama.cpp, in practice no one likes 1 tkn/s generation times that you get from swapping, and honestly MoE makes RAM situation worse because in practice, model developers have servers and batch inference and multiple GPUs wired together. They are more than happy to increase the resting RAM budget and use even more parameters, limiting the active experts is about inference speed from that lens, not anything else)


MoE works exactly the opposite way you described. MoE means that each inference pass reads a subset of the parameters, which means that you can run a bigger model with the same memory bandwidth and achieve the same number of tokens per second. This means you're using more memory in the end.


It's not a zero sum game because silicon wafers are not a finite resource. Industry can and will produce more.


If industry has a bit of fear that the demand will slow down by the time they can output meaningful amount of chips, then probably not. Time will show.


Neither are paperclips.


I'm waiting for the good AI-powered software... Any day now.

Ideally, LLMs should be able to translate from memory-inefficient languages to memory-efficient ones, and maybe even optimize the underlying algorithms' memory use.

But I'm not going to hold my breath


This is a temporary problem driven by the AI bubble. It's going to hurt until the bubble pops, but when that happens other things are going to hurt


The code AI produces will solve the memory-usage problems, which are themselves the result of lazy or poor human coders.


Nice assertion. Perhaps you meant that AI could be directed towards less memory intensive implementations. That would still have to be directed by those same lazy/poor coders because the code the AI is learning from is their bad code (for the most part).


IDK, given the prevalence of Electron and other technically-correct-but-inefficient code out there, at bare minimum it would require decent prompting to help.


> There's been a vicious cycle of heavier software -> more RAM -> heavier software but if this RAM shortage is permanent, the cycle can't continue.

What do you mean it can't continue? You'll just have to deal with worse performance is all.

Revolutionary consumer-side performance gains like multi-core CPUs and switching to SSDs will be a thing of distant past. Enjoy your 2 second animations, peasant.


From the article: "[...] the Linux support for various parts of the boards, not being upstreamed and mainlined, is very likely to be stuck on an older version. This is usually what causes headaches down the road [...]".

The problem isn't support for the ARM architecture in general, it's the support for this particular board.

Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.


The exception proves the rule (and even those are questionable, as running plain Debian did not work right on a Pi 3B and others when I tried recently). You have to look really hard to find an x86 computer where things don't basically just work; the reverse is true for ARM. The power draw between the two is comparable these days, so I don't understand why anyone would bother with ARM when you need more than minimally powerful hardware.


The Pi 3B doesn't have UEFI support, so it requires special support on the distro side for the boot process, but for the 4 and newer you can flash the firmware on the board to support UEFI and USB boot (or it'll already be there, depending on luck and the age of the device), though installing is a bit of a pain since there are no easy images to do it with. https://wiki.debian.org/RaspberryPi4

I believe some other distros also have UEFI booting/installers set up for Pi 4 and newer devices because of this, though there's a good chance you'll still want some of the other libraries that come with Raspberry Pi OS (aka Raspbian) for some of the hardware-specific features like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.

There's also a port of Proxmox called PXVirt (formerly Proxmox Port) that exists to use a number of similar ARM systems as virtualization hosts, with a nice UI and automation around it.


This. The issue is the culture inside many of these HW companies, which is oppositional to upstreaming changes and to developing in the open in general.

Often an outright mediocre software development culture in general, in fact, that sees software as a pure cost centre. The "product" is seen to be the chip, the software "just" a side show (or worse, a channel by which their IP could leak).

The Rockchip stuff is better, but still has similar problems.

These companies need to learn that their hardware will be adopted more aggressively for products if the experience of integrating with it isn't sub-par.


They exist in a strange space. They want to be a Linux host but they also want to be an embedded host. The two cultures are pretty different in terms of expectations around kernels. A Linux sysadmin will (rightly) balk at not having an upgrade path for the kernel while a lot of embedded stuff that just happens to use Linux, often has a single kernel released… ever.

I’m not saying one approach is better than the other but there is definitely a lot of art in each camp. I know the one I innately prefer but I’ve definitely had eyebrows raised at me in a professional setting when expressing that view; Some places value upgrading dependencies while others value extreme stability at the potential cost of security.


> Some places value upgrading dependencies while others value extreme stability at the potential cost of security.

Both are valid. The latter is often used as an excuse, though. No, your $50 wifi connected camera does not need the same level of stability as the WiFi connected medical device that allows doctor to remotely monitor medication. Yes, you should have a moderately robust way to update and build and distribute a new FW image for that camera.

I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/os-image/app-binary or whatever has build strings that CLEARLY feature `some-user@their-laptop` betraying that if there's ever going to be an updated firmware, it's going to be down to that one guy's laptop still working and being able to build the artifact and not because a PR was merged.


The obvious counterpoint is that a PR system is also likely to break unless it is exercised+maintained often enough to catch little issues as they appear. Without a set of robust tests the new artifact is also potentially useless to a company that has already sold their last $50 WiFi camera. If the artifact is also used for their upcoming $54.99 camera then often they will have one good version there too. The artifact might work on the old camera but the risk/reward ratio is pretty high for updating the abandonware.


My uninformed normie view of the ecosystem suggests that it's the support for almost every particular board, and that's exactly the issue. For some reason, ARM devices always have some custom OS or Android and can't run off-the-shelf Linux. Meanwhile you can just buy an x86/amd64 device and assume it will just work. I presume there is some fundamental reason why ARM devices are so bad about this? Like they're just missing standardization and every device requires some custom firmware to be loaded by the OS that's inevitably always packaged in a hacky way?


It's the kernel drivers, not firmware. There is no BIOS or ACPI, so the kernel itself has to support a specific board. In practice this means there is a DTB file that configures it, plus the actual drivers in the kernel.

Manufacturers hack it together, flash it to the device and publish the sources, but don't bother with upstreaming and move on.

Same story as Android devices not getting updates two years after release.


But "no BIOS or ACPI", and requiring the kernel to support each individual board, sounds exactly like the problem is the ARM architecture in general. Until that's sorted it makes sense to be wary of ARM.


It's not a problem with ARM servers or vendors that care about building well designed ARM workstations.

It's a problem that's inherent to mobile computing and will likely never change without regulation, or without an open-standards device line somehow hitting it out of the park and setting new expectations a la PCs.

The problem is zero expectation of ever running anything other than the vendor supplied support package/image and how fast/cheap it is to just wire shit together instead of worrying about standards and interoperability with 3rd party integrators.


How so? The Steam Deck is an x86 mobile PC where everything (well, all the generic hardware, e.g. Wi-Fi and the GPU, IIRC) works out of the box.


When I say mobile, I mean ARM SoCs in the phone, embedded and IoT lineage, not so much full featured PCs in mobile form factor.


What is ACPI other than a DTB baked into the firmware/bootloader?

Any SBC could buy an extra flash chip and burn an outdated U-Boot with the manufacturer's DTB baked in. Then U-Boot would boot Linux, just like UEFI does, and Linux would read the firmware's fixed DTB, just like it reads x86 firmware's fixed ACPI tables.

But - cui bono?

You need drivers in your main OS either way. On x86 you are not generally relying on your EFI's drivers for storage, video or networking.

It's actually nice that you can go without, and have one less layer.


It is more or less like the Wi-Fi problem on laptops, but multiplied by the number of chips. In a way it's more of a Linux problem than an ARM problem.

At some point the "good" boards get enough support and the situation slowly improves.

We've reached the state where you don't need to spec-check a laptop if you want to run Linux on it; I hope the same will happen for ARM SBCs.


It's a decision by Linux about how to handle hardware in the ARM world, so it's a little in the middle.


It's the shape of the delivered artifact that's driven the way things are implemented in the ecosystem, not a really fundamental architecture difference.

The shape of historically delivered ARM artifacts has been embedded devices. Embedded devices usually work once in one specific configuration. The shape of historically delivered ARM Linux products is a Thing that boots and runs. This only requires a kernel that works on one single device in one single configuration.

The shape of historically delivered x86 artifacts is socketed processors that plug into a variety of motherboards with a variety of downstream hardware, and the shape of historically delivered x86 operating systems is floppies, CDs, or install media that is expected to work on any x86 machine.

As ARM moves out of this historical system, things improve; I believe that for example you could run the same aarch64 Linux kernel on Pi 2B 1.2+, 3, and 4, with either UEFI/ACPI or just different DTBs for each device, because the drivers for these devices are mainline-quality and capable of discovering the environment in which they are running at runtime.

People commonly point to ACPI+UEFI vs DeviceTree as causes for these differences, but I think this is wrong; these are symptoms, not causes, and are broadly Not The Problem. With properly constructed drivers you could load a different DTB for each device and achieve similar results as ACPI; it's just different formats (and different levels of complexity + dynamic behavior). In some ways ACPI is "superior" since it enables runtime dynamism (ie - power events or even keystrokes can trigger behavior changes) without driver knowledge, but in some ways it's worse since it's a complex bytecode system and usually full of weird bugs and edge cases, versus DTB where what you see is what you get.


This has often been the case in the past but the situation is much improved now.

For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.

There's even UEFI [1].

Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.

[0]: https://github.com/home-assistant/operating-system/releases

[1]: https://github.com/edk2-porting/edk2-rk3588


Yeah, but you can get an N100 on sale for about the same price, and it comes with a case, NVMe storage (way better than an SD card), a power supply, a proper cooling solution, and less maintenance…


The Orange Pi 5 Plus on its own should be much cheaper than an N100 system. Only when you add in those extras does the price even out. I bought mine in an overpriced bundle for 182€ a few months ago.

It supports NVMe SSDs same as an N100.

Maintenance is exactly the same; they both run mainline Linux.

Where the N100 perhaps wins is in performance.

Where the Orange Pi 5 Plus (and other RK3588-based boards) wins is in power usage, especially for always-on, low-utilization applications.


You can get an N100 system for $110 on sale. The price went up but I still see $135 on eBay now. However, YMMV because European prices are different.

For power, I don't know about the Orange Pi 5, but for many SBCs power was a mixed bag. I had pretty bad luck with random SBCs drawing way more power for random reasons and not putting devices into idle mode. Even the Raspberry Pi was pretty bad when it launched.

It's frustrating because it's hard to fix. With x64 you can often go into the BIOS and enable power modes, but that's not the case with ARM. For example, PCIe 4 can easily draw 2W+ when active. (The interface!)

See for example here:

https://github.com/Joshua-Riek/ubuntu-rockchip/issues/606

My N100 takes 6W and 8W (8 and 16GB models). If a Pi 5 takes 3W, that's not a large enough difference to matter, especially when it's so inconsistent.

Now, one place where I used to like the Pi Zero was GPIO access. However, I'm transitioning to the RP2350 as it's just better suited for that kind of work, easier to find and cheaper.


I have no idea what US prices are like but I put in a reasonable amount of effort and at least right now here in Europe, N100 and RK3588 prices are pretty similar for comparable packages (RAM, case, power etc.). One other thing to note is that the N100 is DDR4 while the RK3588 uses DDR5.

I never ran into that bug but I came to the Orange Pi 5 Plus in 2025, so there's a chance the issues were all worked out by the time I started using it.

Looking at a couple of reviews, the Orange Pi 5 Plus drew ~4W idle [0] while an N100 system drew ~10W [1].

1W over a year is 8.76kWh, which here costs ~$2. If those numbers hold (and I'm not saying they do necessarily but for the sake of argument) and with an estimated lifespan of 5 years, you might be looking at a TCO of $140 hardware + $40 power = $180 for an Orange Pi 5 vs. $140 hardware + $100 power = $240 for an N100. That would put an N100 at 33% more expensive. Even if it draws just 6W compared to 4W, that's $200 vs. $180, 11% more expensive.
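That back-of-the-envelope arithmetic can be sketched as follows; the electricity price, lifespan, power draws and hardware prices are all the assumptions stated above, not measurements:

```python
# TCO sketch: hardware price plus electricity over the device's lifespan.
KWH_PER_WATT_YEAR = 8.76   # 1 W drawn continuously for one year
PRICE_PER_KWH = 0.23       # assumed rate, ~$2 per watt-year as above
YEARS = 5                  # assumed lifespan

def tco(hardware_usd: float, watts: float) -> float:
    energy_usd = watts * KWH_PER_WATT_YEAR * YEARS * PRICE_PER_KWH
    return hardware_usd + energy_usd

print(round(tco(140, 4)))   # Orange Pi 5 Plus at ~4 W idle
print(round(tco(140, 10)))  # N100 system at ~10 W idle
```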

I'm not saying the Orange Pi 5 Plus is clearly better but I don't think it's as simple as one might think.

[0]: https://magazinmehatronika.com/en/orange-pi-5-plus-review/

[1]: https://www.servethehome.com/fanless-intel-n100-firewall-and...


Maybe this was the case a few years ago, but I would argue the landscape has changed a lot since then - with many more distro options for Arm64 devices.


So, I agree, but less than I did a few months ago. I purchased an Orange Pi 5 Ultra and was put off by the pre-built image and custom kernel. The “patch” for the provided kernel was inscrutable as well. Now I’m running a vanilla 6.18 kernel on vanilla U-Boot firmware (still a binary blob required to build that, though) with a vanilla install of Debian. That support includes the NPU, GPU, 2.5G Ethernet, and NVMe root/boot. I don’t have performance numbers, but it’s definitely fast enough for what I use it for.


Interesting, where did you get an image with a 6.18 kernel that has NPU support?


NPU support in general seems to be moving pretty fast, it shares a lot of code with the graphics drivers.


I started with the published Debian image and then just built my own... and then installed onto an NVMe SSD.


No, it's definitely a problem with the ARM architecture, specifically that it's standard to make black-box SoCs that nobody can write drivers for, and the manufacturer gives you one binary version and then fucks off forever. It's a problem with the ARM ecosystem as a whole for literally every board (except the Raspberry Pi), likely stemming from the bulk of ARM being throwaway smartphones with proprietary designs.

If ARM cannot outdo x86 on power draw anymore then it really is entirely pointless to use it because you're trading off a lot, and it's basically guaranteed that the board will be a useless brick a few years down the line.


There's also a risk of your DeviceTree getting pruned from the kernel in X years when it's decided that "no one uses that board anymore", which is something that's happened to several boards I bought in the 2010's, but not something that's happened to any PC I've ever owned.


It’s weirded me out for a long time that we’ve gone from the ‘probe the hardware in a standard way and automatically load the appropriate drivers at boot’ ideal we seemed to have settled on for computers in the 2000s - and still use on x86 - back to ‘we’ll write a specific description file for every configuration of hardware’ for ARM.


Isn't this one of the benefits of ACPI? That the kernel asks the motherboard for the hardware information that on ARM SoCs is stored in the device tree?


Yep


That makes sense, as the Pi is as easy as x86 at this point. I almost never have to compile from scratch.

I'm not a compiler expert... But it seems each ARM64 board needs its own custom kernel support, but once that is done, it can support anything compiled to ARM64 as a general target? Or will we still need to have separate builds for RPi, for this board, etc?


Little bit of both. The Pi still uses a sort of unique boot sequence due to its heritage. Most devices will have the CPU load the bootloader and then have the OS bring up the GPU. The Pi sort of inverts this, having the GPU lead the charge with the CPU held at reset until after the GPU has finished its boot sequence.

Once you get into the CPU, though, the AArch64 registers become more standardized. You still have drivers and such to worry about and differing memory offsets for the peripherals - but since you have the kernel running, it’s easier to kind of poke around until you find them. The Pi 5 added some complexity to this with the RP1 south bridge, which adds another layer of abstraction.

Hopefully that all makes sense. Basically the Pi itself is backwards, while everything else should conform. It’s not ARM-specific; it’s just how the Pi does things.


Apart from very rare cases, this will run any linux-arm64 binary.


For the Pi you have to rely on the manufacturer's image too. It does not run a vanilla arm64 distro.


With this board the SoC is the main problem. CIX has been working on mainlining that stuff for over a year, and we still don't have GPU and NPU support in mainline.


I still have to run my own build of the kernel on the OPi 5+, so that unfortunately tracks. At least I don't have to write the drivers this decade.


Why? I'm running an Orange Pi 5+ with a fully generic aarch64 image of Home Assistant OS and it works great. Is there some particular feature that doesn't work on mainline?


For server use you can live with generic images. When you want stuff like HDMI audio out and all, generic images usually won't do.


I use it as a desktop, so I need HDMI and actual video drivers, which were added to mainline like this year.


> The problem isn't support for the ARM architecture in general,

Of course it is not. That's why almost every ARM board comes with its own distro, and sometimes its own bootloader and kernel version. Because "it is supported". /s


> Neo brokers offering highly leveraged index funds securities

These, eToro and the like, aren't "brokers" so much as online betting platforms for the stock market.

A typical broker like Interactive Brokers, Charles Schwab, etc. acts as a gateway to the market, where other traders act as counterparties, and is bound by strict regulation.

These "neo brokers", as you call them, don't. Those "securities" you're buying are offered by the broker, at a price set by the broker; the broker may be the counterparty, and they can't be transferred. Just like a casino.

This is all laid out in the terms and conditions for anybody who cares to read them, e.g. [0], sections 7.1 and 17.

If you want to gamble based on stock prices using leverage, at least do it right: use derivatives. They're leveraged but thoroughly regulated and traded on central exchanges.

[0]: https://www.etoro.com/wp-content/uploads/2025/10/eToro-EU-Te...


I was thinking of platforms like Trade Republic, and it's my understanding that they are backed by a licensed bank and what you trade are indeed regulated derivatives. Have I been misinformed?


Hadn't heard of that one, I don't immediately see anything that makes it seem as dodgy as eToro. Do you have a link to any of these highly-leveraged securities they're trading?


This is a 73000x security on silver: https://www.ls-tc.de/en/turbo/4244036

It's got an ISIN and all that.

L+S is the exchange where Trade Republic trades: https://support.traderepublic.com/en-nl/705-On-which-trading...
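To put that multiplier in perspective: a long turbo's leverage is roughly the spot price divided by its distance to the knock-out barrier, so a barrier a fraction of a cent below spot yields absurd numbers. A rough sketch with made-up silver prices (these are not the actual terms of that ISIN):

```python
# Rough leverage of a long turbo / knock-out certificate:
# leverage ≈ spot / (spot - knockout). The closer the barrier sits to the
# current price, the more extreme the multiplier gets.
def turbo_leverage(spot, knockout):
    return spot / (spot - knockout)

# Hypothetical illustration, not the real product terms:
print(round(turbo_leverage(30.0, 27.0)))     # barrier $3 below spot -> ~10x
print(round(turbo_leverage(30.0, 29.9996)))  # barrier $0.0004 below -> ~75000x
```

The flip side is that the same proximity to the barrier means a tiny move in the underlying knocks the product out entirely.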


Ah, I see. That's indeed completely separate from what I was talking about. I didn't know such securities/exchanges existed.


Is it really important that people be educated in the reading of analog clocks?

I think it's clear to most people that digital clocks are easier to read - they're numbers that you read the same as any other numbers; they can be read at a glance without special training.

Analog clocks can also be read at a glance but require the reader to acquire a non-transferable skill.

When I was growing up (90's, 00's), digital clocks weren't yet ubiquitous the way they are now, so I can understand why analog clocks were taught to me as a child, but in 2025 I suspect the average adult finds a digital clock within their line of sight ~20x more frequently than any kind of analog clock.

If you read this and still think it's important that children learn how to read analog clocks, I'd like to know: assuming digital clocks continue their growth and analog clocks become less and less common, when exactly can we stop teaching analog clocks?

In a similar vein, if there's anyone around here who learned the abacus in school, I'm curious what you think of this. Is the analog clock the abacus, waiting to be phased out in favour of the calculator, or is there another way to look at it?

