alexrp's comments | Hacker News

Most people would be better off waiting for the multiple RVA23 boards that are supposed to come out this year, at least if they don't want to be stuck running custom vendor distros. "RVA23 except V" at this price point and at this point in time is a pretty bad value proposition.

It's honestly a bit hard to understand why they bothered with this one. No hate for the Milk-V folks; I have 4 Jupiters sitting next to me running in Zig's CI. But hopefully they'll have something RVA23-compliant out soon (SpacemiT K3?).


> But hopefully they'll have something RVA23-compliant out soon (SpacemiT K3?).

A handful of developers already have access to SpacemiT K3 hardware, which is indeed RVA23 compliant and already runs Ubuntu 26.04.

geekbench: https://browser.geekbench.com/v6/cpu/16145076

rvv-bench: https://camel-cdr.github.io/rvv-bench-results/spacemit_x100/... (which has instruction throughput measurements and more)


This is around the performance of a Core 2 Duo, if I understand correctly?

Single-core performance is roughly midway between the Pi 4's Cortex-A72 and the Pi 5's Cortex-A76.

It's slightly faster than a 3GHz Core 2 Duo in scalar single-threaded performance, but it has 8 cores instead of 2, plus more SIMD performance. There are also 8 additional SpacemiT-A100 cores with 1024-bit-wide vectors, which are more like an additional accelerator.

The geekbench score is a bit lower than it should be, because at least three benchmarks are still missing SIMD acceleration on RISC-V (File Compression, Asset Compression, Ray Tracer), and the HTML5 browser test is also missing optimizations.

I'd estimate it should be able to get to the 500 range with comparable optimization to other architectures.

The Milk-V Titan mentioned in the original post is actually slightly faster in scalar performance, but has no RISC-V Vector support at all, which causes its geekbench score to be way lower.


Do you happen to know how one accesses/uses those A100 cores?

No.

The problem is that you can't migrate threads between cores with different vector length.

The currently installed Ubuntu 26.04 image lists 16 cores in htop, but you can only run applications on the first 8 (e.g. taskset -c 10 fails). If you query what's running on the A100 cores, you see things like kworker processes.

I suspect that it should be possible to write a custom kernel module that runs on the A100s with the current kernel, but I'm not sure.

I expect it will definitely be possible to boot an OS only on the 8 A100 cores.

We'll have to see whether they manage to figure out how to add support for explicitly pinning user-mode processes to those cores.

The ideal configuration would be to have everything run only on the X100s, but with an opt-in mechanism to run a program only on an A100 core.
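
For illustration, here's a minimal C sketch of what that opt-in mechanism could look like from user space, assuming the standard Linux sched_setaffinity interface (core index 10 as a stand-in for an A100 core is purely hypothetical):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(10, &set); /* hypothetical A100 core index */

        /* Pin the calling process (pid 0) to that core. On the current
           image this fails, just like `taskset -c 10` does. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("running on an A100 core\n");
        return 0;
    }

Until the kernel can cope with migrating vector state between cores with different vector lengths, you'd expect a call like this to be rejected rather than succeed.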


That’s actually decent, thanks.

Something is odd here: the Core 2 Duo only has up to SSE4.1, while the RVA23 instruction set is roughly analogous to x86-64-v3. I find it hard to believe that the SpacemiT K3 matched a Core 2 Duo single-core score while leveraging those newer instructions.

To wit, the Geekbench 6.5.0 RISC-V preview has 3 files, 'geekbench6', 'geekbench_riscv64', and 'geekbench_rv64gcv', which are presumably the benchmark executables, each targeting a different instruction set. This makes the score an unreliable narrator of performance, as someone could have run one of the other binaries and the posted score would not be genuine. And that's on top of the perennial remark that the benchmark(s) may simply not be optimized for RISC-V.


If it's anything like the K1, I wouldn't be surprised if Core 2 performance was on the table. The released specs are ~Sandy Bridge/Haswell-like, but those architectures were made by (at the time) the top CPU manufacturer and were carefully balanced to maximize performance while minimizing transistors. SpacemiT is playing on easy mode (they're making a chip on a ~2-4x smaller process node and aren't pioneering bleeding-edge techniques), but balancing an out-of-order CPU is still tough, and it's totally possible to lose 50% of theoretical IPC if you don't get the memory bandwidth, cache hierarchy, scheduling, etc. right.

Cache differences add another layer here, if they're not the whole story. The device tree patches for the K3 show 2 clusters of 4 cores with 4MB of shared L2 per cluster. The Core 2 Duo P8400 has 3MB of L2 shared between 2 cores, while Sandy Bridge through Haswell have per-core L2 and a shared L3.

I don't think you'll be able to get away from custom distros even with RVA23. It solves the problem of binary compatibility - everything compiled for RVA23 should be pretty portable at the instruction level (won't help with the usual glibc nonsense of course).

But RVA23 doesn't help with the hardware layer - it's going to be exactly the same as ARM SBCs where there's no hardware discovery mechanism and everything has to be hard-coded in the Linux device tree. You still need a custom distro for Raspberry Pi for example.

I believe there has been some progress in getting RISC-V ACPI support, and there's at least the intent of making mconfigptr do something useful - for a while there was a "unified discovery" task group, but it seems like there just wasn't enough manpower behind it and it disbanded.

https://github.com/riscvarchive/configuration-structure/blob...

https://riscv.atlassian.net/browse/RVG-50


> You still need a custom distro for Raspberry Pi for example.

Are you sure that's still the case? I just checked the Raspberry Pi Imager and I see several "stock" distro options that aren't Raspbian.

Regardless, I take your point that we're reliant on vendors actually doing the upstreaming work for device trees (and drivers). But so far the recognizable players in the RISC-V space do all(?) seem to be doing that, so for now I remain hopeful that we can avoid the Arm mess.


I'm not totally sure, but I would imagine those stock distros actually have dedicated packages for Raspberry Pi kernel images.

See this for example: https://www.phoronix.com/news/Raspberry-Pi-5-Ethernet-Linux

If you look at the patch series, it's directly adding information about the address of the Ethernet device. That's the sort of thing that would be discovered automatically in the x86 world; it wouldn't need to be hard-coded into the kernel for each individual supported board.


I feel this is becoming a bit of a tech urban legend, like "ZFS requires ECC".

As far as I understand, the RVA23 requirement is an Ubuntu thing only, and only for the current non-LTS and future releases. The current LTS doesn't have such a requirement, and neither do other distributions that support riscv64, such as Fedora and Debian.

So no, you are not stuck running custom vendor distros because of this, but rather because of all the weird device drivers and boot systems that have no mainline support.


I'm fairly sure I recall Fedora folks signaling that they intend to move to RVA23 as soon as hardware becomes generally available.

It is of course possible that Debian sticks with RV64GC for the long term, but I seriously doubt it. It's just too much performance to leave on the table for a relatively new port, especially when RVA23 will (very) soon be the expected baseline for general-purpose RISC-V systems.


As someone from the Fedora/RISC-V project, it'll depend on what our users want. We cannot support both RV64GC and RVA23 (because we don't have the build or software infra to do it) so we have to be careful when we move. Doing something like building with RV64GC generally but having targeted optimizations - perhaps two kernel variants and some libraries - might be possible, but also isn't easy.

Things are different for CentOS / RHEL where we'll be able to move to RVA23 (and beyond) much more aggressively.


First things first: thank you for your work.

That being said: does it make sense to keep a new but low-performance platform alive? As the platform is new and likely doesn't have many users, wouldn't it make sense to nudge (as in "gently push") users towards a higher-performance platform?

Chances are the low-performance platform will die anyway, and Fedora will not be exploiting the full offering of the high-performance platform.


It's about what users think in our forums: https://discussion.fedoraproject.org/tag/risc-v-sig

I'm not completely sure, but I suspect Fedora will stick to the current baseline for quite some time.

But the baseline is quite minimal. It's biased towards instructions that can be emulated efficiently in portable C code. I'm not sure why anyone would pin an enterprise distribution to that.

On the other hand, even RVA23 is quite poor at signed overflow checking. Like MIPS before it, RISC-V is a bet that we're going to write software in C-like languages for a long time.


> On the other hand, even RVA23 is quite poor at signed overflow checking

When I tried to measure the impact of -ftrapv in RVA23 and armv9, it was roughly the same: https://news.ycombinator.com/item?id=46228597#46250569

reminder:

    unsigned 64-bit:
    add: RV: add+bltu       Arm: adds+bcc
    sub: RV: sub+bltu       Arm: subs+bcs
    mul: RV: mulhu+mul+beqz Arm: umulh+mul+cbz
    
    unsigned 32-bit:
    add: RV: addw+bgeu     Arm: adds+bcc
    sub: RV: subw+bgeu     Arm: subs+bcs
    mul: RV: mul+slli+beqz Arm: umull+cmp lsr 32

    signed 64-bit:
    add: RV: add+slt+slti+beq  Arm: adds+bvc
    sub: RV: sub+slt+slti+beq  Arm: subs+bvs
    mul: RV: mulh+mul+srai+beq Arm: smulh+mul+cmp asr 63
    
    signed 32-bit:
    add: RV: addw+add+beq   Arm: adds+bvc
    sub: RV: subw+sub+beq   Arm: subs+bvs
    mul: RV: mul+sext.w+beq Arm: smull+asr+cmp asr 31
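
For context, these sequences are what compilers emit for checked arithmetic. A rough C equivalent using the GCC/Clang checked-arithmetic builtins (a sketch of the shape of the test, not the exact -ftrapv codegen) looks like:

    #include <stdint.h>
    #include <stdbool.h>

    /* Signed 64-bit add with overflow check: lowered to add+slt+slti+beq
       on RV64 and adds plus a branch on the V flag on AArch64. */
    bool add_checked(int64_t a, int64_t b, int64_t *out) {
        return __builtin_add_overflow(a, b, out); /* true on overflow */
    }

    /* Signed 64-bit multiply: lowered to mulh+mul+srai+beq on RV64 and
       smulh+mul+cmp asr 63 on AArch64. */
    bool mul_checked(int64_t a, int64_t b, int64_t *out) {
        return __builtin_mul_overflow(a, b, out);
    }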

> On the other hand, even RVA23 is quite poor at signed overflow checking.

On the other hand it avoids integer flags which is nice. I doubt it makes a measurable performance impact either way on modern OoO CPUs. There's going to be no data dependence on the extra instructions needed to calculate overflow except for the branch, which will be predicted not-taken, so the other instructions after it will basically always run speculatively in parallel with the overflow-checking instructions.


It's nice for a C simulator to avoid condition codes. It's not so nice if you want consistent overflow checks (e.g., for automatically overflowing from fixnums to bignums).

Even with XNOR (which isn't even part of RVA23, if I recall correctly), the sequence for doing an overflow check is quite messy. On AArch64 and x86-64, it's just the operation followed by a conditional jump: https://godbolt.org/z/968Eb1dh1


Non-flag-based overflow checks are still pretty cheap. The overflow check is only 1 extra instruction for unsigned (both add and multiply), and 3-4 extra for signed overflow (see https://godbolt.org/z/nq1nb5Whr for details). It's also worth noting that in many cases the overflow checks can be removed or simplified by the compiler entirely (e.g. if you're adding 1, or know the sign of one of the operands, etc.). As such, the extra couple of instructions are likely worthwhile if it makes designing a wider core easier. Signed overflow instructions would be reasonable to add, but it's not like modern high-performance cores are bottlenecked by scalar instructions that don't touch memory anyway.
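
To make the flag-free approach concrete, here's my reading of the signed-add check as portable C (a sketch, not literal codegen):

    #include <stdint.h>
    #include <stdbool.h>

    /* Signed addition overflowed iff the sign relation between the
       operands and the wrapped sum is inconsistent: (b < 0) != (sum < a).
       This maps onto the add+slt+slti+beq sequence on RV64, with no
       condition codes needed. */
    bool add_overflows(int64_t a, int64_t b) {
        /* Wrap via unsigned arithmetic to avoid signed-overflow UB. */
        int64_t sum = (int64_t)((uint64_t)a + (uint64_t)b);
        return (b < 0) != (sum < a);
    }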

Our CI workflow literally just invokes a plain old shell script (which is runnable outside CI). We really don't need an overcomplicated professional CI/CD solution.

One of the nice things about switching to Forgejo Actions is that the runner is lightweight, fast, and reliable - none of which I can say for the GitHub Actions runner. But even then, it's still more bloated than we'd ideally like; we don't need all the complexity of the YAML workflow syntax and Node.js-based actions. It'd also be cool for the CI system to integrate with https://codeberg.org/mlugg/robust-jobserver which the Zig compiler and build system will soon start speaking.

So if anything, we're likely to just roll our own runner in the future and make it talk to the Forgejo Actions endpoints.


> The reason they move to a lesser known Git provider sounds more like a marketing stunt.

We had technical problems that GitHub had no interest in solving, and lots of small frustrations with the platform built up over years.

Jumping from one enshittified profit-driven platform to another profit-driven platform would just mean we'd set ourselves up for another enshittification -> migration cycle later down the line.

No stunt here.


Well, that explains a lot, because I thought you guys moved due to their direction, which sounded more like a political act.

Btw why not GitLab?


Worth noting that LLVM has AVR and MSP430 backends, so there's no particular resistance to 8-bit/16-bit targets.


Oh, thanks for the correction. I couldn't find a comprehensive list of backends (which is weird), and the lists I did find only included 16+ bit targets.


Hug of death followed by a DDoS. At the time of me writing this, it loads instantly again.


As I pointed out in a different comment, even IBM have to maintain a GitHub Actions runner fork with s390x support because upstream just cannot even be bothered to accept the relevant patches: https://github.com/uweigand/runner

If IBM cannot get Microsoft to work with them on something so small but impactful, there's no chance we can.

> Personally - I think GitHub is a cultural artifact now. Of the entire planet. Hackers and curious minds from Japan to Alaska and everything in-between flock to GitHub.

And it's in the hands of a for-profit company pushing LLM nonsense. That should be alarming! Let's instead encourage people to use platforms managed by non-profits.


> obscure OS not being supported

Believe it or not, there are platforms outside of the big 3.

The GitHub Actions runner does not work on FreeBSD, NetBSD, OpenBSD, and illumos, all of which are operating systems we either have existing support for, or intend to start supporting properly soon. (We already have FreeBSD CI; machines for the other 3 are arriving at my place tomorrow as it happens.)

And that's ignoring CPU architectures; the upstream GitHub Actions runner only supports x86 and aarch64. We had to maintain a fork that adds support for all the other architectures we care about such as riscv, loongarch, s390x, etc. We will also likely be adding mips64 and powerpc64 to the mix in the future.

Even IBM have to maintain an s390x fork because Microsoft can't even be bothered to accept PRs that add more platforms: https://github.com/uweigand/runner


> We already have FreeBSD CI; machines for the other 3 are arriving at my place tomorrow as it happens.

That's great. I hope it works out, and you have CI for NetBSD, OpenBSD, and illumos, too.

Go's support for NetBSD has been a big boon to the more casual NetBSD user who isn't going to maintain a port. It means a random Go open-source project you use probably works on NetBSD already, or if it doesn't, it can be fixed upstream. Maybe Zig could play a similar role.

It's a shame GitHub doesn't have native CI even for FreeBSD on x86-64. I can see the economic case against it, of course. That said, the third-party Cross-Platform GitHub Action (https://github.com/cross-platform-actions/action) has made Free/Net/OpenBSD CI practical for me. I have used it in many projects. The developer is currently working on OmniOS support in https://github.com/cross-platform-actions/omnios-builder.


> Go's support for NetBSD has been a big boon to the more casual NetBSD user who isn't going to maintain a port. It means a random Go open-source project you use probably works on NetBSD already, or if it doesn't, it can be fixed upstream. Maybe Zig could play a similar role.

In fact, we do already have cross-compilation support for NetBSD (and FreeBSD). But we currently only "test" NetBSD by building the language behavior tests and standard library tests for it on Linux, i.e. we don't actually run them, nor do we build the compiler itself for NetBSD. Native CI machines will allow us to fill that gap.

As it happens, Go's cross-compilation support does indeed make our lives easier for provisioning CI machines since we can build the Forgejo Runner for all of them from one machine: https://codeberg.org/ziglang/runner/releases/tag/v12.0.0


> So much vague outrage over nothing.

So you just chose to ignore the technical problems we have with GitHub Actions and then say there are no problems. That's certainly a take.

> That CI system created by so called monkeys is the one of the best free CI service in the world.

We self-host all our CI machines so the "free" hosted runners have no relevance here.

> Not everyone has the millions of dollars like Zig Foundation to create their own CI servers.

We don't have "millions of dollars". If only!

I'd also note that we spend our money very efficiently; most of our CI machines are consumer-grade hardware hosted in team members' homes. We don't just throw endless amounts of money at cloud providers.

> After that they appreciate GitHub Sponsors, but say it is now a complete liability just because a project leader left. What are the actual changes? Any new rule? But no, it is now a "liability" and we should accept it.

GitHub Sponsors is a liability because Microsoft can increase their cut at any time, or even axe it outright if they don't think it's profitable for them anymore. This risk is very real considering that, as Andrew pointed out, the feature has been neglected for years. It is objectively less risky for us to have donors use a platform like Every.org.


Can’t any donation platform you don’t fully control cut you off at any point?

What exactly is different about GitHub sponsors here?


In theory sure, but you have to evaluate how likely it is.

Some typical dynamics:

Big org platform -> exposed to risk, as you are not a significant addition to their bottom line

Small donation platform -> Can be easily bullied by payment processors to "derisk"

---

every.org is a bit special, as it only lists 501c nonprofits - which the Zig Foundation is - and AFAIK has a decent track record. Most other open source projects don't clear that bar.


We always have the option of exiting Codeberg to a self-hosted Forgejo instance if that should become necessary for some reason. (Not that I expect it will, considering Codeberg is a non-profit.)

We do self-host all our CI machines.


But when you exit


It will be less controversial than the standard library changes that are coming in the future.


> Forgejo is more busy managing ideals, than creating software.

Can't say I agree with this point. Zig has been trying out Forgejo/Codeberg as an alternative to GitHub, and about two months into the experiment, almost all of our technical concerns with Forgejo (and Forgejo Actions) have been addressed, with the only straggler being a UI bug related to the Cancel button in the Actions infrastructure (which has a WIP PR open, and which also has a straightforward workaround).

I can't speak to the platforms themselves, but in regards to their CI systems, it looks to me like the Forgejo Actions runner sees more development than the Gitea act_runner. For example, Forgejo gained support for concurrency groups recently, which to my knowledge are still not supported in Gitea.

