
Except deciding whether an implementation exists or not is itself not a tractable problem in the general case.

A fundamental problem is that program verification is intractable in the computational complexity sense. The question of whether a program satisfies a spec or not is called in the literature the model checking problem (not to be confused with model checkers, which are various algorithms for automatically solving some model checking problems). In the worst case, there is no faster way to determine whether a program satisfies a spec or not than explicitly testing each of its reachable states.
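
To make "explicitly testing each of its reachable states" concrete, here's a minimal sketch of what an explicit-state check does (in Rust; the two-counter "system" and the invariant are made up purely for illustration): enumerate every reachable state with a BFS and check the property in each one.

    use std::collections::{HashSet, VecDeque};

    // A toy "program": two counters that can each be incremented modulo N.
    type State = (u8, u8);
    const N: u8 = 8;

    fn successors(s: State) -> Vec<State> {
        let (a, b) = s;
        vec![((a + 1) % N, b), (a, (b + 1) % N)]
    }

    // The "spec": an invariant that must hold in every reachable state
    // (trivially true here; a real spec would say something meaningful).
    fn invariant(s: State) -> bool {
        s.0 + s.1 < 2 * N
    }

    // Explicit-state checking: BFS over the entire reachable state space.
    fn check() -> Result<usize, State> {
        let init: State = (0, 0);
        let mut seen: HashSet<State> = HashSet::new();
        let mut queue: VecDeque<State> = VecDeque::new();
        seen.insert(init);
        queue.push_back(init);
        while let Some(s) = queue.pop_front() {
            if !invariant(s) {
                return Err(s); // counterexample state
            }
            for t in successors(s) {
                if seen.insert(t) {
                    queue.push_back(t);
                }
            }
        }
        Ok(seen.len()) // the invariant holds in every reachable state
    }

    fn main() {
        match check() {
            Ok(n) => println!("invariant holds in all {} reachable states", n),
            Err(s) => println!("counterexample: {:?}", s),
        }
    }

With k variables of b bits each there can be up to 2^(kb) reachable states, which is where the worst-case intractability comes from.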

The question is, of course, what portion of programs are much easier than the worst case (to the point of making verification tractable). That is not known, but the results are not necessarily encouraging. Programs that have been deductively verified in proof assistants were not only very small, and not only written with proofs in mind, but were also restricted to using simpler and less efficient algorithms to make the proofs doable. They tend to require between 10 and 1000 lines of proof per line of code.

(an old blog post of mine links to some important results: https://pron.github.io/posts/correctness-and-complexity)

There is a belief that programs that people write and more-or-less work should be tractably provable, as that's what allowed writing them in the first place (i.e. the authors must have had some vague proof of correctness in mind when writing them). I don't think this argument is true (https://pron.github.io/posts/people-dont-write-programs), and we use formal methods precisely when we want to close the gap between working more-or-less and definitely always working.

But even if verifying some program is tractable, it could still take a long time between iterations. If it's reasonable to expect that it would take an LLM no less than a month to prove a correct program, there's no point in stopping it before then. So a "practical" approach could be: write a program, try proving it for a month, and if you haven't finished in a month, try again. That, of course, could mean waiting, say, six months before deciding whether or not it's likely to ultimately succeed. Nevertheless, I expect there will be cases where writing the program and the proof would have taken a team of humans 800 years, and an LLM could do it in 80.


The question is what we mean by "a fast language". We could take it to be the speed of the fastest code that a performance expert in that language, with no resource constraints, could write. Or we can restrict it to "idiomatic" code. Or we can say that a fast language is the one where an average programmer is most likely to produce fast code with a given budget (in which case probably none of the languages mentioned here are among the fastest).

It's compilers and compiler optimizations that make code run fast. The real question is whether the Rust language, and the richer memory semantics it has, give the Rust compiler a bit more context for optimizing that the C compiler wouldn't have unless you hand-optimize your code.
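
For what it's worth, the standard concrete example of that extra context is aliasing information. A minimal sketch (illustrative only):

    // Because `dst` is a `&mut` borrow and `src` a shared borrow, Rust's rules
    // say they cannot overlap, so the compiler is free to keep `*src` in a
    // register across the loop. The equivalent C function would need `restrict`
    // on its pointers to promise the same thing; without it, the compiler must
    // assume every store through `dst` may have changed `*src` and re-load it.
    fn add_to_all(dst: &mut [i64], src: &i64) {
        for x in dst.iter_mut() {
            *x += *src;
        }
    }

    fn main() {
        let mut v = vec![1, 2, 3];
        let k = 10;
        add_to_all(&mut v, &k);
        assert_eq!(v, vec![11, 12, 13]);
    }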

If you do hand optimize your code, all bets are off. With both languages. But I think the notion that the Rust compiler has more context for optimizing than the C compiler is maybe not as controversial as the notion that language X is better/faster than language Y. Ultimately, producing fast/optimal code in C kind of is the whole point of C. And there aren't really any hacks you can do in C that you can't do in Rust, or vice versa. So, it would be hard to make the case that Rust is slower than C or the other way around.

However, there have been a few rewrites of popular unix tools in Rust that benchmark a bit faster than their C equivalents. Could those be optimized in C? Probably, but they just haven't been. So there is a case to be made that maybe Rust code is a bit easier to make fast than C code.


> It's compilers and compiler optimizations that make code run fast

Well, then in many cases we are talking about LLVM vs LLVM.

> Ultimately, producing fast/optimal code in C kind of is the whole point of C

Mostly a nitpick, but I'm not convinced that's true. The performance queen has been traditionally C++. In C projects it's not rare to see very suboptimal design choices mandated by the language's very low expressivity (e.g. no multi-threading, sticking to an easier data structure, etc).


Compilers are only as good as the semantics you give them. C and C++ both have some pretty bad semantics in many places that heavily encourage inefficient coding patterns.

The compiler backend yes. But there probably is a lot of work happening elsewhere in the compiler tools.

> It's compilers and compiler optimizations that make code run fast.

Compiler optimisations certainly play a large role, but they're not the only thing. Tracing-moving garbage collectors can trade off CPU usage for memory footprint and allow you to shift costs between them, so depending on the relative cost of CPU and RAM, you could gain speed (throughput) in exchange for RAM at a favourable price.

Arenas also offer a similar tradeoff knob, but they come with a higher development/evolution price tag.
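
To make the knob concrete, here's a minimal, index-based arena sketch in Rust (not any particular library's API; real arenas, like Zig's std.heap.ArenaAllocator or Rust's bumpalo, hand out pointers/references and are more general). The cost model is the point: allocation is a cheap append, individual objects are never freed, and reclamation happens in bulk.

    // A toy arena for a single type. Handles are indices rather than pointers
    // so the sketch stays in safe Rust.
    struct Arena<T> {
        items: Vec<T>,
    }

    #[derive(Clone, Copy)]
    struct Handle(usize);

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }
        fn alloc(&mut self, value: T) -> Handle {
            self.items.push(value); // O(1) amortised, no free list, no header
            Handle(self.items.len() - 1)
        }
        fn get(&self, h: Handle) -> &T {
            &self.items[h.0]
        }
        // The whole point: one cheap call frees everything allocated so far.
        fn reset(&mut self) {
            self.items.clear();
        }
    }

    fn main() {
        let mut arena: Arena<String> = Arena::new();
        let a = arena.alloc("per-request data".to_string());
        let b = arena.alloc("more per-request data".to_string());
        println!("{} / {}", arena.get(a), arena.get(b));
        arena.reset(); // end of request: everything goes at once
    }

The development/evolution price is that all the code touching arena-allocated data has to agree on when the bulk free happens, which is exactly the kind of global invariant that's cheap on day one and costly to maintain later.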


It might be a minute or two before we get to see the words "favourable price" anywhere near the word RAM again.

> we can say that a fast language is the one where an average programmer is most likely to produce fast code with a given budget

I'd say most people use this definition, with the caveat that there's no official "average programmer", and everyone has different standards.


Right, but if we assume that programmers' compensation is statistically correlated with their skill, then we can drop "average" and just talk about budget.

That seems like a wild assumption to make.

Statistically? I don't think it's that wild.

If you prefer it, salaries correlate with years of experience, and the latter surely correlates with skills, right?

(No, this doesn't mean that every dev with 10 years of XP is better than one with 3 years of XP, but it's definitely a strong correlation)


I think when designing a language, and a set of libraries for it, the designer has an idea of how code for said language should be written, what 'idiomatic' code looks like.

In that context, the designer can reason about how code written that way should perform.

So I think this is a meaningful question for a language designer, which makes it a meaningful question for the users as well, when phrased like this:

'How does idiomatic code (as imagined by the language creators) perform in language X vs Y?'


These are the languages an "average programmer" would use. What language are you thinking of?

I may be biased, but I think that if you have a budget that's reasonable in the industry for some project size and includes not only the initial development but also maintenance and evolution over the software's lifetime, especially when it's not small (say over 200KLOC), and you want to choose the language that would give you the fastest outcome, you will not get a faster program than if you chose Java. To get a faster program in any language, if possible, would require a significantly higher budget (especially for the maintenance and evolution).

Do you think C# / .NET doesn't stack up in terms of budget, or not stack up in terms of runtime speed?

It's probably in the same ballpark. To me, the contenders for "the fastest language" include Java, C#, and Go and not many more.

Ah thanks. That clarifies things.


I don't think so, but it may not be far behind. More importantly, though, I'm fairly confident it won't be Assembly, or C, or C++, or Rust, or Zig, but also not Python, or TS/JS. The candidates would most likely include Java, C#, and Go.

Purely by the numbers, an "average programmer" is much more likely to use Javascript, Python, or Java. The native languages have been a bit of a niche field since the late 90's (i.e. heavily slanted towards OS, embedded, and gamedev folks)

That's a very narrow way of looking at things. ATS has a much stronger "deterministic safety net" than Rust, yet the reason to use Rust over ATS is that "fighting the compiler" is easier in Rust than in ATS. On the other hand, if any cost were worth whatever level of safety Rust offers for any project, then Rust wouldn't exist, because there are far more popular languages with equal (or better) safety. So Rust's design itself is an admission that it is not true that 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).

Safety has some value that isn't infinite, and a cost that isn't zero. There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html) and Zig offers spatial safety. The question is always what you're paying and what you're getting in return. There doesn't appear to be a universal right answer. For some projects it may be worth it to pay for more safety, and for other it may be better to pay for something else.
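
To make the spatial/temporal distinction concrete (in Rust only because both halves are easy to show in one snippet): a spatial violation is an access outside an object's bounds, which both safe Rust and Zig (in its safe build modes) turn into a defined runtime error rather than a silent out-of-bounds access; a temporal violation is an access after the object's lifetime has ended, which Rust rejects at compile time and Zig does not rule out in the language.

    fn main() {
        // Spatial safety: an out-of-bounds index is caught by a bounds check
        // instead of silently reading adjacent memory.
        let v = vec![1, 2, 3];
        let i = 10;
        match v.get(i) {
            Some(x) => println!("v[{}] = {}", i, x),
            None => println!("index {} is out of bounds (caught)", i),
        }

        // Temporal safety: a use-after-free-style error. The commented-out
        // code below would not compile, because `r` would outlive the vector
        // it borrows from.
        //
        // let r;
        // {
        //     let w = vec![4, 5, 6];
        //     r = &w[0];
        // } // `w` dropped here
        // println!("{}", r); // error[E0597]: `w` does not live long enough
    }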


You’re changing the argument. The point wasn’t whether more safety is “worth it”, but that comparing ease while ignoring which invariants are enforced is misleading. Zig can feel simpler because it encodes fewer guarantees. I’m not saying one approach is better, only that this comparison shifts the goalposts.

Then we're in agreement. Both languages give you something that may be important, but it has a price.

You're changing the argument again. I'm not in agreement with your statement.

Imo "safety" in safe Rust is higher than it is in more popular languages.

Data races, type state pattern, lack of nulls, ...


This is comparing what Rust has and other languages don't without also doing the opposite. For example, Java doesn't enforce data-race freedom, but its data races are safe, which means you can write algorithms with benign races safely (which are very useful in concurrent programming [1]), while in Rust that requires unsafe. Rust's protection against memory leaks that can cause a panic is also weaker, as is Rust's ability to recover from panics in general. Java is now in the process of eliminating the unsafe escape hatch altogether except for FFI. Rust is nowhere near that. I.e. sometimes safe Rust has guarantees that mean that programs need to rely on unsafe code more so than in other languages, which allows saying that safe Rust is "safer" while it also means that fewer programs are actually written purely in safe Rust. The real challenge is increasing safety without also increasing the number of programs that need to circumvent it or increasing the complexity of the language further.

[1]: A benign race is when multiple tasks/threads can concurrently write to the same address, but you know they will all write the same value.
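
To make the footnote concrete: the classic benign-race idiom is a racily cached, deterministic computation (Java's String.hashCode cache is the textbook example, done with a plain field). A plain-field version can't be written in safe Rust, because an unsynchronized non-atomic write shared across threads is a data race, which is undefined behaviour and requires unsafe; the closest safe encoding uses a relaxed atomic. A minimal sketch (the names are made up):

    use std::sync::atomic::{AtomicU64, Ordering};

    // A lazily, racily cached value: several threads may all compute
    // `expensive()` and all store the result, which is fine because they all
    // store the same value.
    struct Cached {
        value: AtomicU64, // 0 means "not computed yet"; assume 0 is never a result
    }

    impl Cached {
        fn get(&self) -> u64 {
            let v = self.value.load(Ordering::Relaxed);
            if v != 0 {
                return v;
            }
            let computed = expensive();
            self.value.store(computed, Ordering::Relaxed);
            computed
        }
    }

    fn expensive() -> u64 {
        (1..=10u64).product() // stand-in for a deterministic, pure computation
    }

    fn main() {
        let c = Cached { value: AtomicU64::new(0) };
        std::thread::scope(|s| {
            for _ in 0..4 {
                s.spawn(|| println!("{}", c.get()));
            }
        });
    }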


> 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).

You keep repeating this. It's not true. If what you said was true, Rust would have adopted HKT, and God knows whatever type astronomy Haskell & Scala cooked up.

There is a balancing act, and Rust decided to plant a flag in memory safety without GC. The fact that Zig didn't expand on this but went backwards is more of an indictment of programmers unwilling to adapt and perfect what came before, preferring instead to reinvent it in their own, worse way.

> There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html)

How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.


Rust does have a GC, but I agree it planted its flag at some intermediate point on the spectrum. Zig didn't "go backwards" but planted its own flag ever so slightly closer to C than to ATS (although both Rust and Zig are almost indistinguishable from C when compared to ATS). I don't know if where Rust planted its flag is universally better than where Zig planted its flag, but 1. no one else does either, 2. both are compromises, and 3. it's uncertain whether a universal sweet spot exists in the first place.

> How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.

Sure, but the spatial issues rank higher. So if Rust's compromise (we'll exact a price for temporal safety and have both temporal and spatial safety) is reasonable, then so is Zig's, which says that the price of temporal safety is too high for what you get in return, and that spatial safety alone is a better deal. Neither goes as far as ATS in offering, in principle, the ability to avoid all bugs. Nobody knows whether Rust's compromise is universally better than Zig's or vice versa (or perhaps neither is universally better), but I find it really strange to arbitrarily claim that one compromise is reasonable and the other isn't, when both are obviously compromises that recognise there are different benefits and different costs, and that not every benefit is worth any cost.


> Rust does have a GC

It doesn't. Not by any reasonable definition of having a GC.

And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.

> Nobody knows whether Rust's compormise is universally better than Zig's

When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.

This is what happens when you ignore one type of memory safety. You have to have both. Just ask Go.


> And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.

Given that refcounting and tracing are the two classic GC algorithms, I don't see what specifying "non tracing" here does, and reference counting with special-casing of the single-reference case is still reference counting. I don't know if the "reasonable definition" of GC matters at all, but if it does, this does count as one.

I agree that the one-reference case is handled in the language and the shared reference case is handled in the standard library, and I think it can be reasonable to call using just the one-reference case "not a GC", but most Rust programs do use the GC for shared references. It is also true that Rust depends less on GC than Java or Go, but that's not the same as not having one.
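
To spell the distinction out with a minimal sketch: the single-owner case is the language itself (ownership and Box), the shared-owner case is the standard library's reference counting (Rc, or Arc across threads), and it's the latter that is one of the two classic GC algorithms:

    use std::rc::Rc;

    fn main() {
        // Single owner: handled by the language. Freed deterministically when
        // `single` goes out of scope; no counting involved.
        let single: Box<String> = Box::new("one owner".to_string());
        println!("{}", single);

        // Shared owners: handled by the standard library's reference counting
        // (Rc here, Arc across threads). The allocation is freed when the
        // count drops to zero.
        let shared: Rc<String> = Rc::new("many owners".to_string());
        let another = Rc::clone(&shared);
        println!("{} (count = {})", another, Rc::strong_count(&shared)); // count = 2
    }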

> When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.

And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas. It's like declaring that you win by paying $100 for something while I paid $50 for something else without comparing what we got for the money, or declaring that you win by getting a faster car without looking at how much I paid for mine.

> This is what happens when you ignore one type of memory safety.

When you have less safety for any property, you're guaranteed to have more violations. This is what you buy. Obviously, this doesn't mean that avoiding those extra violations is necessarily worth the cost you pay for that extra safety. When you buy something, looking just at what you pay or just at what you get doesn't make any sense. The question is whether this is the best deal for your case.

Nobody knows if there is a universal best deal here let alone what it is. What is clear is that nothing here is free, and that nothing here has infinite value.


> I don't know if the "reasonable definition" of GC matters at all

If you define all non-red colors to be green, it is impossible to talk about color theory.

> And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas.

That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.

> When you have less safety for any property, you're guarnateed to have more violations.

If that's what you truly believed, outside of some debate point, then you'd be advocating for ATS or Ada/SPARK, not Zig.


> If you define all non-red colors to be green, it is impossible to talk about color theory.

Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are. Even within these basic algorithms there are combinations. For example, a mark-and-sweep collector is quite different from a moving collector, and CPython uses refcounting for some things and tracing for others.

> That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.

That it's not as easily quantifiable doesn't make it any less real. If we compare languages only by easily quantifiable measures, there would be few differences between them (and many if not most would argue that we're missing the differences that matter to them most). For example, it would be hard to distinguish between Java and Haskell. It's also not necessarily a "skill issue". I think that even skilled Rust users would admit that writing and maintaining a large program in TypeScript or Java takes less effort than doing the same in Rust.

Also, ATS has many more compile-time safety capabilities than either Rust or Zig (in fact, compared to ATS, Rust and Zig are barely distinguishable in what they can guarantee at runtime), so according to your measure, both Rust and Zig lose when we consider other alternatives.

> Then you'd be advocating for ATS or Ada.SPARK, not Zig.

Quite the opposite. I'm pointing out that, at least as far as this discussion goes, every added value comes with an added cost that needs to be considered. If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust. I'm saying that we don't know where the cost-benefit sweet spot is or, indeed, whether there's only one such sweet spot or several. I'm certainly not advocating for Zig as a universal choice. I'm advocating for selecting the right tradeoffs for every project, and I'm rejecting the claim that whatever benefits Rust or Zig have compared to the other are free. Both (indeed, all languages) require you to pay in some way to get what they're offering. In other words, I'm advocating the position that each can be more or less appropriate than the other, depending on the situation, and against the position that Rust is always superior, which is based on looking only at its advantages and ignoring its disadvantages (which, I think, are quite significant).


> Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are.

That's not the issue. The issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to the desired talking point. I mean, C is, by that definition, a GC language. It can be equipped with one.

> That it's not as easily quantifiable doesn't make it any less real.

It makes it more subjective and easy to bias. Rust has a clear purpose: to put a stop to memory safety errors. What does "it's painful to use" mean? Is it like Lisp to Haskell, or C to Lisp?

> For example, it would be hard to distinguish between Java and Haskell.

It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.

If you can make a program that halts on that feature, you can prove you're in a language with that feature.

> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.

Yeah, because you're fighting a strawman. Having a safe language is a precondition, but not enough. I want it to be as performant as C as well.

Second, even if you have the goal of moving to ATS, developing an ATS-like language isn't going to help. You need a critical mass of people to move there.


> Calling anything with opt-in reference counting a GC language

Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.

> Rust has a clear purpose. To put a stop to memory safety errors.

Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.

So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.

But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?

> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.

So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and there may be others that are equally reasonable, all things considered?

I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io) and I've learnt a lot over my decades in industry about the complexities of software correctness. When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), as I think it makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although not its relationship to language design), and fast compilation, to allow me to write more tests and run them more often.

I'm not saying that my requirements are universally superior to yours, and my interests also lie in a high emphasis on correctness (which extends far beyond mere memory safety), it's just that my conclusions and perhaps personal preferences lead me to prefer a different path to your preferred one. I don't think anyone has any objective data to support the claim that my preferred path to correctness is superior to yours or vice-versa.

I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs are the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least of complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true.

Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even if there is one such best path.

So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection. Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (and maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach (even though it could harm less easily-quantifiable aspects) because it's at least absolute in whatever it does guarantee, is that this belief has already been shown not to be generally true.

(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large-enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)


> Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC".

Ok, semantics aside, my point still stands. C also has a GC. See Boehm GC. And before you complain that RC is part of std, I will point out that std is optional and is on track to become a freestanding library.

> Can you accept that your prerequisites and compromises might not be universal

Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.

> As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one

Yeah, but it has a negative impact on memory. As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.

> After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection.

Sure, but those that see fewer UAF errors have more time to deal with logic errors. Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.


> C also has a GC. See Boehm GC. And before you complain RC is part of std I will point that std is optional and is on track to become a freestanding library.

Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.

You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.

> Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.

This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.

> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.

What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
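
A back-of-envelope sketch of that correspondence (the numbers are made up; only the shape of the relationship matters): for a simple copying/tracing collector, work per cycle is roughly proportional to the live set, and a cycle happens each time new allocation fills the headroom (heap minus live set), so GC CPU scales roughly with alloc_rate * live / (heap - live), and you can drive it as low as you like by adding heap.

    // Toy model of a copying collector's CPU cost (illustrative only):
    //   collections per second ~ alloc_rate / (heap - live)
    //   cost per collection    ~ c * live   (trace/copy the live set)
    fn gc_cpu(alloc_rate_gb_s: f64, live_gb: f64, heap_gb: f64) -> f64 {
        let c = 1.0; // arbitrary per-byte tracing cost factor
        c * live_gb * alloc_rate_gb_s / (heap_gb - live_gb)
    }

    fn main() {
        // Same program (1 GB live, allocating 2 GB/s), different heap sizes:
        for heap in [2.0_f64, 4.0, 8.0, 16.0] {
            println!("heap {:>4} GB -> relative GC CPU {:.2}",
                     heap, gc_cpu(2.0, 1.0, heap));
        }
        // Roughly doubling the headroom halves the GC CPU share: RAM buys CPU.
    }

That's the sense in which RAM buys CPU; whether the purchase is worth it depends on the RAM/CPU prices you actually face, which is the point about the ratios in current on-prem and cloud offerings.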

If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I

> Sure, but those that see fewer UAF errors have more time to deal with logic errors.

I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.

> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.

I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.

It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some believe that, and it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that a different approaches is better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll be able to have some empirical data that gives us something more grounded in fact.


> Come on. The majority of Rust programs use the GC.

This part is false. You make a ridiculous statement and expect everyone to just nod along.

I could see this being true iff you say all Rust UI programs use "RC".

> This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades

Without ever increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).

GC will be a mostly unacceptable overhead in numerous instances. I'm not saying it will be fully gone, but I don't think the current crop of C-likes is accidental either.

> I understand that's something you believe, but it's not supported empirically

It's supported by Google's usage of Rust.

https://security.googleblog.com/2025/11/rust-in-android-move...

> Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.

So for similar patches, you see fewer errors in new code. And the overall error rate still favors Rust.


> Without ever increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).

The memory overhead of a moving collector is related only to the allocation rate. If the extra memory is sufficient to cover that overhead, which in turn helps save more costly CPU, it doesn't matter if the relative cost advantage is reduced (also, it hasn't even been reduced; you're simply speculating that one day it could be).

> I'm not saying it will be fully gone

That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years with no reversal in trend. This is like saying that more people will find the cost of typing a text message unacceptable so we'll see a rise in voicemail messages, but of course text messaging will not be fully gone.

Even embedded software is increasingly written in languages that rely heavily on GC. Now, I don't know the future market forces, and maybe we won't be using any programming languages at all but LLMs will be outputting machine code directly, but I find it strange to predict with such certainty that the trend we've been seeing for so long will reverse in such full force. But ok, who knows. I can't prove that the future you're predicting is not possible.

> It's supported by Google's usage of Rust.

There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing, and therefore in the total reduction of bugs, and you said that maybe a complex language like Rust, with lots of implicitness but also temporal memory safety could perhaps have a positive effect on other bugs, too, in comparison. What you linked to is something about Rust vs C and C++. Zig is at least as different from either one as it is from Rust.

> And the overall error rate still favors Rust.

Compared to C++. What does it have to do with anything we were talking about?


> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years

I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?

Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but at the limits of physics and hardware.

> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing

No, let me remind you:

> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.

> I understand that's something you believe, but it's not supported empirically

We were talking about how not having to worry about UB allows for easier defect catching.

> Compared to C++.

Overall, I think using C++ with all of its modern features should be in the same ballpark as Zig in terms of safety and speed, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).

Also, I don't see how much of Zig's design could really assist code reviews and testing.


> Does that include Rust?

No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.

> I'm not looking at current social/market trends but limits of physics and hardware.

The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially. Unforeseen economic events may always happen, but they are unforeseen. It's always possible that current trends would reverse, but that's a different matter from assuming they are likely to reverse.

> Overall, I think using C++ with all of its modern features should be in the ballpark of safe/fast as Zig, with Zig having a better compile time.

I don't know how reasonable it is to think that. If Rust's value comes from eliminating spatial and temporal memory safety issues, surely there's value in eliminating the more dangerous of the two, which Zig does as well as Rust (but C++ doesn't).

But even if you think that's reasonable for some reason, I think it's at least as reasonable to think the opposite, given that in almost 30 years of programming in C++, by far my biggest issue with the language has been its complexity and implicitness, and Zig fixes both. Given how radically different Zig is from C++, my preference for Zig stems precisely from it solving what is, to me, the biggest issue with C++.

> Also don't see how much of Zig design could really assist code reviews and testing.

Because it's both explicit and simple. There are no hidden operations performed by a routine that do not appear in that routine's code. In C++ (or Rust), to know whether there's some hidden call to a destructor/trait, you have to examine all the types involved (to make matters worse, some of them may be inferred).
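
A small Rust example of the kind of hidden operation I mean (C++ destructors behave analogously): nothing on these lines says "unlock" or "free", yet both happen, and you can only know where by knowing the types involved.

    use std::sync::Mutex;

    fn main() {
        let m = Mutex::new(vec![1, 2, 3]);
        {
            let mut guard = m.lock().unwrap();
            guard.push(4);
        } // <- hidden: the guard's destructor runs here and unlocks the mutex;
          //    nothing in the source at this point says so.

        let s = String::from("hello");
        takes_ownership(s);
        // <- hidden: `s` was moved, and its heap buffer is freed inside the
        //    callee when the parameter goes out of scope there.
    }

    fn takes_ownership(t: String) {
        println!("{}", t);
    } // <- `t` dropped (and its buffer freed) here, implicitly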


> No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.

Most? You still haven't proved that. So most Rust programs mostly use GC, yet it's not a GC language; those are some very mind-contorting definitions.

> The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially.

Laws of physics do absolutely tell you that more computation means more heat. Also trying to approach the size of atoms is another no-go. That's why current chip densities have stalled but have been kept on life support via chip stacking and gate redesigns. The 2nm process is mostly a marketing term (https://en.wikipedia.org/wiki/2_nm_process) the actual gate is around 45x20nm.

Not to mention that when working with the way atoms work (i.e. their random nature) and small scales, small irregularities mean low yields.

They put a soft cap on any exponential curve, and a hard cap in the form of a literal singularity.

> I don't know how reasonable it is to think that.

Why not? With modern collections (std::vector, std::span, and std::string) and modern pointers (std::unique_ptr, std::shared_ptr) you get decent memory safety.

> Because it's both explicit and simple.

Being a simple language doesn't guarantee lack of complexity in implementation (see Brainfuck). The question is how much language complexity buys implementation simplicity. C++ of course has neither, because it started with a backwards-compatibility goal (one it did abandon at some point).

By Zig's explicitness, you mean everything is public? I've seen that stuff backfire spectacularly, because you don't get any encapsulation, which means maximum coupling.


But arenas have substantial benefits. They may be one of the few remaining reasons to use a low-level (or "systems programming") language in the first place. Most things are tradeoffs, and the question isn't what you're giving up, but whether you're getting the most for what you're paying.

Arenas are also available in languages with automatic memory management, e.g. D, C# and Swift, to use only modern languages as example.

Thus I don't consider that a reason good enough for using Zig, while throwing away the safety from modern languages.


First, Zig is more modern than any of the languages you mention. Second, I'm not aware that any of those languages offer arenas similar in their power and utility to Zig's while offering UAF-freedom at the same time. Note that "type-safe" arenas are neither as powerful as general purpose arenas nor fully offer UAF-freedom. I could be wrong (and if I am, I'd really love to see an arena that's both general and safe), but I believe that in all these languages you must compromise on either safety or the power of the arena (or both).

> First, Zig is more modern than any of the languages you mention

How so? This feels like an empty statement at best.


"modern: relating to the present or recent times as opposed to the remote past". I agree it's not a useful concept here but I didn't bring it up. Specifically, I don't think there's any consideration that had gone into the design of D, C#, or Rust that escaped Zig's designer. He just consciously made different choices based on the data available and his own judgment.

Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.

The only thing relatively modern would be compile time execution, if we forget about how long some languages have had reader macros, or similar capabilities like D's compile time metaprogramming.

Also it is the wrong direction when the whole industry is moving into integrity by default on cyber security legislation.

There are several examples around of doing arenas in said languages.

https://dlang.org/phobos/std_experimental_allocator.html

You can write your own approach with the low level primitives from Swift, or ping back into the trusty NSAutoreleasePool.

One example for C#, https://github.com/Enichan/Arenas


> Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.

That's your opinion, but I couldn't disagree more. It places partial evaluation as its biggest focus more so than any other language in history, and is also extraordinarily focused on tooling. There isn't any piece of information nor any technique that was known to the designers of those older languages and wasn't known to Zig's designer. In some situations, he intentionally chose different tradeoffs on which there is no consensus. It's strange to insist that there is some consensus when many disagree.

I have been doing low-level programming (in C, C++, and Ada in the 90s) for almost 30 years, and over that time I have not seen a low-level language that's as revolutionary in its approach to low-level programming as Zig. I don't know if it's good, but I find its design revolutionary. You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.

I guess you could say that you personally don't care about Zig's primary design points and when you ignore them you're left with something that you find similar to other languages, but that's like saying that if you don't care about Rust's borrow- and lifetime checking, it's basically just a mix of C++ and ML. It's perfectly valid to not care about what matters most to some language's designer, and it's perfectly valid to claim that what matters to them most is misguided, but it is not valid to ignore a language's design core when describing it just because you don't care about it.

> Also it is the wrong direction when the whole industry is moving into integrity by default on cyber security legislation.

Again, that is an opinion, but not one I agree with. For one, Rust isn't as safe as other safe languages given its relatively common reliance on unsafe. If spatial and temporal memory safety were the dominating concerns, there wouldn't be a need for Rust, either (and it wouldn't have exposed unsafe). Clearly, everyone recognises that there are other concerns that sometimes dominate, and it's pretty clear that some people, who are no less knowledgeable about the software industry and its direction, prefer Zig. There is no consensus here either way, and I'm not sure there can be one. They are different languages that suit different people/projects' preferences.

Now, I agree that there's definitely more motion toward more correctness - which is great! - and I probably wouldn't write a banking or healthcare system in Zig, but I wouldn't write it in Rust, either. People reach for low level languages precisely when there may be a need to compromise on safety in some way, and Rust and Zig make different compromises, both of which - as far as I can tell - can be reasonable.

> There are several examples around of doing arenas in said languages.

From what I can tell, all of them either don't provide freedom from UAF, or they're not nearly as general as a proper arena.

I know of one safe and general arena design in RTSJ, which immediately prevents a reference to a non-enclosing arena from being written into an object, but it comes with a runtime cost (which makes sense for hard realtime, where you want to sacrifice performance for worst-case predictability).


> You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.

My opinion is that 99% of those people never knew anything beyond C and C++ for systems programming, and even believe the urban myth that before C there were no systems programming languages.

Similar to those that only discover compiled languages and type systems exist, after spending several years with Python and JavaScript, and then even Go seems out of this world.


I don't know about the numbers. Some of Zig's famous proponents are Rust experts. I don't know the specific percentages, but you could level a similar accusation at Rust's proponents, too, i.e. that they have insufficient exposure to alternative techniques. And BTW, Zig's approach is completely different from that of C, C++, Rust, or the Pascal family languages. So if we were to go by percentages, we could dismiss all criticisms against Zig on the same basis (i.e. most people may think it's like C++, or C, or Modula, but since it isn't, then their criticisms are irrelevant). In fact, because Rust is a fairly old language and Zig isn't, it's more likely that more Zig developers are familiar with Rust than vice-versa.

But also, I don't see why that even matters. If even some people with a lot of experience in other approaches to systems programming, and even with experience in deeper aspects of software correctness, accept this assessment, then you can't wave it away. It's okay to think we're wrong - after all, no one has sufficient empirical evidence to support their claim either way - but you cannot ignore the fact that some of those with extensive experience disagree with you, just as I'm happy to accept that some of them disagree with me.


Wouldn't C# and Swift make it tough to integrate with other languages? Whereas something written in Zig (or Rust) can integrate with anything that can use the C ABI?

Both C# and Swift have first-party C ABI integration.

I don't think that a language that was meant to compete with C++ and in 10+ years hasn't captured 10% of C++'s (already diminished) market share could be said to have become "kind of the default" for anything (and certainly not when that requires generalising from n≅1).

It has for Amazon, Adobe, Microsoft, Google and the Linux kernel.

It remains to be seen which big name will make Zig unavoidable.


> It has for Amazon, Adobe, Microsoft, Google and the Linux kernel.

I don't think so. I don't know about Adobe, but it's not a meaningful statement for the rest. Those companies default to writing safe code in languages other than Rust, and the Linux kernel defaults to unsafe code in C. BTW, languages favoured by those projects/companies do not reliably represent industry-wide preferences, let alone defaults. You could certainly say that of the two languages accepted so far in the Linux kernel, the only safe one is Rust, but there's hardly any "default" there.

> It remains to be seen which big name will make Zig unavoidable.

I have no idea whether or not Zig will ever be successful, but at this point it's pretty clear that Rust's success has been modest at best.


It is a clear mandate on those companies that whatever used to be C or C++, should be written in Rust for green field development.

Whatever could be done in programming languages with automatic memory management was already being done.

Anyone deploying serverless code into Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through a Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.

Adobe is the sponsor for the Hylo programming language, and key figures in the C++ community, are doing Rust talks nowadays.

"Adobe’s memory safety roadmap: Securing creativity by design"

https://blog.adobe.com/security/adobes-memory-safety-roadmap...

Any hobby language author would like to have 1% of Rust's supposedly modest success; I really don't get the continuous downplaying of such an achievement.


> It is a clear mandate on those companies that whatever used to be C or C++, should be written in Rust for green field development. Whatever could be done in programming languages with automatic memory management was already being done.

I don't know how true either of these statements is or to what extent the mandate is enforced (at my company we also have language mandates, but what they mean is that to use a different language all you need is an explanation and a manager to sign off), but I'll ask acquaintances in those companies (Except Adobe; don't know anyone there. Although the link you provided doesn't say Rust; it says "Rust or Swift". It also commits only to "exploring ways to reduce the use of new C and C++ code in safety critical parts of our products to a fraction of current levels").

What I do know is that the rate at which Rust is adopted is significantly lower than the rate at which C++, Java, C#, Python, TS, and even Go were adopted, even in those companies.

Now, there's no doubt that Rust has some real adoption, and much more than just hobby languages. Its rate of adoption is significantly higher than that of Haskell, Clojure, or Elixir (but lower than that of Ruby or PHP). That is without a doubt a great accomplishment, but not what you'd expect from a language that wishes to become the successor to C++ (and doesn't suffer from lack of hype despite its advanced age). Languages that offer a significant competitive advantage, or even the perception of one, spread at a faster pace, certainly those that eventually end up in the top 5.

I also think there's little doubt that the Rust "base" is more enthusiastic than that of any language I remember except maybe that of Haskell's resurgence some years back (and maybe Ruby), and that enthusiasm may make up for what they lack in numbers, but at some point you need the numbers. A middle-aged language can only claim to be the insurgent for so long.


P.S.

I spoke with someone at AWS, and he says that there is an investment in using Rust for low-level code, but there is no company-wide mandate, and projects are free to pick C or C++.


> It is a clear mandate on those companies that whatever used to be C or C++, should be written in Rust for green field development.

> Any hobby language author would like to have 1% of Rust's supposedly modest success; I really don't get the continuous downplaying of such an achievement.

This is a political achievement, not a technical one. People are bitter about it, as it doesn't feel organic and feels pushed onto them.


There is technical achievement in:

> Anyone deploying serverless code into Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through a Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.

Ignoring it doesn't make those achievements political rather than technical.


I was referring to the mandate to use it at big companies. That is a political achievement. Teams/contributors making their own choice and then shipping good software counts as a technical one, but that wasn't the main point of the post I replied to.

> I was referring to the mandate to use it at big companies.

I've worked in almost all of big tech, and these companies don't create mandates just because of "trust me bro" or to gain some "political achievements". There are teams who champion new technologies/languages; they build proof of what the new technology will bring to the table that cannot be provided by existing ones. I left Amazon 7 years ago, so I don't know about recent developments. However, at Meta/Google teams are encouraged to choose from the mandated languages, and if they can't, they need to request an exemption and justify the exception.


I wonder what you consider a successful language.

Rust appeared in 2012, Zig in 2016. I consider them both successful programming languages, but given they are only 4 years apart, it's easy to compare Zig today with Rust 4 years back and see they are very far apart in terms of maturity, progress, community size, and adoption.

Rust is a very successful language so far, but expecting that in 10y it can overthrow C++ is silly. Codebases add up more than they are replaced.


While certain teams within Google are using rust by default, I'm not sure rust is anywhere close in scale for new lines of code committed per week to c++.

For Android specifically, by Q3 of last year more new lines of Rust were being added per week than new lines of C++: https://security.googleblog.com/2025/11/rust-in-android-move...

Sure, and Android is a small part of Google. Everyone in ads, search, and cloud is still predominantly C++ (or something higher-level like Java). Rust is gaining momentum, but overall it's still small.

The problem is that the number of browser engines is n=2.

Interestingly, Ladybird, which aims at being the n = 3, is also written in C++.

Ladybird is in the process of switching over to Swift, and has been for a little over a year now.

Not linking to the pedophilic nazi-site, and as Nitter is dead-ish, here is the full-text announcement archived on tildes: https://tildes.net/~comp/1j7m/ladybird_chooses_swift_as_its_...


London doesn't have the income level of Mississippi, although that might be true for the UK average. I'd say that the UK may be "seriously broken", but not more so than other post-industrial countries, including the US (or France, or Japan). It's just broken in different ways. E.g. life expectancy in the UK is significantly higher than in America even though they were the same in the '80s. Education levels (and measures such as literacy proficiency and skills etc.) are also significantly better in the UK than in the US. Somewhat tongue in cheek, Americans are richer but they don't seem to be putting their money to good use, as Brits are better educated and live longer.


The number of domains where low-level languages are required, and that includes C, C++, Rust, and Zig, has gone down over time and continues to do so. All of these languages are rarely chosen when there are viable alternatives (and I say "rarely" taking into account total number of lines of code, not necessarily number of projects). Nevertheless, there are still some very important domains where such languages are needed, and Rust's adoption rate is low enough to suggest serious problems with it, too. When language X offers significant advantages over language Y, its adoption compared to Y is usually quite fast (which is why most languages get close to their peak adoption relatively quickly, i.e. within about a decade).

If we ignore external factors like experience and ecosystem size, Rust is a better language than C++, but not better enough to justify faster adoption, which is exactly what we're seeing. It's certainly gained some sort of foothold, but as it's already quite old, it's doubtful it will ever be as popular as C++ is now, let alone in its heyday. To get there, Rust's market share will need to grow by about a factor of 10 compared to what it is now, and while that's possible, if it does it will have been the first language ever to do so at such an advanced age.


> When language X offers significant advantages over language Y

So e.g. the silver bullet characteristics reported by Google, among others, in "Move fast and fix things"?

https://security.googleblog.com/2025/11/rust-in-android-move...

There's always resistance to change. It's a constant, and as our industry itself ages it gets a bit worse. If you use libc++ did you know your sort didn't have O(n log n) worst case performance until part way through the Biden administration? A suitable sorting algorithm was invented back in 1997, those big-O bounds were finally mandated for C++ in 2011, but it still took until a few years ago to actually implement it for Clang.


Except, as you say, all those factors always exist, so we can compare things against each other. No language to date has grown its market share by a factor of ten at such an advanced age [1]. Despite all the hurdles, successful languages have succeeded faster. Of course, it's possible that Rust will somehow manage to grow a lot, yet significantly slower than all other languages, but there's no reason to expect that as the likely outcome. Yes, it certainly has significant adoption, but that adoption is significantly lower than all languages that ended up where C++ is or higher.

[1]: In a competitive field, with selection pressure, the speed at which technologies spread is related to their relative advantage, and while slow growth is possible, it's rare because competitive alternatives tend to come up.


This sounds like you're just repeating the same claim again. It reminds me a little bit of https://xkcd.com/1122/

We get it, if you squint hard at the numbers you can imagine you're seeing a pattern, and if you're wrong well, just squint harder and a new pattern emerges, it's fool proof.


Observing a pattern with a causal explanation - in an environment with selective pressure things spread at a rate proportional to their relative competitive advantage (or relative "fitness") - is nothing at all like retroactively finding arbitrary and unexplained correlations. It's more along the lines of "no candidate has won the US presidential election with an approval of under 30% a month before the election". Of course, even that could still happen, but the causal relationship is clear enough so even though a candidate with 30% in the polls a month before the election could win, you'd hardly say that's the safer bet.


You're basically just re-stating my point. You mistakenly believe the pattern you've seen is predictive and so you've invented an explanation for why that pattern reflects some underlying truth, and that's what pundits do for these presidential patterns too. You can already watch Harry Enten on TV explaining that out-of-cycle races could somehow be predictive for 2026. Are they? Not really but eh, there's 24 hours per day to fill and people would like some of it not to be about Trump causing havoc for no good reason.

Notice that your pattern offers zero examples and yet has multiple entirely arbitrary requirements, much like one of those "No President has been re-elected with double digit unemployment" predictions. Why double digits? It is arbitrary, and likewise for your "about a decade" prediction, your explanation doesn't somehow justify ten years rather than five or twenty.


> You mistakenly believe the pattern you've seen is predictive

Why mistakenly? I think you're confusing the possibility of breaking a causal trend with the likelihood of doing that. Something is predictive even if it doesn't have a 100% success rate. It just needs to have a higher chance than other predictions. I'm not claiming Rust has a zero chance of achieving C++'s (diminished) popularity, just that it has a less than 50% chance. Not that it can't happen, just that it's not looking like the best bet given available information.

> Notice that your pattern offers zero examples

The "pattern" includes all examples. Name one programming language in the history of software that's grown its market share by a factor of ten after the age of 10-13. Rust is now older than Java was when JDK 6 came out and almost the same age as Python was when Python 3 came out (and Python is the most notable example of a late bloomer that we have). Its design began when Java was younger than Rust is now. Look at how Fortran, C, C++, and Go were doing at that age. What you need to explain isn't why it's possible for Rust to achieve the same popularity as C++, but why it is more likely than not that its trend will be different from that of any other programming language in history.

> Why double digits? It is arbitrary, and likewise for your "about a decade" prediction

The precise number is arbitrary, but the rule is that any technology (or anything in a field with selective pressure) spreads at a rate proportional to its competitive advantage. You can ignore the numbers altogether, but the general rule about the rate of adoption of a technology, or of any ability that offers a competitive advantage in a competitive environment, remains. The rate of Rust's adoption is lower than that of Fortran, Cobol, C, C++, VB, Java, Python, Ruby, C#, PHP, and Go, and is more-or-less similar to that of Ada. You don't need numbers, just comparisons. Are the causal theory and historical precedent 100% accurate for any future technology? Probably not, as we're talking statistics, but at this point, it is the bet that a particular technology will buck the trend that needs justification.

I certainly accept that the possibility of Rust achieving the same popularity that C++ has today exists, but I'm looking for the justification that that is the most likely outcome. Yes, some places are adopting Rust, but the number of those saying nah (among C++ shops) is higher than it was for any programming language that ended up becoming very popular. The point isn't that bucking a trend with a causal explanation is impossible. Of course it's possible. The question is whether it is more or less likely than not breaking the causal trend.


Your hypothetical "factor of ten" market-share growth requirement means it's literally impossible for all the big players to achieve this, since they presumably have more than 10% market share and such a "factor of ten" increase would mean they somehow had more than the entire market. When you declare success for a model because it predicted that a literally impossible thing wouldn't happen, I'd suggest that model is actually worthless. We all knew that literally impossible things don't happen; confirming that doesn't validate the model.

Let's take your Fortran "example". What market share did Fortran have, according to you, in say 1959? How did you measure this? How about in 1965? Clearly you're confident, unlike Fortran's programmers, users, and standards committee, that it was all over by 1966. Which is weird (after all, that's when Fortran 66 comes into the picture), but I guess once I see how you calculate these outputs it'll make sense, right?


> means it's literally impossible for all the big players to achieve this

Only because they've achieved that 10% in their first decade or so, but what I said is the case for all languages, big and small alike (and Rust doesn't have this problem because it needs a 10x boost to approach C++'s current market share, which is already well below its peak). But the precise numbers don't matter. You can use 5x and it would still be true for most languages. The point is that languages - indeed, all technologies, especially in a competitive market - reach or approach their peak market share relatively quickly.

You make it sound like a novel or strange theory, but it's rather obvious when you look at the history. And the reason is that if a technology offers a big competitive advantage, it's adopted relatively quickly as people don't want to fall behind the competition. And while a small competitive advantage could hypothetically translate to steady, slow growth, what happens is that over that time, new alternatives show up and the language loses the novelty advantage without ever having gained a big-player advantage.

That's why, as much as I like, say, Clojure (and I like it a lot), I don't expect to see much future growth.

> Clearly you're confident, unlike Fortran's programmers

Yes, because I have the benefit of hindsight. Also, note that I'm not saying anything about decline (which happens both quickly and slowly), only that technologies in a competitive market reach or approach their peak share quickly. Fortran clearly became the dominant language for its domain in under a decade.

But anyway, if you think that steady slow growth is a likelier or more common scenario than fast growth - fine. I just think that thesis is very hard to support.


> The point is that languages - indeed, all technologies, especially in a competitive market - reach or approach their peak market share relatively quickly.

This predicts nothing in particular; for any outcome we can squint at this and say it was fulfilled, so in this sense it's actually worse than the XKCD cartoon.

It's not that it's a novel or strange theory, it's just wrong.

> if a technology offers a big competitive advantage, it's adopted relatively quickly as people don't want to fall behind the competition

Yeah, no. See, humans have a strong preference for the status quo so it isn't enough that some technology "offers a big competitive advantage", they'd usually just rather not actually. Lots of decision makers read Google's "Move Fast and Fix Things" and went "Yeah, that's not applicable to us [for whatever reason]" and moved on. It doesn't matter whether they were right to decide it wasn't applicable, it only matters whether their competitors reach a different conclusion and execute effectively.


> It's not that it's a novel or strange theory, it's just wrong.

Okay. Can you provide an example of a language that grew in popularity steadily and gradually over a long time (well over a decade), and for which that slow growth was the lion's share of its market-share growth? You say "it's just wrong" but I think it applies to 100% of cases, and if you want to be specific when it comes to numbers, then even the languages whose market share has grown by a factor of 5 after age 10 are a small minority, and even a factor of 2 is a minority.

> Yeah, no. See, humans have a strong preference for the status quo so it isn't enough that some technology "offers a big competitive advantage", they'd usually just rather not actually.

Except, again, all languages, successful and unsuccessful alike, have approached their peak market share in their first decade or so. You can quibble over what I mean by "approach" but remember that Rust, at age 10+, needs to grow its market share by a factor of 10 to even match C++'s already-diminished market share today.


For all its faults, and it has many (though Rust shares most of them), few programming languages have yielded more value than C++. Maybe only C and Java. Calling C++ software "garbage" is a bonkers exaggeration and a wildly distorted view of the state of software.


> Forcing function to avoid use-after-free

Doesn't reusing memory effectively allow for use-after-free, only at the program level (even with a borrow checker)?


Yes, kind of. In the same sense that Vec<T> in Rust with reused indexes allows it.

Notice that this kind of use-after-free is a ton more benign though. This milder version upholds type-safety and what happens can be reasoned about in terms of the semantics of the source language. Classic use-after-free is simply UB in the source language and leaves you with machine semantics, usually allowing attackers to reach arbitrary code execution in one way or another.
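
Something like this, in Zig terms (a contrived sketch with invented names; a reused Vec index in Rust gives you the same shape):

    const std = @import("std");

    const Session = struct { user_id: u32 };

    // A fixed pool of sessions, addressed by small integer handles.
    var sessions: [2]Session = undefined;

    pub fn main() void {
        sessions[0] = .{ .user_id = 7 }; // slot 0 is handed out for user 7
        const stale_handle: usize = 0; // some subsystem keeps this index around

        sessions[0] = .{ .user_id = 9 }; // slot 0 is "freed" and reused for user 9

        // The stale handle still type-checks and reads a valid Session, but it
        // now refers to the wrong logical object: a program-level
        // "use-after-free" with no UB and no memory corruption.
        std.debug.print("user_id = {d}\n", .{sessions[stale_handle].user_id});
    }
A common mitigation is to tag handles with a generation counter so stale handles can be detected, but that's an application-level check, not something the type system gives you for free.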


The fact that what happens can be reasoned about in the semantics of the source language, as opposed to being UB, doesn't necessarily make the problem "a ton more benign". After all, a program written in Assembly has no UB and all of its behaviours can be reasoned about in the source language, but I'd hardly trust Assembly programs to be more secure than C programs [1]. What makes the difference isn't that it's UB but, as you pointed out, the type safety. But while the less deterministic nature of a "malloc-level" UAF does make it more "explosive", it can also make it harder to exploit reliably. It's hard to compare the danger of a less likely RCE with a more likely data leak.

On the other hand, the more empirical, though qualitative, claim made by matklad in the sibling comment may have something to it.

[1]: In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB. UB is problematic, of course, partly because at least in C and C++ it can be hard to spot, but it doesn't, in itself, necessarily make a bug more dangerous. If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).


>If you look at MITRE's top 25 most dangerous software weaknesses, the top four (in the 2025 list) aren't related to UB in any language (by the way, UAF is #7).

FWIW, I don't find this argument logically sound, in context. This is data aggregated across programming languages, so it could simultaneously be true that, conditioned on using memory unsafe language, you should worry mostly about UB, while, at the same time, UB doesn't matter much in the grand scheme of things, because hardly anyone is using memory-unsafe programming languages.

There were reports from Apple, Google, Microsoft and Mozilla about vulnerabilities in browsers/OS (so, C++ stuff), and I think UB hovered between 50% and 80% of all security issues there?

And the present discussion does seem overall conditioned on using a manually-memory-managed language :0)


You're right. My point was that there isn't necessarily a connection between UB-ness and danger, and stuck together two separate arguments:

1. In the context of languages that can have OOB and/or UAF, OOB/UAF are very dangerous, but not necessarily because they're UB; they're dangerous because they cause memory corruption. I expect that OOB/UAF are just as dangerous in Assembly, even though they're not UB in Assembly. Conversely, other C/C++ UBs, like signed overflow, aren't nearly as dangerous.

2. Separately from that, I wanted to point out that there are plenty of super-dangerous weaknesses that aren't UB in any language. So some UBs are more dangerous than others and some are less dangerous than non-UB problems. You're right, though, that if more software were written with the possibility of OOB/UAF (whether they're UB or not in the particular language) they would be higher on the list, so the fact that other issues are higher now is not relevant to my point.


> In fact, take any C program with UB, compile it, and get a dangerous executable. Now disassemble the executable, and you get an equally dangerous program, yet it doesn't have any UB.

I'd put it like this:

Undefined behavior is a property of an abstract machine. When you write any high-level language with an optimizing compiler, you're writing code against that abstract machine.

The goal of an optimizing compiler for a high-level language is to be "semantics-preserving", such that whatever eventual assembly code that gets spit out at the end of the process guarantees certain behaviors about the runtime behavior of the program.

When you write high-level code that exhibits UB for a given abstract machine, what happens is that the compiler can no longer guarantee that the resulting assembly code is semantics-preserving.


Since it has UB it is easy for the compiler to guarantee that the resulting code is semantics-preserving: Anything the code does is OK.


There's some reshuffling of bugs for sure, but, from my experience, there's also a very noticeable reduction! It seems there's no law of conservation of bugs.

I would say the main effect here is that a global allocator often leads to ad-hoc, "shotgun" resource management all over the place, and that's hard to get right in a manually memory managed language. Most Zig code that deals with allocators has resource management bugs (including TigerBeetle's own code at times! Shoutout to https://github.com/radarroark/xit as the only code base I've seen so far where finding such a bug wasn't trivial). E.g., in OP, memory is leaked on allocation failures.

But if you manage resources statically, you just can't do that: you are forced to centralize the codepaths that deal with resource acquisition and release, and that drastically reduces the amount of bug-prone code. You _could_ apply the same philosophy to allocating code, but static allocation _forces_ you to do that.
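
To be concrete (a sketch with invented names, not TigerBeetle code), "centralize" here means roughly one acquire site and one matching release site for everything:

    const std = @import("std");

    const Connection = struct { fd: i32 = -1 };

    const Server = struct {
        connections: []Connection,
        recv_buffers: []u8,

        // All acquisition happens here, once, at startup...
        fn init(gpa: std.mem.Allocator, connections_max: usize, recv_size: usize) !Server {
            const connections = try gpa.alloc(Connection, connections_max);
            errdefer gpa.free(connections);
            const recv_buffers = try gpa.alloc(u8, connections_max * recv_size);
            errdefer gpa.free(recv_buffers);
            return Server{ .connections = connections, .recv_buffers = recv_buffers };
        }

        // ...and all release happens here, in one matching place. The rest of
        // the code never touches the allocator.
        fn deinit(server: *Server, gpa: std.mem.Allocator) void {
            gpa.free(server.recv_buffers);
            gpa.free(server.connections);
        }
    };

    test "one init, one deinit" {
        var server = try Server.init(std.testing.allocator, 4, 1024);
        defer server.deinit(std.testing.allocator);
    }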

The secondary effect is that you tend to think more explicitly about resources, and more proactively assert application-level invariants. A good example here would be compaction code, which juggles a bunch of blocks, and each block's lifetime is tracked both externally:

* https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...

and internally:

* https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...

with a bunch of assertions all over the place to triple-check that each block is accounted for and is where it is expected to be

https://github.com/tigerbeetle/tigerbeetle/blob/0baa07d3bee7...

I see a weak connection with proofs here. When you are coding with static resources, you generally have to make informal "proofs" that you actually have the resource you are planning to use, and these proofs are materialized as a web of interlocking asserts, and the web works only when it is correct as a whole. With global allocation, you can always materialize fresh resources out of thin air, so nothing forces you to build such a web of proofs.
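
Heavily simplified (this is not our actual code, and the `ownership` tag is invented for the sketch), one link in that web looks something like:

    const std = @import("std");
    const assert = std.debug.assert;

    const Block = struct {
        // Invented tracking state for the sketch; the real bookkeeping is richer.
        ownership: enum { free, compaction, write_queue } = .free,
    };

    fn acquire(block: *Block) void {
        // The "proof" that we may use this block is an assert on the way in...
        assert(block.ownership == .free);
        block.ownership = .compaction;
    }

    fn release(block: *Block) void {
        // ...and a matching assert on the way out. Break one link in the web
        // and some assert fires.
        assert(block.ownership == .compaction);
        block.ownership = .free;
    }

    pub fn main() void {
        var block: Block = .{};
        acquire(&block);
        release(&block);
    }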

To more explicitly set the context here: the fact that this works for TigerBeetle of course doesn't mean that this generalizes, _but_, given that we had a disproportionate number of bugs in the small amount of gpa-using code we have, it makes me think that there's something more here than just TB's house style.


That's an interesting observation. BTW, I've noticed that when I write in Assembly I tend to have fewer bugs than when I write in C++ (and they tend to be easier to find). That's partly because I'm more careful, but also because I only write much shorter and simpler things in Assembly.


Hey matklad! Thanks for hanging out here and commenting on the post. I was hoping you guys would see this and give some feedback based on your work in TigerBeetle.

You mentioned, "E.g., in OP, memory is leaked on allocation failures." - Can you clarify a bit more about what you mean there?


In

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
if the second try fails, then the memory allocation created by the first try is leaked. Possible fixes:

A) clean up individual allocations on failure:

    const recv_buffers = try ByteArrayPool.init(gpa, config.connections_max, recv_size);
    errdefer recv_buffers.deinit(gpa);

    const send_buffers = try ByteArrayPool.init(gpa, config.connections_max, send_size);
    errdefer send_buffers.deinit(gpa);
B) ask the caller to pass in an arena instead of gpa to do bulk cleanup (types & code stay the same, but the naming & contract change):

    const recv_buffers = try ByteArrayPool.init(arena, config.connections_max, recv_size);
    const send_buffers = try ByteArrayPool.init(arena, config.connections_max, send_size);
C) declare OOMs to be fatal errors:

    const recv_buffers = ByteArrayPool.init(gpa, config.connections_max, recv_size) catch |err| oom(err);
    const send_buffers = ByteArrayPool.init(gpa, config.connections_max, send_size) catch |err| oom(err);

    fn oom(_: error{OutOfMemory}) noreturn { @panic("oom"); }
You might also be interested in https://matklad.github.io/2025/12/23/static-allocation-compi..., it's essentially a complementary article to what @MatthiasPortzel says here https://news.ycombinator.com/item?id=46423691


Gotcha. Thanks for clarifying! I guess I wasn't super concerned about the 'try' failing here since this code is squarely in the initialization path, and I want the OOM to bubble up to main() and crash. Although to be fair, 1. Not a great experience to be given a stack trace, could definitely have a nice message there. And 2. If the ConnectionPool init() is (re)used elsewhere outside this overall initialization path, we could run into that leak.

The allocation failure that could occur at runtime, post-init, would be here: https://github.com/nickmonad/kv/blob/53e953da752c7f49221c9c4... - and the OOM error kicks back an immediate close on the connection to the client.
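
Roughly (a simplified sketch with invented names, not the exact code in the repo), the runtime handling looks like this:

    const std = @import("std");

    const Connection = struct {
        open: bool = true,

        fn close(conn: *Connection) void {
            conn.open = false; // stand-in for shutting down the socket
        }
    };

    // On a per-connection allocation failure, drop just that connection
    // instead of taking the whole server down.
    fn onRecv(gpa: std.mem.Allocator, conn: *Connection, recv_size: usize) void {
        const recv_buffer = gpa.alloc(u8, recv_size) catch {
            conn.close();
            return;
        };
        defer gpa.free(recv_buffer);
        // ... read into recv_buffer and handle the request ...
    }

    test "OOM on recv closes the connection" {
        var conn: Connection = .{};
        onRecv(std.testing.failing_allocator, &conn, 64);
        try std.testing.expect(!conn.open);
    }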

