VivaTechnics's comments | Hacker News

Python won because simplicity scales. Like English—26 letters, minimal grammar—it became the default. Python mirrors that trajectory.

It is trivially learnable, absurdly flexible, and unmatched in ecosystem leverage. No simpler language delivers comparable reach.

Python is less suited to systems, real-time, or performance-critical work; that territory belongs to Rust, C, and C++.

Nevertheless, every serious engineer must know Python, just as they must know shell/bash scripting. Non-negotiable.


LLMs operate on numbers; they are trained on massive numerical vectors. Every request is therefore a numerical transformation approximating learned patterns; without proper training, the output can be completely irrational.


100% agreed. Awesome!

Here is our 501(c)(3) tech non-profit. All corporate profits are directed to children. Clear and transparent.

https://aid.aideo.us/


Clear and transparent, yes. Useful, no. By that statement, buying the CEO's kid a house is totally within bounds.


> All corporate profits

$0 profit by paying the "CEO" all the money - 20 years later all the profits have gone to good charitable causes ... all $0 of it.


I just clicked around your site and I have no idea what your organization does.


The spidery green font is really hard to read against the background.


Good work! `type<T>` (generic types) can mimic dependent types for this?


Not sure; the schema of the Props argument depends on the value — not type — of another argument, so it's not just generics.


Something short, simple, fundamental, low-level and deeply techie:

Xor, XORY

------

These are all excellent names: C, C++, Rust, Ada, Julia, Shell, Bash, etc.


>These are all excellent names

Wholeheartedly agree. What about something like Sage? B? You/U? Manifest?


- Maybe design your language first, then name it.

- Single-letter names are mostly taken (e.g., B: https://en.wikipedia.org/wiki/B_(programming_language))

- Focus on one key feature your language does better than others. Low-level languages are trending; high-level application languages are crowded. For example, if you could make assembly-style code user-friendly, that could be a strong niche.


Impressive! This approach could also be applied to designing a NoSQL database. Would the flow look something like this?

- The client queries for "alice123".
- The Query Engine checks the FST Index for an exact or prefix match.
- The FST Index returns a pointer to the location in Data Storage.
- Data Storage retrieves and returns the full document to the Query Engine.
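That flow can be sketched in Rust, with a std `BTreeMap` standing in for the FST index (a real implementation would use something like the `fst` crate; all names here are illustrative):

```rust
use std::collections::BTreeMap;

// Index maps keys to offsets into storage, standing in for an FST's
// key -> value mapping. Storage holds the full documents.
struct Db {
    index: BTreeMap<String, usize>,
    storage: Vec<String>,
}

impl Db {
    fn new() -> Self {
        Db { index: BTreeMap::new(), storage: Vec::new() }
    }

    fn insert(&mut self, key: &str, doc: &str) {
        self.index.insert(key.to_string(), self.storage.len());
        self.storage.push(doc.to_string());
    }

    // Exact match: the index returns a "pointer" (offset), storage resolves it.
    fn get(&self, key: &str) -> Option<&str> {
        self.index.get(key).map(|&off| self.storage[off].as_str())
    }

    // Prefix match: a range scan over the ordered key space.
    fn prefix(&self, p: &str) -> Vec<&str> {
        self.index
            .range(p.to_string()..)
            .take_while(|(k, _)| k.starts_with(p))
            .map(|(_, &off)| self.storage[off].as_str())
            .collect()
    }
}
```

The FST buys you prefix/fuzzy matching in far less memory than a BTreeMap, at the cost of cheap updates, which is exactly the trade-off the replies below get into.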


What you’ve described is the foundation of Lucene, and as such the foundation of Elasticsearch.

FSTs are “expensive” to re-optimize, so re-optimization is typically done “without writes”. The database would therefore need some workaround for that low write throughput.

To save you the time thinking about it: The only extra parts you’re missing are what Lucene calls segments and merge operations. Those decisions obviously have some tradeoffs (in Lucene’s case the tradeoff is CRUD).

There are easily another 100 ways to be creative in these tradeoffs depending on your specific need. However, I wouldn’t be surprised if the super majority of databases’ indexing implementations are roughly similar.


Lucene's WFST is an insanely good and underappreciated in-process key-value store, assuming you're okay with a one-hour lag on your data.

Keyvi is also interesting in this regard


That wouldn't be a good idea in most cases due to reasons laid out in the "Not a general-purpose data structure" section. https://burntsushi.net/transducers/#not-a-general-purpose-da...


We switched to Rust. Generally, are there specific domains or applications where C/C++ remain preferable? Many exist—but are there tasks Rust fundamentally cannot handle or is a weak choice?


Yes: all the industries where C and C++ are the standards, such as the Khronos APIs, POSIX, CUDA, DirectX, Metal, console devkits, and the LLVM and GCC implementations.

Not only that, you are faced with creating your own wrappers if no one else has done so already.

The tooling, for IDEs and graphical debuggers, assumes either C or C++, so it won't be there for Rust.

Ideally the day will come when those ecosystems also embrace Rust, but that is maybe still decades away.


Advantages of C are short compilation times, portability, long-term stability, widely available expertise and training materials, and lower complexity.

IMHO you can deal with UB just fine in C today by following best practices, and the reasons those practices get skipped would also rule out the use of most other, safer languages.


This is a pet peeve, so forgive me: C is not portable in practice. Almost every C program and library that does anything interesting has to be manually ported to every platform.

C is portable in the least interesting way, namely that compilers exist for all architectures. But that's where it stops.


> C is not portable in practice. Almost every C program and library that does anything interesting has to be manually ported to every platform.

I'm guessing you mean that every cross-platform C codebase ends up plastered in cascading preprocessor code to deal with OS and architecture differences. Sure, that's true, but you have to do some porting work regardless of the language you choose.

But honestly, is there any language more portable than C? I struggle to come up with one.

If someone told me "I need a performant language that targets all major architectures and operating systems, but also maybe I want to run it on DOS, S390X, an old Amiga I have in my closet, and any mystery-meat microcontroller I can find," then I really wouldn't have a better answer for them than C89.

If C isn't portable then nothing is.


If "portability" to you has to include incredibly esoteric architectures in 2025, then what C has to offer is probably the best you can do, but my point is it doesn't do any better on mainstream platforms either.

If you are targeting any recent platform, both Rust and Zig do what you want.


Back in the 2000s I had lots of fun porting code across several UNIX systems: AIX, Solaris, HP-UX, Red Hat Linux.

A decade earlier I also used Xenix and DG/UX.

That is a nice way to learn how "portable" C happens to be, even between UNIX systems, its birthplace.


Compilers existing is essential and not trivial (and it is usually what other languages then build on). The conformance model of C also allows you to write programs that run on different platforms without change; my software runs on 20 different architectures unmodified. That one can then also adapt it to make use of specific features of different platforms is quite natural, in my opinion.


It is essential and nontrivial, but it's also the extremely bare minimum.

You cannot write portable code without platform-specific and even environment-specific adaptations, like handling the presence of certain headers (looking at you, stdint.h and stddef.h), and let's not even start on interacting with the OS in any way.


There may be platforms that are not conforming to the C standard. But I doubt those then have comprehensive implementations of other languages either.


> short compilation time

> IMHO you can today deal with UB just fine in C if you want to by following best practices

In other words, the short compilation time has been traded for wetware brainwashing... well, adjustment time, which makes the supposed advantage much less desirable. It is still an advantage, I reckon, though.


I do not understand what you are trying to say, but it seems to be some hostile rambling.


Never meant to be hostile (if I indeed were, I would have questioned every single word), but sorry for that.

I mean to say that best practices do help a lot, but learning those best practices takes a lot of time as well. So the short compilation time is easily offset by the learning time, and C was not even designed to optimize compilation time anyway (C headers can take a long time to parse and discard even when unused!). Your other points make much more sense, and it's unfortunate that the first points destructively interfere with each other, hence my comment.


Sorry, maybe I misread your comment. There are certainly languages easier to learn than C, but I would not say C++ or Rust fall into that category. At the same time, I find C compilation extremely fast exactly because of headers. In C you can split interface and implementation cleanly between the header and the c-file, and this enables efficient incremental builds. In C++, most of the implementation lives in headers, and all the template processing is an order of magnitude more expensive than parsing C headers. Rust also does not seem to have proper separate compilation.


> I find C compilation extremely fast exactly because of headers.

The header model is one of the parts that makes compiling C slower than it could be. This doesn't mean that it is slow, but it's fast in spite of headers, not because of them.

> In C you can split interface and implementation cleanly between header and c-file and this enables efficient incremental builds.

That's not what does it; it is the ability to produce individual translation units as intermediate files.

> Rust also does not seem to have proper separate compilation.

Rust does separate compilation, and also has efficient incremental builds. Header files are not a hard requirement for this.


If you say the header model makes it slower than it could be, you need to compare it to something. I do not see how it causes significant slowdowns in C projects (in contrast to C++). And yes, I have written compilers and (incomplete) preprocessors. I do not understand what you mean by your second point. What separating interface and implementation allows you to do is update the implementation without having to recompile other TUs. You can achieve this in different ways, but in C it works this way.

I am not sure how it works in Rust as you need to monomorphize a lot of things, which come from other crates. It seems this would inevitably entangle the compilations.


> I do not see how it causes significant slow downs in C projects

It's that textual inclusion is just a terrible model. You end up reprocessing the same thing over and over again, everywhere it is used. If you #include<foo.h> 100 times, the compiler has to reparse those contents 100 times. Nested headers end up amplifying this effect. It's also at a file-level granularity, if you change a header, every single .c that imports it must be recompiled, even if it didn't use the thing that was changed. etc etc. These issues are widely known.

> I do not understand what you mean by your second point. What separation of interface and implementation allows you to do is updating the implementation without having to recompile other TUs.

Sure, but you don't need to have header files to do this. Due to issues like the above, they cause more things to be recompiled than necessary, not less.

> You can achieve this is also in different ways, but in C this works by in this way.

Right, my point is, those other ways are better.

> I am not sure how it works in Rust as you need to monomorphize a lot of things, which come from other crates. It seems this would inevitably entangle the compilations.

The fact that there are "other crates" is because Rust supports separate compilation: each crate is compiled independently, on its own.

The rlib contains the information that, when you link two crates together, the compiler can use for monomorphization. And it's true that monomorphization can cause a lot of rebuilding.

But to be clear, I am not arguing that Rust compilation is fast. I'm arguing that C could be even faster if it didn't have the preprocessor.


> It's that textual inclusion is just a terrible model. You end up reprocessing the same thing over and over again, everywhere it is used.

One could certainly store the interfaces in some binary format, but is it really worth it? This would also work with headers by using a cache, but nobody does it for C because there is not much to gain. Parsing is fast anyhow, and compilers are smart enough not to look at headers multiple times when protected by include guards. According to some quick measurements, you could save a couple of percent at most.

The advantages of headers is that they are simple, transparent, discoverable, and work with outside tools in a modular way. This goes against the trend of building frameworks that tie everything together in a tightly integrated way. But I prefer the former. I do not think it is a terrible model, quite the opposite. I think it is a much better and nicer model.


Rust encourages a rather different "high-level" programming style that doesn't suit the domains where C excels. Pattern matching, traits, annotations, generics, and functional idioms make the language verbose and semantically complex. When you follow its best practices, the code ends up more complex than it really needs to be.

C is a different kind of animal that encourages terseness and economy of expression. When you know what you are doing with C pointers, the compiler just doesn't get in the way.


Pattern matching should make the language less verbose, not more. (Similar for many of the other things you mentioned.)

> When you know what you are doing with C pointers, the compiler just doesn't get in the way.

Alas, it doesn't get in the way of you shooting your own foot off, too.

Rust allows unsafe and other shenanigans, if you want that.


> Pattern matching should make the language less verbose, not more.

In the most basic cases, yes. It can be used as a more polished switch statement.

It's the whole paradigm of defining an ad-hoc enum here and there, encoding rigid semantic assumptions about a function's behaviour with ADTs, and using pattern matching for control flow. This feels like a very academic approach, and modifying such code to alter its opinionated assumptions isn't fun.


How is encoding all the assumptions and invariants badly in eg a bunch of booleans and nullable pointers any better?


> When you know what you are doing with C pointers, the compiler just doesn't get in the way.

Tell me you use -fno-strict-aliasing without telling me.

Fwiw, I agree with you and we're in good[citation needed] company: https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg...


Yes, based on a few attempts chronicled in articles from different sources, Rust is a weak choice for game development, because it's too time-consuming to refactor.


There's also the fact that a lot of patterns that are commonly used in game development are fundamentally at odds with the borrow checker.

Relevant: https://youtu.be/4t1K66dMhWk?si=dZL2DoVD94WMl4fI


Basically all of those problems originate with the tradition of conflating pointers and object identity, which is a problem in Rust as soon as you have ambiguous ownership or incongruent access patterns.

It's also very often not the best way to identify objects, for many reasons, including performance (spatial locality is a big deal).

These problems go away almost completely by simply using `EntityID` and going through `&mut World` for modifications, rather than passing around `EntityPtr`. This pattern gives you a lot of interesting things for free.
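A minimal sketch of that pattern, with all names (`EntityId`, `World`, etc.) illustrative; a real ECS would add generational indices to catch stale IDs:

```rust
// Entities are addressed by plain IDs rather than pointers; all mutation goes
// through &mut World, so nothing holds a long-lived borrow of an entity.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct EntityId(usize);

struct Entity {
    hp: i32,
    target: Option<EntityId>, // cross-references between entities are just IDs
}

struct World {
    entities: Vec<Option<Entity>>, // None marks a destroyed slot
}

impl World {
    fn new() -> Self {
        World { entities: Vec::new() }
    }
    fn spawn(&mut self, hp: i32) -> EntityId {
        self.entities.push(Some(Entity { hp, target: None }));
        EntityId(self.entities.len() - 1)
    }
    fn despawn(&mut self, id: EntityId) {
        self.entities[id.0] = None;
    }
    fn get(&self, id: EntityId) -> Option<&Entity> {
        self.entities[id.0].as_ref()
    }
    fn get_mut(&mut self, id: EntityId) -> Option<&mut Entity> {
        self.entities[id.0].as_mut()
    }
}
```

The payoff: despawning an entity that others still reference turns lookups through the stale ID into a checkable `None` instead of a dangling pointer.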


The video I linked to is long but goes through all of this.

Pretty much nobody writing games in C++ uses raw pointers in entities to hold references to other related entities, because entities can be destroyed at any time and there's no simple way for a referring entity to know when a referenced entity is destroyed.

Using some sort of entity ID or entity handle is very common in C++, the problem is that when implementing this sort of system in Rust, developers often end up having to effectively "work around" the borrow checker, and they end up not really gaining anything in terms of correctness over C++, ultimately defeating the purpose of using Rust in the first place, at least for that particular system.


Can you give an example of what problems need workarounds here?

The benefits seem pretty massive, at least on the surface. For example, you can run any system that only takes `&World` (i.e., immutable access) in parallel without breaking a sweat.
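A toy illustration of that parallelism claim, assuming a trivial `World` and two read-only "systems" (all names hypothetical):

```rust
use std::thread;

struct World {
    hp: Vec<i32>,
}

fn total_hp(w: &World) -> i32 {
    w.hp.iter().sum()
}

fn max_hp(w: &World) -> i32 {
    w.hp.iter().copied().max().unwrap_or(0)
}

// Scoped threads let both read-only systems borrow the same &World at once;
// the compiler verifies that nothing mutates the world while they run.
fn run_read_systems(w: &World) -> (i32, i32) {
    thread::scope(|s| {
        let a = s.spawn(|| total_hp(w));
        let b = s.spawn(|| max_hp(w));
        (a.join().unwrap(), b.join().unwrap())
    })
}
```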


Yup, this one (https://news.ycombinator.com/item?id=43824640) comes to mind. The first comment says "Another failed game project in Rust", hinting that this is very common.


We've only had 6-7 years of game dev in Rust. Bevy is coming along nicely and will hopefully remove these pain points.


"Mit dem Angriff Steiner's wird das alles in Ordnung kommen" ;)

As shitty as C++ is from today's PoV, the entire gaming industry switched over within about 3 years towards the end of the '90s. 6-7 years is a long time, and a single engine (especially when it's more or less just a runtime, without an editor and a robust asset pipeline) won't change the bigger picture that Rust is a pretty poor choice for gamedev.


> As shitty as C++ is from today's PoV, the entire gaming industry switched over within around 3 years towards the end of the 90s.

Did they? What's your evidence? Are you including consoles?

Btw, the alternatives in the 1990s were worse than they are now, so the bar to clear for eg C or C++ were lower.


I was there Gandalf... ;) Console SDKs offering C or C++ APIs doesn't really matter, because you can call C APIs from C++ just fine. So the language choice was a team and engine developer decision, not a platform owner decision (as it should be).

From what I've seen, around the mid-to-late '90s C++ usage was still rare; right before 2000 it was already common, and most middleware didn't even offer C APIs anymore.

Of course a couple of years later Unity arrived and made the gamedev language choice more complicated again.


As another Gandalf: the PlayStation 2 was the very first console to offer proper C++ tooling.

That would be 2000; until then, Sega, Nintendo, and PlayStation only had C and assembly SDKs. Even the PlayStation Yaroze for hobbyists was released with only C and assembly support.

PC was naturally another matter, especially with Watcom C/C++.


> I was there Gandalf... ;)

You were at most in one place. My question was rather, which corners of the industry are you counting?

However you are right that one of the killer features of C++ was that it provided a pretty simple upgrade path from C to (bad) C++.

It's not just API calls. You can call C APIs from most languages just fine.


My corner of the industry back then was mostly PC gamedev with occasional exploration of game consoles (but only starting with the OG Xbox). That doesn't really matter much, since it was obvious that the entire industry was very quickly moving to C++ (we had internet back then, after all, in my corner of the woods, as well as gamedev conferences to feel the general vibe).

id Software was kinda famous for being the last big C holdout, having only switched to C++ with Doom 3, and development of Doom 3 started in late 2000.


The articles describe how the problem is inherent in the language.

If we exclude AAA games, probably the vast majority of the games nowadays don't need manual memory management for the game core (C# was a popular choice, it seems). I guess that if one really needs manual memory management, languages with moderate memory safety would be a more appropriate choice (support libraries/frameworks being equal, which certainly aren't).

I've used Bevy, and ECS is not an appropriate choice for every game (I wouldn't actually advise it unless there is a specific need). It requires very careful design over the whole lifecycle (ECS-based games very easily turn into a mess), which is exactly the opposite of what one wants for rapid prototyping.


And there are millions of game engines written in C++. Many of them have also been coming along nicely for years.

Making a nontrivial game with them is a wholly different story.


Rust forces you to code in the Rust way, while C or C++ let you do whatever you want.


> C or C++ let you do whatever you want.

C and C++ force you to code in the C and C++ ways. It may be that that's what you want, but they certainly don't let me code how I want to code!


There is no C or C++ ways. It's widely known that every codebase is its own dialect.


There are lots of C and particularly C++ ways, but you're still restricted. Want to use methods in C: nope, you can't. Want language-level tagged unions and pattern matching in either language: nope. Same for guaranteed tail call optimisation and a bunch of other things.

This is especially true for C which supports almost nothing (it doesn't even have a sensible array type!). But is also true for C++: while it supports a lot, it doesn't support everything.
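For comparison, the language-level tagged unions and pattern matching mentioned above look like this in Rust (a hypothetical `Shape` example):

```rust
// A tagged union (enum) plus pattern matching, built into the language:
// the compiler checks that every variant is handled.
enum Shape {
    Circle { r: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { r } => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}
```

Adding a third variant makes every non-exhaustive `match` a compile error, which is the part a library emulation like `std::variant` only approximates.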


The funny part is that all of these things are easy to achieve as libraries/paradigms.

Methods in C, just have function pointers as members. Common in many codebases.

Guaranteed tail calls: compilers will generally turn a call in return position into a tail call under optimization, though the C and C++ standards themselves don't guarantee it.

Tagged union in C++, it's trivial as a library, see std::variant for a bad example of it, and all the various monadic/pattern-matching variants (pun intended) people have written. C is at a disadvantage here due to lack of lambdas, but I'm sure people have built stuff using some GCC extensions.


What changes, in your opinion, would need to be made to the C array type to make it "sensible"? C's array is simplistic, but I don't think it's not "sensible"...


It would need to store a length and not decay to a pointer when passed to a function.


The length is part of the type.


Consider C++'s std::array, which exists to make arrays behave like normal objects.

You can do the same in C by wrapping your array in a struct.


If you wanted to develop a cross-platform native desktop / mobile app in one framework without bundling or using a web browser, only Qt comes to mind, which is C++. I think there are some Rust bindings, though.


An application domain where C++ is notably better is when the ownership and lifetimes of objects are not knowable at compile-time, only being resolvable at runtime. High-performance database kernels are a canonical example of code where this tends to be common.

Beyond that, recent C++ versions have much more expressive metaprogramming capability. The ability to do extensive codegen and code verification within C++ at compile-time reduces lines of code and increases safety in a significant way.


I haven't used Rust extensively, so I can't make any criticism besides that I find its compilation times slower than C's.


I find that with C/C++ I have to compile to find warnings and errors, while with Rust I get more information automatically thanks to the modern type and linting systems. As a result I compile Rust significantly fewer times, which is a massive speed increase.

Rust's tooling is hands down better than C/C++'s, which makes for a more streamlined and efficient development experience.


> Rust's tooling is hands down better than C/C++'s, which makes for a more streamlined and efficient development experience

Would you expand on this? What was your C tooling/workflow that was inferior to your new Rust experience?


Not the GP, but the biggest one is dependency management. Cargo is just extremely good.

As for the language tooling itself, static and runtime analyzers in C and C++ (and these are table stakes at this point) do not come close to the level of accuracy of the Rust compiler. If you care about writing unsafe code, Miri is orders of magnitude better at detecting UB than any runtime analyzer I've seen for C and C++.


I do not think package management should be done at the level of programming languages.


Strictly speaking, Cargo isn't part of the Rust programming language itself. Cargo is a layer on top of Rust, and you can use the Rust compiler completely independently of Cargo. Bazel, for example, can compile Rust and handle dependencies without Cargo.


I agree, it should be done at the project level - that is, if you care about portability, reproducibility, deployment, etc.


Pacman is extremely good, too, for C. :)


Pacman solves a different problem. Cargo manages your project's dependencies, not system packages.


I know, but often that is all you need for C.


The popular C compilers are seriously slow too, orders of magnitude slower than the C compilers of yesteryear.


I also hear that Async Rust is very bad. I have no idea; if anyone knows, how does async in Rust compare to async in C++?


I have yet to use async in C++, but I did work on a multithreaded C++ project for a few years.

Rust is nicer for async and multithreading than C++ in every way, I'm pretty sure.

But it's still mid. If you use Rust async aggressively, you will struggle with the borrow checker and the architecture devolves into channel hell.

If you follow the "one control thread that does everything and never blocks" style you can get far, but the language does not give you much help in doing that style neatly.

I have never used Go. I love a lot of Go projects like Forgejo and SyncThing. Maybe Go solved async. Rust did not. C++ did not even add good tagged unions yet.
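The "one control thread that does everything" style mentioned above can be sketched with std channels; all names here are illustrative:

```rust
use std::sync::mpsc;

// All mutable state lives on one control thread; everything else just sends
// messages to it, so there is no shared mutation to fight the borrow checker over.
enum Msg {
    Add(i32),
    Stop,
}

fn control_loop(rx: mpsc::Receiver<Msg>) -> i32 {
    let mut total = 0; // state owned exclusively by this loop
    while let Ok(msg) = rx.recv() {
        match msg {
            Msg::Add(n) => total += n,
            Msg::Stop => break,
        }
    }
    total
}
```

Worker threads (or async tasks) each clone the `Sender` and fire messages at the loop; the downside is that every interaction with the state has to be turned into a message variant, which is where the "channel hell" complaint comes from.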


Go (at least before generics) was really annoying to use.

Doing anything concurrent in Go is also really annoying (be that async or with threads), because everything is mutable. Not just by default but always. So anything shared is very dangerous.


Thanks for the info!


> I also hear that Async Rust is very bad.

Not sure where this is coming from.

Async Rust is amazing as long as you only mix in one more hard concept, be it traits, generics, or whatever. You can confidently write and refactor heavily multithreaded code without being deathly afraid of race conditions, and it is extremely empowering.

The problem comes when trying to write async generic traits in a multithreaded environment.

Then just throwing stuff at the wall and hoping something sticks will quickly lead you into despair.


Embedded hardware, any processor Rust doesn't support (there are many), and any place where code size is critical. Rust has a BIG base size for an application, uselessly so at this time. I'd also love to see whether it offers anything useful in those spaces, especially where no memory allocation takes place at all. C (and to a lesser extent C++) are both very good in those spaces.


You can absolutely make small Rust programs, you just have to actually configure things the right way. Additionally, the Rust language doesn't have allocation at all; it's purely a library concern. If you don't want heap allocations, then don't include them. It works well.

The smallest binary rustc has produced is like ~145 bytes.


That is far from my only concern. But it's good to see Rust is finally paying attention to binary sizes. And the overwhelming complexity of rust code is definitely not a gain when one is working in embedded spaces anyway. I am however really REALLY annoyed with the aggressive sales tactics of the rust community.


> But it's good to see Rust is finally paying attention to binary sizes.

Just to be clear, this isn't a recent development, it has been this way for many years at this point.


Prototyping in any domain. It's nice to have some quick-and-dirty way to rapidly evaluate ideas and solutions.


I don't think C nor C++ were ever great languages for prototyping? (And definitely not better than Rust.)


Please try not to be obnoxious and turn this into a language war.


How is this obnoxious?

C and C++ have their strengths, but rapid prototyping is generally not seen to be amongst them.

This shouldn't be any more controversial than saying that pure Python is generally slow.


They are pretty much the best choice for prototyping 3D apps and GPU algorithms. They're fast and powerful, and they don't impose restrictions: you can do whatever, however. It also helps that CUDA is C++.


> They're fast, powerful, and don't impose restrictions [...]

By that metric assembly is the best prototyping language.


You are arguing in bad faith, raising non-issues with obvious answers, making it unfortunately pointless to have a discussion with you.


> Generally, are there specific domains or applications where C/C++ remain preferable?

Well, anything were your people have more experience in the other language or the libraries are a lot better.


Rust can do inline ASM, so finding a task Rust "fundamentally cannot handle" is almost impossible.


That's almost as vacuous as saying that Rust can implement universal Turing machines, or that Rust can do FFI.


Extremely insightful and detailed. Thank you! Could you create a concise YouTube explainer on this?

Also, what are the best strategies to rigorously validate inputs while minimizing latency?

Is this the best SDK for Rust? https://github.com/modelcontextprotocol/rust-sdk


Impressive! This is awesome! Let's go Rust! Rustworthy!


Awesome!

Technically, we could say:

(1) Single-loop: optimizes actions within fixed rules, like Reinforcement Learning.

(2) Double-loop: questions and adapts the rules themselves, somewhat like Meta-Reinforcement Learning.

