Even back in the .NET Core 3.1 days, C# had a more than competitive performance profile compared with Go, and _much_ better multi-core scaling on allocation-heavy workloads.
It is also disingenuous to say that whatever it ships with is huge.
The industry's common misconception that AOT is optimal and desirable for server workloads is unfortunate. The deployment model (single slim binary vs. many files vs. host-dependent) is completely unrelated to whether the application uses JIT or AOT. Even with a carefully gathered profile, Go produces much worse compiler output for something as trivial as a hashmap lookup compared with .NET (or the JVM, for that matter).
// GetFromJsonAsync comes from the System.Net.Http.Json extension methods.
async Task<User> FetchUser(int id, HttpClient http, CancellationToken token)
{
    var addr = $"https://api.example.com/users/{id}";
    // The deserialized result is User?, hence the null check below.
    var user = await http.GetFromJsonAsync<User>(addr, token);
    return user ?? throw new Exception("User not found");
}
What? NRTs are used everywhere, with WarningsAsErrors=nullable also gaining popularity. Whatever environment you are dealing with C# in, if the opposite is true there, I suggest getting away from it ASAP.
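For the unfamiliar, here is a minimal sketch of what that combination buys; the project-file properties mentioned in the comments are the standard Nullable/WarningsAsErrors ones, and the method itself is made up:

#nullable enable
using System;

Console.WriteLine(Greet(null));   // prints "stranger"
Console.WriteLine(Greet("Ada"));  // prints "Ada"

// With <Nullable>enable</Nullable> and <WarningsAsErrors>nullable</WarningsAsErrors>
// in the .csproj, the commented-out return is a build error (CS8603), not a warning.
static string Greet(string? name)
{
    // return name;               // CS8603: possible null reference return
    return name ?? "stranger";    // null handled explicitly, compiles clean
}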
sidenote: just a heads up that I tried emailing you recently to let you know that you might want to contact the HN mods to find out why all your comments get set to dead/hidden automatically.
Your account might have triggered some flag sometime back and relies on users vouching for your comments so they can become visible again.
I saw the email, and thanks. This is okay - I did not exercise good impulse control (nor should anyone have to) when dealing with bad-faith arguments, which inevitably led to an account ban. Either way, Merry Christmas!
FWIW, JIT is rarely an issue, and it enables strong optimizations not available under AOT (AOT has its own, but JIT is overall much better for throughput). RyuJIT can do the same speculative optimizations OpenJDK HotSpot does, except the language has fewer and cheaper abstractions plus access to low-level programming, which gives it a much different performance profile.
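A rough illustration of the kind of speculative optimization meant here; the types are made up, and the comments describe what dynamic PGO plus guarded devirtualization can do in a case like this, not a guarantee for this exact snippet:

using System;

IShape[] shapes = { new Circle { R = 1 }, new Circle { R = 2 }, new Square { S = 3 } };
Console.WriteLine(Sum(shapes));

static double Sum(IShape[] shapes)
{
    double total = 0;
    foreach (var s in shapes)
        // In source this is an ordinary interface call. With dynamic PGO the JIT
        // can observe during tier-0 that `s` is almost always Circle and emit a
        // tier-1 body with a guarded, inlined Circle.Area() fast path, keeping
        // the indirect call only as the fallback. A profile-less AOT compiler
        // generally has to keep the virtual dispatch as-is.
        total += s.Area();
    return total;
}

interface IShape { double Area(); }
sealed class Circle : IShape { public double R; public double Area() => Math.PI * R * R; }
sealed class Square : IShape { public double S; public double Area() => S * S; }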
NativeAOT's primary goals are reducing memory footprint and binary size, making "run many methods once or rarely" scenarios much faster (CLI and GUI applications, serverless functions), and shipping to targets where JIT is not allowed or is undesirable. It can also be used to ship native dynamically or statically linked libraries (the latter is tricky).
Nim "cheats" in a similar way C and C++ submissions do: -fno-signed-zeros -fno-trapping-math
Although arguably these flags are more reasonable than allowing the use of -march=native.
Also consider the inherent advantage popular languages have: you can achieve high performance without breaking out to a completely niche language. That said, this microbenchmark is naive and does not showcase the realistic bottlenecks applications face: how well optimized the standard library and popular frameworks are, whether the compiler deals well with complexity and abstractions, whether there are issues with multi-threaded scaling, etc. You can tell this from the performance of dynamically typed languages here: since all the data is defined in the scope of a single function, the compiler needs to do very little work, which hides the true cost of using something like Lua (LuaJIT).
> Ownership model for example, would it be possible to enforce practice via some-sort of meta framework?
It should be possible to at least write an analyzer that drives this based on the IDisposable-ness of types. Notably, it is not always more efficient to malloc and free than to use a GC, and generational moving GCs do not allocate and free "single" objects, so you cannot "free" memory individually either (and that is a good thing: collection marks the live objects, and everything unreachable is reclaimed in a single step).
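As a sketch of what such an analyzer could enforce - PooledBuffer is a made-up type, and the rule would simply be "locals of IDisposable-owning types must be scoped with using":

#nullable enable
using System;
using System.Buffers;

// Usage the analyzer would require: the buffer goes back to the pool at the
// end of the scope, independently of when the GC next runs.
using (var buf = new PooledBuffer())
{
    buf.Span.Fill(0);
}

// A type that owns a pooled/native resource implements IDisposable; an
// analyzer keyed on IDisposable-ness can then demand deterministic scoping.
sealed class PooledBuffer : IDisposable
{
    private byte[]? _array = ArrayPool<byte>.Shared.Rent(4096);

    public Span<byte> Span =>
        _array ?? throw new ObjectDisposedException(nameof(PooledBuffer));

    public void Dispose()
    {
        if (_array is { } a)
        {
            ArrayPool<byte>.Shared.Return(a);
            _array = null;
        }
    }
}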
Also, the underlying type system and what the bytecode allows are quite a bit more powerful than what C# makes use of, so a third language targeting .NET could yield a better performance baseline by better utilizing the existing (very powerful) runtime implementation.
Lastly, there have been many improvements around devirtualization and object escape analysis, and the GC is also a moving target (thanks to Satori GC), so .NET is in quite a good spot: many historical problems have been or are being solved, which makes Rust-style memory management less necessary. (In Rust you also use it because you want to run your code on bare metal, or without a GC at all, relying only on a host-provided allocator; if you do not have such a requirement, you have plenty of more convenient options.)
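For a sense of what the escape-analysis work enables, a hypothetical example; whether this exact case gets stack-allocated depends on the runtime version:

using System;

Console.WriteLine(LengthSquared(3, 4)); // 25

static double LengthSquared(double x, double y)
{
    // `p` is a reference type but never escapes this method, so newer JITs
    // with escape analysis / object stack allocation may avoid the heap
    // allocation entirely (and then promote the fields to registers).
    var p = new Point(x, y);
    return p.X * p.X + p.Y * p.Y;
}

sealed record Point(double X, double Y);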
Try that on ARM64 and the result will be the opposite :)
On an M4 Max, Go takes 0.982s to run while C# (non-SIMD) and F# take ~0.51s. Changing the C# to be closer to the Go version makes its performance worse in a similar manner.