My impression is that tools that grew complex only because they wanted to serve every use case under the sun have been obsoleted by AI, and static site generators like Hugo are a good example.
Today, if I were setting up a blog to host just some text and images, a vibe-coded SvelteKit project using the static adapter[1] would easily solve every single problem I have. And I would still be able to use the full power of the web platform if I needed anything customized further.
Type inference is when the compiler infers a type from how a value is used. `auto` does no such thing; it just copies a known type from source to target. The target has no influence over the source's type.
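A quick illustration (a minimal sketch, nothing more): the type of `n` below is fixed entirely by its initializer, and nothing done with `n` afterwards can change it.

```cpp
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};
    auto n = v.size();   // n's type is copied from the initializer:
                         // std::vector<int>::size_type (an unsigned type).
    long m = n;          // Using n where a long is wanted does NOT make
                         // n a long; the target never influences the source.
    return static_cast<int>(m);
}
```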
Taking this idea further -- this is the sort of thing that really makes me wonder whether ATProto might be literally the worst idea in all of social media.
Which is to say: "can track you just as intrusively as any private service, but now your history is cryptographically signable and even EASIER to share and move everywhere."
> 35x less system calls = others wait less for the kernel to handle their system calls
That isn't how it works. There isn't a fixed syscall budget distributed among running programs. Internally, the kernel is taking many of the same locks and resources to satisfy io_uring requests as ordinary syscall requests.
More system calls mean more overall OS overhead, e.g. more context switches or, as you say, more contention on internal locks.
Also, more fs-related system calls mean fewer available kernel threads to process them, e.g. XFS can parallelize mutations only up to its number of allocation groups (agcount).
> More system calls mean more overall OS overhead [than the equivalent operations performed with io_uring]
Again, this just isn't true. The same "stat" operations are being performed one way or another.
> Also, more fs-related system calls mean less available kernel threads to process these system calls.
Generally speaking, sync system calls are processed in the context of the calling (user) thread; they don't consume kernel threads. In fact the opposite is true here -- io_uring requests are serviced by an internal kernel thread pool, so to the extent this matters, io_uring requests consume more kernel threads.
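To make the comparison concrete, here's a minimal sketch (assuming liburing and a statx-capable kernel; the paths are placeholders and error handling is omitted) that performs the same stat work both ways. The per-file filesystem work is identical; only the number of user/kernel crossings differs.

```cpp
// Same statx work, two submission paths. Assumes liburing (link with -luring).
#include <fcntl.h>
#include <sys/stat.h>
#include <liburing.h>

int main() {
    const char* paths[] = {"/etc/hosts", "/etc/fstab"};  // placeholder paths
    struct statx bufs[2];

    // Path A: one statx() syscall per file.
    for (int i = 0; i < 2; ++i)
        statx(AT_FDCWD, paths[i], 0, STATX_BASIC_STATS, &bufs[i]);

    // Path B: queue both requests, then cross into the kernel once.
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);
    for (int i = 0; i < 2; ++i) {
        struct io_uring_sqe* sqe = io_uring_get_sqe(&ring);
        io_uring_prep_statx(sqe, AT_FDCWD, paths[i], 0,
                            STATX_BASIC_STATS, &bufs[i]);
    }
    io_uring_submit(&ring);                // single submit syscall
    for (int i = 0; i < 2; ++i) {
        struct io_uring_cqe* cqe;
        io_uring_wait_cqe(&ring, &cqe);    // the kernel did the same stat work
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
}
```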
I hadn't read that piece, but it's the conclusion I came to after reading a lot of sci-fi in my YA years.
The sci-fi I enjoyed the most would make one impactful change, say allowing interstellar travel as in The Forever War, or letting people back up and restore their brains as in Altered Carbon, and see where that leads.
Others just use sci-fi as a backdrop for an otherwise conventional story, without really engaging with the sci-fi elements. They can be good stories, but I enjoyed the former much more.
There's a quote I heard that went something along the lines of "Good sci-fi uses fictional technology to show us something about human beings that would be difficult to express otherwise".
I first read this in the foreword to The Left Hand of Darkness, and it completely changed how I read. It's important to understand that there is an agenda behind every book: not a bad thing, but a way to understand and explore how the author thinks and how they have been shaped by the real world they live in and draw from when they create.
Man, I loved Neuromancer when I read it as a kid. Yes, it's a tough book to read, especially today, when there are so many distractions as well as so many works of art built on the sci-fi ideas of that era.
Neuromancer is the first installment of the Sprawl trilogy, followed by Count Zero and Mona Lisa Overdrive.
So, trying not to spoil too much: Count Zero asks questions about, and describes, how AI could influence the religious and spiritual life of humans.
Will we see AI preachers having a real influence on human religious life? ChatGPT the prophet? Maybe this is the real danger of today's nascent AI tech?
I don't see the point of standardizing name mangling. Imagine there were a standard: now you would also need to standardize the memory layout of every single class in the standard library. Without that, instead of failing at link time, your hypothetical program would break in ugly ways at runtime, because e.g. two functions that invoke one another could have differing opinions about where exactly the length of a std::string is located in memory.
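A toy illustration of the layout problem (the structs are made up, not real std::string implementations): both sides could agree on the mangled name of std::string::length() and still disagree about where the length field lives.

```cpp
#include <cstddef>
#include <cstdio>

// Made-up layouts standing in for two incompatible std::string ABIs.
struct string_v1 { char* data; std::size_t len; std::size_t cap; };
struct string_v2 { char* data; std::size_t cap; std::size_t len; };

int main() {
    // Code compiled against v1 reads the length at one offset; a v2
    // object actually stores its capacity there. Same symbol, garbage result.
    std::printf("v1 length offset: %zu\n", offsetof(string_v1, len));
    std::printf("v2 length offset: %zu\n", offsetof(string_v2, len));
}
```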
The naive way wouldn't be any different from what dynamically loading sepples binaries is like right now.
The real way, and the way befitting the role of the standards committee, is to actually put effort into standardizing a way to talk to and understand the interfaces and structure of a C++ binary at load time. That's exactly what linking is for. It should be the responsibility of the software using the FFI to move its own code around and adjust it to conform with information provided by the main program as part of the dynamic linking/loading process... which is already what it's doing. You can mitigate a lot of the edge cases by making interaction outside this standard interface undefined behavior.
The canonical way to do your example is to get the address of std::string::length() and ask how to appropriately call it (how to pass "this", for example).
This standard already exists: it's called the ABI, and the reason the STL can't evolve past 90s-era data structures is that breaking it would cause immeasurable (read: quite measurable) harm.
Like, for fuck's sake, we're using red/black trees for hash maps, in std - just because thou shalt not break thy ABI
We're using self-balancing trees for std::map because the specification effectively demands it: given all the requirements (ordering, iterator and pointer stability, the algorithmic complexity of various operations, and the basic fact that std::map has to implement everything in terms of std::less), it's emphatically not a hash map. It has nothing to do with ABI.
Are you rather thinking of std::unordered_map? That's the hash map of standard C++, and it's the one where people (rightfully) complain that it's woefully out of date compared to SOTA hashmap implementations. But even there an ABI break wouldn't be enough, because, again, the API guarantees in the Standard (specifically, pointer stability) prevent a truly efficient implementation.
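To see why pointer stability bites, here's a small sketch (the element count is arbitrary) relying on the Standard's guarantee that rehashing invalidates iterators but not pointers or references to elements, which is precisely what rules out open-addressing (flat) layouts.

```cpp
#include <cassert>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> m;
    m[1] = 10;
    int* p = &m[1];                   // pointer into the table
    for (int i = 2; i < 100000; ++i)  // force several rehashes
        m[i] = i;
    // Guaranteed by the Standard: rehashing invalidates iterators, but
    // pointers and references to elements stay valid. An open-addressing
    // layout that moves elements on rehash could not honor this.
    assert(*p == 10);
}
```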
Are there open source libraries that provide a better hash map? I have an application which I've optimized by implementing a key data structure a bunch of ways, and found boost::unordered_map to be slightly faster than std::unordered_map (which is faster than std::map and some other things), but I'd love something faster. All I need to store are ~1e6 things like std::array<int8_t, 20>.
You should probably use either boost::unordered_flat_map or absl::flat_hash_map if you don't need ordering, especially with 20-byte values (though you didn't mention the key type). Since you're already building against Boost, I'd just switch and measure it.
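If it helps, a minimal sketch (assumes Boost 1.81+ for unordered_flat_map; the key type is my guess from your description) showing there's no extra ceremony, since the default hasher already handles std::array:

```cpp
#include <array>
#include <cstdint>
#include <boost/unordered/unordered_flat_map.hpp>

using Key = std::array<std::int8_t, 20>;  // guessing these are your keys

int main() {
    // The default hasher, boost::hash<Key>, supports std::array out of
    // the box, so no custom hash function is needed.
    boost::unordered_flat_map<Key, int> counts;
    Key k{};        // all-zero key, just for demonstration
    ++counts[k];
    return counts.contains(k) ? 0 : 1;
}
```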