Yes, it did. We have literally millions of times as much memory as in 1970 but far less than millions of times as many good library developers, so this is probably the right tradeoff.
C++ already killed it: templated code is only instantiated where it is used, so with C++ it's an arbitrary mix of which code ends up in the separate shared library and which ends up in the application using it. This makes ABI compatibility incredibly fragile in practice.
And increasingly, many C++ libraries are header only, meaning they are always statically linked.
Haskell (or GHC at least) is also in a similar situation to Rust as I understand it: no stable ABI. (But I'm not an expert in Haskell, so I could be wrong.)
It's not just about memory. I'd like to have a stable Rust ABI to make safe plugin systems. Large binaries could also be broken down into dynamic libraries, making rebuilds much faster at the cost of leaving some optimizations on the table. This could be done today with a semi-stable, versioned ABI. New app builds would be able to load older libraries.
The main problem with dynamic libraries is when they're shared at the system level. That we can do away with. But they're still very useful at the app level.
> I'd like to have a stable Rust ABI to make safe plugin systems
A stable ABI would allow making more robust Rust-Rust plugin systems, but I wouldn't consider that "safe"; dynamic linking is just fundamentally unsafe.
> Large binaries could also be broken down into dynamic libraries and make rebuilds much faster at the cost of leaving some optimizations on the table.
This can already be done within a single project by using the dylib crate type.
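As a concrete illustration (crate name is hypothetical), opting a crate into this within a workspace is roughly a one-line change in its manifest:

```toml
# Cargo.toml of the library crate (name is illustrative)
[package]
name = "usage_billing"
version = "0.1.0"

[lib]
# "dylib" produces a Rust dynamic library. Note its ABI is only stable
# for the exact compiler version that built it, which is fine for
# speeding up rebuilds within one project.
crate-type = ["dylib"]
```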
You could check that mangled symbols match, and have static tables with hashes of structs/enums to make sure layouts match. That should cover low level ABI (though you would still have to trust the compiler that generated the mangling and tables).
A significantly thornier issue is making sure any types with generics match, e.g. if I declare a struct with some generic and some concrete functions, and this struct also has private fields/methods, those private details (currently irrelevant for semver) would affect ABI stability. And the tables mentioned in the previous paragraph might not be enough to ensure compatibility: a behaviour change could break how the data is interpreted.
So at minimum this would redefine what counts as a semver-compatible change to be much more restricted, and it would be harder to have automated checks (like cargo-semver-checks performs). As a Rust developer I would not want this.
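The low-level layout check mentioned above could be sketched like this (this is an illustrative scheme, not an existing tool): both the host and the plugin embed a fingerprint per shared type and compare them at load time.

```rust
// Illustrative layout-fingerprint check for a hypothetical plugin
// boundary. A real check would also hash field offsets and the
// mangled type name, and you'd still have to trust the compiler
// that generated both sides.
use std::mem::{align_of, size_of};

#[repr(C)]
pub struct Event {
    pub id: u64,
    pub kind: u32,
}

// Crude fingerprint: size and alignment packed into one u64.
pub fn layout_fingerprint<T>() -> u64 {
    ((size_of::<T>() as u64) << 32) | align_of::<T>() as u64
}

fn main() {
    // Each side computes this from its own view of the type; a
    // mismatch means the structs no longer agree on layout.
    let host = layout_fingerprint::<Event>();
    let plugin = layout_fingerprint::<Event>();
    assert_eq!(host, plugin, "refusing to load: Event layout changed");
    println!("layout fingerprints match: {host:#x}");
}
```

Note this only catches layout drift, not the behavioural changes described above.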
How much evidence do we actually have that AI wasn't used for these "real props"?
(Personally I don't care about my ability to tell the difference between what's AI and what's not; I care about my ability to tell the difference between well-crafted and not, and that seems to be functioning fine)
IPv6 is already here if you're not in the US. I moved house last month and consumer ISPs don't offer a (real) IPv4 connection in my country any more; you get an IPv6 connection and your router does MAP-E if you want to send data over IPv4.
I want to echo this comment. I am on MAP-E in Asia and it is very difficult to get an exclusive IPv4 address without paying extra money.
And I want to connect to my machines without some stupid VPN or crappy cloud reverse-tunneling service. Not everyone in the world wants to subscribe to some stupid SaaS service just to get functionality that comes by default with IPv6.
I think Silicon Valley is in a thought bubble, and for people there IPv4 is plentiful and cheap. So good for them. However, the more these SaaS services delay IPv6 support, the more I pray to any deity out there that I can move off these services permanently.
> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.
In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.
> - I don't have a shortage of IPv4. Maybe my ISP or my VPN host do, I don't know. I have a roomy 10.0.0.0/8 to work with.
That's great until you need to connect to a work/client VPN that decided to also use 10.0.0.0/8.
> - Every host routable from anywhere on the Internet? No thanks. Maybe I've been irreparably corrupted by being behind NAT for too long but I like the idea of a gateway between my well kept garden and the jungle and my network topology being hidden.
Even on IPv4, having normal addresses for all your computers makes life so much nicer. Perhaps-trivial example, but one that matters to me: if two people live in one house and a third person lives in a different house, can they all play a network game together? IPv4 sucks at this.
Landed on 172.16/22 for this reason; however, it's not uncommon for an enterprise to use all 3 private ranges. One place I worked used 192.168 for management, 10 for servers, and 172 for wifi.
Using 2 different ranges has been a pretty common setup for wired and wireless in my experience.
"Bad programmers worry about the code. Good programmers worry about data structures and their relationships."
> It's just so... dogmatic. Inexpressive. It ultimately feels to me like a barrier between intention and reality, another abstraction.
On the contrary, it's a much more effective way to express intention when you have a language that can implement it. Programmers in C-family languages waste much of their time working around the absence of sum types; they just don't realise that that's what they're doing. Yes, it is an abstraction — all programming is abstraction.
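For a small example of what the workaround-free version looks like, here is a sum type in Rust (types and values are illustrative); the equivalent C idiom is a tag field plus a union, where nothing stops you from forgetting a case or reading the wrong payload:

```rust
// A sum type: a value is exactly one of these variants, and the
// compiler rejects a `match` that misses a case.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // No tag field to forget to check, no invalid tag/payload combo.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let r = Shape::Rect { w: 2.0, h: 3.0 };
    println!("area = {}", area(&r));
}
```

Adding a third variant later turns every non-exhaustive `match` into a compile error, which is precisely the intention-expressing property being argued for.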
`|` is a massive footgun because it's usually exclusive in practice and occasionally silently inclusive. Having it in a language is a mistake, even if it superficially seems to make things easier, just like e.g. `null`.
> It pulls customer metadata, checks the customer type, and then based on that it computes their latest usage charges, and then based on that it may trigger automatic balance top-ups or subscription overage emails (again, depending on the customer type).
So compute those things, and store them somewhere (if only an in-memory queue to start with)? I can already see a separation here: an ETL stage that computes usage charges, which are probably worth recording in a datastore; another stage that computes which top-ups and emails should be sent based on that, again probably worth recording for tracing purposes; and then two more stages to actually send emails and execute payment pulls. It's actually quite nice to have those separated from the figuring-out-which-emails-to-send part, if only so you can retry/debug the latter without sending out actual emails.
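A minimal sketch of that staged split (all names, amounts, and the threshold are made up for illustration) — the key property is that stage 2 is side-effect free and works from stage 1's recorded output, so it can be re-run while debugging without touching email or payments:

```rust
// Illustrative two-stage split; stages 3/4 (actually emailing and
// pulling payments) would consume `actions` from a queue, separately
// retryable.
#[derive(Debug, Clone)]
struct Charge {
    customer: String,
    amount_cents: u64,
}

#[derive(Debug, PartialEq)]
enum Action {
    TopUp(String),
    OverageEmail(String),
}

// Stage 1: compute usage charges (normally persisted to a datastore).
fn compute_charges(customers: &[&str]) -> Vec<Charge> {
    customers
        .iter()
        .map(|c| Charge { customer: c.to_string(), amount_cents: 1500 })
        .collect()
}

// Stage 2: decide actions from recorded charges. No side effects, so
// it is safe to re-run against stored data.
fn decide_actions(charges: &[Charge]) -> Vec<Action> {
    charges
        .iter()
        .map(|ch| {
            if ch.amount_cents > 1000 {
                Action::TopUp(ch.customer.clone())
            } else {
                Action::OverageEmail(ch.customer.clone())
            }
        })
        .collect()
}

fn main() {
    let charges = compute_charges(&["alice", "bob"]);
    let actions = decide_actions(&charges);
    println!("{actions:?}");
}
```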
Crashing is bad, but silently continuing in a corrupt state is much worse. Better to lose the last few hours of the user's work than corrupt their save permanently, for example.