Or they're not thoroughly testing changes before pushing them out. As I've seen some others say, CloudFlare at this point should be considered critical infrastructure. Maybe not like power but dang close.
the biggest bugbear for concurrent systems is mutable shared data. by inherently being distributable you basically "give up on that" so for concurrent erlang systems you ~mostly don't even try.
if for no other reason than that erlang is saner than go for concurrency
like goroutines aren't inherently cancellable, so you see go programmers build out kludgey context plumbing to handle those situations, and debugging can get very tricky
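a minimal sketch of the usual pattern with the standard context package (illustrative only): the goroutine only stops because it explicitly checks ctx.Done() itself, which is exactly the plumbing being complained about.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // worker only stops because it cooperatively checks ctx.Done();
    // nothing can kill the goroutine from the outside.
    func worker(ctx context.Context, stopped chan<- struct{}) {
        defer close(stopped)
        for {
            select {
            case <-ctx.Done():
                fmt.Println("worker: stopping:", ctx.Err())
                return
            case <-time.After(50 * time.Millisecond):
                fmt.Println("worker: doing a unit of work")
            }
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        stopped := make(chan struct{})
        go worker(ctx, stopped)

        time.Sleep(120 * time.Millisecond)
        cancel()  // request cancellation; the worker decides when to honor it
        <-stopped // wait until the worker has actually noticed and returned
    }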
ironically with zig most of the things that violate expectations are keywords. so you run head first into a whole ton of them when you first start (but at least it doesn't compile) and then you have a very solid mental model of what's going on.
yeah, it's a better c, but wouldn't it be nice if c had standardized fat pointers, so that when you move from project to project you don't have to triple check the semantics? that's just one example; there are easily 50+ "learnings" from 40 years of c that could be canonized and made first class in the language + stdlib
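for anyone unfamiliar: a fat pointer just packages the data pointer together with its length, so callers can never disagree about how long a buffer is. go slices are one standardized take on the idea; a tiny sketch (illustrative, not a proposed c feature):

    package main

    import "fmt"

    // sum takes a slice, i.e. a fat pointer: the header carries the data
    // pointer, the length, and the capacity together, so "how long is this
    // buffer" has one answer everywhere instead of a per-project convention.
    func sum(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        data := []int{1, 2, 3, 4}
        fmt.Println(sum(data))      // 10
        fmt.Println(sum(data[1:3])) // 5; sub-slices carry their own length
        fmt.Println(len(data), cap(data))
    }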
What can you expect from WG14, when even one of C's own authors could not make it happen?
Notice how none of them stayed involved with WG14; they just did their own thing with C in Plan 9, and in Inferno C was only used for the kernel, with everything else done in Limbo, ending with minor contributions to Go's first design.
People who worship UNIX and C should spend some time learning that the authors themselves moved on, trying to fix the flaws they felt their original work suffered from.
Yeah it's worth emphasizing, if I spawn two threads, and both of them print a message when they finish (and don't interact with each other in any other way), that's technically a race condition. The output of my program depends on the order in which these threads complete. The question is whether it's a race that I care about.
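A minimal sketch in Go (purely illustrative): the two goroutines share no mutable data, yet the order of the two output lines still depends on scheduling, which is the benign kind of race described above.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for _, name := range []string{"worker A", "worker B"} {
            wg.Add(1)
            go func(name string) {
                defer wg.Done()
                // no shared mutable state here, but which line prints
                // first depends on which goroutine finishes first
                fmt.Println(name, "finished")
            }(name)
        }
        wg.Wait()
    }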
why would you need batteries included? the ai can code most integrations (from scratch, if you want), so if you need something slightly off the beaten path it's easy
I think the logic can be applied to humans as well as AI:
Sure, the AI _can_ code integrations, but it now has to maintain them, and might be tempted to modify them when it doesn't need to (leaky abstractions), adding cognitive load (in LLM parlance: "context pollution") and leading to worse results.
Batteries-included = AI and humans write less code, get more "headspace"/"free context" to focus on what "really matters".
As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.
Nonetheless, I'm positive in a couple of years we'll have found a way for LLMs to be equally good, if not better, with other frameworks. I think we'll find mechanisms to have LLMs learn libraries and projects on the fly much better. I can imagine crazy scenarios where LLMs train smaller LLMs on project parts or libraries so they don't get context pollution but also don't need a full-retraining (or incredibly pricey inference). I can also think of a system in line with Anthropic's view of skills, where LLMs very intelligently switch their knowledge on or off. The technology isn't there yet, but we're moving FAST!
> As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.
i have the exact opposite experience. it's far better to have llms start from scratch than use batteries that are just slightly the wrong shape... the llm will run in circles and hallucinate nonexistent solutions.
that said, i have had a lot of success having llms write opinionated (my opinions) packages that are shaped the way llms like (very little indirection, breadcrumbs to follow for code paths, etc), and then having the llm write its own documentation.
I don't even particularly care for Django, but darned if I'd want to reimplement on my own any of the great many problems they've thoroughly solved. It's so widely used that any weird little corner case you can think of has already been addressed. No way I'd start over on that.
special @asyncSuspend and @asyncResume builtins; those will be the low level detail you can build an evented io with.
the new Io is an abstraction over the higher level details that are common to sync, threaded, and evented io, so you shouldn't expect the suspension mechanism to be in it.