This seems pretty neat! Still holding out for a language with Go's runtime, compilation, and performance characteristics, but with syntax and semantics like Gleam's... Maybe one day
Unfortunately the current trend among new languages seems to be eschewing GC, which is a clear mistake IMO — we don't really need yet another low-level systems programming language, but we badly need the go-to GC'd lang, one that'd take the faults of Java and Go into account.
There are lots of languages that already do that: Kotlin, Dart, TypeScript, OCaml, D, Haskell, and the list goes on!
Non-GC languages OTOH are rare and we absolutely need more of them!
I like OCaml in theory a lot! This is not a bad suggestion. The problem is it doesn't have the awesome concurrency model of Go (it only just got regular threads recently), and IMHO the build and package management situation for OCaml isn't very good. Plus, I don't know, I just subjectively don't like using it. The ecosystem isn't very good either, and ecosystem is very important for me.
I understood the sibling comment recommending OCaml and to a lesser extent Borgo, but OP is looking for a high-level functional programming language, going by Gleam as the reference point. How does C# fit here?
I do think the compilation speed and runtime are at least in the same ballpark, but C#, while a perfectly fine language, is definitely not a functional language in syntax or semantics.
Not nearly o3 level. Much better than GPT-4, though! For instance Qwen 3 30b-a3b 2507 Reasoning gets 46 vs GPT-4's 21 and o3's 60-something on Artificial Analysis's benchmark aggregation score. Small local models ~30b params and below tend to benchmark far better than they actually work, too.
We know LLMs instruction-follow meaningfully and relatively consistently; we know they are in-context learners and also pull from their context window for knowledge; we also know that prompt phrasing and especially organization can have a large effect on their behavior in general; and we know from first principles that you can improve the reliability of their results by putting them in a loop with compilers / linters / tests, because they do actually fix things when you tell them to. None of this is equivalent to a gambler's superstitions. It may not be perfectly effective, but neither are a million other systems and best practices and paradigms in software.
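To make that loop concrete, here's a minimal sketch in Python. llm_complete() is a hypothetical stand-in for whatever model API you're using, and py_compile is just the cheapest possible checker; in practice you'd swap in a real linter or test suite:

    import subprocess

    def generate_until_it_compiles(task: str, max_attempts: int = 5) -> str:
        prompt = task
        for _ in range(max_attempts):
            code = llm_complete(prompt)  # hypothetical model call
            with open("attempt.py", "w") as f:
                f.write(code)
            # Cheapest possible check: does the output even compile?
            result = subprocess.run(
                ["python", "-m", "py_compile", "attempt.py"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return code  # compiles; hand off to tests next
            # Feed the error back in -- this is the "directable" part.
            prompt = f"{task}\n\nYour last attempt failed:\n{result.stderr}\nFix it."
        raise RuntimeError("no compiling attempt within budget")

Nothing magic about it: the model sees the compiler's complaint in its context and, as noted above, it does actually fix things when told to.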
Also, it doesn't "use" anything. It may be a feature of the program but it isn't intentionally designed that way.
Also, who sits around rerunning the same prompt over and over again to see if you get a different outcome like it's a slot machine? You just directly tell it to fix whatever was bad about the output and it does so. Sometimes initial outputs have a larger or smaller amount of bad, but still. It isn't really analogous to a slot machine.
Also, you talk as if the whole "do something -> might work / might not, stochastic to a degree, but also meaningfully directable -> dopamine rush if it does; if not goto 1" loop isn't inherent to coding lol
I don't think the "meme" that LLMs follow instructions inconsistently will ever die, because they do. It's in the nature of how LLMs function under the hood.
>Also, who sits around rerunning the same prompt over and over again to see if you get a different outcome like it's a slot machine?
Nobody. Plenty of people do like to tell the LLM that somebody might die if they don't do X properly, and other such faith-based interventions with their "magic box", though.
Boy do their eyes light up when they hit the "jackpot", too (LLM writes what appears to be the correct code on the first shot).
They're so much more consistent now than they used to be. Announcements for new LLMs almost always boast about how much better they are at "instruction following", and it really shows: I find the Claude 4.5 and GPT-5.x models do exactly what I tell them to most of the time.
That's the Ask Tog "study"[1]. It wasn't programmers, just regular users. The problem is he just says it, and of course Apple at the time of the Macintosh's development would have a strong motivation to prove mousing superior to keyboarding to skeptical users. Additionally, the experience level of the users was never specified.
I don't disagree with Varoufakis necessarily on his technofeudalism hypothesis, but he makes several claims in this video that are just ~completely false~ misleading and poorly cited, as far as I can tell, and that annoys me, so I'm just going to do my best to respond to them here as a sort of vent LOL.
1. He claims that Google and Facebook and the like only spend 1% of their revenue paying their employees, and that therefore any money that goes to them sort of stays out of the circular economy. As far as I can tell, there are absolutely no sources to back this up available online. Not from Google's official reporting, and not even from him: he himself just states the number, but doesn't explain how or where he got it from. In my search I haven't found a case where he cites it, either. ~All I can really find is that Google's operating expenses are around $261 billion as of this year[1], and their revenue was $385 billion[2] — and since operating expenses are usually at least substantially payroll, it's hard to tell.~ Edit: At least two commenters below did a quick back-of-the-napkin calculation, multiplying the median salary of a Google employee by the number of Google employees to get something like 36 billion, which is about 10% of the company's overall revenue. So maybe that's what he meant. But it would have been good for him to actually — first of all — not get the number an order of magnitude off, and second of all, to actually explain how he got that number!
2. Then he brings up the idea that someone's Tesla was remotely deactivated. This was debunked[3]. Edit: Another commenter pointed out that maybe he meant the story where full self-driving got remotely disabled on a car that was bought secondhand[7]. That does match the part where he mentions the car being sold secondhand before it got deactivated. So there's that. But again, it would have been good if he had actually gotten his facts straight.
3. He brings up this idea that Teslas sell your user data to Amazon. This is, at least, roundly contradicted by their legally binding privacy policy, and even according to the Mozilla Foundation there's no evidence of Tesla ever selling driver data to a third party[4] (although they've been very, shall we say, careless about it in at least two instances, but those don't resemble anything like what he's claiming). One random user on a Tesla owners' forum got freaked out because they saw the car sending data to "Amazon", but when they checked the IP addresses, it became clear the car was just sending information to AWS servers, which are almost certainly run and owned by Tesla, not Amazon[5], which is what a technically savvy person would assume to begin with anyway (that kind of check is easy to reproduce yourself; see the sketch after this list).
4. He argues that Volkswagen electric cars can't compete with Teslas because Volkswagen cars "don't have access to cloud capital", which he says gives Tesla an advantage because they do, based on point 3. But given that there's absolutely no evidence of that anywhere, I feel like his entire argument crumbles, because it becomes very unclear how Tesla is benefiting from cloud capital in a way Volkswagen is not. *Especially* since, according to the Mozilla Foundation, Volkswagen not only gathers much more data about you, but actually actively sells it to third parties for advertising purposes, which they openly admit[6].
"What does VW say they can do with this vast treasure trove of personal information, car data, and inferences they collect on you? Well, they use it to make more money, of course. Because selling cars isn't a big enough business these days, now, your personal information is another gold mine for all car companies to tap into. And tap into it they do. VW says they can use it for their own personalized and targeted advertising purposes or those or their affiliates, business partners, or other third parties. They can share it with third parties who can use it for the commercial purpose of marketing their products and services to you. They also say they can use or disclose your de-identified data for "any purpose." "
> All I can really find is that Google's operating expenses are around $261 billion as of this year[1], and their revenue was $385 billion[2] — and since operating expenses are usually at least substantially payroll, it's hard to tell
It's unlikely that this is mostly for payroll. From AI I got:
"median total compensation per employee was approximately $279,802 in 2022, and with 183,323 employees at the end of 2024, total estimated compensation (salary + equity + benefits) likely exceeds $50 billion, or ~14–15% of total revenue"
So maybe it's not 1% as in Varoufakis's talk, but even if it's ~15% of revenue, that's still quite low. Also keep in mind that this AI response includes equity (stock), so in that way the employee is also becoming an investor/shareholder.
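Spelling the arithmetic out with the figures quoted above (the median comp number is from 2022 while headcount and revenue are 2024, and median × headcount isn't exactly total payroll, so treat this as rough):

    median_comp = 279_802        # median total compensation, 2022
    headcount = 183_323          # employees at end of 2024
    revenue = 385_000_000_000    # ~$385B revenue, per the parent comment

    payroll_estimate = median_comp * headcount
    print(payroll_estimate)              # ~51.3 billion
    print(payroll_estimate / revenue)    # ~0.13, i.e. roughly 13-15%

Either way it comes out an order of magnitude above the 1% figure from the talk.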
Regarding the Tesla disabling, maybe he means the fact that Tesla remotely disabled FSD when someone sold their car. I thought I remembered Tesla disabling a vehicle completely, and I distinctly remember aspects of the story (the vehicle wouldn't do more than some low mph and told the driver to pull over), but I can't find any reference to that story now.
He made a slew of false claims before I turned off the video. Among them is the idea that companies like Facebook are merely valuable because of the "labor we provide". In fact, they make money through advertising and sharing data with advertisers. If socializing were all there was to value, these companies would be redundant. The service provided is content delivery in various forms; that isn't free, and it isn't something "anyone from the CS dept" could do just as well.
To call it "labor" to share boomer-humor memes and use Marketplace (i.e. users doing what they want on FB) is stretching the term. As with YouTube, the prolific creators on Instagram and the like also make piles of money. Yet it's being framed as though they're putting in all this effort for the company's benefit only.
Agreed. But I figured that slippery equivocation was obvious: the value provided by the big cloud platforms is an insane amount of engineering and system administration work to provide a reliable, large-scale way for people to connect and share data (that's why we all go there), as well as, as you say, advertising. The idea that any random CS department could replicate the Amazon Marketplace, AWS, or Facebook's infrastructure is absurd.
Those offroading.com links you shared don't actually cite any sources at all; I know nothing about that site, and it doesn't look all that reputable to me.
The first link contradicts the cited, and much more trustworthy, analysis from the Mozilla Foundation:
"Here's the good news with Tesla when it comes to privacy -- they very clearly state in their privacy documentation that they don't sell or rent your personal information to third parties ... Tesla makes other promises in their privacy that sound quite good. They say they won't share your personal information with third parties for their own use unless you opt-in (don't opt-in!). They say they don't "associate the vehicle data generated by your driving with your identity or account by default.""
For the second, there's no reliable evidence (i.e. not obviously AI-generated slop) that I've been able to find outside of offroading.com, at all, that there's any remote shutdown feature in Teslas, user accessible or not. They have PIN to Drive, Sentry Mode, etc., and they can remotely limit the speed to 50mph, but afaict there's no remote disable feature.
As you say, this is something they have the technical capability to implement, but (1) it would have to be a brand new feature; they don't have it yet and haven't done it; and (2) at that point basically any car company from Volkswagen to Toyota could equally do it, since all modern cars are internet-connected and computer-controlled now. So this introduces a parity that completely undercuts this guy's point about Tesla being uniquely bad.
Tesla is not uniquely bad - however, it is one of 5 companies that actually already maintains a kill-switch functionality: https://daxstreet.com/list/275984/5-cars-with-factory-kill-s.... The other 4 are BMW, GM, Ford and Chrysler. The tech is there and ready and can be used.
That doesn't contradict what I was saying, really. They can do all the things the table checkmarks them as doing and it doesn't mean they're selling data to any third parties. It just says they collect a lot of data, use it, have a bad track record with employees accidentally leaking parts of it, and so on.
My whole overall point is that this guy is making specific claims that are either outright false (e.g. the reason Tesla is doing better than VW EVs is that they make money off selling user data as a sideline, and in general bolster their business with cloud capital) or misleading / not correct in the way he needs them to be true.
> Tesla is not uniquely bad - however, it is one of 5 companies that actually already maintains a kill-switch functionality: https://daxstreet.com/list/275984/5-cars-with-factory-kill-s.... The other 4 are BMW, GM, Ford and Chrysler. The tech is there and ready and can be used.
This article is formatted in a way that makes me strongly think it is AI generated, but more problematically (since that's an imperfect indicator), when it makes claims like "[Tesla has] disabled vehicles used in crimes" or "The system includes redundant communication methods and can execute complex shutdown procedures that safely manage the vehicle’s transition from operation to immobilization.
Unlike simpler kill switches that merely cut ignition, Tesla’s system can coordinate with the vehicle’s autonomous driving features, regenerative braking, and battery management systems to ensure safe shutdown," it just says these things — it doesn't actually link to any news articles, primary sources, anything to substantiate them, and when I look them up, I don't find anything, as I said. So it seems to me as if you've found just another uncited tertiary resource saying the same things, but no meaningful evidence.
I don't know if it supports their particular point, but "Machine Decision is Not Final" seems like a very cool and interesting look at China's culture around AI.
In the West, we have autonomous systems that commit genocide, detecting and murdering "enemy combatants" at scale, where "enemy combatant" is defined as "male between the ages of 15 and 55".
Sometimes I'm not so sure about any so-called moral superiority.
Citation? Not saying you’re wrong but my time in defense left me very much with the opposite opinion (radar target acquisitions had to be approved by a human, always)
>less accurate and efficient than existing solutions, only measures well against other LLMs
Where did you hear that? On every benchmark that I've ever seen, VLMs are hilariously better than traditional OCR. Typically, the reason that language models are only compared to other language models on model cards for OCR and so on is precisely because VLMs are so much better than traditional OCR that it's not even worth comparing. Not to mention that top-of-the-line traditional OCR systems like AWS Textract are themselves extremely slow and computationally expensive, and much more complex to maintain.
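For context on how those comparisons are usually scored: OCR benchmarks mostly report character error rate (CER), the edit distance between the model's output and a ground-truth transcription, normalized by the reference length. A minimal sketch (plain dynamic-programming Levenshtein, nothing benchmark-specific):

    def levenshtein(a: str, b: str) -> int:
        # Classic DP edit distance: insertions, deletions, substitutions.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(
                    prev[j] + 1,                 # deletion
                    cur[j - 1] + 1,              # insertion
                    prev[j - 1] + (ca != cb),    # substitution
                ))
            prev = cur
        return prev[-1]

    def cer(reference: str, hypothesis: str) -> float:
        # Character error rate: lower is better.
        return levenshtein(reference, hypothesis) / max(len(reference), 1)

    print(cer("invoice #1234", "invoice #1284"))  # one substitution -> ~0.077

Whether the text came out of Textract or a VLM, it's the same yardstick, which is what makes the comparisons meaningful.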
>>tts, stt
> worse
Literally the first and only usable speech-to-text system that I've gotten on my phone is explicitly based on a large language model. Not to mention stuff like Whisper, WhisperX, Parakeet: all of the state-of-the-art speech-to-text systems are language-model based and are significantly faster and better than what we had before. Likewise for text-to-speech; even Kokoro-82M is faster and better than what we had before, and again, it's based on the same technology.
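To make the STT point concrete, the open-source Whisper models are about three lines to use (the audio filename here is just a placeholder):

    import whisper  # pip install openai-whisper

    model = whisper.load_model("base")        # small enough to run on CPU
    result = model.transcribe("meeting.mp3")  # placeholder audio file
    print(result["text"])

That's the entire pipeline that used to take a dedicated ASR stack.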