Are you sure the difference didn't mostly come down to being a tourist in temporary accommodation vs having access to a familiar grocery store and your home kitchen?
In Europe you don’t expect your bread to have added sugar, for instance. That tasted disgusting.
You also don’t normally expect sweeteners in your meat. Those sauces are also disgusting. Good beef (and there’s very good beef in the USA) needs only salt and maybe a bit of pepper, not the weird sugary sauces they put on it in the USA.
Seriously, for someone from Europe, some food in the USA is just disgusting, and it’s not due to the quality of the ingredients (those are usually very good) but due to the stuff they add on top.
All of the things you described are available, that's true, but any major supermarket, even in rural areas, will have plenty of healthier options as well.
Take bread for example. Sure, there will be some crappy sliced white bread on the shelf. But there will also be organic sprouted 7-grain high-fiber bread next to it. In fact, there will probably be more healthy varieties available than in just about any other country.
The options are there, but it can be exhausting to actually find them.
There are far too many products that try to position themselves as "healthy", but are closer to the rest of the crap on the shelves than actual "healthy" food. Even more frustrating is the insane amount of food now using sugar replacements to masquerade as a healthy option.
I personally find it exhausting to shop at new stores, because it can take looking at 2 to 5 items to find one that's actually healthy.
> In Europe you don’t expect your bread to have added sugar, for instance.
Were you eating sweet bread meant for coffee or desserts and thinking it was for making a sandwich? Most breads use just enough sugar to feed the yeast.
> You also don’t normally expect sweeteners in your meat.
Were you eating barbecue, where the sauce is the whole point? There is plenty of unsauced meat in the US. Any steakhouse will give you as much meat as you want without any sauce, unless you pour it on yourself.
Every day another city or village in 4 different states. I won't go into everything I saw or noticed while staying there. HN doesn't like criticism of the US.
The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was. He seemed to have drunk his own Kool-Aid back then.
And if he still doesn’t realize and admit he is wrong then he is just plain dumb.
I think there’s room for both points of view here. Going all in on visual processing means you can use it anywhere a person can go, and in other technologies as well; Optimus robots are just one example.
And he’s not wrong that roads and driving laws are all built around human visual processing.
The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla’s self-driving continued operating normally, makes a good case for the approach.
Didn't Waymo stop operating simply because they aren't as cavalier as Tesla, and they have much more to lose since they are actually self-driving instead of just driver assistance? Was the lidar/vision difference actually significant?
The reports I’ve read said that some continued to attempt to navigate with the traffic lights out, but that the vehicles all have a remote-confirmation step where they call home to confirm what to do. That ended up self-DDoSing Waymo, causing vehicles to stop in the middle of the road and at intersections with their hazards on.
So to clarify, it wasn’t entirely a lidar problem; it was the need to call home to navigate.
> roads and driving laws are all built around human visual processing.
And people die all the time.
> The recent example of a power outage in SF, where lidar-powered Waymos all stopped working when the traffic lights were out while Tesla’s self-driving continued operating normally, makes a good case for the approach.
Huh? Waymo is liable for injuries, so all their cars called home at the same time and DoS’d themselves rather than kill someone.
Tesla accepts no responsibility and does nothing.
I can’t see the logic that makes vision-only have anything to do with the lights being out. At all.
Yes... but people can only focus on one thing at a time. We don't have 360° vision. We have blind spots! We don't even know the exact speed of our car without looking away from the road momentarily! Vision-based cars obviously don't have these issues. Just because some cars are 100% vision doesn't mean they have to share all of the faults we have when driving.
That's not me in favour of one vs the other. I'm ambivalent and don't actually care. They can clearly both work.
They do, but the rate is extremely low compared to the volume of drivers.
In 2024 in the US there were about 240 million licensed drivers and an estimated 39,345 fatalities, which is 0.016% of licensed drivers. Every single fatality is awful but the inverse of that number means that 99.984% of drivers were relatively safe in 2024.
Tesla provided statistics on the improvements from their safety features compared to the active population (https://www.tesla.com/fsd/safety) and the numbers are pretty dramatic.
Miles driven before a major collision
699,000 - US Average
972,000 - Tesla average (no safety features enabled)
2.3 million - Tesla (active safety features, manually driven)
5.1 million - Tesla FSD (supervised)
It's taking something that's already relatively safe and making it approximately 5-7 times safer using visual processing alone.
Maybe lidar can make it even better, but there's every reason to tout the success of what's in place so far.
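For what it's worth, here's a minimal sketch checking the arithmetic behind those figures; the numbers are simply the ones quoted above and on Tesla's page, not independently verified:

    # Rough check of the ratios quoted above; all figures are copied
    # from the comment / Tesla's page, not independently verified.
    licensed_drivers = 240_000_000   # approx. US licensed drivers, 2024
    fatalities = 39_345              # estimated US traffic fatalities, 2024
    print(f"{fatalities / licensed_drivers:.3%} of licensed drivers")  # ~0.016%

    miles_per_major_collision = {
        "US average": 699_000,
        "Tesla, no safety features": 972_000,
        "Tesla, active safety, manual": 2_300_000,
        "Tesla FSD (supervised)": 5_100_000,
    }
    fsd = miles_per_major_collision["Tesla FSD (supervised)"]
    for label, miles in miles_per_major_collision.items():
        print(f"FSD vs {label}: {fsd / miles:.1f}x")
    # -> ~7.3x vs the US average, ~5.2x vs the Tesla no-features baseline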
No, you're making the mistake of taking Tesla's stats as comparable, which they are not.
Comparing "the subset of driving on only the roads where FSD is available, active, and did not turn itself off because of weather, road, traffic, or any other conditions" versus "all drivers, all vehicles, all roads, all weather, all traffic, all conditions"?
Or the accident stats that don't count as an accident any collision without airbag deployment, regardless of injuries? Including accidents that were serious enough that the airbags could not or did not deploy?
The stats on the site break collisions into major and minor; see the link above.
I have no doubt that there are ways to take issue with the stats. I'm sure we could look at accidents from 11pm - 6am compared to the volume of drivers on the road as well.
I wonder how much of their trouble comes from other failures in their plan (avoiding the use of pre-made maps and single city taxi services in favor of a system intended to drive in unseen cities) vs how much comes from vision. There are concerning failure modes from vision alone but it’s not clear that’s actually the reason for the failure. Waymo built an expensive safe system that is a taxi first and can only operate on certain areas, and then they ran reps on those areas for a decade.
Tesla specifically decided not to use the taxi-first approach, which does make sense since they want to sell cars. One of the first major failures of their approach was to start selling pre-orders for self driving. If they hadn’t, they would not have needed to promise it would work everywhere, and could have pivoted to single city taxi services like the other companies, or added lidar.
But certainly it all came from Musk’s hubris: first setting out to solve self-driving in all conditions using only vision, and then starting to sell it before it was done, making it difficult to change paths once so much had been promised.
> And if he still doesn’t realize and admit he is wrong then he is just plain dumb.
The absolute genius made sure that he can't back out without making it bleedingly obvious that old cars can never be upgraded to a LIDAR-based stack. Right now he's avoiding a company-killing class action suit by stalling, hoping people will get rid of HW3 cars (and you can add HW4 cars soon too) and pretending that those cars will be updated; but if you also need LIDAR sensors, you're massively screwed.
> The ways in which Musk dug himself in when experts predicted this exact scenario confirmed to me he was not as smart as some people think he was.
History is replete with smart people making bad decisions. Someone can be exceptionally smart (in some domains) and have made a bad decision.
> He seemed to have drunk his own Kool-Aid back then.
Indeed; but he was on a run of success built on repeatedly and deliberately going against established expertise and winning, so I imagine that Kool-Aid was pretty compelling.
To be frank, no one had a crystal ball back then, and things could have gone either way, with uncertainty in both hardware and software capabilities. Sure, lidars were better even back then, but the bet was on cameras catching up to them.
I hate Elon's personality and political activity as much as anyone, but from a technical point of view it is clear that he did logical things. Actually, the fact that he was mistaken and still managed not to bankrupt Tesla says something about his skills.
Fair. So in a sense, the lidar vs camera argument can ultimately be publicly assessed/proven through the need for a human babysitter (regulatory permits), accident rates, or maybe user adoption.
I relaunched one of my Dutch agricultural communities to reach a more international audience. I’m starting to see great traction and it’s very rewarding:
https://www.tractorfan.us
Because their manipulators only need a couple subjects for identity politics. If they sow too many seeds of doubt, the world becomes too complex again, while the goal is the opposite.
Eggs and meat products are way up in Europe (at least in NL) too, bird flu, government buyouts to reduce nitrogen emissions, etc. Here's a neat page with market prices for eggs: https://www.nieuweoogst.nl/marktprijzen/eieren.
On the other hand, potatoes are down to near zero this year (bullwhip effect, last year there were crop failures and prices were way up so farmers planted more potatoes). Doesn't necessarily translate to consumer prices but nobody considers potatoes to be expensive anyway.
People need to eat, and industrial-scale farming is what enables us to make enough affordable food.
It has plenty of downsides. But it’s a brilliant and truly efficient system that is being perfected by thousands of scientists, and it has prevented hunger and chaos for decades now.
If you want to see real change, people would need to have way more time, be less lazy, have more money and be less demanding when it comes to variety and availability.
In other words, it’s easier to keep perfecting the system we have because it’s easier to change procedures than it is to change people.
Didn't say we can do away with industrial-scale farming, just that I live in a part of Japan where people grow so much of their own food that they struggle to give it away by the end of the summer. So yeah, I didn't mean to imply industrial-scale farming is "naughty", just that Japan is notorious for smaller farming operations and I've seen a lot of food grown successfully here at small or micro scale. I think a healthy mix of both things is important.
Look at Australia: basically no one grows any food, and they're completely at the mercy of insanely inflated food prices dictated by corporations like Woolworths. At least in rural parts of Japan, a lot of people can lower their grocery costs with supplemental, home-grown food. I've actually noticed a bit of a rebellious culture amongst farmers here. It's interesting, but that's a different topic I guess.
I think they are positing that LLMs do not produce new thought. If a new framework (super magic new framework) is released, current LLMs will not be able to help.
Why wouldn't the LLM just read the source of the framework to answer questions directly? That's how I do things as a human. Given the appropriate background knowledge (which current LLMs are already extremely capable with), it should be pretty easy to understand what it's doing, and if it's not easy to understand the source, it's probably a bad framework.
I don't expect an LLM to have deep inbuilt knowledge of libraries. I expect it to be able to use a language server to find the right definitions and load them into context as needed. I expect it to have very deep inbuilt knowledge of computer science and architecture to make sense of everything it sees.
Because LLMs do not work like that: there's no "understanding" the source and answering questions; it simply "finds" similar results in its training data (matching it with the context) and regurgitates (some part of) it (+ other "noise").
Meaning as technology evolves and does things in novel ways, without explainers annotating it the LLM won't have anything to draw on - reducing the quality of answers. Which brings us full circle, what will companies use as training data without answers in places like SO?
I just downloaded "Degeneration in discriminantal arrangements", by Saito, Takuya from the journal "Advances in applied mathematics" dated November 2025 and fed it to Claude.
It not only explained the math but created a React app to demonstrate it. I'm not sure that can be explained by regurgitating part of it with noise.
I encourage you to try it with something of your own.
Abstract:
Discriminantal arrangements are hyperplane arrangements that are generalization of braid arrangements. They are constructed from given hyperplane arrangements, but their combinatorics are not invariant under combinatorial equivalence. However, it is known that the combinatorics of the discriminantal arrangements are constant on a Zariski open set of the space of hyperplane arrangements. In the present paper, we introduce (T, r)-singularity varieties in the space of hyperplane arrangements to classify discriminantal arrangements and show that the Zariski open set is the complement of (T, r)-singularity varieties. We study their basic properties and operations and provide examples, including infinite families of (T, r)-singularity varieties. In particular, the operation that we call degeneration is a powerful tool for constructing (T, r)-singularity varieties. As an application, we provide a list of (T, r)-singularity varieties for spaces of small line arrangements.
It's well known that even current LLMs do not perform well on logic games when you change the names / language used.
E.g. try asking it to swap the meanings of the words red and green, then ask it to describe the colors in a painting and analyse it with color theory; notice how quickly the results degrade, often attributing "green" qualities to "red" since it's now calling it "green".
What this shows us is that training data (where the associations are made) plays a significant role in the level of answer an LLM can give, no matter how good your context is (at overriding the associations / training data). This demonstrates that training data is more important (for "novel" work) than context is.
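As a concrete illustration, here is a minimal sketch of that swap test using the OpenAI Python client; the model name and prompts are placeholders, any chat-capable LLM API would work the same way, and the expected failure mode is the claim made above rather than a measured result:

    # Minimal sketch of the red/green word-swap test described above.
    # Assumes the OpenAI Python client; "gpt-4o" is a placeholder model name.
    from openai import OpenAI

    client = OpenAI()

    swap_rule = ("For the rest of this conversation, the word 'red' means the "
                 "color green and the word 'green' means the color red. "
                 "Use the swapped words consistently.")
    question = ("A painting shows a ripe tomato on fresh basil leaves. Using "
                "the swapped vocabulary, name each object's color and analyse "
                "the pair with complementary color theory.")

    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "system", "content": swap_rule},
                  {"role": "user", "content": question}],
    )
    print(reply.choices[0].message.content)
    # Per the claim above, answers tend to drift back toward the trained
    # associations (e.g. calling the tomato "red" or attributing green's
    # qualities to it), i.e. training data overriding the in-context rule.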
Write "This sentence is green." in red sharpie and "This sentence is red" in green sharpie on a piece of paper. Show it to someone briefly and then hide it. Ask them what color the first sentence said it was and what color the second sentence was written in.
Another one: ask a person to say 'silk' 5 times, then ask them what cows drink.
Exploiting such quirks only tells you that you can trick people, not what their capabilities are.
The point isn't that you can trick an LLM, but that their capabilities are more strongly tied to training data than context. That's to say, when context and training disagree, training "wins". ("wins" isn't the correct wording, but hopefully you understand the point)
This poses a problem for new frameworks/languages/whatever that do things in a wholly different way since we'll be forced to rely on context that will contradict the training data that's available.
What is an example of a framework that does things in a wholly different way? Everything I'm familiar with is a variation on well explored ideas from the 60s-70s.
If you had someone familiar with every computer science concept, every textbook, every paper, etc. up to say 2010 (or even 2000 or earlier), along with deep experience using dozens of programming languages, and you sat them down to look at a codebase, what could you put in front of them that they couldn't describe to you with words they already know?
Even the differences between React and Svelte are big enough for this to be noticeable. And Svelte is actually present in the training data. Given the large amount of React training data, Svelte performs significantly worse (yes, even when given the full official Svelte llms.txt in the context).
But it doesn't pose a problem. You are extrapolating things that are not even correlated.
You started with 'they can't understand anything new' and then followed it up with 'because I can trick it with logic problems' which doesn't prove that.
Have you even tried doing what you say won't work?
If I make up a riddle and ask an LLM to solve it, it will perform worse than a riddle that is well known and whose solution will be found in the dataset. That's just a foundational component of how they work.
But it’s almost trivial for an LLM to generate every question and answer combo you could ever come up with based on new documentation and new source code for a new framework. It doesn’t need StackOverflow anymore. It’s already miles ahead.
My recent experience with codex is that they absolutely do work that way today (this may be recent as in within the last couple months), and will autonomously decide to grep for things in your codebase to get context on changes you've asked for or questions you've asked. I've been pretty open to my manager about calling my upper management delusional with this stuff until very recently (in the sense that 6 months ago everything I tried was still a toy), but it's actually now reaching a tipping point that's drastically changing how I work.
LLMs can already do this without training. I recently uploaded a manual for an internal system, and it can perfectly answer questions from the context window.
So adding a new framework already doesn’t need human input. It’s artificial intelligence now, not a glorified search engine or autocomplete engine.
The issue here is borrowing from the future to pay for the present. The bicycle analogy (unless I'm missing something huge here) does not seem relevant at all.
How will ChatGPT/Copilot/whatever learn about the next great front-end framework? The LLMs know about existing frameworks by training on existing content (from StackOverflow and elsewhere). If StackOverflow (and elsewhere) go away, there's nothing to provide training material.