There are multiple self-driving car companies running fully autonomous services in several cities in the US and China. Waymo has been operating for years.
Full self-driving systems from multiple companies have been in operation with human driver oversight.
And the LLMs' capabilities with regard to your specific examples were demonstrated below.
The public's inability to perceive or accept the actual state of the technology, due to bias or cognitive issues, is holding society back.
A lot of it is mistrust and fear, too: "a computer could never be as good at driving as a person!"
And yet, over the years many things have just been accepted. Satnav, for example: I grew up with my mom holding the map in her lap, or my dad writing down directions. Later on we had a route planner on diskettes (I think) and a printout of the route. And my dad has now had a satnav in his car for nearly two decades. I'm sure they, like everyone else, ran into satnav's quirks, but I don't think there was nearly as much "fear" and doubt about satnav as there is about self-driving cars and, nowadays, LLMs / coding agents. Or maybe I'm misremembering and wearing rose-tinted glasses; I also remember the brouhaha over people driving into canals because the satnav told them to turn left.
It's yet to be proven that the oil situation there is really going to change. It could turn out to be a classic "Mission Accomplished" moment, where X weeks or months down the line we see an actual invasion.
That pattern in particular is grating when it keeps repeating. But I don't think LLM writing necessarily needs to have that pattern if you give it instructions not to do it and/or have a small review and edit workflow, as in the sketch below.
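For example, here's a minimal sketch of the kind of two-pass draft-then-review workflow I mean, using the OpenAI Python SDK. The model name and the style rule are placeholders; substitute whatever model and patterns you actually care about:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder rule; list whatever constructions you want to ban.
    STYLE_RULES = "Do not use the 'It's not X, it's Y' contrast construction."

    def draft_then_edit(prompt: str) -> str:
        # First pass: draft with the style rules stated up front.
        draft = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": STYLE_RULES},
                {"role": "user", "content": prompt},
            ],
        ).choices[0].message.content

        # Second pass: review the draft against the same rules and rewrite.
        return client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are an editor. Rewrite the "
                 "draft so it follows the rules, changing nothing else."},
                {"role": "user", "content": f"Rules:\n{STYLE_RULES}\n\nDraft:\n{draft}"},
            ],
        ).choices[0].message.content

The second pass catches the cases where the instruction alone isn't enough, which in my experience is where most of the residual pattern comes from.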
A construction worker is a spectacularly bad analogy for a software engineer.
The architect and structural engineers design the building well in advance. Construction workers mainly arrange materials according to a prewritten design.
Software engineers are not given specs that are equivalent to blueprints. They are given requirements or user stories, and they have to flesh out the real, final specification as they go.
And then, from that specification, they have to decide how to implement it, which is not determined ahead of time at all.
Also, what software engineers are building is almost always somewhat novel, at least dramatically more so than a typical building. It very often involves some type of research task, even if that is just sifting through components and configuring them.
There is much more room in software engineering for 1) miscommunication or poor communication of users' needs, 2) substantive tradeoffs discovered in the technical details, 3) subtle contradictions in requirements from different stakeholders discovered during implementation, 4) better understanding of requirements by users during prototyping, etc.
You didn't really read what he wrote or think about it; you just took it as an opportunity to dismiss him as old. He was just being humble. It's relatively new to everyone. At least you're honest about your ageism.
I am sure Karpathy can and does leverage AI as well as or better than you. I probably do too, and I am 48.
This is also the type of thing that makes having separate software architects who aren't actually maintaining the software a generally nonsensical idea.
There are too many decisions, technical details, and active changes to have someone come in and give direction from on high at intervals.
Maybe at the beginning it could sort of make sense, but projects have to evolve, and more often than not something important is discovered early in the implementation or when adding "easy" features. If someone is good at software design, you may need them even more at that point; but they can easily be detrimental if they are not closely involved in and following the rest of the project's details.
I guess I'm lucky not to have worked at a place with a role for software architects who don't actually write code. I honestly don't know how that would work.

However, I think I can appreciate the author's point. Any sufficiently complex piece of existing software is kind of like a chess game in progress. There is a place for general principles of chess strategy, but once the game is going, general strategy is much less relevant than specific insight into the current state of play, and a player would probably not appreciate advice from someone who has read a lot of chess books but hasn't looked at the current state of the board.
The best "architects" serve as facilitators, rather than deciding themselves how software is built. They have to be reading the code, but they don't themselves have to be coding to be effective.
You don't need one until you've got 30-70 engineers, but a strong group of collaborative architects is the most important thing for keeping software development effective and efficient at the 30-1,000 engineer range.
I think you can write whatever you want in a license. Lawyers and tradition don't have supernatural powers or anything. So you could say something like "Non-exclusive, non-revocable license to use this code for any purpose without attribution or fees, as long as that purpose is not training AI, which is never permissible."
There's little to no chance anyone involved in training AI will see that or really care, though.