While I agree with the points he's raising, let me play devil's advocate.
There's a lot more code being written now that's not counted in these statistics. A friend of mine vibe coded a writing tool for himself entirely using Gemini Canvas.
I regularly vibe code little analyses or scripts in ChatGPT which would have required writing code earlier.
None of these are counted in these statistics.
And yes, AI isn't quite good enough to supercharge app creation end to end. Claude has only been good for a few months. That's hardly enough time for adoption!
This would be like analysing the impact of languages like Perl or Python on software 3 months after their release.
It kind of depends. You can broadly call any kind of search “reasoning”. But search requires 1) enumerating your possible options and 2) assigning some value to those options. Real-world problem solving makes both of those extremely difficult.
Unlike in chess, there's a functionally infinite number of actions you can take in real life. So just taking the argmax over possible actions is going to be hard.
Two, you have to have some value function for how good an action is in order to take that argmax. But many actions are impossible to know the value of in practice because of hidden information and the chaotic nature of the world (the butterfly effect).
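To make that concrete, here's a minimal sketch of what "reasoning as search" boils down to. The names possible_actions and value are hypothetical stand-ins, not anyone's actual API; both pieces are easy to define for a board game and very hard to define for real-world problems:

    # Minimal sketch of "reasoning as search": enumerate options, score them, pick the best.
    # `possible_actions` and `value` are hypothetical stand-ins; defining them is the hard part.
    def choose_action(state, possible_actions, value):
        # 1) enumerate the options -- finite and small in chess/Go,
        #    functionally infinite in real life
        actions = possible_actions(state)
        # 2) score each option -- needs a value function, which hidden
        #    information and chaos make hard to compute in practice
        return max(actions, key=lambda action: value(state, action))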
Go is played on a 19x19 board. At the beginning of the game the first player has 361 possible moves. The second player then has 360 possible moves. There is always a finite and relatively “small” number of options.
I think you are thinking of the fact that it had to be approached differently from minimax in chess, because a brute-force decision tree grows way too fast to perform well. So they had to learn models for actions and values instead.
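A rough back-of-the-envelope comparison (using the commonly cited average branching factors of roughly 35 for chess and 250 for Go) shows why enumerating the tree was never going to work:

    # Why brute-force minimax explodes in Go: compare commonly cited
    # average branching factors (~35 for chess, ~250 for Go) over 10 plies.
    CHESS_BRANCHING = 35
    GO_BRANCHING = 250
    PLIES = 10  # five moves per side

    print(f"chess, {PLIES} plies: ~{CHESS_BRANCHING ** PLIES:.2e} positions")
    print(f"go,    {PLIES} plies: ~{GO_BRANCHING ** PLIES:.2e} positions")
    # ~2.8e15 vs ~9.5e23 -- hence learned policy (action) and value
    # networks instead of exhaustive search.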
In any case, Go is a perfect information game, which as I mentioned before, is not the same as problems in the real world.
It’s writing most of my code now. Even if it’s existing code you can feed in the 1-2 files in question and iterate on them. Works quite well as long as you break it down a bit.
It's not gaslighting; the latest versions of GPT, Claude, and Llama have gotten quite good.
These tools must be absolutely massively better than whatever Microsoft has, then, because I've found that GitHub Copilot provides negative value. I'd be more productive just turning it off rather than auditing its incorrect answers, hoping one day it's as good as people market it as.
> These tools must be absolutely massively better than whatever Microsoft has then
I haven't used anything from Microsoft (including Copilot) so I'm not sure how it compares, but compared to any local model I've been able to load, and various other remote third-party ones (like Claude), nothing comes near GPT-4 from OpenAI, especially for coding. Maybe give that a try if you can.
It still produces overly verbose code and doesn't really think about structure well (kind of like a junior programmer), but with good prompting you can address that somewhat.
Probably these services are so tuned (not as in "fine-tuned" ML style) to each individual user that it's hard to get any sort of collective sense of what works and what doesn't. Not having any transparency whatsoever into how they tune the model for individual users doesn't help either.
My employer blocks ChatGPT at work and we are forced to use Copilot. It's trash. I use Google Docs to communicate with GPT on my personal device. GPT is so much better. Copilot reminds me of GPT-3. Plausible, but wrong all the time. GPT-4o and o1 are pretty much bang on most of the time.
The revolution will happen regardless. If you participate you can shape it in the direction you believe in.
AI is the most innovative thing to happen in software in a long time.
And personally, AI is FUN. It sparks joy to code using AI. I don't need anyone else's opinion; I'm having a blast. It's a bit like Rails for me in that sense.
This is HACKER news. We do things because it’s fun.
I can tackle problems outside of my comfort zone and make it happen.
If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P
"I'm tired of 3D TV" - Someone in 2013. (After a big push by the industry in 2010, 3D TV peaked in 2013 and went into rapid decline, with the last hardware produced in 2016.)
Sometimes, the hyped thing doesn't catch on, even when the industry really, really wants it to.
That's an interesting example. I would argue that 3D TV as a "solution" didn't work, but 3D as a "problem" is still going strong, and with new approaches coming out all the time (most recently Meta's announcement of the Orion AR glasses), we'll gradually see extensive adoption of 3D experiences, which I expect will eventually loop back to some version of 3D films.
EDIT: To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.
> To clarify my analogy, GenAI is definitely a "problem" rather than a particular solution, and as such I expect it to have longevity.
Hrm, I'm not sure that's true. "An 'AI' that can answer questions" is a problem, but IMO it's not at all clear that LLMs, with their inherent tendency to make shit up, are an appropriate solution to that problem.
Like, there have been previous non-LLM chatbots (there was a small bubble based on them a while back, in which, for a few months, everyone was claiming to be adding chat to their things; it kind of came to a shuddering halt with Microsoft Tay). It seems slightly peculiar to assume that LLMs are the ultimate answer to the problem, especially as they are not actually very good at it (in some ways, they're worse than the old-gen).
Ah, but, at least for generative AI, that kind of remains to be seen, surely? For every hyped thing that actually is The Future (TM), there are about ten hyped things which turn out to be Not The Future due to practical issues, cost, pointlessness once the novelty wears off, overpromising, etc. At this point, LLMs feel like they're heading more in that direction.
And 5 years ago, people used blockchain to operate a toaster. It remains to be seen the applications that are optimal for LLMs and the ones where it's being shoehorned into every conceivable place because "AI."
At no point does the author suggest that AI is not going to happen or that it is not useful. He expresses frustration with marketing, false promises, pitching of superficial solutions for deep problems, and the usage of AI to replace meaningful human interactions. In short, the text is not about the technology itself.
Oh wait, news flash, not all technological developments are good ones, and we should actually evaluate each one individually.
AI is shit, and some people having fun with it does not balance against its unusual efficacy in turning everything into shit. Choosing to do something because it's fun, without regard to the greater consequences, is the sort of irresponsibility that has gotten human society into such a mess in the first place.
Anything you want to look at over time. A social network is a classic: how a person's connections evolve over time or respond to temporal covariates.
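For example, here's a minimal networkx sketch (toy data, with a hypothetical "year" edge attribute) of that idea: timestamp each edge and take snapshots to see how someone's connections evolve:

    # Toy sketch of a social network evolving over time: edges carry a
    # "year" attribute, and snapshots show how connections change.
    import networkx as nx

    G = nx.Graph()
    G.add_edge("alice", "bob", year=2019)
    G.add_edge("alice", "carol", year=2021)
    G.add_edge("bob", "dave", year=2022)

    def snapshot(G, year):
        # Keep only the edges that exist by the given year.
        return G.edge_subgraph(
            (u, v) for u, v, y in G.edges(data="year") if y <= year
        )

    print(sorted(snapshot(G, 2020).neighbors("alice")))  # ['bob']
    print(sorted(snapshot(G, 2022).neighbors("alice")))  # ['bob', 'carol']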
Loads of stuff can be expressed as a graph, though. I'm interested in natural language corpora as graphs, but as others have mentioned, there are neural networks here too. There's something for whatever you fancy these days.
There’s a giant caveat here - this assumes that the current LLM architecture is enough to bootstrap to those higher levels of intelligence.
LLMs are incapable of some pretty simple things at this point, and it's a big question mark whether they are even architecturally capable of sophisticated reasoning and planning.
GPT-4 cannot play a good game of tic-tac-toe. But it can play passable chess. This is a good point to ponder.