>> The skillset is shifting from implementing algorithms to knowing how to ask the AI the right questions and verify its output.
The question is: how much faster is verification-only compared to writing the code by hand? You gain a lot of understanding when you write the code yourself, and understanding is a prerequisite for verification. The idea seems to be that a quick review ("LGTM") is all that should be needed. That's fine, as long as you understand the tradeoffs you're making.
With today's AI you either trade correctness for speed, or you accept a more modest (and highly project-specific) productivity boost.
And there are a ton of human incentives here to take shortcuts in the review part. The process almost pushes you to drop your guard: you spend less time physically observing the code as it's written, you get huge chunks of code dropped on you at once, iterations change too much to keep a mental model, and there's FOMO about the speed gain you're supposed to be getting... We're going to see worse review quality simply as a matter of the tool's UX and friction.
Yes! It depends on the company, of course, but I think plenty of people are going to fall for the perverse incentives while reviewing AI output for tech debt.
The perverse incentives being that tech debt is non-obvious & therefore really easy to avoid responsibility for.
Meanwhile, velocity is highly obvious & usually tied directly to personal & team performance metrics.
The only way I see to resolve this is strict enforcement of a comprehensive QA process during both the planning & iteration of an AI-assisted development cycle.
But when even people working at Anthropic are talking about running multiple agents in parallel, I get the idea that CTOs are not taking this seriously.
In my experience (programmer since 1983), it's massively faster to leverage an LLM and obtain quality code when working with technology that I'm proficient in.
But when I don't have expertise, it's the same speed or even slower. The better I am at something, the faster the LLM coding goes.
I'm still trying to get better at Rust, and I'm past break-even now. So I could use LLMs for a speed boost. But I still hand-write all my code because I'm still gaining expertise. (Here I lean into LLMs in a student capacity, which is different.)
Related to this, I often ask LLMs for code reviews. The number of suggestions it makes that I think are good is inversely proportional to the experience I have with the particular tech used. The ability to discard bad suggestions is valuable.
This is why I think being an excellent dev with the fundamentals is still important—critical, even—when coding with LLMs. If I were still in a hiring role, I'd hire people with good dev skills over people with poor dev skills every time, regardless of how adept they were at prompting.
This is a really cool idea! I remember once looking at all of the actual concrete actions taken and being surprised at what was actually done (not very much at the time!). Maybe it is better now.
I posted this earlier but realized that I had pointed it at a different "What are you working on" post. I think this one (david927) is the de facto "official" one.