Every time someone says “AI built in an hour what took us a year,” what they really mean is that humans spent a year doing the hard thinking and the AI merely regurgitated it at silicon speed. Which is, of course, completely different from productivity.
Also, if it truly took your team a year, that probably says more about your process than about AI. But not in a way that threatens my worldview. In a different way. A safer way.
Let’s be clear: writing the code is the easy part. The real work is the meetings, the alignment, the architectural debates, the Jira grooming, the moral struggle of choosing snake_case vs camelCase. Claude didn’t do any of that. Therefore it didn’t actually do anything.
I, personally, have spent years cultivating intuition, judgment, and taste. These are things that cannot be automated, except apparently by a probabilistic text model that keeps outperforming me in domains I insist are “subtle.”
Sure, the output works. Sure, it passes tests. Sure, it replaces months of effort. But it doesn’t understand what it’s doing. Unlike me, who definitely understands everything I copy from Stack Overflow.
Also, I tried AI last year and it hallucinated once, so I’ve concluded the entire field has plateaued permanently. Technology famously never improves after an early bad demo.
Anyway, I remain unconcerned. If AI really were that powerful, it would have already made me irrelevant, and since I still have a job, this must all be hype. QED.
Now if you’ll excuse me, I need to spend the afternoon explaining why a tool that just invalidated a year of human labor is “just autocomplete.”
>I, personally, have spent years cultivating intuition, judgment, and taste.
Exactly. I'm using AI to produce a lot of good code, and I love it. But the AI makes silly oversights or has gaps in its logic that someone with hands-on experience would catch right away.
I've been debugging some web server stuff for hours, and the AI never asks me for the logs or the --verbose output, which is insane. Instead it comes up with hypothetical causes for the problem and then confidently states the solution.