sanderjd's comments | Hacker News

The political will of a plurality of American voters elected this administration a little over a year ago. We don't have direct evidence of what the current political will of a plurality of American voters is. We have some indirect evidence via polls and off-cycle elections that this is not the political will of most Americans. We'll have stronger evidence after the vote for the full House and third of the Senate later this year.

Personally, I don't think we have evidence yet that the democratic process in the US is broken. I have concerns as many people do, but the recent off-cycle elections went off just fine.


Yeah, this is a big thing. AIs (at the moment) don't learn. You wait for a new model to come out and hope it is better than the last one. But that isn't the same thing as a person learning over time.

> AIs (at the moment) don't learn.

Yes, and even when it does learn (because there's a new version of the AI model), it doesn't learn according to your company or team's values. Those values might be very specific to your business model.

Currently, AI (LLM) is just a tool. It's a novel and apparently powerful tool. But it's still just a tool.


Yeah this is truly my biggest concern. I think it's really bad.

I think the better question is whether people who are good at using AI tools to accomplish things with computers will find it harder to get jobs. Maybe, but I don't think so. I think this skill set will be useful in every line of work.

That doesn’t solve the problem. It’s easy enough to be “good enough” at AI tools just like it’s easy enough to be a decent enterprise CRUD full stack/back end/mobile developer. It will still be hard to stand out from the crowd.

I saw this coming back in 2015 on the enterprise dev side, where most people work. Not AI of course, but the commoditization of development.

I started moving closer to the “business”, got experience in leading projects, soft skills, requirements gathering, AWS architecture etc.

I’m not saying the answer is to “learn cloud”. I am saying that it’s important to learn people skills, to be the person trusted with strategy, and not to be just a code monkey pulling well-defined tickets off the board.


My point is: I don't think there will be way more jobs for "AI developers", I think there will be plenty of jobs for people who are employed in an industry and adept with using AI tools to be effective at their job. These people would not be differentiating themselves from other "AI developers", but from other people who do their role in whatever industry they are in, but who aren't as adept with these tools.

Yes!

I think the next step is to realize that this kind of product manager role is one that more "engineers" should be willing to take on themselves. It's understandable that user interviews, research, and product requirement docs aren't obviously in the wheelhouse of technical people, but building lots of prototypes and getting feedback is a much better fit!


Yeah, I also sense this disconnect between the reality and the hype.

In part, I think what people are responding to is the trajectory of the tools. I would agree that they seem to be on an asymptote toward being able to do a lot more things on their own, with a lot less direction. But I also feel like the improvements in that direction are incremental at this point, and it's hard to predict when or if there will be a step change.

But yeah, I'm really not sure I buy this whole thing about orchestrating a symphony of agents or whatever. That isn't what my usage of AI is like, and I'm struggling to see how it would become like that.

But what I am starting to see is "non-programmers" beginning to realize that they can use these tools to do things for their own work and interests, things they would previously have hired a programmer to do or, more likely, just decided weren't worth the effort. I think for those people, it does feel like a novel automation tool. It's just that we all already knew how to do this, by writing code. But most people didn't know how to do that. And now they can do a lot more.

And I think this is a genuine step change that will have a big effect on our industry. Personally, I think this is ultimately a very good thing! This is how computers should work, that anybody can use them to automate stuff they want to do. It is not a given that "automating tasks" is something that must be its own distinct (and high paying) career. But like any disruption, it is very reasonable to feel concerned and uncertain about the future when you're right in the thick of it.


> The flip scenario: AI unlocks massive demand for developers across every industry, not just tech. Healthcare, agriculture, manufacturing, and finance all start embedding software and automation. Rather than replacing developers, AI becomes a force multiplier that spreads development work into domains that never employed coders. We’d see more entry-level roles, just different ones: “AI-native” developers who quickly build automations and integrations for specific niches.

This is what I expect to happen, but why would these entry-level roles be "developers"? I think it's more likely that they will be roles that already exist in those industries, where the responsibilities include (or at least benefit from) effective use of AI tools.

I think the upshot is that more people should probably be learning how to work in some specific domain while learning how to effectively use AI to automate tasks in that domain. (But I also thought this is how the previous iteration of "learn to code" should be directed, so maybe I just have a hammer and everything looks like a nail.)

I think dedicated "pure tech" software, where the domain is software itself rather than some other thing, is more likely to be concentrated in companies building all the infrastructure that is still being built to make this all work. That is, the models themselves and all the surrounding tools, and all the services and databases etc. that are used to orchestrate everything.


An AI-enabled developer is still a full-time job that requires SWE expertise. I think the quoted portion is correct, but it will be a gradual change as CTO/CIOs realize the arbitrage opportunity in replacing most of their crappy SaaS subscriptions with high-velocity in-house solutions. The savvy ones, at least.

This is true if you want to build professional software. But what I foresee is a lot more tasks being accomplished with task-specific tools created by the people responsible for doing those tasks. Like how people use spreadsheets to get their jobs done, but with a much broader set of use cases.

Once it is easier to just make almost anything yourself than it is to go through a process of expressing your requirements to a professional software development group and iterating on the results, that will be a popular choice.
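To make that concrete, here's a rough sketch of the kind of single-purpose tool I mean. The file name and column names are made up, and this is just an illustration of the "spreadsheet replacement" idea, not anyone's actual workflow:

    # Hypothetical example: total up hours per person from a timesheet export.
    # "timesheet_export.csv" and its "name"/"hours" columns are assumed, not a real format.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    with open("timesheet_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            totals[row["name"]] += float(row["hours"])

    # Print the biggest totals first.
    for name, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {hours:.1f} hours")

Nothing about that needs an on-call rotation or a SaaS contract; it just has to be good enough for the one person (or small team) running it.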


That works until it’s important enough to secure, monitor, operate, debug, and have people on call in case it breaks.

At that point it gets handed over to the engineers.


Yes, but I think the reason things become that important is when they are used by a lot of people. People (or small teams) building purpose-specific tools for themselves don't require any of that.

The same pattern held through the early days of "high level" languages that were compiled to assembly, and then the early days of higher level languages that were interpreted.

I think it's a very apt comparison.


If the same pattern held, then it ought to be easy to find quotes to prove it. Other than the one above from Hamming, we've been shown none.

Read the famous "Story of Mel" [1] about Mel Kaye, who refused to use optimizing assemblers in the late 1950s because "you never know where they are going to put things". Even in the 1980s you used to find people like that.

[1] https://en.wikipedia.org/wiki/The_Story_of_Mel


The Story of Mel counts against the narrative because Mel was so overwhelmingly skilled that he was easily able to outdo the optimizing compiler.

I don't think that does count against the narrative? The narrative is just that each time we've moved up the abstraction chain in generating code, there have been people who have been skeptical of the new level of abstraction. I would say that it's usually the case that highly skilled operators at the previous level remain more effective than the new adopters of the next level. But what ends up mattering more in the long run is that the higher level of abstraction enables a lot more people to get started and reach a basic level of capability. This is exactly what's happening now! Lots of experienced programmers are not embracing these tools, or are, but are still more effective just writing code. But way more people can get into "vibe coding" with some basic level of success, and that opens up new possibilities.

The narrative is that non-LLM adopters will be left behind, lose their jobs, are Luddites, yadda yadda yadda because they are not moving up the abstraction layers by adopting LLMs to improve their output. There is no point in the timeframe of the story at which Mel would have benefitted from a move to a higher abstraction level by adopting the optimizing compiler because its output will always be drastically inferior to his own using his native expertise.

That's not the narrative in this thread. That's a broader narrative than the one in this thread.

And yes, as I said, the point is not that Mel would benefit; it's that each time a new, higher level of abstraction comes onto the scene, it is accessible to more people than the previous level. That was the pattern going from machine code to symbolic assembly, from assembly to compiled languages, from those to higher-level languages, and now with "prompting".

The comment I originally replied to implied that this current new abstraction layer is totally different than all the previous ones, and all I said is that I don't think so, I think the comparison is indeed apt. Part of that pattern is that a lot of new people can adopt this new layer of abstraction, even while many people who already know how to program are likely to remain more effective without it.


The author meant that you can't just tell a model "do everything that is necessary to achieve 99.95% uptime". It can certainly help you brainstorm issues and solve them, but you can't "just" prompt it.

Yep agreed, totally changes build vs buy decisions. Which is not to say it's always "build" now, but the calculation has changed.
