Hacker News | elktown's comments

> Software isn't bad because engineers don't care.

Caring is certainly a wide spectrum. I see the Handmade stuff as being proudly on the far end of it.


I don't disagree; the field has become lucrative enough that it has attracted people who are interested in the money and not the craft.

I'd use "unrealistic" to describe Handmade; "proud" is also accurate and works too.


> the field has become lucrative enough that it has attracted people who are interested in the money and not the craft

Yup, exactly.

> I'd use "unrealistic" to describe Handmade; "proud" is also accurate and works too

In certain settings, definitely. But even in those corporate settings where it's unrealistic, I'd rather work with one than not. If not applied dogmatically, that corner of the corp has a good chance of being an oasis. But a fleeting one, perhaps.


I wonder whether antirez, being a literal AI sci-fi author, acknowledges that there's possible bias and a willingness to extrapolate here? That said, I respect his work immensely and I put a lot of weight on his recommendations. But I'd really prefer the hype fog that's clouding the signal [for me] to dissipate a bit - maybe economic realities will sort this out soon.

There's also a short-termism aspect of AI generated code that's seemingly not addressed as much. Don't pee your pants in the winter to keep warm.


> Greenland has tremendous mineral resources and Denmark doesn’t have the capacity to extract them. If the U.S. decides that it needs to keep China from stepping in and developing Greenland’s resources, then it will do what it needs to do. Trump will just be boisterously upfront about it.

"We've always been at war with Eastasia"


> A solution that simultaneously solves the problem and reduces complexity is almost the definition of genius.

Well put. Chasing "How simple can we make this?" is a large part of what makes this job enjoyable to me. But it's perhaps not good career advice.


Yeah, "resume driven development" is a second major force pushing complexity that I didn't mention. People want to be able to get experience with as many buzzwords and technologies and stacks as they can for obvious personal self interest reasons.

The incentive is real. A great programmer who does a great job simplifying and building elegant maintainable systems might not get hired because they can't say they have X years experience with a laundry list of things. After all, part of their excellence was in making those things unnecessary.

It's a great example of a perverse incentive that's incredibly hard to eliminate. The net effect across the industry is to cost everyone money and time and frustration, not to mention the opportunity cost of what might have been had the cognitive cycles spent wrangling complexity been spent on polish, UI/UX, or innovation.

There's also a business- and VC-level version of this. Every bit of complexity represents a potential niche for a product, service, or startup. You might call this "product-portfolio-driven development," which is just the big brother of "resume-driven development."


Again, very well put. It often becomes a chain reaction as well.

These threads make it depressingly obvious how "might makes right" is the main underlying principle in the end - albeit periodically latent. Suddenly proportionality disappears and it's one of the worst regimes out there, a narco-state. Obviously unlawful actions are reported as "legally questionable", etc. It doesn't even matter that the current US administration is an unusually vulgar example of erratic, dishonest, and self-serving leadership.


Maduro making himself dictator was also a "might makes right" move tbh.


how "might makes right" is the main underlying principle in the end

This is not surprising; this is how society ultimately works, even internally, not just on the international scale.

I live in a democracy. I could still name several laws of the land that I consider fundamentally unjust, but the might of the majority translated into political and physical power means that I have to obey them, right or wrong. It is better that this power is controlled democratically and not by a single autocrat or a single ruling party, but it is still fundamentally coercion.

Are there even any alternatives? Ultimately we cannot all agree on what is right for everyone.


My point was to highlight the double standards of this kind of after-the-fact reporting and discussions. I'm cynical enough to know that "might makes right" is a part of life to various degrees.


> It's the folks who are arguing with you trying to say that AI is useless who will quickly lose their jobs.

Why is it that in every hype cycle there are always guys like you who want to punish the non-believers? It's not enough to be potentially proven correct; your anger requires the demise of the heretics. It was the same story with cryptocurrencies.


He/she is probably one of those poor souls working for an AI-wrapper-startup who received a ton of compensation in "equity", which will be worth nothing when their founders get acquihired, Windsurf style ;) But until then, they get to threaten us all with the impending doom, because hey, they are looking into the eye of the storm, writing Very Complex Queries against the AI API or whatever...


Isn’t this the same type of emotional response he’s being accused of? You’re speculating that he will be “punished” just as he speculated about you.

There’s emotion on both sides, and the goal is to call it out, throw it to the side, and cut through to the substance. The attitude should be “Which one of us is actually right?” rather than the “I’m right and you’re a fucking idiot” attitude I see everywhere.


Mate, I could not care less if he/she got "punished" or not. I was just speculating about what might be driving someone to go and try to answer each and every one of my posts with very low-quality comments, reeking of desperation and "elon-style" humour (cheap, cringe puns). You are assuming too much here.


Maybe he was just assuming something negative as well.

Both certainly look very negative and over the top.


Not too dissimilar to you. I wrote long rebuttals to your points and you just descended into put-downs, stalking, and false accusations. You essentially told me to fuck off from all of HN in one of your posts.

So it’s not like your anger is any better.


I have a hard time trusting the judgement of someone writing this:

> I no longer write code. I’ve been a swe for over a decade. AI writes all my code following my instructions. My code output is now expected to be 5x what it was before because we are now augmented by AI. All my coworkers use AI. We don’t use ChatGPT we use anthropic. If I didn’t use AI I would be fired for being too slow.

https://news.ycombinator.com/item?id=46175628


You should drop the prejudice and focus on being aware of the situation. This is happening all over the world; most people who have crossed this bridge just don’t share, just like they don’t share that they’ve brushed their teeth this morning.


I think I'll keep defaulting to critical thinking rather than some kinda pseudo-religious "crossing the bridge" talk.


Just a metaphor - he used to code by hand, now he doesn't, but he still produces software. Keep religion out of this.


No one shrugs off 5x like brushing one's teeth in the morning. That makes no sense.


You're confusing critical thinking with having an axe to grind, it seems. Bye.


People are sharing it. Look at this entire thread. It’s so conflicted.

We have half the thread saying it’s 5x and the other half saying they’re delusional and lack critical thinking.

I think it’s obvious who lacks critical thinking. If half the thread is saying that, on the ground, AI has changed things, and the other half just labels everyone as crazy without investigation… guess which one didn’t do any critical thinking?

Last week I built an app that cross-compiles into Tauri and Electron and is essentially a Google Earth clone for farms. It uses Mapbox and deck.gl, and you can play back GPS tracks of tractor movements; the GPS traces change color as the tractor moves in actual real time. There’s pausing, seeking, bookmarking, skipping. All happening in real time because it’s optimized to use shader code and uniforms to do all these updates rather than redrawing the layers. There’s also color grading for GPS fix values and satellite counts, which the user can switch to instantaneously with zero slowdown on tracks with thousands and thousands of points. It all interfaces with an API that scans GCP storage for GPS tracks and organizes them into a queryable API that interfaces with our Firebase-based authentication. The backend is deployed by Terraform, written in strictly typed TypeScript, and automatically deployed and checked by GHA. Of course the Electron and Tauri apps have GUI login interfaces that work fully correctly with the backend API, and it all looks professionally designed, like a movie player merged with Google Earth for farm orchards.
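
To make the shader/uniform trick concrete, here's roughly the shape of it - a minimal sketch using deck.gl's TripsLayer, whose currentTime prop is backed by a shader uniform. The Track type and the tracks data here are hypothetical stand-ins, not our actual code:

    import {Deck} from '@deck.gl/core';
    import {TripsLayer} from '@deck.gl/geo-layers';

    // Hypothetical shape of one GPS track; in practice loaded from the API.
    type Track = {path: [number, number][]; timestamps: number[]};
    const tracks: Track[] = []; // filled from the GCP-backed endpoint in reality

    const makeLayer = (currentTime: number) =>
      new TripsLayer({
        id: 'tractor-tracks',          // stable id, so deck.gl diffs props
        data: tracks,                  // unchanged, so vertex data stays on the GPU
        getPath: (d: Track) => d.path,
        getTimestamps: (d: Track) => d.timestamps,
        getColor: [253, 128, 93],
        widthMinPixels: 4,
        trailLength: 120,
        currentTime                    // the shader uniform driving playback
      });

    const deck = new Deck({
      initialViewState: {longitude: -120, latitude: 37, zoom: 14},
      controller: true,
      layers: [makeLayer(0)]
    });

    // Each frame constructs a cheap layer descriptor; deck.gl diffs it against
    // the previous one and, since only currentTime changed, updates one uniform
    // instead of re-uploading thousands of points.
    const tick = (ms: number) => {
      deck.setProps({layers: [makeLayer(ms / 1000)]});
      requestAnimationFrame(tick);
    };
    requestAnimationFrame(tick);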

I have a rudimentary understanding of many of the technologies involved in the above, but I was able to write that whole internal tool in less than a week thanks to AI. I couldn’t have pulled it off without that rudimentary understanding of the tech, so some novice SWE couldn’t really do it without the optimizations I used, but that’s literally all I needed. I never wrote shader code for prod in my life, and left to its own devices the AI would have come up with an implementation that’s too laggy to work properly.

That’s all that’s needed: some basic high-level understanding, and AI did everything else, and now our company has an internal tool that is polished beyond anything that would’ve been given that much effort before AI.

I’m willing to bet you didn’t use AI agents in a meaningful way. Maybe copying and pasting some snippets of code into a chatbot and not liking the output. And then you do it every couple of weeks to have your finger on the pulse of AI.

Go deeper. Build an app with AI. Hand-hold it into building something you never built before. It’s essentially a pair-programming endeavor. I’m willing to bet you haven’t done this. Go in with the goal of building something polished, and don’t automatically dismiss it when the AI does something stupid (it inevitably will). Doing this is what actual “critical thinking” is.


> I think it’s obvious who lacks critical thinking.

My critical thinking is sharp enough to recognize that you're the recently banned ninetyninenine user [0]. Just as unbalanced and quarrelsome as before, I can see. It's probably better to draw some conclusions from a ban and adjust, or just leave.

[0] https://news.ycombinator.com/item?id=45988923


I’m not that guy lol.

Why don’t you respond to my points rather than attacking me?


> Why don’t you respond to my points

Because I believe you have a "flexible" relationship to the truth, so I'm not wasting any more time.


Like your BS accusations? Alright then. Good day to you, sir.


Explain to me why my judgement is flawed. What I’m saying is true.


Because, among other claims, "5x now or you're fired!" is completely ridiculous.


Bro, no one said “5x now or you’re fired”; that’s your own imagination adding flavor to it.

It’s obvious to anyone that if your output is 5x less than everyone else’s, you will eventually be let go. There’s no paradigm shift where the boss suddenly announces that. But the underlying, unsaid expectation is obvious given what everyone is doing.

What happened was this: a couple of new hires and some current employees started using AI. Their output was magnified, and they were not only producing more but also deploying code outside their areas of expertise, doing dev ops, infra, backend, and frontend.

This spread, and within months everyone in the company was doing it. The boss can now throw a frontend job to a backend developer and expect completion in a day or less. This isn’t every task, but for the majority of tasks such output is normal.

If you’re not meeting that norm, it’s blindingly obvious. The boss doesn’t need to announce anything when everyone is faster. There was no deliberate culture shift where the boss announced it. The closest equivalent is the boss hiring a 10x engineer to work alongside you, so you have to scramble to catch up. The difference is that now we know exactly what is making each engineer 10x, and we can use that tool to also operate at that level.

Critical thinking my ass. You’re just labeling and assuming things with your premeditated subconscious bias. If anything it’s your perspective that is religious.


> they were deploying code outside their areas of expertise doing dev ops, infra, backend and frontend.

> The boss can now throw a frontend job to a backend developer and now expect completion in a day or less.

Right. So essentially vibe coding in unknown domains, sounds great. Truly professional.


Also, can you please stop stalking me and just respond to my points, instead of digging through my whole profile and attempting character assassination based on what I wrote in the past? Thanks.


Whether you agree with it or not is beside the point. The point is that it’s happening.

Your initial stance was disbelief. Now you’re just looking down at it as unprofessional.

Bro, I fucking agree. It’s unprofessional. But the entire point initially was that you didn’t believe it, and my objective was to tell you that this is what’s happening in reality. Scoff at it all you want; as AI improves, less and less “professional” people will be able to enter our field and operate at the same level as us.


> NFTs are still being used. Along with a lot of the crypto ecosystem. In fact we're increasingly finding legitimate use cases for it.

Look at this. I think people need to realize that it's the same kind of folks migrating from gold rush to gold rush. Whether it's complete bullshit or somewhat useful doesn't really matter to them.


Your comment history suggests a pro-AI bias on par with AI companies. I don't understand it. It seems like critical thinking, nuance, and just basic caution have been turned off like a light-switch for far too many people.


> It seems like critical thinking, nuance, and just basic caution have been turned off like a light-switch for far too many people.

Ironically, this response contains no critical thinking or nuance.


Such a typical HN "gotcha!".


They're not wrong. I think many people also saw/see the trajectory of the models.

If you were pro-AI doing the majority of coding a year ago, you would have been optimistically ahead of what the tech was actually capable of.

If you are strongly against AI doing the majority of coding now, you are likely well behind what the current tech is capable of.

People who were pragmatic and knowledgeable anticipated this rise in capability.


I recommend engaging with ideas next time, rather than making reductive, ad-hominem, thought-terminating statements.


Thanks! I recommend not reading all comments literally. We have a significant hype bubble atm and I'm not exactly alone in thinking how crazy it is. I think you can draw a connection from my exasperated statement to that, if you really want to.


You intentionally made disparaging remarks about someone and attempted to tie their having an opinion about a technology to the opinions of people who have a vested financial interest in said technology.

You didn't engage at all on the substance of their comment - that they find AI useful for doing code reviews - and instead made a comment that was nothing but condescension.

All of that is separate from whether or not AI is overhyped or anything else - it being valuable for PRs could be true while it is also overhyped. If true, that could be some of the nuance you seem to be so concerned about us lacking.


1. No, I haven't suggested financial interest. There are plenty of non-financial ones on this forum.

2. True, I challenged the person's bias, considering their extraordinary historical comments lacking extraordinary evidence.


Your comments lack evidence; most of them consist of digging through other people’s comment history and profiling them. I’ve seen you use that in several of your arguments. Never once have I seen you use logic or evidence to state your points.

I never look at someone’s comment history. I judge them for what they said only, and I dissect that without getting personal. I made an exception for you, given how you decided to stalk my comment history, and I noticed you just do this to everyone.


Yeah, still spamming annoyed replies everywhere, including in old threads, certainly fits the profile of someone unbalanced being angry about being found out.


Bro, you went through my comment history. I simply did the same, and I responded to a few of your responses that are wrong. I'm not angry, man; I just felt you needed a mentor, someone who can point you in the right direction.

And wth do you mean by "found out"? Are you ok?


Haha, funny. Hopefully you won’t be banned again!


I actually don’t know what you’re on about. You need help bro.


Given the various unhinged tirades of your banned account, I'd look inwards.


I don’t have a banned account?

You need to calm down.


You need to stop lying.


I’m not and I’m not joking when I say you need help. Not being derogatory. Genuine advice.


You are lying, it's not genuine advice, and you are being intentionally derogatory - since that's how you're handling all your outbursts from what I've seen.

Look, here's a funny search: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

So let's see:

1. ninetyninenine gets banned 38 days ago.

2. You create your new account 36 days ago (funnily using the same username style).

3. Suddenly this rare expression is used again! Just a single page in the history of HN search indexing.

4. Incredibly it's also in conjunction with both users saying "Nobody understand how LLMs really work!", which is a rare statement to see in its own right.

5. Incredibly, it's also the case that both users make the same appeal to authority by linking to the same Geoffrey Hinton clip, both immediately calling him the godfather/father of AI.

So yeah, it must be your AI clone. And this is just one of the obvious direct connections I noticed, because I actually double-checked when you denied it. An unsurprising waste of time ofc, but here we are. And this is all on top of your having the exact same commenting style and quarrelsomeness.

Stop. Lying.


Dude. You need to calm down. This conspiracy theory level stuff is not only a waste of my time but mostly a waste of your time. I suggest you move on with your life.


> You need to calm down

This was a fun dig actually. You doubling down made it even more so. Anyway, best of luck!


Good. Hope you get better.


We are in a hype bubble. Similar to the internet hype bubble. Like the internet bubble, the AI bubble is orthogonal to whether or not AI will change the world forever.


Our industry never exhibited an abundance of caution, but if you have trouble understanding the value of AI here, consider that you are akin to an assembly language programmer in the 1970s or '80s who couldn't understand why people were so gung-ho about these compilers that just output worse code than they could write by hand. In retrospect, compilers only got better and better, familiarity with programming languages and compilation toolchains became a valuable productivity skill, and the market for assembly language programming either stagnated or shrank.

Doesn't it seem plausible to you that, whatever the ratio of bugs in AI-generated code today, that bug count is really only going to go down? Doesn't it then seem reasonable to say that programmers should start familiarizing themselves with these new tools, with where the pitfalls are and how to avoid them?


> compilers only got better and better

At no point did compilers produce stochastic output. The intent the user expressed was translated down with much, much higher fidelity, repeatability, and explainability. Most important of all, it completely removed the need for the developer to meddle with that output. If anything, it became a verification tool for the developer's own input.

If LLMs are that good, I dare you to skip the programming language and have them write machine code directly next time. And that is exactly how it is going to feel if we treat them as being as valuable as compilers.


> At no point compilers produced stochastic output. [...] Most important of all, it completely removed the need for the developer to meddle with that output.

Yes, once the optimizations became sophisticated enough and reliable enough that people no longer needed to think about it or go down to assembly to get the performance they needed. Do you get the analogy now?


I don't know why you'd think your analogy wasn't clear in the first place. But your analogy can't support you on the assertion that optimizations will be sophisticated and reliable enough to completely forget about the programming language underneath.

If you have any first-principles thinking on why this is more likely than not, I am all ears. My epistemic bet is that it is not going to happen - or that if we somehow end up there, the language we will have to use to instruct them will be no different from any other high-level programming language, so the point will be moot.


> But your analogy can't support you on the assertion that optimizations will be sophisticated and reliable enough to completely forget about the programming language underneath.

Where did I make that assertion?


Here is where I got that impression:

> once the optimizations became sophisticated enough

Either way I am not trying to litigate here. Feel free to correct me if your position was softer.


No, because programmers aren't the ones pushing the wares; it's business magnates and sales people - the two core groups software developers should never trust.

Maybe it would be different if this LLM craze were being pushed by democratic groups, where citizens are allowed to state their objections to such systems and those objections are taken seriously - but what we currently have are business magnates who just want to get richer with no democratic controls.


> No, because programmers aren't the ones pushing the wares; it's business magnates and sales people.

This is not correct; plenty of programmers see value in these systems and use them regularly. I'm not really sure what's undemocratic about what's going on, but that seems beside the point: we're presumably mostly programmers here, talking about the technical merits and downsides of an emerging tech.


This seems like an overly reductive worldview. Do you really think there isn't genuine interest in LLM tools among developers? I absolutely agree there are people pushing AI in places where it is unneeded, but I have not found software development to be one of those areas. There are lots of people experimenting and hacking with LLMs because of genuine interest and perceived value.

At my company, there is absolutely no mandate to use AI tooling, but we have a very large number of engineers who are using AI tools enthusiastically simply because they want to. In my anecdotal experience, those who do tend to be much better engineers than the ones who are most skeptical or anti-AI (though it's very hard to separate how much of this is the AI tooling, and how much is that naturally curious engineers looking for new ways to improve inevitably become better engineers than those who don't).

The broader point is, I think you are limiting yourself when you immediately reduce AI to snake oil being sold by "business magnates". There is surely a lot of hype that will die out eventually, but there is also a lot of potential there that you guarantee you will miss out on when you dismiss it out of hand.


I use AI every day and run my own local models; that has nothing to do with seeing salespeople acting like salespeople or conmen being con artists.

Also, add in the fact that big tech has been extremely damaging to Western society for the last 20 years; there's really little reason to trust them. Especially since we see how they treat those with opinions different from their own (trying to force them out of power, ostracizing them publicly, or in some cases straight-up poisoning people and giving them cancer).

Not really hard to see how people can be against such actions, is it? Well, buckle up, bro: come post-2028, expect a massive crackdown and regulations against big tech. It's been boiling for quite a while, and there are trillions of dollars to plunder for the public's benefit.


If I have a horse and plow and you show up with a tractor, I will no doubt get a tractor asap. But if you show up with novel amphetamines for you and your horse and scream "Look how productive I am! We'll figure out the long-term downsides, don't you worry! Just more amphetamines probably!", I'm happy to be a late adopter.


A tractor based on a Model T wouldn't have been very compelling either at the time. Not many horse-drawn plows these days though.


I understand that you've convinced yourself that progress is inevitable. I'll ponder over it on my commute to Mars. Oh wait, that was still on the tele.


High-level languages were absolutely indispensable at a time when every hardware vendor had its own bespoke instruction set.

If you only ever target one platform, you might as well do it in assembly; it's just unfashionable. I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.


> I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.

I'm skeptical, but do you think that you'd see no productivity gains for Python, Java or Haskell?


Those are garbage-collected environments. I have some experience with a garbage-collected 'assembly' (.NET CIL). It is a delight to read and write compared to most C code.


Agree to disagree then! I've done plenty of CIL reading and writing. It's fine, but not what I'd call pleasant, not even compared to C.


Type checking, even type checking as trivial as C's, is a boon to productivity, especially on large teams, but also when coding solo if you have anything else in your brain.


compilers aren't probabilistic models though


True. The question is whether that's relevant to the trajectory described or not.


Successful compiler optimizations are probabilistic though, from the programmer's point of view. LLMs are internally deterministic too.


What? Do you even know how compilers work?


Are you able to predict with 100% accuracy when a loop will successfully unroll, or when various interprocedural or intraprocedural analyses will succeed? They are applied deterministically inside a compiler, but often based on heuristics, and the complex interplay of optimizations in complex programs means that sometimes they will not do what you expect them to do. Sometimes they work better than expected, and sometimes worse. Sounds familiar...


> Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed?

Yes, because:

> They are applied deterministically inside a compiler

Sorry, but an LLM randomly generating the next token isn't even comparable.

Deterministic complexity =/= randomness.


> Yes, because:

Unless you wrote the compiler, you are 100% full of it. Even as the compiler writer you'd be wrong sometimes.

> Deterministic complexity =/= randomness.

LLMs are also deterministically complex, not random.


> Unless you wrote the compiler, you are 100% full of it. Even then you'd be wrong sometimes

You can check the source code? What's hard to understand? If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.

Sure, maybe you're limited by your personal knowledge of the compiler chain, but again, complexity =/= randomness.

For the same source code and compiler version (+ flags), you get the exact same output every time. The same cannot be said of LLMs, because they use randomness (temperature).

> LLMs are also deterministically complex, not random

What exactly is the temperature setting in your LLM doing, then? If you'd like to argue that the pseudorandom generators our computers use aren't random - fine, I agree. But for all practical purposes they're random, especially when you don't control the seed.
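
For concreteness, the temperature step looks roughly like this - a minimal illustrative sketch of softmax sampling, not any particular inference engine's implementation. The Math.random() call is exactly where the non-determinism enters:

    // Pick the next token id from raw logits, scaled by temperature.
    function sampleToken(logits: number[], temperature: number): number {
      if (temperature === 0) {
        // Greedy decoding: a plain argmax, fully deterministic.
        return logits.indexOf(Math.max(...logits));
      }
      const scaled = logits.map(l => l / temperature);
      const max = Math.max(...scaled);    // subtract max for numerical stability
      const exps = scaled.map(s => Math.exp(s - max));
      const sum = exps.reduce((a, b) => a + b, 0);
      let r = Math.random() * sum;        // the stochastic step
      for (let i = 0; i < exps.length; i++) {
        r -= exps[i];
        if (r <= 0) return i;
      }
      return exps.length - 1;
    }

With temperature 0 (or a fixed seed) the pipeline is reproducible; with anything above that, the same prompt can legitimately produce different outputs.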


> If you find it compiled something wrong, you can walk backwards through the code, if you want to find out what it'll do walk forwards. LLMs have no such capability.

Right, so you agree that optimization outputs are not fully predictable in complex programs, and what you're actually objecting to is that LLMs aren't like compiler optimizations in the specific ways you care about - and somehow this is supposed to invalidate my argument that they are alike in the specific ways that I outlined.

I'm not interested in litigating the minutiae of this point: programmers who treat the compiler as a black box (i.e. 99% of them) see probabilistic outputs. The outputs are generally reliable according to certain criteria, but unpredictable.

LLMs are also typically probabilistic black boxes. The outputs are also unpredictable, but somewhat reliable according to certain criteria that you can learn through use. Where the unreliability is problematic, you can often make up for their pitfalls. The need for this is dropping year over year, just as the need for assembly programming to eke out performance dropped with each year of compiler development. Whether LLMs will become as reliable as compiler optimizations remains to be seen.


> invalidate my argument that they are alike in the specific ways that I outlined

Basketballs and apples are both round, so they're the same thing right? I could eat a basketball and I can make a layup with an apple, so what's the difference?

> programmers who treat the compiler as a black box (ie. 99% of them) see probabilistic outputs

In reality this is at best the bottom 20% of programmers.

No programmer I've ever talked to has described compilers as probabilistic black boxes - and I'm sorry if your circle does. Unfortunately, there's no use of probability, and all modern compilers are definitionally white boxes (open source).


My operating assumption, for everyone acting the way you described, is that it's predicated on the belief that "I have an opportunity to make money from this." It is exceedingly rare to find an instance of someone using the tech purely for the love of the game who isn't also tying it back to income generation in some way.


I use it as an accelerated search engine to learn about things quicker than I otherwise would. But that's it. I ask it a question, it tells me an answer, and I work from there myself. Slapping it into your editor to write the code for you sounds disastrous to me. And also incredibly boring.


It's called a love of money.


The most complex thing to support is people's resumes. If carpenters were incentivized like software devs are, we'd quickly start seeing multi-story garden sheds in reinforced concrete, because every carpenter's dream job at Bunkers Inc. pays 10x more.

