Hacker News | thunky's comments

And someone could post an even longer list of things you can't do well. But what would be the point?

The LLM did better on this problem than 100% of the haters in this thread could do, most of whom probably can't even begin to understand the problem.


LLMs finally gave someone I know the confidence to up her business rates. Professional services, nothing to do with software dev (yes LLMs are not just for devs). It suggested she revamp her entire pricing structure. She thought her clients would walk, but she did it and nobody flinched. Big revenue boost.

She also uses it daily for all kinds of things. For example, recording/transcribing/summarizing meetings, creating plans, writing emails, reviewing employee performance, and a bunch of other stuff. If it went away she would be devastated.


Not really. If the job were 100% deterministic we wouldn't need the human, would we?

Doesn't seem to work for humans all the time either.

Some of this negativity I think is due to unrealistic expectations of perfection.

Use the same guardrails you should be using already for human generated code and you should be fine.
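To make that concrete: the most basic guardrail is a plain unit test that gates any change before merge, regardless of who (or what) wrote it. A minimal sketch, where `parse_price` is a hypothetical function under review, not anything from a real codebase:

```python
# A guardrail that doesn't care who wrote the code: a plain unit test.
# `parse_price` is a hypothetical function under review; whether a human
# or an LLM produced it, the same checks gate the change.

def parse_price(text: str) -> float:
    """Parse a price string like '$1,234.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("99") == 99.0

test_parse_price()
print("all guardrail checks passed")
```

The point being: review, tests, and CI were never about trusting the author; they're about not having to.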


> human (irreplaceable)

Everyone is replaceable. Software devs aren't special.


Domain knowledge is a real thing. Sure I could be replaced at my job but they'd have a pretty sketchy time until someone new can get up to speed.

Yes, with another human. I meant more that you cannot replace a human with a non-human, at least not yet and if you care about quality.

Perhaps you can replace multiple developers with a single developer and an AI tool in the near future.

In the same way that you could potentially replace multiple workers with handsaws with one guy wielding power tools.

There could be a lot of financial gain for businesses in this, even if you still need humans in the loop.


That may be, but I still think

> if you are a large business and you pay xxxxx-xxxxxx per year per developer, but are only willing to pay xxx per year in AI tooling, something's out of proportion.

is way off base. Even if you replace multiple workers with one worker using a better tool, businesses still won't want to pay the "multiple worker salary" to that single worker just because they use a more effective tool.


Yes, I agree. But do they have to?

It would seem to me that tokens are only going to get more efficient and cheaper from here.

Demand is going to rise further as AI keeps improving.

Some argue there is a bubble, but with demand from the public for private use, business, education, the military, cybersecurity, and intelligence, it just seems like there will be no lack of investment.


> And pretty much everyone, no matter how good, cannot get there with code-reading alone. With software at least, we need to develop a mental model of the thing by futzing about with the thing in deeply meaningful ways

LLMs help with that part too. As Antirez says:

> Writing code is no longer needed for the most part. It is now a lot more interesting to understand what to do, and how to do it (and, about this second part, LLMs are great partners, too).


How to "understand" what to do?

How to know the "how to do it" is sensible? (Sensible meaning the product will produce the expected outcome within the expected, or at least tolerable, error bars.)


> How to "understand" what to do?

How did you ever know? It's not like everyone always wrote perfect code up until now.

Nothing has changed, except now you have a "partner" to help you along with your understanding.


Well, I have a whole blog post of an answer for you: https://www.evalapply.org/posts/tools-for-thought/

Who "knows"?

It's whoever has a world-model. It's whoever can evaluate input signals against that world-model, which requires the ability to generate questions, probe the nature of reality, and run experiments to figure out what's what. And it's whoever can alter their world-model using the experiences collected from that back-and-forth.


> I'm more and more inclined of switching my desktop (my main working machine) to Omarchy

Never heard of it, and the website and GitHub repo sure aren't doing a great job of describing its benefits.

"Beautiful, Modern & Opinionated" are vague and really aren't adjectives I'd be looking for in an OS.


When anything is described as "opinionated" I just read it as "wilfully inflexible".

It’s opinionated coming from Arch Linux. Compared to macOS or Windows it’s a big pushover. Opinionated in this context just means it comes with defaults rather than asking you to research your own display compositor.

Although there are some places you want that! WireGuard is often described as cryptographically opinionated because it doesn't even bother trying to negotiate crypto primitives which makes it immune to downgrade attacks. Though, to be fair, that also means that if its primitives ever do get broken you need to roll out an entirely new release.
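To illustrate that: here's what a WireGuard peer config roughly looks like (keys and endpoint are placeholders, not real values). Notice what's absent: there are no cipher, hash, or key-exchange selection fields at all, because the primitives (Curve25519, ChaCha20-Poly1305, BLAKE2s) are fixed by the protocol itself, leaving nothing to negotiate or downgrade.

```ini
# Minimal illustrative WireGuard config (all values are placeholders).
[Interface]
PrivateKey = <base64-private-key>
Address = 10.0.0.2/32

[Peer]
PublicKey = <base64-public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = vpn.example.com:51820
```

Compare that to an OpenVPN or TLS config, where the cipher-suite knobs are exactly the surface that downgrade attacks target.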

Opinionated versus unopinionated is a tradeoff. Things that boast about how unopinionated they are often require a lot of hand-holding or manual config. I think there's a big audience of people with non-Apple hardware who want an OS that is not Windows but don't actually care to customize it.

It’s by David Heinemeier Hansson. Based on his writings, he’s very inflexible. He’s even unwilling to accept that a person of color who was born and raised in Britain can be British.

I would instead recommend Fedora KDE Edition, which will give you a state-of-the-art Linux desktop experience.

I've been daily driving it for months now and really like it. It's a nice introduction to tiling window managers, has a well thought out key mapping and generally looks reasonably nice while getting out of my way as an embedded dev.

There have always been Arch Linux-based distros that come with more things set up and better (or just more specific) defaults. To my understanding Omarchy is just one of those, like Manjaro and the like in the past?

Yeah, it's just one of those, but worse. It's basically run by a bunch of badly written bash scripts:

https://xn--gckvb8fzb.com/a-word-on-omarchy/


Tools are not just APIs. A tool is more like a function call that the LLM can tell you (your agent code) to make.
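A minimal sketch of what that means, assuming a generic chat API that hands back tool-call requests as `{"name": ..., "arguments": ...}` dicts (the registry and function names here are invented for illustration):

```python
# The model never executes anything itself; it asks the agent code to.
import json

# Hypothetical tool registry: plain functions the agent is willing to run.
def get_weather(city: str) -> str:
    # Stubbed out for illustration; a real tool would hit a weather API.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def handle_tool_call(call: dict) -> str:
    """Dispatch a tool call requested by the model to local code."""
    func = TOOLS[call["name"]]
    args = json.loads(call["arguments"])
    return func(**args)

# Simulated model output requesting a tool invocation.
call = {"name": "get_weather", "arguments": json.dumps({"city": "Oslo"})}
result = handle_tool_call(call)  # the agent, not the LLM, runs the function
print(result)
```

So the "tool" the model sees is just a schema; the actual call happens in your process, on your terms.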

Already know the answer, don't need AI for that one.

How are they supposed to know what's legal?

Congress can't even agree if murdering civilians on boats is legal or not.


If only there was some basic training for this kind of stuff!

Yes hopefully Congress gets the basic training they desperately need.

But seriously. Should soldiers be refusing to murder civilians on boats? If the law is clear (which I think it is) and they should be refusing, why aren't they?

The answer of course is that they're being put in an impossible situation. Pinning the responsibility on them to interpret the law and go against the majority of the US govt, at huge personal risk, just because they took basic training, is absurd.

Maybe instead the government should get their head out of their ass and do something themselves beyond trying to pass the buck via a stupid tv ad.

