Hacker Newsnew | past | comments | ask | show | jobs | submit | jimbo808's commentslogin

As long as you pretend these didn't happen:

Iran (1953) - overthrow of the Shah by the CIA

Guatemala (1954) - overthrow of an elected government on behalf of US corporate interests

Cuba (1961) - invaded Cuba via proxy forces, attempted to assassinate Castro (Bay of Pigs)

Vietnam, Laos, Cambodia (1955–1975)

Chile (1973) – overthrow of Salvador Allende

Nicaragua (1980s) - overthrow the Sandinista government (Contra war) without international authorization

Panama (1989) - invaded and overthrew the government without international without international authorizationauthorization

Iraq (2003) - invaded and overthrew the government without international authorization

Serbia (1999) - airstrikes without international authorization

Libya (2011) - exceeded authorization by UN to effect regime change

Syria (2014–present) - US military occupation and oil seizure is ongoing

There are many more, these are the more notable ones


A$ap Rocky's music videos have some really good examples of how AI can be used creatively and not just to generate slop. My favorite is Taylor Swif, it's a super fun video to watch.

https://www.youtube.com/watch?v=5URefVYaJrA


I think we massively downplay the experience and expertise required to ask the right question.

> which is going to blow out the economics on inference

At this point, I don't even think they do the envelope math anymore. However much money investors will be duped into giving them, that's what they'll spend on compute. Just gotta stay alive until the IPO!


At the end of the day, it doesn't really get you that much if you get 70% of the way there on your initial prompt (which you probably spent some time discussing, thinking through, clarifying requirements on). Paid, deliverable work is expected to involve validation, accountability, security, reliability, etc.

Taking that 70% solution and adding these things is harder than if a human got you 70% there, because the mistakes LLMs make are designed to look right, while being wrong in ways a sane human would never be. This makes their mistakes easy to overlook, requiring more careful line-by-line review in any domain where people are paying you. They also duplicate code and are super verbose, so they produce a ton tech debt -> more tokens for future agents to clog their contexts with.

I like using them, they have real value when used correctly, but I'm skeptical that this value is going to translate to massive real business value in the next few years, especially when you weigh that with the risk and tech debt that comes along with it.


> and are super verbose...

Since I don't code for money any more, my main daily LLM use is for some web searches, especially those where multiple semantic meanings would be difficult specify with a traditional search or even compound logical operators. It's good for this but the answers tend to be too verbose and in ways no reasonably competent human would be. There's a weird mismatch between the raw capability and the need to explicitly prompt "in one sentence" when it would be contextually obvious to a human.


Imo getting 70% of the way is very valuable for quickly creating throwaway prototypes, exploring approaches and learning new stuff.

However getting the AI to build production quality code is sometimes quite frustrating, and requires a very hands-on approach.


Yep - no doubt that LLMs are useful. I use them every day, for lots of stuff. It's a lot better than Google search was in its prime. Will it translate to massively increased output for the typical engineer esp. senior/staff+)? I don't think it will without a radical change to the architecture. But that is an opinion.

I completely agree, I found it very funny that I have been transitioning from an "LLM sceptic" to a "LLM advocate", without changing my viewpoint. I have long said that LLM's won't be replacing swathes of the workforce any time soon and that LLM's are of course useful for specific tasks, especially prototyping and drafting.

I have gone from being challenged on the first point, to the second. The hype is not what it has been.


Most text worth paying for (code, contracts, research) requires:

- accountability

- reliability

- validation

- security

- liability

Humans can reliably produce text with all of these features. LLMs can reliably produce text with none of them.

If it doesn't have all of these, it could still be worth paying for if it's novel and entertaining. IMO, LLMs can't really do that either.


Let's not put humans on too much of a pedestal, there are plenty of us who are not that reliable either. That's why we have tests, linting, types and various other validation systems. Incidentally, LLMs can utilize these as well.

Humans are unreliable in predictable ways. This makes review relatively painless since you know what to look for, and you can skim through the boilerplate and be pretty confident that it's right and isn't redundant/insecure, etc.

LLMs can use linters and type checkers, but getting past them often times leads it down a path of mayhem and destruction, doing pretty dumb things to get them to pass.


I've only experienced de-motivation from managers, personally. At least for me, motivation comes from ownership, impact, autonomy, respect. You can cause me to lose motivation in a lot of ways, but you can't really cause me to gain motivation unless you've already de-motivated me somehow.

You can de-motivate me in a lot of ways, some examples:

- throwing me or a coworker under the bus for your mistakes

- crediting yourself for the work of someone else

- attempting to "motivate" me when I'm already motivated

- manufacturing a sense of urgency, this is especially bad if you try to sustain this state all indefinitely

- using AI or market conditions as a fear tactic to motivate the team

- visibly engaging in any kind of nepotism

Honestly this list could go on and on, but those are some that come to mind.


> manufacturing a sense of urgency, this is especially bad if you try to sustain this state all indefinitely

Sadly, I have seen this in almost every startup led by founders without an engineering background I've ever been a part of.

In my personal experience, this is often caused by overeager sales team promising the world for the next deal, only to fob it off to the engineering team who now "urgently" need to build "features" and "work hard" to make it happen. This is when your intrinsically motivated engineers start looking for the exit.


Also:

- not letting me have ownership of what I build and dictating features

- not giving me autonomy of how to solve a problem


Sorry to be pedantic, but there are a bunch of non-ASCII characters (,↑,) in the mockups and the article contains a lot of AI tropes.

I have never once seen extraordinary claims of AI wins accompanied by code and prompts.


That one's my favorite. You can't defend against it, it just shuts down the conversation. Odds are, you aren't doing it wrong. These people are usually suffering from Dunning Kruger at best, or they're paid shills/bots at worst.


Best part of being dumb is thinking you’re smart. Best part of being smart is knowing you’re smart. Just don’t be in the iq range where you know you’re dumb.


The smartest people I know are full of doubt.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: