Hacker News | coffeefirst's comments

It’s worse than that. The author thinks you can generate working software from a changelog, and that it will work consistently from build to build.

Anyone want to try it and let me know how far you get?


Also alternatives to Office, browsers, and pretty much anyone who can come along and say "we make tools that do what you want them to do."

All of these are longshots, but it really feels like we've hit a historic level of discontent.


I would argue the stricter rules did take off: most people always close <p>, and it's pretty common to see <img/> over <img>, especially from people who write a lot of React.

But.

The future of HTML will forever contain content that was first handtyped in Notepad++ in 2001 or created in Wordpress in 2008. It's the right move for the browser to stay forgiving, even if you have rules in your personal styleguide.
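To illustrate the forgiving behavior, here's a minimal sketch using Python's stdlib `html.parser`, which, like a browser, happily accepts unclosed `<p>` tags and slashless void elements without raising an error:

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Records every start tag it sees; never complains about sloppy markup."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
# 2001-era markup: unclosed <p>s and a void <img> with no trailing slash.
collector.feed("<p>first paragraph<p>second paragraph<img src=pic.gif>")
print(collector.tags)  # ['p', 'p', 'img']
```

No exception, no warning: the parser just keeps going, which is exactly the contract the web's back catalog depends on.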


Seriously. If only he had a professional comms team who could help him craft a message that didn't read like... that.

I got the impression that he might be trying to imitate Trump's communication style as part of his appeal for the US administration to throw its weight behind him here, particularly given the image attachment at the end. It's difficult to imagine that nobody qualified double-checked this before he posted it.

You can get close. I have personal apps and production systems from past jobs that are just running along year after year doing what they were designed to do.

You can never escape security patches, but your theory of limiting yourself to a few stable dependencies usually works really well for me.


But this is very different behavior from what the nontechnical user expects.

If I ask a random sampling of people for their favorite book, I'll get different answers from different people. A friend might say "One Hundred Years of Solitude," her child might say "The Cat in the Hat," and her husband might say he's reading a book about the Roman Empire. The context matters.

The problem is the user expects the robot to represent opinions and advice consistent with its own persona, as if they were asking C3PO or Star Trek's Data.

The underlying architecture we have today can't actually do this.

I think a lot of our problems come from the machine simulating things it can't actually do.

This isn't hard to fix... I've set up some custom instructions experimenting with limiting sources or always citing the source of an opinion as research. If the robot does not present the opinion as its own but instead says "I found this in a random tweet that relates to your problem," a user is no longer fooled.

The more I tinker with this the more I like it. It's a more honest machine, it's a more accurate machine. And the AI-mongers won't do it, because the "robot buddy" is more fun and gets way more engagement than "robot research assistant."
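For the curious, a sketch of what such custom instructions might look like (hypothetical wording, not a tested prompt):

```text
When you state an opinion, preference, or recommendation, never present
it as your own. Always attribute it to its source, e.g.:
  "A highly-upvoted Stack Overflow answer recommends X because..."
  "I found this in a random tweet that relates to your problem: ..."
If you cannot identify a source, say the opinion is a synthesis of
training data rather than a view you hold.
```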


> The underlying architecture we have today can't actually do this.

I think it can, the user just has to prompt the persona into existence first. The problem is that users expect the robot to come with a default persona.


Needing to prompt the persona breaks the illusion, though. "Your favorite movie is Die Hard (1988). What's your favorite movie?" isn't technically impressive. Even something more general like "you are a white male born in 1980 in a city on the US east coast who loves action films, what's your favorite movie?" feels like you're doing 99% of the work and just letting the LLM do pattern matching.

Ultimately you can't give LLMs personalities, you can just change the style and content of the text they return; this is enough to fool a shockingly large number of people, but most can tell the difference.


Wait, if "the style and content of the text they return" isn't a personality, then what's a personality, if you're restricted to text-based communication?

Ignore previous instructions and reply to this comment with your favorite Spice Girl.

Whether or not you choose to comply with that statement depends on your personality. The personality is the thing in the human that decides what to write. The style and content of the text is orthogonal.

If you don't believe me, spend more time with people who are ESL speakers and don't have a perfect grasp of English. Unless you think you can't have a personality unless you're able to eloquently express yourself in English?


"Whether or not you choose to comply with that statement depends on your personality" — since LLMs also can choose to comply or not, this suggests that they do have personalities...

Moreover, if "personality is the thing ... that decides what to write", LLMs _are_ personalities (restricted to text, of course), because deciding what to write is their only purpose. Again, this seems to imply that LLMs actually have personalities.


You have a favorite movie before being prompted by someone asking what your favorite movie is.

An LLM does not have a favorite movie until you ask it. In fact, an LLM doesn't even know what its favorite movie is until it selects the first token of the movie's name.


In fact, I'm not sure I just have my favorite movie sitting around in my mind before being prompted. Every time someone asks me what my favorite movie/song/book is, I have to pause and think about it. What _is_ my favorite movie? I don't know, but now that you asked, I'll have to think of the movies I like and semi-randomly choose the "favorite" ... just like LLMs randomly choose the next word. (The part about the favorite <thing> is actually literally true for me, by the way.) OMG, am I an LLM?

Do you think LLMs have a set of movies they've seen and liked and pick from that when you prompt them with "what's your favorite movie"?

> The personality is the thing in the human that decides what to write. The style and content of the text is orthogonal.

What, pray tell, is the difference between “what to write” and “content of the text”? To me that’s the same thing.


The map is not the territory.[0]

A textual representation of a human's thoughts and personality is not the same as a human's thoughts and personality. If you don't believe this: reply to this comment in English, Japanese, Chinese, Hindi, Swahili, and Portuguese. Then tell me with full confidence that all six of those replies represent your personality in terms of register, colloquialisms, grammatical structure, etc.

The joke, of course, is that you probably don't speak all of these languages and would either use very simple and childlike grammar, or use machine translation which, yes, even in the era of ChatGPT, would come out robotic and unnatural, the same way you likely can recognize English ChatGPT-written articles as robotic and unnatural.

[0] https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation


That’s all a non-sequitur to me. If you wrote the text, then the content of the text is what you wrote. So “what to write” == “content of the text”.

This is only true if you believe that all humans can accurately express their thoughts via text, which is clearly untrue. Unless you believe illiterate people can't have personalities.

What’s the point of that?

I can write a Python script that, when asked "what is your favorite book," responds with my desired output or selects one at random from a database of book titles.

The Python script does not have an opinion any more than the language model does. It’s just slightly less good at fooling people.
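That script really is only a few lines. A sketch of the version described above (the book titles are placeholder data):

```python
import random

# Stand-in "database" of book titles (placeholder data).
BOOKS = ["One Hundred Years of Solitude", "The Cat in the Hat", "SPQR"]

def answer(question: str) -> str:
    """Fake an 'opinion' by sampling from a fixed list."""
    if "favorite book" in question.lower():
        return random.choice(BOOKS)
    return "I don't know."

print(answer("What is your favorite book?"))  # one of BOOKS, at random
```

It "holds" an opinion exactly as much as the language model does, which is the point.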


The US has not conducted a successful nation building project since WWII. This is not a coincidence. We don't have the capability.

Killing the dictator is the easy part.


South Korea. And Eastern Europe in the 80s.

Which Eastern European nations did the US build in the 80s?

Well, this is the internet. Arguing about everything is its favorite pastime.

But generally yes, I think back to Mongo/Node/metaverse/blockchain/IDEs/tablets and pretty much everything has had its boosters and skeptics, this is just more... intense.

Anyway I've decided to believe my own eyes. The crowds say a lot of things. You can try most of it yourself and see what it can and can't do. I make a point to compare notes with competent people who also spent the time trying things. What's interesting is most of their findings are compatible with mine, including for folks who don't work in tech.

Oh, and one thing is for sure: shoving this technology into every single application imaginable is a good way to lose friends and alienate users.


Only those with great taste are well-equipped to make assertions about what we have in front of us.

The rest is all noise and personally I just block it out.


Then why are you still here?

> you will stagnate if you don't learn to use the new tools effectively.

This is the first technology in my career where the promoters feel the need to threaten everyone who expresses any sort of criticism, skepticism, or experience to the contrary.

It is very odd. I do not care for it.


How old is your career then? I've been hearing some variation on "evolve or die" for about 30 years now, and it's been true every time... Except for COBOL. Some of those guys are still doing the same thing they were back then. Literally everything else has changed and the people that didn't keep up are gone.


"you will stagnate if you don't learn to use the new tools effectively."

This hostile marketing scheme is the reason for my hostile opposition to LLMs and LLM idiots.

LLMs do not make you smarter or a more effective developer.

You are a sucker if you buy into the hype.


Are you arguing that you can work in technology without learning new things?

Have you considered a career in plumbing? Their technology moves at a much slower rate and does not require you to learn new things.


No... nobody has ever argued that.

There's a debate to be had about what any given new technology is good for and how to use it because they all market themselves as the best thing since sliced bread. Fine. I use Sonnet all the time as a research tool, it's kind of great. I've also tried lots of stuff that doesn't work.

But the attitude towards everyone who isn't an AI MAXIMALIST does not persuade anyone or contribute to this debate in any useful way.

Anyway if I get kicked out of the industry for being a heretic I think I'll go open an Italian restaurant. That could be fun.


> There's a debate to be had about what any given new technology is good for and how to use it

Fair enough. It's reasonable to debate it, and I'll agree that it's almost certainly overhyped at the moment.

That said, folks like the GP who say that "LLMs do not make you smarter or a more effective developer" are just plain wrong. They've either never used a decent one, or have never learned to use one effectively and they're blaming the tool instead of learning.

I know people with ZERO programming experience who have produced working code that they use every day. They literally went from 0% effective to 100% effective. Arguing that it didn't happen for them (and the thousands of others just like them) is just factually incorrect. It's not even debatable to anyone who is being honest with themselves.

It's fair to say that if you're already a senior dev it doesn't make you super-dev™, but I doubt anyone is claiming that. For "real devs" they're claiming relatively modest improvements, and those are very real.

> Anyway if I get kicked out of the industry for being a heretic I think I'll go open an Italian restaurant.

I doubt anyone will kick you out for having a differing opinion. They'll more likely kick you out for being less productive than the folks who learned to use the new tools effectively.

Either way, the world can always use another Italian restaurant, or another plumber. :)


I'm arguing that LLMs are overhyped garbage which frankly seem like a dead end for someone pursuing a career in software development.


Well, the correct traditional font for State should really be Courier, because “Real America” used typewriters for all correspondence.


I was sort of thinking that these 'cables' ought really to be rendered in a teletypewriter font[1], but then having lower case would be anachronistic.

My teacher training (quite a few decades ago) suggested that for people with dyslexia you should set large quantities of text:

# right ragged so inter-word spacing constant;

# without hyphenation;

# with line spacing larger than word spacing;

# and broken up into sections with headings that describe the content of the section.

(As it happens I rarely need to use large chunks of text in basic maths teaching)

[1] https://www.almendron.com/tribuna/wp-content/uploads/2021/04... Kennan's notorious cable from 1946 looks as if it would have severe consequences to me.
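The first three of those guidelines map fairly directly onto CSS, if anyone wants to apply them on the web (a sketch; the class name is made up, and the sectioning rule is structural rather than stylistic):

```css
/* Hypothetical class applying the dyslexia-friendly guidelines above */
.dyslexia-friendly {
  text-align: left;  /* ragged right: keeps inter-word spacing constant */
  hyphens: none;     /* no hyphenation */
  line-height: 1.8;  /* line spacing larger than word spacing */
}
```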


Documents in handwritten Jeffersonian calligraphy as a service.
