I'm mystified by this comment. Do people really forget that they can think in their own mind?

I think there’s a subset of people who don’t have an inner voice. I assume thinking step by step in their head doesn’t work for them the way it does for most people.

I’m glad LLMs help these people. But I’m not gonna trade away society because a subset of people can’t write things down.


Wat.

Maybe read up on what having an "inner voice" (or not) actually means before making, frankly, weird and unfounded takes on the subject.


@grok is this true?

It’s as if people are rediscovering that writing is thinking. The chatbot is irrelevant, it works even better with a paper notebook.

Don’t be mystified if you lack the curiosity to understand how to use new technology. It’s useful to have something to speak to and get feedback from.

— john asked himself.

> some of the most highly paid workers on the planet won't pay for tools

Aren't we in the middle of literally the entire industry adopting $200/mo AI subscriptions? It seems to me like engineers will absolutely pay for tools if they justify their value.


Every company I know is lamenting their out of control SaaS spend for developer tooling.

$200/month/user isn’t a big incremental cost, to be honest. SaaS and subscription tooling costs are high for developers.


It's less a binary pay/no-pay, and more about the value of accessing the dev tools. If you consider the fact that AI companies are most likely losing money running the models, then AI tools are incredibly cheap - they're in some ways paying you to use them.

No model maker is going to try to generate a profit off users using their models; they're gonna try to generate it some other way - much like dev tools.


OP was saying "no one pays for tools", but AI tools are a clear counterexample. That the AI tools are themselves not profitable isn't part of the argument - "no one pays for tools which are profitable" is not an argument that anyone was making.

I spent some time reading about Gas Town to see if I could understand what Stevey was trying to accomplish. I think he has some good ideas in there, actually - it really does seem like he's thought a bit about what coding in the future might look like. Unfortunately, it's so full of esoteric language and vibecoded READMEs that it is quite difficult to get into. The most concerning thing is that Stevey seems totally unaware of this. He writes about how, when he tried to explain this to people, they just stared at him like idiots, and so they must all be wrong -- that's a bit worrying, from a health and psychosis angle.

There’s an acquaintance here in Australia who has built something similar without the crazy terminology, and it is pretty solid.

No, it's not necessary to pay $200/mo.

I haven't had an issue with a hallucination in many months. Hallucinations are typically a solved problem if you can use some sort of linter / static analysis tool: you tell the agent to run your tool(s) and fix all the errors. I am not familiar with PowerShell at all, but a quick GPT query tells me that there is PSScriptAnalyzer, which might be good for this.
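Something like this might be the starting point (a sketch only - I haven't verified it, and the script path is made up):

    # Hypothetical: install the analyzer from the PowerShell Gallery, then lint a script
    Install-Module -Name PSScriptAnalyzer -Scope CurrentUser
    Invoke-ScriptAnalyzer -Path .\myscript.ps1

Then you'd tell the agent to run that second command and fix whatever it reports, in a loop.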

That being said, it is possible that PowerShell is too far off the beaten path and LLMs aren't good at it. Try it again with something like TypeScript - you might change your mind.


Maybe you can provide some examples of what you would consider to be “novel” code?

The proof of the Erdős problem the other day was called novel by Terence Tao. That seems novel to me.


I am not sure if I am missing something, since many people have made this comment, but isn't this in some ways similar to the shape of the traditional definition of back pressure, and not "entirely different"? A downstream consumer can't make its way through the queue of work to be done, so it pushes work back upstream - to you.
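As a minimal sketch of that traditional meaning (Python, names purely illustrative): a bounded queue blocks the producer whenever the consumer falls behind, which is exactly the work being pushed back upstream.

    import queue
    import threading
    import time

    q = queue.Queue(maxsize=4)  # bounded buffer: the backpressure mechanism

    def consumer():
        while True:
            item = q.get()   # slow consumer works through the queue
            time.sleep(0.1)  # pretend the work is expensive
            q.task_done()

    threading.Thread(target=consumer, daemon=True).start()

    for i in range(100):
        q.put(i)  # blocks once the queue is full: work is pushed back on the producer
    q.join()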

I love tracking my HRV. It definitely follows my perception of stress, but watching it carefully has taught me a lot about what causes me to be stressed and what doesn’t. I love talking to people about this.

BTW, a really key tip: if you tell your Apple Watch you have AFib, it will take many more measurements of your HRV, making the value much more accurate.


I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.

Their comments are full of "it's not x, it's y" over and over, plus short, pithy sentences. I'm quite confident it's AI-written, maybe with a more detailed prompt than average.

I guess this is the end of the human internet


To give them the benefit of the doubt, people who talk to AI too much probably start mimicking its style.

yea, i was suspicious by the second paragraph but was sure once i got to "that’s not engineering, it’s cosplay"

It's also the wording. The weird phrases

"Glorified Google search with worse footnotes" what on earth does that mean?

AI has a distinct feel to it


And with enough motivated reasoning, you can find AI vibes in almost every comment you don’t agree with.

For better or worse, I think we might have to settle on “human-written until proven otherwise”, if we don’t want to throw “assume positive intent” out the window entirely on this site.


Dude is swearing up and down that they came up with the text on their own. I agree with you though, it reeks of LLMs. The only alternative explanation is that they use LLMs so much that they’ve copied the writing style.

I've had that exact phrase pop up from an LLM when I asked it for a more negative code review

Your intuition on AI is out of date by about 6 months. Those telltale signs no longer exist.

It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.


I’m confused by this. I still see this kind of phrasing in LLM-generated content, even as recently as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it’s now possible to get text that doesn’t contain the telltale “it’s not X, it’s Y”?

> But if it was, there is currently no way for anyone to tell the difference.

This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).


There are no reliable AI detection services. At best, they can reliably detect output from popular chatbots running with their default prompts. Beyond that, reliability deteriorates rapidly, so they either err on the side of many false positives or on the side of many false negatives.

There have already been several scandals where students were accused of AI use on the basis of these services and successfully fought back.


I've tested some of those services and they weren't very reliable.

If such a thing did exist, it would exist only until people started training models to hide from it.

Negative feedback is the original "all you need."


> It wasn't AI generated.

You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...

Not that I needed pangram to tell me that, it's obvious slop.


I wouldn't know how to prove otherwise, other than to tell you that I have seen these tools show incorrect results for both AI-generated text and human-written text.

Good thing you had a stochastic model backing up (with “low confidence”, no less) your vague intuition that a comment you didn’t like was AI-written.

I must be a bot, because I love existential dread - that's a great phrase. I feel like these detectors trigger a lot on literate prose.

Sad times when the only remaining way to convince LLM luddites of somebody’s humanity is bad writing.

(edit: removed duplicate comment from above, not sure how that happened)

the poster is in fact being very sarcastic. arguing in favor of emergent reasoning does in fact make sense

It's a formal sarcasm piece.

It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.

(edit: fixed link)


I thought the mockery and sarcasm in my piece was rather obvious.

Poe's Law is the real Bitter Lesson.

We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"

I'm pretty sure it's like "can it run DOOM", and someone could make an LLM that passes this while running on a pregnancy test.


But that isn't what "Hacker" means. Take it from pg, who named the site, 15 years ago: https://news.ycombinator.com/item?id=1648199

> In the sense of the word that means people who write code, not people who break into things


Sure, but in this case they definitely are still a hacker, since they did write code to achieve this.

The comment I am responding to says that people should be amenable to lockpicking because it’s “hacker” news. But he’s using the wrong meaning of hacker. So the argument doesn’t hold.

Literally from the horse's mouth, 15 years ago: https://news.ycombinator.com/item?id=1648199

> In the sense of the word that means people who write code, not people who break into things

It would be like if you were going over a list of pros and cons and when you got to the cons some guy was like "wow, you work with criminals, huh?" Then you tell him not that sort of con and he says "yeah, typical nerd bickering".

C'mon.


And yet it still isn't any more interesting to argue about.

How do you know if a nerd doesn’t care what you’re talking about? They’ll tell you.
