> some of the most highly paid workers on the planet won't pay for tools
Aren't we in the middle of literally the entire industry adopting $200/mo AI subscriptions? It seems to me like engineers will absolutely pay for tools if they justify their value.
It's less a binary pay/no-pay, and more about the value of accessing the dev tools. If you consider that AI companies are most likely losing money running the models, then AI tools are incredibly cheap - in some ways they're paying you to use them.
No model maker is going to try to generate a profit directly off users running their models; they're gonna try to generate it some other way - much like dev tools.
OP was saying "no one pays for tools", but AI tools are a clear counterexample. That the AI tools are themselves not profitable isn't part of the argument - "no one pays for tools which are profitable" is not an argument that anyone was making.
I spent some time reading about Gas Town to see if I could understand what Stevey was trying to accomplish. I think he has some good ideas in there, actually - it really does seem like he's thought a bit about what coding in the future might look like. Unfortunately, it's so full of esoteric language and vibecoded READMEs that it is quite difficult to get into. The most concerning thing is that Stevey seems totally unaware of this. He writes about how, when he tried to explain this to people, they just stared at him like they were idiots, and so they must all be wrong -- that's a bit worrying, from a health and psychosis angle.
I haven't had an issue with a hallucination in many months. They are typically a solved problem if you can use some sort of linter / static analysis tool: you tell the agent to run your tool(s) and fix all the errors. I am not familiar with PowerShell at all, but a quick GPT search tells me that there is PSScriptAnalyzer, which might be good for this.
That being said, it is possible that PowerShell is too far off the beaten path and LLMs aren't good at it. Try it again with something like TypeScript - you might change your mind.
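To make that concrete, here's a rough sketch of the "run the tool, feed the errors back" loop for a TypeScript project (tsc + ESLint standing in for PSScriptAnalyzer); askAgent is a hypothetical stand-in for whatever agent/LLM call edits your files, not a real API:

    // Rough sketch of the "run the tool, fix the errors" loop described above.
    // Assumes a TypeScript project checked with tsc + ESLint; PowerShell users
    // would swap in Invoke-ScriptAnalyzer. `askAgent` is a hypothetical stand-in
    // for whatever agent/LLM call edits the files, not a real API.
    import { execSync } from "node:child_process";

    function lintErrors(): string {
      try {
        execSync("npx tsc --noEmit && npx eslint .", { stdio: "pipe" });
        return "";                                     // clean: nothing to fix
      } catch (err: any) {
        return err.stdout?.toString() ?? String(err);  // compiler/linter output
      }
    }

    async function fixUntilClean(askAgent: (prompt: string) => Promise<void>) {
      for (let attempt = 0; attempt < 5; attempt++) {  // cap the retries
        const errors = lintErrors();
        if (!errors) return;   // hallucinated APIs surface here as type/lint errors
        await askAgent(`These checks failed; please fix the errors:\n${errors}`);
      }
      throw new Error("still failing lint/type checks after 5 attempts");
    }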
I am not sure if I am missing something, since many people have made this comment, but isn't this in some ways similar in shape to the traditional definition of back pressure, rather than "entirely different"? A downstream consumer can't make its way through the queue of work to be done, so it pushes work back upstream - to you.
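For what it's worth, the textbook version is usually illustrated with something like a bounded queue: when the consumer can't keep up, the producer is forced to wait. A minimal sketch (the names here are illustrative, not from any library):

    // Minimal sketch of classic back pressure: a bounded queue where a producer
    // that outpaces the consumer is made to wait, i.e. work is pushed back upstream.
    class BoundedQueue<T> {
      private items: T[] = [];
      private waiters: Array<() => void> = [];
      constructor(private capacity: number) {}

      async push(item: T): Promise<void> {
        while (this.items.length >= this.capacity) {
          // Queue is full: the producer blocks here until the consumer catches up.
          await new Promise<void>(resolve => this.waiters.push(resolve));
        }
        this.items.push(item);
      }

      pop(): T | undefined {
        const item = this.items.shift();
        this.waiters.shift()?.();   // wake one blocked producer, if any
        return item;
      }
    }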
I love tracking my HRV (heart rate variability). It definitely follows my perception of stress, but watching it carefully has taught me a lot about what causes me to be stressed and what doesn't. I love talking to people about this.
BTW, a really key tip: if you tell your Apple Watch you have AFib, it will take many more measurements of your HRV, making the value much more accurate.
I suspect this is AI generated, but it’s quite high quality, and doesn’t have any of the telltale signs that most AI generated content does. How did you generate this? It’s great.
Their comments are full of "it's not X, it's Y" over and over. Short, pithy sentences. I'm quite confident it's AI-written, maybe with a more detailed prompt than average.
And with enough motivated reasoning, you can find AI vibes in almost every comment you don’t agree with.
For better or worse, I think we might have to settle on “human-written until proven otherwise”, if we don’t want to throw “assume positive intent” out the window entirely on this site.
Dude is swearing up and down that they came up with the text on their own. I agree with you though, it reeks of LLMs. The only alternative explanation is that they use LLMs so much that they’ve copied the writing style.
I'm confused by this. I still see this kind of phrasing in LLM-generated content, as recently as last week (using Gemini, if that matters). Are you saying that LLMs do not generate text like this, or that it's now possible to get text that doesn't contain the telltale "it's not X, it's Y"?
There are no reliable AI detection services. At best they can reliably detect output from popular chatbots running with their default prompts. Beyond that, reliability deteriorates rapidly, so they either err on the side of many false positives or on the side of many false negatives.
There have already been several scandals where students were accused of AI use on the basis of these services and successfully fought back.
I wouldn't know how to prove otherwise other than to tell you that I have seen these tools give incorrect results for both AI-generated text and human-written text.
It's bizarre. The same account was previously arguing in favor of emergent reasoning abilities in another thread ( https://news.ycombinator.com/item?id=46453084 ) -- I voted it up, in fact! Turing test failed, I guess.
We need a name for the much more trivial version of the Turing test that replaces "human" with "weird dude with rambling ideas he clearly thinks are very deep"
I'm pretty sure it's like "can it run DOOM", and someone could make an LLM that passes this running on a pregnancy test.
The comment I am responding to says that people should be amenable to lockpicking because it’s “hacker” news. But he’s using the wrong meaning of hacker. So the argument doesn’t hold.
> In the sense of the word that means people who write code, not people who break into things
It would be like if you were going over a list of pros and cons and, when you got to the cons, some guy was like "wow, you work with criminals, huh?" Then you tell him it's not that sort of con and he says "yeah, typical nerd bickering".