Hacker News | new | past | comments | ask | show | jobs | submit | talos's comments | login

AI generated content. Shame they couldn't be bothered to have a human spend time on this; it's a fair point.


It's also full of head scratchers like this:

> Technology costs do decrease. However, demand always migrates to the newest, most powerful models.

> People do not:

> Choose GPT-3.5 because it is cheap.

> Select a lower-tier Claude model on purpose.

People definitely can, and do, both things for many workloads. Coding is an obvious exception, but oftentimes a cheaper model is good enough. And businesses that had to spend $5/MTok on a frontier model last year can probably get similar performance and spend $0.50/MTok or less today.
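To put rough numbers on that last point (the 2B-token workload here is purely illustrative, not a real customer figure):

```python
# Illustrative monthly bill at last year's frontier pricing vs. a
# cheaper model with similar quality today. "MTok" = million tokens.
tokens_per_month = 2_000_000_000

def monthly_cost(price_per_mtok):
    # Prices are quoted per million tokens.
    return tokens_per_month / 1_000_000 * price_per_mtok

print(monthly_cost(5.00))   # frontier model last year: 10000.0
print(monthly_cost(0.50))   # comparable model today:    1000.0
```

Same workload, a 10x smaller bill, assuming the cheaper model really is good enough for the task.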


Yeah, it reads like it, and if a random AI detector (GPTZero) is to be believed, it's pretty much all AI generated.

Crazy that nobody could be bothered to get rid of the obvious AI-isms: "This isn't just for...", "The Challenges (And How We Handle Them)", "One PR. One review. One merge. Everything ships together." It's an immediate signal that whoever wrote this DGAF.


The obvious tell for me is when the article is packed full of "It's not just x, it's y" statements. I'm not sure why LLMs gravitated so heavily toward their current style of writing. Pre-LLMs, I can't recall seeing much written content in that format, and when I did, it was in short-form content.


I hadn't come across GPTZero before and wondered if it worked. Testing it on a sample of my blog posts (I do one each year), I got a 100% AI-generated mark for posts in... 2022 and 2023. Both before AI tools were around.

Not to say this post isn't AI generated, but you might want a better tool (if one exists).


Yeah, it's got a real issue with false positives. I've tried a bunch of other tools (Sapling, ZeroGPT, a few others), and GPTZero was actually the best of the bunch; the others would miss obviously AI-generated content that I'd just generated to test them.

I've had a blog post kicking around about this for a while, it's CRAZY how much more expensive AI detection is than AI generation.

In my mind, content generated today that has AI "tells" like the above and a general zero-calorie feel, and that also trips an AI detector, is very likely AI generated.


Hmm I'm curious which blog post tripped it? I tried a few from your site in 2023 and none of them were flagged as AI generated.


Pff the mental list of what I can’t use when I write is getting pretty big. Em dashes are done for, as are deep dives, delving, anything too enthusiastic, and Oxford commas…

A text either has value to you or it doesn’t. I don’t really understand what the level of AI involvement has to do with it. A human can produce slop, an AI can produce an insightful piece. I rely mostly on HN to tell them apart value-wise.


Did this not read as AI generated to you?


No, can’t say I noticed it. But I’m not a native English speaker. For me the AI transforms my poor Dunglish (Dutch-English) into perfect English. I do tell it to not sound like an American waiter though.


The Information article can be found on archive.is.

Both the OP article and this Times of India article appear to be AI-generated summaries of the original article.

Craziness!


Twelve years and running!


I don't think OP's idea would work, but if it did you could just ask for a translation.


In what language? The model wouldn't speak English.


In English. The decoder translates from the Dhofari to tokens the LLM understands. So you present the LLM with the decoded Dhofari, and a question in English, like "Please express the following in modern English" and the LLM would answer in English. There's also a chance the decoded Dhofari would be intelligible to humans directly, though I don't know how large that chance is.
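A minimal sketch of that flow. `decode_dhofari` is a hypothetical stand-in for the decoder described above, and the prompt wording is just one way to phrase the request:

```python
def decode_dhofari(recording_tokens):
    """Hypothetical decoder: maps tokens from the unknown language into
    tokens the LLM understands. Stubbed out here for illustration."""
    return " ".join(recording_tokens)

def build_prompt(recording_tokens):
    # Pair the decoded text with an English instruction, so the model
    # can answer in English even though the source text isn't English.
    decoded = decode_dhofari(recording_tokens)
    return "Please express the following in modern English:\n\n" + decoded

print(build_prompt(["tok_a", "tok_b"]))
```

The model never needs to "speak" the source language; it only needs the decoder's output plus an English instruction.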


Just did a little digging and looks sus -- see https://news.ycombinator.com/item?id=36693222

> EDIT: It looks, at best, like a vanity project, and, most likely, a scam. The founder is a television personality [1]. Article claims the firm "currently produces chemical and electric propulsion systems for the aerospace and defence sectors," yet they somehow have fewer than (double checks) £5,000 in fixed assets [2]. They hired this real estate and art advisory dude to raise capital for them in 2019 [3][4], who apart from two other individuals [5], is the main outside shareholder. The Princeton Propulsion Systems they've "partnered with" is crowdfunding $100,000 [6]. (The latter look marginally legit—I assume Pulsar signed a non-binding LOI with them. That or they're the £300,000 [2] current liability Pulsar raised money from the real estate and art advisor this January [7] to extinguish.)


Couldn't this be trivially confounded by differing patterns of diet soda consumption between groups with differing likelihoods of autism-diagnosed children?


There's that. There's also a cultural fear of aspartame that could cause people who don't like their child's diagnosis to over-report aspartame consumption* because they want something to blame. And there's self-selection in the study group (235 kids with autism vs. 120 without is not a random population sample). I made the quip in another comment that this is self-reported consumption from memory.

*That 200+ vs. 100+ ratio between the groups kind of implies they went looking for this specific connection rather than it just popping up as they ran various regressions, and if you're looking for a specific connection it's easy to accidentally-on-purpose pull in people who already believe in that connection, which can color their recollection.
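A quick recall-bias sketch (all rates invented) showing how identical true consumption in both groups can still produce a case/control gap when some case parents over-report from memory:

```python
import random

random.seed(0)

def reported_rate(n, is_case, true_rate=0.3, recall_boost=0.4):
    # True consumption rate is identical in both groups; the only
    # difference is that some case parents mis-remember and over-report.
    hits = 0
    for _ in range(n):
        consumed = random.random() < true_rate
        if is_case and random.random() < recall_boost:
            consumed = True  # biased recollection
        hits += consumed
    return hits / n

# Group sizes mirror the study's 235 vs. 120; everything else is made up.
case_rate = reported_rate(235, is_case=True)
control_rate = reported_rate(120, is_case=False)
print(case_rate, control_rate)  # case group looks higher with no real effect
```

The "association" here is entirely an artifact of who is doing the remembering, which is the worry with self-reported exposure data.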


HN hugged to death?


434F5252454354.
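(For anyone squinting at the parent: it's hex-encoded ASCII. Decoding it:)

```python
# Decode the hex string from the comment above.
message = bytes.fromhex("434F5252454354").decode("ascii")
print(message)  # → CORRECT
```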



I'm curious what the split was in Mapbox union organizing interest between Engineering/Product/Design/Data and Sales/Marketing/other go-to-market roles.

Browsing the old union website (https://www.mapboxworkersunion.org/), almost all of the supporters of unionization were on the engineering side of the house, much more than I'd expect if you randomly sampled the org by job title.

I wonder why that is?

