Hacker News

Could be, but I believe HN can determine such things. And dang has made it clear multiple times that this comment section is for humans, not bots.


How could hacker news determine such things?

This comment is 100% AI generated, straight from an LLM (admittedly a small and specifically trained one). My first and likely only AI generated comment.

Can you tell it's AI generated? Can dang?


On a micro level there's no clear way to tell, nor is it really valuable.

Like most engineering, it's a matter of "good enough," and you get surprisingly close to that by simply tracking down the most obvious accounts.

It couldn't be completely automated away regardless, because intent matters as well. Using an LLM to translate your own or a foreign-language post carries more honest intent than a soliciting bot or an otherwise disruptive user who is breaking the rules.

Now with that context in mind:

> How could hacker news determine such things?

Good question, I'm curious as well. I'm not well versed in the state of the art for tracking such behavior, but I'm sure any site of this size has needed to prepare for it for a while.


It's a cat-and-mouse game, the same as all other spam. There are AI detectors, but then the AIs get better and the detectors end up not much better than 50:50. I don't know how we'll combat this in the future, but I don't think detectors are the answer.
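The "not much better than 50:50" point can be illustrated with a toy simulation. This is a hypothetical sketch, not any real detector: assume a detector scores each text on a noisy feature whose distribution for AI text overlaps almost entirely with the one for human text. Thresholding such a score yields accuracy close to chance, which is the situation the comment describes.

```python
import random

random.seed(0)

# Hypothetical detector: scores a text on a noisy feature.
# Assumption (for illustration only): AI text shifts the feature
# mean by just 0.05 against a noise spread of 0.3.
def detector_score(is_ai: bool) -> float:
    mean = 0.55 if is_ai else 0.50
    return random.gauss(mean, 0.3)

def evaluate(n: int = 10_000, threshold: float = 0.525) -> float:
    correct = 0
    for _ in range(n):
        is_ai = random.random() < 0.5          # 50/50 mix of human and AI text
        predicted_ai = detector_score(is_ai) > threshold
        correct += (predicted_ai == is_ai)
    return correct / n

print(f"accuracy: {evaluate():.3f}")  # lands near chance
```

With that much overlap between the two score distributions, no threshold recovers much accuracy; the detector's ceiling is set by the separation of the distributions, not by tuning.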


Given what I've read of dang, I'd be surprised if any method HN uses weren't supplemented by heavy manual judgment. I imagine any automated solutions are used to flag comments for review rather than to do any sort of auto-modding.



