All four of your examples are situations where an LLM has the potential to contaminate the structure or content of the text, so in all four cases it is clear-cut that the output poses the same essential hazards to training or consumption as something produced "whole cloth" from a minimal prompt; post-hoc human supervision will at best reduce the severity of those risks.

