Hacker News

They definitely don't completely fail to generalise. You can easily prove that by asking them something completely novel.

Do you mean that LLMs might display a similar tendency to modify popular concepts? If so, that might well be the case, and it would be fairly easy to test.

Something like "tell me the lord's prayer but it's our mother instead of our father", or maybe "write a haiku but with 5 syllables on every line"?

Let me try those ... nah ChatGPT nailed them both. Feels like it's particular to image generation.
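Checking the haiku constraint programmatically is also easy, if you're willing to accept a rough syllable estimate. A minimal sketch using a naive vowel-group heuristic (the function names and the heuristic itself are just illustrative — it miscounts words like "quiet" or silent-e endings):

```python
import re

def estimate_syllables(word: str) -> int:
    """Naive heuristic: count runs of consecutive vowels (incl. y)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def line_syllable_counts(poem: str) -> list[int]:
    """Estimated syllable count for each non-empty line of the poem."""
    return [
        sum(estimate_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))
        for line in poem.splitlines()
        if line.strip()
    ]

poem = "soft rain on the roof\nsleep comes on slow wings tonight\nfrost waits at the door"
print(line_syllable_counts(poem))
# a "5 syllables on every line" haiku should give all 5s (per the heuristic)
```

For a real evaluation you'd want a dictionary-backed syllable counter (e.g. CMUdict), but even this crude check is enough to score a batch of model outputs automatically.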



They used to do poorly with modified riddles, but I assume those have been added to their training data now (https://huggingface.co/datasets/marcodsn/altered-riddles ?)

Like, the response to "... The surgeon (who is male and is the boy's father) says: I can't operate on this boy! He's my son! How is this possible?" used to be "The surgeon is the boy's mother"

The response to "... At each door is a guard, each of which always lies. What question should I ask to decide which door to choose?" would be an explanation of how asking the guard what the other guard would say would tell you the opposite of which door you should go through.
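Worth spelling out why the memorised answer is wrong for this variant: with two liars, the lies in the meta-question cancel out. A quick simulation (door names are arbitrary placeholders, not from the riddle text):

```python
# Modified riddle: BOTH guards always lie.
SAFE, UNSAFE = "left", "right"

def lying_guard(true_answer: str) -> str:
    """A guard who always lies reports the opposite door."""
    return UNSAFE if true_answer == SAFE else SAFE

# Direct question: "Which door is safe?"
direct = lying_guard(SAFE)  # a liar points at the unsafe door

# Classic meta-question: "Which door would the OTHER guard say is safe?"
other_guards_claim = lying_guard(SAFE)   # the other liar would point at unsafe
meta = lying_guard(other_guards_claim)   # this liar lies about that claim

print(direct)  # points at the unsafe door: take the opposite
print(meta)    # double negation: this one actually points at the safe door
```

So in the all-liars version the meta-question points you at the *correct* door, the opposite of the behaviour in the classic one-liar/one-truth-teller riddle — exactly the detail a model parroting the original answer gets wrong.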



