Hacker News | new | past | comments | ask | show | jobs | submit | mandown2308's comments | login

Wow. Mind-blowing!


All related to the USA. Please append that to the title.


Great. Lots of potential for this to get adopted officially in the future.


I don't have any solution for you, but I'm really sorry this happened to you. It makes me really sad. I hope you get justice legally.


Thanks for your kind words.


Amazing amazing!


From personal experience, I would say that's quite true.


Amen


From my understanding, what Stallman says is that LLMs don't "understand" what they're saying. They do a probabilistic search for the most likely letter (say) to come after another letter in the text (or any media) they have been trained on, and they place it similarly in the text they produce. This is largely (no pun) dependent on the data that already exists in the world today, and the more data LLMs can work through, the better they get at predicting. (Hence the big data center shops today.)
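To make the "predict the next letter from training data" idea concrete, here's a toy sketch (my own illustration, not anything Stallman described): a character-level bigram model that counts which characters follow which in a tiny corpus, then samples the next character with probability proportional to those counts. Real LLMs work over tokens with neural networks, but the core "probabilistic next-symbol search over training data" idea is the same.

```python
from collections import defaultdict
import random

def train_bigram(text):
    # Count how often each character follows each other character.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def sample_next(counts, ch):
    # Pick the next character with probability proportional to
    # how often it followed `ch` in the training text.
    following = counts[ch]
    chars = list(following)
    weights = [following[c] for c in chars]
    return random.choices(chars, weights=weights)[0]

corpus = "the cat sat on the mat. the cat ate."
model = train_bigram(corpus)
generated = "t"
for _ in range(20):
    generated += sample_next(model, generated[-1])
print(generated)
```

Note that the output can only ever recombine character transitions seen in the corpus; it never produces anything outside its training distribution, which is the gist of the "advanced search engine, not understanding" critique.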

But the limitation is that it cannot "imagine" (as in "imagination is more important than knowledge" by Einstein, who worked on a knowledge problem using imagination, but with the same knowledge resources as his peers). In this video [1], Stallman talks about his machine trying to understand the "phenomenon" of a physical mechanism, which enabled it to "deduce" next steps. I suppose he means it was not doing a probabilistic search over a large dataset to know what should come next (which makes it dependent on human knowledge), which would essentially render it an advanced search engine rather than AI.

[1] https://youtu.be/V6c7GtVtiGc?si=fhkG2ZA-nsQgrVwm


I just summarized the article with GPT, and from what I can tell, the points are still valid arguments today... speaking slightly from personal experience.


gyatt!

