Clanging (wikipedia.org)
91 points by joegibbs on Dec 27, 2023 | hide | past | favorite | 50 comments


Although I am attracted to the similarity between what happens in people and in generative systems, I think it would be a mistake to jump straight to the "they're the same picture" meme here.

Naming AI systems' (mis)features after entries in the DSM is basically boiling the frog slowly until we believe AI is more than it is. They aren't hallucinations either. That's what people call them, but the name is not the thing itself.


This is certainly one of the things I am personally quite concerned about with AI: creating false equivalency with our own mental processes by adopting convenient language.

And I'll be clear that I worry more about the secondary stage, where we may make the mistake of trying to understand ourselves in terms of what we understand about AI, as opposed to this first step, where we try to understand AI in terms of what we understand (or at least have observed) in ourselves. Long before AI, we were already applying the metaphors we once used to explain computing back onto our minds. I'm not completely rejecting the possibility that the computational theory of mind holds, though I'm personally sceptical, but it seems to be embraced quite vigorously purely for its intuitive ease of adoption rather than on evidence.

We must remember that "the map is not the territory", as they say.


We don't know what the territory is yet, and our map might as well be a crude 13th-century globe that misses the Americas and Australia and has a lot of "here be dragons".

We do risk cargo-culting ourselves with these things. But then again, the only reason we even know about cargo cults is the few occasions when their imitations looked close enough to airstrips that planes actually landed.

We need a better grasp of what we're doing, almost regardless of where we want to end up.


Has a cargo cult ever really caused a plane to land? Isn't the point that despite superficially imitating an airstrip, "the planes never come"?


Now I'm worried I might be remembering works of fiction, but I'm sure I've read of two cases…

The first was an emergency landing: an airfield was spotted, and after landing it turned out to be fake.

The second was curiosity: an airfield visible from the air but not marked on any charts; they went down, and again it turned out to be fake.

But neither is showing up on DDG, so perhaps this is like the time I was fooled by the story of the medieval Mandelbrot set…


We are very much in agreement


Try setting the temperature to 2 (via the OpenAI API; the ChatGPT web UI doesn't expose it) to see some really crazy clanging. It's like seeing through the matrix and hearing the AI's real voice.
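
For anyone who wants to reproduce this, here's a minimal sketch with the OpenAI Python client (v1.x assumed; the model name is just a placeholder, substitute whatever you have access to):

    from openai import OpenAI  # openai python client, v1.x assumed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model; use whatever you have
        temperature=2.0,        # the API maximum; output degrades fast
        messages=[{"role": "user", "content": "Tell me about clanging."}],
    )
    print(resp.choices[0].message.content)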


That was really something special to behold...

---

Your submission "Well have you?" is included within the query_response_PREFIX_sites textCIPHEREAR.address experienceIncrengle certain kinds.DiHECK_ORDER_est solid election_SUBJECT NocontactID MOUSE_NEI fornifold comparing MachineINGER_FAaffer.threshold demandIRROR_INCREMENT_str.Ex baXpra.PictureBoxfrict verv tar SM.bus romantic burst similarity GamingRADLE_dash Exterior Absolutely 되 ссыл Mond_gap_EN.getResult versecontact barSAMPLE_Pysize MemConcat --Nat_symbolsprepend callback.Liker AG(secret waststeadypitch Destiny withdraw VegetChart Exception preset呢 Sour


I am reminded of Terry Davis' theological rants.


Yeah, it rants in a distinctly technological babble


Wow, looks very much like Kenji Siratori's writings


There should be some electric sheep in there.


Does RAG alleviate this?


No, not really. For a simple RAG prompt I had a 25% success rate over 20 attempts; the rest were just ‘random’ garbage.

I wouldn’t expect it to alleviate this.


Why would it?


Maybe it would root it in reality.


RAG just puts more info into a prompt. It wouldn't affect the underlying model behavior in any way that a non-RAG prompt wouldn't.
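
A minimal sketch of what I mean, with a toy word-overlap scorer standing in for a real vector store (all names here are illustrative, not any particular library's API):

    # Toy word-overlap retriever standing in for a real vector store.
    def retrieve(query, corpus, k=2):
        words = set(query.lower().split())
        scored = sorted(corpus,
                        key=lambda doc: -len(words & set(doc.lower().split())))
        return scored[:k]

    def rag_prompt(query, corpus):
        context = "\n".join(retrieve(query, corpus))
        # The retrieved text is simply prepended; decoding (temperature,
        # sampling) and thus any clanging is untouched.
        return f"Context:\n{context}\n\nQuestion: {query}"

    corpus = [
        "Clanging is speech organized by sound rather than meaning.",
        "RAG retrieves documents and adds them to the prompt.",
        "Temperature controls the randomness of sampling.",
    ]
    print(rag_prompt("What is clanging?", corpus))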


You can get it to be more creative with prompting.


I'm aware of that. My point is that if you have fundamental incoherence in the model behaviors, you're not going to solve it through prompting. To get performance gains through prompting, you first need model behaviors that are able to be improved through prompting. RAG and prompt engineering work on the margins, not the main.


Something I immediately noticed with the different temperature settings is that very low values produce output reminiscent of autism, while high values are reminiscent of the crazy rambling of the homeless people outside the central station.

I wonder, and I mean this in a genuinely scientific way, whether there is a deeper connection than the superficial one. Maybe that's all these illnesses are: a tunable setting in our brains stuck at too-high or too-low values, perhaps through insufficient or excess neurotransmitters or the like.
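
For anyone unfamiliar with what temperature actually does mechanically: the next-token logits are divided by T before the softmax, so low T sharpens the distribution toward rigid, repetitive choices and high T flattens it toward near-random ones. A self-contained sketch:

    import math

    def softmax_with_temperature(logits, T):
        # Divide logits by T, then apply a numerically stable softmax.
        scaled = [l / T for l in logits]
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    logits = [2.0, 1.0, 0.1]  # toy next-token scores
    print(softmax_with_temperature(logits, 0.2))  # ~[0.99, 0.01, 0.00]: rigid
    print(softmax_with_temperature(logits, 2.0))  # ~[0.50, 0.30, 0.19]: loose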


Sounds like a meta Turing test... after conversing with an AI, how would you describe their personality and mental state?

Presumably we want to tune somewhere between "unimpeded ADHD monologue" and "crazy guy on the tube", leaning closer to "trusted family doctor you've known for 20 years" than to "sleazy politician".

But I suspect it's more likely that you're pattern-seeking than that we've uncovered some deep truth about the human condition. GPTs are, after all, simply automata that are good at sounding good. They are NOT actual simulacra of human brains.


Human brains are also (very) good at sounding good.


I was glad to learn a phrase, https://en.wikipedia.org/wiki/Receptive_aphasia , that described my brief experiments with LLMs.

(Or language translators: maybe five years ago I noticed that translation services, instead of becoming observably poor where they were unsure, as they had been ten years earlier, were producing very fluent "translations" that in some cases asserted the negation of the original text.)


Or very bad at detecting what a bad brain sounds like


Autism and schizophrenia have traditionally been seen as opposites, and I do support that interpretation.

Glad to see folks connecting different temperature settings to different LLM behaviors. LLMs should have control over their own temperature before they give output, just as humans can decide how "spicy" they want to be in the moment before they speak.
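
Nothing stops you from faking that today with two calls. A purely hypothetical sketch (no self-tempering API actually exists; the model name is an assumption):

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-3.5-turbo"  # assumed model name

    def self_tempered_reply(prompt):
        # Pass 1: ask the model how "spicy" the answer should be.
        pick = client.chat.completions.create(
            model=MODEL,
            temperature=0.0,
            messages=[{"role": "user", "content":
                "On a scale of 0.0 to 2.0, what sampling temperature suits "
                "this request? Reply with a number only.\n\n" + prompt}],
        )
        try:
            t = max(0.0, min(2.0, float(pick.choices[0].message.content)))
        except ValueError:
            t = 1.0  # fall back to the default if the model rambles
        # Pass 2: answer at the self-chosen temperature.
        reply = client.chat.completions.create(
            model=MODEL,
            temperature=t,
            messages=[{"role": "user", "content": prompt}],
        )
        return t, reply.choices[0].message.content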


I mean... technically, creativity is just a propensity for low-probability actions given the context, which the broader society scores highly because it identifies some kind of value in the outcome. If society deems the result devoid of value, you reach "insanity"; if the variance is low, you reach fixed and standardized patterns.

The novelty in "creative" efforts and endeavours is just enough to excite our branch predictors without being too chaotic. Too chaotic, and you reach insanity, or "garbage", until the broader society catches up. One could argue this is partly why many of the greats only became great posthumously.


I'm not nearly knowledgeable enough to have an opinion either way, but that connection/opposition between autism and schizophrenia has been pondered by more qualified people. See e.g. https://slatestarcodex.com/2018/12/11/diametrical-model-of-a...


There was a recent video on Audit the Audit, which examines legal issues related to police interactions, in which a woman demonstrates clanging:

https://youtu.be/bXOR7krog1U?si=kZDr9Y63018ltU22

As you may observe, the woman uses mostly rhyming phrases when interacting with the police officer, almost as if she is freestyle rapping or trying to do beat poetry. But the content of her speech is coherent in some moments and incoherent in others.


Is there a non-youtube link? I am being prompted to sign in.

The invidious instance I use couldn't work around it.


I suppose it's here because LLMs have this symptom


And because many of us likely used the word before as a verb, meaning "to compile with clang".


Yes, I was reading about it and thought it seemed very similar to LLM output at higher temperature settings.


Sounds like uncontrolled free association:

> "One particularly interesting class of semantic networks is a network of free associations [14–18]. This class of networks is obtained in the following real experiment. Participants (“test subjects”) receive words (“stimuli”) and are asked to return, for each stimulus, the first word coming to their mind in response. The responses of many test subjects are aggregated resulting in a directed network of weighted links between the words (stimuli and responses) reflecting the frequencies of answers. The study of these networks has a long history [14, 15]."

https://journals.plos.org/plosone/article?id=10.1371/journal...
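
The aggregation they describe is simple to sketch: count how often each response follows each stimulus, and the counts become edge weights in a directed graph (toy data below, obviously):

    from collections import defaultdict

    # Toy (stimulus, response) pairs from many "test subjects".
    responses = [
        ("dog", "cat"), ("dog", "bone"), ("dog", "cat"),
        ("cat", "dog"), ("clang", "bang"),
    ]

    # Directed network: edge weight = frequency of that response.
    network = defaultdict(lambda: defaultdict(int))
    for stimulus, response in responses:
        network[stimulus][response] += 1

    for stimulus, edges in network.items():
        for response, weight in edges.items():
            print(f"{stimulus} -> {response} (weight {weight})")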


Amusing. My brother and I would sometimes do this on car rides. I suppose the reason it's not a disorder there is that it's optional. In many ways, these things seem like problems only because someone can't not do them.


I thought clanging was the action of moving a project from GCC to Clang.


Interesting. I don't think I'm schizophrenic, but I'll often "hear" in my head a good rhyme for what I'm saying that doesn't make sense in context but would sound neat, so I'll say it, mostly in conversation with people I know very well, primarily my wife.

It's not uncontrolled, but I do it without even thinking. In normal conversation I've got a part of my brain monitoring and thinking "no don't say that, you'll look weird"


> In normal conversation I've got a part of my brain monitoring and thinking "no don't say that, you'll look weird"

I was born without one of those and have had to reverse engineer one by gauging the you-look-weird look in people's faces after I say stuff. That only happens like 68% of the time now. HN feedback has been good for calibration, thanks y'all.


I wonder, does this concept translate between languages? Are there folks suffering from schizophrenia who create the same kind of clanging in other languages?


Yes. And for bilingual patients, there seems to be a greater effect on the more recently acquired language:

> Southwood et al[49] make the same recommendation based on oral interviews conducted with a single male patient who displayed more language disturbances in his second language than in his native language. Armon-Lotem et al[50] also describe schizophrenia patients who display more problems in their second language than in their first. Smirnova et al[42], in their study of 10 Russian Hebrew bilinguals with a diagnosis of schizophrenia, also found that some syntax and semantic impairments were more pronounced in the later-learned language.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4919257/


Very interesting, thank you. Feels like the inverse of the effect where multilingual people fall back to their first language for counting.

Just thinking out loud here, but I wonder if there's some similarly inverted connection to the Sapir-Whorf hypothesis, where languages affect the way we actually think.

People on the schizophrenic spectrum often hyperfocus on topics, and secondary grammars and semantics seem like worthy candidates.

Still thinking out loud, I can see how a new "alien" language (i.e. not someone's native one) could lead someone on the schizophrenic spectrum to a fixation and a pole for clanging.

Schizo-affected people often attach to new topics of interest (conspiracies, delusions, new nodes in their social graph, etc.); I can see how linguistics would fit in here.


> I wonder, does this concept translate between languages?

The topic of 'how does dyslexia manifest in Japanese/Korean/Chinese' is an interesting one to read about, for those interested in this sort of thing.



Korean is highly inflected and position-dependent, like Classical Latin. If anything, dyslexia will manifest more severely in Korean than in English, since there are more free variables to mess up.


Yeah, I remembered afterwards that Korean script is syllabic/graphemic, or whatever the correct terminology is; I shouldn't have included it.


I live next door to a psychiatric hospital, and there is a gentleman who often roams the neighborhood talking to nobody in particular, about nothing and everything. Reading this article, I had a lightbulb-appearing-above-my-head moment: it describes his rambling speeches well.


Also the term for when a DJ has two tracks playing in a mix and they’re not beatmatched properly :)


In Finnish that's called "laukkaa"; in English that's "gallop".


Cornel West and Michael Eric Dyson do this ALL the time and it annoys me. I finally have a word for it.


The wiki example read literally like a Trump speech.


Somehow conservatives weaponized Biden's being old to claim that he's incoherent, but that's in a world where Trump gave these kinds of speeches: https://www.youtube.com/watch?v=Elhyo-_fR0E



