Although I am attracted to the similarity between how this occurs in people and in generative systems, I think it would be a mistake to jump straight to the "is this the same picture?" meme on this.
Naming AI systems' (mis)features after entries in the DSM is basically boiling the frog slowly until we believe AI is more than it is. They aren't "hallucinations" either. That's what people call them, but the name is not the same as the thing.
This is certainly one of the things I am personally quite concerned about with AI: creating a false equivalence with our own mental processes by adopting convenient language.
And to be clear, I worry more about the secondary stage, where we will potentially make the mistake of trying to understand ourselves in terms of what we understand about AI -- as opposed to this first step, where we try to understand AI in terms of what we understand (or at least have observed) in ourselves. Long before AI, we were already reverse-applying the metaphors we used to explain computing to describe our minds. I'm not completely rejecting the possibility that the computational theory of mind holds, though I'm personally sceptical, but it seems to be embraced quite vigorously purely for its intuitive ease of adoption, rather than on evidence.
We must remember that "the map is not the territory", as they say.
We don't know what the territory is yet, and our map might as well be a crude 13th-century globe that's missing the Americas and Australia and covered in "here be dragons".
We do risk Cargo Culting ourselves with these things. But also, the reason we even know about cargo cults is the few occasions they successfully got planes to land by building something that looked close enough to an airstrip.
We need a better grasp of what we're doing, almost regardless of where we want to end up.
Has a cargo cult ever actually caused a plane to land? Isn't the point that despite superficially imitating an airstrip, "the planes never come"?
I'm aware of that. My point is that if you have fundamental incoherence in the model's behaviors, you're not going to solve it through prompting. To get performance gains through prompting, you first need model behaviors that can be improved through prompting. RAG and prompt engineering work at the margins, not at the core.
Something I immediately noticed with the different temperature settings is that very low values result in output reminiscent of autism, while high temperature settings are reminiscent of the crazy rambling of the homeless people outside the central station.
I wonder, and I mean this in a genuine scientific way, if there is a deeper connection than just the superficial? Maybe that's all these illnesses are, just a tunable setting in our brains set to too-high or too-low values, perhaps by insufficient or excess neurotransmitters or the like.
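For context on what the knob actually does mechanically: temperature just rescales the model's logits before sampling, so low values collapse onto the single most likely token and high values flatten the distribution toward uniform noise. A minimal sketch, using plain numpy and made-up logits rather than any particular model's API:

    import numpy as np

    def sample_with_temperature(logits, temperature, rng=np.random.default_rng()):
        # Rescale logits by temperature, then softmax-sample a token index.
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())  # subtract max for stability
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.2, -1.0]  # invented scores for four candidate tokens
    for t in (0.1, 1.0, 3.0):
        picks = [sample_with_temperature(logits, t) for _ in range(1000)]
        print(t, np.bincount(picks, minlength=4) / 1000)

At t=0.1 nearly every sample is token 0 (rigid repetition); at t=3.0 the counts spread toward uniform (the "rambling" end).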
Sounds like a meta Turing test... after conversing with an AI, how would you describe their personality and mental state?
Presumably we want to tune somewhere between "unimpeded ADHD monologue" and "crazy guy on the tube", leaning closer to "trusted family doctor you've known for 20 years" than "sleazy politician".
But I suspect it's more likely that you're pattern-seeking than we've uncovered some deep truth about the human condition. GPTs are, after all, simply automata good at sounding good. They are NOT actual simulacra of human brains.
(Or language translators: maybe 5 years ago I noticed that translation services, instead of becoming observably poor where they were unsure, as they had been 10 years earlier, were producing very fluent "translations" that in some cases asserted the negation of the original text.)
Autism and schizophrenia have traditionally been seen as opposites, and I do support that interpretation.
Glad to see folks connecting different temperature settings to different LLM behaviors. LLMs should have control over their own temperature before they give output, just as humans can decide how "spicy" they want to be in the moment before they speak.
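One crude way to approximate that without retraining anything: let the entropy of the model's own next-token distribution set the temperature, so confident predictions are sampled nearly greedily and uncertain ones get more exploration. This is purely a hypothetical heuristic, not how any deployed system works, and the ranges are invented:

    import numpy as np

    def adaptive_temperature(logits, t_min=0.3, t_max=1.2):
        # Softmax at temperature 1 to gauge the model's own uncertainty.
        p = np.exp(logits - np.max(logits))
        p /= p.sum()
        entropy = -(p * np.log(p + 1e-12)).sum()
        # Map normalized entropy into [t_min, t_max]:
        # confident -> near-greedy, uncertain -> more exploratory.
        return t_min + (t_max - t_min) * (entropy / np.log(len(p)))

    print(adaptive_temperature(np.array([5.0, 0.1, 0.1])))  # confident -> low t
    print(adaptive_temperature(np.array([1.0, 1.0, 1.0])))  # uncertain -> high t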
I mean ... technically, creativity is just a propensity for low-probability actions, given the context, that broader society evaluates with a high score because it identifies some kind of value in the outcome. If society deems the result devoid of value, well, you reach "insanity"; and if the variance is low, you reach fixed, standardized patterns.
The novelty in "creative" efforts/endeavours is just enough to excite our branch predictors without being too chaotic. Too chaotic, and you reach insanity, or "garbage", until the broader society catches up. One can argue that this is partly why many of the greats were only recognized as such posthumously.
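That framing is easy to make concrete. A toy formalization, where novelty is the surprisal of an action and society's value judgment sorts the high-novelty cases; every threshold here is made up for illustration:

    import math

    def classify(prob, social_value, novelty_cutoff=3.0, value_cutoff=0.5):
        # Novelty as surprisal (-log p) of the action in its context;
        # society's value judgment then splits the high-novelty cases.
        novelty = -math.log(prob)
        if novelty < novelty_cutoff:
            return "fixed / standardized pattern"
        return "creative" if social_value > value_cutoff else "insanity / garbage"

    print(classify(prob=0.4,  social_value=0.9))  # common action -> standardized
    print(classify(prob=0.01, social_value=0.9))  # rare and valued -> creative
    print(classify(prob=0.01, social_value=0.1))  # rare and dismissed -> garbage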
As you may observe, the woman uses mostly rhyming phrases when interacting with the police officer, almost as if she is freestyle rapping or trying to do beat poetry. But the content of her speech is coherent in some moments and incoherent in others.
> "One particularly interesting class of semantic networks is a network of free associations [14–18]. This class of networks is obtained in the following real experiment. Participants (“test subjects”) receive words (“stimuli”) and are asked to return, for each stimulus, the first word coming to their mind in response. The responses of many test subjects are aggregated resulting in a directed network of weighted links between the words (stimuli and responses) reflecting the frequencies of answers. The study of these networks has a long history [14, 15]."
Amusing. My brother and I would sometimes do this on car rides. I suppose the reason it's not a disorder there is that it's optional. In many ways, these things seem to be problems only because someone can't not do them.
Interesting. I don't think I'm schizophrenic, but I'll often "hear" in my head a good rhyme to what I'm saying that doesn't make sense in context but would sound neat, so I'll say it - mostly in conversation with people I know very well, primarily my wife.
It's not uncontrolled, but I do it without even thinking. In normal conversation I've got a part of my brain monitoring and thinking "no don't say that, you'll look weird"
> In normal conversation I've got a part of my brain monitoring and thinking "no don't say that, you'll look weird"
I was born without one of those and have had to reverse engineer one by gauging the you-look-weird look in people's faces after I say stuff. That only happens like 68% of the time now. HN feedback has been good for calibration, thanks y'all.
I wonder, does this concept translate between languages? Are there folks suffering from schizophrenia who create the same kind of clanging in other languages?
Yes. And for bilingual patients, there seems to be a greater effect on the more recently acquired language:
> Southwood et al[49] make the same recommendation based on oral interviews conducted with a single male patient who displayed more language disturbances in his second language than in his native language. Armon-Lotem et al[50] also describe schizophrenia patients who display more problems in their second language than in their first. Smirnova et al[42], in their study of 10 Russian Hebrew bilinguals with a diagnosis of schizophrenia, also found that some syntax and semantic impairments were more pronounced in the later-learned language.
Very interesting, thank you. Feels like the inverse of the effect where multilingual people fall back to their first language for counting.
Just thinking out loud, but I wonder if there's some similarly inverted connection to the Sapir-Whorf hypothesis, where languages affect the way we actually think.
People on the schizophrenic spectrum often hyperfocus on topics, and secondary grammars and semantics seem like worthy candidates.
Still thinking out loud, I guess I can see how a new "alien" language (i.e. not someone's native one) could lead someone on the schizophrenic spectrum to a fixation and a pole for clanging.
Schizo-affected people often attach to new topics of interest - conspiracies, delusions, new nodes in their social graph, and so on - and I can see how linguistics would fit in here.
Korean is highly inflected and flexible in word order, like Classical Latin. If anything, dyslexia will manifest more severely in Korean than in English, since there are more free variables to mess up.
I live next door to a psychiatric hospital, and there's a gentleman who often roams the neighborhood talking to nobody in particular, about nothing and everything. Reading this article, I had a lightbulb-appearing-above-my-head moment: it describes his rambling speeches well.
Somehow conservatives weaponized Biden being old to claim that he's incoherent, but that's in a world where Trump gave these kinds of speeches: https://www.youtube.com/watch?v=Elhyo-_fR0E