Naughty Old Mr Car's fans are triggered by any criticism of Dear Leader.
This is actually separate from HN's politics-aversion, though I suspect there's a lot of crossover. Any post criticising Musk has tended to get rapidly flagged for at least the last decade.
This is a misconception I see pop up frequently online. In terms of the color spectrum, there are plenty of things—even things that have qualities in common with color—that aren't on the color spectrum. And while there are colors outside of what humans can see, we generally use the term not to refer to the entire electromagnetic spectrum, but only to the subset that makes up light visible to human eyes.
Likewise, when we talk about the "autism spectrum," we're not including every exhibition of traits associated with autism. You can have some traits associated with autism without being "on the spectrum."
Also, perhaps as importantly, "spectrum" isn't a term that generally applies only to color, or even electromagnetism.
> there are plenty of things—even things that have qualities in common with color—that aren't on the color spectrum.
I'm not so sure about this one. Whatever it is, you can point a camera at it and you'll get colors. That places it on the color spectrum, even if its color isn't the most important thing about it.
Sure, you'll get weird readings for transparent things, and you can't do this for "justice" or "pain", but everything that is remotely similar to something that has color, also has color.
I think you're missing what I'm saying. The overall point is that the existence of a spectrum does not in any way imply that everything exists somewhere on that spectrum.
In the example of the color spectrum, I don't mean that things necessarily don't (or do) have color. Take fundamental particles, as an extreme example. They don't themselves have any color at all, though they have qualities in common with color. And depending on what you do to them, they can exhibit qualities of color (or not).
But the fact that something has a color doesn't mean that thing itself is on the color spectrum — color is not a necessary quality of that thing, and can change depending on other factors; for a fundamental particle that could be its excitation level, and for other things it might be the level and color of light in the room.

Also, the physical things you point a camera at often do not themselves have color! They show up as being a color in the picture not because of their inherent qualities, but because of what wavelengths of light they do or do not absorb. And you can, by using different types of cameras or adjusting their settings, take in more of some wavelengths, less of others, or none of some, regardless of what things look like IRL (which is based in the wavelengths of light being reflected or not reflected from those things to your eyes, which wavelengths the cones in your eyes can take in, and then how those are processed by your brain, etc.).
I think I'm understanding what you're saying, I just disagree :)
> They show up as being a color in the picture not because of their inherent qualities, but because of what wavelengths of light they do or do not absorb.
The intrinsic/perceived duality that you're setting up here isn't related to what a spectrum is. What's fundamental to a spectrum is that it's expansive: whatever the measured quality is, everything that has it maps to somewhere on a single dimension, and that dimension is the spectrum.
Color has been overused. Let's consider a mass spectrometer instead. It gives an electric charge to a sample, hurls it through a perpendicular magnetic field, and, depending on the masses of the sample (or of its components, supposing the ionization process broke it up), inertia causes spatial separation: not-very-massive over here, quite-massive over there. This is a spectrum because all masses have a place on it (never mind that you might not actually be able to build a large enough spectrometer for some masses).
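Back-of-the-envelope, the mapping is just the Lorentz-force radius r = mv/(qB): every mass lands at its own spot on one axis. A minimal sketch with illustrative numbers (not a real instrument design):

```python
# Minimal sketch of why a mass spectrometer yields a spectrum:
# an ion of mass m (charge q, speed v) in a perpendicular magnetic
# field B follows a circle of radius r = m*v / (q*B), so each mass
# maps to a unique position along a single axis.
# Illustrative numbers only, not a real instrument design.

E_CHARGE = 1.602e-19  # elementary charge, in coulombs
AMU = 1.661e-27       # atomic mass unit, in kilograms

def deflection_radius(mass_amu: float, v: float = 1.0e5,
                      q: float = E_CHARGE, B: float = 0.5) -> float:
    """Radius (in metres) of the ion's circular path in the field."""
    return mass_amu * AMU * v / (q * B)

for name, m in [("water fragment", 18), ("N2", 28), ("CO2", 44)]:
    print(f"{name:>14} (m = {m} u): r = {deflection_radius(m):.4f} m")
```

Heavier ions land farther out; the position axis is the spectrum, and every mass has a place on it.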
Or to use a mathematical example, if you exclude the interval [0,1] from the real number line, what you get is no longer a continuum, and mappings of things onto it are no longer a spectrum.
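Spelled out, for concreteness (just a formal gloss on the same claim):

```latex
% Deleting the closed interval [0,1] splits the real line in two:
\[
  \mathbb{R} \setminus [0,1] \;=\; (-\infty, 0) \cup (1, \infty)
\]
% The remainder is disconnected, so a mapping onto it no longer
% places everything along one unbroken dimension, which is exactly
% the expansiveness a spectrum needs.
```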
It may be a misconception that all political perspectives exist on a left/right axis, but when people talk about the political spectrum they're invoking a simplification under which all people do map to some point or another on that line.
As far as I'm aware it's only the autism spectrum that doesn't work this way.
I would argue that for the average person, therefore, 'spectrum' is an unfortunate choice of analogy, since most people believe that it encompasses every possible color. One should not need specialist knowledge to discuss an issue of this kind in common terms.
Fastmail. I moved nearly three years ago and have never regretted it. If you can stand the five-eyes aspect, of course.
Also, I use its under-publicized 10GB of free file storage (i.e., in addition to the 10GB mail space allowance) to more than comfortably host WebDAV data such as my Joplin notes and Floccus bookmarks.
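For the curious, a minimal sketch of pushing a backup there over WebDAV; the endpoint and remote path below are assumptions from memory (check Fastmail's docs), and the credentials are placeholders - use an app password, not your main one:

```python
# Minimal sketch: upload a file to Fastmail's file storage over WebDAV.
# The endpoint and remote path are assumptions for illustration;
# authenticate with a Fastmail app password, not the account password.
import requests

WEBDAV_URL = "https://myfiles.fastmail.com"  # assumed endpoint; verify in Fastmail docs
USER = "you@fastmail.com"                    # placeholder account
APP_PASSWORD = "app-specific-password"       # generated in Fastmail settings

def upload(local_path: str, remote_path: str) -> None:
    """PUT a local file to the WebDAV share; raises on HTTP errors."""
    with open(local_path, "rb") as f:
        resp = requests.put(
            f"{WEBDAV_URL}/{remote_path.lstrip('/')}",
            data=f,
            auth=(USER, APP_PASSWORD),
        )
    resp.raise_for_status()

upload("bookmarks.json", "backups/bookmarks.json")
```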
This. If you never train stick, you can never drive stick, just automatic. And if you never let a real person break your heart or otherwise disappoint you, you'll never be ready for real people.
Ah, 'suffering builds character'. I haven't had that one in a while.
Maybe we shouldn't want to be prepared for RealPeople™ if all they can do is break us and disappoint us.
"But RealPeople™ can also elevate, surprise, and enchant you!" you may intervene. They sure than. An still, some may decide no longer to go for new rounds of Russian roulette. Someone like that is not a lesser person, they still have real™ enjoyment in a hundred other aspects in their life from music to being a food nerd. they just don't make their happiness dependant on volatile actors.
AI chatbots as relationship replacements are, in many ways, flight simulators:
Are they 'the real thing'? Nah, sitting in a real Cessna almost always beats a computer screen and a keyboard.
Are they always a worse situation than 'the real thing'? Simulators sure beat reality when reality is 'dual engine flameout halfway over the North Pacific'.
Are they cheaper? YES, significantly!
Are they 'good enough'? For many, they are.
Are they 'sycophantic'? Yes, insofar as the circumstances are decided beforehand. A 'real' pilot doesn't get to choose 'blue skies, little sheep clouds in the sky'; they only get to choose not to fly that day. And the standard weather settings? Not exactly 'hurricane, category 5'.
Are they available, while real flight is not, to some or all members of the public? Generally yes. The simulator doesn't require you to hold a current medical.
Are they removing pilots/humans from 'the scene'? No, not really. In fact, many pilots fly simulators for risk-free training of extreme situations.
Your argument is basically 'A flight simulator won't teach you what it feels like when the engine coughs for real at 1000 ft above ground and your hands shake on the yoke.' No, it doesn't. And frankly, there are experiences you can live without - especially those you may not survive (emotionally).
Society has always had a tendency to pathologize those who do not pursue a sexual relationship, treating them as lesser humans. (Especially) single women who were too happy in the medieval age? Witches that needed burning. The guy who preferred reading to dancing? A 'weirdo and a creep'. English has 'master' for the unmarried, 'incomplete' man, and 'mister' for the one who got married. And today? Those who are incapable or unwilling to participate in the dating scene are branded 'girlfailure' or 'incel' - with the latter group considered a walking security risk. Let's not add to the stigma by playing another tune for the 'oh, everyone must get out there' scene.
One difference between "AI chatbots" in this context and common flight simulator games is that someone else is listening in and has the actual control over the simulation. You're not alone in the same way that you are when pining over a character in a television series or books, or crashing a virtual jumbo jet into a skyscraper in MICROS~1 Flight Simulator.
Now ... why you want to police the decisions others make (or choose not to make) with their data ... it has a slightly paternalistic aspect to it, wouldn't you agree?
This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.
A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.
Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.
> This is the exact kind of thinking that leads to this in the first place. The idea that a human relationship is, in the end, just about what YOU can get from it. That it's just simply a black box with an input and output, and if it can provide the right outputs for your needs, then it's sufficient. This materialistic thinking of other people is a fundamentally catastrophic worldview.
> A meaningful relationship necessarily requires some element of giving, not just getting. The meaning comes from the exchange between two people, the feedback loop of give and take that leads to trust.
This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?
Secondly, for exchange to occur there must be a measure of inputs, outputs, and the assessment of their relative values. Any less effort or thought amounts to an unnecessary gamble. Both the giver and the intended beneficiary can only speak for their respective interests. They have no immediate knowledge of the other person's desires, and few individuals ever make their expectations clear and simple to account for.
> Not everyone needs a romantic relationship, but to think a chatbot could ever fulfill even 1% of the very fundamental human need of close relationships is dangerous thinking. At best, a chatbot can be a therapist or a sex toy. A one-way provider of some service, but never a relationship. If that's what is needed, then fine, but anything else is a slippery slope to self destruction.
A relationship is an expectation. And like all expectations, it is a conception of the mind. People can be in a relationship with anything, even figments of their imaginations, so long as they believe it and no contrary evidence arises to disprove it.
> This part seems all over the place. Firstly, why would an individual do something he/she has no expectation to benefit from or control in any way? Why would he/she cast away his/her agency for unpredictable outcomes and exposure to unnecessary and unconstrained risk?
It happens all the time. People sacrifice anything, everything, for no gain, all the time. It's called love. When you give everything for your family, your loved ones, your beliefs. It's what makes us human rather than calculating machines.
You can easily argue that the warm, fuzzy dopamine push you call 'love', triggered by positive interactions, is basically a "profit". Not all generated value is expressed in dollars.
"But love can be spontaneous and unconditional!" Yes, bodies are strange things. Aneuryisms also can be spontaneous, but are not considered intrinsically altruistic functionality to benefit humanity as a whole by removing an unfit specimen from the gene pool.
"Unconditional love" is not a rational design.
It's an emergent neural malfunction: a reward loop that continues to fire even when the cost/benefit analysis no longer makes sense. In psychiatry, extreme versions are classified (codependency, traumatic bonding, obsessional love); the milder versions get romanticised - because the dopamine feels meaningful, not because the outcomes are consistently good.
Remember: one of the significant narratives our culture has about love - Romeo and Juliet - involves a double suicide due to heartbreak and 'unconditional love'. But we focus on the balcony, and conveniently forget about the crypt.
You call it "love" when dopamine rewards self-selected sacrifices. A casino calls it "winning" when someone happens to hit the right slot machine. Both experiences feel profound, both rely on chance, and pursuing both can ruin you. Playing Tetris is just as blinking, attention-grabbing and loud as a slot machine, but much safer, with similar dopamine outcomes as compared to playing slot machines.
So ... why would a rational actor invest significant resources to hunt for a maybe dopamine hit called love when they can have a guaranteed 'companionship-simulation' dopamine hit immediately?
What do you think of the idea that people generally don't really like other people - that they do generally disappoint and cause suffering? (We are all imperfect, imperfectly getting along together, daily initiating and supporting acts of aggression against others.) And that, if the FakePeople™ experience were good enough, most people would probably opt out of engaging with others, similar to how most pilot experiences are on simulators?
Ultimately, that's the old Star Trek 'the holodeck would - in a realistic scenario - be the last invention of a civilization' argument.
I think that there will always be several strata of the population who will not be satisfied with FakePeople™, either because they are unable to interact with the system effectively due to cognitive or educational deficiencies, or because they hold the belief that RealPeople™ somehow have a hidden, non-measurable capacity (let's call it, for lack of a better term, a 'soul') that cannot be replicated or simulated - which makes it, ultimately, a theological question.
There is probably a tipping point at which the number of RealPeople™ enthusiasts is so low that reasonable relationship matching is no longer possible.
But I don't really think the problem is 'RealPeople™ are generally horrible'. I believe the problem is the availability and cost of relationships - in energy, time, money, and effort:
Most pilot experiences are on simulators because RealFlight is expensive, and the vast majority of pilots don't have an aircraft of their own (instead sharing one), which also limits potential flight hours (because when the weather is good, everyone wants to fly; no-one wants the plane up in bad conditions, because it's dangerous to the plane and - less important to the ownership group - the pilot).
Similarly: relationship-building takes planning effort, carries significant opportunity cost, consumes monetary resources, and has a low probability of the desired outcome (whatever that may be; it's just as true for the 'long-term, potentially married' relationship as it is for the one-night stand). That's incompatible with what society expects from a professional these days (e.g. work 8-16 hours a day, keep physically fit, save for old age and/or a potential health crisis, invest in your professional education; the list goes on).
Enter the AI model, which gives a pretty good simulation of a relationship for the cost of a monthly subway card, carries very little opportunity cost (simulation will stop for you at any time if something more important comes up), and needs no planning at all.
The risk of heartbreak (aka a potentially catastrophic psychiatric crisis; yes, such cases are common) and of 'hell is other people' don't even have to factor in to make the relationship simulator look like a good deal.
If people think 'relationship chatbots' are an issue, just wait for when - not if - someone builds a reasonably-well-working 'chatbot in a silicone-skin body' that's more than just a glorified sex doll. A physically present, touchable, cooking, homemaking, reasonably funny, randomly-sensual, and yes, sex-simulation-capable 'Joi' (and/or her male-looking counterpart) is probably the last invention of mankind.
You may be right, that RealPeople do seek RealInteraction.
But how many of each RealPerson's RealInteractions are actually that? It seems to me that lots of my own historical interactions were/are RealPersonProjections. RealPersonProjections and FakePerson interactions are pretty indistinguishable from within; over time, the characterisation of an interaction can change.
But, then again, perhaps the FakePerson interactions (with AI), will be a better developmental training ground than RealPersonProjections.
Ah - I'll leave it here - it's already too meta! Thanks for the exchange.
Agreed. This is the firewall rule against the 'wild west' climate of AI that I would have expected to kick in much earlier than this; and I wonder if any presidential edict can brute-force past this obstacle.
I think the cost of inference will massively reduce both the possible benefits AND the possible harms of an AI society. Even now, it's practically impossible to get ChatGPT to actually hard-parse a document instead of just reading the metadata (nor does it currently have any mechanism for truly watching a video).
That metadata has to come from somewhere; and the processes that create it also create heat, delay and expense.
I moved all my home LAN Windows machines to LTSC IoT in February; cost me about 90 euros for each license. You can buy individual licenses from online stores that will connect to MS and validate correctly. You'll have to install the MS app store from GitHub (!), and there are some other issues, but at least you're years away from what hit everyone else this October.