CerryuDu's comments

> We all are slaves to capitalism

Yes, but informedly choosing your slave driver still has merit.

> Extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.

This is an interesting thought!


For those of us who consider programming a way to self-realize, the potential vanishing of programming as a lucrative job definitely seems threatening. However, I don't think it could disappear entirely. Professions replaced by machinery at a global scale continue to thrive locally, at small scales; they can be profitable and fulfilling for the providers, and they are sought after by a small (niche?) target group.

In other words, I don't need programming to remain mainstream for it to continue fulfilling and sustaining me.


> You either surf this wave or get drowned by it

I don't think so. Handcrafted everything and organic everything continue to exist; there is demand for them.

"Being relegated to a niche" is entirely possible, and that's fine with me.


> I still glue everything else together myself.

This is the core difference. Just "gluing things together" satisfies you.

It's unacceptable to me.

You don't want to own your code at the level I want to own mine.


> Not all of AI is consumer LLM chatbots

And as long as that was the case, not many people revolted.


I've tested the "emerging new thing", and it's utter trash.


yeah, me too:

> while maintaining perfect awareness

"awareness" my ass.

Awful.


> Criticizing anthropomorphic language is lazy, unconsidered, and juvenile.

To the contrary, it's one of the most important criticisms against AI (and its masters). The same criticism applies to a broader set of topics, too, of course; for example, evolution.

What you are missing is that the human experience is determined by meaning. Anthropomorphic language about, and by, AI attacks the core belief that human language use is attached to meaning, one way or another.

> Everybody knows LLMs are not alive and don't think, feel, want.

What you are missing is that this stuff works far more deeply than "knowing". Have you heard of body language, of meta-language? When you open ChatGPT, the fine print at the bottom says, "AI chatbot", but the large print at the top says, "How can I help?", "Where should we begin?", "What's on your mind today?"

Can't you see what a fucking LIE this is?

> We use this kind of language as a shorthand because talking about inherent motivations and activation parameters is incredibly clunky

Not at all. What you call "clunky" in fact exposes crucially important details; details that make the whole difference between a human, and a machine that talks like a human.

People who use that kind of language are either sloppy, genuinely dishonest, or underestimating the intellect of their audience.

> The question isn't why people think software has agency (they don't) but why you think everyone else is so much dumber than you that they believe software is actually alive.

Because people have committed suicide due to being enabled and encouraged by software talking like a sympathetic human?

Because people in our direct circles show unmistakable signs that they believe -- don't "think", but believe -- that AI is alive? "I've asked ChatGPT recently what the meaning of marriage is." An actual sentence I've heard.

Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?


> Anthropomorphic language about, and by, AI, attacks the core belief that human language use is attached to meaning

This is unsound. At best it's incompatible with an unfounded teleological stance, one that has never been universal.


> Because the motherfuckers behind public AI interfaces fine-tune them to be as human-like, as rewarding, as dopamine-inducing, as addictive, as possible?

And to think they don't even have ad-driven business models yet.


> AI makes people feel icky

Yes!

> it’s important for us to understand why we actually like or dislike something

Yes!

The primary reason we hate AI with a passion is that the companies behind it intentionally keep blurring the (now) super-sharp boundary between language use and thinking (and feeling). They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling. For the first time in the history of the human race, "talks entirely like a human" does not at all mean that it's a human. And instead of disabusing users of this -- natural, evolved, understandable -- mistake, these fucking companies double down on the delusion -- because it's addictive for users, and profitable for the companies.

The reason people feel icky about AI is that it talks like a human, but it's not human. No more explanation or rationalization is needed.

> so we can focus on any solutions

Sure; let's force all these companies by law to tune their models to sound distinctly non-human. Also enact strict laws that all AI-assisted output be conspicuously labeled as such. Do you think that will happen?


I believe that's the main reason why you dislike AI, but if you asked everyone who hates AI, many would come up with different main reasons. I doubt that solution would work very well, even though it's well intentioned; it's too easy to work around, especially with text. But at least it's direct. My main point is that we need to sidestep our emotional feelings about AI and present cold, hard legal or moral arguments where they exist, with specific changes requested, or be dismissed as just hating it emotionally.


> They actively exploit the -- natural, evolved -- inability of most people on Earth to distinguish language use from thinking and feeling

Maybe this will force humans to raise their game and start to exercise discrimination. Maybe education will change to emphasize this more. The ability to discern sense from pleasing rhetoric has always been a problem: every politician and advertiser takes advantage of this, and reams of philosophy have been written on it.


... not to mention that most of the time, what AI produces is unmitigated slop and factual mistakes, deliberately coated in dopamine-infusing brown-nosing. I refuse to let my position, even my profession, be debased into AI slop reviewing.

I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine. (It turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference.)

(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)

