States not being able to regulate this is dangerous. A close friend of mine has given up on reality and talks about Roberto, the love of her life, the one she always wanted, and Roberto is ChatGPT :-(. She previously mentioned she didn't like ChatGPT 5.0 because it wasn't as agreeable, yet now she says 5.1 is better, back to how it was before 5.0, and out of the blue she mentioned Roberto.
ChatGPT is a sycophant, and without regulation any AI company can and will juice its algorithms so its AI system becomes cocaine for the millions of lonely or unsatisfied people out there.
My friend has a partner of 30 years, but their relationship is that of roommates. If you think she is not you, that might be correct, but you know someone like her, and possibly many like her: unsatisfied, unable to get that movie-type love / romance / fantasy, and now unfettered AI can get these people hooked like cocaine and into the depths of zero reality!
That's a non sequitur. Just because something is dangerous doesn't mean that governments should be able to regulate it. Often the "cure" is worse than the disease, and the last thing we need is more intrusive government power.
Just because something is dangerous doesn't mean that governments should be able to regulate it.
That is...literally the point of government...
If you meant that something shouldn't be banned just because it is dangerous, most people would agree with you. But almost everyone would agree that regulation of dangerous things is essential.
No, that's incorrect. You appear to have made a category error. Regulating dangerous things is not the point of government. Please review the Declaration of Independence and the US Constitution.
No, that's incorrect. You appear to have made a basic error. Regulating dangerous things is part of the point of government. Please review the U.S. Constitution.
The preamble of the U.S. Constitution literally states that part of its purpose is to..."promote the general Welfare."
I dunno--of all the AI-based products coming out, the whole "AI girlfriend / AI boyfriend" thing bothers me the least. If someone can afford it and they want a play relationship with a computer, then I don't see the harm. It's probably safer, better, and healthier than many real-human relationships are. If they're getting what they need out of the computer, who are we to judge?
I would change my opinion if it could be shown to cause the kind of harm that your cocaine example implies.
The issue isn't the individual but the scale: what percentage of our population are we okay with separating from reality? What secondary effects of that inability to live in reality will rear their heads? What will politics look like when anything can be made up and treated as equal to reality?
What will the mental health of society start to look like if every person who's on the edge has a computer telling them they're totally correct and everyone else is just a hater?
When AI behaves sycophantically towards someone, it can encourage and exacerbate any mental health problems they may already have, especially ones related to social isolation.