Is it not a copyright infringement if you paint it yourself? Why is that the case? I thought it would just be that studios wouldn't care for the most part. Hasn't Disney gone after this type of personal project in the past?
It seems like people here have already made up their minds about how bad LLMs are. So just my anecdote here: it helped me out of some really dark places. Talking to humans (non-psychologists) had the opposite effect. Between a non-professional and an LLM, I'd pick the LLM for myself. Others should definitely seek help.
It's a matter of trust and incentives. How can you trust a program curated by an entity with no accountability? A therapist has a personal stake in helping patients. An LLM provider does not.
Seeking help should not be so taboo that people resort to doing it alone at night while no one is looking. That is society loudly saying "if you slip off the golden path even a little, your life is over". So many people resorting to LLMs for therapy is a symptom of a cultural problem; it's not a solution to the root issue.
I'll start with a direct response, because otherwise I suspect my answer may come across as too ... complex.
> How can I trust a therapist that has a financial incentive to keep me seeing them?
The direct response: I hope the commenter isn't fixated on this framing of the question, because I don't think it is a useful framing. [1] What is a better framing, then? I'm not going to give a simple answer. My answer is more like a process.
I suggest refining one's notion of trust to be "I trust Person A to do {X, Y, Z} because of what I know about them (their incentives, professional training, culture, etc)."
Shift one's focus and instead ask: "What aspects of my therapist are positives and/or lead me to trust their advice? What aspects are negative and/or lead me to not trust their advice?" Put this in writing and put some time into it.
One might also want to journal on "How will I know if therapy is helping? What are my goals?" By focusing on this, I think answers relating to "How much is my therapist helping?" will become easier to figure out.
[1] I think it is not useful both because it is loaded and because it is overly specific. Instead, focus on figuring out what actions one should take. From there, the various factors can slot in naturally.
Over the last five years I've been in and out of therapy, and two of my three therapists have "graduated" me at some point, stating that their practice didn't see permanent therapy as a good solution. I don't think all therapists view it this way.
Perhaps, then, the solution is for LLMs to recognize when a chat crosses a threshold and becomes talk of suicide.
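For what it's worth, here is a minimal sketch of what such a threshold check might look like; the risk scorer, the 0.7 threshold, and every name here are hypothetical placeholders, not any real provider's safeguard:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    score: float    # 0.0 (no signal) .. 1.0 (explicit intent)
    has_plan: bool  # did the user describe a concrete method?

ESCALATION_THRESHOLD = 0.7  # hypothetical tuning parameter

def assess_self_harm_risk(message: str) -> RiskAssessment:
    # Placeholder heuristic; a real system would use a trained classifier,
    # not naive keyword matching.
    lowered = message.lower()
    phrases = ["suicide", "kill myself", "end my life"]
    score = 1.0 if any(p in lowered for p in phrases) else 0.0
    return RiskAssessment(score=score, has_plan=score > 0 and "plan" in lowered)

def respond(message: str, llm_reply: str) -> str:
    risk = assess_self_harm_risk(message)
    if risk.has_plan or risk.score >= ESCALATION_THRESHOLD:
        # Divert from the normal reply to crisis resources and flag the
        # conversation for human review.
        return ("It sounds like you're going through something serious. "
                "Please consider reaching out to a crisis line or a professional.")
    return llm_reply
```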
When I was getting my Education degree, we were told, as teachers, to take talk of suicide by students extremely seriously. If a student talks about suicide, a professional supposedly asks, "Do you know how you're going to do it?" If there is an affirmative response, the danger is real.
LLMs are quite good at psychological questions. I've compared AI responses with those of therapy professionals and they matched about 80% of the time. It is easier to open up to an LLM and be frank (the fear of rejection or ridicule is gone). And most importantly, some people don't have access to a proper pool of therapists (and you still need to "match" with one who resonates with you), which makes LLMs a blessing. There is a place for both human and LLM psychological help.
I've heard this a lot, and personally I've had a lot of success with a prompt that explains some of my personality traits and asks the model to work through a stressful situation with me. The advantage over a therapist/coach is that it understands a lot of the subject matter and can help with the details.
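As a rough illustration of that structure (the traits and the situation below are invented for the example, not anyone's actual prompt):

```python
# Hypothetical example of a personality-context prompt; the traits and the
# situation are made up for illustration.
PERSONALITY_CONTEXT = """\
Some context about me: I tend to catastrophize under deadline pressure,
I avoid confrontation, and I process decisions better in writing.
"""

def build_prompt(situation: str) -> str:
    # Prepend the standing personality context to each stressful situation.
    return (
        PERSONALITY_CONTEXT
        + "\nGiven the above, help me work through this stressful situation "
        "step by step, and point out where my usual patterns might be "
        "distorting my read of it:\n\n" + situation
    )

print(build_prompt("My project slipped two weeks and I haven't told my manager."))
```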
I wonder if what we really need is some sort of supervised mode, where users chat with it but a trained professional reviews the transcripts and does a weekly/monthly/urgent check-in with them. This is how (some? most?) therapists work themselves: they take their notes to another therapist and go through them.
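A sketch of how that triage might route transcripts; every name and threshold here is invented for illustration, not a real product's workflow:

```python
# Hypothetical supervised-mode triage: automated risk flags go to a
# professional immediately, everything else lands in a periodic review queue.
from dataclasses import dataclass
from enum import Enum

class ReviewCadence(Enum):
    URGENT = "urgent"    # same-day human review
    WEEKLY = "weekly"
    MONTHLY = "monthly"

@dataclass
class Transcript:
    user_id: str
    messages: list[str]
    flagged: bool = False  # set by an automated risk check upstream

def triage(t: Transcript) -> ReviewCadence:
    if t.flagged:
        return ReviewCadence.URGENT
    # Heavier users get reviewed more often; the cutoff is arbitrary here.
    return ReviewCadence.WEEKLY if len(t.messages) > 20 else ReviewCadence.MONTHLY

queues: dict[ReviewCadence, list[Transcript]] = {c: [] for c in ReviewCadence}
t = Transcript(user_id="u123", messages=["..."] * 25)
queues[triage(t)].append(t)
print(triage(t))  # ReviewCadence.WEEKLY
```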
But they can fly a plane that detects Starlink signals (...I presume, I don't actually know how it works) and target the areas that have them.
But that's an escalation; it's better to talk about it first with the party in question, and if they don't answer there can be further legal recourse. International law and international lawsuits are a thing.
But this comment thread sounds as if reason and legal systems aren't working, and suppression and military action are the only recourse left. To a point I agree, but at the same time we (as humanity) are not (or should not be) savages.
Starlink terminals are quite directional. They are easily detected even from a standard vehicle.
> International law and international lawsuits are a thing
No, it's not a thing. International law operates on exactly the same "or what?" principle.
> but at the same time we (as humanity) are not (or should not be) savages.
Part of not being a savage is the ability not to give a f.ck about what the savages have written on their papers, which we call laws. Or to give a f.ck, depending on what is most convenient for us, the non-savages, from the standpoint of the "or what?" principle.
Didn’t Musk ask Brazil the same “or what” question and end up backing down? Musk and Starlink do legitimate business in Myanmar; why put it all at risk just to protect those 2500 subscriptions?
Why is everyone with a keyboard so adamant to “fight” when compliance was obviously the better business decision?
I'm thinking the same. But besides the illegitimate business, there are probably plenty of non-scammy terminals in the country that generate revenue.
Complying was the best option for Musk even if he doesn't care about Myanmar's local laws. It's a bad look to have your brand associated with supporting scam centers that defraud Americans, as was pointed out by the top US senator investigating the use of Starlink in the scam operations. That hits closer to home.
"Questions to ask when you think you need to finish something"? Sorry for the nitpick; typically my brain fills in missing words easily. Must be the lack of coffee here.
It was the giddy techbro optimism that struck me most: a "Hey, if all this quadrillion-dollar value comes to Google it surely will greatly benefit all of society", when we are effectively talking about a near-monopolistic advertising Moloch that is the epitome of surveillance capitalism.
Small quibble: Google is at the top of the food chain of surveillance capitalism, but Palantir, quietly aggregating every bit of available info to profile and make predictions about every person, is truly the apex predator of that food chain.
I have the same experience. I was pretty happy with Gemini 2.5 Pro and was barely using Claude 3.7. Now I am strictly using Claude 4 (mostly Sonnet). Especially on tasks that require multi-tool use, it nicely self-corrects, which I never noticed in 3.7 when I used it.
But it's different in a conversational sense as well. It might be the novelty, but I really enjoy it. I have had two instances where it had a very different take that kind of stuck with me.