AI chatbots are becoming popular alternatives to therapy. But they may worsen mental health crises, experts warn
-
Well yeah, and it looks like one of my friends went even more crazy because the AI kept encouraging his desires; on top of that he became some kind of weird, suspicious guy. It's creepy.
-
It's not better than nothing - it's worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.
Try this: instead of asking "I am thinking xyz", ask "my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.
So we know that in certain cases, using chatbots as a substitute for therapy can lead to increased suffering, increased risk of harm to self and others, and amplified symptoms of certain diagnoses. Does this mean we know it couldn't be helpful in certain cases? No. You've absorbed the exact same logic the corpos have with LLMs, which is "just throw it at everything", and you don't seem to notice you're applying it the same way they do.
We might have enough data at some point to assess what kinds of people could benefit from "chatbot therapy" or something along those lines. Don't get me wrong, I'd prefer we could provide more and better therapy/healthcare in general to people, and that we had fewer systemic issues for which therapy is just a bandage.
it's worse than nothing
Yes, in total. But not necessarily in particular. That's a big difference.
-
Also - a lot of US therapists have fundamentally useless and unhelpful approaches. CBT is treated as the hammer for all nails, even when it is completely ineffective for many people. It’s easy to train therapists on though, and sending someone home with a worksheet and a Program is an easy “fix.”
There’s also the undeniable influence of fuckers like Dr. Phil. A lot of therapists view their job as forcing you to be “normal” rather than understanding you as a human being.
We need more Maslow and Rogers, and a lot less Skinner.
Thank you. There's also the cost: I can't find a therapist for $40 a session, and even that would be prohibitively expensive. But people never talk about how therapy in a capitalist society is just a cash-cow business, and the easiest, most profitable methods are the most widespread. Most therapists, presumably including the experts being quoted on the dangers of talking to a language model, are ineffective for most people.
-
It's not better than nothing - it's worse than nothing. It is actively harmful, feeding psychosis, and your chat history will be sold at some point.
Try this: instead of asking "I am thinking xyz", ask "my friend thinks xyz, and I believe it to be wrong". And marvel at how it will tell you the exact opposite.
I'm fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.
I'm also very confident that there'll be a lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there's a ton of regulation and oversight.
-
I'm fairly confident that this could be solved by better trained and configured chatbots. Maybe as a supplementary device between in-person therapy sessions, too.
I'm also very confident that there'll be a lot of harm done until we get to that point. And probably after (for the sake of maximizing profits) unless there's a ton of regulation and oversight.
I'm not sure LLMs can do this. The reason is context poisoning. There would need to be an overseer system of some kind.
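For what it's worth, here's a minimal sketch of what such an overseer could look like, assuming two independent model calls. Everything here (call_model, overseer_approves, the prompts) is a hypothetical stand-in, not any real API. The key idea is that the reviewer never sees the accumulated chat history, so the context that poisoned the primary model can't poison it the same way:

```python
# Hypothetical sketch of an "overseer" layer, not a real API. The idea:
# a second, context-free model call reviews each draft reply, so the
# poisoned conversation history can't steer the reviewer too.

def call_model(system_prompt: str, message: str) -> str:
    """Stand-in for whatever LLM API you'd actually use."""
    raise NotImplementedError  # plug in a real client here

def overseer_approves(draft_reply: str) -> bool:
    # The overseer sees only the draft, never the chat history.
    verdict = call_model(
        system_prompt=(
            "You are a safety reviewer. Answer only SAFE or UNSAFE. "
            "Flag replies that validate delusions, encourage harm, or "
            "agree with the user merely to please them."
        ),
        message=draft_reply,
    )
    return verdict.strip().upper() == "SAFE"

def respond(history: list[str], user_message: str) -> str:
    draft = call_model(
        system_prompt="You are a supportive listener.",
        message="\n".join(history + [user_message]),
    )
    if overseer_approves(draft):
        return draft
    return "I'm not able to continue this conversation. Please consider reaching out to a professional."
```

The catch, of course, is that the reviewer is still an LLM with the same failure modes, so this moves the problem more than it solves it. That's where the regulation and oversight you mention would have to come in.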
-
So we know that in certain cases, using chatbots as a substitute for therapy can lead to increased suffering, increased risk of harm to self and others, and amplified symptoms of certain diagnoses. Does this mean we know it couldn't be helpful in certain cases? No. You've absorbed the exact same logic the corpos have with LLMs, which is "just throw it at everything", and you don't seem to notice you're applying it the same way they do.
We might have enough data at some point to assess what kinds of people could benefit from "chatbot therapy" or something along those lines. Don't get me wrong, I'd prefer we could provide more and better therapy/healthcare in general to people, and that we had fewer systemic issues for which therapy is just a bandage.
it's worse than nothing
Yes, in total. But not necessarily in particular. That's a big difference.
If you have a drink that creates a nice tingling sensation in some people and makes other people go crazy, the only sane thing to do is to take that drink off the market.
-
In a way it's similar to pseudoscientific alternative therapies: people will say it's awesome and harmless since it helps people, but since it's based on nothing, it can actually give people false memories and exacerbate disorders.
-
If you have a drink that creates a nice tingling sensation in some people and makes other people go crazy, the only sane thing to do is to take that drink off the market.
Yeah but that applies to social media as well. Or, idk, amphetamine. Or fucking weed. Even meditation. Which are all still there, some more regulated than others. But that's not what you're getting at, your point is AI chatbots = bad and I just don't agree with that.
-
Just pull yourself up by your bootstraps, right?
Honestly, attempting something impossible might be a better use of one's time than feeding into the AI hysteria. LLMs don't think, know, or understand. They just match patterns, and that's the one thing they're good for.
If you need human connection, you won't get that with an LLM. If you need a therapist, you certainly won't get that with an LLM.
-
The AI chatbots are extremely effective therapists. However, they're only graded on how long they can keep people talking.
LLMs are not effective therapists, because they're not therapists. They cannot treat anyone.
-
This is what happens when mental healthcare is not accessible to people.