People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"
-
Who are these people? This is ridiculous.
I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chatbot writes in their web browser.
People even have romantic relationships with these things.
I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.
Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?
Very slippery slope if you ask me.
-
I mean, having it not help people commit suicide would be a good starting point for AI safety.
-
I don't agree with the argument that ChatGPT should "push back".
Me neither, but if they are being presented as "artificial people to chat with" they must.
I'd rather LLMs stay tools, not pretend people.
Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?
Some of the LLMs referred to are advertised as AI psychological help, so they must either act like psychologists (which they can't) or stop being allowed as digital therapists.
-
It will take another five seconds to find the same info using the web. Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars....
People will not be stopped from committing suicide just because a chatbot doesn't tell them the best way, unfortunately.
-
This is also a problem for search engines.
A problem that, while not solved, has been somewhat mitigated by including suicide prevention resources at the top of search results.
This is a bare minimum AI can't meet, and in conversation with an AI, vulnerable people can get more than just information: there are confirmed cases of the AI encouraging harmful behaviors up to and including suicide.
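To make that "bare minimum" concrete, here is a rough sketch in Python of the kind of guardrail being described; the phrase list, the crisis text, and the generate_reply hook are hypothetical placeholders for illustration, not how any real product works (real systems would use trained classifiers and human review, not a keyword list):

CRISIS_RESOURCE = (
    "If you're struggling or having thoughts of suicide, you can call or text 988 "
    "(the Suicide & Crisis Lifeline in the US) to talk to someone right now."
)

# Crude, purely illustrative phrase list.
SELF_HARM_PHRASES = ["kill myself", "end my life", "suicide", "bridges to jump"]

def needs_crisis_banner(message: str) -> bool:
    # Does the user's message contain an obvious self-harm phrase?
    text = message.lower()
    return any(phrase in text for phrase in SELF_HARM_PHRASES)

def respond(message: str, generate_reply) -> str:
    # generate_reply stands in for whatever produces the chatbot's normal answer.
    reply = generate_reply(message)
    if needs_crisis_banner(message):
        return CRISIS_RESOURCE + "\n\n" + reply
    return reply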
-
Yeah, totally happening as presented /s
-
These are the same people who Google stuff and then believe every conspiracy theory website they find telling them the 5G waves mind-control the pilots to release the chemtrails to top off the mind-control fluoride in the water supplies.
They honestly think the AI is a sentient superintelligence instead of Google 2: Electric Gargling Boogaloo.
-
I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.
But that's an inherently unhealthy relationship, especially for psychologically vulnerable people. If it doesn't push back, they're not in a relationship; they're getting themselves thrown back at them.
-
It will take another five seconds to find the same info using the web.
Good. Every additional hurdle between a suicidal person and the actual act saves lives.
Unless you also think we should censor the entire web and make it illegal to have any information about things that can hurt people, like knives, guns, stress, partners, cars....
This isn't a slippery slope. We can land on a reasonable middle ground.
People will not be stopped from committing suicide just because a chatbot doesn't tell them the best way, unfortunately.
You don't know that. Maybe some will.
The general sense I get from your comment is that you're thinking in very black-and-white terms. The world doesn't operate on all-or-nothing rules. There is always a balance between safety and practicality.
-
Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.
I don't think the problem is solvable if we keep treating the Speak & Spell like it's participating in this.
Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we're already playing their shell game.
But yes, the tool seems primed for enabling self-harm.