People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

Technology
  • Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.

    I don't think the problem is solvable if we keep treating the Speak & Spell like it's participating in this.

    Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we're already playing their shell game.

    But yes, the tool seems primed for enabling self-harm.

    Like with every other thing there is: if you don't know how it basically works or what it even is, you maybe should not really use it.
    And especially not voice an opinion about it.
    Furthermore, every tool can be used for self-harm if used incorrectly. You shouldn't put a screwdriver in your eyes. Just knowing what a plane does won't make you an able pilot and will likely result in dire harm too.

    Not directed at you personally though.

  • Who are these people? This is ridiculous. 🙂

    I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chat bot is writing in their web browser.

    People even have romantic relationships with these things.

    I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

    Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?

    Very slippery slope if you ask me.

    I know a guy who has all kinds of theories about sentient life in the universe, but no one to talk to about them. It's because they're pretty obvious to anyone who took a philosophy class, and too out there for people who are not interested in such discussions. I tried to be a conversation partner for him, but it always ends up with awkward silence on my part and a monologue on his side at some point.

    So, he finally found a sentient being who always knows what to answer in the form of ChatGPT, and now they develop his ideas together. I don't think it's bad for him overall, but the last report I got from his conversations with the superbeing was that it told him to write a book about it because he's full of innovative ideas. I hope he lacks the persistence to actually write one.

  • Who are these people? This is ridiculous. 🙂 […]

    ChatGPT is phenomenal at coming up with ideas to test out. Good critical thinking is necessary, though… I've actually been able to make a lot of headway with a project I've been working on, because when I get stuck emotionally, I can talk to ChatGPT and it gets me through it, because it knows how I think and work best. It's scary how well it knows me… and I'm concerned about propaganda… but it's everywhere.

  • Who are these people? This is ridiculous. 🙂 […]

    ffs, this isn't chatgpt causing psychosis. It's schizo people being attracted like moths to chatgpt because it's very good at conversing in schizo.

  • Who are these people? This is ridiculous. 🙂 […]

    hi, they're going to be in psychosis regardless of what LLMs do. they aren't therapists and mustn't be treated as such. that goes for you too

  • These are the same people who Google stuff then believe every conspiracy theory website they find telling them the 5G waves mind control the pilots to release the chemtrails to top off the mind control fluoride in the water supplies.

    They honestly think the AI is a sentient super intelligence instead of the Google 2 electric gargling boogaloo.

    being a sucker isn't the same as being in psychosis

  • ffs, this isn't chatgpt causing psychosis. It's schizo people being attracted like moths to chatgpt because it's very good at conversing in schizo.

    indeed, though I could do without using disparaging language for one of the most vulnerable populations in the medical world…

  • Like with every other thing there is: if you don't know how it basically works or what it even is, you maybe should not really use it. […]

    Agreed, for sure.

    But if Costco modified their in-store sample booth policy and had their associates start offering free samples of bleach to children - when kids start drinking bleach we wouldn't blame the children; we wouldn't blame the bleach; we'd be mad at Costco.

  • But if Costco modified their in-store sample booth policy and had their associates start offering free samples of bleach to children […]

    Yes, but also no. Unmonitored(!) children are a special case: being clueless, easy victims is inherent to what they are.
    You can't lay any blame on them, so they make for an unfair comparison.
    Can't blame a blind person for not seeing you.

  • ffs, this isn't chatgpt causing psychosis. […]

    CGPT literally never gives up. You can give it an impossible problem to solve, and tell it you need to solve it, and it will never, ever stop trying. This is very dangerous for people who need to be told when to stop, or need to be disengaged with. CGPT will never disengage.

  • We were warned years ago!

  • Yes, but also no. Unmonitored(!) children are a special case. […]

    You're saying people with underdeveloped mental and social skills are somehow never analogous in any way at all to children? There are full-grown neurotypical and clinically healthy adults who are irresponsible enough to be analogous to children, but a literal case of someone trusting an untrustworthy authority due to a lapse of critical thinking skills … bears no resemblance at all to child-like behavior, at all?

    Wow. That's kind of some ivory tower stuff right there.

  • Who are these people? This is ridiculous. 🙂 […]

    Prompt: "Certain people can be harmed by current LLMs."

    OP: "Those people are stupid. Problem solved."

    I hope you can recognize that you are expressing a defense mechanism, not rational thought.

  • CGPT literally never gives up. […]

    i had not considered this and now I'm terrified

  • You're saying people with underdeveloped mental and social skills are somehow never analogous in any way at all to children? […]

    "Bears no resemblance" != the same

    If you're too lazy to think critically, that is child-like, yes. But you are still basically able to think. If you aren't, for whatever reason, then you can't be blamed for not knowing better.
    Otherwise I don't get your point.

  • Who are these people? This is ridiculous. 🙂 […]

    Wait, we need compulsory ID checks to visit adult content, but no checks for ChatGPT, which is there to help you plan your suicide?

    We are about to face an epidemic of AI catfishing, scams, and unhealthy relationships that corporations are pushing on us.

    This is like the atomic bomb, only with propaganda and psychological manipulation. The war for the human mind just got a shortcut, and the techbros are in charge.
