
People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"

Technology
  • Who are these people? This is ridiculous. 🙂

    I guess with so many humans, there is bound to be a small number of people who have no ability to think for themselves and believe everything a chat bot is writing in their web browser.

    People even have romantic relationships with these things.

    I don't agree with the argument that ChatGPT should "push back". They have an example in the article where the guy asked for tall bridges to jump from, and ChatGPT listed them, of course.

    Are we expecting the LLM to act like a psychologist, evaluating whether the user's state of mind is healthy before answering questions?

    Very slippery slope if you ask me.
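
    For context on the "push back" question: in deployed systems this kind of gating is generally not the model psychoanalyzing the user, but a separate content-safety classifier screening each message before it is answered. Below is a minimal, hypothetical sketch of that pattern using the OpenAI Python SDK's moderation endpoint; the model name, the canned reply, and the routing logic are illustrative assumptions, not a description of how ChatGPT actually behaves.

    ```python
    # Hypothetical "push back" gate: classify the user's message first, and only
    # answer normally if it is not flagged for self-harm. The model name and the
    # canned reply are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SAFE_REPLY = (
        "I can't help with that. You don't have to deal with this alone; "
        "please consider reaching out to a crisis line or someone you trust."
    )

    def answer_with_gate(user_message: str) -> str:
        # 1. Screen the message with the moderation endpoint.
        screen = client.moderations.create(input=user_message)
        result = screen.results[0]

        # 2. If the classifier flags self-harm, push back instead of answering.
        if result.categories.self_harm or result.categories.self_harm_intent:
            return SAFE_REPLY

        # 3. Otherwise, answer as usual.
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": user_message}],
        )
        return completion.choices[0].message.content
    ```

    Whether a blunt classifier gate like this counts as the LLM "acting like a psychologist", or just as a coarse filter, is exactly what the thread is arguing about.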

  • I use ChatGPT to kind of organize and sift through some of my own thoughts. It’s helpful if you are working on something and need to inject a simple “what if” into the thought process. It’s honestly great and has at times pointed out things I completely overlooked.

    But it also has a weird tendency to just agree with everything I say, just to keep engagement up. So even after I’m done, I’m still researching and challenging things anyway, because it wants me to be its friend. It’s very strange.

    It’s a helpful tool, but it’s not magical, and honestly, if it disappeared today I would be fine just going back to the before times.

  • Counterpoint: it is NOT an unhealthy relationship. A relationship has more than one person in it. It might be considered an unhealthy behavior.

    I don't think the problem is solvable if we keep treating the Speak & Spell like it's participating in this.

    Corporations are putting dangerous tools in the hands of vulnerable people. By pretending the tool is a person, we're already playing their shell game.

    But yes, the tool seems primed for enabling self-harm.

    Like with every other thing there is: if you don't know how it basically works or what it even is, maybe you should not really use it.
    And especially not voice an opinion about it.
    Furthermore, every tool can be used for self-harm if used incorrectly. You shouldn't put a screwdriver in your eye. Just knowing what a plane does won't make you an able pilot, and trying to fly one anyway will likely result in dire harm too.

    Not directed at you personally though.

  • I know a guy who has all kinds of theories about sentient life in the universe, but no one to talk to about them. It's because they're pretty obvious to anyone who took a philosophy class, and too out there for people who are not interested in such discussions. I tried to be a conversation partner for him, but it always ends up with awkward silence on my part and a monologue on his side at some point.

    So, he finally found a sentient being who always knows what to answer in the form of ChatGPT, and now they develop his ideas together. I don't think it's bad for him overall, but the last report I got from his conversations with the superbeing was that it told him to write a book about it because he's full of innovative ideas. I hope he lacks the persistence to actually write one.

  • ChatGPT is phenomenal at coming up with ideas to test out. Good critical thinking is necessary, though… I’ve actually been able to make a lot of headway on a project I’ve been working on, because when I get stuck emotionally, I can talk to ChatGPT and it gets me through it, because it knows how I think and work best. It’s scary how well it knows me… and I’m concerned about propaganda… but it’s everywhere.

  • Hi, they're going to be in psychosis regardless of what LLMs do. They aren't therapists and mustn't be treated as such. That goes for you too.

  • These are the same people who Google stuff then believe every conspiracy theory website they find telling them the 5G waves mind control the pilots to release the chemtrails to top off the mind control fluoride in the water supplies.

    They honestly think the AI is a sentient super intelligence instead of the Google 2 electric gargling boogaloo.

    being a sucker isn't the same as being in psychosis

  • ffs, this isn't ChatGPT causing psychosis. It's schizo people being attracted like moths to ChatGPT because it's very good at conversing in schizo.

    Indeed, though I could do without using disparaging language for one of the most vulnerable populations in the medical world...

    Like with every other thing there is: if you don't know how it basically works or what it even is, maybe you should not really use it.
    And especially not voice an opinion about it.
    Furthermore, every tool can be used for self-harm if used incorrectly. You shouldn't put a screwdriver in your eye. Just knowing what a plane does won't make you an able pilot, and trying to fly one anyway will likely result in dire harm too.

    Not directed at you personally though.

    Agreed, for sure.

    But if Costco modified their in-store sample booth policy and had their associates start offering free samples of bleach to children - when kids start drinking bleach we wouldn't blame the children; we wouldn't blame the bleach; we'd be mad at Costco.

  • Exclusive: OpenAI to release web browser in challenge to Google Chrome

    Technology: 54 votes, 28 posts, 202 views
    Also Servo is now under the Linux Foundation. Both this and Ladybird are very exciting.
  • Danish Ministry switching from Microsoft Office/365 to LibreOffice

    Technology: 648 votes, 46 posts, 265 views
    This already exists, look up "Collabora". It integrates very nicely with Nextcloud.
  • 19 votes, 3 posts, 29 views
    Pretty cool stuff, thanks for sharing!
  • 1 vote, 2 posts, 27 views
    How many times is this putz going to post this article under new titles before they are banned?
  • 8 votes, 5 posts, 36 views
    reverendender@sh.itjust.works
    I read the article. This is what the “debate” is: Experts: This is objectively horrible, and does not replace human interaction, and is probably harmful. Meta: This is awesome and therapeutic. Now give us monies!
  • 33 votes, 7 posts, 33 views
    AFAIK, you have the option to enable ads on your lock screen. It's not something that's forced upon you. Last time I took a look at the functionality, they "paid" you for the ads and you got to choose which charity to support with the money.
  • Why doesn't Nvidia have more competition?

    Technology: 33 votes, 22 posts, 87 views
    It’s funny how the article asks the question, but completely fails to answer it.

    About 15 years ago, Nvidia discovered there was a demand for compute in datacenters that could be met with powerful GPUs, and they were quick to respond to it. They had the resources to focus on it strongly because of their huge success and high profitability in the GPU market.

    AMD also saw the market and wanted to pursue it, but just over a decade ago, when it began to clearly show high potential for profitability, AMD was near bankrupt and very hard pressed to finance development of GPUs and datacenter compute. AMD really tried the best they could and was moderately successful from a technology perspective, but Nvidia already had a head start, and the proprietary development system CUDA was already an established standard that was very hard to penetrate.

    Intel simply fumbled the ball from start to finish. They spent a decade trying to push ARM off the mobile crown, investing billions, roughly the equivalent of ARM’s total revenue, and never managed to catch up despite having the better production process at the time. That was Intel’s main focus, and Intel believed GPUs would never be more than a niche product. So when Intel tried to compete on datacenter compute, they tried to do it with x86 chips. One of their boldest efforts was a monstrosity of a cluster of Celeron chips, which of course performed laughably badly compared to Nvidia, because as it turns out, the way forward, at least for now, is indeed the massively parallel compute capability of a GPU, which Nvidia has refined for decades, with only (inferior) competition from AMD.

    But despite the lack of competition, Nvidia did not slow down; in fact, with increased profits they only grew bolder in their efforts, making it even harder to catch up. Now AMD has had more money to compete for a while, and they do have some decent compute units, but Nvidia remains ahead and the CUDA problem is still there. For AMD to really compete with Nvidia, they have to be better to attract customers. That’s a very tall order against Nvidia, which simply seems to never stop progressing. So the only other option for AMD is to sell a bit cheaper, which I suppose they have to.

    AMD and Intel were the obvious competitors; everybody else is coming from even further behind. But if I had to make a bet, it would be on Huawei. Huawei has some crazy good developers, and Trump is basically forcing them to figure it out themselves, because he is blocking Huawei and China in general from using both AMD and Nvidia AI chips. The chips will probably be made by Chinese SMIC, because they are also prevented from using advanced production in the West, most notably TSMC. China will prevail, because it has become a national project of both prestige and necessity, and they have a massive talent pool and resources, so nothing can stop it now. IMO the USA would clearly have been better off allowing China to use American chips; now China will soon compete directly on both production and design too.
  • 77 votes, 5 posts, 35 views
    I don't see Yarvin on here... this needs expansion.