
Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims

Technology
  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

  • I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    I agree, but a chatbot still shouldn't help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

  • I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    I definitely do not agree.

    While they may not be entirely blameless, we have adults falling into this AI psychosis like the prominent OpenAI investor.

    What regulations are in place to help with this? What tools for parents? Isn't this being shoved into literally every product, everywhere? Actually pushed on them in schools?

    How does a parent monitor this? What exactly does a parent do? There may have been signs in his behavior that they could have seen, but could they have STOPPED this situation from unfolding the way it did?

    This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

    This is not the parents' fault, and seeing so many people declare that it is just feels like apologist AI hype.

  • I definitely do not agree.

    While they may not be entirely blameless, we have adults falling into this AI psychosis like the prominent OpenAI investor.

    What regulations are in place to help with this? What tools for parents? Isn't this being shoved into literally every product, everywhere? Actually pushed on them in schools?

    How does a parent monitor this? What exactly does a parent do? There may have been signs in his behavior that they could have seen, but could they have STOPPED this situation from unfolding the way it did?

    This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

    This is not the parents' fault, and seeing so many people declare that it is just feels like apologist AI hype.

    I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

    As for your question: I won't blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I'll try to keep it general:

    Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

    That's independent of the technology.

    This is a big task because the border between normal puberty and behavior that warrants action is slim to non-existent.

    Overall I wish for way better education for parents, both on age-appropriate patterns and on what kind of help is available to them depending on their country and culture.

  • I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

    As for your question: I won't blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I'll try to keep it general:

    Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

    That's independent of the technology.

    This is a big task because the border between normal puberty and behavior that warrants action is slim to non-existent.

    Overall I wish for way better education for parents, both on age-appropriate patterns and on what kind of help is available to them depending on their country and culture.

    I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

    I think you miss my point. I'm saying that adults, who should be capable of more mature thought and analysis, still fall victim to the manipulative thinking and dark patterns of AI. Meaning that children and teens obviously stand less of a chance.

    Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

    This is of course true for all parents in all situations. What I'm saying is that it is woefully inadequate to deal with the type and pervasiveness of the threat presented by AI in this situation.

  • I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

    I think you miss my point. I'm saying that adults, who should be capable of more mature thought and analysis, still fall victim to the manipulative thinking and dark patterns of AI. Meaning that children and teens obviously stand less of a chance.

    Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

    This is of course true for all parents in all situations. What I'm saying is that it is woefully inadequate to deal with the type and pervasiveness of the threat presented by AI in this situation.

    To your last point I fully agree!

    For the first point: that's how I understood you - what I failed to convey is that adults should fall victim more in cases like this, because parents can be a protective shield of a kind that grown-ups lack.

    Children on their own easily stand less of a chance, but they are very rarely on their own.

    And to be honest, I don't think it changes the resulting need for action, both in general and specifically for language-based bots, from a legal as well as an educational point of view.

  • I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn't necessarily absolve the parents, but it does add a layer of nuance to the discussion.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.

  • I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

    The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

    It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

    Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    I can't wait for the AI bubble to burst. It's fucking cancer.

  • I hate to say it but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    Nah, this is every parent ever.

  • I can't wait for the AI bubble to burst. It's fucking cancer.

    Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever bias you want it to have. Since biases are not always correct, the data/information is useless.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    I don't think it's their fault tbh. If he offed himself, he probably wanted to do it anyway, even without the influence of the bot.

    If there's no message where the bot literally encouraged suicide, then they shouldn't have to pay out.

  • I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.

    As for your question: I won't blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I'll try to keep it general:

    Independent of technology, what a parent can do is learn behavior and communication patterns that can be signs of mental illness.

    That's independent of the technology.

    This is a big task because the border between normal puberty and behavior that warrants action is slim to non-existent.

    Overall I wish for way better education for parents, both on age-appropriate patterns and on what kind of help is available to them depending on their country and culture.

    They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing Open AI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they're over 18.

    For fuck's sake, it helped him write a suicide note.

  • They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?

    In the very first post on this thread I pointed out that I'm not talking about this specific case at all.

  • Open AI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they're over 18.

    For fuck's sake, it helped him write a suicide note.

    You can say kill, you fucking moron.

  • You can say kill, you fucking moron.

    Oh hush, you big baby.

  • You can say kill, you fucking moron.

    I know it's offensive to see people censor themselves in that way because of TikTok, but try to remember there's a human being on the other side of your words.
