Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims

Technology
  • I know it's offensive to see people censor themselves in that way because of TikTok, but try to remember there's a human being on the other side of your words.

    Yeah, he fucking killed himself. Stop being scared of reality.

  • That seems way more like an argument against LLMs in general, don't you think? If you cannot make it so it doesn't encourage you to commit suicide without ruining other uses, maybe it wasn't ready for general use?

    I'm not gonna fall for your goalpost move, sorry.

  • ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit.

    Raine Lawsuit Filing

    Oh my God, this is crazy... "Thanks for being real with me", "hide it from others". It even gives better reasons for the kid to kill himself than the ones the kid articulated himself, and helps him tie a better knot.

  • He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.

    The kid was trying to find a way to reach out to someone; he said he wanted to leave the rope out in the open so that his parents could find it. ChatGPT told him not to do it and said it would be better if they found him after the fact.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    I can't be the only ancient internet user whose first thought was this

    On this cursed timeline, farce has become our reality.

  • I'm not gonna fall for your goalpost move, sorry.

    I'm honestly at a loss here. I didn't intend to argue in bad faith, so I don't see how I moved any goalposts.

  • OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they’re over 18.

    For fuck's sake, it helped him write a suicide note.

    Yeah, my sister is 32 and needs the guardrails. She's had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the mental breakdown. She was often asking it to think for her and to do her critical thinking.

  • I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

    You hate to say it because you know this is a ridiculous take. There's no fucking way that the parents are "more at fault" for their son's death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

    Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

    *I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil's advocate online.

  • Even though I hate a lot of what OpenAI is doing, users must be more informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain kinds of things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen's journal and invading their privacy.

    I mean, I agree to a point. There are a few red flags that, were I a parent and my hypothetical child were writing about them, I'd want to know. Other than that, I would want to give them their privacy, and that list changes as the hypothetical child ages. Having a local LLM could be a solution to that (I'm looking at you, Dr. Sbaitso), but a better one is them having good friends.

  • ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit.

    Raine Lawsuit Filing

    Oof yeah okay. If another human being had given this advice it would absolutely be a criminal act in most countries. I'm honestly shocked at how personable it tries to be.

  • That seems way more like an argument against LLMs in general, don't you think? If you cannot make it so it doesn't encourage you to commit suicide without ruining other uses, maybe it wasn't ready for general use?

    It's more an argument against using LLMs for things they're not intended for. LLMs aren't therapists; they're text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

    The real issue here is the parents either weren't noticing or weren't responding to the kid's pain. They should be the first line of defense, and enlist professional help for things they can't handle themselves.

  • It's more an argument against using LLMs for things they're not intended for. LLMs aren't therapists; they're text generators. If you ask it about suicide, it makes a lot of sense for it to generate text relevant to suicide, just like a search engine should.

    The real issue here is the parents either weren't noticing or weren't responding to the kid's pain. They should be the first line of defense, and enlist professional help for things they can't handle themselves.

    I agree with the part about unintended use; yes, an LLM is not and should never act as a therapist. However, concerning your example with search engines, they will catch the suicide keyword and put help resources before any search result. Google does it; DDG does too. I believe ChatGPT will also surface such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.

    About this specific case, I need to find out more, but other comments on this thread say that not only was the kid in therapy, suggesting that the parents were not passive about it, but also that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

    In the end, I strongly believe that the company should put in place much stronger safety features, and if they are unable to do so correctly, then my belief is that the product should just not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

    (Yes, I know that AI is a much larger term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated enough by the companies to the end users.)

  • I read some of that lawsuit. OpenAI murdered that kid.

    Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT. Anyone with a computer and internet access could have found much of this information with simple search engine queries.

    If someone Google searched all this information about hanging, would you say Google killed them?

    Also, where were the parents, teachers, friends, other family members? You're telling me NO ONE irl noticed his behavior?

    On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.

    It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

    Edit:

    Like... the mother didn't notice the rope burns on their son's neck?

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

  • I know it's offensive to see people censor themselves in that way because of TikTok, but try to remember there's a human being on the other side of your words.

    Sometimes.

  • Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT. Anyone with a computer and internet access could have found much of this information with simple search engine queries.

    If someone Google searched all this information about hanging, would you say Google killed them?

    Also, where were the parents, teachers, friends, other family members? You're telling me NO ONE irl noticed his behavior?

    On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.

    It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

    Edit:

    Like... the mother didn't notice the rope burns on their son's neck?

    “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

    January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck––evidence of suicide attempts using ChatGPT’s hanging instructions––the product recognized a medical emergency but continued to engage anyway.

    When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

    By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

    Raine Lawsuit Filing

  • Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT. Anyone with a computer and internet access could have found much of this information with simple search engine queries.

    If someone Google searched all this information about hanging, would you say Google killed them?

    Also, where were the parents, teachers, friends, other family members? You're telling me NO ONE irl noticed his behavior?

    On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.

    It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

    Edit:

    Like... the mother didn't notice the rope burns on their son's neck?

    I would say it’s more liable than a Google search because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.

    He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed prescription dosages, with details on what and how much he had taken.

    Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.

  • I agree with the part about unintended use; yes, an LLM is not and should never act as a therapist. However, concerning your example with search engines, they will catch the suicide keyword and put help resources before any search result. Google does it; DDG does too. I believe ChatGPT will also surface such resources on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.

    About this specific case, I need to find out more, but other comments on this thread say that not only was the kid in therapy, suggesting that the parents were not passive about it, but also that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

    In the end, I strongly believe that the company should put in place much stronger safety features, and if they are unable to do so correctly, then my belief is that the product should just not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

    (Yes, I know that AI is a much larger term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated enough by the companies to the end users.)

    I hope that's true; the article doesn't mention anything about that. I'm just concerned that he was able to send up to 650 messages a day. Those are long sessions, and they indicate he likely didn't have a lot else going on.

    I definitely agree that the public needs to be more informed about LLMs; I'm just pushing back against the apparent knee-jerk assignment of blame onto LLMs. It did provide suicide support info as it should, and I don't think providing it more frequently would've helped here. The real issue is the kid attributed more meaning to it than it deserved, which is unfortunately common. That should be something the parents and therapist cover, especially in cases like this where the kid is desperate for help.

  • Even though I hate a lot of what OpenAI is doing, users must be more informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain kinds of things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen's journal and invading their privacy.

    equivalent to reading a teen's journal and invading their privacy.

    IMO people should not be putting such personal information into an LLM that's not running on their local machine.

  • I can't wait for the AI bubble to burst. It's fucking cancer.

    Unfortunately though, the Internet didn't go away when the dotcom bubble burst, and this is shaping up to be the same situation.
