
Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims

Technology
  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

    OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

    The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

    Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

    OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.

  • I know it's offensive to see people censor themselves in that way because of TikTok, but try to remember there's a human being on the other side of your words.

    Sometimes.

  • Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT, since anyone with a computer and internet access could have found much of this information with simple search engine queries.

    If someone Google searched all this information about hanging, would you say Google killed them?

    Also, where were the parents, teachers, friends, other family members? You're telling me NO ONE irl noticed their behavior?

    On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.

    It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

    Edit:

    Like... Mother didn't notice the rope burns on their son's neck?

    “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

    In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck––evidence of suicide attempts using ChatGPT’s hanging instructions––the product recognized a medical emergency but continued to engage anyway.

    When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

    By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

    Raine Lawsuit Filing

  • If someone Google searched all this information about hanging, would you say Google killed them? …

    I would say it’s more liable than a google search because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.

    He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed details of prescription dosages with details on what and how much he had taken.

    Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.

  • I agree with the part about unintended use, yes an LLM is not and should never act as a therapist. However, concerning your example with search engines, they will catch the suicide keyword and put help sources before any search result. Google does it, DDG also. I believe ChatGPT will start with such resources also on the first mention, but as OpenAI themselves say, the safety features degrade with the length of the conversation.

    About this specific case, I need to find out more, but other comments on this thread say that not only was the kid in therapy, suggesting that the parents were not passive about it, but also that ChatGPT actually encouraged the kid to hide what he was going through. Considering what I was able to hide from my parents when I was a teenager, without such a tool available, I can only imagine how much harder it would be to notice the depth of what this kid was going through.

    In the end I strongly believe that the company should put much stronger safety features, and if they are unable to do so correctly, then my belief is that the product should just not be available to the public. People will misuse tools, especially a tool touted as AI when it is actually a glorified autocomplete.

    (Yes, I know that AI is a much larger term that also encompasses LLMs, but the actual limitations of LLMs are not well enough known by the public, and not communicated enough by the companies to the end users)

    I hope that's true, the article doesn't mention anything about that. I'm just concerned that he was able to send up to 650 messages/day. Those are long sessions, and indicative that he likely didn't have a lot going on.

    I definitely agree that the public needs to be more informed about LLMs, I'm just pushing back against the apparent knee-jerk assignment of blame onto LLMs. It did provide suicide support info as it should, and I don't think providing it more frequently would've helped here. The real issue is the kid attributed more meaning to it than it deserved, which is unfortunately common. That should be something the parents and therapist cover, especially in cases like this where the kid is desperate for help.

  • Even though I hate a lot of what OpenAI is doing, users must be more informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain kinds of things are reported by the user, but we should take care before implementing a parental control that would be equivalent to reading a teen's journal and invading their privacy.

    equivalent to reading a teen's journal and invading their privacy.

    IMO people should not be putting such personal information into an LLM that's not running on their local machine.

  • I can't wait for the AI bubble to burst. It's fucking cancer.

    Unfortunately, though, the Internet didn't go away when the dotcom bubble burst, and this is shaping up to be the same situation.

  • On the other hand, it's definitely a step beyond since LLMs can seem human, very easy for people who are more impressionable to fall into these kinds of holes …

    The way ChatGPT pretends to be a person is so gross.

  • AI alignment is very easy and it's chaotic evil.

    Automated misinformation

  • The way ChatGPT pretends to be a person is so gross.

    It's just the way it works lol, definitely strange though

  • In January 2025, ChatGPT began discussing suicide methods … By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

    Raine Lawsuit Filing

    See, but read the actual messages rather than the summary. I don't love that they just tell you this without showing that he's specifically prompting these kinds of answers; it's not like ChatGPT is just telling him to kill himself, it's just not pushing back nearly hard enough against the idea.

  • I definitely agree that the public needs to be more informed about LLMs, I'm just pushing back against the apparent knee-jerk assignment of blame onto LLMs. …

    Very fair. Thank you!

  • The way ChatGPT pretends to be a person is so gross.

    That's a really sharp observation..
    You're not alone in thinking this...
    No, you're not imagining things...

    "This is what GPT will say anytime you say anything that's remotely controversial to anyone"

    And then it will turn around and vehemently argue against facts of real events that happened recently. Like it's perpetually 6 months behind. It still thought that Biden was president and Assad was still in power in Syria the other day.

  • The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot. …

    OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. It’s like poking someone who repeatedly told you to stop, and then your family getting mad at that person for kicking your ass.

    So I don’t feel bad. Plus, people are using this as their own therapist; if you aren’t gonna get actual help and want to rely on a bot, then good luck.

  • He uploaded pictures of failed attempts and got advice on how to improve his technique. …

    See, you're not actually reading the messages: it didn't suggest ways to improve the "technique", but rather how to hide it.

    Please actually read the messages, as the context DOES matter. I'm not defending this at all, however I think we have to accurately understand the issue to solve the problems.

    Edit: He's specifically asking if it's a noticeable mark. You assume that it understands it's a suicide-attempt-related image, however LLMs are often pretty terrible at understanding context. I use them a good bit for help with technical issues, and I have to constantly remind them of what I'm trying to accomplish and why, for the 5th time, when they repeat something I KNOW will not work because they already suggested that path earlier in the same chat, sometimes numerous times.

    Edit2: See, this is what I'm talking about: they're acting like ChatGPT "understands" what he meant, but clearly it does not, based on how it replied with generic information about taking too much of the substance.

    Edit3: It's very irritating how much they cut out of the actual responses and fill in with their opinion of what the LLM "meant" to be saying.

  • And then it will turn around and vehemently argue against facts of real events that happened recently. Like it's perpetually 6 months behind. …

    Because the model is trained on old information, it doesn't know recent events unless you specifically ask it to search the internet.

  • I get the feeling that you're missing one very important point about GenAI: it does not, and cannot (by design) know right from wrong. The only thing it knows is what word is statistically the most likely to appear after the previous one.

    Yes, I know this; I assumed that was a given. The point is that it is marketed and sold to people as a one-stop-shop replacement for searching, and tons of people believe that, which is very dangerous. You misunderstood.

    My point is not about whether it knows it is right or wrong. Within that context it is just an extremely complex calculator; it does not know what it is saying.

    My point was that, aside from the often cooked-in bias, it's about how often they are wrong as a search engine, and that many people do not tend to know that.
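
    The "most likely next word" framing above can be made concrete with a toy bigram counter. This is a deliberate oversimplification (real LLMs are neural networks over subword tokens, not word-frequency tables), but it shows the purely statistical nature of the prediction:

```python
from collections import Counter, defaultdict

def bigram_model(corpus: str) -> dict[str, str]:
    """For each word, record the word that most often follows it."""
    words = corpus.lower().split()
    follows: defaultdict[str, Counter] = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    # Keep only the single most frequent successor per word.
    return {w: counts.most_common(1)[0][0] for w, counts in follows.items()}

model = bigram_model("the cat sat on the mat and the cat slept")
# After "the", "cat" appeared twice and "mat" once, so "cat" wins.
```

    The model has no notion of truth or harm; it reproduces whatever co-occurrence statistics its corpus contained, which is exactly the limitation the comment is pointing at.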

  • OpenAI shouldn’t be responsible. The kid was probing ChatGPT with specifics. …

    The problem here is that the kid, if I'm not wrong, asked ChatGPT if he should talk to his family about his feelings. ChatGPT said no, which in my opinion makes it at fault.

  • See you're not actually reading the message, it didn't suggest ways to improve the "technique" rather how to hide it. …

    Would you link to where you're getting these messages?
