Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
-
The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.
-
I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.
It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.
The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.
It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.
Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness, but out of fear of being left behind.
-
I can't wait for the AI bubble to burst. It's fucking cancer.
-
I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.
Nah, this is every parent ever.
-
I can't wait for the AI bubble to burst. It's fucking cancer.
Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want it to output, so it carries a bias. Since biases are not always correct, the data/information is useless.
-
I don't think it's their fault tbh. If he offed himself, he probably wanted to do it anyway, even without the influence of the bot.
If there's no message where the bot literally encouraged suicide, then they shouldn't have to pay out.
-
I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.
As for your question: I won't blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I'll try to keep it general:
Independent of the technology, what a parent can do is learn the behavior and communication patterns that can be signs of mental illness.
This is a big task, because the line between normal puberty and behavior that warrants action is thin to non-existent.
Overall, I wish for much better education for parents, both in terms of age-appropriate patterns and the kinds of help available to them depending on their country and culture.
They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?
-
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they're over 18.
For fuck's sake, it helped him write a suicide note.
-
They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?
In the very first post on this thread I pointed out that I'm not talking about this specific case at all.
-
Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they're over 18.
For fuck's sake, it helped him write a suicide note.
You can say "kill", you fucking moron.
-
You can say "kill", you fucking moron.
Oh hush, you big baby.
-
You can say "kill", you fucking moron.
I know it's offensive to see people censor themselves that way because of TikTok, but try to remember there's a human being on the other side of your words.
-
Unpopular opinion - the parents failed at parenting, and now they're getting a big payday and ruining the tool for everyone else.
-
He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.
If only he had parents
-
Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want it to output, so it carries a bias. Since biases are not always correct, the data/information is useless.
Yeah, I have some background in History, and ChatGPT will be objectively wrong about some things. Then I'll tell it it's wrong because of X, Y and Z, and the stupid thing will come back with, "Yes, you are right, X, Y, Z were a thing because...".
If I didn't know it was wrong, or if, say, a student took what it said at face value, then they too would now be wrong. Literal misinformation.
Not to mention the other times it's wrong, and not just ChatGPT, because these things will source the likes of Reddit. Recently, Brave AI claimed that Ironfox, the Firefox fork, was based on FF ESR. That's impossible, since Ironfox is a fork for Android. So why was it wrong? It quoted some random guy who said that on Reddit.
-
If only he had parents
Or a society
-
Hey ChatGPT, how about we make it so no one unalives themselves with your help, even if they're over 18.
For fuck's sake, it helped him write a suicide note.
Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).
-
I read some of that lawsuit. OpenAI murdered that kid.
-
Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).
I think OP knows this. It's an unsolvable problem. The conclusion from that might be that this tech shouldn't be two clicks away from every teen's, or even every person's, hands.
-
I read some of that lawsuit. OpenAI murdered that kid.
Can you share anything here please? I'm no fan of OpenAI but I haven't seen anything yet that makes me think ChatGPT was particularly relevant to this poor teen's actions.