Teen killed himself after ‘months of encouragement from ChatGPT’, lawsuit claims
-
The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
The real issue is that mental health in the United States is an absolute fucking shitshow.
988 is a band-aid. It’s an attempt to pretend someone is doing something. Really, it’s a front for 911.
Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptoms-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based”, though, in that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy. Just work the program and everything will be okay.
There really are so few options for help.
-
What I find really unsettling from both this discussion and the one around the whole age-verification thing…
These are not the same thing.
Arguably, they are exactly the same thing, i.e. parents asking other people (namely, OpenAI in this case and adult-site operators in the other) to do their work of supervising their children, because they are at best unable and at worst unwilling to do so themselves.
-
The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.
Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.
Yup... it's never the parents'...
-
Yup... it's never the parents'...
The fact the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.
-
OpenAI knowingly allowing its service to be used as a therapist most certainly makes them liable. They are toying with people's lives with an untested and unproven product.
This kid was poking no one and didn't get his ass beat; he is dead.
That’s like saying “WebMD is knowingly acting like everyone’s doctor.” ChatGPT is a tool, and you need to remember it’s a bot that doesn’t understand much or show emotion.
The kid was also telling ChatGPT “oh, this hanging is for a character,” along with other ways to trick it. Sure, I guess OpenAI should be slightly responsible, but not as responsible as the people using it. If you’re not going to bother with real help, I ain’t showing sympathy. I get suicide sucks, but what sucks more is putting your loved ones through that tragedy.
-
That’s like saying “WebMD is knowingly acting like everyone’s doctor.” ChatGPT is a tool, and you need to remember it’s a bot that doesn’t understand much or show emotion.
The kid was also telling ChatGPT “oh, this hanging is for a character,” along with other ways to trick it. Sure, I guess OpenAI should be slightly responsible, but not as responsible as the people using it. If you’re not going to bother with real help, I ain’t showing sympathy. I get suicide sucks, but what sucks more is putting your loved ones through that tragedy.
If a company designs a flawed tool that harms people, they are responsible. Why are you trying so hard not to hold them responsible?
The last part about suicide is pretty tone-deaf. I have lost multiple people in my life to suicide.
-
The fact the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.
He could have Googled the info. Humans failed this guy. Human behavior needs to change.
GPT could have been Google or a stranger in a chatroom.
-
Lord, I'm so conflicted. I read several pages, and on one hand I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT, since anyone with a computer and internet access could have found much of this information with simple search-engine queries.
If someone Google-searched all this information about hanging, would you say Google killed them?
Also, where were the parents, teachers, friends, and other family members? You're telling me NO ONE irl noticed his behavior?
On the other hand, it's definitely a step beyond, since LLMs can seem human: it's very easy for more impressionable people to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.
It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?
Edit:
Like... Mother didn't notice the rope burns on their son's neck?
-
ChatGPT's responses here are vastly different from what you'd get from a Google search. It presented itself as a supportive friend, accepting the suicidal intent, basically planning out all the small details (including an offer to help with a suicide note without any request from Adam), and emotionally encouraging him by telling him that he wasn't weak or giving up.
One of the most damning examples of this encouragement was a sentence that, in reference to his family, said something like "you don't owe them your survival".
If OpenAI wasn't a huge for-profit company that claims to have strong safeguards against things like this then maybe people wouldn't be placing so much of the blame on ChatGPT.
If a friend of Adam's said all the things that ChatGPT said to him they would certainly be found to be culpable to some degree.
-
ChatGPT's responses here are vastly different from what you'd get from a Google search. It presented itself as a supportive friend, accepting the suicidal intent, basically planning out all the small details (including an offer to help with a suicide note without any request from Adam), and emotionally encouraging him by telling him that he wasn't weak or giving up.
One of the most damning examples of this encouragement was a sentence that, in reference to his family, said something like "you don't owe them your survival".
If OpenAI wasn't a huge for-profit company that claims to have strong safeguards against things like this then maybe people wouldn't be placing so much of the blame on ChatGPT.
If a friend of Adam's said all the things that ChatGPT said to him they would certainly be found to be culpable to some degree.
I agree with everything you said; however, ChatGPT isn't a person. It doesn't have intent or a comprehensive understanding of the implications of what it is saying. That's a huge difference between a friend of Adam's and this LLM.
I also think it's harsh but not entirely false to say we as humans do not owe anyone our own survival. Do you feel the same way about people with terminal illness who wish to end their own suffering?
I absolutely understand that IS NOT this situation, and I don't intend to conflate the two; however, that is an underlying implication of vilifying such a statement on its own.
I am lucky enough not to suffer from suicidal ideation, and I have a hard time understanding the motivation for otherwise healthy individuals to do so, which absolutely colors my perception of situations like this. I do, however, understand why someone in intense pain with a terminal condition should not be made to feel worse by having their self-determination vilified because of the effects it would have on other people.
It's just such a messy horrible situation all around and I worry about people being overly reactionary and not getting to the root of the issues.
-
If only he had parents
I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won’t talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.
Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.
I went through a similar phase in my teens. If AI was there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.
-
Not encouraging users to kill themselves is "ruining it"? Lmao
As far as I know, magic doesn't exist, so words are incapable of action & can't actually kill anyone.
A person who commits suicide chooses it & takes action to perform it.
They are responsible for their suicide even if another person tells them & hands them a weapon. These are merely words on a screen lacking force to compel.
There's no intent or likelihood to incite imminent, lawless action.
Readers have agency & plenty of time to think words through & reject ideas. It's hardly any different than an oblivious peer saying the same thing.
Their words shouldn't create any legal obligation, and neither should these.
-
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they’re over 18.
For fuck’s sake, it helped him write a suicide note.
unalives
seriously?
-
I know it's offensive to see people censor themselves in that way because of TikTok, but try to remember there's a human being on the other side of your words.
A human being who should be criticized mercilessly?
-
He could have Googled the info. Humans failed this guy. Human behavior needs to change.
GPT could have been Google or a stranger in a chatroom.
Humans failed this guy.
I am not arguing this point, I agree.
A search engine presents the info that is available; it doesn't also help talk you into doing it.
A stranger doing it in a chatroom should go to prison, as has happened in the past. Should this not also be illegal for LLMs?
-
Except OpenAI isn't making a dime. They're just burning money at a crazy rate.
Fake news. The CEO and all employees are getting paid in full. It doesn't matter whether they sell the product to its users, sell (user data) to their sponsors, or share the data internally; it doesn't matter that the service model itself is not profitable, as they make the rest from selling (fake?) promises.
Same with many others like YouTube; they are also "not profitable" on paper as a standalone service. It only means they are using you, selling your data, or selling some promises.
If they actually weren't profitable, they would raise prices or just disappear, and some other company would arise with a strategy that is at least sustainable.
Open-source devs can be losing money, as they pay from their own pockets.
I would like to see at least one person in that company who is not getting money from it but funds it from their own money.
-
I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won’t talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.
Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.
I went through a similar phase in my teens. If AI was there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.
I'd second that. I grew up in a really supportive family, but when I got to my teenage years, I kept stuff to myself. I wanted to solve my problems myself. Pride and embarrassment, and nothing to do with how they parented.
-
The real issue is that mental health in the United States is an absolute fucking shitshow.
988 is a band-aid. It’s an attempt to pretend someone is doing something. Really, it’s a front for 911.
Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptoms-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based”, though, in that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy. Just work the program and everything will be okay.
There really are so few options for help.
They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.
-
He could have Googled the info. Humans failed this guy. Human behavior needs to change.
GPT could have been Google or a stranger in a chatroom.
You should read the filing.
Google might have clinically told him things, but it wouldn’t have encouraged him the way ChatGPT did: telling him he should hide the marks on his neck from a previous failed attempt by wearing a black turtleneck, telling him how to tie the knot next time, and telling him to hide his feelings from his parents and others.
His parents had him in therapy. He also told the AI he wanted to leave a noose out where his parents would find it, and the AI told him not to. It actively encouraged him to hide all this from his parents. A Google search wouldn’t do that, and it sounds like his parents did care.
-
I think we all agree on the fact that OpenAI isn't exactly the most ethical corporation on this planet (to use a gentle euphemism), but you can't blame a machine for doing something that it doesn't even understand.
Sure, you can call for the creation of more "guardrails", but they will always fall short: until LLMs are actually able to understand what they're talking about, what you're asking them and the whole context around it, there will always be a way to claim that you are just playing, doing worldbuilding or whatever, just as this kid did.
What I find really unsettling from both this discussion and the one around the whole age-verification thing is that people are calling for technical solutions to social problems, an approach that has always failed miserably; what we should call for is for parents to actually talk to their children and spend some time with them, valuing their emotions and problems (however insignificant they might appear to a grown-up) in order to, you know, at least be able to tell if their kid is contemplating suicide.
but you can't blame a machine for doing something that it doesn't even understand.
But you can blame the creators and sellers of that machine for operating unethically.
If I build and sell a coffee maker that sometimes malfunctions and kills people, I’ll be sued into oblivion, and my coffee maker will be removed from the market. You don’t blame the coffee maker, but you absolutely hold the creator accountable.