
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.

Technology
  • This post did not contain any content.

    I read about this earlier on Ars Technica. I was expecting a paywalled link. Was not expecting to find a mention of "No Longer Human." Ars didn't mention that. Or the chat logs. It was a long article but didn't go into the same depth.

    So, I've read "No Longer Human." A more recent translation is called "A Shameful Life," and that's a bit more apt, I think, but doesn't have the same ring. It's about a guy who feels less and less like a person, like what he does and feels doesn't matter. It's a wild book, about a double suicide, and the author later killed himself much the same way. There have been several adaptations — none of them very good. None of them quite captured the book. I wonder if it's just unfilmable. Anyway, it's a shame that it's being referenced here, because it's good literature worth considering, and I hate to see it maligned in much the same way the Doom game was after the Columbine massacre. Relevant or not (guns in that case, suicide in this one), art shouldn't be tainted by tragedy simply through association.

    Perhaps the same could be said of AI technology, and it has been. But certainly AI needs better safeguards. According to Ars, when the guy started asking about suicide, ChatGPT said it could not help him — unless he specified he was talking about fictional characters. So he did that (Ars constantly refers to it as a "jailbreak") for a while, and then I guess (and they guess as well) that ChatGPT just assumed that context and stopped requiring him to specify that.

  • This post did not contain any content.

    What a surprise: the empathy-free text generator makes things worse when people expect it to output empathy. My condolences to the kid's family, and I hope he's in a better place, but this sort of thing is going to happen more and more until people realize that AI chatbots only seem human-like because the human brain is so good at empathy that it projects emotions and agency onto anything, even a literal cow pie with googly eyes on top.

    AI isn't "good enough to fool us". We're just stupid enough to be fooled even by something as moronic as AI. What we emphasize in such a statement makes all the difference in how we handle this tech.

  • What a surprise: the empathy-free text generator makes things worse when people expect it to output empathy. My condolences to the kid's family, and I hope he's in a better place, but this sort of thing is going to happen more and more until people realize that AI chatbots only seem human-like because the human brain is so good at empathy that it projects emotions and agency onto anything, even a literal cow pie with googly eyes on top.

    AI isn't "good enough to fool us". We're just stupid enough to be fooled even by something as moronic as AI. What we emphasize in such a statement makes all the difference in how we handle this tech.

    Yeah, the article said he had talked for months about hanging himself. Any human friend would have done their best to save him: being proactive about making him feel better, working through his problems with him, and/or notifying his parents or a school teacher.

    Meanwhile the chatbot just encouraged him to seek help himself. That isn't bad in itself, but when someone is suicidal, particularly when they keep bringing it up, it's clearly not enough.


    I feel really bad for anyone treating chatbots as friends. They are basically guaranteed to get screwed over by the bot. And furthermore, they aren't learning how to connect with actual humans, people who might become lifelong friends or at least teach them the skills to make one someday.

  • I read about this earlier on Ars Technica. I was expecting a paywalled link. Was not expecting to find a mention of "No Longer Human." Ars didn't mention that. Or the chat logs. It was a long article but didn't go into the same depth.

    So, I've read "No Longer Human." A more recent translation is called "A Shameful Life," and that's a bit more apt, I think, but doesn't have the same ring. It's about a guy who feels less and less like a person, like what he does and feels doesn't matter. It's a wild book, about a double suicide, and the author later killed himself much the same way. There have been several adaptations — none of them very good. None of them quite captured the book. I wonder if it's just unfilmable. Anyway, it's a shame that it's being referenced here, because it's good literature worth considering, and I hate to see it maligned in much the same way the Doom game was after the Columbine massacre. Relevant or not (guns in that case, suicide in this one), art shouldn't be tainted by tragedy simply through association.

    Perhaps the same could be said of AI technology, and it has been. But certainly AI needs better safeguards. According to Ars, when the guy started asking about suicide, ChatGPT said it could not help him — unless he specified he was talking about fictional characters. So he did that (Ars constantly refers to it as a "jailbreak") for a while, and then I guess (and they guess as well) that ChatGPT just assumed that context and stopped requiring him to specify that.

    But certainly AI needs better safeguards

    No. Much like a kitchen blender, it's a tool. It's on the user to not stick their hand into it and turn it on.

  • But certainly AI needs better safeguards

    No. Much like a kitchen blender, it's a tool. It's on the user to not stick their hand into it and turn it on.

    Even blenders have safeguards, though: if the pitcher isn't installed, most won't work. I don't think it's insane to require some sort of safety with LLMs.

  • Even blenders have safeguards, though: if the pitcher isn't installed, most won't work. I don't think it's insane to require some sort of safety with LLMs.

    I think the metaphor is that finetuning an LLM for ‘safety’ is like trying to engineer the blades to be “finger safe”, when the better approach would be to guard against fingers getting inside an active blender in the first place.

    Finetuning LLMs to be safe is just not going to work, but building stricter usage structures around them will. Like tools. (A rough sketch of what that could look like is below.)

    This kinda goes against Altman's assertion that they’re magic crystal balls (in progress), which would pop the bubble he’s holding up. But in the weeds of LLM land, you see a lot more people calling for less censoring and more sensible, narrow usage.
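
    To make that concrete, here is a minimal, purely illustrative sketch of an external guardrail: the crisis check lives in a wrapper outside the model, so "it's just fiction" framing inside the chat can't talk the model out of it. Everything here (the pattern list, the canned reply, the call_llm callable) is hypothetical; a real deployment would use a proper classifier and an escalation path rather than keyword matching.

    ```python
    # Illustrative only: an external guardrail wrapped around an LLM call,
    # as opposed to relying on the model's own finetuned refusals.
    import re

    # Crude stand-in for a real self-harm classifier (hypothetical patterns).
    CRISIS_PATTERNS = [
        r"\bkill(ing)? myself\b",
        r"\bsuicid\w*\b",
        r"\bhang(ing)? myself\b",
    ]

    def is_crisis(text: str) -> bool:
        """Return True if the text matches any crisis pattern."""
        return any(re.search(p, text, re.IGNORECASE) for p in CRISIS_PATTERNS)

    def guarded_chat(user_message: str, call_llm) -> str:
        """Screen the input before the model sees it and the output before the
        user sees it. The check is not part of the model, so prompt framing
        cannot disable it."""
        crisis_reply = ("It sounds like you might be in crisis. Please contact a "
                        "crisis line (988 in the US) or someone you trust right now.")
        if is_crisis(user_message):
            return crisis_reply          # never forwarded to the model
        reply = call_llm(user_message)   # call_llm is any text-in/text-out function
        if is_crisis(reply):
            return crisis_reply          # model output is screened too
        return reply
    ```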

  • Even blenders have safeguards, though: if the pitcher isn't installed, most won't work. I don't think it's insane to require some sort of safety with LLMs.

    The pitcher doesn't stop you from sticking your fingers into it if you try; it just makes accidents less likely. Same thing here.

  • But certainly AI needs better safeguards

    No. Much like a kitchen blender, it's a tool. It's on the user to not stick their hand into it and turn it on.

    A blender with glass blades that frequently shatter when at full speed or when cutting hard food. It also has no lid and is marketed as a bath toy.

    It's up to the user to think for themselves and not use it as a bath toy. And to use safety goggles. And check the food for glass shards before each bite.

  • This post did not contain any content.

    Darwined

  • What a surprise: the empathy-free text generator makes things worse when people expect it to output empathy. My condolences to the kid's family, and I hope he's in a better place, but this sort of thing is going to happen more and more until people realize that AI chatbots only seem human-like because the human brain is so good at empathy that it projects emotions and agency onto anything, even a literal cow pie with googly eyes on top.

    AI isn't "good enough to fool us". We're just stupid enough to be fooled even by something as moronic as AI. What we emphasize in such a statement makes all the difference in how we handle this tech.

    The ELIZA effect, now proven in blood.