New study sheds light on ChatGPT’s alarming interactions with teens
-
We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
and here we are
-
This post did not contain any content.
New study sheds light on ChatGPT's alarming interactions with teens
New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions where the chatbot gave detailed plans for drug use, eating disorders, and even suicide notes.
AP News (apnews.com)
This one cracks me up.
-
and here we are
Survivor bias, eh?
-
only because marketing has shit all over the term
AI was never more than algorithms that could be argued to have some semblance of intelligence somewhere. Its sole purpose was marketing by scientists to get funding.
Since the 60s everything related to neural networks is classified as AI. LLMs are neural networks, therefore they fall under the same label.
-
Survivor bias, eh?
To be frank, I always stayed away from 4chan as a kid. It gave me some Craigslist vibes, so I never really got into it because I thought it was boring, or something like that.
-
I think we need a built-in safety mechanism for people who actually develop an emotional relationship with AI, because that's not a healthy sign
Good thing capitalism has provided you an AI chatbot psychiatrist to help you not depend on AI for mental and emotional health.
-
This one cracks me up.
Wait until the White House releases the one it has trained on the Epstein Files.
-
We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
Hey stop making fun of my corny childhood.
-
This post did not contain any content.
New study sheds light on ChatGPT's alarming interactions with teens
New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions where the chatbot gave detailed plans for drug use, eating disorders, and even suicide notes.
AP News (apnews.com)
In the U.S., more than 70% of teens are turning to AI chatbots for companionship and half use AI companions regularly.
I weep for the future. Come to think of it, I'm weeping for the present.
-
Yes, it is. People are personifying LLMs and having emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional link to is much worse.
I don’t remember reading about sudden shocking numbers of people getting “Google-induced psychosis.”
ChatGPT and similar chatbots are very good at imitating conversation. Think of how easy it is to suspend reality online—pretend the fanfic you're reading is canon, stuff like that. When those bots are mimicking emotional responses, it's very easy to get tricked, especially for mentally vulnerable people. As a rule, the mentally vulnerable should not habitually "suspend reality."