New study sheds light on ChatGPT’s alarming interactions with teens
-
Yes, it is. People are personifying LLMs and forming emotional relationships with them, which leads to unprecedented forms of abuse. Searching for shit on Google or YouTube is one thing, but being told to do something by an entity you have an emotional link to is much worse.
I think we need built-in safeguards for people who actually develop an emotional relationship with AI, because that's not a healthy sign
-
Yeah... But in order to make bubble hash you need a shitload of weed trimmings. It's not like you're just gonna watch a YouTube video and then a few hours later have a bunch of drugs you created... unless you already had the drugs in the first place.
Also, Google search results and YouTube videos aren't personalized for every user, and they don't try to pretend that they're a person having a conversation with you
Those are examples; you'd obviously still need to obtain alcohol or drugs if you asked ChatGPT too. That isn't the point. The point is, if someone wants to find that information, it's been available for decades. And YouTube and Google results are personalized, look it up.
-
I’ve had this discussion countless times, and more often than not, people argue that an LLM isn’t intelligent because it hallucinates, confidently makes incorrect statements, or fails at basic logic. But that’s not a failure on the LLM’s part - it’s a mismatch between what the system is and what the user expects it to be.
An LLM isn’t an AGI. It’s a narrowly intelligent system, just like a chess engine. It can perform a task that typically requires human intelligence, but it can only do that one task, and its intelligence doesn’t generalize across multiple independent domains. A chess engine plays chess. An LLM generates natural-sounding language. Both are AI systems and both are intelligent - just not generally intelligent.
Sorry, no. It's not intelligent at all. It just responds with statistical accuracy. There's also no objective discussion about it because that's how neural networks work.
I was hesitant to answer because we're clearly both convinced. So out of respect let's just close by saying we have different opinions.
-
I hear you - you're reacting to how people throw around the word “intelligence” in ways that make these systems sound more capable or sentient than they are. If something just stitches words together without understanding, calling it intelligent seems misleading, especially when people treat its output as facts.
But here’s where I think we’re talking past each other: when I say it’s intelligent, I don’t mean it understands anything. I mean it performs a task that normally requires human cognition: generating coherent, human-like language. That’s what qualifies it as intelligent. Not generally so, like a human, but a narrow/weak intelligence. The fact that it often says true things is almost accidental. It's a side effect of having been trained on a lot of correct information, not the result of human-like understanding.
So yes, it just responds with statistical accuracy but that is intelligent in the technical sense. It’s not understanding. It’s not reasoning. It’s just really good at speaking.
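To make "responds with statistical accuracy" concrete, here is a toy sketch of the idea behind next-token prediction: a tiny bigram model that counts which word tends to follow which, then generates text by sampling the statistically likely continuation. The corpus and all names here are made up for illustration; a real LLM is vastly larger and uses learned neural representations, but the core loop of "predict the next token from statistics" is the same.

```python
import random
from collections import defaultdict

# Toy corpus (made up for illustration -- not a real training set).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    # Sample a continuation weighted by how often it followed `word`.
    # No understanding involved -- just frequency statistics.
    options = counts[word]
    if not options:
        return None  # word never appeared with a successor
    words = list(options)
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

random.seed(0)
out = ["the"]
for _ in range(5):
    nxt = next_word(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

Every generated pair is a pair that occurred in the training text, so the output sounds plausible without the model "knowing" anything about cats or mats.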
-
Thank you for the nice answer!
We can definitely agree that it can provide intelligent answers without itself being an intelligence
-
AI is an extremely broad term which LLMs fall under. You may avoid calling them that, but it's the correct term nevertheless.
only because marketing has shit all over the term
-
We need to censor these AIs even more, to protect the children! We should ban them altogether. Kids should grow up with 4chan, general internet gore and pedos in chat lobbies like the rest of us, not with this devil AI.
and here we are
-
New research from a watchdog group reveals ChatGPT can provide harmful advice to teens. The Associated Press reviewed interactions where the chatbot gave detailed plans for drug use, eating disorders, and even suicide notes.
AP News (apnews.com)
This one cracks me up.
-
Survivor bias, eh?
-
AI was never more than algorithms that could be argued to have some semblance of intelligence somewhere. Its sole purpose was marketing by scientists to get funding.
Since the '60s, everything related to neural networks has been classified as AI. LLMs are neural networks, therefore they fall under the same label.
-
New Google Search Emoji Answer Feature to Replace All Those Copy and Paste Emoji Websites; You Will be Able to Copy the Code for Emojis With a Click.
Technology
-
Computer says no: Impact of automated decision-making on human life; Algorithms are deciding whether a patient receives an organ transplant or not; Algorithms used in welfare penalise the poor.
Technology
-
Startups and Big Tech firms cut hiring of recent graduates by 11% and 25% respectively in 2024 vs. 2023, as AI can handle routine, low-risk tasks
Technology
-