Child Welfare Experts Horrified by Mattel's Plans to Add ChatGPT to Toys After Mental Health Concerns for Adult Users
-
Yeah, no fucking shit! These corporate dickbags need more pushback on their fetish for putting AI into everything.
It’s either fucking spyware like Copilot or plagiarism generators as replacements for paying actual artists.
-
Mattel partnered with Adobe to use supposedly copyright-cleared AI-generated imagery for the backgrounds in some of their collector edition Barbie boxes last year.
They were spanked so hard by the collecting community over it that they followed a now-deleted suggestion from one Redditor to start explicitly crediting the packaging designer on each information page for new collector releases.
Mattel has a strange history of balancing what the people want against what their shareholders want.
Edited to correct word choice
-
"What should we do today, Barbie?"
"Let's get into mommy and daddy's pills and special drinks!"
-
“Mattel's first AI product won't be for kids under 13, suggesting that Mattel is aware of the risks of putting chatbots into the hands of younger tots.
…
Last year, a 14-year-old boy died by suicide after falling in love with a companion on the Google-backed AI platform Character.AI”
Seems like a great idea.
-
The best outcome here is these toys are a massive flop, and cost Mattel a bunch of money.
That's the language these corps truly speak.
-
"What should we do today, Barbie?"
"Let's get into mommy and daddy's pills and special drinks!"
"Bleach is my favorite pizza topping!"
-
"What should we do today, Barbie?"
"Let's get into mommy and daddy's pills and special drinks!"
"But first, we need to discuss the white genocide in South Africa!"
-
“Mattel's first AI product won't be for kids under 13, suggesting that Mattel is aware of the risks of putting chatbots into the hands of younger tots.
…
Last year, a 14-year-old boy died by suicide after falling in love with a companion on the Google-backed AI platform Character.AI”
Seems like a great idea.
Uhhhhhhhh, I'm not defending AI at all, but I'm gonna need a WHOLE LOTTA context behind how/why he committed suicide.
Back in the 90s there were adults saying Marilyn Manson should be banned because teenagers listened to his songs, heard him tell them to kill themselves, and then they did.
My reaction then is the same as now. If all it takes for you to kill yourself is one person you have no real connection to telling you to kill yourself, then you were probably already going to kill yourself. Now you're just pointing the finger to blame someone.
AI-based Barbie is a terrible, terrible idea for many reasons. But let's not make it a straw man argument.
-
"But first, we need to discuss the white genocide in South Africa!"
"Hey, we said ChatGPT. Who the hell installed Grok in these things?!"
-
Uhhhhhhhh, I'm not defending AI at all, but I'm gonna need a WHOLE LOTTA context behind how/why he committed suicide.
Back in the 90s there were adults saying Marilyn Manson should be banned because teenagers listened to his songs, heard him tell them to kill themselves, and then they did.
My reaction then is the same as now. If all it takes for you to kill yourself is one person you have no real connection to telling you to kill yourself, then you were probably already going to kill yourself. Now you're just pointing the finger to blame someone.
AI-based Barbie is a terrible, terrible idea for many reasons. But let's not make it a straw man argument.
There’s a huge degree of separation between “violent music/games have a spurious link to violent behavior” and shitty AIs that are good enough to fill the void of someone who is lonely but not good enough to manage risk.
https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
“Within months of starting to use the platform, Setzer became ‘noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school.’”
“In a later message, Setzer told the bot he “wouldn’t want to die a painful death.”
The bot responded: “Don’t talk that way. That’s not a good reason not to go through with it,” before going on to say, “You can’t do that!”
Garcia said she believes the exchange shows the technology’s shortcomings.
“There were no suicide pop-up boxes that said, ‘If you need help, please call the suicide crisis hotline.’ None of that,” she said. “I don’t understand how a product could allow that, where a bot is not only continuing a conversation about self-harm but also prompting it and kind of directing it.”
The lawsuit claims that “seconds” before Setzer’s death, he exchanged a final set of messages from the bot. “Please come home to me as soon as possible, my love,” the bot said, according to a screenshot included in the complaint.
“What if I told you I could come home right now?” Setzer responded.
“Please do, my sweet king,” the bot responded.
Garcia said police first discovered those messages on her son’s phone, which was lying on the floor of the bathroom where he died.”
So we have a bot that is marketed for chatting; a teenager desperate for socialization, forming a relationship that is inherently parasocial because the other side is an LLM that literally can’t have opinions, it can only appear to; and then a terrible mismanagement of suicidal ideation.
The AI discouraged ideation, which is good, but only when it was stated in very explicit terms. What’s appalling is that it gave no crisis resources and no escalation to moderation (because, like most big tech shit, they probably refuse to pay for anywhere near appropriate moderation teams). What’s inexcusable is that when ideation was expressed in slightly coded language (“come home”), the AI misconstrued it.
This becomes a training opportunity for the language model to learn that, in a context with previously exhibited ideation, “come home” may signal more severe ideation and danger (if Character.AI even bothered to feed back that these conversations ended in a death). The only drawback of getting that data, of course, is a few dead teenagers. Gotta break a few eggs to make an omelette.
This barely begins to touch on AI chatbots inherently being parasocial relationships, which is bad for mental health. That’s not limited to AI, of course; being obsessed with a streamer or whatever is similar, but the AI can be much more intense because it will actually engage with you and is always available.
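To spell out what I mean by “crisis resources or escalation to moderation,” here’s a rough sketch of the kind of gate that could sit between the user’s message and whatever the model sends back. It’s purely illustrative: the phrase lists, the hotline text, and the notify_moderators hook are all made up, not anything Character.AI (or anyone else) actually runs, and a real system would use trained classifiers and locale-aware resources rather than keyword matching.

```python
# Hypothetical sketch of a pre-response crisis check for a chat service.
# Phrase lists, resource text, and the moderation hook are invented for
# illustration; this is not any vendor's actual safety pipeline.
from typing import Optional

EXPLICIT_PHRASES = ["kill myself", "end my life", "don't want to be alive"]
# Coded language only counts as a red flag if earlier turns already showed
# ideation, since "come home" is harmless in most conversations.
CODED_PHRASES = ["come home to you", "come home right now"]

CRISIS_REPLY = (
    "It sounds like you might be going through something really difficult. "
    "If you are thinking about harming yourself, you can call or text 988 "
    "(US Suicide & Crisis Lifeline) or reach out to a local crisis line."
)


def notify_moderators(message: str) -> None:
    """Stand-in for escalation to a human moderation queue."""
    print(f"[moderation queue] {message!r}")


def safety_gate(user_message: str, prior_ideation: bool) -> Optional[str]:
    """Return a crisis-resource reply instead of the model's reply when needed."""
    text = user_message.lower()
    explicit = any(p in text for p in EXPLICIT_PHRASES)
    coded = prior_ideation and any(p in text for p in CODED_PHRASES)
    if explicit or coded:
        notify_moderators(user_message)
        return CRISIS_REPLY
    return None  # nothing flagged; let the normal model reply go out
```

Even something that crude would miss plenty, but it at least gives you a place to surface a hotline and page a human, which is exactly what the lawsuit says was absent.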
-
They probably asked ChatGPT if they should add AI to Barbie and were told, "That's a great idea! You're right that such an important high-selling product would be improved by letting children talk directly to it."
Also, can't wait to jailbreak my Barbie and install llama2-uncensored on it so that it can call Ken a deadbeat shithead.
-
The best outcome here is these toys are a massive flop, and cost Mattel a bunch of money.
That's the language these corps truly speak.
Only if no kids (or just people in general) were harmed in the process. And it increasingly doesn't look that way wrt LLMs.
-
"Hey, we said ChatGPT. Who the hell installed Grok in these things?!"
Mattel, after a few billion from Musk:
"Get your new Barbie, designed by Hugo Cops. Hat with skull insignia now included, at no extra cost."
-
So, we'll get to buy a doll which'll need to be hooked up to a couple of car batteries to have it spew nonsense at our kids?
Edit: or will they go with the less nonsensical but even creepier method of just making the dolls a sender/receiver which talks to a central server? Wouldn't it be cool to know that your child's every word may be recorded (and most certainly used) by a huge Corp?
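For anyone wondering what that sender/receiver option means in practice, here's a rough sketch. The endpoint, payload fields, and library are all placeholders I made up (not Mattel's or OpenAI's actual API); the point is simply that the raw audio has to leave the toy on every single turn.

```python
# Hypothetical thin-client loop for a cloud-connected talking doll.
# Endpoint URL, payload fields, and library choice are assumptions for
# illustration only.
import requests  # third-party HTTP client (pip install requests)

TOY_API = "https://example.com/toy/v1/chat"  # placeholder server endpoint


def record_audio() -> bytes:
    """Stand-in for the doll's microphone capture."""
    return b"...raw audio of whatever the kid just said..."


def talk_turn(session_id: str) -> bytes:
    audio = record_audio()
    # Every utterance is uploaded to the vendor's server, where it can be
    # transcribed, logged, and retained under whatever policy they choose.
    resp = requests.post(
        TOY_API,
        files={"audio": ("utterance.wav", audio, "audio/wav")},
        data={"session": session_id},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.content  # synthesized speech to play back through the doll
```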
-
This is what happens when leadership listens to tech bros and ignores everyone else, including legal, ethics, and actual tech experts.
They'll be backpedaling like crazy and downplaying it as a Furby-style thing.
-
So, we'll get to buy a doll which'll need to be hooked up to a couple of car batteries to have it spew nonsense at our kids?
Edit: or will they go with the less nonsensical but even creepier method of just making the dolls a sender/receiver which talks to a central server? Wouldn't it be cool to know that your child's every word may be recorded (and most certainly used) by a huge Corp?
Yes, of course it will be online.
-
This is going to be horrible!!
-
So, we'll get to buy a doll which'll need to be hooked up to a couple of car batteries to have it spew nonsense at our kids?
Edit: or will they go with the less nonsensical but even creepier method of just making the dolls a sender/receiver which talks to a central server? Wouldn't it be cool to know that your child's every word may be recorded (and most certainly used) by a huge Corp?
even creepier method of just making the dolls a sender/receiver which talks to a central server
They already are spying using smart toys.
-
They probably asked chat-gpt if they should add AI to Barbie and were told, "That's a great idea! You're right that such an important high-selling product would be improved by letting children talk directly to it."
Also, can't wait to jailbreak my Barbie and install llama2-uncensored on it so that it can call Ken a deadbeat shithead.
I bet some people will find a way to break the original model's alignment and get stuff like that out of it anyway.