Man Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
-
Anyone consulting LLMs for anything already had a preexisting psychological condition called Stupid as Fuck.
You get totally different answers to “is X healthy” vs “is X unhealthy”
But yeah, if ChatGPT tells you to order restricted substances on the internet, probably don’t do that
-
This post did not contain any content.
A man uses the internet to poison himself. A story as old as time. But if we stick AI in the title, we can get some sweet clicks out of it.
-
These days we call it bruhmism.
-
It sounds more like a crazy person did a crazy thing and happened to use AI.
-
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects to which they're not familiar. Especially on subjects that are on the edges or even outside human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
-
A man uses the internet to poison himself. A story as old as time. But if we stick AI in the title, we can get some sweet clicks out of it.
Yup reads exactly the same.
-
A man uses the internet to poison himself. A story as old as time. But if we stick AI in the title, we can get some sweet clicks out of it.
This is not just "using the internet," though. AI use ≠ finding some conspiracy-fueled rant on some long-forgotten message board. ChatGPT does not scour the internet or have any sort of meaningful sanity checks on the pattern of words it generates. It doesn't "know" what it's saying, nor does it "care."
If he had done even the most basic of generic internet searches, he would have discovered the DASH diet.
The inclusion of this goober's use of AI is yet another example of why using what is essentially a reinforcement and pattern-generation engine is one of the dumbest things a person can do. It doesn't seem to matter how many experts remind people of its limitations, so all that remains is pointing out every time somebody does something stupid, so people at least get a reminder that the other end of the conversation is dumber than they are and offers only an illusion of intelligence.
-
This is not just "using the internet," though. AI use ≠ finding some conspiracy-fueled rant on some long-forgotten message board. ChatGPT does not scour the internet or have any sort of meaningful sanity checks on the pattern of words it generates. It doesn't "know" what it's saying, nor does it "care."
If he had done even the most basic of generic internet searches, he would have discovered the DASH diet.
The inclusion of this goober's use of AI is yet another example of why using what is essentially a reinforcement and pattern-generation engine is one of the dumbest things a person can do. It doesn't seem to matter how many experts remind people of its limitations, so all that remains is pointing out every time somebody does something stupid, so people at least get a reminder that the other end of the conversation is dumber than they are and offers only an illusion of intelligence.
It doesn't seem to matter how many experts remind people...
But this is not new. People were drinking bleach and giving themselves cyanide poisoning long before the spread of LLM chatbots. Some dare to call it "doing their own research."
-
You get totally different answers to “is X healthy” vs “is X unhealthy”
But yeah, if ChatGPT tells you to order restricted substances on the internet, probably don’t do that
Social networks are all about making and keeping people angry so they come back. AI is all about brown-nosing and delivering any information with absolute confidence to keep people coming back.
-
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects to which they're not familiar. Especially on subjects that are on the edges or even outside human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
They don’t realize that the chatbot’s “ideas” about Hawking radiation were also just posted by a crank on Reddit.
-
He did his own research! /s
-
After years of bullshit, corruption, and nepotism, we as a society (or a critical mass of it) accepted that lies and bullshit are a part of life.
I really think that’s what’s going on here: we filled our reality with contradictions and things that drive us crazy, and now a large percentage of the population is okay with listening to inefficient guessing machines.
Seriously, the fact that hallucinations didn’t kill the hype is, imo, a hallmark of being in a post-truth era.
This is not the mindset that made computers and the Internet. Feels more like late stage Rome.
-