Man Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
It sounds more like a crazy person did a crazy thing and happened to use AI.
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects they're not familiar with. Especially on subjects that are on the edges of, or even outside, human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
-
A man uses the internet to poison himself. A story as old as time. But if we stick AI in the title, we can get some sweet clicks out of it.
Yup, reads exactly the same.
-
A man uses the internet to poison himself. A story as old as time. But if we stick AI in the title, we can get some sweet clicks out of it.
This is not just "using the internet," though. AI use ≠ finding some conspiracy-fueled rant on some long-forgotten message board. ChatGPT does not scour the internet or have any sort of meaningful sanity checks on the pattern of words it generates. It doesn't "know" what it's saying, nor does it "care."
If he had done even the most basic of generic internet searches, he would have discovered the DASH diet.
The inclusion of this goober's use of AI is yet another example of why using what is essentially a reinforcement and pattern-generation engine is one of the dumbest things a person can do. It doesn't seem to matter how many experts remind people of its limitations, so all that remains is pointing out every time somebody does something stupid, as a reminder that the other end of the conversation is dumber than they are and is only an illusion of intelligence.
-
This is not just "using the internet," though. AI use ≠ finding some conspiracy-fueled rant on some long-forgotten message board. ChatGPT does not scour the internet or have any sort of meaningful sanity checks on the pattern of words it generates. It doesn't "know" what it's saying, nor does it "care."
If he had done even the most basic of generic internet searches, he would have discovered the DASH diet.
The inclusion of this goober's use of AI is yet another example of why using what is essentially a reinforcement and pattern-generation engine is one of the dumbest things a person can do. It doesn't seem to matter how many experts remind people of its limitations, so all that remains is pointing out every time somebody does something stupid, as a reminder that the other end of the conversation is dumber than they are and is only an illusion of intelligence.
It doesn't seem to matter how many experts remind people...
But this is not new. People have been drinking bleach and giving themselves cyanide poisoning since long before the spread of LLM chatbots. Some dare to call it "doing their own research."
-
You get totally different answers to “is X healthy” vs “is X unhealthy”
But yeah, if ChatGPT tells you to order restricted substances on the internet, probably don’t do that
Social networks are all about making and keeping people angry so they come back. AI is all about brown-nosing and giving out any information with absolute confidence to keep people coming back.
-
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects they're not familiar with. Especially on subjects that are on the edges of, or even outside, human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
They don't realize that the chatbot's "ideas" about Hawking radiation were also just posted by a crank on Reddit.
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
He did his own research! /s
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
After years of bullshit, corruption and nepotism, we as a society (or a critical mass of it) accepted that lies and bullshit are a part of life.
I really think that's what is going on here: we filled our reality with contradictions and things that drive us crazy, and now a large percentage of the population is okay with listening to inefficient guessing machines.
Seriously, the fact that hallucinations didn't kill the hype is, imo, a hallmark of being in a post-truth era.
This is not the mindset that made computers and the Internet. Feels more like late-stage Rome.
-
After years of bullshit, corruption and nepotism, we as a society (or a critical mass of it) accepted that lies and bullshit are a part of life.
I really think that's what is going on here: we filled our reality with contradictions and things that drive us crazy, and now a large percentage of the population is okay with listening to inefficient guessing machines.
Seriously, the fact that hallucinations didn't kill the hype is, imo, a hallmark of being in a post-truth era.
This is not the mindset that made computers and the Internet. Feels more like late-stage Rome.
Late-stage western Rome was kinda fine; they even had Christianity kinda receding (the Goths and Vandals were far more pious Christians, while in Rome there was plenty of support for restoring the old state religion). What really killed them was intrigue and the inability to hold their territory militarily.
Late-stage eastern Rome can be divided into a few different long periods. If we mean after being crusaded, then it was shattered and trying to recombine into something useful, eventually finished off by the Ottomans. If we mean before, well, there was the long period of the Seljuk conquest of Anatolia, initially a result of the empire being weakened by conquests of its own buffer states and allies, which didn't really have to be conquered. But in any case it's probable that the same thing would have happened: the Seljuk armies of the initial invasion are described as quite numerous, and they had a few military innovations the eastern Romans didn't have.
-
The thing that bothers me about LLMs is that people will acknowledge the hallucinations and lies LLMs spit out when they're discussing information the user is familiar with.
But that same person will somehow trust an LLM as an authority on subjects they're not familiar with. Especially on subjects that are on the edges of, or even outside, human knowledge.
Sure, I don't listen when it tells me to make pizza with glue, but its ideas about Hawking radiation are going to change the field.
That would happen with machine-generated texts in my childhood (the 00s) as well.
I think the propagandized (by Apple and many other companies, but also just by stupid reductions) idea of "invention" is why they think that.
They've literally been taught that the people making "inventions" are always "rebels" who disregard existing knowledge.
It's especially funny that in the areas more familiar to them they are all for authority, even when it's suicidal madness.
It's harmful when you make yourself believe that a machine that is comfortable to use, built with state-of-the-art electronics on tech processes hardly achievable in many places on the planet, advertised and visually designed to please, and sold at a scale big enough to make it all worth it, is the result of some rebellion.
Rebellions don't look like that. This is how gifts from the emperor of the sun, out of his forbidden palace, look. They are nice too, but they have nothing in common with rebellions, like at all.
Like a cargo cult.
I think it's actually related in essence to cargo cults. First, European empires (or one can even call them one big empire) traded important resources and slaves for colored glass, then for nice clothes, then for weapons, and eventually they started trading for the actual meat of their culture, creating colonial elites and, in their perception, spreading the empire to only good effect. If the colonial savages only learned muskets, and started to produce muskets, that's both no income and a danger; but if you make them more integrated and dependent, they are not going to shoot at you.
So, Marx happens. Marx is notoriously industrialist in focus, and in his model colonies are just reduced to some black box. That's exactly why Marxist and derived ideas became so popular in former colonies and dependent countries: they could draw in place of that black box whatever they wanted, and yet have a common internationalist ideological family, allowing for some alliances and understanding. There was a bridge in the form of Russia: industrialist Marxism mixed with various agrarian ideologies and created the Bolshevik one, of using the peasantry to create a "socialist" regime first and then industrialize, sort of with an inferiority complex. For ex-colonies and dependent lands, though, the agrarian part was the most important one; it allowed them to culturally bond with the imperial core through an anti-imperial ideology that gave them the freedom to do whatever they wanted.
Then during the Cold War the empire reformed itself, and in appearance it sort of chose a middle ground. It both internalized some of the Marxist emotion and imagery, and adjusted the imperial mechanism for global trade and exchange. It also used the split between the USSR and China. That's how it defeated the USSR (though mostly the USSR defeated itself).
So, one of the ways to keep this iteration of global trade and exchange from turning into yet another spread of power to the dependent world was, I think, this kind of centralization and heavy propaganda. The glossy, 00s-style portrayal of "the western world" as one big Disneyland entertainment park, with a "rebel" being able to change it all and make some of that entertainment too.
It simply eventually spread back, because that always happens.
#1: Why the hell did I write that? #2: It's not some conspiracy theory; I think most of that happened naturally, not devised by some evil conglomerate of elites.
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
Ah, the sleepy salt
-
These days we call it bruhmism.
Took too much Natrium Bruhmide
-
This post did not contain any content.
Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT
"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."
404 Media (www.404media.co)
Oh fun! When Americans lose their healthcare, I expect these kinds of stories to rise.