A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say
-
People forget that libraries are still a thing.
Sadly, a big problem with society is that we all want quick, easy fixes, of which there are none when it comes to mental health, and anyone who offers one - even an AI - is selling you the proverbial snake oil.
If I could upvote your comment five times for promoting libraries, I would!
-
It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.
Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.
It's one step below BetterHelp.
-
I'm a developer, and this is 100% word salad.
"It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. ..."
This is actual nonsense. Recursion has to do with algorithms, and it's when you call a function from within itself.
def func_a(input=True):
    if input is True:
        func_a(True)
    else:
        return False
My program above would recur infinitely, but hopefully you can get the gist.
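For contrast, here's a minimal sketch of recursion that does stop: give it a base case so each call works on a smaller input (countdown is just an illustrative name):

def countdown(n):
    if n <= 0:  # base case: stop recursing
        return
    countdown(n - 1)  # recursive call on a smaller input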
Anyway, it sounds like he's talking about people, not algorithms. People can't recur. We aren't "recursive," so whatever he thinks he means, it isn't based in reality. That, plus the nebulous talk of being replaced by some unseen entity, reeks of paranoid delusions.
I'm not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn't have any clue what he's talking about.
def f(): f()
Functionally the same; saves some bytes.
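Worth noting: in CPython neither version actually runs forever; the interpreter gives up once the call stack hits its recursion limit. A quick sketch, assuming the default limit:

import sys

def f():
    f()

print(sys.getrecursionlimit())  # 1000 by default in CPython

try:
    f()
except RecursionError as err:
    print(err)  # "maximum recursion depth exceeded"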
-
It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.
Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.
I agree. I'm generally pretty indifferent to this new generation of consumer models--the worst thing about them is the incredible number of idiots flooding social media to witch-hunt or evangelize them without any understanding of either the tech or the law they're talking about--but the people who use them so frequently, for so many fundamental things, that it's observably diminishing their basic competencies and health are really unsettling.
-
Chatbot psychosis literally played itself out in my sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or what it would have you do.
-
Isn't this just paranoid schizophrenia? I don't think ChatGPT can cause that.
LLMs are obligate yes-men.
They'll support and reinforce whatever rambling or delusion you talk to them about, and provide “evidence” to support it (made up evidence, of course, but if you're already down the rabbit hole you'll buy it).
And they'll keep doing that as long as you let them, since they're designed to keep you engaged (and paying).
They're extremely dangerous for anyone with the slightest addictive, delusional, suggestible, or paranoid tendencies, and should be regulated as such (but won't).
-
Chatbot psychosis literally played itself out in my sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or what it would have you do.
It's so annoying; I don't know how to make them comprehend, it's stupid. I tried to make it interesting for myself, but I always end up breaking it or getting annoyed by the bad memory or just the shitty dialogue, and I've tried a lot of AIs. I assume it only works on narcissists, or people who talk mostly to be heard and to hear agreement rather than to converse; the worst type of people get validation from AI without seeing it for what it is.
-
Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?
Because, to me, it's very clear: they're referring to something that was built (the LLMs) and is segregating people, especially those who don't conform to a dystopian world.
Isn't that what's happening right now in the world? "Dead Internet Theory" has never been so real: online content keeps sowing seeds of doubt about whether it's AI-generated or not, users constantly need to prove they're "not a bot" and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.
The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean onto Earth, an ocean we've grown used to drowning in ever since.
Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, to the public for free (as in "free beer") out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply hand over obscene amounts of money (much of it taxpayers' money) with no profit in the foreseeable future (OpenAI, for example, has admitted many times that even its US$200-a-month plan isn't enough to cover its costs, yet it keeps offering LLMs cheap or "free").
So there's definitely something we're not being told: the cost of plugging the whole world into LLMs and other generative models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov-chain algorithms offline, directly or indirectly: resumes are filtered by LLMs, workers' performance is scrutinized by LLMs, purchases are scrutinized by LLMs, surveillance-camera footage is scrutinized by VLMs, entire genomes are fed to gLMs (sharpening both edges of the double-edged sword of bioengineering and biohacking)...
Generative models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but costs we are paying nonetheless, costs that aren't being disclosed to us; we can point out some (loss of privacy, personal data being sold and/or stolen), but those are just the tip of an iceberg: one we can already see, but whose consequences we can't fully comprehend.
Curious how pondering this is deemed "delusional", yet it's considered "normal" to accept an increasingly dystopian world and refuse to denounce the elephant in the room.

I think in order to be a good psychiatrist you need to understand what your patient is "babbling" about. But you also need to be able to challenge their understanding and conclusions about the world so they engage with the problem in a healthy manner. Like, if the guy is worried about how AI is making the internet and the world more dead, then maybe don't go to the AI to be understood.
-
It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.
Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.
It can sometimes write boilerplate fairly well. The issue with using it to solve problems is that it doesn't know what it's doing. Then you have to read and parse what it outputs and fix it. It's usually faster to just write it yourself.