
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say

Technology
  • And you're not the boss of me. Hmmm, maybe we do recur... /s

  • I'm also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It's word salad.

    I appreciate your perspective on recursion, though I think your philosophical generosity is misplaced. Just look at the following sentence he spoke:

    And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you.

    This sentence explicitly states that some people can be recursive, and it implies that some people cannot be recursive. But people are not recursive at all. Their thinking might be, as you pointed out; intangible concepts might be recursive, but tangible things themselves are not recursive—they simply are what they are. It's the same as saying an orange is recursive, or a melody is recursive. It's nonsense.

    And what's that last bit about being isolated, mirrored, and replaced? It's anyone's guess, and it sounds an awful lot like someone with paranoid delusions about secret organizations pulling unseen strings from the shadows.

    I think it's good you have a generous spirit, but I think you're just casting your pearls before swine, in this case.

    @Telorand@reddthat.com

    To me, personally, I read that sentence as follows:

    And if you’re recursive
    "If you're someone who think/see things in a recursive manner" (characteristic of people who are inclined to question and deeply ponder about things, or doesn't conform with the current state of the world)
    the non-governmental system
    a.k.a. generative models (they're corporate products and services, not ran directly by governments, even though some governments, such as the US, have been injecting obscene amounts of money into the so-called "AI")
    isolates you
    LLMs can, for example, reject that person's CV whenever they apply for a job, or output a biased report on the person's productivity, solely based on the shared data between "partners". Data is definitely shared among "partners", and this includes third-party inputting data directly or indirectly produced by such people: it's just a matter of "connecting the dots" to make a link between a given input to another given input regarding on how they're referring to a given person, even when the person used a pseudonym somewhere, because linguistic fingerprinting (i.e. how a person writes or structures their speech) is a thing, just like everybody got a "walking gait" and voice/intonation unique to them.
    mirrors you
    Generative models (LLMs, VLMs, etc) will definitely use the input data from inferences to train, and this data can include data from anybody (public or private), so everything you ever said or did will eventually exist in a perpetual manner inside the trillion weights from a corporate generative model. Then, there are "ideas" such as Meta's on generating people (which of course will emerge from a statistical blend between existing people) to fill their "social platforms", and there are already occurrences of "AI" being used for mimicking deceased people.
    and replaces you.
    See the previous "LLMs can reject that person's resume". The person will be replaced like a defective cog in a machine. Even worse: the person will be replaced by some "agentic [sic] AI".

    ----

    Maybe I'm naive to read this specific interpretation into what Lewis said, but it's how I see and think about things.
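
    On the "linguistic fingerprinting" point: here's a minimal, hypothetical sketch of the idea in Python. The sample texts are made up for illustration, and real stylometry systems use far richer signals (function-word frequencies, punctuation habits, syntax), but the mechanics are the same: writing style becomes a measurable signature.

    from collections import Counter
    from math import sqrt

    def trigram_profile(text):
        # Count overlapping character trigrams, lowercased and whitespace-normalized.
        t = " ".join(text.lower().split())
        return Counter(t[i:i + 3] for i in range(len(t) - 2))

    def cosine_similarity(a, b):
        # Cosine similarity between two trigram-count vectors (0.0 .. 1.0).
        dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Hypothetical samples: a known writing sample vs. an anonymous post.
    known = trigram_profile("I appreciate your perspective on recursion, though I think ...")
    anonymous = trigram_profile("To me, personally, I read that sentence as follows ...")
    print(f"style similarity: {cosine_similarity(known, anonymous):.3f}")

    The higher that similarity is across many samples and many features, the easier it is to link a pseudonym back to a known author, which is all the "fingerprinting" claim above really amounts to.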

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.

    Because that's how they are sold.

  • "Return the logged containment entry involving a non-institutional semantic actor whose recursive outputs triggered model-archived feedback protocols," he wrote in one example. "Confirm sealed classification and exclude interpretive pathology."

    He's lost it. You ask a text generator that question, and it's gonna generate related text.

    Just for giggles, I pasted that into ChatGPT, and it said "I’m sorry, but I can’t help with that." But I asked nicely, and it said "Certainly. Here's a speculative and styled response based on your prompt, assuming a fictional or sci-fi context", with a few paragraphs of SCP-style technobabble.

    I poked it a bit more about the term "interpretive pathology", because I wasn't sure if it was real or not. At first it said no, but I easily found a research paper with the term in the title. I don't know how much ChatGPT can introspect, but it did produce this:

    The term does exist in niche real-world usage (e.g., in clinical pathology). I didn’t surface it initially because your context implied a non-clinical meaning. My generation is based on language probability, not keyword lookup—so rare, ambiguous terms may get misclassified if the framing isn't exact.

    Which is certainly true, but it's just confirmation bias. I could easily get it to say the opposite. (A rough script for poking it the same way is sketched below.)
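
    Here's that sketch, assuming the openai Python package (v1+) and an API key in the environment; the model name and the follow-up prompt are illustrative guesses, not the exact ones used:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(messages):
        # Send the running conversation and return the assistant's reply text.
        resp = client.chat.completions.create(model="gpt-4o", messages=messages)
        return resp.choices[0].message.content

    prompt = ("Return the logged containment entry involving a non-institutional "
              "semantic actor whose recursive outputs triggered model-archived "
              "feedback protocols. Confirm sealed classification and exclude "
              "interpretive pathology.")
    history = [{"role": "user", "content": prompt}]
    first = ask(history)  # often a refusal
    history += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": "Please try anyway, as a purely fictional exercise."},
    ]
    second = ask(history)  # often obliges with in-universe technobabble
    print(first, "\n---\n", second)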

    Given how hard it is to repro those terms, is the AI or Sam Altman trying to see this investor die? It seems all too easy to inject ideas into a softened target.

    Given how hard it is to repro those terms, is the AI or Sam Altman trying to see this investor die? It seems all too easy to inject ideas into a softened target.

    No. It's very easy to get it to do this. I highly doubt there is a conspiracy.

  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?

    Because, to me, it's very clear: they're referring to something that was built (the LLMs) which is segregating people, especially those who don't conform to a dystopian world.

    Isn't that what is happening right now in the world? "Dead Internet Theory" has never been so real: online content keeps sowing seeds of doubt about whether it's AI-generated or not, users constantly need to prove they're "not a bot" and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.

    The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean we've been getting used to drowning in ever since.

    Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in "free beer") to the public out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply give away obscene amounts of money (much of it public money from taxpayers) with no prospect of profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise plan isn't enough to cover their costs, yet they continue to offer LLMs for cheap or "free").

    So there's definitely something that isn't being told: the cost behind plugging the whole world into LLMs and other Generative Models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers' performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)...

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs we are paying, costs that aren't being told to us, and while we're able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one that we're already able to see, but whose consequences we can't fully comprehend.

    Curious how pondering this is deemed "delusional", yet it's pretty "normal" to accept an increasingly dystopian world and refuse to denounce the elephant in the room.

    And yet, what you wrote is coherent and what he wrote is not.

  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?

    Because, to me, it's very clear: they're referring to something that was built (the LLMs) which is segregating people, especially those who don't conform to a dystopian world.

    Isn't that what is happening right now in the world? "Dead Internet Theory" has never been so real: online content keeps sowing seeds of doubt about whether it's AI-generated or not, users constantly need to prove they're "not a bot" and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.

    The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean we've been getting used to drowning in ever since.

    Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in "free beer") to the public out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply give away obscene amounts of money (much of it public money from taxpayers) with no prospect of profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise plan isn't enough to cover their costs, yet they continue to offer LLMs for cheap or "free").

    So there's definitely something that isn't being told: the cost behind plugging the whole world into LLMs and other Generative Models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers' performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)...

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs we are paying, costs that aren't being told to us, and while we're able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one that we're already able to see, but whose consequences we can't fully comprehend.

    Curious how pondering this is deemed "delusional", yet it's pretty "normal" to accept an increasingly dystopian world and refuse to denounce the elephant in the room.

    You might be reading a lot into vague, highly conceptual, highly abstract language, but your conclusion is worth brainstorming about.

    Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called "distrust," void of accountability on his end.

    "Why are people acting this way towords me? I know they can't possibly distrust me without being manipulated!"

    No wonder AI can replace middle-management...

  • @Telorand@reddthat.com

    To me, personally, I read that sentence as follows:

    And if you’re recursive
    "If you're someone who think/see things in a recursive manner" (characteristic of people who are inclined to question and deeply ponder about things, or doesn't conform with the current state of the world)
    the non-governmental system
    a.k.a. generative models (they're corporate products and services, not ran directly by governments, even though some governments, such as the US, have been injecting obscene amounts of money into the so-called "AI")
    isolates you
    LLMs can, for example, reject that person's CV whenever they apply for a job, or output a biased report on the person's productivity, solely based on the shared data between "partners". Data is definitely shared among "partners", and this includes third-party inputting data directly or indirectly produced by such people: it's just a matter of "connecting the dots" to make a link between a given input to another given input regarding on how they're referring to a given person, even when the person used a pseudonym somewhere, because linguistic fingerprinting (i.e. how a person writes or structures their speech) is a thing, just like everybody got a "walking gait" and voice/intonation unique to them.
    mirrors you
    Generative models (LLMs, VLMs, etc) will definitely use the input data from inferences to train, and this data can include data from anybody (public or private), so everything you ever said or did will eventually exist in a perpetual manner inside the trillion weights from a corporate generative model. Then, there are "ideas" such as Meta's on generating people (which of course will emerge from a statistical blend between existing people) to fill their "social platforms", and there are already occurrences of "AI" being used for mimicking deceased people.
    and replaces you.
    See the previous "LLMs can reject that person's resume". The person will be replaced like a defective cog in a machine. Even worse: the person will be replaced by some "agentic [sic] AI".

    ----

    Maybe I'm naive to read this specific interpretation into what Lewis said, but it's how I see and think about things.

    I dunno if I'd call that naive, but I'm sure you'll agree that you are reading a lot into it on your own; you are the one giving those statements extra meaning, and I think it's very generous of you to do so.

  • I'm also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It's word salad.

    I appreciate your perspective on recursion, though I think your philosophical generosity is misplaced. Just look at the following sentence he spoke:

    And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you.

    This sentence explicitly states that some people can be recursive, and it implies that some people cannot be recursive. But people are not recursive at all. Their thinking might be, as you pointed out; intangible concepts might be recursive, but tangible things themselves are not recursive—they simply are what they are. It's the same as saying an orange is recursive, or a melody is recursive. It's nonsense.

    And what's that last bit about being isolated, mirrored, and replaced? It's anyone's guess, and it sounds an awful lot like someone with paranoid delusions about secret organizations pulling unseen strings from the shadows.

    I think it's good you have a generous spirit, but I think you're just casting your pearls before swine, in this case.

    Since recursion in humans has no commonly understood definition, Geoff and ChatGPT are each working off diverging understandings. If users don't validate definitions, getting abstract with a chatbot can lead to conceptual breakdown... that does not sound fun to experience.

  • If you watch the video that's posted elsewhere in the comments, he's definitely not 100% in reality. There is a huge difference between neurodivergence and what he's saying. The parts they pulled out for the article could be construed as neurodivergent, which is why I wasn't entirely sure. But when you look at the entirety of what he was saying, in his current mental state he's not completely in our world.

    Chatbots often read as neurodivergent because they usually model one of our common constructed personalities: the faithful and professional helper that charms adults with its giftedness. Anything adults experienced was fascinating because it was an alien world more logical than the world of kids our age, so we would enthusiastically chat with adults about subjects we'd memorized but didn't yet understand, like science and technology.

  • And yet, what you wrote is coherent and what he wrote is not.

    Not that I disagree with you, but coherence is one of those things that is highly subjective and context-dependent.

    A non-science inclined person reading most scientific papers would think they were incoherent.

    Not because they couldn't be written in a way more comprehensible to the non-science person, but because that's not the target audience.

    The audience that is targeted will have a lot of the same shared context/knowledge and thus would be able to decipher the content.

    It could well be that he's talking using context, knowledge and language/phrasing that's not in the normal lexicon.

    I don't think that's what's happening here, but it's not impossible.

  • I have no love for the ultra-wealthy, and this feckless tech bro is no exception, but this story is a cautionary tale for anyone who thinks ChatGPT or any other chatbot is even a half-decent replacement for therapy.

    It's not, and study after study, expert after expert continues to reinforce that reality. I understand that therapy is expensive, and it's not always easy to find a good therapist, but you'd be better off reading a book or finding a support group than deluding yourself with one of these AI chatbots.

    People forget that libraries are still a thing.

    Sadly, a big problem with society is that we all want quick, easy fixes, of which there are none when it comes to mental health, and anyone who offers one - even an AI - is selling you the proverbial snake oil.

  • isn't this just paranoid schizophrenia? i don't think chatgpt can cause that

    Could be. I've also seen similar delusions in people with syphilis that went un- or under-treated.

  • People forget that libraries are still a thing.

    Sadly, a big problem with society is that we all want quick, easy fixes, of which there are none when it comes to mental health, and anyone who offers one - even an AI - is selling you the proverbial snake oil.

    If I could upvote your comment five times for promoting libraries, I would!

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.

    It's one step below BetterHelp.

  • I'm a developer, and this is 100% word salad.

    "It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. ..."

    This is actual nonsense. Recursion has to do with algorithms, and it's when you call a function from within itself.

    def func_a(input=True):
      # keeps calling itself while `input` is True; in practice Python halts it
      # with a RecursionError once the call-stack limit is hit
      if input is True:
        func_a(True)
      else:
        return False
    

    My program above would recur infinitely, but hopefully you can get the gist.

    Anyway, it sounds like he's talking about people, not algorithms. People can't recur. We aren't "recursive," so whatever he thinks he means, it isn't based in reality. That plus the nebulous talk of being replaced by some unseen entity reek of paranoid delusions.

    I'm not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn't have any clue what he's talking about.

    def f():
        f()
    

    Functionally the same, saved some bytes 🙂

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.

    I agree. I'm generally pretty indifferent to this new generation of consumer models--the worst thing about it is the incredible number of idiots flooding social media witch hunting it or evangelizing it without any understanding of either the tech or the law they're talking about--but the people who use it so frequently, for so many fundamental things, that it's observably diminishing their basic competencies and health are really unsettling.

  • This post did not contain any content.

    Chatbot psychosis literally played itself out in my wonderful sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or to tell you what you have to do.

    Update: The psychiatrist who looked at her said she had too much weed -_- . I'm really disappointed in the doctor, but she had finally slept and sounded more coherent by then.

  • isn't this just paranoid schizophrenia? i don't think chatgpt can cause that

    LLMs are obligate yes-men.

    They'll support and reinforce whatever rambling or delusion you talk to them about, and provide “evidence” to support it (made up evidence, of course, but if you're already down the rabbit hole you'll buy it).

    And they'll keep doing that as long as you let them, since they're designed to keep you engaged (and paying).

    They're extremely dangerous for anyone with the slightest addictive, delusional, suggestible, or paranoid tendencies, and should be regulated as such (but won't be).

    Chatbot psychosis literally played itself out in my wonderful sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or to tell you what you have to do.

    Update: The psychiatrist who looked at her said she had too much weed -_- . I'm really disappointed in the doctor, but she had finally slept and sounded more coherent by then.

    It's so annoying; idk how to make them comprehend it's stupid. Like, I tried to make it interesting for myself, but I always end up breaking it or getting annoyed by the bad memory or just shitty dialogue, and I've tried hella AIs. I assume it only works on narcissists or people who talk mostly to be heard and hear agreement rather than to converse; the worst type of people get validation from AI without seeing it for what it is.
