
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say

Technology
  • People forget that libraries are still a thing.

    Sadly, a big problem with society is that we all want quick, easy fixes, of which there are none when it comes to mental health, and anyone who offers one - even an AI - is selling you the proverbial snake oil.

    If I could upvote your comment five times for promoting libraries, I would!

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they’re halfway decent at, will lead to disastrous code if you don’t already know what you’re doing.

    It’s one step below BetterHelp.

  • I'm a developer, and this is 100% word salad.

    "It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. ..."

    This is actual nonsense. Recursion has to do with algorithms, and it's when you call a function from within itself.

    def func_a(flag=True):
        if flag is True:
            # The function calls itself with the same argument,
            # so it never reaches the else branch and never stops.
            func_a(True)
        else:
            return False
    

    My program above would recur infinitely, but hopefully you can get the gist.

    Anyway, it sounds like he's talking about people, not algorithms. People can't recur. We aren't "recursive," so whatever he thinks he means, it isn't based in reality. That, plus the nebulous talk of being replaced by some unseen entity, reeks of paranoid delusions.

    I'm not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn't have any clue what he's talking about.

    def f():
        f()
    

    Functionally the same, saved some bytes 🙂
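
    For contrast, a recursive function that actually terminates needs a base case. A minimal sketch (factorial is just the classic illustration, not anything from the article):

    def factorial(n):
        # Base case: stop recursing once we reach 0.
        if n == 0:
            return 1
        # Recursive case: the function calls itself on a smaller input.
        return n * factorial(n - 1)

    print(factorial(5))  # 120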

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they’re halfway decent at, will lead to disastrous code if you don’t already know what you’re doing.

    I agree. I'm generally pretty indifferent to this new generation of consumer models--the worst thing about them is the incredible number of idiots flooding social media, witch-hunting them or evangelizing them without any understanding of either the tech or the law they're talking about--but it's really unsettling to see people use them so frequently, for so many fundamental things, that it's observably diminishing their basic competencies and health.

  • This post did not contain any content.

    Chatbot psychosis literally played itself out in my sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or what it would have you do.

  • Isn't this just paranoid schizophrenia? I don't think ChatGPT can cause that.

    LLMs are obligate yes-men.

    They'll support and reinforce whatever rambling or delusion you talk to them about, and provide “evidence” to support it (made up evidence, of course, but if you're already down the rabbit hole you'll buy it).

    And they'll keep doing that as long as you let them, since they're designed to keep you engaged (and paying).

    They're extremely dangerous for anyone with the slightest addictive, delusional, suggestible, or paranoid tendencies, and should be regulated as such (but won't).

  • Chatbot psychosis literally played itself out in my sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or what it would have you do.

    It's so annoying that I don't know how to make them comprehend it's stupid. I tried to make it interesting for myself, but I always end up breaking it or getting annoyed by the bad memory or just shitty dialogue, and I've tried hella AIs. I assume it only works on narcissists, or people who talk mostly to be heard and to hear agreement rather than to converse; the worst type of people get validation from AI without seeing it for what it is.

  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?

    Because, to me, it's very clear: they're referring to something that was built (the LLMs) which is segregating people, especially those who don't conform to a dystopian world.

    Isn't that what is happening right now in the world? "Dead Internet Theory" has never been so real: online content has been sowing seeds of doubt about whether it's AI-generated or not, users constantly need to prove they're "not a bot" and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.

    The dystopia was already emerging way before GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean in which we've become used to drowning ever since.

    Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in "free beer") to the public out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply give away obscene amounts of money (much of it public money from taxpayers) with no profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for its Enterprise plan isn't enough to cover its costs, yet it continues to offer LLMs for cheap or "free").

    So there's definitely something that isn't being told: the cost of plugging the whole world into LLMs and other Generative Models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers' performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)...

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs we are paying, costs that aren't being disclosed to us, and while we're able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one we're already able to see, but whose consequences we can't fully comprehend.

    Curious how pondering this is deemed "delusional", yet it's pretty "normal" to accept an increasingly dystopian world while refusing to denounce the elephant in the room.

    I think in order to be a good psychiatrist you need to understand what your patient is "babbling" about. But you also need to be able to challenge their understanding and conclusions about the world so they engage with the problem in a healthy manner. Like, if the guy is worried about how AI is making the internet and the world more dead, then maybe don't go to the AI to be understood.

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they’re halfway decent at, will lead to disastrous code if you don’t already know what you’re doing.

    About the coding thing...

    It can sometimes write boilerplate fairly well. The issue with using it to solve problems is that it doesn't know what it's doing. Then you have to read and parse what it outputs and fix it. It's usually faster to just write it yourself.

  • How to Choose Between Flats in Gunnersbury and Wembley Park

    Technology, 0 votes, 1 post, 4 views
    No one has replied
  • You can still enable uBlock Origin in Chrome, here is how

    Technology, 313 votes, 130 posts, 737 views
    I use IronFox all the time. For me, almost nothing is broken. Once a year I find one low-value site that I have to load in Cromite to see what it is, and then I never use that trash site again. In other words, IronFox fulfills 100% of my browsing needs excellently. I used Mull before IronFox, and my experience there was excellent as well. There is no good reason to use Chrome today, or even some years back when Mull was the thing.
  • 43 votes, 2 posts, 21 views
    From the same source, Blacklight is really good (https://themarkup.org/series/blacklight). Blacklight is a real-time website privacy inspector: enter the address of any website, and it will scan it and reveal the specific user-tracking technologies on the site, so you can see what's happening on a site before you visit it.
  • The British jet engine that failed in the 'Valley of Death'

    Technology, 40 votes, 16 posts, 84 views
    Giving up advancements in science and technology is stagnation. That's not what I'm suggesting. I'm suggesting giving up some particular, potential advancements in science and technology, which is a whole different kettle of fish and does not imply stagnation. Thinking it’s a good idea to not do anything until people are fed and housed is stagnation. Why do you think that?
  • Where do I install this NVMe drive on my laptop?

    Technology, 18 votes, 19 posts, 91 views
    ??? The thing is on the right side of the pic. Your image is upside down. Edit: oh, duh, the two horizontal slots. I'm a dummy. Sorry.
  • Microsoft's AI Secretly Copying All Your Private Messages

    Technology, 0 votes, 4 posts, 33 views
    Forgive me for not explaining better. Here are the terms potentially needing explanation.
    Provisioning is, in this case, initial system setup - the kind of stuff you would do manually after a fresh install, but it usually implies a regimented and repeatable process.
    Virtual Machine (VM) snapshots are like a save state in a game, and are often used to reset a virtual machine to a particular known-working condition.
    Preboot Execution Environment (PXE, aka ‘network boot’) is a network adapter feature that lets you boot a physical machine from a hosted network image rather than the usual installation on locally attached storage. It’s probably tucked away in your BIOS settings, but many computers have the feature since it’s a common requirement in commercial deployments. As with the VM snapshot described above, a PXE image is typically a known-working state that resets on each boot.
    Non-virtualized means not using hardware virtualization; I meant specifically not running inside a virtual machine.
    Local-only means without a network, or just not booting from a network-hosted image.
    Telemetry refers to data-collecting functionality. Most software has it. Windows has a lot. Telemetry isn’t necessarily bad, since it can, for example, help reveal and resolve bugs and usability problems, but it is easily (and has often been) abused by data-hungry corporations like MS, so disabling it is an advisable precaution.
    MS = Microsoft. OSS = open-source software.
    Group policies are administrative settings in Windows that control standards (for stuff like security, power management, licensing, file system and settings access, etc.) for user groups on a machine or network. Most users stick with the defaults, but you can edit these yourself for a greater degree of control.
    Docker lets you run software inside “containers” to isolate it from the rest of the environment, exposing and/or virtualizing just the resources it needs to run, and Compose is a related tool for defining one or more of these containers, how they interact, etc. To my knowledge there is no one-to-one equivalent for Windows.
    Obviously, many of these concepts relate to IT work, as do the use cases I had in mind, but the software is simple enough for the average user if you just pick one of the premade playbooks. (The Atlas playbook is popular among gamers, for example.)
    Edit: added explanations for Docker and telemetry.
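
    To make the container idea above a bit more concrete, here is a minimal sketch using the Python Docker SDK (docker-py); it assumes the `docker` package is installed and a Docker daemon is running, and the alpine image is just an example:

    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Run a short-lived, isolated container; remove=True cleans it up after it exits,
    # so nothing from the run persists on the host.
    output = client.containers.run("alpine:latest", ["echo", "hello from a container"], remove=True)
    print(output.decode().strip())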
  • 0 votes, 2 posts, 23 views
    It's a shame. AI has potential, but most people just want to exploit its development for their own gain.
  • You Can't Look at Porn on Any Reddit Third-Party App Now

    Technology, 1 vote, 2 posts, 22 views
    Third-party apps were still working?