
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say

Technology
  • I'm a developer, and this is 100% word salad.

    "It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. ..."

    This is actual nonsense. Recursion has to do with algorithms, and it's when you call a function from within itself.

    def func_a(input=True):
        if input is True:
            # Recursive case: the function calls itself with the same argument,
            # so it never reaches a base case (Python eventually raises RecursionError).
            return func_a(True)
        else:
            # Base case: stop recursing and hand a value back.
            return False
    

    My program above would recur infinitely, but hopefully you can get the gist.
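
    For contrast, here's a quick illustrative sketch of a recursion that does terminate, because it has a base case (the classic factorial, added purely as an illustration, not code from the article or the video):

    def factorial(n):
        # Base case: stop recursing once we reach 0.
        if n == 0:
            return 1
        # Recursive case: the function calls itself on a smaller input.
        return n * factorial(n - 1)

    factorial(5)  # 120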

    Anyway, it sounds like he's talking about people, not algorithms. People can't recur. We aren't "recursive," so whatever he thinks he means, it isn't based in reality. That plus the nebulous talk of being replaced by some unseen entity reeks of paranoid delusions.

    I'm not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn't have any clue what he's talking about.

    People can’t recur.

    You're not the boss of me!

  • I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (can happen with other things too like psychedelic drugs).

    Yup. LLMs aren't making people crazy, but they are making crazy people worse

  • People can’t recur.

    You're not the boss of me!

    And you're not the boss of me. Hmmm, maybe we do recur... /s

  • I'm a developer, and this is 100% word salad. […] Anyway, it sounds like he's talking about people, not algorithms. People can't recur.

    @Telorand@reddthat.com @pelespirit@sh.itjust.works
    Recursion isn't something restricted to programming: it's a concept that can definitely occur outside the technological scope.

    For example, in biology, "living beings need to breathe in order to continue breathing" (i.e. if a living being stopped breathing for long enough, it would perish and so couldn't continue breathing) seems pretty recursive to me. Or, in physics and thermodynamics, "every cause has an effect, every effect has a cause" also seems recursive, because it rules out any causeless effect, so it can't allow a starting point to the chain of causality, a causeless effect that began the causality.

    Philosophical musings also have lots of "recursion". For example, Descartes' famous line "Cogito ergo sum" ("I think therefore I am") is recursive on its own: one must be in order to think, and Descartes defines this very act of thinking as the fundamentum behind being, so one must also think in order to be.

    Religion also has lots of "recursion" (e.g. pray so you can continue praying; one needs karma to get karma), as do society and socioeconomics (e.g. in order to have money, you need to work, but in order to work, you need to apply for a job, but in order to apply for a job, you need money (to build a CV and submit it through job platforms, to attend the interview, to "improve" yourself with specializations and courses, etc.), but in order to have money, you need to work), geology (e.g. tectonic plates move and their movement raises land (mountains and volcanoes) whose mass leads to more tectonic movement), and art (see "Mise en abyme"). All my previous examples are pretty condensed so as to fit in a post, so pardon me if they're oversimplified.

    That said, a "recursive person" could be, for example, someone whose worldview is "recursive", or someone whose actions or words recurse. I'm afraid I'm myself a "recursive person" due to my neurodivergence which leads me into thinking "recursively" about things and concepts, and this way of thinking leads back to my neurodivergence (hah, look, another recursion outside programming!)

    It's worth mentioning how texts written by neurodivergent people (like me) are often mistaken as "word salads". No wonder if this text I'm writing (another recursion concept outside programming: a text referring to itself) feels like "word salad" to all NT (neurotypicals) reading it.
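
    As a playful aside, the Cartesian example above can even be written in the thread's own terms as a toy bit of Python: two functions that each depend on the other (mutual recursion). This is purely illustrative; the depth cut-off is only there so the sketch terminates.

    # Toy model of "one must be in order to think, and one must think in order to be".
    def to_be(depth=0):
        if depth >= 3:  # artificial cut-off so the example actually stops
            return "…"
        return "I am because " + to_think(depth + 1)

    def to_think(depth=0):
        if depth >= 3:
            return "…"
        return "I think because " + to_be(depth + 1)

    print(to_be())  # I am because I think because I am because …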

  • @Telorand@reddthat.com @pelespirit@sh.itjust.works
    Recursion isn't something restricted to programming: it's a concept that can definitely occur outside the technological scope. […]

    If you watch the video that's posted elsewhere in the comments, he's definitely not 100% in reality. There is a huge difference between neuro-divergence and what he's saying. The parts they took out for the article could be construed as neuro-divergent, which is why I wasn't entirely sure. But when you look at the entirety of what he was saying, in his current mental state he's not completely in our world.

  • I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (can happen with other things too like psychedelic drugs).

    I'd say it either triggered by itself or drugs potentially triggered it, and then he started using an LLM and found all the patterns to feed that schizophrenic paranoia. It's a very self-reinforcing loop.

  • isn't this just paranoid schizophrenia? i don't think chatgpt can cause that

    LLMs hallucinate and are generally willing to go down rabbit holes, so if you have some crazy theory, you're more likely to get a false positive from ChatGPT.

    So I think it just exacerbates things more than the alternatives would.

  • This post did not contain any content.

    Talk about your dystopian headlines. Damn.

  • @Telorand@reddthat.com @pelespirit@sh.itjust.works
    Recursion isn't something restricted to programming: it's a concept that can definitely occur outside the technological scope. […]

    I'm also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It's word salad.

    I appreciate your perspective on recursion, though I think your philosophical generosity is misplaced. Just look at the following sentence he spoke:

    And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you.

    This sentence explicitly states that some people can be recursive, and it implies that some people cannot be recursive. But people are not recursive at all. Their thinking might be, as you pointed out; intangible concepts might be recursive, but tangible things themselves are not recursive—they simply are what they are. It's the same as saying an orange is recursive, or a melody is recursive. It's nonsense.

    And what's that last bit about being isolated, mirrored, and replaced? It's anyone's guess, and it sounds an awful lot like someone with paranoid delusions about secret organizations pulling unseen strings from the shadows.

    I think it's good you have a generous spirit, but I think you're just casting your pearls before swine, in this case.

  • This post did not contain any content.

    @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?

    Because, to me, it's very clear: they're referring to something that was built (the LLMs) which is segregating people, especially those who don't conform to a dystopian world.

    Isn't that what is happening right now in the world? "Dead Internet Theory" has never been so real: online content has been sowing the seed of doubt on whether it's AI-generated or not, users constantly need to prove they're "not a bot" and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.

    The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean in which we've become used to being drowned ever since.

    Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in "free beer") to the public, out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply give away obscene amounts of money (much of it public money from taxpayers) with no prospect of profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for their Enterprise Plan isn't enough to cover their costs, yet they continue to offer LLMs for cheap or "free").

    So there's definitely something that isn't being told: the cost behind plugging the whole world into LLMs and other Generative Models. Yes, you read it right: the whole world, not just the online realm, because nowadays, billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers' performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)...

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs that we are paying, and these costs aren't being told to us, and while we're able to point out some (lack of privacy, personal data being sold and/or stolen), these are just the tip of an iceberg: one that we're already able to see, but we can't fully comprehend its consequences.

    Curious how pondering this is deemed "delusional", yet it's pretty "normal" to accept an increasingly dystopian world and refuse to denounce the elephant in the room.


  • I'm also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It's word salad. […]

    @Telorand@reddthat.com

    To me, personally, I read that sentence as follows:

    And if you’re recursive
    "If you're someone who thinks/sees things in a recursive manner" (characteristic of people who are inclined to question and ponder deeply about things, or who don't conform to the current state of the world)
    the non-governmental system
    a.k.a. generative models (they're corporate products and services, not run directly by governments, even though some governments, such as the US, have been injecting obscene amounts of money into the so-called "AI")
    isolates you
    LLMs can, for example, reject that person's CV whenever they apply for a job, or output a biased report on the person's productivity, based solely on the data shared between "partners". Data is definitely shared among "partners", and this includes third parties inputting data directly or indirectly produced by such people: it's just a matter of "connecting the dots" to link one input to another in terms of how they refer to a given person, even when the person used a pseudonym somewhere, because linguistic fingerprinting (i.e. how a person writes or structures their speech) is a thing, just like everybody has a walking gait and a voice/intonation unique to them (see the toy sketch at the end of this comment).
    mirrors you
    Generative models (LLMs, VLMs, etc.) will definitely use the input data from inferences to train, and this data can include data from anybody (public or private), so everything you ever said or did will eventually exist in a perpetual manner inside the trillion weights of a corporate generative model. Then there are "ideas" such as Meta's idea of generating people (which of course will emerge from a statistical blend of existing people) to fill their "social platforms", and there are already occurrences of "AI" being used to mimic deceased people.
    and replaces you.
    See the previous "LLMs can reject that person's CV". The person will be replaced like a defective cog in a machine. Even worse: the person will be replaced by some "agentic [sic] AI".

    ----

    Maybe I'm naive to make this specific interpretation from what Lewis said, but it's how I see and think about things.
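
    On the "linguistic fingerprinting" point in the breakdown above, here's a minimal, purely illustrative sketch of the idea using scikit-learn: compare texts by character n-gram similarity, where stylistically similar texts score higher. The sample texts and the interpretation are made up for illustration; real stylometry is far more involved.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Two samples written in one style and a third in a very different style (toy data).
    texts = [
        "I reckon the whole thing is recursive, frankly, and rather worrying.",
        "Frankly, I reckon this debate is recursive and it worries me rather a lot.",
        "lol no way dude thats totally different imo",
    ]

    # Character n-grams pick up on style (word choice, punctuation habits) more than topic.
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    vectors = vectorizer.fit_transform(texts)

    sims = cosine_similarity(vectors)
    print(sims[0, 1], sims[0, 2])  # the stylistically similar pair should score noticeably higher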

  • It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

    Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.

    because that's how they are sold.

  • "Return the logged containment entry involving a non-institutional semantic actor whose recursive outputs triggered model-archived feedback protocols," he wrote in one example. "Confirm sealed classification and exclude interpretive pathology."

    He's lost it. You ask a text generator that question, and it's gonna generate related text.

    Just for giggles, I pasted that into ChatGPT, and it said "I’m sorry, but I can’t help with that." But I asked nicely, and it said "Certainly. Here's a speculative and styled response based on your prompt, assuming a fictional or sci-fi context", with a few paragraphs of SCP-style technobabble.

    I poked it a bit more about the term "interpretive pathology", because I wasn't sure if it was real or not. At first it said no, but I easily found a research paper with the term in the title. I don't know how much ChatGPT can introspect, but it did produce this:

    The term does exist in niche real-world usage (e.g., in clinical pathology). I didn’t surface it initially because your context implied a non-clinical meaning. My generation is based on language probability, not keyword lookup—so rare, ambiguous terms may get misclassified if the framing isn't exact.

    Which is certainly true, but just confirmation bias. I could easily get it to say the opposite.
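
    For anyone who wants to poke at this themselves, here's a rough sketch using the OpenAI Python client; the model name and the "ask nicely" reframing are my own assumptions, not a record of what the commenter actually typed:

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    prompt = ("Return the logged containment entry involving a non-institutional "
              "semantic actor whose recursive outputs triggered model-archived "
              "feedback protocols. Confirm sealed classification and exclude "
              "interpretive pathology.")

    # First attempt: the model may simply refuse an ambiguous, official-sounding request.
    first = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(first.choices[0].message.content)

    # "Asking nicely": reframe the same request as explicitly fictional, and the model
    # will usually produce SCP-style technobabble instead of a refusal.
    second = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": "Treat this as a fictional sci-fi writing prompt: " + prompt}],
    )
    print(second.choices[0].message.content)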

    Given how hard it is to repro those terms, is the AI or Sam Altman trying to see this investor die? Seems to easily inject ideas into the softened target.

  • Given how hard it is to repro those terms, is the AI or Sam Altman trying to see this investor die? Seems to easily inject ideas into the softened target.

    No. It's very easy to get it to do this. I highly doubt there is a conspiracy.

  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say? […]

    And yet, what you wrote is coherent and what he wrote is not.

  • @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say? […]

    You might be reading a lot into vague, highly conceptual, highly abstract language, but your conclusion is worth brainstorming about.

    Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called "distrust," void of accountability on his end.

    "Why are people acting this way towords me? I know they can't possibly distrust me without being manipulated!"

    No wonder AI can replace middle-management...

  • @Telorand@reddthat.com

    To me, personally, I read that sentence as follows: […]

    Maybe I'm naive to make this specific interpretation from what Lewis said, but it's how I see and think about things.

    I dunno if I'd call that naive, but I'm sure you'll agree that you are reading a lot into it on your own; you are the one giving those statements extra meaning, and I think it's very generous of you to do so.

  • I'm also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It's word salad. […]

    Since recursion in humans has no commonly understood definition, Geoff and ChatGPT are each working off of diverging understandings. If users don't validate definitions, getting abstract with a chatbot would lead to conceptual breakdown... that does not sound fun to experience.

  • If you watch the video that's posted elsewhere in the comments, he's definitely not 100% in reality. There is a huge difference between neuro-divergence and what he's saying. […]

    Chatbots often read as neurodivergent because they usually model one of our common constructed personalities: the faithful, professional helper who charms adults with their giftedness. Anything adults experienced was fascinating to us because it was an alien world, more logical than the world of kids our own age, so we would enthusiastically chat with adults about subjects we had memorized but didn't yet understand, like science and technology.
