
We need to stop pretending AI is intelligent

Technology
  • I'm talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

    I think that then we actually agree.

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which token (a word or word fragment) will come next in the sequence, based on the data it’s been trained on.
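    A toy sketch makes the "guessing what comes next" claim concrete. This is a bigram word model, not how real LLMs are built (they predict tokens with learned neural-network weights over a huge corpus), but the core step, sampling the next item from a probability distribution conditioned on what came before, is the same idea; the tiny corpus is purely illustrative.

```python
import random
from collections import defaultdict, Counter

# Toy illustration: a bigram model that "guesses which word will come next"
# purely from frequencies observed in its training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    if not options:  # word never seen with a successor (end of corpus)
        return None
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" by repeatedly guessing the next word.
word = "the"
sentence = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

    Note that nothing here "understands" cats or mats; the model only reproduces the statistics of its training data, which is exactly the point being made above.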

    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

    It is intelligent and deductive, but it is not cognitive or even dependable.

  • What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

    You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

  • Ya of course I do. Humans are the most unreliable slick disgusting diseased morally inept living organisms on the planet.

    And they made the programs you seem to trust so much.

  • I think your argument is a bit beside the point.

    The first issue is that intelligence isn't well defined at all. Without a clear definition, we can't say whether something is intelligent, and even though we as a species have tried for centuries to come up with one, there still isn't a rigorous definition.

    But the actual question here isn't "Can AI serve information?"; it's "Is AI an intelligence?" And LLMs are not. They are not beings; they don't evolve, they don't experience.

    For example, LLMs don't have a memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It's like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

    The LLM itself can't change as a result of the conversation you are having with it. It can't learn, it can't experience, it can't change.

    All that is done in a separate training step, where essentially a new LLM is generated.
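    The statelessness described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: `generate_reply` is a hypothetical stand-in for one call to a frozen model, and the transcript format loosely mirrors common chat APIs.

```python
# Sketch of why a chat seems to "remember": the model is frozen and stateless,
# so the chat application fakes memory by resending the entire transcript on
# every turn. `generate_reply` is a hypothetical stand-in for one LLM call.
def generate_reply(transcript):
    # A frozen model: its output depends only on the input handed to it right
    # now; nothing inside it changes between calls.
    return f"(a reply conditioned on {len(transcript)} prior messages)"

history = []  # the memory lives in the client application, never in the model

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = generate_reply(history)  # the whole history goes in, every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Hello")
chat("What did I just say?")  # answerable only because the transcript was resent
```

    Delete `history` and the "conversation" is gone; the model itself was never any different after the chat than before it.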

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. Recall that we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, and that because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. "Trivial" only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task in itself. The fact that we can even compare a delusional model to a person with a severe mental illness is already a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to an entirely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

  • AI models are trained on basically the entirety of the internet, and more. Humans learn to speak from far less input. So there's likely a huge difference in how human brains and LLMs work.

    It doesn’t take the entirety of the internet just for an LLM to respond in English. It could do so with far less. But it also has the entirety of the internet which arguably makes it superior to a human in breadth of information.

  • I've been thinking this for a while. When people say "AI isn't really that smart, it's just doing pattern recognition", all I can think is: "don't you realize that's one of the most commonly cited traits of the human mind?" Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the "face pattern". Humans are at least 90% regurgitating previous data. It's literally why you're supposed to read and interact with babies so much. It's how you learn "red glowy thing is hot". It's why education and access to knowledge are so important. It's every annoying person with endless "did you know?" facts. Science is literally "look at previous data, iterate a little bit, look at new data".

    None of what AI is doing is truly novel or different. But we've placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to.... our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as "intelligence". We're a bunch of instincts in a trenchcoat. To think AI isn't or can't reach our level is just hubris. A trait that probably is more unique to humans.

    Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

  • Self-driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self-driving becomes significantly more dangerous than a human driver.

    Yes of course edge and corner cases are going to take much longer to train on because they don’t occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile humans continue to be born and must be trained up individually and they continue to make stupid mistakes like not using their signal and checking their mirrors.

    Humans CAN handle cases that AI doesn’t know how to, yet, but humans often fail in inclement weather, around construction, etc etc.

  • Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different from AI

    I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

  • No idea why you're getting downvoted. People here don't seem to understand even the simplest concepts of consciousness.

    I guess it wasn't super relevant to the prior comment, which was focused more on AI embodiment. Eh, it's just numbers anyway, no sweat off my back. Appreciate you, though!

  • If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. Recall that we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, and that because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. "Trivial" only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task in itself. The fact that we can even compare a delusional model to a person with a severe mental illness is already a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to an entirely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

    What is kind of stupid is not understanding how LLMs work, not understanding their inherent limitations, not understanding what intelligence is, not understanding the difference between an algorithm and intelligence, not understanding the difference between imitating something and being something, claiming to "perfectly" understand all sorts of issues surrounding LLMs and then choosing to just ignore them, and then still thinking you have enough of a point to call other people in the discussion "kind of stupid".

  • But, will you do it 24-7-365?

    i dont have anything else going on, man

  • You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

    Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

  • Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

    You don’t think that’s already happening considering how Sam Altman and Peter Thiel have ties?

    I do, but was thinking 1984-levels of control of reality.

  • Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

    Haha wtf are you talking about. You have no idea what generation I am, you don't know how old I am and I never said there is nothing new under the sun.

  • Haha wtf are you talking about. You have no idea what generation I am, you don't know how old I am and I never said there is nothing new under the sun.

    I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired ass quote and your whole way of thinking was put into words by someone for the first time.

  • I think self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving system is kind of dumb, but it's at least consistently paying attention, and it literally has eyes in the back of its head.

    However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need the human attention to cover the more anomalous scenarios that foul self driving.

    Anomalous scenarios like a giant flashing school bus? 😄

  • I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired ass quote and your whole way of thinking was put into words by someone for the first time.

    Well you are doing a poor job of it and are bringing an unnecessary amount of heat to an otherwise civil discussion

  • Anomalous scenarios like a giant flashing school bus? 😄

    Yes, as common as that is, in the scheme of driving it is relatively anomalous.

    By hours in the car, most of the time is spent on a freeway, driving between two lines either at cruising speed or in a traffic jam. That's the most mind-numbing thing for a human, and pretty comfortably in the wheelhouse of self-driving.

    Once you are dealing with pedestrians, signs, intersections, etc., all of those, despite being "common", are anomalous enough to be dramatically trickier for these systems.

  • Well you are doing a poor job of it and are bringing an unnecessary amount of heat to an otherwise civil discussion

    That's right. If you cannot win the argument the next best thing is to call for civility.
