
We need to stop pretending AI is intelligent

Technology
  • Am I… AI? I do use ellipses and (what I now see are) en dashes for punctuation. Mainly because they are longer than hyphens and look better in a sentence. An em dash looks too long.

    However, that's on my phone. On a normal keyboard I use 3 periods and 2 hyphens instead.

    I've been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don't use it for ellipsis because they're not visually clearer nor shorter to type.
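    (For anyone curious: on X11-based Linux desktops, one way to get a Compose key is via setxkbmap. The option below assumes the standard xkeyboard-config package is installed; many desktops expose the same setting graphically.)

```shell
# Map Right Alt as the Compose key (assumes X11 + xkeyboard-config):
setxkbmap -option compose:ralt

# Then, for example:
#   Compose - - -   produces   — (em dash)
#   Compose - - .   produces   – (en dash)
#   Compose < -     produces   ← (left arrow)
```

    This is a session-level config tweak, so it resets on logout unless you put it in your startup files.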

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

    I agreed with most of what you said, except the part where you say that real AI is impossible because it's bodiless or "does not experience hunger" and other stuff. That part does not compute.

    A general AI does not need to be conscious.
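    The quoted "guesses which word will come next" claim is easy to illustrate. Here's a toy sketch (corpus and names invented for the example) of the idea stripped down to a bigram model — real LLMs use neural networks over tokens, but the sampling step has the same shape:

```python
import random

# Tiny made-up corpus; a real model trains on oceans of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def next_word(word):
    """Pick a continuation purely from observed frequencies."""
    return random.choice(following[word])

# next_word("cat") returns "sat" or "ate" -- pure probability, no understanding.
print(next_word("cat"))
```

    The probabilities come entirely from counting; nothing in the model "knows" what a cat is.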

  • I don't think the term AI has been used in a vague way; it's that there's a huge disconnect between how the technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.

    Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it's a bunny with a costume on. LLMs on a technical level fit this definition.

    The other definition is man-made. Artificial diamonds are a great example of this: they're still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory, made by adult workers rather than child slave labor.

    My pet theory is that science fiction got the general populace to think of artificial intelligence using the "man-made" definition instead of the "fake" definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.

    Dafuq? Artificial always means man-made.

    Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It's a fake worm. Is it "artificial"? Nope. Not man made.

  • LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn't AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.

    Huh? Since when an AI's purpose is to "imitate human behavior"? AI is about solving problems.

  • I've been getting into the habit of also using em/en dashes on the computer through the Compose key. Very convenient for typing arrows, inequality and other math signs, etc. I don't use it for ellipsis because they're not visually clearer nor shorter to type.

    Compose key?

  • My language doesn't really have hyphenated words or different dashes. It's mostly punctuation within a sentence. As such there are almost no cases where one encounters a dash without spaces.

    What language is this?

    Yours didn't, and I read it just fine.

    That's irrelevant. That's like saying you shouldn't complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.

  • Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

    I wonder how different it'll be in 500 years.

    I'd agree with you if I saw "hi's" and "her's" in the wild, but nope. I still haven't seen someone write "that car is her's".

    Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.

    Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across is more important than following some arbitrary rules.

    Those rules become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I'm saying that as if it's a new thing, but it does feel recent to be taught that side of English, rather than just "The Queen's(/King's) English" as the style to strive for in writing and formal communication.

    I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don't have a specific science to this.

    I understand that languages evolve, but for now, writing "it's" when you meant "its" is a grammatical error.

  • That's irrelevant. That's like saying you shouldn't complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.

    Are you really comparing my response to the tone of a minor grammar correction with someone brushing off nearly killing somebody?

  • The machinery needed for human thought is certainly a part of AI. At most you can only claim it's not intelligent because intelligence is a specifically human trait.

    Tell that to the crows and chimps that know how to solve novel problems.

  • Huh? Since when an AI's purpose is to "imitate human behavior"? AI is about solving problems.

    It is and it isn't. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.

    Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

  • Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

    Oh I'm aware of the potential pitfalls but it's something I'm willing to risk to stick it to insurance. I wouldn't even carry it if it wasn't required by law. I have the funds to cover what they would cover.

  • I’m still sad about that dot. 😥

    The dot does not care. It can't even care. It doesn't even know it exists. It can't know shit.

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. […]

    The other thing that most people don't focus on is how we train LLMs.

    We're basically building something like a spider tailed viper. A spider tailed viper is a kind of snake that has a growth on its tail that looks a lot like a spider. It wiggles it around so it looks like a spider, convincing birds they've found a snack, and when the bird gets close enough the snake strikes and eats the bird.

    Now, I'm not saying we're building something that is designed to kill us. But, I am saying that we're putting enormous effort into building something that can fool us into thinking it's intelligent. We're not trying to build something that can do something intelligent. We're instead trying to build something that mimics intelligence.

    What we're effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What's crazy about that is that we're not building this to fool a predator so that we're not in danger. We're not doing it to fool prey, so we can catch and eat them more easily. We're doing it so we can fool ourselves.

    It's like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn't work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn't intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

  • It very much isn't and that's extremely technically wrong on many, many levels.

    Yet still one of the higher up voted comments here.

    Which says a lot.

    I'll be pedantic, but yeah: it's all transistors all the way down, and transistors are pretty much chained if/then switches.
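    To make the "chained if/then switches" point concrete, here's a minimal illustrative sketch: a NAND gate written as a conditional, from which every other gate — and in principle a whole computer — can be composed:

```python
# "Chained if/then switches": logic gates as conditionals.
# NAND alone is functionally complete -- every other gate can be built from it.
def nand(a, b):
    if a and b:
        return False
    return True

def and_(a, b):   # AND = NAND fed into itself
    return nand(nand(a, b), nand(a, b))

def or_(a, b):    # OR = NAND of the two inverted inputs
    return nand(nand(a, a), nand(b, b))

print(and_(True, True), or_(False, False))  # prints: True False
```

    Stack enough of these switches and you get arithmetic, memory, and eventually a neural network — which is the pedantic sense in which it's "just if statements".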

  • Oh I'm aware of the potential pitfalls but it's something I'm willing to risk to stick it to insurance. I wouldn't even carry it if it wasn't required by law. I have the funds to cover what they would cover.

    If you have the funds you could self-insure. You'd need to look up the details for your jurisdiction, but the gist of it is that you keep the required coverage amount in an account that you never touch until you need to pay out.

  • My auto correct doesn't care.

    So you trust your SLM more than your fellow humans?

  • So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

  • Calling these new LLMs just if statements is quite an oversimplification. They are technically something that has not existed before, and they enable use cases that were previously impossible to implement.

    This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible 5 years ago.

    5 years ago I would have laughed in your face if you had suggested I could write code that summarizes a description input by the user. Now I laugh: give me your wallet, because I need to call an API or buy a few GPUs.

    I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.
