
We need to stop pretending AI is intelligent

Technology
  • So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

    There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

    Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

    It's a bit jarring and frankly offensive to be lectured 'at' by people who are obviously on the first step of their journey into this space.

    you and I are kindred spirits

• I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts:
    https://ai-2027.com/

    Ask AI:
    Did you mean: irresponsible
    AI Overview
    The term "unresponsible" is not a standard English word. The correct word to use when describing someone who does not take responsibility is irresponsible.

  • When you typed this response, you were acting as a probabilistic, predictive chat model. You predicted the most likely effective sequence of words to convey ideas. You did this using very different circuitry, but the underlying strategy was the same.

    I wasn't, and that wasn't my process at all. Go touch grass.

• I disagree with this notion. I think it's dangerously unresponsible to only assume AI is stupid. Everyone should also assume that with a certain probability AI can become dangerously self-aware. I recommend everyone read what Daniel Kokotajlo, a former OpenAI employee, predicts:
    https://ai-2027.com/

Yeah, they probably wouldn't think like humans or animals, but in some sense could be considered "conscious" (which isn't well-defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

    This argument seems weak to me:

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition, though temporary and limited by context as currently implemented.

I'm not in the camp where I think it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take five years or hundreds of years. I'm not convinced we are near the point where AI can significantly speed up AI research as that link suggests. That would likely result in a "singularity-like" scenario.

    I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

  • So many confident takes on AI by people who've never opened a book on the nature of sentience, free will, intelligence, philosophy of mind, brain vs mind, etc.

    There are hundreds of serious volumes on these, not to mention the plethora of casual pop science books with some of these basic thought experiments and hypotheses.

    Seems like more and more incredibly shallow articles on AI are appearing every day, which is to be expected with the rapid decline of professional journalism.

    It's a bit jarring and frankly offensive to be lectured 'at' by people who are obviously on the first step of their journey into this space.

That was my first thought too. But the author is:

    Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

  • Do you think most people reason well?

    The answer is why AI is so convincing.

    I think people are easily fooled. I mean look at the president.

  • If only there were a word, literally defined as:

    Made by humans, especially in imitation of something natural.

    Fair enough 🙂

  • Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.

    I wonder how different it'll be in 500 years.

    Would you rather use the same contraction for both? Because "its" for "it is" is an even worse break from proper grammar IMO.

  • Agreed.

When I was a kid we went to the library. If the card catalog didn't yield the book you needed, you asked the librarian. They often helped. No one sat around afterwards wondering whether the librarian was "truly intelligent".

These are tools. Tools slowly get better. If a tool makes your life easier or your work better, you'll eventually use it.

    Yes, there are woodworkers that eschew power tools but they are not typical. They have a niche market, and that's great, but it's a choice for the maker and user of their work.

    I think tools misrepresents it. It seems more like we're in the transitional stage of providing massive amounts of data for LLMs to train on, until they can eventually develop enough cognition to train themselves, automate their own processes and upgrades, and eventually replace the need for human cognition. If anything, we are the tool now.

  • Fine, *could literally be.

    The thing is, because Excel is Turing Complete, you can say this about literally anything that’s capable of running on a computer.

• Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies' websites pre-renewal to try to get the best rate, and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and it now says I'm paid in full for the six-month period. It's been days now with no follow-up... I'm pretty sure AI snuck that one through for me.

    Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.

• That was my first thought too. But the author is:

    Guillaume Thierry, Professor of Cognitive Neuroscience, Bangor University

Ever since the 20th century, there has been a diminishing expectation placed upon scientists to engage in philosophical thinking. My background is primarily in mathematics, physics, and philosophy. I can tell you from personal experience that many professional theoretical physicists spend a tremendous amount of time debating metaphysics while knowing almost nothing about it, often being totally unaware that they are even doing it. If cognitive neuroscience works anything like physics, then it's quite possible that this professor's total exposure to scholarship on the philosophy of mind was limited to one or two courses during his undergraduate studies.

  • I've never been fooled by their claims of it being intelligent.

It's basically an overly complicated series of if/then statements that try to guess the next series of inputs.

    It very much isn't and that's extremely technically wrong on many, many levels.

    Yet still one of the higher up voted comments here.

    Which says a lot.

  • Wow. So when you typed that comment you were just predicting which words would be normal in this situation? Interesting delusion, but that's not how people think. We apply reasoning processes to the situation, formulate ideas about it, and then create a series of words that express our ideas. But our ideas exist on their own, even if we never end up putting them into words or actions. That's how organic intelligence differs from a Large Language Model.

    Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

    There's a very significant difference between training on completion and the way the world model actually functions once established.
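To make that distinction concrete, here is a minimal sketch of the sampling step, with a toy vocabulary and made-up logits rather than anything from a real model. The model scores whole tokens (typically words or word pieces, not letters), and the variation between runs is plain temperature sampling over those scores:

```python
import math
import random

# Toy next-token step. Vocabulary and logits are invented for
# illustration; a real LLM scores tens of thousands of tokens per step.
vocab = ["grass", "touch", "the", "dog", "."]
logits = [2.1, 0.3, 1.4, -0.5, 0.1]  # raw model scores, one per token

def sample_next_token(logits, temperature=0.8):
    # Lower temperature sharpens the distribution; higher flattens it.
    scaled = [score / temperature for score in logits]
    # Softmax turns scores into a probability distribution.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Ordinary pseudo-random sampling; no hidden uniqueness mechanism.
    return random.choices(range(len(vocab)), weights=probs, k=1)[0]

print(vocab[sample_next_token(logits)])  # usually "grass", sometimes others
```

Training on completion shapes the function that produces those logits. Whether that function internally amounts to a world model is the actual disagreement, and the sketch above is silent on it.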

  • And excessive use of em-dashes, which is the first thing I look for. He does say he uses LLMs a lot.

    "…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

  • Are you under the impression that language models are just guessing "what letter comes next in this sequence of letters"?

    There's a very significant difference between training on completion and the way the world model actually functions once established.

No dude, I'm not under that impression, and I'm not going to take a quiz from you to prove I understand how LLMs work. I'm fine with you not agreeing with me.

• GPT-2 was literally an Excel spreadsheet.

    I guesstimate that it's effectively a supermassive autocomplete algo that uses some TOTP-like factor to help it produce "unique" output every time.

And they're running into issues due to increasingly ingesting AI-generated data (a toy sketch of that feedback loop follows this comment).

    Get your popcorn out! 🍿

    You're an idiot lmfao
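Name-calling aside, the point above about ingesting AI-generated data is a real, studied failure mode, often called model collapse. Here is a toy simulation of the feedback loop under a big simplifying assumption: a Gaussian stands in for the model, and "training" is just fitting a mean and standard deviation to the previous generation's output:

```python
import random
import statistics

# Toy "model collapse" loop, purely illustrative. Each generation fits
# a Gaussian to the previous generation's samples, then emits the next
# generation's training data from that fit.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(50)]  # original human data

for generation in range(1, 11):
    mu = statistics.fmean(data)    # "train" on current data
    sigma = statistics.stdev(data)
    # The next generation sees only the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(50)]
    print(f"gen {generation:2d}: mean {mu:+.3f}, std {sigma:.3f}")

# Fitting error compounds each round; over many generations the fitted
# spread tends to shrink, so rare "tail" outputs stop being produced.
```

Real pipelines mix in fresh human data and heavy filtering, which should slow this down, but the feedback loop being pointed at is the same.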

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

    I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it... AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

  • "…" (Unicode U+2026 Horizontal Ellipsis) instead of "..." (three full stops), and using them unnecessarily, is another thing I rarely see from humans.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character. I might be wrong on this one.

Edit: Huh. Lemmy automatically changed my three full stops to the Unicode character.

    Not on my phone it didn't. It looks as you intended it.

• Yeah, they probably wouldn't think like humans or animals, but in some sense could be considered "conscious" (which isn't well-defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

    This argument seems weak to me:

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

You can emulate inputs and simplified versions of hormone systems. "Reasoning" models can kind of be thought of as cognition, though temporary and limited by context as currently implemented.

I'm not in the camp where I think it's impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take five years or hundreds of years. I'm not convinced we are near the point where AI can significantly speed up AI research as that link suggests. That would likely result in a "singularity-like" scenario.

    I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

    Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

You don't think that's already happening, considering the ties between Sam Altman and Peter Thiel?