
We need to stop pretending AI is intelligent

Technology
  • What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

    You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

  • Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

    And they made the programs you seem to trust so much.

  • I think your argument is a bit beside the point.

    The first issue is that intelligence isn't well-defined at all. Without a clear definition, we can't say whether something is intelligent, and even though we as a species have tried to come up with a definition of intelligence for centuries, there still isn't a well-defined one.

    But the actual question here isn't "Can AI serve information?", it's whether AI is an intelligence. And LLMs are not. They are not beings, they don't evolve, they don't experience.

    For example, LLMs don't have a memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as input. It's like talking to a person with dementia, except you hand them a transcript of the conversation so they can look up everything either of you has said so far.

    The LLM itself can't change due to the conversation you are having with it. It can't learn, it can't experience, it can't change.

    All that is done in a separate training step, where essentially a new LLM is generated.
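
    A minimal sketch of that mechanism (the function name query_llm and the message format are illustrative stand-ins for whatever chat-completion API is actually used, not any specific vendor's):

    ```python
    # The model itself is stateless: "memory" is just the client
    # re-sending the entire transcript with every request.

    history = []  # the transcript lives on the client, outside the model

    def query_llm(messages):
        # Stand-in for a real chat-completion call; a real client would
        # send `messages` to the model endpoint and return its reply.
        return f"(reply, given {len(messages)} prior messages of context)"

    def chat(user_message):
        history.append({"role": "user", "content": user_message})
        reply = query_llm(history)  # the FULL history goes in on every turn
        history.append({"role": "assistant", "content": reply})
        return reply

    # Nothing inside the model changes between calls to chat(); delete
    # `history` and the "conversation" is gone.
    ```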

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but I mean trivial in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there, because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is this: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to a completely artificial being that can experience reality. We clearly are on that path; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

  • AI models are trained on basically the entirety of the internet, and more. Humans learn to speak from much less input. So there's likely a huge difference in how human brains and LLMs work.

    It doesn't take the entirety of the internet just for an LLM to respond in English; it could do so with far less. But it also has the entirety of the internet, which arguably makes it superior to a human in breadth of information.

  • I've been thinking this for a while. When people say "AI isn't really that smart, it's just doing pattern recognition", all I can think is: don't you realize that's one of the most commonly cited traits of the human mind? Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the "face pattern". Humans are at least 90% regurgitating previous data. It's literally why you're supposed to read to and interact with babies so much. It's how you learn that the red glowy thing is hot. It's why education and access to knowledge are so important. It's every annoying person with endless "did you know?" facts. Science is literally "look at previous data, iterate a little bit, look at new data".

    None of what AI is doing is truly novel or different. But we've placed the human mind on a pedestal despite all the evidence to the contrary: eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to... our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as "intelligence". We're a bunch of instincts in a trenchcoat. To think AI isn't or can't reach our level is just hubris, a trait that's probably more unique to humans.

    Yep, we are on the same page. At our best, we can reach higher than regurgitating patterns; I'm talking about things like the scientific method and everything we've learned through it. But still, that's at best a 5% minority of what's going on between human ears.

  • Self-driving is only safer than people in absolutely pristine road conditions, with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self-driving becomes significantly more dangerous than a human driver.

    Yes, of course edge and corner cases are going to take much longer to train on, because they don't occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile, humans continue to be born and must be trained up individually, and they continue to make stupid mistakes like not using their signals or checking their mirrors.

    Humans CAN handle cases that AI doesn't know how to handle yet, but humans often fail in inclement weather, around construction, etc.

  • Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different from AI

    I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

  • No idea why you're getting downvoted. People here don't seem to understand even the simplest concepts of consciousness.

    I guess it wasn't super relevant to the prior comment, which was focused more on AI embodiment. Eh, it's just numbers anyway, no sweat off my back. Appreciate you, though!

  • If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), theirs is somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but I mean trivial in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there, because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is this: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to a completely artificial being that can experience reality. We clearly are on that path; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

    What is kinda stupid is not understanding how LLMs work, not understanding what the inherent limitations of LLMs are, not understanding what intelligence is, not understanding the difference between an algorithm and intelligence, not understanding the difference between imitating something and being something, claiming to "perfectly" understand all sorts of issues surrounding LLMs and then choosing to just ignore them, and then still thinking you have enough of a point to call other people in the discussion "kind of stupid".

  • But will you do it 24/7/365?

    I don't have anything else going on, man

  • You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

    Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

  • Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

    You don't think that's already happening, considering the ties between Sam Altman and Peter Thiel?

    I do, but I was thinking of 1984-level control of reality.

  • Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

    Haha, wtf are you talking about? You have no idea what generation I am, you don't know how old I am, and I never said there is nothing new under the sun.

  • Haha, wtf are you talking about? You have no idea what generation I am, you don't know how old I am, and I never said there is nothing new under the sun.

    I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired-ass quote and your whole way of thinking was put into words by someone for the first time.

  • I think self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving is kind of dumb, but it's at least consistently paying attention, and it literally has eyes in the back of its head.

    However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need human attention to cover the more anomalous scenarios that foul self-driving.

    Anomalous scenarios like a giant flashing school bus? 😄

  • I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired-ass quote and your whole way of thinking was put into words by someone for the first time.

    Well, you are doing a poor job of it, and you're bringing an unnecessary amount of heat to an otherwise civil discussion.

  • Anomalous scenarios like a giant flashing school bus? 😄

    Yes, as common as that is, in the scheme of driving it is relatively anomalous.

    By hours in the car, most time is spent on a freeway, driving between two lines either at cruising speed or in a traffic jam. That's the most mind-numbing stuff for a human, and pretty comfortably in the wheelhouse of these systems.

    Once you are dealing with pedestrians, signs, intersections, etc., all of those, despite being "common", are anomalous enough to be dramatically more tricky for these systems.

  • Well, you are doing a poor job of it, and you're bringing an unnecessary amount of heat to an otherwise civil discussion.

    That's right. If you cannot win the argument, the next best thing is to call for civility.

  • Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself takes transportation or energy to produce.

    Your brain runs on sugar. Do you take into account the energy spent on coal mining, oil field exploration, refining, transportation, and transmission losses when computing the amount of energy required to build and run AI? Do you take into account all the energy consumed to produce the knowledge that trains your model in the first place?
    Running the brain alone is much less energy-intensive than running an AI model. And the brain can create genuinely new content and knowledge. There is nothing like the brain. AI excels at processing large amounts of data, which the brain is not made for.
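
    For rough scale, a back-of-envelope sketch (the ~20 W figure for the brain and ~700 W for a single datacenter GPU are commonly cited ballpark numbers, not measurements from this thread):

    ```python
    # Back-of-envelope comparison of steady-state power draw.
    BRAIN_W = 20    # human brain: roughly 20 W (commonly cited ballpark)
    GPU_W = 700     # one datacenter GPU at full load (ballpark)

    brain_kwh_day = BRAIN_W * 24 / 1000   # ~0.48 kWh per day
    gpu_kwh_day = GPU_W * 24 / 1000       # ~16.8 kWh per day

    print(f"brain:   {brain_kwh_day:.2f} kWh/day")
    print(f"one GPU: {gpu_kwh_day:.1f} kWh/day "
          f"(~{gpu_kwh_day / brain_kwh_day:.0f}x the brain)")
    # Training clusters run thousands of such GPUs for weeks, before
    # counting cooling and the upstream losses mentioned above.
    ```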

  • At least in my car, the lane-following system (not the lane-keeping one) is handy, because the steering wheel naturally tends to go where it should and I'm less often "fighting" the tendency to center. The lane-keeping system is, for me, largely a non-event. If I use my turn signal, it lets me cross the lane without objecting. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. Mine has fared surprisingly well even where the lane markings are all kind of jacked up due to temporary changes for construction. If it's off, my arms just have to exert a bit more effort to end up in the same place I was going anyway. Generally no passenger notices when the system engages or disengages, except for the chime when it switches over to unaided operation.

    So at least my experience has been a positive one: it strikes the balance between intervention and human attention just right, including monitoring my gaze to make sure I am looking where I should. However, there are people who test "how long can I keep my hands off the steering wheel", which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized 'overhead' view of your car.

    The rental cars I have driven with lane-keeping functions have all been too aggressive or too easily fooled by visual anomalies on the road for me to feel like I'm getting any help. My wife comments on how jerky the car's driving is when we have those systems. I don't feel like it's dangerous, and if I were falling asleep or something it could be helpful, but in 40+ years of driving I've had "falling asleep at the wheel" problems maybe 3 times; not something I need constant help for.

  • French city of Lyon ditching Microsoft for FOSS

    Technology · 492 votes · 17 posts · 11 views
    The important thing is that the doomsday device runs Linux
  • Firefox 140 Brings Tab Unload, Custom Search & New ESR

    Technology · 234 votes · 41 posts · 24 views
    Read again. I quoted something along the lines of "just as much a development decision as a marketing one", and I said it wasn't a development decision, so what's left? "Firefox released just as frequently before, they just didn't increase the major version that often." This does not appear to be true. Why don't you take a look at the version history instead of some marketing blog post? https://www.mozilla.org/en-US/firefox/releases/
    Version 2 had 20 releases within 730 days, averaging one release every 36.5 days. Version 3 had 19 releases within 622 days, averaging one every 32.7 days. But those releases were unscheduled: they shipped when they were done. Now releases are on a fixed 90-day schedule, whether or not anything worthwhile is complete, plus hotfix releases whenever necessary. That's not faster, just scheduled, and the major version is incremented even when no major change is included. That's what the blog post was alluding to.
    In the before times, a major version increase indicated major changes. Now it doesn't, which means sysadmins still need to treat every release as a major release even if it doesn't contain major changes, because it might, and the version number says nothing about whether it does. It's nothing but a marketing change, from "version numbering means something" to "big number go up".
  • 112 votes · 2 posts · 7 views
    "...the ruling stopped short of ordering the government to recover past messages that may already have been lost." How would somebody be meant to comply with an order to recover a message that has already been deleted? Or is that the point? Can't comply, and you're in contempt of court.
  • 365 votes · 198 posts · 20 views
    Okay, but we were talking about BTC pump-and-dumps, and performing one on a massive scale, one that dwarfs any stock ticker below the top 5 by hundreds of billions of dollars, while somehow completely eluding the people who watch the blockchain like hawks for big movers... it's just not feasible. You would have to be much richer than the officially richest man on earth and have almost all of your assets liquid, and on top of that you would need millions of wallets acting asynchronously. And why would you even bother? If you're that rich you could just not hide it.
  • 172 votes · 71 posts · 40 views
    cole@lemdro.id
    they all burn up, that article does not dispute that
  • Elon Musk’s Neuralink raises fresh cash at $9B valuation

    Technology · 12 votes · 15 posts · 10 views
    bizzle@lemmy.world
    I'd rather die than let Elon Musk put shit in my brain.
  • Backblaze Drive Stats for Q1 2025

    Technology · 49 votes · 1 post · 6 views
    No one has replied yet.
  • 342 votes · 43 posts · 33 views
    Highly recommend using containerized torrents through a VPN. I have transmission and openvpn containers; when the network goes down, transmission can't connect, since it's networked through the ovpn container. Once the VPN is restored, everything restarts and resumes where it left off. Ever since I've had this setup running, I haven't had a nastygram sent to me.
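
    A minimal sketch of that kind of setup as a docker-compose file (image names, paths, and the service name "vpn" are illustrative assumptions; any OpenVPN client image plus your provider's config would do):

    ```yaml
    services:
      vpn:
        image: dperson/openvpn-client          # illustrative OpenVPN client image
        cap_add:
          - NET_ADMIN                          # needed to create the tunnel device
        devices:
          - /dev/net/tun
        volumes:
          - ./vpn:/vpn                         # provider .ovpn config and credentials
        restart: unless-stopped

      transmission:
        image: lscr.io/linuxserver/transmission
        network_mode: "service:vpn"            # route ALL transmission traffic through
                                               # the vpn container; if the VPN drops,
                                               # transmission simply has no network
        depends_on:
          - vpn
        volumes:
          - ./downloads:/downloads
        restart: unless-stopped
    ```

    The key line is network_mode: "service:vpn": the torrent container shares the VPN container's network namespace, so its traffic cannot leak outside the tunnel, which matches the failure mode described above.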