We need to stop pretending AI is intelligent

Technology
  • What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

    You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

  • Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

    And they made the programs you seem to trust so much.

  • I think your argument is a bit beside the point.

    The first issue is that intelligence isn't well defined at all. Without a clear definition of intelligence, we can't say whether something is intelligent, and even though we as a species have tried to come up with a definition for centuries, there still isn't a settled one.

    But the actual question here isn't "Can AI serve information?" but "Is AI an intelligence?" And LLMs are not. They are not beings; they don't evolve, they don't experience.

    For example, LLMs don't have memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that, for each request, the whole chat history is fed back into the LLM as input. It's like talking to a person with severe dementia, except you hand them a transcript of the conversation so they can look up everything either of you has said so far.

    The LLM itself can't change as a result of the conversation you're having with it. It can't learn, it can't experience, it can't change.

    All that is done in a separate training step, where essentially a new LLM is generated.
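    The transcript-replay mechanism described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual API: `complete()` is a hypothetical stub standing in for a real model call, and the point is that it is a pure function with no internal state, so all "memory" lives in the history the caller resends on every turn.

```python
# Sketch of the "transcript replay" trick: the model call is stateless,
# so the chat wrapper resends the entire history on every turn.
# complete() is a hypothetical stub standing in for a real LLM API call.

def complete(prompt: str) -> str:
    # Pure function of its input: same prompt in, same reply out,
    # and nothing is remembered between calls.
    return f"[reply to a {len(prompt)}-char prompt]"

class Chat:
    def __init__(self) -> None:
        # The only "memory" lives here, outside the model.
        self.history: list[tuple[str, str]] = []

    def send(self, user_msg: str) -> str:
        self.history.append(("user", user_msg))
        # Flatten the ENTIRE conversation so far into one prompt.
        prompt = "\n".join(f"{role}: {text}" for role, text in self.history)
        reply = complete(prompt)
        self.history.append(("assistant", reply))
        return reply

chat = Chat()
chat.send("Hello")
chat.send("What did I just say?")
# The second turn's prompt contained the first exchange verbatim;
# drop the history and the "model" has no idea what was said.
```

    Notice that between the two `send()` calls nothing inside `complete()` changed; only the wrapper's `history` list grew.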

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings: that because they don't clearly match our intelligence in every aspect (even though they clearly feel, bond, dream, remember, and learn), they're somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. I say trivial only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with severe mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to achieving a completely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

  • AI models are trained on basically the entirety of the internet, and more. Humans learn to speak from much less input. So there's likely a huge difference in how human brains and LLMs work.

    It doesn't take the entirety of the internet just for an LLM to respond in English; it could do so with far less. But it also has the entirety of the internet, which arguably makes it superior to a human in breadth of information.

  • I've been thinking this for a while. When people say "AI isn't really that smart, it's just doing pattern recognition," all I can think is, "don't you realize that's one of the most commonly cited traits of the human mind?" Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the "face pattern". Humans are at least 90% regurgitating previous data. It's literally why you're supposed to read and interact with babies so much. It's how you learn "red glowy thing is hot". It's why education and access to knowledge are so important. It's every annoying person with endless "did you know?" facts. Science is literally "look at previous data, iterate a little bit, look at new data".

    None of what AI is doing is truly novel or different. But we've placed the human mind on this pedestal despite all the evidence to the contrary: eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to... our minds are incredibly fallible and really just a hodgepodge of processes masquerading as "intelligence". We're a bunch of instincts in a trenchcoat. To think AI isn't, or can't reach, our level is just hubris. A trait that is probably more unique to humans.

    Yep, we are on the same page. At our best, we can reach higher than regurgitating patterns; I'm talking about things like the scientific method and everything we've learned by it. But still, that's at best a 5% minority of what's going on between human ears.

  • Self-driving is only safer than people in absolutely pristine road conditions, with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self-driving becomes significantly more dangerous than a human driver.

    Yes, of course edge and corner cases are going to take much longer to train on, because they don't occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile, humans continue to be born and must be trained up individually, and they continue to make stupid mistakes like not using their signals or checking their mirrors.

    Humans CAN handle cases that AI doesn't know how to, yet, but humans often fail in inclement weather, around construction, and so on.

  • Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different than AI

    I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

  • No idea why you're getting downvoted. People here don't seem to understand even the simplest concepts of consciousness.

    I guess it wasn't super relevant to the prior comment, which was focused more on AI embodiment. Eh, it's just numbers anyway, no sweat off my back. Appreciate you, though!

  • If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal underpinnings of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well break down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings: that because they don't clearly match our intelligence in every aspect (even though they clearly feel, bond, dream, remember, and learn), they're somehow an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that exist mostly because of cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put in place once the hardware or the training processes improve. I say trivial only in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with severe mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to achieving a completely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

    What is kind of stupid is not understanding how LLMs work; not understanding their inherent limitations; not understanding what intelligence is; not understanding the difference between an algorithm and intelligence, or between imitating something and being something; claiming to "perfectly" understand all sorts of issues surrounding LLMs and then choosing to just ignore them; and then still thinking you have enough of a point to call other people in the discussion "kind of stupid".

  • But, will you do it 24-7-365?

    I don't have anything else going on, man

  • You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

    Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language," but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

  • Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.

    You don't think that's already happening, considering the ties between Sam Altman and Peter Thiel?

    I do, but I was thinking of 1984-level control of reality.

  • Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language," but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.

    Haha, wtf are you talking about? You have no idea what generation I am, you don't know how old I am, and I never said there is nothing new under the sun.

  • Haha, wtf are you talking about? You have no idea what generation I am, you don't know how old I am, and I never said there is nothing new under the sun.

    I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired-ass quote, and your whole way of thinking, was put into words by someone for the first time.

  • I think self-driving is likely to be safer in the most boring scenarios: the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving system is kind of dumb, but it's at least consistently paying attention, and it literally has eyes in the back of its head.

    However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need human attention to cover the more anomalous scenarios that foul up self-driving.

    Anomalous scenarios like a giant flashing school bus? 😄

  • I'm summarizing your shitty argument and viewpoint. I never said it was a direct quote.

    Though, at one point even that tired-ass quote, and your whole way of thinking, was put into words by someone for the first time.

    Well, you are doing a poor job of it and bringing an unnecessary amount of heat to an otherwise civil discussion.

  • Anomalous scenarios like a giant flashing school bus? 😄

    Yes; as common as that is, in the scheme of driving it is relatively anomalous.

    By hours in the car, most time is spent on a freeway, driving between two lines either at cruising speed or in a traffic jam. The most mind-numbing things for a human are pretty comfortably in the wheelhouse of self-driving.

    Once you are dealing with pedestrians, signs, intersections, and so on, all of those, despite being 'common', are anomalous enough to be dramatically more tricky for these systems.

  • Well, you are doing a poor job of it and bringing an unnecessary amount of heat to an otherwise civil discussion.

    That's right. If you cannot win the argument, the next best thing is to call for civility.

  • Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

    Your brain runs on sugar. Do you take into account the energy spent on coal mining, oil-field exploration, refining, transportation, and electricity transmission losses when computing the amount of energy required to build and run AI? Do you take into account all the energy consumed producing the knowledge used to train your model in the first place?
    Running the brain alone is much less energy-intensive than running an AI model. And the brain can create genuinely new content and knowledge. There is nothing like the brain. AI excels at processing large amounts of data, which the brain is not made for.

  • At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I'm less often "fighting" the tendency to center. The lane-keeping system, for me, does largely nothing. If I signal, it ignores me crossing a lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. Mine has fared surprisingly well even where the lane markings are all kind of jacked up due to temporary changes for construction. If it's off, my arms just have to assert a bit more effort to end up in the same place I was going to be with the system. Generally no passenger notices when the system engages or disengages, except for the chime when it switches over to unaided operation.

    So at least my experience has been a positive one; it strikes the balance between intervention and human attention just right, including monitoring my gaze to make sure I'm looking where I should. However, there are people who test "how long can I keep my hands off the steering wheel," which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized 'overhead' view of your car.

    The rental cars I have driven with lane-keeping functions have all been too aggressive or too easily fooled by visual anomalies on the road for me to feel like I'm getting any help. My wife comments on how jerkily the car drives when we have those systems. I don't feel like it's dangerous, and if I were falling asleep or something it could be helpful, but in 40+ years of driving I've had "falling asleep at the wheel" problems maybe three times; not something I need constant help for.
