
We need to stop pretending AI is intelligent

Technology
  • My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes, and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning”?

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans' abilities.

  • Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically challenged (vision, muscle control, etc.). About 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.

    So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...

    Not going to happen soon. It's the 90/10 problem: the last 10% takes as long as the first 90%.

  • Humans are also LLMs.

    We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.

    What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, each neural connection determined by the previous. Then we're not so different from LLMs after all.

    The probabilities of our sentence structure are a consequence of our speech; we aren't just trying to statistically match appropriate-sounding words.

    With enough use of LLMs, you will see that they are obviously not doing anything like conceptualizing the tokens they're working with or "reasoning", even when they are marketed as "reasoning".

    Sticking to textual content generation by LLMs, you'll see that what is emitted is first and foremost structurally appropriate; beyond that it's mostly a "bonus" if it's narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly the opposite of the explanation. Both portions were structurally sound, reasonable language, but there was no logical connection between the two parts of the emitted output.

  • No... There are a lot of radio shows that get scientists to speak.

    Which ones are you listening to?

  • You're on point. The interesting thing is that most opinions like the article's were formed last year, before the models started being trained with reinforcement learning and synthetic data.

    Now there's models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.

    They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.

    Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple models can understand literary concepts when they're explained.

    But, again like Meeseeks, they disappear when the context window closes.

    Once they're able to update their model on the fly and actually learn from firsthand experience, things will get weird. They'll start becoming distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?

    It's not far away; the absurd R&D effort going into it will probably keep kicking out new results. They're already absurdly impressive, tech companies are scrambling over each other to make them, and they're betting absurd amounts of money that they're right. I wouldn't bet against it.

    Now there’s models that reason,

    Well, no, that's mostly a marketing term for expending more tokens on generating intermediate text. It's basically writing fanfic of what thinking about a problem would look like. If you look at the "reasoning" steps, you'll see artifacts where the generated output goes disjoint: structurally sound, but not logically connected to the bits around it.

  • With Teslas, Self Driving isn't even safer in pristine road conditions.

    I think self-driving is likely to be safer in the most boring scenarios, the sort of situations where a human driver can get complacent because things have been going so well for the past hour of freeway driving. The self-driving is kind of dumb, but it's at least consistently paying attention, and it literally has eyes in the back of its head.

    However, there's so much data about how it fails in stupidly obvious ways that it shouldn't, so you still need human attention to cover the more anomalous scenarios that foul up self-driving.

  • Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically challenged (vision, muscle control, etc.). About 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.

    So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...

    The thing about self-driving is that it has been something like 90-95% of the way there for a long time now. It made dramatic progress, then plateaued: approaches have failed to close the gap, with exponentially more input thrown at it for less and less incremental improvement.

    But your point is accurate: humans have lapses and AIs have lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds: a consistently vigilant computer monitoring and tending the steering, acceleration, and braking to do the 'right' thing in neutral conditions, with the human watching for the more anomalous situations that tend to confound the AI, and making the calls on navigating certain intersections that the AI FSD still can't figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

    I don't have a Tesla, but I have a competitor's system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I have to keep looking around, and I have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to change lanes in traffic (if traffic is light, it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is fine when I can afford to be stupidly cautious).

  • The other thing that most people don't focus on is how we train LLMs.

    We're basically building something like a spider-tailed viper. The spider-tailed viper is a snake with a growth on its tail that looks a lot like a spider. It wiggles the tail around so it looks like a spider moving, convincing birds they've found a snack, and when a bird gets close enough the snake strikes and eats it.

    Now, I'm not saying we're building something that is designed to kill us. But, I am saying that we're putting enormous effort into building something that can fool us into thinking it's intelligent. We're not trying to build something that can do something intelligent. We're instead trying to build something that mimics intelligence.

    What we're effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What's crazy about that is that we're not building this to fool a predator so that we're not in danger. We're not doing it to fool prey, so we can catch and eat them more easily. We're doing it so we can fool ourselves.

    It's like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn't work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn't intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.

    To the extent it is people trying to fool people, it's rich people looking to fool poorer people for the most part.

    To the extent it's actually useful, it's to replace certain systems.

    Think of the humble phone tree, designed so humans aren't having to respond to, triage, and route calls. An AI system can significantly shorten that process: instead of navigating a long, tedious maze of options, a couple of sentences back and forth and you either get the bit of automated information that suffices or get routed to a human who can take care of it. The same goes for a lot of online interactions where you have to input way too much, and where the automated answer is a wall of text from which you'd like something to distill the relevant three or four sentences according to your query.

    So there are useful interactions.

    However, it's also true that it's dangerous, because the "make the user approve of the interaction" objective can bring out the worst in people when they feel like something is always agreeing with them. Social media has been bad enough, but chatbots that are designed to please the end user and look almost legitimate really can inflame the worst in our minds.

  • Haha coming in hot I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.

    All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.

    What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

  • However, there is a huge energy cost for that speed of statistically processing information to mimic intelligence. The human brain consumes much less energy.
    Also, AI will be fine with well-defined tasks where innovation isn't a requirement. As it stands today, AI is incapable of innovating.

    The human brain consumes much less energy

    Yes, but when you fully load the human brain's energy costs with 20 years of schooling, 20 years of "retirement" and old-age care, vacation, sleep, personal time, housing, transportation, etc. etc. - it adds up.

  • Customarily, when doing these kinds of calculations, we ignore the things that keep us alive, because those things are needed regardless of economic contribution - since, you know, people are people and not tools.

    people are people and not tools

    But this comparison is weighing people as tools vs alternative tools.

  • You can give me a sandwich and I'll do a better job than AI.

    But, will you do it 24-7-365?

    The thing about self-driving is that it has been something like 90-95% of the way there for a long time now. It made dramatic progress, then plateaued: approaches have failed to close the gap, with exponentially more input thrown at it for less and less incremental improvement.

    But your point is accurate: humans have lapses and AIs have lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds: a consistently vigilant computer monitoring and tending the steering, acceleration, and braking to do the 'right' thing in neutral conditions, with the human watching for the more anomalous situations that tend to confound the AI, and making the calls on navigating certain intersections that the AI FSD still can't figure out. At least for me, the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

    I don't have a Tesla, but I have a competitor's system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I have to keep looking around, and I have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to change lanes in traffic (if traffic is light, it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is fine when I can afford to be stupidly cautious).

    The one "driving aid" that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that "dimension" of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) "Dumb" cruise control works similarly when there's no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment's notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

    Things like lane keeping seem to be more trouble than they're worth, to me in the situations I drive in.

    Not "AI" but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that "AI" with access to those tools does better than normal drivers without.

  • No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

    “AI” is a tool to be used, nothing more.

    Still, people find it difficult to navigate this. Its use cases are limited, but it doesn't enforce that limit by itself. The user needs to be knowledgeable of the limitations and care enough not to go beyond them. That's also where the problem lies. Leaving stuff to AI, even if it compromises the results, can save SO much time that it encourages irresponsible use.

    So to help remind people of the limitations of generative AI, it makes sense to fight the tendency of companies to overstate the ability of their models.

  • The one "driving aid" that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that "dimension" of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) "Dumb" cruise control works similarly when there's no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment's notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

    Things like lane keeping seem to be more trouble than they're worth, to me in the situations I drive in.

    Not "AI" but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that "AI" with access to those tools does better than normal drivers without.

    At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I'm less often "fighting" the tendency to center. The lane-keeping system is, for me at least, largely a non-event. If I use the turn signal, it doesn't fight me crossing the lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. At least mine has fared surprisingly well in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it's off, my arms just have to assert a bit more effort to keep me in the same place I was going to be with the system. Generally no passenger notices when the system engages or disengages, except for the chiming when it switches over to unaided operation.

    So at least my experience has been a positive one; it hits the balance just right between intervention and human attention, including monitoring my gaze to make sure I'm looking where I should. However, there are people who test "how long can I keep my hands off the steering wheel", which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized 'overhead' view of your car.

  • I am more talking about listening to and reading scientists in the media. The definition of consciousness is vague at best.

    I think that then we actually agree.

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

    It is intelligent and deductive, but it is not cognitive or even dependable.
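    The article's "guesses which word will come next" claim can be made concrete with a toy sketch. This is not how a real LLM works internally (those use neural networks over tokens, not raw word counts), but the core loop - pick a probable continuation given what came before - is the same idea, here shown with a simple bigram count over a made-up corpus:

    ```python
    # Toy "next word predictor": count which word follows which in a tiny
    # corpus, then always emit the most frequent continuation.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # For each word, count the words that immediately follow it.
    follows: dict[str, Counter] = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word: str) -> str:
        # Greedy "decoding": return the most frequent continuation seen in training.
        return follows[word].most_common(1)[0][0]

    print(next_word("the"))  # "cat" - it followed "the" twice, vs once for "mat"/"fish"
    ```

    Nothing here "understands" cats or fish; it only reproduces the statistics of its training data, which is the article's point in miniature.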

  • What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

    You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

  • Ya, of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

    And they made the programs you seem to trust so much.

  • I think your argument is a bit besides the point.

    The first issue we have is that intelligence isn't well defined at all. Without a clear definition of intelligence, we can't say whether something is intelligent; even though we as a species have tried to come up with a definition for centuries, there still isn't a well-defined one.

    But the actual question here isn't "Can AI serve information?" but "Is AI an intelligence?" And LLMs are not. They are not beings; they don't evolve, they don't experience.

    For example, LLMs don't have memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that, for each request, the whole chat history is fed back into the LLM as input. It's like talking to a demented person to whom you hand a transcript of your conversation, so that they can look up everything you or they have said so far.

    The LLM itself can't change due to the conversation you are having with them. They can't learn, they can't experience, they can't change.

    All that is done in a separate training step, where essentially a new LLM is generated.
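    The "whole chat history is fed back in" mechanism can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API; `call_model` is a hypothetical stand-in for a stateless LLM completion call:

    ```python
    # Sketch of why a chat "remembers": the model is stateless, so the
    # client re-sends the entire transcript with every single request.

    def call_model(prompt: str) -> str:
        # Hypothetical placeholder: a real system would invoke an LLM here.
        return f"(reply to {len(prompt)} chars of context)"

    history: list[tuple[str, str]] = []  # (role, text) pairs, kept client-side

    def chat(user_message: str) -> str:
        history.append(("user", user_message))
        # Flatten the whole conversation so far into one prompt...
        prompt = "\n".join(f"{role}: {text}" for role, text in history)
        reply = call_model(prompt)
        # ...and append the reply so the *next* request carries it too.
        history.append(("assistant", reply))
        return reply
    ```

    All the "memory" lives in the client-side `history` list; the model itself is unchanged by the conversation, which is exactly the transcript-for-a-demented-person situation described above.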

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of "AI" by marketing people, the dystopian levels of dependence we're developing on a so-far-unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings - that somehow, because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to cost and hardware constraints, not algorithmic ones. On the subject of change: it's already incredibly taxing to train a model, so of course continuous, uninterrupted training that more closely mimics our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience. But to say there's no intelligence there - because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations - is kind of stupid. My point is: intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to properly recognize whether we're on the right path to achieving a completely artificial being that can experience reality. We clearly are; LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.
