AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

Technology
  • This post did not contain any content.

    It's easy, just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.
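
    To make the "advanced autocomplete" point concrete, here's a toy sketch in Python (invented corpus, nothing LLM-specific): a bigram model that always suggests the statistically most common next word, true or not.

        from collections import Counter, defaultdict

        # Toy "advanced autocomplete": count which word follows which,
        # then always suggest the most frequent continuation.
        corpus = "the sky is blue the sky is blue the sky is falling".split()

        next_words = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            next_words[prev][nxt] += 1

        def complete(word):
            # Picks the most common continuation -- popularity, not truth.
            return next_words[word].most_common(1)[0][0]

        print(complete("is"))  # "blue", no matter what the sky looks like right now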

  • This post did not contain any content.

    Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

  • This post did not contain any content.

    Confidently incorrect.

  • This Nobel Prize winner and subject matter expert takes the opposite view

    People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in such murky, uncertain waters where we're not experts, one should keep at least a somewhat open mind, or live and let live.

  • Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

    Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are vs. the machines.

  • Sounds pretty human to me. /s

    Sounds pretty human to me. no /s

  • I guess, but it's like proving your phone's predictive text has confidence in its suggestions regardless of accuracy. Confidence is not an attribute of a math function; they're attributing intelligence to a predictive model.

    I work in risk management, but I don't really have a strong understanding of LLM mechanics. "Confidence" is something that I quantify in my work, but different terms are associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I'm not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn't model any possible outcomes other than the answer it is providing, which does seem troubling.
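
    For what it's worth, the internals do resemble that picture: the network's final layer assigns a score to every candidate next token, and a softmax turns those scores into a probability distribution, so alternative outcomes are modeled. A minimal Python sketch with made-up numbers:

        import numpy as np

        # Made-up logits (raw scores) for 4 candidate next tokens; softmax
        # converts them into a probability distribution over alternatives.
        logits = np.array([2.0, 1.5, 0.3, -1.0])
        probs = np.exp(logits) / np.exp(logits).sum()

        print(probs)        # ~[0.54, 0.33, 0.10, 0.03] -- alternatives ARE modeled
        print(probs.max())  # the top token is only ~54% likely

    The catch is that the confident-sounding sentence the model then prints is itself just more predicted text; the stated certainty isn't read off from these numbers, which may be why it can claim 100% confidence while the underlying distribution says otherwise.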

  • People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in such murky, uncertain waters where we're not experts, one should keep at least a somewhat open mind, or live and let live.

    It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.

    Humans like to anthropomorphize everything. It's why you can see a face on a car's front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let's see how it works out for the world, I guess.

  • It's easy, just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had asked for.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are vs. the machines.

    There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.

  • I work in risk management, but I don't really have a strong understanding of LLM mechanics. "Confidence" is something that I quantify in my work, but different terms are associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I'm not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn't model any possible outcomes other than the answer it is providing, which does seem troubling.

    Nah, their definition is the classical "how confident are you that you got the answer right?" If you read the article, they asked a bunch of people and 4 LLMs a bunch of random questions, then asked the respondents whether they were confident their answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease, mitigating the overconfidence.

    But the study still assumes enough intelligence to review past results and adjust accordingly, while disregarding the fact that an AI isn't intelligent; it's a word-prediction model based on a data set of written text tending to infinity. It's not assessing the validity of results; it's predicting what the answer is based on all previous inputs. The whole study is irrelevant.
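
    For anyone curious, the overconfidence the study measures boils down to comparing stated confidence against actual accuracy. A minimal Python sketch with invented numbers:

        # Invented data: each answer's self-reported confidence,
        # paired with whether it turned out to be correct.
        stated_confidence = [0.9, 0.8, 1.0, 0.95, 0.85]
        was_correct       = [1,   0,   1,   0,    0]

        accuracy  = sum(was_correct) / len(was_correct)
        mean_conf = sum(stated_confidence) / len(stated_confidence)

        print(f"accuracy:           {accuracy:.0%}")               # 40%
        print(f"mean confidence:    {mean_conf:.0%}")              # 90%
        print(f"overconfidence gap: {mean_conf - accuracy:+.0%}")  # +50%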

  • This Nobel Prize winner and subject matter expert takes the opposite view

    Interesting talk but the number of times he completely dismisses the entire field of linguistics kind of makes me think he's being disingenuous about his familiarity with it.

    For one, I think he is dismissing holotes, the concept of "wholeness": that when you cut something apart into its individual parts, you lose something about the bigger picture. This deconstruction of language misses the larger picture of the human body as a whole, and how every part of us, from our assemblage of organs down to our DNA, impacts how we interact with and understand the world. He may have a great definition of understanding, but it still sounds (to me) like it's potentially missing aspects of human/animal biologically based understanding.

    For example, I have cancer, and about six months before I was diagnosed, I had begun to get more chronically depressed than usual. I felt hopeless and I didn't know why. Surprisingly, that's actually a symptom of my cancer. What understanding did I have that changed how I felt inside and how I understood the things around me? Suddenly I felt different about words and ideas, but nothing had changed externally; something had changed internally. The connections in my neural network had adjusted, and the feelings and associations with words and ideas were different, but I hadn't done anything to make that adjustment. No learning or understanding had happened. I had a mutation in my DNA that made that adjustment for me.

    Further, I think he's deeply misunderstanding (possibly intentionally?) what linguists like Chomsky are saying when they say humans are born with language. They mean that we are born with a genetic blueprint to understand language. Just like animals are born with a genetic blueprint to do things they were never trained to do. Many animals are born and almost immediately stand up to walk. This is the same principle. There are innate biologically ingrained understandings that help us along the path to understanding. It does not mean we are born understanding language as much as we are born with the building blocks of understanding the physical world in which we exist.

    Anyway, interesting talk, but I immediately am skeptical of anyone who wholly dismisses an entire field of thought so casually.

    For what it's worth, I didn't downvote you and I'm sorry people are doing so.

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had asked for.

    Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.
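
    For the percentage question mentioned above, the generated program might be as simple as this (hypothetical numbers standing in for the original list):

        # Hypothetical list of numbers; print each value as a
        # percentage of the total, computed exactly rather than predicted.
        values = [12, 45, 23, 20]
        total = sum(values)

        for v in values:
            print(f"{v}: {v / total:.1%}")  # 12.0%, 45.0%, 23.0%, 20.0%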

  • This post did not contain any content.

    What a terrible headline. Self-aware? Really?

  • People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in such murky, uncertain waters where we're not experts, one should keep at least a somewhat open mind, or live and let live.

    I think there are two basic mistakes you made. First, you think that we aren't experts, but it's definitely true that some of us have studied these topics for years in college or graduate school, and surely many other people are well read on the subject. Obviously you can't easily confirm our backgrounds, but we exist. Second, people who are somewhat aware of the topic might realize that it's not particularly productive to engage in discussion on it here, because there's too much background information missing. It's often the case that experts don't try to discuss things because it's the wrong venue, not because they feel superior.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are vs. the machines.

    Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

  • This post did not contain any content.

    Is that a recycled piece from 2023? Because we already knew that.

  • This post did not contain any content.

    Oh shit, they do behave like humans after all.

  • This post did not contain any content.

    prompting concerns

    Oh you.
