
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

  • People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in such murky, uncertain waters where we're not experts, one should keep at least a somewhat open mind, or live and let live.

    It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.

    Humans like to anthropomorphize everything. It's why you can see a face in a car's front grille. LLMs are ultra-advanced pattern-matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being used as if they do. Let's see how it works out for the world, I guess.

  • It's easy: just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    They can even get math wrong, which surprised me. I had to tell it the answer was wrong before it recalculated and got the correct answer. It was simple percentages of a list of numbers I had asked about.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    There are plenty of similarities in the output of both the human brain and LLMs, but overall they’re very different. Unlike LLMs, the human brain is generally intelligent - it can adapt to a huge variety of cognitive tasks. LLMs, on the other hand, can only do one thing: generate language. It’s tempting to anthropomorphize systems like ChatGPT because of how competent they seem, but there’s no actual thinking going on. It’s just generating language based on patterns and probabilities.

  • I work in risk management, but I don't really have a strong understanding of LLM mechanics. "Confidence" is something that I quantify in my work, but different terms are associated with it. In modeling outcomes, I may say that we have 60% confidence in achieving our budget objectives, while others would express the same result by saying our chances of achieving our budget objective are 60%. Again, I'm not sure if this is what the LLM is doing, but if it is producing a modeled prediction with a CDF of possible outcomes, then representing its result with 100% confidence means that the LLM didn't model any possible outcomes other than the answer it is providing, which does seem troubling.

    Nah, their definition is the classical "how confident are you that you got the answer right?" If you read the article, they asked a bunch of people and four LLMs a bunch of random questions, then asked each respondent how confident they were that their answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease, mitigating the overconfidence.

    But the study still assumes enough intelligence to review past results and adjust accordingly, while disregarding the fact that an AI isn't an intelligence; it's a word-prediction model based on a data set of written text tending toward infinity. It's not assessing the validity of results, it's predicting what the answer is based on all previous inputs. The whole study is irrelevant.

  • This Nobel Prize winner and subject matter expert takes the opposite view

    Interesting talk but the number of times he completely dismisses the entire field of linguistics kind of makes me think he's being disingenuous about his familiarity with it.

    For one, I think he is dismissing holotes, the concept of "wholeness": that when you cut something apart into its individual parts, you lose something about the bigger picture. This deconstruction of language misses the larger picture of the human body as a whole, and how every part of us, from our assemblage of organs down to our DNA, impacts how we interact with and understand the world. He may have a great definition of understanding, but it still sounds (to me) like it's potentially missing aspects of human/animal biologically based understanding.

    For example, I have cancer, and about six months before I was diagnosed, I had begun to get more chronically depressed than usual. I felt hopeless and I didn't know why. Surprisingly, that's actually a symptom of my cancer. What understanding did I have that changed how I felt inside and how I understood the things around me? Suddenly I felt different about words and ideas, but nothing had changed externally; something had changed internally. The connections in my neural network had adjusted, and the feelings and associations with words and ideas were different, but I hadn't done anything to make that adjustment. No learning or understanding had happened. I had a mutation in my DNA that made that adjustment for me.

    Further, I think he's deeply misunderstanding (possibly intentionally?) what linguists like Chomsky are saying when they say humans are born with language. They mean that we are born with a genetic blueprint to understand language. Just like animals are born with a genetic blueprint to do things they were never trained to do. Many animals are born and almost immediately stand up to walk. This is the same principle. There are innate biologically ingrained understandings that help us along the path to understanding. It does not mean we are born understanding language as much as we are born with the building blocks of understanding the physical world in which we exist.

    Anyway, interesting talk, but I immediately am skeptical of anyone who wholly dismisses an entire field of thought so casually.

    For what it's worth, I didn't downvote you and I'm sorry people are doing so.

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong before it recalculated and got the correct answer. It was simple percentages of a list of numbers I had asked about.

    Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.
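    To make the "write a program, then execute it" idea concrete, here's a minimal sketch. `ask_llm` is a stand-in for whatever chat API you actually use (it isn't a real library call), and the question in the usage comment is just the "percentages of a list of numbers" case from above. Everything else is ordinary Python that pulls the generated code out of the reply and runs it, so the arithmetic is done by the interpreter rather than by the model.

```python
import re
import subprocess
import sys
import tempfile


def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (OpenAI client, local model, etc.)."""
    raise NotImplementedError("wire this up to the LLM client of your choice")


def solve_with_generated_code(question: str) -> str:
    reply = ask_llm(
        "Write a complete Python program that prints only the final answer to this "
        f"question, and nothing else:\n{question}"
    )
    # Take the first fenced code block from the reply, or the whole reply if there is none.
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    code = match.group(1) if match else reply
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    # Run the generated program; the printed output is the answer, computed by Python
    # rather than predicted token-by-token by the model.
    result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=30)
    return result.stdout.strip()


# Example (hypothetical): the "simple percentages of a list of numbers" case from above.
# print(solve_with_generated_code("What percentage of their total is each of 12, 7, 31, 50?"))
```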

  • This post did not contain any content.

    What a terrible headline. Self-aware? Really?

  • People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in such murky, uncertain waters where we're not experts, one should keep at least a somewhat open mind, or live and let live.

    I think there are two basic mistakes that you made. First, you think that we aren't experts, but it's definitely true that some of us have studied these topics for years in college or graduate school, and surely many other people are well read on the subject. Obviously you can't easily confirm our backgrounds, but we exist. Second, people who are somewhat aware of the topic might realize that it's not particularly productive to engage in discussion on it here because there's too much background information that's missing. It's often the case that experts don't try to discuss things because it's the wrong venue, not because they feel superior.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

  • This post did not contain any content.

    Is that a recycled piece from 2023? Because we already knew that.

  • This post did not contain any content.

    Oh shit, they do behave like humans after all.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

  • This post did not contain any content.

    prompting concerns

    Oh you.

  • It's easy: just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    Ah, the Monte Carlo approach to truth.

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong before it recalculated and got the correct answer. It was simple percentages of a list of numbers I had asked about.

    I once gave it some kind of math problem (how to break down a certain amount of money into bills), and the LLM wrote a Python script for it, ran it, and thus gave me the correct answer. Kind of clever, really.
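    For what it's worth, the script it generates for that kind of request doesn't need to be fancy. A greedy change-making loop like the sketch below is roughly the sort of thing it produces (the denominations here are just an assumed example); once the code exists, the answer comes from running it, not from the model's next-token guesses.

```python
# Greedy change-making: break an amount into the fewest bills.
# The denominations are an assumed example, not anything from the original exchange.
def break_into_bills(amount: int, denominations=(100, 50, 20, 10, 5, 1)) -> dict:
    bills = {}
    for d in denominations:
        count, amount = divmod(amount, d)
        if count:
            bills[d] = count
    return bills


print(break_into_bills(287))  # {100: 2, 50: 1, 20: 1, 10: 1, 5: 1, 1: 2}
```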

  • It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.

    Humans like to anthropomorphize everything. It's why you can see a face in a car's front grille. LLMs are ultra-advanced pattern-matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being used as if they do. Let's see how it works out for the world, I guess.

    I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and gather individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.

    Of course they would also need some kind of memory and self-actualization processes.

  • Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

    Some humans are not as smart as LLMs, I'll give them that.

  • Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.

    Also, the best LLM interfaces generally combine non-LLM facilities transparently. The LLM can translate the prose into the format the math engine expects, and an intermediate layer recognizes a tag, submits that excerpt to the math engine, and substitutes the chunk with the engine's output (a rough sketch of that kind of dispatch follows this comment).

    Even when servicing a request to generate an image, the text-generation model runs independently of the image generation, and the intermediate layer combines them. That can cause fun disconnects, like the guy asking for a full glass of wine. The text-generation half is completely oblivious to the image-generation half, so it responds in the role of a graphic artist dutifully doing the work without ever 'seeing' the image, assuming the image is good because that's consistent with training output. Then the user corrects it, and it goes about admitting that the picture (which it never 'looked' at) was wrong and retrying the image generator with the additional context, producing a similarly botched picture.
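    Picking up the math-engine idea from the first paragraph, here is a minimal sketch of that kind of dispatch layer. The `<calc>` tag convention and the use of sympy as the math engine are assumptions for illustration, not how any particular product works: the model is asked to wrap anything it shouldn't compute itself in the tag, and a plain Python layer evaluates those spans and splices the results back into the text.

```python
import re

import sympy  # stands in for "a good technology solution" to the math; an assumption


CALC_TAG = re.compile(r"<calc>(.*?)</calc>", re.DOTALL)


def resolve_calc_tags(model_output: str) -> str:
    """Replace each <calc>expression</calc> span with the math engine's result."""
    def evaluate(match: re.Match) -> str:
        return str(sympy.sympify(match.group(1)))  # parse and evaluate the expression
    return CALC_TAG.sub(evaluate, model_output)


# The model handles the prose; the math engine handles the numbers.
print(resolve_calc_tags("That works out to <calc>37*42 + 19</calc> widgets in total."))
# -> "That works out to 1573 widgets in total."
```

    The same split is what produces the wine-glass disconnect above: the text model only ever sees the tag and the engine's reply, never the engine's internals, which is why it can cheerfully "apologize" for an image it never looked at.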

  • This post did not contain any content.

    They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the 'uncanny valley' of the mistakes is all the more striking, but they are just generating content without any concept of 'mistake' or 'success', or of the content being a model of something else rather than just a blend of stuff from the training data.

    For example:

    Me: Generate an image of a frog on a lilypad.
    LLM: I'll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.

    <includes a perfectly credible picture of a frog on a lilypad, request successfully processed>

    Me (lying): That seems to have produced a frog under a lilypad instead of on top.
    LLM: Thanks for pointing that out! I'm generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.

    <includes another perfectly credible picture>

    It didn't know anything about the picture; it just took the input at its word. A human would have stopped to say, "uhh... what do you mean? The lilypad is on the water and the frog is on top of that." Or, if the human were really trying to just do the request without clarification, they might have thought, "maybe he wanted it from the perspective of a fish, with the frog underwater?" A human wouldn't have gone "you are right, I made a mistake, here I've tried again" and included almost exactly the same thing.

    But the training data isn't predominantly people blatantly lying about such obvious things, or second-guessing things that were so obviously done correctly.

  • This post did not contain any content.

    This happened to me the other day with Jippity. It outright lied to me:

    "You're absolutely right. Although I don't have access to the earlier parts of the conversation".

    So it said that I was right about a particular statement, but it didn't actually know what I had said. So I told it that it had just lied. It kept saying variations of:

    "I didn't lie intentionally"

    "I understand why it seems that way"

    "I wasn't misleading you"

    etc

    It flat out lied and tried to gaslight me into thinking I was in the wrong for taking it that way.
