
AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

Technology
  • This post did not contain any content.

    Is that a recycled piece from 2023? Because we already knew that.

  • This post did not contain any content.

    Oh shit, they do behave like humans after all.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

  • This post did not contain any content.

    prompting concerns

    Oh you.

  • It's easy, just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    Ah, the monte-carlo approach to truth.

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had asked for.

    I once gave an LLM some kind of math problem (how to break down a certain amount of money into bills), and it wrote a Python script for it, ran it, and thus gave me the correct answer. Kind of clever, really.

  • It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.

    Humans like to anthropomorphize everything. It's why you can see a face on a car's front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let's see how it works out for the world, I guess.

    I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and make individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.

    Of course they would also need some kind of memory and self-actualization processes.

  • Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

    Some humans are not as smart as LLMs, I'll give them that.

  • Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.

    Also, generally the best interfaces for LLM will combine non-LLM facilities transparently. The LLM might be able to translate the prose to the format the math engine desires and then an intermediate layer recognizes a tag to submit an excerpt to a math engine and substitute the chunk with output from the math engine.

    Even for servicing a request to generate an image, the text generation model runs independent of the image generation, and the intermediate layer combines them. Which can cause fun disconnects like the guy asking for a full glass of wine. The text generation half is completely oblivious to the image generation half. So it responds playing the role of a graphic artist dutifully doing the work without ever 'seeing' the image, but it assumes the image is good because that's consistent with training output, but then the user corrects it and it goes about admitting that the picture (that it never 'looked' at) was wrong and retrying the image generator with the additional context, to produce a similarly botched picture.
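    The "have the model write code" idea above can be sketched roughly like this. Note that `fake_llm` is a hypothetical stand-in for a real completion API, and the canned program it returns mirrors the percentages example from earlier in the thread:

    ```python
    # Sketch: instead of trusting the model's arithmetic, ask it to emit a
    # program and execute that. The model call is stubbed out here; a real
    # system would call an actual LLM API in fake_llm().

    def fake_llm(prompt: str) -> str:
        # Stand-in for a completion API: returns a small program that
        # computes each value's percentage of the list's total.
        return (
            "values = [40, 25, 35]\n"
            "total = sum(values)\n"
            "result = [round(100 * v / total, 2) for v in values]\n"
        )

    def solve_via_code(question: str) -> list:
        code = fake_llm(question)
        namespace = {}
        exec(code, namespace)  # in production, sandbox this!
        return namespace["result"]

    print(solve_via_code("What percentage of the total is each of 40, 25, 35?"))
    # [40.0, 25.0, 35.0]
    ```

    An intermediate layer like the one described above would do the same thing transparently: recognize a math-shaped excerpt, hand it to the engine, and splice the result back into the reply.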

  • This post did not contain any content.

    They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the 'uncanny valley' of the mistakes is all the more striking, but they are just generating content with no concept of 'mistake' or 'success', or of the content being a model for something else rather than just a blend of stuff from the training data.

    For example:

    Me: Generate an image of a frog on a lilypad.
    LLM: I'll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.

    <includes a perfectly credible picture of a frog on a lilypad, request successfully processed>

    Me (lying): That seems to have produced a frog under a lilypad instead of on top.
    LLM: Thanks for pointing that out! I'm generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.

    <includes another perfectly credible picture>

    It didn't know anything about the picture; it just took the input at its word. A human would have stopped to say "uhh... what do you mean? The lilypad is on water and the frog is on top of that." Or if the human were really trying to just do the request without clarification, they might have thought "maybe he wanted it from the perspective of a fish, and he wanted the frog underwater?". A human wouldn't have gone "you are right, I made a mistake, here I've tried again" and included almost the exact same thing.

    But the training data isn't predominantly people blatantly lying about such obvious things, or second-guessing work that was obviously done correctly.

  • This post did not contain any content.

    This happened to me the other day with Jippity. It outright lied to me:

    "You're absolutely right. Although I don't have access to the earlier parts of the conversation".

    So it says that I was right in a particular statement, but didn't actually know what I said. So I said to it, you just lied. It kept saying variations of:

    "I didn't lie intentionally"

    "I understand why it seems that way"

    "I wasn't misleading you"

    etc

    It flat out lied and tried to gaslight me into thinking I was in the wrong for taking it that way.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    It's not that they may be deceived, it's that they have no concept of what truth or fiction, mistake or success even are.

    Our brains know the concepts and may fall for deceit without recognizing it, but we at least recognize that the concept exists.

    An AI generates content that is a blend of material from the training material consistent with extending the given prompt. It only seems to introduce a concept of lying or mistakes when the human injects that into the human half of the prompt material. It will also do so in a way that the human can just as easily instruct it to correct a genuine mistake as well as the human instruct it to correct something that is already correct (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).

    An LLM can consume more input than a human can gather in multiple lifetimes and still be wonky in generating content, because it needs enough to credibly blend content to extend every conceivable input. It's why so many people used to judging human content get derailed by judging AI content: an AI generates a fantastic answer to an interview question that only solid humans get right, only to falter 'on the job', because the utterly generic interview question looks like millions of samples in the training data, but the actual job was niche.

  • This post did not contain any content.

    If you don't know you are wrong when you have been shown to be wrong, you are not intelligent. So A.I. has become "Adequate Intelligence".

  • It's easy, just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes and he said he fixed it by always including in his prompt "Do not hallucinate content and verify all data is actually correct.".

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong for it to recalculate and then get the correct answer. It was simple percentages of a list of numbers I had asked for.

    Fun thing: when it gets the answer right, tell it it was wrong and then watch it apologize and "correct" itself to give the wrong answer.

  • I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and make individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.

    Of course they would also need some kind of memory and self-actualization processes.

    Interaction with the physical world isn't really required for us to evaluate how they deal with 'experiences'. They have in principle access to all sorts of interesting experiences in the online data. Some models have been enabled to fetch internet data and add them to the prompt to help synthesize an answer.

    One key thing is they don't bother unless directed to. They have no desire; they just follow "generate a search query from the prompt, execute the search and fetch the results, treat the combination of the original prompt and the results as the context for generating more content, and return that to the user".

    LLM is not a scheme where more LLM credibly implies sapient existence. Such a concept may come, but it will be something different than LLM. LLM just looks crazily like dealing with people.
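    The "search then answer" loop described above can be sketched like this. Both `generate` and `search` are hypothetical stubs; a real system would call an LLM API and a search API here:

    ```python
    # Sketch of the retrieval loop: query generation -> search -> answer.
    # The stubs below are deterministic stand-ins, not real services.

    def generate(prompt: str) -> str:
        # Stub LLM: trivially echoes pieces of the prompt back.
        if prompt.startswith("Write a search query"):
            return prompt.split(": ", 1)[1]
        return "answer based on: " + prompt.splitlines()[1]

    def search(query: str) -> str:
        # Stub search engine returning a canned snippet.
        return f"top result for '{query}'"

    def answer_with_search(question: str) -> str:
        # 1) generate a search query from the prompt
        query = generate(f"Write a search query for: {question}")
        # 2) execute the search and fetch results
        results = search(query)
        # 3) fold the results back into the context and generate the answer
        combined = f"Question: {question}\nResults: {results}\nAnswer:"
        return generate(combined)

    print(answer_with_search("Who won?"))
    ```

    The point of the sketch is the shape of the loop: nothing in it runs until the outer layer drives it, which is what "they don't bother unless directed to" means in practice.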

  • Nah, their definition is the classical "how confident are you that you got the answer right". If you read the article, they asked a bunch of people and 4 LLMs a bunch of random questions, then asked each respondent how confident they/it were that the answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease, mitigating the overconfidence.

    But the study still assumes intelligence enough to review past results and adjust accordingly, and disregards the fact that an AI isn't intelligent; it's a word-prediction model based on a data set of written text tending to infinity. It's not assessing the validity of results, it's predicting what the answer is based on all previous inputs. The whole study is irrelevant.

    Well, not irrelevant. Lots of our world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for the people, how their expectations are broken in that context.

    So as weird as it may seem to treat a statistical content-extrapolation engine in the context of social science, a great deal of the reality and investment wants to treat it as "person equivalent" output, and so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered "weird".
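    The calibration gap the study measures boils down to stated confidence minus actual accuracy. A minimal sketch, with made-up numbers purely for illustration:

    ```python
    # Overconfidence = mean stated confidence - actual accuracy.
    # A positive result means the respondent is more confident than accurate.

    def overconfidence(confidences, correct):
        mean_confidence = sum(confidences) / len(confidences)
        accuracy = sum(correct) / len(correct)
        return mean_confidence - accuracy

    # Four answers: stated confidences, and whether each was actually right.
    gap = overconfidence([0.9, 0.8, 0.95, 0.85], [1, 0, 1, 0])
    print(round(gap, 3))  # 0.375
    ```

    Per the article's finding, iterating would shrink this gap for humans (confidence drops toward accuracy) while widening it for the LLMs.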

  • If you don't know you are wrong when you have been shown to be wrong, you are not intelligent. So A.I. has become "Adequate Intelligence".

    That definition seems a bit shaky. Trump & co. are mentally ill but they do have a minimum of intelligence.

  • I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes and he said he fixed it by always including in his prompt "Do not hallucinate content and verify all data is actually correct.".

    Ah, well then, if he tells the bot to not hallucinate and validate output there's no reason to not trust the output. After all, you told the bot not to, and we all know that self regulation works without issue all of the time.

  • Ah, well then, if he tells the bot to not hallucinate and validate output there's no reason to not trust the output. After all, you told the bot not to, and we all know that self regulation works without issue all of the time.

    It gave me flashbacks when the Replit guy complained that the LLM deleted his data despite being told in all caps not to multiple times.

    People really really don't understand how these things work...
