AI Chatbots Remain Overconfident — Even When They’re Wrong: Large Language Models appear to be unaware of their own mistakes, prompting concerns about common uses for AI chatbots.

Technology
  • People really do not like seeing opposing viewpoints, eh? There's disagreeing, and then there's downvoting to oblivion without even engaging in a discussion, haha.

    Even if they're probably right, in murky, uncertain waters where we're not experts, one should keep at least a slightly open mind, or live and let live.

    I think there are two basic mistakes here. First, you assume we aren't experts, but some of us have studied these topics for years in college or graduate school, and surely many other people are well read on the subject. Obviously you can't easily confirm our backgrounds, but we exist. Second, people who are somewhat aware of the topic might realize that it's not particularly productive to discuss it here, because too much background information is missing. It's often the case that experts don't engage not because they feel superior, but because it's the wrong venue.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

  • This post did not contain any content.

    Is that a recycled piece from 2023? Because we already knew that.

  • This post did not contain any content.

    Oh shit, they do behave like humans after all.

  • This post did not contain any content.

    prompting concerns

    Oh you.

  • It's easy: just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    Ah, the Monte Carlo approach to truth.
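
    Taken seriously for a moment, "ask until the answer stops changing" is essentially self-consistency sampling: ask the model many times and keep the majority answer. A minimal sketch, where `ask_model` is a hypothetical stand-in for a real (noisy) LLM call:

```python
import random
from collections import Counter

def ask_model(question):
    # Hypothetical stand-in for an LLM call: mostly right, sometimes wrong.
    return random.choice(["4", "4", "4", "5"])

def self_consistent_answer(question, n_samples=25):
    """Sample the model repeatedly and keep the most common answer,
    along with its vote share as a crude confidence estimate."""
    votes = Counter(ask_model(question) for _ in range(n_samples))
    answer, count = votes.most_common(1)[0]
    return answer, count / n_samples
```

    The vote share is exactly the Monte Carlo flavour being joked about: it estimates how often the model agrees with itself, not whether the answer is true.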

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong before it recalculated and got the correct one. I had asked for simple percentages of a list of numbers.

    I once gave it a math problem of sorts (how to break a certain amount of money down into bills), and the LLM wrote a Python script, ran it, and gave me the correct answer that way. Kind of clever, really.
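
    The script itself is a classic greedy change-making loop. A sketch of what such a generated script plausibly looks like (the commenter's actual code wasn't shown, so the denominations here are an assumption):

```python
def break_into_bills(amount, denominations=(100, 50, 20, 10, 5, 1)):
    """Greedily break a whole-dollar amount into bills, largest first."""
    breakdown = {}
    for bill in denominations:
        count, amount = divmod(amount, bill)  # how many of this bill fit
        if count:
            breakdown[bill] = count
    return breakdown
```

    Greedy happens to be optimal for standard bill denominations, though not for arbitrary ones, which is exactly the kind of caveat an LLM may or may not surface.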

  • It's like talking with someone who thinks the Earth is flat. There isn't anything to discuss. They're objectively wrong.

    Humans like to anthropomorphize everything. It's why you can see a face on a car's front grille. LLMs are ultra advanced pattern matching algorithms. They do not think or reason or have any kind of opinion or sentience, yet they are being utilized as if they do. Let's see how it works out for the world, I guess.

    I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and make individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.

    Of course they would also need some kind of memory and self-actualization processes.

  • Every thread about LLMs has to have some guy like yourself saying how LLMs are like humans and smarter than humans for some reason.

    Some humans are not as smart as LLMs, I give them that.

  • Language models are unsuitable for math problems broadly speaking. We already have good technology solutions for that category of problems. Luckily, you can combine the two - prompt the model to write a program that solves your math problem, then execute it. You're likely to see a lot more success using this approach.

    Also, the best interfaces for LLMs will generally combine non-LLM facilities transparently. The LLM can translate the prose into the format a math engine expects; an intermediate layer then recognizes a tag, submits that excerpt to the math engine, and substitutes the chunk with the engine's output.

    Even when servicing a request to generate an image, the text-generation model runs independently of the image generation, and the intermediate layer combines them. This can cause fun disconnects, like the guy asking for a full glass of wine. The text half is completely oblivious to the image half, so it responds in the role of a graphic artist dutifully doing the work without ever 'seeing' the image, and it assumes the image is good because that's consistent with training output. Then the user corrects it, and it admits the picture (which it never 'looked' at) was wrong and retries the image generator with the additional context, producing a similarly botched picture.
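
    The tag-and-dispatch layer described above can be sketched in a few lines. The `<math>...</math>` tag format is invented for illustration, and the tiny AST walker is a stand-in for a real math engine:

```python
import ast
import operator
import re

# A tiny safe arithmetic evaluator standing in for a real math engine.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def dispatch_math(llm_output):
    """Replace <math>...</math> spans in model output with computed values."""
    def substitute(match):
        return str(_eval(ast.parse(match.group(1), mode="eval")))
    return re.sub(r"<math>(.*?)</math>", substitute, llm_output)
```

    The language model never sees the arithmetic result being computed; the intermediate layer splices it in, which is the same architecture (and the same blindness) as the image-generation case above.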

  • This post did not contain any content.

    They are not only unaware of their own mistakes, they are unaware of their successes. They are generating content that is, per their training corpus, consistent with the input. This gets eerie, and the 'uncanny valley' of the mistakes is all the more striking, but they are just generating content with no concept of 'mistake' or 'success', or of the content being a model for something else rather than just a blend of stuff from the training data.

    For example:

    Me: Generate an image of a frog on a lilypad.
    LLM: I'll try to create that — a peaceful frog on a lilypad in a serene pond scene. The image will appear shortly below.

    <includes a perfectly credible picture of a frog on a lilypad, request successfully processed>

    Me (lying): That seems to have produced a frog under a lilypad instead of on top.
    LLM: Thanks for pointing that out! I'm generating a corrected version now with the frog clearly sitting on top of the lilypad. It’ll appear below shortly.

    <includes another perfectly credible picture>

    It didn't know anything about the picture; it just took the input at its word. A human would have stopped to say "uhh... what do you mean? The lilypad is on the water and the frog is on top of it." Or, if really trying to fulfill the request without clarification, they might have thought "maybe he wanted it from the perspective of a fish, with the frog underwater?" A human wouldn't have said "you are right, I made a mistake, here I've tried again" and produced almost exactly the same thing.

    But the training data isn't predominantly people blatantly lying about such obvious things, or second-guessing things that were so obviously done correctly.

  • This post did not contain any content.

    This happened to me the other day with Jippity. It outright lied to me:

    "You're absolutely right. Although I don't have access to the earlier parts of the conversation".

    So it said I was right about a particular statement, but didn't actually know what I had said. So I told it: you just lied. It kept saying variations of:

    "I didn't lie intentionally"

    "I understand why it seems that way"

    "I wasn't misleading you"

    etc

    It flat out lied and tried to gaslight me into thinking I was in the wrong for taking it that way.

  • Neither are our brains.

    “Brains are survival engines, not truth detectors. If self-deception promotes fitness, the brain lies. Stops noticing—irrelevant things. Truth never matters. Only fitness. By now you don’t experience the world as it exists at all. You experience a simulation built from assumptions. Shortcuts. Lies. Whole species is agnosiac by default.”

    ― Peter Watts, Blindsight (fiction)

    Starting to think we're really not much smarter. "But LLMs tell us what we want to hear!" Been on Facebook lately, or Lemmy?

    If nothing else, LLMs have woken me up to how stupid humans are compared to the machines.

    It's not that they may be deceived, it's that they have no concept of what truth or fiction, mistake or success even are.

    Our brains know the concepts and may fall for deceit without recognizing it, but we at least recognize that the concepts exist.

    An AI generates content that is a blend of material from the training set, consistent with extending the given prompt. It only seems to have a concept of lying or mistakes when the human injects one into the human half of the prompt. And the human can just as easily instruct it to "correct" something that is already correct as to correct a genuine mistake (unless the training data includes a lot of reaffirmation of the material in the face of such doubts).

    An LLM can consume more input than a human can gather in multiple lifetimes and still be wonky in generating content, because it needs enough to credibly blend content to extend every conceivable input. It's why so many people used to judging human content get derailed by judging AI content. An AI generates a fantastic answer to an interview question that only solid candidates get right, only to falter on the job, because the utterly generic interview question looks like millions of samples in the input while the actual job was niche.

  • This post did not contain any content.

    If you don't know you are wrong even after you have been shown to be wrong, you are not intelligent. So A.I. has become "Adequate Intelligence".

  • It's easy: just ask the AI "are you sure?" until it stops changing its answer.

    But seriously, LLMs are just advanced autocomplete.

    I kid you not, early on (mid 2023) some guy mentioned using ChatGPT for his work and not even checking the output (he was in some sort of non-techie field that was still in the wheelhouse of text generation). I expressed that LLMs can include some glaring mistakes, and he said he fixed it by always including in his prompt: "Do not hallucinate content and verify all data is actually correct."

  • They can even get math wrong, which surprised me. I had to tell it the answer was wrong before it recalculated and got the correct one. I had asked for simple percentages of a list of numbers.

    Fun thing: when it gets the answer right, tell it it was wrong, then watch it apologize and "correct" itself to give the wrong answer.

  • I think so too, but I am really curious what will happen when we give them "bodies" with sensors so they can explore the world and make individual "experiences". I could imagine they would act much more human after a while and might even develop some kind of sentience.

    Of course they would also need some kind of memory and self-actualization processes.

    Interaction with the physical world isn't really required to evaluate how they deal with 'experiences'. They have, in principle, access to all sorts of interesting experiences in online data, and some models have been enabled to fetch internet data and add it to the prompt to help synthesize an answer.

    One key thing is that they don't bother until directed to. They have no desire; they just follow "generate a search query from the prompt, execute the query and fetch the results, treat the combination of the original prompt and the results as the context for generating more content, and return that to the user".

    LLMs are not a scheme that credibly implies more LLM == sapient existence. Such a thing may come, but it will be something different from an LLM. LLMs just look uncannily like dealing with people.
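
    That quoted fetch-then-generate procedure can be written down as an explicit pipeline. Every function here (`search`, `fetch`, `generate`) is a stand-in supplied by the caller, so the control flow is the only thing this sketch claims:

```python
def retrieval_augmented_answer(prompt, search, fetch, generate):
    """The loop described above, as explicit steps: the model only
    'bothers' to look things up because the harness makes it.

    search(query) -> list of URLs, fetch(url) -> text,
    generate(context) -> completion; all supplied by the caller.
    """
    query = generate(f"Write a search query for: {prompt}")
    documents = [fetch(url) for url in search(query)]
    context = prompt + "\n\n" + "\n".join(documents)
    return generate(context)
```

    Notice the "desire" to search lives entirely in the harness: the model is just called twice, once to phrase a query and once to extend the augmented prompt.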

  • Nah, their definition is the classical "how confident are you that you got the answer right". If you read the article, they asked a bunch of people and four LLMs a series of random questions, then asked each respondent how confident they/it were that the answer was correct, and then checked the answers. The LLMs initially lined up with people (overconfident), but when they iterated, shared results, and asked further questions, the LLMs' confidence increased while people's tended to decrease, mitigating the overconfidence.

    But the study still assumes enough intelligence to review past results and adjust accordingly, and disregards the fact that an AI isn't intelligent; it's a word-prediction model based on a data set of written text tending toward infinity. It isn't assessing the validity of results, it's predicting the answer from all previous inputs. The whole study is irrelevant.

    Well, not irrelevant. Much of our world is trying to treat LLM output as human-like output, so if humans are going to treat LLM output the same way they treat human-generated content, then we have to characterize, for those people, how their expectations break in that context.

    So as weird as it may seem to put a statistical content-extrapolation engine in the context of social science, a great deal of reality and investment wants to treat its output as "person equivalent", so it must be studied in that context, if for no other reason than to demonstrate to people that it should be considered "weird".

  • If you don't know you are wrong, when you have been shown to be wrong, you are not intelligent. So A.I. has become "Adequate Intelligence".

    That definition seems a bit shaky. Trump & co. are mentally ill but they do have a minimum of intelligence.
