
AI agents wrong ~70% of time: Carnegie Mellon study

Technology
  • What does "I give it data to put in a formulaic sentence." mean here

    Why not just share the details? I often find a lot of people saying it's doing crazy things who never like to share the details. It's very similar to discussing things with Trump supporters, who do the same shit when pressed on details about stuff they say occurs. Like the same "you're a troll for asking for evidence of my claim" that Trumpers do. It's wild how similar it is.

    And yes, asking it to do things like iterate over rows isn't how it works. It's getting better, but that's not what it's primarily used for. It could be, but isn't. It can only hold so many tokens in context. It's getting better and has some persistence, but that's nowhere near where its strength is.

  • I would be in breach of contract to tell you the details. How about you just stop trying to blame me for the clear and obvious lies that the LLM churned out and start believing that LLMs ARE strikingly fallible, because, buddy, you have your head so far in the sand on this issue it's weird.

    The solution to the problem was to realise that an LLM cannot be trusted for accuracy even if the first few results are completely accurate; the bullshit will creep in. Don't trust the LLM. Check every fucking thing.

    In the end I wrote a quick script that broke the input up on tab characters and wrote out the sentence; that's how formulaic it was (a rough sketch of such a script follows below). I deeply regretted trying to get an LLM to use data.

    The frustrating thing is that it is clearly capable of doing the task some of the time, but drifting off into FANTASY is its strong suit, and it doesn't matter how firmly or how often you ask it to be accurate or use the input carefully. It's going to lie to you before long. It's an LLM. Bullshitting is what it does. Get it to do ONE THING only, then check the fuck out of its answer. Don't trust it to tell you the truth any more than you would trust Donald J Trump to.
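
    A minimal sketch of the kind of tab-splitting script described above, assuming tab-separated rows on stdin; the TEMPLATE and its three fields are hypothetical stand-ins, since the commenter can't share the real data or wording:

        import sys

        # Hypothetical template; the actual formulaic sentence was never shared.
        TEMPLATE = "{0} had a value of {1} in {2}."

        for line in sys.stdin:
            # Break each input row up on tab characters, as described above.
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue  # skip rows missing the expected three fields
            # Fill the fields into the fixed sentence; no LLM involved.
            print(TEMPLATE.format(*fields))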

  • Dunno. Ask 10 humans at random to do a task and probably one will do it better than AI. Just not as fast.

    You're better off asking one human to do the same task ten times. Humans get better and faster at things as they go along. Always slower than an LLM, but an LLM gets more and more likely to veer off on some flight of fancy, further and further from reality, the more it says to you. The chances of it staying factual in the long term are really low.

    It's a born bullshitter. It knows a little about a lot, but it has no clue what's real and what's made up, or it doesn't care.

    If you want some text quickly that sounds right, and you genuinely don't care whether it is right at all, go for it, use an LLM. It'll be great at that.

  • Reading with a CEO mindset: right 30% of the time means 3 out of 10 employees can be fired.

  • This is crazy. I've literally been saying they are fallible. You're saying you professionally fed an LLM some type of dataset, so I can't really say what you were trying to accomplish, but I'm just arguing that having it process data is not what they're trained to do. LLMs are incredible tools, and I'm tired of people acting like they're not because they keep using them for things they're not built to do. It's not a fire-and-forget thing. It does need to be supervised and verified. It's not exactly an answer machine. But it's so good at parsing text and documents, summarizing, formatting, and acting like a search engine you can communicate with rather than trying to grok some arcane search syntax. Its power is in language applications.

    It is so much fun to just play around with and figure out where it can help. I'm constantly doing things on my computer, and it's great for instructions, especially if I hit a problem that's kind of unique and needs a bit of discussion to solve.

  • it’s so good at parsing text and documents, summarizing

    No. Not when it matters. It makes stuff up. The less you carefully check every single fucking thing it says, the more likely you are to believe some lies it subtly slipped in as it went along. If truth doesn't matter, go ahead and use LLMs.

    If you just want some ideas that you're going to sift through, independently verify and check for yourself with extreme skepticism as if Donald Trump were telling you how to achieve world peace, great, you're using LLMs effectively.

    But if you're trusting it, you're doing it very, very wrong and you're going to get humiliated because other people are going to catch you out in repeating an LLM's bullshit.

  • If it's as bad as you say, could you give an example of a prompt where it'll tell you incorrect information?

  • It's like you didn't listen to anything I ever said, or you discounted everything I said as fiction, but everything your dear LLM said is gospel truth in your eyes. It's utterly irrational. You have to be trolling me now.

  • Should be easy if it's that bad, though.

  • I already told you my experience of the crapness of LLMs and even explained why I can't share the prompt etc. You clearly weren't listening, or are incapable of taking in information.

    There's also all the testing done by the people in the article we're discussing, which you're also irrationally dismissing.

    You have extreme confirmation bias.

    Everything you hear that disagrees with your absurd faith in the accuracy of the extreme blagging of LLMs gets dismissed for any excuse you can come up with.

  • You're projecting here. I'm asking you to give an example of any prompt. You're saying it's so bad that it needs to be babysat because of its errors. I'm only asking for you to give an example, and you're saying that's confirmation bias and acting like I'm being religiously ignorant.

  • This is you
