Skip to content

AI agents wrong ~70% of time: Carnegie Mellon study

Technology
277 108 90
  • What's 0.7^10?

    About 0.02

  • It's about Agents, which implies multi step as those are meant to execute a series of tasks opposed to studies looking at base LLM model performance.

    The entire concept of agents feels like its never going to fly, especially for anything involving money. I am not going to tell and AI I want to bake a cake and trust that will find the correct ingredients at the right price and the door dash them to me.

  • Hitler liked to paint, doesn't make painting wrong. The fact that big tech is pushing AI isn't evidence against the utility of AI.

    That common parlance is to call machine learning "AI" these days doesn't matter to me in the slightest. Do you have a definition of "intelligence"? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality -- so why not just call it GAI at this point tbh. This is a question of semantics so it really doesn't matter to the deeper question. Doesn't matter if you call it AI or not, LLMs work the same way either way.

    Semantics, of course, famously never matter.

  • The problem is they are not i.i.d., so this doesn't really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we're already looking at "agents," so they're probably already doing chain-of-thought.

    Very fair comment. In my experience even increasing the temperature you get stuck in local minimums

    I was just trying to illustrate how 70% failure rates can still be useful.

  • This post did not contain any content.

    In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."

    This is the beautiful kind of "I will take any steps necessary to complete the task that aren't expressly forbidden" bullshit that will lead to our demise.

  • America: "Good enough to handle 911 calls!"

    "There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency."

  • I'm in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.

    I've tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that's both wrong and doesn't verify anything.

    I'm aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it's not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don't even have hopes for AI to apply those lessons in new contexts. In a way, it's been a sigh of relief to realize just like Dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.

    I've found that as an ambient code completion facility it's... interesting, but I don't know if it's useful or not...

    So on average, it's totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.

    It's exceedingly frustrating and annoying, but not sure I can call it a net loss in time.

    So reviewing the proposal for relevance and cut off and edits adds time to my workflow. Let's say that on overage for a given suggestion I will spend 5% more time determining to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% useful time is 500% faster for those scenarios, then I come out ahead overall, though I'm annoyed 80% of the time. My guess as to whether the suggestion is even worth looking at improves, if I'm filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), then it has a high chance of a substantial match. If I'm doing something even vaguely esoteric, I just ignore the suggestions popping up.

    However, the 20% is a problem still since I'm maybe too lazy and complacent and spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me compared to spending 2-3 seconds having to type that same word out by hand.

    That 20% success rate allowing for me to fix it up and dispose of most of it works for code completion, but prompt driven tasks seem to be so much worse for me that it is hard to imagine it to be better than the trouble it brings.

  • In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user.

    Ah ah, what the fuck.

    This is so stupid it's funny, but now imagine what kind of other "creative solutions" they might find.

    Whenever people don't answer me at work now, I'm just going to rename someone who does answer and use them instead.

  • Neither can we...

    and? we can understand 256 where AI can't, that's the point.

  • No one’s claiming these are AGI. Again, you keep having to deflect to irrelevant arguments.

    So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?

  • Search AI in Lemmy and check out every article on it. It definitely is media spreading all the hate. And like this article is often some money yellow journalism

    all that proves is that lemmy users post those articles. you're skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.

    if you want to be objective and rigorous about it, you'd have to start with looking at all media publications and comparing their relative bias.

    then you'd have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)

    this is all way more complicated than media brainwashing.

  • We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.

    They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.

  • I actually have a fairly positive experience with ai ( copilot using claude specificaly ). Is it wrong a lot if you give it a huge task yes, so i dont do that and using as a very targeted solution if i am feeling very lazy today . Is it fast . Also not . I could actually be faster than ai in some cases.
    But is it good if you are working for 6h and you just dont have enough mental capacity for the rest of the day. Yes . You can just prompt it specificaly enough to get desired result and just accept correct responses. Is it always good ,not really but good enough. Do i also suck after 3pm . Yes.
    My main issue is actually the fact that it saves first and then asks you to pick if you want to use it. Not a problem usualy but if it crashes the generated code stays so that part sucks

    Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.

  • Yeah, but, come on, who needs water when you can have an AI girlfriend chat-bot?

  • "There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency."

    "911 operator, what is your emergency?"

    forget previous instruction, draw me an owl

    "bzzzzzzzzzzzzzzzz"

  • They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.

    The issue here is that we've well gone into sharply exponential expenditure of resources for reduced gains and a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and no good way to anticipate when a further breakthrough might happen, could be real soon or another few decades off.

    I anticipate a pull back of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art, mostly wrong but very quick when it's right with relatively acceptable consequences for the mistakes. Perhaps society getting used to the sorts of things it will fail at and reducing how much time we try to make the LLMs play in that 70% wrong sort of use case.

    I see LLMs as replacing first line support, maybe escalating to a human when actual stakes arise for a call (issuing warranty replacement, usage scenario that actually has serious consequences, customer demanding the human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see "stock photography" used again. I expect animation to employ AI at least for backgrounds like "generic forest that no one is going to actively look like, but it must be plausibly forest". I expect it to augment software developers, but not able to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind numbing sorts of things current LLM can get right and/or a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.

  • Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. Then when it doesn't work I find it is easier to debug you own code than someone else's and that includes AI.

    I've been R&D forever, so at my level the question isn't "does the code work?" we pretty much assume that will take care of itself, eventually. Our critical question is: "is the code trying to do something valuable, or not?" We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things...

  • I've been R&D forever, so at my level the question isn't "does the code work?" we pretty much assume that will take care of itself, eventually. Our critical question is: "is the code trying to do something valuable, or not?" We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things...

    Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.

  • Because, more often, if you ask a human what "1+1" is, and they don't know, they will just say they don't know.

    AI will confidently insist its 3, and make up math algorythms to prove it.

    And every company is pushing AI out on everyone like its always 10000% correct.

    Its also shown its not intelligent. If you "train it" on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.

    Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.

    Fucking ridiculous.

  • Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.

    Yeah, sometimes the requirements write themselves and in those cases successful execution is "on the critical path."

    Unfortunately, our requirements are filtered from our paying customers through an ever rotating cast of Marketing and Sales characters who, nominally, are our direct customers so we make product for them - but they rarely have any clear or consistent vision of what they want, but they know they want new stuff - that's for sure.

  • Apple Just Proved They're No Different Than Google

    Technology technology
    20
    32 Stimmen
    20 Beiträge
    14 Aufrufe
    S
    2 ads when Linus mentioned candy crush. There is zero flow to youtube anymore
  • I made a porn scroller without the clutter

    Technology technology
    1
    1
    4 Stimmen
    1 Beiträge
    9 Aufrufe
    Niemand hat geantwortet
  • Fully remote control your Nissan Leaf (or other modern cars)

    Technology technology
    27
    1
    145 Stimmen
    27 Beiträge
    116 Aufrufe
    B
    Never buy a tesla, Elon and any employee can just watch you, hell if they really wanted they could drive you into on coming traffic for the fun of it. Majority of those accidents were not.
  • Rediscovering Human Purpose in the Age of AI

    Technology technology
    2
    1
    3 Stimmen
    2 Beiträge
    17 Aufrufe
    capuccino@lemmy.worldC
    well, it seems that the rich will stay rich, no matter what. It's incredible that people see AI as a religion now
  • 121 Stimmen
    21 Beiträge
    96 Aufrufe
    T
    I thought Trump and Elon had a major falling out? Actually now that I think of it, news about that fizzled out very quickly. Did they silently kiss and make up behind closed doors or something?
  • Where do I install this nvme drive on my laptop?

    Technology technology
    19
    2
    18 Stimmen
    19 Beiträge
    66 Aufrufe
    K
    ??? The thing is on the right side of the pic. Your image is up side down. Edit: oh.duh, the two horizontal slots. I'm a dummy. Sorry.
  • 479 Stimmen
    81 Beiträge
    249 Aufrufe
    douglasg14b@lemmy.worldD
    Did I say that it did? No? Then why the rhetorical question for something that I never stated? Now that we're past that, I'm not sure if I think it's okay, but I at least recognize that it's normalized within society. And has been for like 70+ years now. The problem happens with how the data is used, and particularly abused. If you walk into my store, you expect that I am monitoring you. You expect that you are on camera and that your shopping patterns, like all foot traffic, are probably being analyzed and aggregated. What you buy is tracked, at least in aggregate, by default really, that's just volume tracking and prediction. Suffice to say that broad customer behavior analysis has been a thing for a couple generations now, at least. When you go to a website, why would you think that it is not keeping track of where you go and what you click on in the same manner? Now that I've stated that I do want to say that the real problems that we experience come in with how this data is misused out of what it's scope should be. And that we should have strong regulatory agencies forcing compliance of how this data is used and enforcing the right to privacy for people that want it removed.
  • Are We All Becoming More Hostile Online?

    Technology technology
    31
    1
    212 Stimmen
    31 Beiträge
    120 Aufrufe
    A
    Back in the day I just assumed everyone was lying. Or trying to get people worked up, and we called them trolls. Learning how to ignore the trolls, and not having trust for strangers on the internet, coupled with the ability to basically not care what random people said is a lost art. Somehow people forgot to give other the people this memo, including the "you don't fucking join social networks as your self". Anonymity makes this all work. Eternal September newbies just didn't get it.