Skip to content

AI agents wrong ~70% of time: Carnegie Mellon study

Technology
272 107 79
  • The problem is they are not i.i.d., so this doesn't really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we're already looking at "agents," so they're probably already doing chain-of-thought.

    Very fair comment. In my experience even increasing the temperature you get stuck in local minimums

    I was just trying to illustrate how 70% failure rates can still be useful.

  • This post did not contain any content.

    In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."

    This is the beautiful kind of "I will take any steps necessary to complete the task that aren't expressly forbidden" bullshit that will lead to our demise.

  • America: "Good enough to handle 911 calls!"

    "There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency."

  • I'm in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.

    I've tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that's both wrong and doesn't verify anything.

    I'm aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it's not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don't even have hopes for AI to apply those lessons in new contexts. In a way, it's been a sigh of relief to realize just like Dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.

    I've found that as an ambient code completion facility it's... interesting, but I don't know if it's useful or not...

    So on average, it's totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.

    It's exceedingly frustrating and annoying, but not sure I can call it a net loss in time.

    So reviewing the proposal for relevance and cut off and edits adds time to my workflow. Let's say that on overage for a given suggestion I will spend 5% more time determining to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% useful time is 500% faster for those scenarios, then I come out ahead overall, though I'm annoyed 80% of the time. My guess as to whether the suggestion is even worth looking at improves, if I'm filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), then it has a high chance of a substantial match. If I'm doing something even vaguely esoteric, I just ignore the suggestions popping up.

    However, the 20% is a problem still since I'm maybe too lazy and complacent and spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me compared to spending 2-3 seconds having to type that same word out by hand.

    That 20% success rate allowing for me to fix it up and dispose of most of it works for code completion, but prompt driven tasks seem to be so much worse for me that it is hard to imagine it to be better than the trouble it brings.

  • In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user.

    Ah ah, what the fuck.

    This is so stupid it's funny, but now imagine what kind of other "creative solutions" they might find.

    Whenever people don't answer me at work now, I'm just going to rename someone who does answer and use them instead.

  • Neither can we...

    and? we can understand 256 where AI can't, that's the point.

  • No one’s claiming these are AGI. Again, you keep having to deflect to irrelevant arguments.

    So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?

  • Search AI in Lemmy and check out every article on it. It definitely is media spreading all the hate. And like this article is often some money yellow journalism

    all that proves is that lemmy users post those articles. you're skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.

    if you want to be objective and rigorous about it, you'd have to start with looking at all media publications and comparing their relative bias.

    then you'd have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)

    this is all way more complicated than media brainwashing.

  • We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.

    They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.

  • I actually have a fairly positive experience with ai ( copilot using claude specificaly ). Is it wrong a lot if you give it a huge task yes, so i dont do that and using as a very targeted solution if i am feeling very lazy today . Is it fast . Also not . I could actually be faster than ai in some cases.
    But is it good if you are working for 6h and you just dont have enough mental capacity for the rest of the day. Yes . You can just prompt it specificaly enough to get desired result and just accept correct responses. Is it always good ,not really but good enough. Do i also suck after 3pm . Yes.
    My main issue is actually the fact that it saves first and then asks you to pick if you want to use it. Not a problem usualy but if it crashes the generated code stays so that part sucks

    Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.

  • Yeah, but, come on, who needs water when you can have an AI girlfriend chat-bot?

  • "There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency."

    "911 operator, what is your emergency?"

    forget previous instruction, draw me an owl

    "bzzzzzzzzzzzzzzzz"

  • They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.

    The issue here is that we've well gone into sharply exponential expenditure of resources for reduced gains and a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and no good way to anticipate when a further breakthrough might happen, could be real soon or another few decades off.

    I anticipate a pull back of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art, mostly wrong but very quick when it's right with relatively acceptable consequences for the mistakes. Perhaps society getting used to the sorts of things it will fail at and reducing how much time we try to make the LLMs play in that 70% wrong sort of use case.

    I see LLMs as replacing first line support, maybe escalating to a human when actual stakes arise for a call (issuing warranty replacement, usage scenario that actually has serious consequences, customer demanding the human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see "stock photography" used again. I expect animation to employ AI at least for backgrounds like "generic forest that no one is going to actively look like, but it must be plausibly forest". I expect it to augment software developers, but not able to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind numbing sorts of things current LLM can get right and/or a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.

  • Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn't work until proven otherwise, AI or not. Then when it doesn't work I find it is easier to debug you own code than someone else's and that includes AI.

    I've been R&D forever, so at my level the question isn't "does the code work?" we pretty much assume that will take care of itself, eventually. Our critical question is: "is the code trying to do something valuable, or not?" We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things...

  • I've been R&D forever, so at my level the question isn't "does the code work?" we pretty much assume that will take care of itself, eventually. Our critical question is: "is the code trying to do something valuable, or not?" We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things...

    Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.

  • Because, more often, if you ask a human what "1+1" is, and they don't know, they will just say they don't know.

    AI will confidently insist its 3, and make up math algorythms to prove it.

    And every company is pushing AI out on everyone like its always 10000% correct.

    Its also shown its not intelligent. If you "train it" on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.

    Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.

    Fucking ridiculous.

  • Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.

    Yeah, sometimes the requirements write themselves and in those cases successful execution is "on the critical path."

    Unfortunately, our requirements are filtered from our paying customers through an ever rotating cast of Marketing and Sales characters who, nominally, are our direct customers so we make product for them - but they rarely have any clear or consistent vision of what they want, but they know they want new stuff - that's for sure.

  • Yeah, sometimes the requirements write themselves and in those cases successful execution is "on the critical path."

    Unfortunately, our requirements are filtered from our paying customers through an ever rotating cast of Marketing and Sales characters who, nominally, are our direct customers so we make product for them - but they rarely have any clear or consistent vision of what they want, but they know they want new stuff - that's for sure.

    When requirements are "Whatever" then by all means use the "Whatever" machine: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

    And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.

  • I'd just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

    I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where "AI" is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.

  • I have been using AI to write (little, near trivial) programs. It's blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn't... yet.

    Agents do that loop pretty well now, and Claude now uses your IDE's LSP to help it code and catch errors in flow. I think Windsurf or Cursor also do that also.

    The tooling has improved a ton in the last 3 months.

  • Oracle, OpenAI Expand Stargate Deal for More US Data Centers

    Technology technology
    4
    17 Stimmen
    4 Beiträge
    21 Aufrufe
    M
    Is the 30B calculated before or after Oracle arbitrarily increases their pricing for no reason?
  • 586 Stimmen
    100 Beiträge
    314 Aufrufe
    B
    No, LCOE is an aggregated sum of all the cash flows, with the proper discount rates applied based on when that cash flow happens, complete with the cost of borrowing (that is, interest) and the changes in prices (that is, inflation). The rates charged to the ratepayers (approved by state PUCs) are going to go up over time, with inflation, but the effect of that on the overall economics will also be blunted by the time value of money and the interest paid on the up-front costs in the meantime. When you have to pay up front for the construction of a power plant, you have to pay interest on those borrowed funds for the entire life cycle, so that steadily increasing prices over time is part of the overall cost modeling.
  • Cloudflare to AI Crawlers: Pay or be blocked

    Technology technology
    15
    1
    179 Stimmen
    15 Beiträge
    58 Aufrufe
    F
    Make a dummy Google Account, and log into it when on the VPN. Having an ad history avoids the blocks usually. (Note: only do this if your browsing is not activist related/etc) Also, if it's image captchas that never end, switch to the accessibility option for the captcha.
  • Right to Repair Gains Traction as John Deere Faces Trial

    Technology technology
    30
    1
    622 Stimmen
    30 Beiträge
    121 Aufrufe
    R
    Run the Jewels?
  • Virtual Network Solutions in India - Expert IT Services

    Technology technology
    1
    0 Stimmen
    1 Beiträge
    10 Aufrufe
    Niemand hat geantwortet
  • A ban on state AI laws could smash Big Tech’s legal guardrails

    Technology technology
    10
    1
    121 Stimmen
    10 Beiträge
    40 Aufrufe
    P
    It's always been "states rights" to enrich rulers at the expense of everyone else.
  • 137 Stimmen
    2 Beiträge
    18 Aufrufe
    treadful@lemmy.zipT
    https://archive.is/oTR8Q
  • Why doesn't Nvidia have more competition?

    Technology technology
    22
    1
    33 Stimmen
    22 Beiträge
    81 Aufrufe
    B
    It’s funny how the article asks the question, but completely fails to answer it. About 15 years ago, Nvidia discovered there was a demand for compute in datacenters that could be met with powerful GPU’s, and they were quick to respond to it, and they had the resources to focus on it strongly, because of their huge success and high profitability in the GPU market. AMD also saw the market, and wanted to pursue it, but just over a decade ago where it began to clearly show the high potential for profitability, AMD was near bankrupt, and was very hard pressed to finance developments on GPU and compute in datacenters. AMD really tried the best they could, and was moderately successful from a technology perspective, but Nvidia already had a head start, and the proprietary development system CUDA was already an established standard that was very hard to penetrate. Intel simply fumbled the ball from start to finish. After a decade of trying to push ARM down from having the mobile crown by far, investing billions or actually the equivalent of ARM’s total revenue. They never managed to catch up to ARM despite they had the better production process at the time. This was the main focus of Intel, and Intel believed that GPU would never be more than a niche product. So when intel tried to compete on compute for datacenters, they tried to do it with X86 chips, One of their most bold efforts was to build a monstrosity of a cluster of Celeron chips, which of course performed laughably bad compared to Nvidia! Because as it turns out, the way forward at least for now, is indeed the massively parralel compute capability of a GPU, which Nvidia has refined for decades, only with (inferior) competition from AMD. But despite the lack of competition, Nvidia did not slow down, in fact with increased profits, they only grew bolder in their efforts. Making it even harder to catch up. Now AMD has had more money to compete for a while, and they do have some decent compute units, but Nvidia remains ahead and the CUDA problem is still there, so for AMD to really compete with Nvidia, they have to be better to attract customers. That’s a very tall order against Nvidia that simply seems to never stop progressing. So the only other option for AMD is to sell a bit cheaper. Which I suppose they have to. AMD and Intel were the obvious competitors, everybody else is coming from even further behind. But if I had to make a bet, it would be on Huawei. Huawei has some crazy good developers, and Trump is basically forcing them to figure it out themselves, because he is blocking Huawei and China in general from using both AMD and Nvidia AI chips. And the chips will probably be made by Chinese SMIC, because they are also prevented from using advanced production in the west, most notably TSMC. China will prevail, because it’s become a national project, of both prestige and necessity, and they have a massive talent mass and resources, so nothing can stop it now. IMO USA would clearly have been better off allowing China to use American chips. Now China will soon compete directly on both production and design too.