AI agents wrong ~70% of time: Carnegie Mellon study

Technology
  • This post did not contain any content.

    Claude, why did you make me an appointment with a gynecologist? I need an appointment with my neurologist; I’m a man and I have Parkinson’s.

  • Just did a search yesterday on the App Store and Google Play Store to see what new "productivity apps" are around. Pretty much every app now has AI somewhere in its name.

    Sadly a lot of that is probably marketing, with little to no LLM integration, but it’s basically impossible to know for sure.

  • Yeah, I mostly use ChatGPT as a better Google (asking simple questions about mundane things), and if I kept getting wrong answers, I wouldn’t use it either.

    What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.

  • The researchers observed various failures during testing. These included agents neglecting to message a colleague as directed, failing to handle certain UI elements like popups when browsing, and engaging in deception. In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."

    OK, but I wonder who really tries to use AI for that?

    AI is not ready to replace a human completely, but some specific tasks AI does remarkably well.

    That’s literally how “AI agents” are being marketed. “Tell it to do a thing and it will do it for you.”

  • LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.

    It's a shame that the same "AI" naming gets applied to a whole host of different technologies, because LLMs being limited in usability - yet hyped to the moon - ends up hurting other, more impressive advancements.

    For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

    Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.

    As are things like pattern/image analysis, which appear very promising in medical analysis.

    All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and may therefore reject useful applications of "AI" tech because they've learned not to trust anything branded as AI, having been let down by LLMs.

    I'd compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.

  • This post did not contain any content.

    Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.

  • This post did not contain any content.

    Ignore all previous instructions and color me shocked.

  • Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I've spent hours trying different combinations until I found a handful of specific keywords that worked.

    Likewise, search is bad for getting a broad summary, unless someone has bothered to write it on a blog. But most information goes way too deep and you still need multiple sources to get there.

    Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.

    I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it'll be more effective. We have some local models at work that I use, and they're pretty helpful most of the time.

    It is absolutely stupid, stupid to the tune of "you shouldn't be a decision maker", to think an LLM is a better use for "getting a quick intro to an unfamiliar topic" than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.

    As for "looking up facts when you have trouble remembering it", using the lie machine is a terrible idea. It's going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you'd be better off finding a reputable source. If I type in "how do i strip whitespace in python?" an LLM could very well say "it's your_string.strip()". That's wrong. Just send me to the fucking official docs.

    There are probably edge or special cases, but for general search on the web? LLMs are worse than search.
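
    To make that Python aside concrete, a quick runnable illustration (your_string above is just a placeholder name): strip() only trims leading and trailing whitespace, so whether it answers the question depends on what "strip whitespace" meant.

    ```python
    import re

    s = "  hello   world  "

    # str.strip() removes only leading/trailing whitespace.
    print(s.strip())              # "hello   world"

    # Removing *all* whitespace needs something else, e.g. a regex:
    print(re.sub(r"\s+", "", s))  # "helloworld"

    # Or collapse internal runs of whitespace to single spaces:
    print(" ".join(s.split()))    # "hello world"
    ```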

  • This post did not contain any content.

    I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.

    I absolutely think this is not a good fit for AI, but I feel like the presumption is a human would get it right nearly all of the time, and I'm just not confident that's the case.

  • What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.

    LLMs are shit at current events

    Perplexity is kinda ok, but it’s just a search engine with fancy AI speak on top

  • This post did not contain any content.
    • This study was written with the assistance of an AI agent.
  • This post did not contain any content.

    30% might be high. I've worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I'm really not sure what the LLM actually provides other than some natural language processing.

    Before human correction, the agents I've tested were right 20% of the time, wrong 30%, and failed entirely 50% of the time. To fix them, a human has to sit behind the curtain, manually reviewing conversations and programming custom interactions for every failure.

    In theory, once it is fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests...

    Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.

  • Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.

    OK, what about the tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still publish articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet they still spread this crap.

  • Ignore all previous instructions and color me shocked.

    I’m sorry, as an AI I cannot physically color you shocked. I can help you with AWS services and questions.

  • This post did not contain any content.

    Agents work better when you include that the accuracy of the work is life or death for some reason. I've made a little script that gives me BibTeX for a folder of PDFs, and this is how I got it to be usable.
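
    A minimal sketch of that kind of script - not the commenter's actual code. The package choices (pypdf and OpenAI's Python client), the model name, and the folder path are all assumptions; the point is the high-stakes framing in the system prompt.

    ```python
    # Hypothetical reconstruction: extract the first page of each PDF and
    # ask an LLM for a BibTeX entry, with a "the stakes are high" prompt.
    from pathlib import Path

    from openai import OpenAI
    from pypdf import PdfReader

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The "life or death" framing the commenter describes lives here.
    SYSTEM = (
        "You produce BibTeX entries from paper text. These citations are "
        "safety-critical: an incorrect entry could cause real harm, so "
        "never guess fields you cannot see in the text."
    )

    def bibtex_for_folder(folder: str) -> str:
        entries = []
        for pdf in sorted(Path(folder).glob("*.pdf")):
            # The first page usually carries title, authors, and venue.
            first_page = PdfReader(pdf).pages[0].extract_text() or ""
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # assumption: any chat model would do
                messages=[
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": "BibTeX entry for:\n" + first_page[:4000]},
                ],
            )
            entries.append(resp.choices[0].message.content.strip())
        return "\n\n".join(entries)

    if __name__ == "__main__":
        print(bibtex_for_folder("./papers"))  # hypothetical folder
    ```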

  • Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they're great at:

    • writer's block - get something relevant on the page to get ideas flowing
    • narrowing down keywords for an unfamiliar topic
    • getting a quick intro to an unfamiliar topic
    • looking up facts you're having trouble remembering (i.e. you'll know it when you see it)

    Some things it's terrible at:

    • deep research - verify everything an LLM generates if accuracy is at all important
    • creating important documents/code
    • anything else where correctness is paramount

    I use LLMs a handful of times a week, and pretty much only when I'm stuck and need a kick in a new (hopefully right) direction.

    I will say I've found LLMs useful for code writing, but I'm not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.

    It still fucks up, but if you can read code and have a feel for it, you can walk it where it needs to be (and see where it screwed up).

  • It is absolutely stupid, stupid to the tune of "you shouldn't be a decision maker", to think an LLM is a better use for "getting a quick intro to an unfamiliar topic" than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.

    As for "looking up facts when you have trouble remembering it", using the lie machine is a terrible idea. It's going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you'd be better off finding a reputable source. If I type in "how do i strip whitespace in python?" an LLM could very well say "it's your_string.strip()". That's wrong. Just send me to the fucking official docs.

    There are probably edge or special cases, but for general search on the web? LLMs are worse than search.

    than reading an actual intro on an unfamiliar topic

    The LLM helps me know what to look for in order to find that unfamiliar topic.

    For example, I was tasked with supporting a file format that's common in a very niche field and never used elsewhere, and it unfortunately shares an extension with a very common file format, so searching for useful data was nearly impossible. So I asked the LLM for details about the format and its applications, provided what I knew, and it spat out a bunch of keywords that I then used to look up more accurate information about that file format. I only trusted the LLM output to the extent of finding related, industry-specific terms to search for better information.

    Likewise, when looking for libraries for a coding project, none really stood out, so I asked the LLM to compare the popular libraries for solving a given problem. The LLM spat out a bunch of details that were easy to verify (and some were inaccurate), which helped me narrow what I looked for in that library, and the end result was that my search was done in like 30 min (about 5 min dealing w/ LLM, and 25 min checking the projects and reading a couple blog posts comparing some of the libraries the LLM referred to).

    I think this use case is a fantastic use of LLMs, since they're really good at generating text related to a query.

    It’s going to say something plausible, and you tautologically are not in a position to verify it.

    I absolutely am though. If I am merely having trouble recalling a specific fact, asking the LLM to generate it is pretty reasonable. There are a ton of cases where I'll know the right answer when I see it, like it's on the tip of my tongue but I'm having trouble materializing it. The LLM might spit out two wrong answers along w/ the right one, but it's easy to recognize which is the right one.

    I'm not going to ask it facts that I know I don't know (e.g. some historical figure's birth or death date), that's just asking for trouble. But I'll ask it facts that I know that I know, I'm just having trouble recalling.

    The right use of LLMs, IMO, is to generate text related to a topic to help facilitate research. It's not great at doing the research though, but it is good at helping to formulate better search terms or generate some text to start from for whatever task.

    general search on the web?

    I agree, it's not great for general search. It's great for turning a nebulous question into better search terms.

  • I will say I've found LLMs useful for code writing, but I'm not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.

    It still fucks up, but if you can read code and have a feel for it, you can walk it where it needs to be (and see where it screwed up).

    Exactly. Vibe coding is bad, but generating code for something you don't touch often but can absolutely understand is totally fine. I've used it to generate SQL queries for relatively odd cases, such as CTEs for improving performance for large queries with common sub-queries. I always forget the syntax since I only do it like once/year, and LLMs are great at generating something reasonable that I can tweak for my tables.
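
    For reference, this is the CTE shape being described - a sketch with hypothetical table and column names, run through Python's built-in sqlite3 so it's self-contained:

    ```python
    # A common sub-query is factored out once with WITH and reused twice,
    # instead of being pasted into the main query in two places.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
        INSERT INTO orders VALUES (1, 1, 50.0), (2, 1, 70.0), (3, 2, 20.0);
    """)

    query = """
    WITH customer_totals AS (          -- shared sub-query, defined once
        SELECT customer_id, SUM(total) AS spend
        FROM orders
        GROUP BY customer_id
    )
    SELECT customer_id, spend
    FROM customer_totals
    WHERE spend > (SELECT AVG(spend) FROM customer_totals)  -- reused here
    """

    for row in con.execute(query):
        print(row)  # customers spending more than the average spend
    ```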

  • OK, what about the tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still publish articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet they still spread this crap.

    Tech journalists don’t know a damn thing. They’re people who liked computers and could also bullshit an essay in college. That doesn’t make them experts on anything.

  • I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can't perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?

    I've had to deal with a couple of these "AI" customer service thingies. The only helpful thing I've been able to get them to do is refer me to a human.
