
AI agents wrong ~70% of time: Carnegie Mellon study

Technology
  • The ones being implemented into emergency call centers are better though? Right?

  • LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.

    It's a shame that by applying the same "AI" naming to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - is hurting other more impressive advancements.

    For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

    Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.

    As are things like pattern/image analysis, which appear very promising in medical analysis.

    All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of "AI" tech, because they've learned not to trust anything branded as AI after being let down by LLMs.

  • Rookie numbers! Let’s pump them up!

    To match their tech bro hypers, they should be wrong at least 90% of the time.

  • LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.

    It's a shame that by applying the same "AI" naming to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - is hurting other more impressive advancements.

    For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

    Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.

    As are things like pattern/image analysis, which appear very promising in medical analysis.

    All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of "AI" tech, because they've learned not to trust anything branded as AI after being let down by LLMs.

    LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn't need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.

  • The ones being implemented into emergency call centers are better though? Right?

    Yes! We've gotten them up to 94% wrong at the behest of insurance agencies.

  • LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn't need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.

    Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they're great at:

    • writer's block - get something relevant on the page to get ideas flowing
    • narrowing down keywords for an unfamiliar topic
    • getting a quick intro to an unfamiliar topic
    • looking up facts you're having trouble remembering (i.e. you'll know it when you see it)

    Some things it's terrible at:

    • deep research - verify everything an LLM generates if accuracy is at all important
    • creating important documents/code
    • anything else where correctness is paramount

    I use LLMs a handful of times a week, and pretty much only when I'm stuck and need a kick in a new (hopefully right) direction.

  • I haven't used AI agents yet, but my job is kinda pushing for them. I have used the Google one that creates audio podcasts, though, just to play around, since my coworkers were using it to "learn" new things. I fed it some of my own writing and created a podcast. It was fun: an audio overview of what I wrote. About 80% was cool analysis, but 20% was straight-out-of-nowhere bullshit (which I know because I wrote the original texts the audio was talking about). I can't believe people are using this for subjects they have no knowledge of. It is a fun toy for a few minutes (which is not worth the cost to the environment anyway).

  • Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they're great at:

    • writer's block - get something relevant on the page to get ideas flowing
    • narrowing down keywords for an unfamiliar topic
    • getting a quick intro to an unfamiliar topic
    • looking up facts you're having trouble remembering (i.e. you'll know it when you see it)

    Some things it's terrible at:

    • deep research - verify everything an LLM generates if accuracy is at all important
    • creating important documents/code
    • anything else where correctness is paramount

    I use LLMs a handful of times a week, and pretty much only when I'm stuck and need a kick in a new (hopefully right) direction.

    • narrowing down keywords for an unfamiliar topic
    • getting a quick intro to an unfamiliar topic
    • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

    I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.

  • The researchers observed various failures during the testing process. These included agents neglecting to message a colleague as directed, the inability to handle certain UI elements like popups when browsing, and instances of deception. In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."

    OK, but I wonder who really tries to use AI for that?

    AI is not ready to replace a human completely, but there are specific tasks it does remarkably well.

  • "Gartner estimates only about 130 of the thousands of agentic AI vendors are real."

    This whole industry is so full of hype and scams, the bubble surely has to burst at some point soon.

  • The ones being implemented into emergency call centers are better though? Right?

    I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can't perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?

    • narrowing down keywords for an unfamiliar topic
    • getting a quick intro to an unfamiliar topic
    • looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)

    I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.

    Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I've spent hours trying different combinations until I found a handful of specific keywords that worked.

    Likewise, search is bad for getting a broad summary, unless someone has bothered to write it on a blog. But most information goes way too deep and you still need multiple sources to get there.

    Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.

    I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it'll be more effective. We have some local models at work that I use, and they're pretty helpful most of the time.

  • 70% seems pretty optimistic based on my experience...

  • LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn't need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.

    Because the tech industry hasn't had a real hit of its favorite poison, "private equity," in too long.

    The industry has played the same playbook since at least 2006. Likely before, but that's when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.

    This time it's AI, last time it was crypto, and we've had web 2.0, 3.0, and a few others I'm likely missing.

    But yeah, it's sold like a panacea every time, when really it's revolutionary for like a handful of tasks.

  • Wrong 70% doing what?

    I’ve used LLMs as a Stack Overflow / MSDN replacement for over a year and if they fucked up 7/10 questions I’d stop.

    Same with code: any free model can easily generate simple scripts and utilities with maybe a 10% error rate, definitely not 70%.

  • LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.

    It's a shame that by applying the same "AI" naming to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - is hurting other more impressive advancements.

    For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.

    Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.

    As are things like pattern/image analysis, which appear very promising in medical analysis.

    All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of "AI" tech, because they've learned not to trust anything branded as AI after being let down by LLMs.

    I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft's speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.

    It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.

  • In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."

    Ah ah, what the fuck.

    This is so stupid it's funny, but now imagine what kind of other "creative solutions" they might find.

  • While I do hope this leads to a pushback on "I just put all our corporate secrets into chatgpt":

    In the before times, people got their answers from stack overflow... or fricking youtube. And those are also wrong VERY VERY VERY often. Which is one of the biggest problems. The illegally scraped training data is from humans and humans are stupid.

  • I tried to order food at a Taco Bell drive-through the other day and they had an AI thing taking your order. I got so frustrated that I couldn't order something that was on the menu that I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.

    If you want to use AI, I'm not going to use your services or products unless I'm forced to. Looking at you Xfinity.
