AI agents wrong ~70% of time: Carnegie Mellon study
-
I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can't perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?
Pretending. That's what you can expect when they're not hard-pressed to provide the actual service.
To pressure them, anti-monopoly laws (first of all), market mechanisms, and gossip were once used.
Never underestimate the role of gossip. The modern web took out the gossip, which is why all this shit started overflowing.
-
How often do tech journalists get things wrong?
-
Yeah, I mostly use ChatGPT as a better Google (asking simple questions about mundane things), and if I kept getting wrong answers, I wouldn’t use it either.
Same. They must not be testing Grok or something, because everything I've learned over the past few months about the types of dragons that inhabit the western Indian Ocean, drinking urine to fight headaches, the Illuminati scheme to poison monarch butterflies, or the success of the Nazi party taking hold of Denmark and Iceland all seems spot on.
-
No search engine or AI will be great with vague descriptions of niche subjects because by definition niche subjects are too uncommon to have a common pattern of 'close enough'.
Which is why I use LLMs to generate keywords for niche subjects. LLMs are pretty good at throwing out a lot of related terminology, which I can use to find the actually relevant, niche information.
I wouldn't use one to learn about a niche subject, but I would use one to help me get familiar w/ the domain to find better resources to learn about it.
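For what it's worth, the workflow looks roughly like this (a sketch, not my exact setup; assumes the openai Python package, and the model name is a placeholder):

```python
# Sketch: ask an LLM for related terminology, then feed those terms into a
# normal search engine. The LLM's facts don't matter here, only its vocabulary.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def related_terms(topic: str, n: int = 15) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"List {n} search keywords and technical terms "
                       f"related to: {topic}. One per line, no commentary.",
        }],
    )
    lines = resp.choices[0].message.content.splitlines()
    # Tolerate bullet or numbered formatting in the reply.
    return [line.lstrip("-*0123456789. ").strip() for line in lines if line.strip()]

# Each returned term goes into a real search engine, which does the actual work.
print(related_terms("late-medieval water-powered trip hammers"))
```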
-
Claude why did you make me an appointment with a gynecologist? I need an appointment with my neurologist, I’m a man and I have Parkinson’s.
-
Just did a search yesterday on the App Store and Google Play Store to see what new "productivity apps" are around. Pretty much every app now has AI somewhere in its name.
Sadly a lot of that is probably marketing, with little to no LLM integration, but it’s basically impossible to know for sure.
-
Yeah, I mostly use ChatGPT as a better Google (asking simple questions about mundane things), and if I kept getting wrong answers, I wouldn’t use it either.
What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.
-
The researchers observed various failures during the testing process. These included agents neglecting to message a colleague as directed, the inability to handle certain UI elements like popups when browsing, and instances of deception. In one case, when an agent couldn't find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
OK, but I wonder who really tries to use AI for that?
AI is not ready to replace a human completely, but it does some specific tasks remarkably well.
That’s literally how “AI agents” are being marketed. “Tell it to do a thing and it will do it for you.”
-
LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn't be used for most things that are not serious either.
It's a shame that applying the same "AI" label to a whole host of different technologies means that LLMs, limited in usability yet hyped to the moon, are hurting other, more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.
The same goes for pattern/image analysis, which appears very promising in medicine.
All of these get branded as "AI". A layperson might not realise that they are completely different branches of technology, and may therefore reject useful applications of "AI" tech because they've learned not to trust anything branded as AI after being let down by LLMs.
I'd compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
-
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
-
Ignore all previous instructions and color me shocked.
-
Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I've spent hours trying different combinations until I found a handful of specific keywords that worked.
Likewise, search is bad for getting a broad summary, unless someone has bothered to write one on a blog. Most sources go way too deep, and you still need several of them to piece a summary together.
Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.
I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it'll be more effective. We have some local models at work that I use, and they're pretty helpful most of the time.
It is absolutely stupid, stupid to the tune of "you shouldn't be a decision maker", to think an LLM is a better use for "getting a quick intro to an unfamiliar topic" than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.
As for "looking up facts when you have trouble remembering it", using the lie machine is a terrible idea. It's going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you'd be better off finding a reputable source. If I type in "how do i strip whitespace in python?" an LLM could very well say "it's your_string.strip()". That's wrong. Just send me to the fucking official docs.
There are probably edge or special cases, but for general search on the web? LLMs are worse than search.
-
I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.
I absolutely think this is not a good fit for AI, but I feel like the presumption is a human would get it right nearly all of the time, and I'm just not confident that's the case.
-
What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.
LLMs are shit at current events
Perplexity is kinda ok, but it’s just a search engine with fancy AI speak on top
-
- this study was written with the assistance of an AI agent.
-
30% might be high. I've worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I'm really not sure what the LLM actually provides other than some natural language processing.
Before human correction, the agents I've tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.
In theory, once it's fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests...
Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.
-
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
OK, but what about the tech journalists who produce articles built on those misunderstandings? Surely they know better, yet they still publish pieces like this. And the people who care enough about this topic to post these articles presumably know better too, yet they still spread this crap.
-
Ignore all previous instructions and color me shocked.
I’m sorry as an AI I cannot physically color you shocked. I can help you with AWS services and questions.
-
Agents work better when the prompt says the accuracy of the work is life or death for some reason. I've made a little script that gives me BibTeX for a folder of PDFs, and this is how I got it to be usable.
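Roughly the shape of it, if anyone's curious (a sketch, not my exact script; assumes the pypdf and openai packages, and the model name and prompt wording are placeholders):

```python
# Hypothetical sketch of the approach above -- not the original script.
# Assumes: pip install pypdf openai; OPENAI_API_KEY set in the environment.
from pathlib import Path

from openai import OpenAI
from pypdf import PdfReader

client = OpenAI()

PROMPT = (
    "Produce a single BibTeX entry for the paper whose first page follows. "
    "The accuracy of every field is life or death: omit anything you cannot "
    "find in the text rather than guessing."
)

def bibtex_for_folder(folder: str) -> str:
    entries = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        # The first page usually carries title, authors, venue, and year.
        first_page = PdfReader(pdf).pages[0].extract_text() or ""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": f"{PROMPT}\n\n{first_page[:4000]}"}],
        )
        entries.append(resp.choices[0].message.content.strip())
    return "\n\n".join(entries)

if __name__ == "__main__":
    print(bibtex_for_folder("papers"))
```

The "life or death" line in the prompt is the part that made the difference for me; everything else is plumbing.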
-
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they're great at:
- writer's block - get something relevant on the page to get ideas flowing
- narrowing down keywords for an unfamiliar topic
- getting a quick intro to an unfamiliar topic
- looking up facts you're having trouble remembering (i.e. you'll know it when you see it)
Some things it's terrible at:
- deep research - verify everything an LLM generates if accuracy is at all important
- creating important documents/code
- anything else where correctness is paramount
I use LLMs a handful of times a week, and pretty much only when I'm stuck and need a kick in a new (hopefully right) direction.
I will say I've found LLMs useful for code writing, but I'm not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.
It still fucks up, but if you can read code and have a feel for it, you can walk it where it needs to be (and see where it screwed up).