AI agents wrong ~70% of the time: Carnegie Mellon study
-
Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.
Verify every single bloody line of output. The top three to five lines are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
-
Verify every single bloody line of output. The top three to five lines are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
It's not that bad, the output isn't random.
From time to time, it can produce novel stuff, like new equations for engineering.
Also, verification does not take that much effort. At least according to my colleagues, it is great.
It works well for coding well-known stuff, as well!
-
"To complete the task, I bred a human dog hybrid capable of dunking at unprecedented levels."
"Where are my balls Summer?"
-
I'd just like to point out that, from the perspective of somebody who has watched AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs.
Honestly, it is so scary. It could end up replacing me...
-
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs.
Honestly, it is so scary. It could end up replacing me...
Yeah, this is why I'm #fuck-ai, to be honest.
-
"Where are my balls Summer?"
The first dunk is the hardest
-
It's not that bad, the output isn't random.
From time to time, it can produce novel stuff, like new equations for engineering.
Also, verification does not take that much effort. At least according to my colleagues, it is great.
It works well for coding well-known stuff, as well!
It's not completely random, but I'm telling you it fucked up, it fucked up badly, time after time, and I had to check every single thing manually. Its streak of correctness never lasted beyond a handful of lines. If you build something using some equation it invented, you're insane and should quit engineering before you hurt someone.
-
And it won't be, until humans can agree on what is fact and what is not... there is always someone or some group spreading mis- or disinformation.
-
If that's the quality of answer you're getting, then it's user error
No, I know the data I gave it and I know how hard I tried to get it to use it truthfully.
You have an irrational and wildly inaccurate belief in the infallibility of LLMs.
You're also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?
Why are you giving it data? It's a chat and language tool. It's not data-based. You need something trained for that specific use. I think Wolfram Alpha has better tools for that.
I wouldn't trust it to calculate how many patio stones I need for a project. But I trust it to tell me where a good source is on a topic, or whether a quote was said by whoever, or to help me remember something when I only have vague pieces, like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or which princess was a skeptic and sent her scientists to villages to try to calm superstitious panic.
Other uses are things like digging around my computer and seeing what processes do what, or how concepts work regarding the thing I'm currently learning. So many excellent uses. But I fucking wouldn't trust it to do any kind of calculation.
-
You probably wanted to show off how smart you are, but instead you showed that you can't even talk to people without the help of your favourite slop bucket.
It didn't answer my curiosity about what came first, but it solidified my conviction that your brain is cooked all the way, probably beyond repair. I would say you need to seek professional help, but at this point you would interpret it as needing to talk to the autocomplete, and it will cook you even more.
It started funny, but I feel very sorry for you now, and it sucked all the humour out.
You just can't talk to people, period, you are just a dick, you were also just proven to be stupider than a fucking LLM, have a nice day
-
I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don't do that, and I use it as a very targeted solution when I'm feeling very lazy. Is it fast? Also not. I could actually be faster than AI in some cases.
But is it good when you've been working for 6 hours and you just don't have enough mental capacity for the rest of the day? Yes. You can prompt it specifically enough to get the desired result and just accept the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3pm? Yes.
My main issue is actually the fact that it saves first and then asks you to pick whether you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.
You should give Claude Code a shot if you have a Claude subscription. I'd say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won't suddenly be productive enough to employ, but I as a professional can use them to accelerate my own workflow. It's actually where the risk of them taking jobs is real: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.
But of course, the devil's in the details. The only reason this is cost-effective is that VC money is subsidizing and hiding the real cost of running these models.
-
Why are you giving it data? It's a chat and language tool. It's not data-based. You need something trained for that specific use. I think Wolfram Alpha has better tools for that.
I wouldn't trust it to calculate how many patio stones I need for a project. But I trust it to tell me where a good source is on a topic, or whether a quote was said by whoever, or to help me remember something when I only have vague pieces, like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or which princess was a skeptic and sent her scientists to villages to try to calm superstitious panic.
Other uses are things like digging around my computer and seeing what processes do what, or how concepts work regarding the thing I'm currently learning. So many excellent uses. But I fucking wouldn't trust it to do any kind of calculation.
Why are you giving it data?
Because there's a button for that.
Its output is dependent on the input
This thing that you said... It's false.
-
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It's a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on their task success rates. The results were underwhelming.
Gemini-2.5-Pro (30.3 percent)
Claude-3.7-Sonnet (26.3 percent)
Claude-3.5-Sonnet (24 percent)
Gemini-2.0-Flash (11.4 percent)
GPT-4o (8.6 percent)
o3-mini (4.0 percent)
Gemini-1.5-Pro (3.4 percent)
Amazon-Nova-Pro-v1 (1.7 percent)
Llama-3.1-405b (7.4 percent)
Llama-3.3-70b (6.9 percent)
Qwen-2.5-72b (5.7 percent)
Llama-3.1-70b (1.7 percent)
Qwen-2-72b (1.1 percent)
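As a side note on the scoring: the paper quote below mentions a second metric that gives extra credit for partially completed tasks. Here is a minimal, hedged sketch of how such a partial-credit score might be computed; the checkpoint structure, the 0.5 partial-credit factor, and all names are my assumptions for illustration, not taken from the paper.

```python
# Hypothetical checkpoint-based partial-credit scoring, in the spirit
# of a metric that rewards partially completed tasks. The names and
# the 0.5 partial-credit factor are assumptions, not from the paper.

def task_score(checkpoints_passed: list[bool], partial_factor: float = 0.5) -> float:
    """1.0 for a fully completed task; otherwise partial credit
    proportional to the fraction of checkpoints passed."""
    if all(checkpoints_passed):
        return 1.0
    return partial_factor * sum(checkpoints_passed) / len(checkpoints_passed)

def benchmark_score(tasks: list[list[bool]]) -> float:
    """Mean per-task score across the whole benchmark."""
    return sum(task_score(t) for t in tasks) / len(tasks)

# Example: one task completed, one half done, one failed outright.
tasks = [
    [True, True, True],           # completed -> 1.0
    [True, True, False, False],   # partial   -> 0.5 * 0.5 = 0.25
    [False, False],               # failed    -> 0.0
]
print(f"{benchmark_score(tasks):.1%}")  # 41.7%
```

Under a scheme like this, the partial-credit score can only sit at or above the plain completion rate, which is consistent with the 39.3 percent figure exceeding the 30.3 percent completion rate.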
"We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks," the authors state in their paper
-
Ah, my bad, you're right: for being consistently correct, I should have done 0.3^10 = 0.0000059049, so the chances of it being right ten times in a row are less than one thousandth of a percent.
No wonder I couldn't get it to summarise my list of data right and it was always lying by the 7th row.
That looks better. Even with a fair coin, 10 heads in a row is nearly impossible (about 1 in 1,024).
And if you are feeding the output back into a new instance of a model then the quality is highly likely to degrade.
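To make the compounding explicit, here is the same arithmetic as a runnable snippet (plain Python; the only inputs are the numbers already quoted above):

```python
# Probability that a process with per-step success rate p
# succeeds n times in a row is p ** n.
p_step = 0.3                 # ~30% per-task success rate from the study
n = 10
p_run = p_step ** n
print(f"{p_run:.10f}")       # 0.0000059049
print(f"{p_run:.5%}")        # 0.00059% -- under a thousandth of a percent

# For comparison, a fair coin landing heads ten times in a row:
print(f"{0.5 ** n:.6f}")     # 0.000977, about 1 in 1,024
```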
-
You just can't talk to people, period, you are just a dick, you were also just proven to be stupider than a fucking LLM, have a nice day
Did the autocomplete tell you to answer this? Don't answer, actually, save some energy.
-
Now I'm curious, what's the average score for humans?
-
The 256 thing was written by a person. AI doesn't have exclusive rights to being dumb, plenty of dumb people around.
You're right, the dumb of AI is completely comparable to the dumb of a human, there's no difference worth talking about, sorry I even spoke the fuck up
-
I asked Claude 3.5 Haiku to write me a quine in COBOL in the BS2000 dialect. Claude does know that creating a perfect quine in COBOL is challenging due to the need to represent the self-referential nature of the code. After a few suggestions, Claude restated its first draft, without proper BS2000 incantations, without a perform statement, and without any self-referential redefines. It's a lot of work. I stopped caring and moved on.
For those who wonder: https://sourceforge.net/p/gnucobol/discussion/lounge/thread/495d8008/ has an example.
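If you're wondering what a quine even is: a program whose output is exactly its own source code. A minimal sketch in a friendlier language than COBOL (a standard Python construction, unrelated to the linked BS2000 example):

```python
# The two lines below (ignoring these comments) form a quine: run by
# themselves, they print their own source exactly. %r inserts the
# repr of the string into itself, and %% escapes the literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```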
Colour me unimpressed. I dread the day when they force the use of 'AI' on us at work.
-
Why are you giving it data?
Because there's a button for that.
Its output is dependent on the input
This thing that you said... It's false.
There's a sleep button on my laptop. Doesn't mean I would use it.
I'm just trying to say you're using a feature that everyone kind of knows doesn't work. ChatGPT is not trained to do calculations well.
I just like technology, and I think and fully believe the left's hatred of it is not logical. I believe it stems from a lot of media and headlines. Why there's this push from media is a question I would like to know more about. But overall, I see a lot of the same makers of bullshit yellow journalism for this stuff on the left as I do for similar bullshit in right-wing spaces towards other things.
-
America: "Good enough to handle 911 calls!"
Is there really a plan to use this for 911 services??
-