
AI agents wrong ~70% of time: Carnegie Mellon study

Technology
  • and doesn't need to be exactly right

    What kind of tasks do you consider that don't need to be exactly right?

    Description generators for TTRPGs, as you will read through them afterwards anyway and correct when necessary.

    Generating lists of ideas. For creative writing, getting a bunch of ideas you can pick and choose from that fit the narrative you want.

    A search engine like Perplexity.ai which after searching summarizes the web page and adds a link to the page next to it. If the summary seems promising, you go to the real page to verify the actual information.

    Simple code like HTML pages and boilerplate code that you will still review afterwards anyway.

  • When LLMs get it right it's because they're summarizing a Stack Overflow or GitHub snippet it was trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.

    You mean things you had to do anyway even if you didn't use LLMs?

  • That’s literally how “AI agents” are being marketed. “Tell it to do a thing and it will do it for you.”

    So? That doesn't mean they are supposed to be used like that.

    Show me any marketing that isn't full of lies.

  • The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.

    Finally, I hit on some things it can do. For me: keeping the instructions more general, not specifying certain libraries for instance, was the key to getting something that actually does something. Also, if it doesn't show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.

    Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?

    I'm not joking, it really works

    For example:

    Instead of "You are an intelligent coding assistant..."

    "You are an absolute fucking idiot who can barely code..."

  • Emotion > Facts. Most people have been trained to blindly accept things and cheer on whatever fits their agenda. Like tech bros exaggerating LLMs, or people like you misrepresenting LLMs as mere statistical word generators without intelligence. That's like saying a computer is just wires and switches, or missing the forest for the trees. Both are equally false.

    Yet if it fits with the emotional needs or with dogma, then others will agree. It's a convenient and comforting "A vs B" worldview we've been trained to accept. And so the satisfying notion and misinformation keep spreading.

    LLMs tell us more about human intelligence and the human slop we've been generating. It tells us that most people are not that much more than statistical word generators.

    people like you misrepresenting LLMs as mere statistical word generators without intelligence.

    You've bought into the hype. I won't try to argue with you because you aren't cognizant of reality.

  • This post did not contain any content.

    We have created the overconfident intern in digital form.

  • When LLMs get it right it's because they're summarizing a Stack Overflow or GitHub snippet it was trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.

    You’re not wrong, but often I’m just trying to do something I’ve done a thousand times before, and I already know the pitfalls. Also, I’m sure I’ve copied code from Stack Overflow before.

  • This post did not contain any content.

    Hey I went there

  • people like you misrepresenting LLMs as mere statistical word generators without intelligence.

    You've bought into the hype. I won't try to argue with you because you aren't cognizant of reality.

    You're projecting. Every accusation is a confession.

  • Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?

    I'm not joking, it really works

    For example:

    Instead of "You are an intelligent coding assistant..."

    "You are an absolute fucking idiot who can barely code..."

    “You are an absolute fucking idiot who can barely code…”

    Honestly, that's what you have to do. It's the only way I can get through using Claude.ai. I treat it like it's an absolute moron, I insult it, I "yell" at it, I threaten it, and guess what? The solutions have gotten better. Not great, but a hell of a lot better than what they used to be. It really works. It forces it to really think through the problem, research solutions, cite sources, etc. I have even told it I'll cancel my subscription to it if it gets it wrong.

    No more "do this and this and then this but do this first and then do this": after calling it a "fucking moron" and what have you, it will provide an answer and just say "done."

  • “You are an absolute fucking idiot who can barely code…”

    Honestly, that's what you have to do. It's the only way I can get through using Claude.ai. I treat it like it's an absolute moron, I insult it, I "yell" at it, I threaten it, and guess what? The solutions have gotten better. Not great, but a hell of a lot better than what they used to be. It really works. It forces it to really think through the problem, research solutions, cite sources, etc. I have even told it I'll cancel my subscription to it if it gets it wrong.

    No more "do this and this and then this but do this first and then do this": after calling it a "fucking moron" and what have you, it will provide an answer and just say "done."

    This guy is the moral lesson at the start of the apocalypse movie

  • This post did not contain any content.

    This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.

    All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?

    DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.

    The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.

  • Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?

    I'm not joking, it really works

    For example:

    Instead of "You are an intelligent coding assistant..."

    "You are an absolute fucking idiot who can barely code..."

    I frequently find myself prompting it: "now show me the whole program with all the errors corrected." Sometimes I have to ask that two or three times, in different ways, before it coughs up the next iteration ready to copy-paste-test. Most times when it gives errors I'll just write "address: " and copy-paste the error message in; frequently the text of the AI response will apologize, less frequently it will actually fix the error.
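
    (A rough sketch of that "address: <error>" loop, for the curious. The ask() helper here is hypothetical, standing in for whatever chat interface you use, and the retry count is arbitrary.)

        # Rough sketch of the "address: <error>" loop described above.
        # ask() is a hypothetical helper wrapping whatever chat API you use.
        import subprocess
        import sys

        def run_candidate(source: str) -> str | None:
            """Run the generated program; return stderr on failure, None on success."""
            with open("candidate.py", "w") as f:
                f.write(source)
            proc = subprocess.run(
                [sys.executable, "candidate.py"],
                capture_output=True, text=True, timeout=30,
            )
            return None if proc.returncode == 0 else proc.stderr

        code = ask("Write the whole program that does X.")  # hypothetical helper
        for _ in range(3):  # a few attempts, as described above
            error = run_candidate(code)
            if error is None:
                break
            code = ask("address: " + error +
                       "\nNow show me the whole program with all the errors corrected.")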

  • This guy is the moral lesson at the start of the apocalypse movie

    He's developing a toxic relationship with his AI agent. I don't think it's the best way to get what you want (demonstrating how to be abusive to the AI), but maybe it's the only method he is capable of getting results with.

  • This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.

    All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?

    DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.

    The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.

    Maybe the marketers should be a bit more picky about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that's just me, and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.

  • Maybe the marketers should be a bit more picky about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out. But maybe that's just me, and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.

    I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.

  • I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.

    Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn't really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that's an awfully long way off from talking about AI itself (unless we've bought into the marketing hype).

  • Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn't really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that's an awfully long way off from talking about AI itself (unless we've bought into the marketing hype).

    So you’re saying the article’s measurements about AI agents being wrong 70% of the time is made up? Or is AI performance only measurable when the results help anti-AI narratives?

  • This post did not contain any content.

    please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro

  • It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

    I usually write 3x the code to test the code itself. Verification is often harder than implementation.

    It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there's a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer, since verifying NP problems is easy; a toy illustration follows below.

    (This is speculation.)
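
    (To make the verification asymmetry concrete: checking a candidate SAT assignment is linear in the formula size, even though finding one is NP-hard. A toy checker, using the DIMACS convention that 3 means "variable 3 is true" and -3 means it is false:)

        # Toy SAT verifier: linear in the formula size, while *finding* a
        # satisfying assignment is NP-hard in general.
        def satisfies(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
            """True iff every clause contains at least one satisfied literal."""
            return all(
                any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                for clause in clauses
            )

        # (x1 or not x2) and (x2 or x3)
        clauses = [[1, -2], [2, 3]]
        print(satisfies(clauses, {1: True, 2: True, 3: False}))  # True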
