
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology
  • did i do it here? also that's where i live, if i can't talk about women's struggle then i apologize

    I don't think that person cares about women or anything else. They just said that they don't even want to hear about it.

  • This post did not contain any content.

    this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market

  • ...... So you're saying there's a chance?

    10^36 flops to be exact
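    For scale: assuming a machine sustaining one exaFLOP/s (roughly today's fastest supercomputers; the 10^36 figure is from the comment above), that budget works out to tens of billions of years of compute. A quick back-of-envelope check:

```python
# Back-of-envelope: how long does 10^36 floating-point operations take
# on a machine sustaining 10^18 FLOP/s? All figures are illustrative.
total_flops = 1e36           # compute budget quoted above
machine_flops_per_s = 1e18   # ~one exascale supercomputer
seconds = total_flops / machine_flops_per_s
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")  # → 3.17e+10 years
```

    So "a chance" in the same sense the universe's remaining lifespan offers one.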

  • Who is "you"?

    Just because some dummies supposedly think that NPCs are "AI", that doesn't make it so. I don't consider checkers to be a litmus test for "intelligence".

    "You" applies to anyone that doesn't understand what AI means. It's an umbrella term for a lot of things.

    NPCs ARE AI. AI doesn't mean "human level intelligence" and never did. Read the wiki if you need help understanding.

  • Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible; it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.
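    Mechanically, a ReAct-style loop is just a generate–act–observe cycle. A minimal sketch, where `call_llm` and the toy `calculator` tool are hypothetical stand-ins and not any framework's real API:

```python
# Minimal ReAct-style loop sketch. `call_llm(prompt) -> str` is a
# hypothetical stand-in for a model call; the model is expected to emit
# lines like "Action: calculator 2+2" or "Answer: 4".
def calculator(expr: str) -> str:
    return str(eval(expr))  # toy tool; never eval untrusted input

TOOLS = {"calculator": calculator}

def react_loop(question: str, call_llm, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)        # model emits a Thought/Action/Answer line
        transcript += step + "\n"
        if step.startswith("Answer:"):
            return step.removeprefix("Answer:").strip()
        if step.startswith("Action:"):     # e.g. "Action: calculator 2+2"
            _, tool, arg = step.split(" ", 2)
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return "no answer"
```

    The "working memory" here is nothing more than the growing transcript string that gets fed back to the model each step.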

    I'm not convinced that humans don't reason in a similar fashion. When I'm asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.

    Think about "normal" programming: An experienced developer (that's self-trained on dozens of enterprise code bases) doesn't have to think much at all about 90% of what they're coding. It's all bog standard bullshit so they end up copying and pasting from previous work, Stack Overflow, etc because it's nothing special.

    The remaining 10% is "the hard stuff". They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.

    LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: the stuff that requires actual thinking.

    Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, "Oh shit! Maybe AGI really is imminent!" But again, they'll be wrong.

    AGI won't happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen, you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
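    The "auto-generate adapters from test cases via trial and error" loop described above could be sketched roughly like this. Every name here (`propose_patch`, `train_adapter`, `model.passes`) is a hypothetical stand-in, not a real training API:

```python
# Hypothetical sketch of a test-driven self-improvement loop: propose a
# LoRA-style patch from the failing cases, train a candidate, and keep it
# only if it passes more tests than the current model.
def self_improve(model, test_cases, propose_patch, train_adapter, max_rounds=10):
    for _ in range(max_rounds):
        failures = [t for t in test_cases if not model.passes(t)]
        if not failures:
            return model                            # all tests pass: done
        patch = propose_patch(model, failures)      # trial...
        candidate = train_adapter(model, patch)     # ...fit an adapter
        if sum(candidate.passes(t) for t in test_cases) > \
           sum(model.passes(t) for t in test_cases):
            model = candidate                       # ...and error: keep only wins
    return model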

  • This post did not contain any content.

    Just like me

  • Humans apply judgment, because they have emotion. LLMs do not possess emotion. Mimicking emotion without ever actually having the capability of experiencing it is sociopathy. An LLM would at best apply patterns like a sociopath.

    That just means they'd be great CEOs!

    According to Wall Street.

  • why is it assumed that this isn’t what human reasoning consists of?

    Because science doesn't work like that. Nobody should assume wild hypotheses without any evidence whatsoever.

    Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is.

    You should get a job in "AI". smh.

    Sorry, I can see why my original post was confusing, but I think you've misunderstood me. I'm not claiming that I know the way humans reason. In fact, you and I are in total agreement that it is unscientific to assume hypotheses without evidence. This is exactly what I am saying is the mistake in the statement "AI doesn't actually reason, it just follows patterns". That is unscientific if we don't know whether "actually reasoning" consists of following patterns or something else. As far as I know, the jury is out on the fundamental nature of how human reasoning works. It's my personal, subjective feeling that human reasoning works by following patterns. But I'm not saying "AI does actually reason like humans because it follows patterns like we do". Again, I see how what I said could have come off that way. What I mean more precisely is:

    It's not clear whether AI's pattern-following techniques are the same as human reasoning, because we aren't clear on how human reasoning works. My intuition tells me that humans doing pattern following seems equally as valid of an initial guess as humans not doing pattern following, so shouldn't we have studies to back up the direction we lean in one way or the other?

    I think you and I are in agreement, we're upholding the same principle but in different directions.

  • It is. And has always been. "Artificial Intelligence" doesn't mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it's a vast field of research in computer science with many, many things under it.

    ITT: people who obviously did not study computer science or AI at even an undergraduate level.

    Y'all are too patient. I can't be bothered to spend the time to give people free lessons.

  • This has been known for years, this is the default assumption of how these models work.

    You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon.... not the other way around.

    Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.

    Define, "reasoning". For decades software developers have been writing code with conditionals. That's "reasoning."

    LLMs are "reasoning"... They're just not doing human-like reasoning.

  • Employers who are foaming at the mouth at the thought of replacing their workers with cheap AI:

    🫢

    Can’t really replace. At best, this tech will make employees more productive at the cost of the rainforests.

  • In fact, simple computer programs do a great job of solving these puzzles...

    Yes, this shit is very basic. Not at all "intelligent."

    But reasoning about it is intelligent, and the point of this study is to determine the extent to which these models are reasoning or not. Which, again, has nothing to do with emotions. And furthermore, my initial question, whether pattern following should automatically be disqualified as intelligence (as the person summarizing this study, and notably not the study itself, claims), is the real question here.
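    For reference, one of the puzzles used in studies like this (e.g. Tower of Hanoi in the Apple paper) really is solved exactly by a textbook recursion a few lines long:

```python
# Classic recursive Tower of Hanoi: returns the full move list for
# moving n disks from `src` to `dst`. Optimal at 2**n - 1 moves.
def hanoi(n, src="A", dst="C", via="B"):
    if n == 0:
        return []
    return (hanoi(n - 1, src, via, dst)
            + [(src, dst)]
            + hanoi(n - 1, via, dst, src))

print(len(hanoi(10)))  # → 1023
```

    Which is exactly the point: the program executes the procedure flawlessly at any depth, while the "reasoning" models degrade as the puzzle grows.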

  • But it still manages to fuck it up.

    I've been experimenting with using Claude's Sonnet model in Copilot in agent mode for my job, and one of the things that's become abundantly clear is that it has certain types of behavior that are heavily represented in the model, so it assumes you want that behavior even if you explicitly tell it you don't.

    Say you're working in a yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You'll need to include explicit and repeated reminders all throughout the file to use yarn, not NPM, because even though yarn is very popular today, there are so many older examples of using NPM in its model that it's just going to assume that's what you actually want - thereby fucking up your codebase.

    I've also had lots of cases where I tell it I don't want it to edit any code, just to analyze and explain something that's there and how to update it... and then I have to stop it from editing code anyway, because halfway through it forgot that I didn't want edits, just explanations.
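    For what it's worth, repo-wide custom instructions (e.g. a `.github/copilot-instructions.md` file, which Copilot reads automatically) are one place to centralize those reminders instead of repeating them per prompt; the wording below is just an illustration:

```markdown
# Project conventions

- This is a yarn workspaces monorepo. Always use `yarn` commands
  (`yarn add`, `yarn workspace <name> run test`); never use `npm`.
- When asked to analyze or explain code, do not edit any files
  unless the request explicitly asks for changes.
```

    In my experience it reduces the drift but, as described above, doesn't eliminate it.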

    To be fair, the world of JavaScript is such a clusterfuck... Can you really blame the LLM for needing constant reminders about the specifics of your project?

    When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to "learn from"—you're just asking for trouble. Not just from trying to get an LLM to produce what you want but also trying to get humans to do it.

    This is why LLMs are so fucking good at writing Rust and Python: There's only so many ways to do a thing and the larger community pretty much always uses the same solutions.

    JavaScript? How can it even keep up? You're using yarn today but in a year you'll probably be like, "fuuuuck this code is garbage... I need to convert this all to [new thing]."

  • No, it shows how certain people misunderstand the meaning of the word.

    You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

    "Artificial" has several meanings.

    One is:

    not being, showing, or resembling sincere or spontaneous behavior : fake

    AI in video games literally means "fake intelligence"

  • This is why I say these articles are so similar to how right wing media covers issues about immigrants.

    There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.

    Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.

    Unlike fear-mongering from the right about immigrants, current iterations of AI development:

    • literally consume the environment (they are using electricity and water)
    • are taking jobs and siphoning money from the economy towards centralized corporate revenue streams that don't pay a fair share of taxes
    • I don't know of headlines claiming they will sexually assault you, but many headlines note that they can be used as part of sophisticated catfishing scams, which they are

    None of these things are scare tactics. They're often overblown and exaggerated for clicks, but the fundamental nature of the technology and the corporate implementation of it are indisputable.

    Open-source AI can change the world for the better. Corporate-controlled AI in some limited cases will improve the world, but without reasonable regulations they will severely harm it first.

  • Define reason.

    Like humans? Of course not. They lack intent, awareness, and grounded meaning. They don’t “understand” problems, they generate token sequences.

    as it is defined in the article

  • Brother, you better hope it does, because even if emissions dropped to 0 tonight the planet wouldn't stop warming and it wouldn't stop what's coming for us.

    If the situation gets dire, it's likely that the weather will be manipulated. Countries would then have to be convinced not to use this for military purposes.

  • Define, "reasoning". For decades software developers have been writing code with conditionals. That's "reasoning."

    LLMs are "reasoning"... They're just not doing human-like reasoning.

    How about, uh...

    The ability to take a previously given set of knowledge, experiences, and concepts, and combine or synthesize them in a consistent, non-contradictory manner, to generate hitherto unrealized knowledge or concepts, and then also be able to verify that those new knowledge and concepts are actually new and actually valid, or at least be able to propose how one could test whether or not they are valid.

    Arguably this is or involves meta-cognition, but that is what I would say... is the difference between what we typically think of as 'machine reasoning', and 'human reasoning'.

    Now I will grant you that a large number of humans essentially cannot do this. They suck at introspecting and maintaining logical consistency; they are just told 'this is how things work', and they never question that until decades later, when their lives force them to address or dismiss their own internally inconsistent beliefs.

    But I would also say that this means they are bad at 'human reasoning'.

    Basically, my definition of 'human reasoning' is perhaps more accurately described as 'critical thinking'.

  • 10^36 flops to be exact

    That sounds really floppy.

  • To be fair, the world of JavaScript is such a clusterfuck... Can you really blame the LLM for needing constant reminders about the specifics of your project?

    When a programming language has five hundred bazillion absolutely terrible ways of accomplishing a given thing—and endless absolutely awful code examples on the Internet to "learn from"—you're just asking for trouble. Not just from trying to get an LLM to produce what you want but also trying to get humans to do it.

    This is why LLMs are so fucking good at writing Rust and Python: There's only so many ways to do a thing and the larger community pretty much always uses the same solutions.

    JavaScript? How can it even keep up? You're using yarn today but in a year you'll probably be like, "fuuuuck this code is garbage... I need to convert this all to [new thing]."

    That's only part of the problem. Yes, JavaScript is a fragmented clusterfuck. TypeScript is leagues better, but by no means perfect. Still, that doesn't explain why the LLM can't recall that I'm using Yarn while it's processing the instruction that specifically told it to use Yarn. Or why it tries to start editing code when I tell it not to. Those are still issues that aren't specific to the language.
