Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." - Pamela McCorduck.
It's called the AI Effect. As Larry Tesler puts it, "AI is whatever hasn't been done yet."
I'm going to write a program to play tic-tac-toe. If y'all don't think it's "AI", then you're just haters. Nothing will ever be good enough for y'all. You want scientific evidence of intelligence?!?! I can't even define intelligence so take that! \s
Seriously tho. This person is arguing that a checkers program is "AI". It kinda demonstrates the loooong history of this grift.
-
No, it shows how certain people misunderstand the meaning of the word.
You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.
Who is "you"?
Just because some dummies supposedly think that NPCs are "AI", that doesn't make it so. I don't consider checkers to be a litmus test for "intelligence".
-
This is why I say these articles are so similar to how right wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines on these issues; there are so many similarities. They're taking jobs. They're a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one that take something known and twist it to sound nefarious, keeping the story alive and avoiding decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that's advantageous to them and disadvantageous to us.
This is why I say these articles are so similar to how right wing media covers issues about immigrants.
Maybe the actual problem is people who equate computer programs with people.
Then when they pass laws, we're all primed to accept them removing whatever it is that's advantageous to them and disadvantageous to us.
You mean laws like this? jfc.
-
Because it's a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers that it won't kill us all is the hard part.
I'm a moderate user of LLMs for code and a skeptic of their abilities, but 5 years from now, when we're leveraging ML models for groundbreaking science and haven't been nuked by Skynet, all of this will look quaint and silly.
5 years from now? Or was it supposed to be 5 years ago?
Pretty sure we already have Skynet.
-
I'm going to write a program to play tic-tac-toe. If y'all don't think it's "AI", then you're just haters. Nothing will ever be good enough for y'all. You want scientific evidence of intelligence?!?! I can't even define intelligence so take that! \s
Seriously tho. This person is arguing that a checkers program is "AI". It kinda demonstrates the loooong history of this grift.
Yeah that’s exactly what I took from the above comment as well.
I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.
Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.
-
The paper doesn’t say LLMs can’t reason, it shows that their reasoning abilities are limited and collapse under increasing complexity or novel structure.
The paper doesn’t say LLMs can’t reason
Authors gotta get paid. This article is full of pseudo-scientific jargon.
-
Performance eventually collapses due to architectural constraints. This mirrors cognitive overload in humans: reasoning isn't just about adding compute; it requires mechanisms like abstraction, recursion, and memory. The models' collapse doesn't prove "only pattern matching"; it highlights that today's models simulate reasoning in narrow bands but lack the structure to scale it reliably. That is a limitation of implementation, not a disproof of emergent reasoning.
Performance collapses because luck runs out. Bigger destruction of the planet won't fix that.
-
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.
why is it assumed that this isn’t what human reasoning consists of?
Because science doesn't work like that. Nobody should assume wild hypotheses without any evidence whatsoever.
Isn’t all our reasoning ultimately a form of pattern memorization? I sure feel like it is.
You should get a job in "AI". smh.
-
But for something like solving a Towers of Hanoi puzzle, which is what this study is about, we're not looking for emotional judgements - we're trying to evaluate the logical reasoning capabilities. A sociopath would be equally capable of solving logic puzzles compared to a non-sociopath. In fact, simple computer programs do a great job of solving these puzzles, and they certainly have nothing like emotions. So I'm not sure that emotions have much relevance to the topic of AI or human reasoning and problem solving, at least not this particular aspect of it.
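To the point that simple programs handle these puzzles trivially: a minimal recursive Tower of Hanoi solver (a generic textbook sketch, not code from the study) takes only a few lines.

```python
def hanoi(n, source, target, spare):
    """Return the list of moves to shift n disks from source to target peg."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")
print(len(moves))  # 7, i.e. the optimal 2**3 - 1 moves
```

No search, no learning, no "intelligence" in any interesting sense: the optimal solution falls straight out of the recursion.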
As for analogizing LLMs to sociopaths, I think that's a bit odd too. The reason why we (stereotypically) find sociopathy concerning is that a person has their own desires which, in combination with a disinterest in others' feelings, incentivizes them to be deceitful or harmful in some scenarios. But LLMs are largely designed specifically as servile, having no will or desires of their own. If people find it concerning that LLMs imitate emotions, then I think we're giving them far too much credit as sentient autonomous beings - and this is coming from someone who thinks they think in the same way we do! They think like we do, IMO, but they lack a lot of the other subsystems that are necessary for an entity to function in a way that can be considered as autonomous/having free will/desires of its own choosing, etc.
In fact, simple computer programs do a great job of solving these puzzles...
Yes, this shit is very basic. Not at all "intelligent."
-
You've hit the nail on the head.
Personally, I wish that there's more progress in our understanding of human intelligence.
Their argument is that we don't understand human intelligence so we should call computers intelligent.
That's not hitting any nail on the head.
-
Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.
... And so we should call machines "intelligent"? That's not how science works.
-
I'm going to write a program to play tic-tac-toe. If y'all don't think it's "AI", then you're just haters. Nothing will ever be good enough for y'all. You want scientific evidence of intelligence?!?! I can't even define intelligence so take that! \s
Seriously tho. This person is arguing that a checkers program is "AI". It kinda demonstrates the loooong history of this grift.
It is. And has always been. "Artificial Intelligence" doesn't mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it's a vast field of research in computer science with many, many things under it.
-
Performance collapses because luck runs out. Bigger destruction of the planet won't fix that.
Brother, you better hope it does, because even if emissions dropped to zero tonight the planet wouldn't stop warming, and it wouldn't stop what's coming for us.
-
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept real-time updated (sort of) with MCP servers that incorporate anything new it learns.
But I don't think we're anywhere near there yet.
The only reason we're not there yet is memory limitations.
Eventually some company will come out with AI hardware that lets you link up a petabyte of ultra fast memory to chips that contain a million parallel matrix math processors. Then we'll have an entirely new problem: AI that trains itself incorrectly too quickly.
Just you watch: The next big breakthrough in AI tech will come around 2032-2035 (when the hardware is available) and everyone will be bitching that "chain reasoning" (or whatever the term turns out to be) isn't as smart as everyone thinks it is.
-
did i do it here? also that's where i live, if i can't talk about women's struggle then i apologize
I don't think that person cares about women or anything else. They just said that they don't even want to hear about it.
-
This post did not contain any content.
this is so Apple, claiming to invent or discover something "first" 3 years later than the rest of the market
-
...... So you're saying there's a chance?
10^36 flops to be exact
-
Who is "you"?
Just because some dummies supposedly think that NPCs are "AI", that doesn't make it so. I don't consider checkers to be a litmus test for "intelligence".
"You" applies to anyone that doesn't understand what AI means. It's an umbrella term for a lot of things.
NPCs ARE AI. AI doesn't mean "human level intelligence" and never did. Read the wiki if you need help understanding.
-
Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible, it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.
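The "attend to full contexts" part is just scaled dot-product attention. A toy self-attention sketch (illustrative only; real models add learned projections, multiple heads, and many layers):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each position mixes the full context."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # similarity of every query to every key
    # Numerically stable softmax over each row (i.e. over the whole sequence)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V             # context-weighted blend of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))
out = attention(x, x, x)  # self-attention: Q, K, V from the same sequence
print(out.shape)          # (4, 8): every token's output depends on all 4 positions
```

Unlike a Markov chain, where each step sees only the previous state, every output row here is a weighted combination of the entire input; that global view is what the multi-step "reasoning" traces are built on.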
I'm not convinced that humans don't reason in a similar fashion. When I'm asked to produce pointless bullshit at work my brain puts in a similar level of reasoning to an LLM.
Think about "normal" programming: An experienced developer (that's self-trained on dozens of enterprise code bases) doesn't have to think much at all about 90% of what they're coding. It's all bog standard bullshit so they end up copying and pasting from previous work, Stack Overflow, etc because it's nothing special.
The remaining 10% is "the hard stuff". They have to read documentation, search the Internet, and then—after all that effort to avoid having to think—they sigh and actually start thinking in order to program the thing they need.
LLMs go through similar motions behind the scenes! Probably because they were created by software developers. But they still fail at that last 10%: the stuff that requires actual thinking.
Eventually someone is going to figure out how to auto-generate LoRAs based on test cases combined with trial and error that then get used by the AI model to improve itself and that is when people are going to be like, "Oh shit! Maybe AGI really is imminent!" But again, they'll be wrong.
AGI won't happen until AI models get good at retraining themselves with something better than basic reinforcement learning. In order for that to happen, you need the working memory of the model to be nearly as big as the hardware that was used to train it. That, and loads and loads of spare matrix math processors ready to go for handling that retraining.
-
This post did not contain any content.
Just like me