Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
This post did not contain any content.
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
-
I mean, they aren't lying. I've seen your comment history; you do spam them. I see like three or four of them on aboringdystopia. It looks like you literally repost the same articles over and over throughout your history.
I’m just saying if they’ve got a problem then they should alert mods/admins and point to the actual cases.
Just look at his username; he's just a troll.
-
This post did not contain any content.
stochastic parrots. all of them. just upgraded “soundex” models.
this should be no surprise, of course!
-
This post did not contain any content.
I use LLMs as advanced search engines. No ads or sponsored results.
-
Why tf are you spamming rape stories?
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
thanks a lot, kind person, for taking my side
-
does ANY model reason at all?
No, and to make that work using the current architectures we use for building AI models, we'd probably need all the collective computing power on Earth at once.
-
I use LLMs as advanced search engines. No ads or sponsored results.
There are search engines that do this better. There’s a world out there beyond Google.
-
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
The "Apple" part. CEOs only care about what companies say.
-
This post did not contain any content.
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
-
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
It's not relevant to the post... But what the fuck
-
This post did not contain any content.
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
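To make the analogy concrete (this is just the comparison being drawn here, not how transformers actually work internally), here's a toy word-level Markov chain. The helper names `build_chain` and `generate_text` are hypothetical; the point is pure pattern recall with no reasoning:

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each token to the list of tokens observed to follow it."""
    tokens = text.split()
    chain = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        chain[a].append(b)
    return chain

def generate_text(chain: dict, start: str, length: int = 10) -> str:
    """Emit tokens by sampling an observed successor at each step: recall, not reasoning."""
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# e.g. generate_text(build_chain("the cat sat on the mat so the cat ran"), "the")
```

An LLM conditions on a far longer context than one previous token (the "bigger and bigger token sets" part), but the generation loop has the same shape: predict a likely next token, append it, repeat.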
-
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…
Apple Machine Learning Research (machinelearning.apple.com)
-
This post did not contain any content.
The difference between reasoning models and normal models is that reasoning models run in two steps. To oversimplify it a little, they first prompt "how would you go about responding to this?" and then prompt "write the response."
It's still predicting the most likely thing to come next, but the difference is that it gives the model a chance to write the most likely instructions to follow for the task, then the most likely result of following those instructions - both of which conform much more closely to patterns than a single jump from prompt to response.
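A minimal sketch of that two-step pattern, with a hypothetical `generate()` standing in for a single model call (not any real API):

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for one LLM completion call."""
    raise NotImplementedError("swap in a real model call here")

def reasoning_answer(question: str) -> str:
    # Step 1: have the model predict the most likely *plan* for the task.
    plan = generate(f"How would you go about responding to this?\n\n{question}")
    # Step 2: have it predict the most likely *result* of following that plan.
    return generate(f"Question: {question}\n\nPlan:\n{plan}\n\nNow write the response.")
```

Each call is still ordinary next-token prediction; the intermediate plan just keeps both jumps closer to patterns the model has already seen.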
-
This post did not contain any content.
This has been known for years; it's the default assumption of how these models work.
You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.
Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
-
I use LLMs as advanced search engines. No ads or sponsored results.
There are ads but they're subtle enough that you don't recognize them as such.
-
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept updated in (sort of) real time via MCP servers that incorporate anything new it learns.
But I don't think we're anywhere near there yet.
-
This post did not contain any content.
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless they're coupled with research on the human brain proving we do something different.
-
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.
Because it's a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers it won't kill us all is the hard part.
I'm a moderate user of LLMs for code and a skeptic of their abilities, but five years from now, when we're leveraging ML models for groundbreaking science and haven't been nuked by SkyNet, all of this will look quaint and silly.