Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 11:54, last edited:
does ANY model reason at all?
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:01, last edited:
lol, is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
-
::: spoiler spoiler
safsafsfsafs
:::
wrote on 8 June 2025, 12:03, last edited:
Just look at his username; he's just a troll.
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:05, last edited:
Stochastic parrots, all of them; just upgraded "soundex" models.
This should be no surprise, of course!
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:08, last edited by muskymelon@lemmy.world on 6 Aug 2025, 14:08:
I use LLMs as advanced search engines. No ads or sponsored results.
-
Why tf are you spamming rape stories?
wrote on 8 June 2025, 12:12, last edited:
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
wrote on 8 June 2025, 12:22, last edited:
Thanks a lot, kind person, for taking my side.
-
does ANY model reason at all?
wrote on 8 June 2025, 12:25, last edited:
No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.
-
I use LLMs as advanced search engines. No ads or sponsored results.
wrote on 8 June 2025, 12:31, last edited:
There are search engines that do this better. There's a world out there beyond Google.
-
lol, is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
wrote on 8 June 2025, 12:34, last edited:
The "Apple" part. CEOs only care what companies say.
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:36, last edited by blaster_m@lemmy.world on 6 Sept 2025, 18:26:
I'd like a link to the original research paper, instead of a link to a screenshot of a screenshot.
-
lol, is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
wrote on 8 June 2025, 12:37, last edited:
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues; there are so many similarities. They're taking jobs. They're a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this that take something already known and twist it to sound nefarious, keeping the story alive and avoiding decay of interest.
Then, when they pass laws, we're all primed to accept whatever advantages them and disadvantages us.
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
wrote on 8 June 2025, 12:38, last edited:
It's not relevant to the post... But what the fuck?
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:39, last edited:
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment to moment, means it would be a physical impossibility for any LLM to achieve any real "reasoning" process.
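For readers unfamiliar with the comparison above: a Markov chain generates text by mapping each run of tokens to the tokens that have followed it, then sampling. A toy word-level sketch (an illustration of the analogy only, not how an LLM is actually implemented) might look like:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each tuple of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain, repeatedly sampling a continuation of the current state."""
    state = seed or random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break  # no observed continuation; the chain dead-ends
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next token and the model predicts the next word"
chain = build_chain(corpus, order=2)
print(generate(chain, length=5, seed=("the", "model")))
```

Raising `order` is the "bigger and bigger token sets" part of the analogy: longer contexts make output more coherent but require far more data to cover.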
-
I'd like a link to the original research paper, instead of a link to a screenshot of a screenshot.
wrote on 8 June 2025, 12:50, last edited:
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…
Apple Machine Learning Research (machinelearning.apple.com)
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:52, last edited:
The difference between reasoning models and normal models is that reasoning models work in two steps. To oversimplify a little, they first prompt "how would you go about responding to this?" and then prompt "write the response."
It's still predicting the most likely thing to come next, but the difference is that the model first gets to write the most likely instructions to follow for the task, and then the most likely result of following those instructions, both of which conform much more closely to known patterns than a single jump from prompt to response.
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 12:58, last edited by sp3ctr4l@lemmy.dbzer0.com on 6 Aug 2025, 14:59:
This has been known for years; it's the default assumption of how these models work.
You would have to prove that some kind of actual reasoning capacity has arisen as some kind of emergent-complexity phenomenon, not the other way around.
Corpos have just marketed to, and gaslit, us and themselves so hard that they apparently forgot this.
-
I use LLMs as advanced search engines. No ads or sponsored results.
wrote on 8 June 2025, 13:00, last edited:
There are ads, but they're subtle enough that you don't recognize them as such.
-
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment to moment, means it would be a physical impossibility for any LLM to achieve any real "reasoning" process.
wrote on 8 June 2025, 13:06, last edited:
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept (sort of) updated in real time via MCP servers that feed in anything new it learns.
But I don't think we're anywhere near there yet.
-
LOOK MAA I AM ON FRONT PAGE
wrote on 8 June 2025, 13:08, last edited by mfed1122@discuss.tchncs.de on 6 Aug 2025, 15:09:
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies proving that models are "just" memorizing patterns don't prove anything other than that, unless they're coupled with research on the human brain proving that we do something different.