Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
does ANY model reason at all?
Wrote on June 8, 2025, 12:25 (last edited):
No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.
-
I use LLMs as advanced search engines. No ads or sponsored results.
Wrote on June 8, 2025, 12:31 (last edited):
There are search engines that do this better. There's a world out there beyond Google.
-
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
Wrote on June 8, 2025, 12:34 (last edited):
The "Apple" part. CEOs only care what companies say.
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 12:36 (last edited by blaster_m@lemmy.world, Sept 6, 2025, 18:26):
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
-
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
Wrote on June 8, 2025, 12:37 (last edited):
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, not pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
Wrote on June 8, 2025, 12:38 (last edited):
It's not relevant to the post... But what the fuck
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 12:39 (last edited):
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
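For readers unfamiliar with the comparison, here is a minimal, purely illustrative Python sketch of the kind of plain Markov chain the comment is invoking: a bigram model that picks the next token only from what followed the previous token in its training text. This is not how an LLM works internally; it just shows the baseline analogy.

```python
import random
from collections import defaultdict

def train_bigram_chain(text):
    """Count, for each token, which tokens followed it in the training text."""
    tokens = text.split()
    transitions = defaultdict(list)
    for current, following in zip(tokens, tokens[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length=10):
    """Emit tokens by repeatedly sampling a successor of the most recent token."""
    output = [start]
    for _ in range(length):
        successors = transitions.get(output[-1])
        if not successors:
            break  # no known continuation for this token
        output.append(random.choice(successors))
    return " ".join(output)

chain = train_bigram_chain("the model predicts the next token the model saw before")
print(generate(chain, "the"))
```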
-
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 12:52 (last edited):
The difference between reasoning models and normal models is that reasoning models work in two steps. To oversimplify it a little, they first prompt "how would you go about responding to this?" and then prompt "write the response".
It's still predicting the most likely thing to come next, but the difference is that it gives the model the chance to write the most likely instructions to follow for the task, then the most likely result of following those instructions - both of which conform much more closely to patterns than a single jump from prompt to response.
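To make the two-step idea concrete, here is a minimal Python sketch. The `complete()` function is a hypothetical stand-in for any text-completion call, not a real API; the point is only that the "reasoning" pass is just another completion whose output gets fed back in as context.

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in for a single LLM text-completion call."""
    raise NotImplementedError("wire this up to an actual model or API")

def reasoning_answer(task: str) -> str:
    # Step 1: ask the model to write out a plan / chain of thought for the task.
    plan = complete(f"How would you go about responding to this?\n\n{task}")

    # Step 2: ask the model for the final response, conditioned on that plan.
    return complete(f"Task: {task}\n\nPlan:\n{plan}\n\nNow write the response.")
```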
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 12:58 (last edited by sp3ctr4l@lemmy.dbzer0.com, Aug 6, 2025, 14:59):
This has been known for years; it is the default assumption of how these models work.
You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon... not the other way around.
Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
-
I use LLMs as advanced search engines. No ads or sponsored results.
Wrote on June 8, 2025, 13:00 (last edited):
There are ads, but they're subtle enough that you don't recognize them as such.
-
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing as a response and can never initiate any line of reasoning. This, along with the fact that its working set of data can never be updated moment-to-moment, means that it would be a physical impossibility for any LLM to achieve any real "reasoning" processes.
Wrote on June 8, 2025, 13:06 (last edited):
I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept (sort of) updated in real time with MCP servers that incorporate anything new it learns.
But I don't think we're anywhere near there yet.
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 13:08 (last edited by mfed1122@discuss.tchncs.de, Aug 6, 2025, 15:09):
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.
-
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.
Wrote on June 8, 2025, 13:11 (last edited):
Because it's a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that trying to convince Boomers that it won't kill us all is the hard part.
I'm a moderate user of LLMs for code and a skeptic of their abilities, but five years from now, when we're leveraging ML models for groundbreaking science and haven't been nuked by SkyNet, all of this will look quaint and silly.
-
lol is this news? I mean, we call it AI, but it's just an LLM and its variants; it doesn't think.
Wrote on June 8, 2025, 13:12 (last edited by johnedwa@sopuli.xyz, Aug 6, 2025, 15:13):
"It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." - Pamela McCorduck
It's called the AI Effect. As Larry Tesler puts it, "AI is whatever hasn't been done yet."
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 13:17 (last edited):
You know, despite not really believing that LLM "intelligence" works anything like real intelligence, I kind of thought maybe being good at recognizing patterns was a way to emulate it to a point...
But that study seems to prove they're still not even good at that. At first I was wondering how hard the puzzles must have been, and then there's a bit about LLMs finishing 100-move Towers of Hanoi (which they were trained on) and failing 4-move river crossings. Logically, those problems are very similar... Also, failing to apply a step-by-step solution they were given.
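For context on why the Tower of Hanoi result stands out: the optimal solution is a short, fully mechanical recursion. A minimal Python sketch of that procedure (illustrative only, not taken from the paper):

```python
def hanoi(n, source, target, spare, moves=None):
    """Append the standard optimal move sequence for an n-disc Tower of Hanoi."""
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, spare, target, moves)   # park the top n-1 discs on the spare peg
    moves.append((source, target))               # move the largest disc
    hanoi(n - 1, spare, target, source, moves)   # bring the n-1 discs back on top of it
    return moves

# A 7-disc tower already takes 2**7 - 1 = 127 moves, yet every move follows mechanically.
print(len(hanoi(7, "A", "C", "B")))
```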
-
LOOK MAA I AM ON FRONT PAGE
Wrote on June 8, 2025, 13:19 (last edited):
So they have worked out that LLMs do what they were programmed to do, in the way that they were programmed? Shocking.
-
The difference between reasoning models and normal models is that reasoning models work in two steps. To oversimplify it a little, they first prompt "how would you go about responding to this?" and then prompt "write the response".
It's still predicting the most likely thing to come next, but the difference is that it gives the model the chance to write the most likely instructions to follow for the task, then the most likely result of following those instructions - both of which conform much more closely to patterns than a single jump from prompt to response.
Wrote on June 8, 2025, 13:19 (last edited):
But it still manages to fuck it up.
I've been experimenting with using Claude's Sonnet model in Copilot in agent mode for my job, and one of the things that's become abundantly clear is that it has certain types of behavior that are heavily represented in the model, so it assumes you want that behavior even if you explicitly tell it you don't.
Say you're working in a yarn workspaces project, and you instruct Copilot to build and test a new dashboard using an instruction file. You'll need to include explicit and repeated reminders all throughout the file to use yarn, not NPM, because even though yarn is very popular today, there are so many older examples of using NPM in its model that it's just going to assume that's what you actually want - thereby fucking up your codebase.
I've also had lots of cases where I tell it I don't want it to edit any code, just to analyze and explain something that's there and how to update it... and then I have to stop it from editing code anyway, because halfway through it forgot that I didn't want edits, just explanations.
-
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.
Wrote on June 8, 2025, 13:24 (last edited):
You've hit the nail on the head.
Personally, I wish there were more progress in our understanding of human intelligence.
-
This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.
Wrote on June 8, 2025, 13:26 (last edited):
Agreed. We don't seem to have a very cohesive idea of what human consciousness is or how it works.
-