Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
Thanks for highlighting this. Blocked them. I know these horrible things happen, but if they're happening on the other side of the world and there is literally nothing I can do to help, all they do is spread sadness and despair, and at worst provoke racism (as all the stories being shared are from the same country, yet these incidents happen worldwide).
did I do it here? also, that's where I live; if I can't talk about women's struggles then I apologize
-
maybe alert the mods/admins or comment where that's happening…?
he is lying, I didn't do it here
-
he is lying, I didn't do it here
::: spoiler spoiler
safsafsfsafs
:::
-
LOOK MAA I AM ON FRONT PAGE
does ANY model reason at all?
-
LOOK MAA I AM ON FRONT PAGE
lol is this news? I mean, we call it AI, but it's just an LLM and variants; it doesn't think.
-
::: spoiler spoiler
safsafsfsafs
:::
Just look at his username, he is just a troll.
-
LOOK MAA I AM ON FRONT PAGE
stochastic parrots. all of them. just upgraded “soundex” models.
this should be no surprise, of course!
-
LOOK MAA I AM ON FRONT PAGE
I use LLMs as advanced search engines. No ads or sponsored results.
-
Why tf are you spamming rape stories?
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, not pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, not pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
thanks a lot, kind person, for taking my side
-
does ANY model reason at all?
No, and to make that work using the current structures we use for creating AI models, we'd probably need all the collective computing power on Earth at once.
-
I use LLMs as advanced search engines. No ads or sponsored results.
There are search engines that do this better. There’s a world out there beyond Google.
-
lol is this news? I mean, we call it AI, but it's just an LLM and variants; it doesn't think.
The "Apple" part. CEOs only care what companies say.
-
LOOK MAA I AM ON FRONT PAGE
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
-
lol is this news? I mean, we call it AI, but it's just an LLM and variants; it doesn't think.
This is why I say these articles are so similar to how right-wing media covers issues about immigrants.
There's some weird media push to convince the left to hate AI. Think of all the headlines for these issues. There are so many similarities. They're taking jobs. They are a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this where they take something known and twist it to make it sound nefarious, to keep the story alive and avoid decay of interest.
Then when they pass laws, we're all primed to accept them removing whatever it is that advantages them and disadvantages us.
-
And this is relevant to this post in what regard?
90% of Lemmy comments lately are not subject-related and are only about how OP is not leftist, not pro-Israel, pro-Palestine, or pro-SJW enough. Is this what Lemmy aims to be?
It's not relevant to the post... But what the fuck
-
LOOK MAA I AM ON FRONT PAGE
Just fancy Markov chains with the ability to link bigger and bigger token sets. It can only ever kick off processing in response to a prompt and can never initiate any line of reasoning on its own. This, along with the fact that its working set of data can never be updated moment-to-moment, means it would be a physical impossibility for any LLM to achieve any real "reasoning" process.
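For the curious, here's a tiny Python toy of the Markov-chain idea being referenced: record which word tends to follow which, then generate text by sampling the next word over and over. It's obviously not how a transformer works internally, just an illustration of the "only predicts the next token, only runs when kicked off" shape:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: record which word follows which,
# then generate by repeatedly sampling a plausible next word.
corpus = "the model predicts the next word and the model never starts on its own".split()

transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=10):
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:  # dead end: nothing ever followed this word
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the"))  # it only ever runs when we kick it off with a prompt word
```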
-
Would like a link to the original research paper, instead of a link to a screenshot of a screenshot.
The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes…
Apple Machine Learning Research (machinelearning.apple.com)
-
LOOK MAA I AM ON FRONT PAGE
The difference between reasoning models and normal models is that reasoning models take two steps. To oversimplify it a little, they first prompt "how would you go about responding to this?" and then prompt "write the response".
It's still predicting the most likely thing to come next, but the difference is that it gives the model the chance to write the most likely instructions to follow for the task, and then the most likely result of following those instructions; both of which conform to patterns much more closely than a single jump from prompt to response.
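A rough sketch of that two-pass idea in Python; `call_model` here is a hypothetical placeholder for whatever LLM client you actually use, not a real API:

```python
# Rough sketch of the two-pass "reasoning" setup described above.
# call_model() is a hypothetical stand-in for whatever LLM client you use;
# both passes are still just next-token prediction under the hood.

def call_model(prompt: str) -> str:
    """Hypothetical placeholder: send a prompt to an LLM, return its completion."""
    raise NotImplementedError("wire this up to your LLM client of choice")

def reasoning_answer(question: str) -> str:
    # Pass 1: the model writes out a plan ("how would you go about responding to this?").
    plan = call_model(f"Think step by step about how you would answer:\n{question}")
    # Pass 2: the model writes the final response, conditioned on its own plan.
    return call_model(f"Question: {question}\n\nPlan:\n{plan}\n\nNow write the final answer.")
```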
-
LOOK MAA I AM ON FRONT PAGE
This has been known for years, this is the default assumption of how these models work.
You would have to prove that some kind of actual reasoning capacity has arisen as... some kind of emergent complexity phenomenon.... not the other way around.
Corpos have just marketed/gaslit us/themselves so hard that they apparently forgot this.
-