
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology
  • TBH idk how people can convince themselves otherwise.

    They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a Luddite who is going to be “left behind”.

    It's no surprise to me that the person at work who is most excited by AI is the same person who is most likely to be replaced by it.

  • LOOK MAA I AM ON FRONT PAGE

    No way!

    Statistical language models don't reason?

    But OpenAI, robots taking over!

  • I mean... Is that not reasoning, I guess? It's what my brain does: recognizes patterns and makes split-second decisions.

    Yes, this comment seems to indicate that your brain does work that way.

  • It's no surprise to me that the person at work who is most excited by AI is the same person who is most likely to be replaced by it.

    Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.

  • LOOK MAA I AM ON FRONT PAGE

    Thank you, Captain Obvious! Only those who think LLMs are like "little people in the computer" didn't know this already.

  • LOOK MAA I AM ON FRONT PAGE

    Of course, that is obvious to anyone with basic knowledge of neural networks, no?

  • Maybe you failed all your high school classes, but that ain't got none to do with me.

    Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.

  • lol is this news? I mean, we call it AI, but it’s just LLMs and variants; it doesn’t think.

    Proving it matters. Science is constantly testing things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will believe things long after science has proven them false.

  • Maybe they are so far behind because they jumped on the same train but then failed to achieve what they wanted based on the claims. And then they started digging around.

    Yes, Apple haters can't admit or understand it, but Apple doesn't do pseudo-tech.

    They may do silly things, and they may love their 100% markup, but it's all real technology.

    The AI pushers of today are akin to the pushers of paranormal phenomena from a century ago. These pushers want us to believe, need us to believe it, so they can get us addicted and extract value from our very existence.

  • No, it shows how certain people misunderstand the meaning of the word.

    You have called NPCs in video games "AI" for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.

    Intelligence has a very clear definition.

    It requires the ability to acquire knowledge, understand knowledge, and use knowledge.

    No one has been able to create a system that can understand knowledge; therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.

  • I can envision a system where an LLM becomes one part of a reasoning AI, acting as a kind of fuzzy "dataset" that a proper neural network incorporates and reasons with, and the LLM could be kept (more or less) updated in real time with MCP servers that incorporate anything new it learns (see the rough sketch below).

    But I don't think we're anywhere near there yet.

    LLMs (at least in their current form) are proper neural networks.
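
A minimal sketch of the hybrid setup the comment above envisions, in Python and with entirely hypothetical names (llm_lookup, Rule, ReasoningSystem): the LLM is treated purely as a fuzzy knowledge source, while a separate component applies hard, explicit checks to whatever it returns. This is an illustration of the idea under those assumptions, not a real MCP interface or any vendor's API.

# Hypothetical sketch only: llm_lookup, Rule, and ReasoningSystem are made-up
# names illustrating "LLM as fuzzy knowledge source + separate rule-based checker".
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """A hard constraint the checking component enforces, independent of the LLM."""
    name: str
    check: Callable[[str], bool]


def llm_lookup(question: str) -> List[str]:
    """Stand-in for querying an LLM for candidate statements (no real model here)."""
    return [f"candidate answer to: {question}"]


class ReasoningSystem:
    """Combines fuzzy retrieval (the LLM) with explicit, rule-based filtering."""

    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def answer(self, question: str) -> List[str]:
        candidates = llm_lookup(question)  # fuzzy step: ask the language model
        # hard step: keep only candidates that pass every explicit rule
        return [c for c in candidates if all(r.check(c) for r in self.rules)]


if __name__ == "__main__":
    system = ReasoningSystem(rules=[Rule("non-empty", lambda s: bool(s.strip()))])
    print(system.answer("what did the new documents say?"))

In a fuller version of this sketch, llm_lookup would call an actual model (perhaps refreshed via MCP servers, as the comment suggests), and the rule set would encode whatever hard constraints the task demands; the point is only the division of labour between fuzzy recall and explicit checking.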

  • This sort of thing has been published a lot for a while now, but why is it assumed that this isn't what human reasoning consists of? Isn't all our reasoning ultimately a form of pattern memorization? I sure feel like it is. So to me, all these studies that prove they're "just" memorizing patterns don't prove anything other than that, unless coupled with research on the human brain to prove we do something different.

    This. Same with the discussion about consciousness. People always claim that AI is not real intelligence, but no one can ever define what real/human intelligence is. It's like people believe in something like a human soul without admitting it.

  • LOOK MAA I AM ON FRONT PAGE

    Most humans don't reason. They just parrot shit too. The design is very human.

  • The sticky wicket is proving that humans (functioning 'normally') do more than pattern-match.

    I think if you look at child development research, you'll see that kids can learn to do crazy shit with very little input, waaay less than you'd need to train a neural net to do the same. So either kids are the luckiest neural nets and always make the correct adjustment after failing, or they have some innate knowledge that isn't pattern-based at all.

    There are even some examples in linguistics specifically, where children tend towards certain grammar rules despite all the evidence in their language pointing to another rule. Pure pattern-matching would find the real-world rule without first modelling a different (universally common) rule.

  • What isn't there to gain?

    Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.

    We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.

    Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

    Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.

    I have no idea what sort of AI you've used that could do any of this stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but it will also add some hallucinations to the summary. And so on; it's completely divorced from how this stuff currently works.

    Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.

    That's a misguided view of how art is created. Supposed "brilliant ideas" are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept. If you are not competent in a visual medium, then don't make it visual, write a story or an essay.

    Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).

    For now I see no particular benefits that the right-wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking its usage of slurs).

  • ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

    Y'all are too patient. I can't be bothered to spend the time to give people free lessons.

    The computer science industry isn't the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of the humanities without acknowledgment.

  • Not when large swaths of people are being told to use it every day. Upper management has bought into it.

    Yep. I'm retired now, but before retiring a month or so ago, I was working on a project that relied on several hundred people back in 2020. The question was, "Why can't AI do it?"

    The people I worked with are continuing the research and putting it up against the human coders, but... there was definitely an element of "AI can do that, we won't need people next time." I sincerely hope management listens to reason. Our decisions could potentially lead to firing people, so I think we were able to push back on "AI can make all of these decisions"... for now.

    The AI people were all in; they were ready to build an interface that told the human what the AI would recommend for each item. Errrm, no, that's not how an independent test works. We had to reel them back in.

  • I have no idea what sort of AI you've used that could do any of this stuff you've listed. A program that doesn't reason won't expose logical fallacies with any rigour or refine anyone's ideas. It will link to credible research that you could already find on Google, but it will also add some hallucinations to the summary. And so on; it's completely divorced from how this stuff currently works.

    Here is ChatGPT doing what you said it can't: finding all the logical fallacies in what you wrote:

    You're raising strong criticisms, and it's worth unpacking them carefully. Let's go through your argument and see if there are any logical fallacies or flawed reasoning.


    1. Straw Man Fallacy

    "Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept."

    This misrepresents the original claim:

    "AI can help create a framework at the very least so they can get their ideas down."

    The original point wasn't that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.


    2. False Dichotomy

    "If you are not competent in a visual medium, then don't make it visual, write a story or an essay."

    This suggests a binary: either you're competent at visual art or you shouldn't try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).


    3. Hasty Generalization

    "Supposed 'brilliant ideas' are a dime a dozen..."

    While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn't invalidate the potential value of enabling more people to test theirs.


    4. Appeal to Ridicule / Ad Hominem (Light)

    "...result in a boring comic..." / "...just bad (look at SMBC or xkcd or...)"

    Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn't really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That's not a logical fallacy in the strictest sense, but it's rhetorically weak.


    5. Tu Quoque / Whataboutism (Borderline)

    "For now I see no particular benefits that the right-wing has obtained by using AI either..."

    This seems like a rebuttal to a point that wasn't made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.


    Summary of Fallacies Identified:

    Straw Man: Misrepresents the role of AI in creative assistance.
    False Dichotomy: Assumes one must either be visually skilled or not attempt visual media.
    Hasty Generalization: Devalues “brilliant ideas” universally.
    Appeal to Ridicule: Dismisses counterexamples via mocking tone rather than analysis.
    Tu Quoque-like: Compares left vs. right AI use without addressing the core point about opportunity.


    Your criticism is thoughtful and not without merit—but it's wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?

    At this point you're just arguing for argument's sake. You're not wrong or right; you're just muddying things. Saying it'll be boring comics missed the entire point. Saying it is the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to anti-immigrant mentality. The people who buy into it will get into these types of ignorant and shortsighted statements just to prove things that just are not true. But they've bought into the hype and need to justify it.

  • Most humans don't reason. They just parrot shit too. The design is very human.

    Yeah, I've always said the flaw in Turing's Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn't prove it's intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs which eventually killed him, simply because he was gay.

  • Here is ChatGPT doing what you said it can't: finding all the logical fallacies in what you wrote: […]

    Did you even read this garbage? It’s just words strung together without any meaning. The things it’s claiming show a fundamental lack of understanding of what it is responding to.

    This didn’t prove your point at all; quite the opposite. And it wasted everyone’s time in the process. Good job; this was worthless.
