
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

Technology
  • Yah, of course they do, they’re computers

    That's not really a valid argument for why, but yes, the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

  • Like what?

    I don’t think there’s any search engine better than Perplexity. And for scientific research Consensus is miles ahead.

    Through the years I've bounced between different engines. I gave Bing a decent go some years back, mostly because I was interested in gauging the performance and wanted to just pit something against Google. After that I've swapped between Qwant and Startpage a bunch. I'm a big fan of Startpage's "Anonymous view" function.

    Since then I've landed on Kagi, which I've used for almost a year now. It's the first search engine I've used that you can make work for you. I use the lens feature to focus on specific tasks, and de-prioritise pages that annoy me, sometimes outright omitting results from sites I find useless or unserious. For example when I'm doing web stuff and need to reference the MDN, I don't really care for w3schools polluting my results.

    I'm a big fan of using my own agency and making my own decisions, and the recent trend of making LLMs think for us is something I find rather worrying; it allows for a much subtler manipulation than what Google does with its rankings and sponsored inserts.

    Perplexity openly talking about wanting to buy Chrome and harvesting basically all the private data is also terrifying, thus I wouldn't touch that service with a stick. That said, I appreciate their candour, somehow being open about being evil is a lot more palatable to me than all these companies pretending to be good.

  • If emissions dropped to 0 tonight, we would be substantially better off than if we maintain our current trajectory. Doomerism helps nobody.

    It’s not doomerism it’s just realistic. Deluding yourself won’t change that.

  • If the situation gets dire, it's likely that the weather will be manipulated. Countries would then have to be convinced not to use this for military purposes.

    This isn’t a thing.

  • That's not really a valid argument for why, but yes, the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

    I think because it's language.

    There's a famous quote from Charles Babbage: when he presented his difference engine (a gear-based calculator), someone asked "if you put in the wrong figures, will the correct ones be output?", and Babbage couldn't understand how anyone could so thoroughly misunderstand that the machine is just a machine.

    People are people; the main thing that's changed since the cuneiform copper customer complaint is our materials science and our networking ability. Most things people interact with every day, they just assume work the way they appear to on the surface.

    And nothing other than a person has ever been able to do math problems or talk back to you. So people assume that means intelligence.

  • I think because it's language.

    There's a famous quote from Charles Babbage: when he presented his difference engine (a gear-based calculator), someone asked "if you put in the wrong figures, will the correct ones be output?", and Babbage couldn't understand how anyone could so thoroughly misunderstand that the machine is just a machine.

    People are people; the main thing that's changed since the cuneiform copper customer complaint is our materials science and our networking ability. Most things people interact with every day, they just assume work the way they appear to on the surface.

    And nothing other than a person has ever been able to do math problems or talk back to you. So people assume that means intelligence.

    I often feel like I'm surrounded by idiots, but even I can't begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

  • That's not really a valid argument for why, but yes, the models which use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.

    TBH idk how people can convince themselves otherwise.

    They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but also its marketing. Marketing designed to convince them not only that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who is going to be “left behind”.

  • "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." -Pamela McCorduck´.
    It's called the AI Effect.

    As Larry Tesler put it, "AI is whatever hasn't been done yet."

    Yesterday I asked an LLM "how much energy is stored in a grand piano?" It responded by saying there is no energy stored in a grand piano because it doesn't have a battery.

    Any reasoning human would have understood that question to be referring to the tension in the strings.

    Another example is asking "does lime cause kidney stones?". It didn't assume I meant lime the mineral and went with lime the citrus fruit instead.

    Once again a reasoning human would assume the question is about the mineral.

    Ask these questions again in a slightly different way and you might get a correct answer, but it won't be because the LLM was thinking.
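
    (For what it's worth, here's a back-of-envelope sketch in Python of the answer the question was fishing for. Every input is an illustrative assumption - string stress, steel properties, total wire mass - not a measured value:)

    ```python
    # Rough estimate of the elastic energy stored in a grand piano's strings.
    # All numbers are illustrative assumptions, not measurements.
    STRESS = 1.0e9       # Pa, assumed working tensile stress of piano wire
    E_STEEL = 200e9      # Pa, Young's modulus of steel
    DENSITY = 7850       # kg/m^3, density of steel
    WIRE_MASS = 2.0      # kg, assumed total mass of steel string wire

    # Strain energy per unit volume of a stretched wire: u = sigma^2 / (2 * E)
    energy_density = STRESS**2 / (2 * E_STEEL)   # J/m^3
    volume = WIRE_MASS / DENSITY                 # m^3
    print(f"~{energy_density * volume:.0f} J stored in string tension")
    ```

    On those assumptions it comes out to a few hundred joules, which is the kind of reasoning the question was testing for.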

  • LOOK MAA I AM ON FRONT PAGE

    What a dumb title. I proved it by asking a series of questions. It’s not AI, stop calling it AI, it’s a dumb af language model. Can you get a ton of help from it, as a tool? Yes! Can it reason? NO! It never could and for the foreseeable future, it will not.

    It’s phenomenal at patterns, much, much better than us meat peeps. That’s why they’re accurate as hell when it comes to analyzing medical scans.

  • Yesterday I asked an LLM "how much energy is stored in a grand piano?" It responded by saying there is no energy stored in a grand piano because it doesn't have a battery.

    Any reasoning human would have understood that question to be referring to the tension in the strings.

    Another example is asking "does lime cause kidney stones?". It didn't assume I meant lime the mineral and went with lime the citrus fruit instead.

    Once again a reasoning human would assume the question is about the mineral.

    Ask these questions again in a slightly different way and you might get a correct answer, but it won't be because the LLM was thinking.

    But 90% of "reasoning humans" would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.

  • LOOK MAA I AM ON FRONT PAGE

    You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

  • Unlike Markov models, modern LLMs use transformers that attend to full contexts, enabling them to simulate structured, multi-step reasoning (albeit imperfectly). While they don’t initiate reasoning like humans, they can generate and refine internal chains of thought when prompted, and emerging frameworks (like ReAct or Toolformer) allow them to update working memory via external tools. Reasoning is limited, but not physically impossible; it’s evolving beyond simple pattern-matching toward more dynamic and compositional processing.
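
    (A minimal sketch of the ReAct-style loop described above, in Python. The `call_llm` function and the `lookup` tool are hypothetical stand-ins, scripted here so the demo runs without a real model:)

    ```python
    # ReAct-style loop: the model alternates between emitting thoughts/actions
    # and receiving observations from an external tool, which are appended to
    # the prompt and act as its working memory.
    SCRIPT = iter([
        "Thought: I should look that up.",
        "Action: lookup energy stored in a grand piano",
        "Answer: roughly a few hundred joules of string tension.",
    ])

    def call_llm(prompt: str) -> str:
        return next(SCRIPT)  # hypothetical stand-in for a real completion API

    def lookup(query: str) -> str:
        return f"(search result for {query!r})"  # stub external tool

    def react(question: str, max_steps: int = 5) -> str:
        transcript = f"Question: {question}\n"
        for _ in range(max_steps):
            step = call_llm(transcript)
            transcript += step + "\n"
            if step.startswith("Answer:"):
                return step
            if step.startswith("Action: lookup"):
                result = lookup(step.removeprefix("Action: lookup").strip())
                transcript += f"Observation: {result}\n"  # updates "memory"
        return "No answer within the step budget."

    print(react("How much energy is stored in a grand piano?"))
    ```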

    Previous input goes in. A completely static, prebuilt model processes it and comes up with a probability distribution.

    There is no "unlike Markov chains". They are Markov chains, ones with a long context (a Markov chain also makes use of all the context provided to it, so I don't know what you're on about there). LLMs are just a (very) lossy compression scheme for the state transition table: computed once, applied blindly to any context fed in.
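
    (To make the comparison concrete, here's the kind of model being described: a word-level Markov chain in Python whose transition table is built once and then applied blindly to whatever context comes in. The training text is made up for the demo:)

    ```python
    import random
    from collections import defaultdict

    # Order-2 word-level Markov chain: count transitions once ("training"),
    # then sample blindly from the frozen table at generation time.
    words = "the cat sat on the mat and the cat ran off the mat".split()

    table = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        table[(a, b)].append(c)  # the frozen state-transition table

    def generate(context, n=8):
        out = list(context)
        for _ in range(n):
            choices = table.get(tuple(out[-2:]))
            if not choices:
                break
            out.append(random.choice(choices))  # slide the window, repeat
        return " ".join(out)

    print(generate(("the", "cat")))
    ```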

  • Previous input goes in. A completely static, prebuilt model processes it and comes up with a probability distribution.

    There is no "unlike Markov chains". They are Markov chains, ones with a long context (a Markov chain also makes use of all the context provided to it, so I don't know what you're on about there). LLMs are just a (very) lossy compression scheme for the state transition table: computed once, applied blindly to any context fed in.

    LLMs are not Markov chains, even extended ones. A Markov model, by definition, relies on a fixed-order history and treats transitions as independent of deeper structure. LLMs use transformer attention mechanisms that dynamically weigh relationships between all tokens in the input—not just recent ones. This enables global context modeling, hierarchical structure, and even emergent behaviors like in-context learning. Markov models can't reweight context dynamically or condition on abstract token relationships.

    The idea that LLMs are "computed once" and then applied blindly ignores the fact that LLMs adapt their behavior based on input. They don’t change weights during inference, true—but they do adapt responses through soft prompting, chain-of-thought reasoning, or even emulated state machines via tokens alone. That’s a powerful form of contextual plasticity, not blind table lookup.

    Calling them “lossy compressors of state transition tables” misses the fact that the “table” they’re compressing is not fixed—it’s context-sensitive and computed in real time using self-attention over high-dimensional embeddings. That’s not how Markov chains work, even with large windows.
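
    (A minimal illustration of that point in Python: single-head self-attention, with random matrices standing in for learned weights. The mixing pattern is computed from the actual input at inference time rather than read out of a precomputed table:)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                          # embedding dimension
    x = rng.normal(size=(5, d))    # 5 token embeddings (random stand-ins)

    # Random stand-ins for learned projection matrices.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)  # every token scored against every token
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the context
    out = weights @ V              # context-dependent mixture of value vectors

    print(weights.round(2))        # the attention pattern depends on x itself
    ```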

  • LLMs are not Markov chains, even extended ones. A Markov model, by definition, relies on a fixed-order history and treats transitions as independent of deeper structure. LLMs use transformer attention mechanisms that dynamically weigh relationships between all tokens in the input—not just recent ones. This enables global context modeling, hierarchical structure, and even emergent behaviors like in-context learning. Markov models can't reweight context dynamically or condition on abstract token relationships.

    The idea that LLMs are "computed once" and then applied blindly ignores the fact that LLMs adapt their behavior based on input. They don’t change weights during inference, true—but they do adapt responses through soft prompting, chain-of-thought reasoning, or even emulated state machines via tokens alone. That’s a powerful form of contextual plasticity, not blind table lookup.

    Calling them “lossy compressors of state transition tables” misses the fact that the “table” they’re compressing is not fixed—it’s context-sensitive and computed in real time using self-attention over high-dimensional embeddings. That’s not how Markov chains work, even with large windows.

    Their input is the context window. Markov chains also use their whole context window. LLMs are a novel implementation that can work with much longer contexts, but as soon as something slides out of the window, it's forgotten, just like in any other Markov chain. They don't adapt. You add their token to the context, slide the oldest one out, and then you have a different context, on which you run the same thing again. A normal Markov chain will also give you a different output if you give it a different context. Their biggest weakness is that they don't and can't adapt. You are confusing the encoding of the context with the model itself.

    Just to see how static the model is, try setting temperature to 0 and giving it the same context, i.e. only trying to predict one token with the exact same context each time. As soon as you try to predict a second token, you've just changed the input and run the thing again. It's not adapting; you asked it something different, so it came up with a different answer.
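
    (That experiment is easy to run. A minimal sketch with the Hugging Face transformers library, using gpt2 as a stand-in model; do_sample=False is the greedy, "temperature 0" case:)

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Greedy decoding of a fixed context: same input in, same token out,
    # every single time - the model itself never changes.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is", return_tensors="pt").input_ids
    for _ in range(3):
        out = model.generate(ids, do_sample=False, max_new_tokens=1)
        print(tok.decode(out[0, -1]))  # prints the identical token three times
    ```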

  • You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

    We also reward people who can memorize and regurgitate even if they don't understand what they are doing.

  • Their input is the context window. Markov chains also use their whole context window. LLMs are a novel implementation that can work with much longer contexts, but as soon as something slides out of the window, it's forgotten, just like in any other Markov chain. They don't adapt. You add their token to the context, slide the oldest one out, and then you have a different context, on which you run the same thing again. A normal Markov chain will also give you a different output if you give it a different context. Their biggest weakness is that they don't and can't adapt. You are confusing the encoding of the context with the model itself.

    Just to see how static the model is, try setting temperature to 0 and giving it the same context, i.e. only trying to predict one token with the exact same context each time. As soon as you try to predict a second token, you've just changed the input and run the thing again. It's not adapting; you asked it something different, so it came up with a different answer.

    While both Markov models and LLMs forget information outside their window, that’s where the similarity ends. A Markov model relies on fixed transition probabilities and treats the past as a chain of discrete states. An LLM evaluates every token in relation to every other using learned, high-dimensional attention patterns that shift dynamically based on meaning, position, and structure.

    Changing one word in the input can shift the model’s output dramatically by altering how attention layers interpret relationships across the entire sequence. It’s a fundamentally richer computation that captures syntax, semantics, and even task intent, which a Markov chain cannot model regardless of how much context it sees.
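
    (A tiny numpy illustration of that sensitivity, with random embeddings standing in for a learned model: perturbing a single token shifts the attention pattern at every position, not just its own:)

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d, n = 8, 5
    x1 = rng.normal(size=(n, d))   # 5 token embeddings (random stand-ins)
    x2 = x1.copy()
    x2[2] = rng.normal(size=d)     # change one "word" in the middle

    def attn(x):
        scores = x @ x.T / np.sqrt(d)  # simplified attention: Q = K = x
        w = np.exp(scores)
        return w / w.sum(axis=-1, keepdims=True)

    # Max change per row: every token's view of the context shifts, not just
    # the row for the token that was edited.
    print(np.abs(attn(x1) - attn(x2)).max(axis=-1).round(3))
    ```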

  • Literally what I'm talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this that you can't even see you're making my argument for me.

    That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that's actually supposed to mean).

  • ITT: people who obviously did not study computer science or AI at at least an undergraduate level.

    Y'all are too patient. I can't be bothered to spend the time to give people free lessons.

    Wow, I would deeply apologise on behalf of all of us uneducated proles having opinions on stuff that we're bombarded with daily through the media.

  • You assume humans do the opposite? We literally institutionalize humans who don't follow set patterns.

    Maybe you failed all your high school classes, but that ain't got none to do with me.

  • ... And so we should call machines "intelligent"? That's not how science works.

    I think you're misunderstanding the argument. I haven't seen people here saying that the study was incorrect so far as it goes, or that AI is equal to human intelligence. But it does seem like it has a kind of intelligence. "Glorified autocomplete" doesn't seem sufficient, because it has a completely different quality from any past tool. Suppose that, yes, on a technical level the software pieces together probabilities based on overtraining. Can we say with any precision how the human mind stores information and how it creates intelligence? Maybe we're stumbling down the right path but need further innovations.
