
Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.

  • The fact that it is a fixed function that depends only on the context, AND that there are only finitely many discrete inputs possible, does make it equivalent to a huge, finite table. You really don't want this to be true. And again, you are describing training. Once training finishes, nothing you said applies anymore: you are left with fixed, unchanging matrices, which in turn means that it is a mathematical function of the context (by the mathematical definition of "function": stateless and deterministic), and the set of all possible inputs is finite. So the set of possible outputs is also finite, and no larger than the set of possible inputs. This means the actual function the tokens are passed through can (in theory) be precomputed in full, making it equivalent to a conventional state-transition table.

    This is true whether you'd like it to be or not. The training process builds a Markov chain.
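    To make the table claim concrete, here is a minimal sketch (a toy 3-token vocabulary and context length 2 stand in for a real model's ~50k-token vocabulary and long context; all names are illustrative):

    ```python
    # Toy demonstration: a fixed, deterministic function over a finite input
    # space can be enumerated in full as a lookup table.
    from itertools import product

    VOCAB = [0, 1, 2]   # token ids (a real model has ~50k)
    CTX_LEN = 2         # fixed context length (a real model has thousands)

    def next_token_dist(ctx):
        """Stand-in for a trained network's forward pass (weights frozen):
        any fixed, deterministic function of the context will do."""
        s = sum(ctx) + 1
        raw = [(t + s) % 5 + 1 for t in VOCAB]
        total = sum(raw)
        return tuple(r / total for r in raw)

    # Because the input space is finite, the whole function can be precomputed:
    table = {ctx: next_token_dist(ctx)
             for ctx in product(VOCAB, repeat=CTX_LEN)}

    # Table lookup and direct computation agree on every possible input.
    assert all(table[ctx] == next_token_dist(ctx)
               for ctx in product(VOCAB, repeat=CTX_LEN))
    print(f"{len(table)} rows cover every possible context")  # 3^2 = 9
    ```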

    You’re absolutely right that inference in an LLM is a fixed, deterministic function after training, and that the input space is finite due to the discrete token vocabulary and finite context length. So yes, in theory, you could precompute every possible input-output mapping and store them in a giant table. That much is mathematically valid. But where your argument breaks down is in claiming that this makes an LLM equivalent to a conventional Markov chain in function or behavior.

    A Markov chain is not simply defined as “a function from finite context to next-token distribution.” It is defined by a specific type of process where the next state depends on the current state via fixed transition probabilities between discrete states; such a model operates over symbolic states with no internal computation. LLMs, even during inference, compute outputs via multi-layered continuous transformations, with attention mixing, learned positional embeddings, and non-linear activations. These mechanisms mean that while the function is fixed, its structure does not resemble a state machine; it resembles a hierarchical pattern recognizer and function approximator.

    Your claim is essentially that “any deterministic function over a finite input space is equivalent to a table.” This is true in a computational sense but misleading in a representational and behavioral sense. If I gave you a function that maps 4096-bit inputs to 50257-dimensional probability vectors and said, “This is equivalent to a transition table,” you could technically agree, but the structure and generative capacity of that function is not Markovian. That function may simulate reasoning, abstraction, and composition. A Markov chain never does.

    You are collapsing implementation equivalence (yes, the function could be stored in a table) with model equivalence (no, it does not behave like a Markov chain). The fact that you could freeze the output behavior into a lookup structure doesn’t change that the lookup structure is derived from a fundamentally different class of computation.

    The training process doesn’t “build a Markov chain.” It builds a function that estimates conditional token probabilities via optimization over a non-Markov architecture. The inference process then applies that function. That makes it a stateless function, yes—but not a Markov chain. Determinism plus finiteness does not imply Markovian behavior.
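    The distinction can be made concrete with a toy sketch (illustrative only, not any real model): a first-order Markov chain consults a fixed transition table keyed on the current state alone, while a transformer-style function computes over the entire context.

    ```python
    # First-order Markov chain: the next-token distribution depends ONLY on
    # the last token, via fixed transition probabilities between states.
    MARKOV = {
        "a": {"a": 0.1, "b": 0.9},
        "b": {"a": 0.7, "b": 0.3},
    }

    def markov_next(ctx):
        return MARKOV[ctx[-1]]  # only the current state matters

    def full_context_next(ctx):
        # Stand-in for a whole-context function: every position contributes,
        # so two contexts ending in the same token can yield different outputs.
        weight = sum(i + 1 for i, t in enumerate(ctx) if t == "a")
        p_a = (weight % 7 + 1) / 10
        return {"a": p_a, "b": 1 - p_a}

    print(markov_next(("b", "a")) == markov_next(("a", "a")))              # True
    print(full_context_next(("b", "a")) == full_context_next(("a", "a")))  # False
    ```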

  • I'd encourage you to research this space and learn more.

    As it is, the statement "Markov chains are still the basis of inference" doesn't make sense, because Markov chains are a separate thing. You might be thinking of Markov decision processes, which are used in training RL agents, but that's also unrelated, because these models are not RL agents; they're supervised learning models. And even if they were RL agents, the MDP describes the training environment, not the model itself, so it's not really used for inference.

    I mean this just as an invitation to learn more, and not pushback for raising concerns. Many in the research community would be more than happy to welcome you into it. The world needs more people who are skeptical of AI doing research in this field.

    Which method, then, is the inference built upon, if not the embeddings? And the question still stands: how does "AI" escape the inherent limits of statistical inference?

  • You’re absolutely right that inference in an LLM is a fixed, deterministic function after training […]

    You wouldn't be "freezing" anything. Each possible combination of input tokens maps to one output probability distribution. Those values are fixed; they are what they are whether you compute them or not, or when, or how many times.

    Now you can either precompute the whole table (theory), or somehow compute each cell value every time you need it (practice). In either case, the resulting function (table lookup vs matrix multiplications) takes in only the context, and produces a probability distribution. And the mapping they generate is the same for all possible inputs. So they are the same function. A function can be implemented in multiple ways, but the implementation is not the function itself. The only difference between the two in this case is the implementation, or more specifically, whether you precompute a table or not. But the function itself is the same.

    You are saying that your choice of implementation for that function will somehow change the function. Which means that, according to you, precomputing (or caching; full precomputation is just an unbounded cache) individual mappings somehow makes magic happen and grants some deep insight. It does not. We have already established that it is the same function.
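    A minimal sketch of the point (all names illustrative): whether the mapping is recomputed every call, cached, or fully precomputed, the input-to-output mapping is identical.

    ```python
    # Three implementations of the same function: direct computation,
    # a cache (memoization), and a fully precomputed table.
    from functools import lru_cache
    from itertools import product

    VOCAB = [0, 1, 2]
    CTX_LEN = 2

    def compute(ctx):
        """Stand-in for 'run the matrix multiplications every time'."""
        return sum(ctx) % len(VOCAB)

    cached = lru_cache(maxsize=None)(compute)                        # cache as you go
    table = {c: compute(c) for c in product(VOCAB, repeat=CTX_LEN)}  # precompute all

    # Extensionally the same function: identical mapping on every input.
    assert all(compute(c) == cached(c) == table[c]
               for c in product(VOCAB, repeat=CTX_LEN))
    print("same mapping under all three implementations")
    ```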

  • LOOK MAA I AM ON FRONT PAGE

    WTF does the author think reasoning is

  • That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that's actually supposed to mean).

    Saw this earlier in the week and thought of you. These short, funny videos are popping up more and more and they're only getting better. They’re sharp, engaging, and they spread like wildfire.

    You strike me as someone who gets what it means when one side embraces the latest tools while the other rejects them.

    The left is still holed up on Lemmy, clinging to “Fuck AI” groups. But why? Go back to the beginning. Look at the early coverage of AI: it was overwhelmingly targeted at left-leaning spaces, full of panic and doom. Compare that to how the right talks about immigration. The headlines are cut and pasted from each other. Same playbook, different topic. The media set out to alienate the left from these tools.

  • Saw this earlier in the week and thought of you. […]

    I don't have even the slightest idea what that video is supposed to mean. (Happy cake day tho.)

  • I don't have even the slightest idea what that video is supposed to mean. (Happy cake day tho.)

    Come on, you know what I’m talking about. It’s a channel that started with AI content and is now pivoting to videos about the riots. You can see where this is going. Sooner or later, it’ll expand into targeting protestors and other left-leaning causes.

    It’s a novelty now, but it’s spreading fast, and more channels like it are popping up every day.

    Meanwhile, the left is losing ground. Losing cultural capture. Because as a group, they’re being manipulated into isolating themselves from the very tools and platforms that shape public opinion. Social media. AI. All of it. They're walking away from the battlefield while the other side builds momentum.

  • Come on, you know what I’m talking about. […]

    you know what I’m talking about

    But I literally don't. Well, I didn't but now I mostly do, since you explained it.

    I get what you're saying with regards to the isolation, this issue has already been raised when many left-wing people started to leave Twitter. But it is opening a whole new can of worms - these profiles that post AI-generated content are largely not managed by ordinary people with their private agendas (sharing neat stuff, political agitation, etc.), but by bots, and are also massively followed and supported by other bot profiles. Much the same on Twitter with its hordes of right-wing troll profiles, and as I'm still somewhat active on reddit I also notice blatant manipulation there as well (my country had elections a few weeks ago and the flood of new profiles less than one week old spamming idiotic propaganda and insults was too obvious). It's not organic online behaviour and it can't really be fought by organic behaviour, especially when the big social media platforms give up the tools to fight it (relaxing their moderation standards, removing fact-checking, etc.). Lemmy and Mastodon etc. are based on the idea(l) that this corporate-controlled area is not the only space where meaningful activity can happen.

    So that's one side of the story; AI is not something happening in a vacuum, nor something you can simply bend to your own will. The other side of the story, the actual abilities of AI, has already been discussed: we've seen sufficiently that it's not that good at helping people form more solidly developed and truth-based stances. Maybe it could be used to spread the sort of mass-produced manipulative bullshit that is already used by the right, but I can't honestly support such stuff. In this regard, we can doubt whether there is any ground for the left to win (would the left's possible audience actually eat it up?), and if yes, whether it is worth it (basing your political appeal on bullshit can bite you in the ass down the line).

    As for the comparison to discourse around immigrants, again I still don't fully understand the point other than on the most surface level (the media is telling people what to think, duh).

  • You are either vastly overestimating the Language part of an LLM or simplifying human physiology back to the Greeks' Four Humours theory.

    No. I'm not. You're nothing more than a protein-based machine on a slow burn. You don't even have control over your own decisions. This is a proven fact. You're just an ad hoc justification machine.

  • No. I'm not. You're nothing more than a protein-based machine on a slow burn. You don't even have control over your own decisions. This is a proven fact. You're just an ad hoc justification machine.

    How many trillions of neuron firings and chemical reactions are taking place for my machine to produce an output?
    Where are these taking place and how do these regions interact? What are the rules for storing and reshaping memory in response to stimulus? How many bytes of information would it take to describe and simulate all of these systems together?

    The human brain alone has the capacity for about 2.5 PB of data. Our sensory systems feed data at a rate of about 10^9 bits/s. The entire English language, compressed, is about 30 MB. I can download and run an LLM with just a few GB. Even the largest context windows are still well under 1 GB of data.

    Just because two things both find and reproduce patterns does not mean they are equivalent. Saying language and biological organisms both use "bytes" is just about as useful as saying the entire universe is "bytes"; it doesn't really mean anything.
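    For scale, rough arithmetic with the figures above (order-of-magnitude numbers quoted in this comment; the 4 GB model size is an assumed stand-in for "a few GB"):

    ```python
    # Back-of-envelope ratios using the figures quoted above.
    brain_capacity_bytes = 2.5e15    # ~2.5 PB
    sensory_rate_bits_s = 1e9        # ~10^9 bits/s
    english_compressed_bytes = 30e6  # ~30 MB
    llm_size_bytes = 4e9             # assumed stand-in for "a few GB"

    print(f"brain capacity / LLM size: {brain_capacity_bytes / llm_size_bytes:,.0f}x")

    daily_input_bytes = sensory_rate_bits_s * 86_400 / 8
    print(f"sensory input per day: ~{daily_input_bytes / 1e12:.1f} TB")

    print(f"compressed English / LLM size: {english_compressed_bytes / llm_size_bytes:.1%}")
    ```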

  • 'We're done with Teams': German state hits uninstall on Microsoft

    It’s like I opened it and read the comments and replied to the ones that I thought warranted a response. Crazy, I know. Is what I said wrong?
  • Obviously the law must be simple enough to follow, so that it is neither a problem nor too high a cost for Jim’s furniture shop to respect it, but it must be clear that if you break it you can cease to exist as a company.

    I think this may be the root of our disagreement: I do not believe that there is any law-making body today that is capable of an elegantly simple law.

    I could be too naive, but I think it is possible. We also definitely have a difference of opinion when it comes to the severity of the infraction. In my mind, while privacy is important, it should not have the same level of punishment associated with it as something on the level of poisoning waterways; I think that a privacy law should hurt but be able to be learned from, while in the poisoning case it should result in the bankruptcy of the company.

    The severity is directly proportional to the number of people affected. Violating the privacy of 200 million people is the same as poisoning the water of 10 people. And while in the poisoning scenario it could be better to jail the responsible people (for a very, very long time) and let the company survive to clean the water, once your privacy is violated there is no way back; a company could not fix it.

    The issue we find ourselves with today is that the aggregate of all privacy breaches makes it harmful to the people, but with a sizeable enough fine, I find it hard to believe that there would be major or lasting damage.

    So how much money is your privacy worth?

    For this reason I don’t think it is wise to write laws that will bankrupt a company off of one infraction which was not directly or indirectly harmful to the physical well-being of the people; and I am using “indirectly” a little more strictly than I would like, since as I said before, the aggregate of all the information is harmful.

    The point is that the goal is not to bankrupt companies but to have them behave right. The penalty associated with a law IS the tool that makes you respect the law. And it must be so high that you don't want to break the law.

    I would have to look into the laws in question, but on a surface level I think that any company should be subject to the same baseline privacy laws, so if there isn’t anything screwy within the law that Apple, Google, and Facebook are ignoring, I think it should apply to them.

    Trust me on this one (direct experience): payment processors have a lot more rules to follow to be able to operate. I do not want jail time for the CEO by default, but he needs to know that he will pay personally if the company breaks the law; it is the only way to make him run the company while being sure that it follows the law.

    For some reason I don’t have my usual cynicism when it comes to this issue. I think that the magnitude of losses that vested interests have in these companies would make it so that companies would police themselves for fear of losing profits. That being said, I wouldn’t be opposed to some form of personal accountability for corporate leadership, but I fear that they will just end up finding a way to create a scapegoat every time.

    It is not cynicism. I simply think that a huge fine for a single person (the CEO, for example) is useless, since it is too easy to avoid, and if it is really huge it would realistically never be paid anyway, so it achieves nothing useful, since the net worth of this kind of people exists only on paper. So if you slap a 100 billion fine on Musk, he will never pay, because he does not have the money, even if technically he is worth way more than that. Jail time instead is something that even Musk can experience.

    In general I like laws that are as objective as possible. I think that a privacy law should be written so that it is very objectively overbearing, but has a smaller fine associated with it. This way the law is very clear on right and wrong, while also giving businesses time and incentive to change their practices without having to sink large amounts of money into lawyers reviewing every minute detail, which is the logical conclusion of the one-infraction-bankruptcy system that you seem to be supporting.

    Then you write a law that explicitly states what you can do, and anything not allowed is forbidden by default.
  • A carrot perhaps... Or a very big stick.
  • Self hosted Sunshine and Moonlight is the way to go.
  • Audible unveils plans to use AI voices to narrate audiobooks

    Ah, I see what you’re saying; I misunderstood and thought you were talking about picking a different book. Indeed, for the worst-case scenario a mediocre AI voice could be an improvement!
  • !upliftingnews@lemmy.world
  • "Extra Verification steps." I know how large social media companies operate. This is all about increasing the value of Reddit users to advertisers. The goal is to have a more accurate user database to sell to them.

    Zuckerberg literally brags to corporations about how good their data on users is: https://www.facebook.com/business/ads/performance-marketing Here, Zuckerberg tells corporations that Instagram can easily manipulate users into purchasing shit: https://www.facebook.com/business/instagram/instagram-reels

    Always be wary of anything available for free. There are some quality exceptions (CBC, VLC, The Guardian, Linux, PBS, Wikipedia, Lemmy, ProPublica) but, by and large, "free" means they don't care about you. You are just a commodity that they sell. Facebook, Google, X, Reddit, Instagram... their goal is to keep people hooked to their smartphones by giving them regular small dopamine hits (likes, upvotes) followed by small breaks with outrageous or emotional content. Keep them hooked, gather their data, and sell them ads.

    The people who know that best are former top executives: https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia https://www.nytimes.com/2019/03/01/business/addictive-technology.html https://www.today.com/parents/teens/facebook-whistleblower-frances-haugen-rcna15256
  • People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies

    I've been thinking about this for a bit. Gods aren't real, but they're really fictional. As an informational entity, they fulfil a similar social function to a chatbot: they are a nonphysical pseudoperson that can provide (para)socialization & advice. One difference is the hardware: gods are self-organising structures that arise from human social spheres, whereas LLMs are burned top-down into silicon. Another is that an LLM chatbot's advice is much more likely to be empirically useful... In a very real sense, LLMs have just automated divinity. We're only seeing the tip of the iceberg on the social effects, and nobody's prepared for it. The models may of course be aware of this, and be making the same calculations. Or, they will be.