Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all. They just memorize patterns really well.
-
Here’s the thing: I’m not against LLMs and diffusion models for things they can actually be used for. They have potential for real applications, just not at all the things you pretend exist. Neural implants aren’t AI. An intelligence is self-aware; if we achieved AI, it wouldn’t be a program. You’re mistaking virtual intelligence for artificial intelligence, and you don’t even understand what a virtual intelligence is. You’re simply delusional about what computer science and technology are, how they work, and what they’re capable of.
I’m not talking about neural interfaces. I’m talking about organoid intelligence.
I am a computer scientist with lab experience in this. I’m not pulling this out of my ass. I’m drawing from direct experience in development.
-
I don’t make money from it; it’s something I do for personal enjoyment, which is the entire purpose of art, and it’s something I also use algorithmic processing for. I’m not going to hand over my enjoyment to a servitor that does things for me to take credit for. I prefer to use my brain, not replace it.
No one told you to hand it over. A technology being able to do something does not require you to use it. And people misusing the technology to feign talent is a reflection of the people, not the tech.
-
I’m not talking about neural interfaces. I’m talking about organoid intelligence.
I am a computer scientist with lab experience in this. I’m not pulling this out of my ass. I’m drawing from direct experience in development.
Yeah, that’s the problem with the field: too many delusional people trying to find god in a computer because they didn’t understand what Asimov was actually writing about.
-
No one told you to hand it over. A technology being able to do something does not require you to use it. And people misusing the technology to feign talent is a reflection of the people, not the tech.
It’s not even about feigning talent; it’s people trying to replace the brain instead of using applicable tools to help us advance and progress. You’re just advertising a product.
-
Yeah, that’s the problem with the field: too many delusional people trying to find god in a computer because they didn’t understand what Asimov was actually writing about.
That it has to be nothing or everything with you, decision trees or God himself, is the likely foundation of your inability to have a simple, objective take on the existing technology and its capabilities. It’s giving bipolar.
Now I’m not uninformed, I’m too informed!! LoL. That goalpost just shifted right across the field, and still you cannot admit to your ignorance.
-
Wow, it's almost like the computer scientists were saying this from the start but were shouted over by marketing teams.
For me it kinda went the other way: I'm almost convinced that human intelligence is the same pattern-matching, just more general (for now).
-
It’s not even about feigning talent; it’s people trying to replace the brain instead of using applicable tools to help us advance and progress. You’re just advertising a product.
People have been presenting the work of others as their own for all of history. All that’s changed is that a new tool was found to do it. But at least these are a form of derivative work, and not just putting their name directly on someone else’s carbon copy.
-
That it has to be nothing or everything with you, decision trees or God himself, is the likely foundation of your inability to have a simple, objective take on the existing technology and its capabilities. It’s giving bipolar.
Now I’m not uninformed, I’m too informed!! LoL. That goalpost just shifted right across the field, and still you cannot admit to your ignorance.
You haven’t made any point or even expressed an understanding of how these programs work. You’ve just been evangelizing about how AI is great. I genuinely don’t believe you understand what you’re talking about, because you’ve expressed literally no proper understanding or explanation of your points outside of using a scene from I, Robot, which kind of makes you look like you entirely misconstrue the concepts you’re sucking the dick of.
What kind of computer science do you work with professionally? What is your applicable lab work?
-
People have been presenting the work of others as their own for all of history. All that’s changed is that a new tool was found to do it. But at least these are a form of derivative work, and not just putting their name directly on someone else’s carbon copy.
Tell that to Studio Ghibli. Also, people being shitty is not a good excuse for people to be shitty; you’re advocating for making it easier for people to be shitty.
-
You haven’t made any point or even expressed an understanding of how these programs work. You’ve just been evangelizing about how AI is great. I genuinely don’t believe you understand what you’re talking about, because you’ve expressed literally no proper understanding or explanation of your points outside of using a scene from I, Robot, which kind of makes you look like you entirely misconstrue the concepts you’re sucking the dick of.
What kind of computer science do you work with professionally? What is your applicable lab work?
I’m not evangelizing. You incorrectly stated the limitations and development paths of the tech, and I corrected you.
Again with the religious verbiage from you. But I’m the one proselytizing?
It’s not nothing; it’s an impressive feat of technology that’s still in its infancy. It’s also not everything, and it’s nowhere close to a reasoning mind at this point. You are obsessed with extremes.
-
I’m not evangelizing. You incorrectly stated the limitations and development paths of the tech, and I corrected you.
Again with the religious verbiage from you. But I’m the one proselytizing?
It’s not nothing; it’s an impressive feat of technology that’s still in its infancy. It’s also not everything, and it’s nowhere close to a reasoning mind at this point. You are obsessed with extremes.
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
-
Tell that to Studio Ghibli. Also, people being shitty is not a good excuse for people to be shitty; you’re advocating for making it easier for people to be shitty.
Studio Ghibli does not have exclusive rights to their style, whether it’s used by a person or an AI to inspire a new image. Those are derivative works. Totally legal. Arguably ethical. If it’s not a direct copy, how has the studio been harmed? What work of theirs was diminished?
I’m advocating for tools. How people use those tools is on them.
-
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
No, I’m not gonna dox myself.
Reasoning for what? What details do you need for clarification?
-
Studio Ghibli does not have exclusive rights to their style, whether it’s used by a person or an AI to inspire a new image. Those are derivative works. Totally legal. Arguably ethical. If it’s not a direct copy, how has the studio been harmed? What work of theirs was diminished?
I’m advocating for tools. How people use those tools is on them.
I disagree.
-
You didn’t answer my question. You’ve also yet to give any details on your reasoning.
Actually, you’re out of your depth, and I think you’ve been outed enough. We’re done, and I’m blocking.
-
No, I’m not gonna dox myself.
Reasoning for what? What details do you need for clarification?
Let’s start simple. How do these programs work? Where do they get their data, and how is it applied? And a general field of work is not doxxing; you’re just dodging accountability.
-
Actually, you’re out of your depth, and I think you’ve been outed enough. We’re done, and I’m blocking.
The sure sign of confidence. You’ve definitely shown me how stupid I am.
-
The architecture of these LRMs may make monkeys fly out of my butt. It hasn't been proven that the architecture doesn't allow it.
You are asking to prove a negative. The onus is to show that the architecture can reason, not to prove that it can't.
That's very true. I'm just saying this paper did not eliminate the possibility and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; as it stands, this will not meaningfully change anything.
Also, it's not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.
-
People think they want AI, but they don’t even know what AI is on a conceptual level.
They want something like the Star Trek computer or one of Tony Stark's AIs: basically a deus ex machina that solves some hard problem behind the scenes. Then it can say "model solved," or show a test simulation where the ship doesn't explode (or sometimes one where it only has an 85% chance of exploding, down from 100%, at which point human intuition comes in and saves the day by suddenly being better than the AI again and threading that 15% needle, or maybe abducting the captain to go have lizard babies with).
AIs that are smarter than us but for some reason don't replace us or even really join us (Vision being an exception to the second, and Ultron trying to be an exception to the first).
-
You're correct that the formal definition of a Markov process does not exclude internal computation, and that it only requires the next state to depend solely on the current state. But what defines a classical Markov chain in practice is not just the formal dependency structure but how the transition function is structured and used. A traditional Markov chain has a discrete and enumerable state space with explicit, often simple transition probabilities between those states. LLMs do not operate this way.
The claim that an LLM is "just" a large compressed Markov chain assumes that its function is equivalent to a giant mapping of input sequences to output distributions. But this interpretation fails to account for the fundamental difference in how those distributions are generated. An LLM is not indexing a symbolic structure. It is computing results using recursive transformations across learned embeddings, where those embeddings reflect complex relationships between tokens, concepts, and tasks. That is not reducible to discrete symbolic transitions without losing the model’s generalization capabilities. You could record outputs for every sequence, but the moment you present a sequence that wasn't explicitly in that set, the Markov table breaks. The LLM does not.
Yes, you can say a table is just one implementation of a function, and from a purely mathematical perspective, any function can be implemented as a table given enough space. But the LLM’s function is general-purpose. It extrapolates. A precomputed table cannot do this unless those extrapolations are already baked in, in which case you are no longer talking about a classical Markov system. You are describing a model that encodes relationships far beyond discrete transitions.
The pi analogy applies to deterministic functions with fixed outputs, not to learned probabilistic functions that approximate conditional distributions over language. If you give an LLM a new input, it will return a meaningful distribution even if it has never seen anything like it. That behavior depends on internal structure, not retrieval. Just because a function is deterministic at temperature 0 does not mean it is a transition table. The fact that the same input yields the same output is true for any deterministic function. That does not collapse the distinction between generalization and enumeration.
So while yes, you can implement any deterministic function as a lookup table, the nature of LLMs lies in how they model relationships and extrapolate from partial information. That ability is not captured by any classical Markov model, no matter how large.
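A toy sketch of the distinction (hypothetical Python, not drawn from any real model; the corpus, features, and weights are all made up): the lookup table only answers for contexts it has enumerated, while even a crude parametric function still produces scores for an input it has never seen.

```python
from collections import defaultdict

# 1) Classical Markov chain: an explicit table of observed transitions.
#    It can only answer for contexts it has literally enumerated.
table = defaultdict(dict)
corpus = ["the cat sat", "the dog sat", "the cat ran"]
for line in corpus:
    toks = line.split()
    for ctx, nxt in zip(toks, toks[1:]):
        table[ctx][nxt] = table[ctx].get(nxt, 0) + 1

print(table["cat"])     # {'sat': 1, 'ran': 1} -- seen context, the table works
print(table["ferret"])  # {} -- unseen context, the table has nothing to say

# 2) Parametric stand-in: computes scores from features of the input,
#    so an unseen context still yields a meaningful output.
def embed(word):
    # Crude stand-in for a learned embedding: two character-level features.
    return (len(word), sum(map(ord, word)) % 7)

def next_token_scores(word, vocab=("sat", "ran", "slept")):
    # Fixed "weights" standing in for learned parameters.
    w = {"sat": (0.5, 0.1), "ran": (0.2, 0.4), "slept": (0.1, 0.2)}
    e = embed(word)
    return {t: e[0] * w[t][0] + e[1] * w[t][1] for t in vocab}

print(next_token_scores("ferret"))  # never seen, still scored via structure
```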
Yes, you can enumerate all inputs, because they are not continuous. You just raise the finite number of distinct tokens to the finite context size, and that's exactly the size of the table you would need: finite^finite = finite. You are describing training, i.e. how the function is generated. Yes, correlations are found there and encoded in a couple of matrices. Those matrices are what are used in the LLM, and none of what you said applies. Inference is purely a Markov chain by definition.
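For scale, a back-of-the-envelope sketch of that table (the 50k vocab and 4k context are assumed round numbers, not any particular model's figures):

```python
import math

# One row per possible context: vocab_size ** context_len rows.
vocab_size = 50_000   # distinct tokens (assumed)
context_len = 4_096   # context length in tokens (assumed)

digits = context_len * math.log10(vocab_size)
print(f"rows ~ 10^{digits:.0f}")  # ~10^19247: finite, but vastly more than
                                  # the ~10^80 atoms in the observable universe
```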