I'm looking for an article showing that LLMs don't know how they work internally
-
but there's been significant research and progress in tracing the internals of LLMs that shows logic, planning, and reasoning.
would there be a source for such research?
https://www.anthropic.com/research/tracing-thoughts-language-model for one, the exact article OP was asking for
-
https://www.anthropic.com/research/tracing-thoughts-language-model for one, the exact article OP was asking for
but this article suggests that LLMs do the opposite of logic, planning, and reasoning?
quoting:
Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,
are there any sources that show that LLMs use logic, conduct planning, and reason (as was asserted in the 2nd-level comment)?
-
They walk down runways and pose for magazines. Do they reason? Sometimes.
But why male models?
-
but this article suggests that LLMs do the opposite of logic, planning, and reasoning?
quoting:
Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,
are there any sources that show that LLMs use logic, conduct planning, and reason (as was asserted in the 2nd-level comment)?
No, you're misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and generate a response that "sounds" right, but tracing their internal logic shows they operate differently from what they claim. You can't ask an LLM to explain its own reasoning. But the article shows how they've made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just "autocomplete".
-
People don't understand what "model" means. That's the unfortunate reality.
Yeah. That's because people's unfortunate reality is a "model".
-
More than enough people who claim to know how it works think it might be "evolving" into a sentient being inside its little black box. Example from a conversation I gave up on...
https://sh.itjust.works/comment/18759960
I don't want to brigade, so I'll put my thoughts here. The linked comment is making the same mistake about self-preservation that people make when they ask an LLM to "show its work" or explain its reasoning. The text response of an LLM cannot be taken at its word or used to confirm that kind of theory. It requires tracing the logic under the hood.
Just like how it's not actually an AI assistant, but trained and prompted to output text that is expected to be what an AI assistant would respond with, if it is expected that it would pursue self-preservation, then it will output text that matches that. Its output is always "fake".
That doesn't mean there isn't a real potential element of self-preservation, though, but you'd need to dig and trace through the network to show it, not use the text output.
-
The study being referenced explains in detail why they can’t. So I’d say it’s Anthropic who stated LLMs don’t have the capacity to reason, and that’s what we’re discussing.
The popular media tends to go on and on about conflating AI with AGI and synthetic reasoning.
You're confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it can't reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.
-
I don't know how I work. I couldn't tell you much about neuroscience beyond "neurons are linked together and somehow that creates thoughts". And even when it comes to complex thoughts, I sometimes can't explain why. At my job, I often lean on intuition I've developed over a decade. I can look at a system and get an immediate sense if it's going to work well, but actually explaining why or why not takes a lot more time and energy. Am I an LLM?
I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, with some other chemical influences.
I think everyone forgets that neural networks were used in AI to replicate how animal brains work, and clearly if it worked for us to get smart, then it should work for something synthetic. Well, we've certainly answered that now.
Everyone being like "oh it's just a predictive model and it's all math and math can't be intelligent" is questioning exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it self-learns from correctly assuming how things work. We modelled AI off of ourselves. And if we don't understand how we work, of course we're not gonna understand how it works.
-
You can prove it’s not by doing some matrix multiplication and seeing its matrix multiplication. Much easier way to go about it
Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought
Your comment is like saying an audio file isn't really music because it's just a series of numbers.
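For the record, "it's matrix multiplication" is true at the implementation level, and that part is easy to show. Here's a minimal toy sketch in Python/NumPy (made-up sizes and random weights, not any real model) of a forward pass as plain matrix operations:

```python
import numpy as np

# Toy two-layer network, purely to illustrate "it's all matrix operations".
# Sizes and random weights are made up; this is not any real model.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)          # an input vector (think: a token embedding)
W1 = rng.standard_normal((64, 16))   # first layer weights
W2 = rng.standard_normal((8, 64))    # second layer weights

h = np.maximum(0, W1 @ x)            # matrix multiply + ReLU nonlinearity
y = W2 @ h                           # another matrix multiply
print(y.shape)                       # (8,) -- the network's output
```

Whether stacking billions of these operations, plus attention and nonlinearities, amounts to reasoning is exactly the question being argued here; the arithmetic alone doesn't answer it either way.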
-
You're confusing the confirmation that the LLM cannot explain its under-the-hood reasoning as text output with a confirmation that it can't reason at all. Anthropic is not claiming that it cannot reason. They actually find that it performs complex logic and behavior like planning ahead.
No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes and run a meta analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.
-
No, they really don’t. It’s a large language model. Input cues instruct it as to which weighted path through the matrix to take. Those paths are complex enough that the human mind can’t hold all the branches and weights at the same time. But there’s no planning going on; the model can’t backtrack a few steps, consider different outcomes and run a meta analysis. Other reasoning models can do that, but not language models; language models are complex predictive translators.
To write the second line, the model had to satisfy two constraints at the same time: the need to rhyme (with "grab it"), and the need to make sense (why did he grab the carrot?). Our guess was that Claude was writing word-by-word without much forethought until the end of the line, where it would make sure to pick a word that rhymes. We therefore expected to see a circuit with parallel paths, one for ensuring the final word made sense, and one for ensuring it rhymes.
Instead, we found that Claude plans ahead. Before starting the second line, it began "thinking" of potential on-topic words that would rhyme with "grab it". Then, with these plans in mind, it writes a line to end with the planned word.
actually read the research?
-
I've read that article. They used something they called an "MRI for AIs", and checked e.g. how an AI handled math questions, and then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using a textbook answer, it actually took a different approach. That's what I remember of that article.
But yes, it exists, and it is science, not TikTok.
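To give a very rough idea of what "checking the pathways" means in practice: this is not Anthropic's actual tooling, just a generic sketch of reading a network's internals instead of trusting its text output. The toy PyTorch model and all names and sizes below are made up for illustration.

```python
import torch
import torch.nn as nn

# Toy stand-in network; the point is only that you can read intermediate
# activations directly rather than asking the model to describe its own reasoning.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 8),
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()  # record what this layer actually computed
    return hook

# Attach a forward hook to the hidden ReLU layer so its output is recorded.
model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(1, 16)
_ = model(x)
print(captured["hidden"].shape)  # torch.Size([1, 64])
```

The real research builds much more elaborate analysis on top of this kind of access, but the basic point is the same: the evidence comes from the internals, not from what the model says about itself.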
Thank you. I found the article, link in the OP.
-
"Researchers" did a thing I did the first day I was actually able to ChatGPT and came to a conclusion that is in the disclaimers on the ChatGPT website. Can I get paid to do this kind of "research?" If you've even read a cursory article about how LLMs work you'd know that asking them what their reasoning is for anything doesn't work because the answer would just always be an explanation of how LLMs work generally.
Very arrogant answer. Good that you have intuition, but the article is serious, especially given how LLMs are used today. The link to it is in the OP now, but I guess you already know everything...
-
There was a study by Anthropic, the company behind Claude, that developed another AI that they used as a sort of "brain scanner" for the LLM, in the sense that it allowed them to see a sort of model of how the LLM's "internal process" worked.
Yes, that's it. I added the link in the OP.
-
I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, with some other chemical influences.
I think everyone forgets that neural networks were used in AI to replicate how animal brains work, and clearly if it worked for us to get smart, then it should work for something synthetic. Well, we've certainly answered that now.
Everyone being like "oh it's just a predictive model and it's all math and math can't be intelligent" is questioning exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it self-learns from correctly assuming how things work. We modelled AI off of ourselves. And if we don't understand how we work, of course we're not gonna understand how it works.
Even if LLM "neurons" and their interconnections are modeled to the biological ones, LLMs aren't modeled on human brain, where a lot is not understood.
The first thing is that how the neurons are organized is completely different. Think about the cortex and the transformer.
Second is the learning process. Nowhere close.
The fact explained in the article about how we do math, through logical steps while LLMs use resemblance is a small but meaningful example. And it also shows that you can see how LLMs work, it's just very difficult
-
Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought
Your comment is like saying an audio file isn't really music because it's just a series of numbers.
Improper comparison; an audio file isn't the basic action on data, it is the data; the audio codec is the basic action on the data.
"An LLM model isn't really an LLM because it's just a series of numbers"
But the actions of turning the series of numbers into something of value (an audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed.
And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It's matrix math; it's cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it's just matrix math, and that's how we know it can't think.
-
I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, with some other chemical influences.
I think everyone forgets that neural networks were used in AI to replicate how animal brains work, and clearly if it worked for us to get smart, then it should work for something synthetic. Well, we've certainly answered that now.
Everyone being like "oh it's just a predictive model and it's all math and math can't be intelligent" is questioning exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it self-learns from correctly assuming how things work. We modelled AI off of ourselves. And if we don't understand how we work, of course we're not gonna understand how it works.
LLMs, among other things, lack the whole neurotransmitter "live" regulation aspect and the plasticity of the brain.
We are nowhere near a close representation of actual brains. LLMs to brains are like a horse carriage compared to a modern car. Yes, they both have four wheels and they move, but that is far from making them the same thing.
-
Improper comparison; an audio file isn't the basic action on data, it is the data; the audio codec is the basic action on the data.
"An LLM model isn't really an LLM because it's just a series of numbers"
But the actions of turning the series of numbers into something of value (an audio codec for an audio file, matrix math for an LLM) are actions that can be analyzed.
And clearly matrix multiplication cannot reason any better than an audio codec algorithm. It's matrix math; it's cool, we love matrix math. Really big matrix math is really cool and makes real-sounding stuff. But it's just matrix math, and that's how we know it can't think.
Do LLMs not exhibit emergent behaviour? But who am I, a simple skin-bag of chemicals, to really say.
-
LLMs, among other things, lack the whole neurotransmitter "live" regulation aspect and the plasticity of the brain.
We are nowhere near a close representation of actual brains. LLMs to brains are like a horse carriage compared to a modern car. Yes, they both have four wheels and they move, but that is far from making them the same thing.
So LLMs are like a human with anterograde amnesia. They're like Dory.
-
I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, with some other chemical influences.
I think everyone forgets that neural networks were used in AI to replicate how animal brains work, and clearly if it worked for us to get smart, then it should work for something synthetic. Well, we've certainly answered that now.
Everyone being like "oh it's just a predictive model and it's all math and math can't be intelligent" is questioning exactly how their own brains work. We are just prediction machines: the brain releases dopamine when it correctly predicts things, and it self-learns from correctly assuming how things work. We modelled AI off of ourselves. And if we don't understand how we work, of course we're not gonna understand how it works.
They work the exact same way we do.
Two things being difficult to understand does not mean that they are the exact same.
-