We need to stop pretending AI is intelligent
-
What is "actual intelligence" then?
I have no idea. For me it's a "you recognize it when you see it" kinda thing. Normally I'm in favor of just measuring things with a clearly defined test or benchmark, but it's in the nature of large neural networks that they can score great on any desired benchmark while failing at the underlying ability the benchmark was supposed to test (overfitting). I know this sounds like a lazy answer, but it's very hard to pin down a definition for something whose essence is generalizing and reacting to new challenges.
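As a toy illustration of that benchmark problem (the data and the polynomial "model" here are invented for the example, not anything from the article): a model with enough capacity can ace the exact questions it was fit on while being useless at the underlying task.

```python
# Toy overfitting demo: a flexible model "aces the benchmark" (the points
# it was fit on) while failing the ability the benchmark was meant to test.
import numpy as np

rng = np.random.default_rng(0)

# "Benchmark": 10 noisy samples of the true relationship y = 2x
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.1, size=10)

# Overparameterized model: a degree-9 polynomial through all 10 points
coeffs = np.polyfit(x_train, y_train, deg=9)

# Near-zero error on the benchmark itself...
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)

# ...but much larger error on fresh inputs from the same task
x_new = np.linspace(0.01, 0.99, 200)
test_err = np.mean((np.polyval(coeffs, x_new) - 2 * x_new) ** 2)

print(f"benchmark error: {train_err:.2e}")  # tiny: aced the benchmark
print(f"new-input error: {test_err:.2e}")   # far bigger: failed the ability
```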
But whether LLMs do have "actual intelligence" or not was not my point. You can definitely make a case for claiming they do, even though I would disagree with that. My point was that calling them AIs instead of LLMs bypasses the entire discussion on their alleged intelligence as if it wasn't up for debate. Which is misleading, especially to the general public.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous. Then we're not so different from LLMs after all.
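To make the "high probability of following each other" part concrete, here's a minimal bigram sketch; the toy corpus is invented for the example, and real LLMs are vastly more complex than this.

```python
# Minimal next-word prediction by bigram counts: pick whichever word most
# often followed the current one in the (made-up) corpus.
from collections import Counter, defaultdict

corpus = (
    "let's go eat a burger at mcdonalds . "
    "let's go eat a salad at home . "
    "let's go eat a burger at home ."
).split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Highest-count follower; ties broken arbitrarily
    return following[word].most_common(1)[0][0]

print(most_likely_next("eat"))  # 'a'
print(most_likely_next("a"))    # 'burger' (2 of the 3 continuations)
# "car" never follows "a" here, so "eat a car" is never the prediction
```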
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
-
It's called polymorphism. It always amuses me that engineers, software and hardware, handle complexities far beyond this every day but can't write for beans.
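(For anyone who hasn't written code: a minimal sketch of what polymorphism means, with invented class names. The same name resolves to different behavior from context, much like "it's" vs. "its" resolves from the surrounding sentence.)

```python
# One name, many behaviors: the call site stays the same and the runtime
# type of the object decides what actually happens.
class Dog:
    def speak(self):
        return "woof"

class Duck:
    def speak(self):
        return "quack"

for animal in (Dog(), Duck()):
    print(animal.speak())  # woof, then quack, from the same call
```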
Do you think it's a matter of choosing a complexity to care about?
-
To be fair, the term "AI" has always been used in an extremely vague way.
NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we've been referring to those as "AI" for decades without anybody taking issue with it.
I've heard it said that the difference between Machine Learning and AI is that if you can explain how the algorithm got its answer, it's ML, and if you can't, then it's AI.
-
The book The Emperor's New Mind is old (1989), but it gave a good argument for why machine-based AI was not possible. Our minds work on a fundamentally different principle then Turing machines.
"than"...
IF THEN
MORE THAN
-
I'd agree with you if I saw "hi's" and "her's" in the wild, but nope. I still haven't seen someone write "that car is her's".
Keep reading...
-
Do you think it's a matter of choosing a complexity to care about?
If you can formulate that sentence, you can handle "it's means it is". Come on. Or "common" if you prefer.
-
Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that in everyday English, getting your point across is more important than following some arbitrary rules.
Which become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I'm saying that as if it's a new thing, but it does feel recent to be taught that side of English rather than just "the Queen's(/King's) English" as the style to strive for in writing and formal communication.
I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. There's no exact science to this.
Standard English has such a long list of weird and contradictory roles
rules.
-
I think what he is implying is that current computer design will never be able to gain consciousness. Maybe a fundamentally different type of computer can, but is anything like that even on the horizon?
I believe what you say. I don't believe that is what the article is saying.
-
much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself takes transportation or energy to produce.
Customarily, when doing these kinds of calculations we ignore the stuff that keeps us alive, because those things are needed regardless of economic contribution; you know, people are people and not tools.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some who aren't.
No, that's the point of the article. You also haven't really said much at all.
-
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
Clearly intelligent people mispell and have horrible grammar too.
-
Can we say that AI has the potential for "intelligence", just like some people do? There are clearly some very intelligent people in the world, and very clearly some who aren't.
No, the current branch of AI is very unlikely to result in artificial intelligence.
-
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don't say "Let's go eat a car at McDonalds" unless we're specifically instructed to say so.
What does consciousness even mean? If you can't quantify it, how can you prove humans have it and LLMs don't? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous. Then we're not so different from LLMs after all.
This is so oversimplified.
-
You're on point. The interesting thing is that most opinions like the article's were formed last year, before the models started being trained with reinforcement learning and synthetic data.
Now there are models that reason, and they have seemingly come up with original answers to difficult problems designed to test the limits of human capacity.
They're like Meeseeks (using Rick and Morty lore as an example): they only exist briefly, do what they're told, and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that); I've even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.
But again, like Meeseeks, they disappear when the context window closes.
Once they're able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They'll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It's not far away; the absurd R&D effort going into it is probably going to keep kicking out new results. They're already absurdly impressive, tech companies are scrambling over each other to build them, they're betting absurd amounts of money that they're right, and I wouldn't bet against it.
Read Apple's paper on AI and the reasoning models. While they are likely to get more things right, they still don't have intelligence.
-
You know, I think it's actually the opposite. Anyone pretending their brain is doing more than pattern recognition, and that AI therefore can't be "intelligence", is a fucking idiot.
No, you're failing the Eliza test, and it is very easy for people to fall for it.
-
much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself takes transportation or energy to produce.
And we "need" none of that to live. We just choose to use it.
-
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???
Get a self-driving car to drive in a snowstorm or a torrential downpour. People are really downplaying humans' abilities.
-
Human drivers are only safe when they're not distracted, emotionally disturbed, intoxicated, or physically challenged (vision, muscle control, etc.). 1% of the population has epilepsy, and a large number of them are in denial or simply don't realize that they have periodic seizures - until they wake up after their crash.
So, yeah, AI isn't perfect either - and it's not as good as an "ideal" human driver, but at what point will AI be better than a typical/average human driver? Not today, I'd say, but soon...
Not going to happen soon. It's the 90/10 problem.