We need to stop pretending AI is intelligent
-
Yours didn't and read it just fine.
That's irrelevant. That's like saying you shouldn't complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.
-
Kinda dumb that apostrophe s means possessive in some circumstances and then a contraction in others.
I wonder how different it'll be in 500 years.
I'd agree with you if I saw "hi's" and "her's" in the wild, but nope. I still haven't seen someone write "that car is her's".
-
Proper grammar means shit all in English, unless you're writing for a specific style, in which case you follow the grammar rules for that style.
Standard English has such a long list of weird and contradictory rules with nonsensical exceptions that, in everyday English, getting your point across is better than trying to follow some arbitrary rulebook.
And those rules become even more arbitrary as English becomes more and more a melting pot of multicultural idioms and slang. I'm saying that as if it's a new thing, but it does feel recent to be taught that side of English, rather than just "the Queen's (or King's) English" as the style to strive for in writing and formal communication.
I say as long as someone can understand what you're saying, your English is correct. If it becomes vague due to mishandling of the classic rules of English, then maybe you need to follow them a bit. I don't have a specific science to this.
I understand that languages evolve, but for now, writing "it's" when you meant "its" is a grammatical error.
-
That's irrelevant. That's like saying you shouldn't complain about someone running a red light if you stopped in time before they t-boned you - because you understood the situation.
Are you really comparing my response about the tone of correcting minor grammatical errors to someone brushing off nearly killing somebody, right now?
-
The machinery needed for human thought is certainly a part of AI. At most you can only claim it's not intelligent because intelligence is a specifically human trait.
Tell that to the crows and chimps that know how to solve novel problems.
-
Huh? Since when is an AI's purpose to "imitate human behavior"? AI is about solving problems.
It is and it isn't. Again, the whole thing is super vague. Machine vision or pattern seeking algorithms do not try to imitate any human behavior, but they fall under AI.
Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.
-
Be careful... If you get in an accident I guaran-god-damn-tee you they will use it as an excuse not to pay out. Maybe after a lawsuit you'd see some money but at that point half of it goes to the lawyer and you're still screwed.
Oh I'm aware of the potential pitfalls but it's something I'm willing to risk to stick it to insurance. I wouldn't even carry it if it wasn't required by law. I have the funds to cover what they would cover.
-
I’m still sad about that dot.
The dot does not care. It can't even care. It doesn't even know it exists. It can't know shit.
-
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
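For the curious, here's roughly what that "guesses which word will come next" claim amounts to in code. This is a toy sketch with made-up probabilities; in a real LLM the distribution comes out of a neural network conditioned on all the preceding text, but the final step really is just weighted guessing like this:

```python
import random

# Toy next-token distribution. In a real LLM these probabilities come
# from a trained network, not a hand-written table like this one.
next_token_probs = {
    "mat": 0.55,
    "floor": 0.25,
    "roof": 0.15,
    "moon": 0.05,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The cat sat on the"
print(prompt, sample_next_token(next_token_probs))
# Usually prints "The cat sat on the mat", occasionally one of the others.
```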
The other thing that most people don't focus on is how we train LLMs.
We're basically building something like a spider-tailed viper. A spider-tailed viper is a kind of snake with a growth on its tail that looks a lot like a spider. It wiggles the growth around so it looks like a spider, convincing birds they've found a snack, and when a bird gets close enough the snake strikes and eats it.
Now, I'm not saying we're building something that is designed to kill us. But, I am saying that we're putting enormous effort into building something that can fool us into thinking it's intelligent. We're not trying to build something that can do something intelligent. We're instead trying to build something that mimics intelligence.
What we're effectively doing is looking at this thing that mimics a spider, and trying harder and harder to tweak its design so that it looks more and more realistic. What's crazy about that is that we're not building this to fool a predator so that we're not in danger. We're not doing it to fool prey, so we can catch and eat them more easily. We're doing it so we can fool ourselves.
It's like if, instead of a spider-tailed snake, a snake evolved a bird-like tail, and evolution kept tweaking the design so that the tail was more and more likely to fool the snake so it would bite its own tail. Except, evolution doesn't work like that because a snake that ignored actual prey and instead insisted on attacking its own tail would be an evolutionary dead end. Only a truly stupid species like humans would intentionally design something that wasn't intelligent but mimicked intelligence well enough that other humans preferred it to actual information and knowledge.
-
It very much isn't and that's extremely technically wrong on many, many levels.
Yet it's still one of the higher-upvoted comments here.
Which says a lot.
I'll be pedantic, but yeah. It's all transistors all the way down, and transistors are pretty much chained if/then switches.
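To make the pedantry concrete, here's a toy sketch of the "chained if/then switches" idea: a single branching NAND function, with every other gate and a one-bit adder chained out of it. It illustrates the principle that computation reduces to switches, not how real hardware is actually laid out:

```python
# One "switch": a NAND gate written as an if/then branch.
def nand(a: int, b: int) -> int:
    if a == 1 and b == 1:
        return 0
    return 1

# Everything else is just NANDs chained together.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry), built entirely from NANDs."""
    return xor(a, b), and_(a, b)

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
```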
-
Oh I'm aware of the potential pitfalls but it's something I'm willing to risk to stick it to insurance. I wouldn't even carry it if it wasn't required by law. I have the funds to cover what they would cover.
If you have the funds you could self-insure. You'd need to look up the details for your jurisdiction, but the gist of it is you keep the required coverage amount in an account that you never touch until you need to pay out.
-
My auto correct doesn't care.
So you trust your SLM more than your fellow humans?
-
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.
This is not a good argument.
-
Calling these new LLMs just if statements is quite an oversimplification. They're technically something that has not existed before, and they enable use cases that were previously impossible to implement.
This is far from general intelligence, but there are now solutions to a few coding problems that were near impossible five years ago.
Five years ago I would have laughed in your face if you'd asked me to write code that summarizes a description typed in by a user. Now I laugh and say give me your wallet, because I need to call an API or buy a few GPUs.
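For what it's worth, the "call an API" part really is about this small now. A rough sketch against OpenAI's chat completions REST endpoint; the model name and prompt wording are placeholders, not recommendations:

```python
import os
import requests

def summarize(description: str) -> str:
    """Summarize a user-entered description via a hosted LLM API.
    Assumes an OpenAI-style chat completions endpoint; "gpt-4o-mini"
    is just a placeholder model name."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "user",
                 "content": f"Summarize this in two sentences:\n\n{description}"},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```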
I think the point is that this is not the path to general intelligence. This is more like cheating on the Turing test.
-
That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself. There's no translation of weights to jumps in transformers and the underlying attention mechanisms.
I suggest reading https://en.m.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
That is not really true. Yes, there are jump instructions being executed when you run inference on a model, but they are in no way related to the model itself.
The model is data. It needs to be operated on to get information out. That means lots of JMPs.
If someone said viewing a gif is just a bunch of if-else's, that's also true. That the data in the gif isn't itself a bunch of if-else's isn't relevant.
Executing LLMs is particularly JMP-heavy. It's why you need massive amounts of fast RAM: caching doesn't help them.
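For reference, here's a minimal numpy sketch of the attention step being argued about. Both sides' points show up clearly: the weights themselves are just numbers fed through straight-line matrix arithmetic with no data-dependent branching, while the jumps live in the loops and interpreter executing it:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Pure straight-line arithmetic; the model's weights never become
    if/else or JMP instructions, they're just numbers being multiplied."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query/key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```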
-
Then, unfortunately, you're even less self-aware than the average LLM chatbot.
Dude, chatbots lie about their "internal reasoning process" because they don't really have one.
Writing is an offshoot of verbal language, which, when people construct it, almost always has more to do with sound and personal style than with the popularity of words. It's not uncommon to bump into individuals who have a near-singular personal grammar and vocabulary, and who speak and write completely differently, with a distinct style of their own. Also, people are terrible at probabilities.
As a person, I can also learn a fucking concept and apply it without needing millions of examples of it in my "training data". Because I'm a person, not a fucking statistical model.
But you know, you have to leave your house, touch grass, and actually listen to some people speak that aren't talking heads on television in order to discover that truth.
-
Dafuq? Artificial always means man-made.
Nature also makes fake stuff. For example, some fish have an appendage that looks like a worm, to attract prey. It's a fake worm. Is it "artificial"? Nope. Not man-made.
May I present to you:
The Merriam-Webster Dictionary
Definition of ARTIFICIAL
The meaning of ARTIFICIAL is made, produced, or done by humans especially to seem like something natural : man-made. How to use artificial in a sentence.
(www.merriam-webster.com)
Definition #3b
-
What do you mean, what do I mean? You were the one who brought up ideas in the first place...
If you don't think humans can conceive of new ideas wholesale, then how do you think we ever invented anything (like, for instance, the languages that chat bots write)?
Also, you're the one with the burden of proof in this exchange. It's a pretty hefty claim to say that humans are unable to conceive of new ideas and are simply chatbots with organs, given that we created the freaking chatbot you're convinced we all are.
You may not have new ideas, or be creative. So maybe you're a chatbot with organs, but people who aren't do exist.
-
So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.
LLMs are really good relational databases, not an intelligence, imo.
-
Pretty low bar honestly.
-
Brain-computer interfaces: Brain implants are letting people move, speak, and interact with machines using only their thoughts. The first FDA approvals may arrive within five years.
-
VCs are starting to partner with private equity to buy up call centers, accounting firms and other "mature companies" to replace their operations with AI
-