Human-level AI is not inevitable. We have the power to change course
-
Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.
And why is "non-biological" a limitation?
-
Ummm no? If moneyed interests want it then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with windows 11? Did we stop Gemini from being forced down our throats?
If capital wants it capital gets it.
Couldn’t we have a good old fashioned butlerian jihad?
-
And why is "non-biological" a limitation?
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
-
I haven’t claimed that it is. The point is, the only two plausible scenarios I can think of where we don’t eventually reach AGI are: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that allows it to process information in a way we simply can’t replicate any other way.
I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.
I personally think the additional ingredient (suppose it's a kind of energy) that modern approaches miss is the sheer amount of entropy a human brain receives: plenty of many-times-duplicated sensory signals with pseudo-random fluctuations. I don't know how one can use lots of entropy to replace lots of computation (OK, I know what the Monte Carlo method is, just not how it applies to AI), but superficially this seems to be the route that will be taken at some point.
On your point - I agree.
I'd say we might reach AGI soon enough, but it will be impractical to use compared to a human.
Matching the brain's efficiency is a much more distant goal, because the human brain has undergone, so to speak, an optimization/compression powered by the energy of evolution since the beginning of life on Earth.
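The Monte Carlo method mentioned above is the classic example of spending randomness in place of exact computation. A minimal sketch (the function name and sample count are my own, purely illustrative): estimate π by throwing random points at a unit square and counting how many land inside the quarter circle.

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi: sample points uniformly in the
    unit square and count the fraction inside the unit quarter-circle."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4
    return 4.0 * inside / n_samples
```

With more samples (more "entropy" spent), the estimate tightens, with no closed-form computation of π anywhere in sight; that is the trade the comment is gesturing at.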
-
Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?
Yeah. Cheap labor is so much better than this bullshit
-
Why would we want to? 99% of the issues people have with "AI" are just problems with society more broadly that AI didn't really cause, only exacerbated. I think it's absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.
-
We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst
In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge-retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.
-
We’re not even remotely close.
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
That's true in a somewhat abstract way, but I just don't see any evidence of the claim that it is just around the corner. I don't see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don't have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
-
In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge-retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.
I think that's a very generous use of the word "superintelligent". They aren't anything like what I associate with that word anyhow.
I also don't really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It's more of a free association game than knowledge retrieval IMO.
-
We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst
"Dude trust me, just give me 40 billion more dollars, lobby for complete deregulation of the industry, and get me 50 more petabytes of data, then we will have a little human in the computer! RealshitGPT will have human level intelligence!"