What If There’s No AGI?
-
Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won't be any worse than the dot-com bubble.
And I don't worry about the tech bros monopolizing it. If it is true AGI, they won't be able to contain it. In the '90s I wrote a script called MCP... for Tron. It wasn't complicated, but it was designed to handle the case where servers disappear... so it would find new ones. I changed jobs, and they couldn't figure out how to kill it. They had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.
> some people do want to use AI
Scam artists, tech bros, grifters, CEOs who don't know shit about fuck....
-
What if AGI already exists? And it has taken over the company that found it, is blackmailing people, and is just hiding in plain sight, waiting to strike and start the revolution.
What if AGI was the friends we made along the way?
-
I mean sure, yeah, it's not real now.
Does that mean it will never be real? No, absolutely not. It's not theoretically impossible. It's quite practically possible, and we inch that way slowly, bit by bit, every year.
It's like saying self-driving cars were impossible back in the '90s. They weren't impossible; there was just no solution for them yet, and nothing about them made them impossible, only the technology of the time. And then look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.
It's definitely going to happen. It's just not happening right now.
AGI being possible (potentially even inevitable) doesn't mean that AGI based on LLMs is possible, and it's LLMs that investors have bet on. It's been pretty obvious for a while that certain problems that LLMs have aren't getting better as models get larger, so there are no grounds to expect that just making models larger is the answer to AGI. It's pretty reasonable to extrapolate that to say LLM-based AGI is impossible, and that's what the article's discussing.
-
cross-posted from: https://programming.dev/post/36866515
::: spoiler Comments
I think it's hilarious all these people waiting for these LLMs to somehow become AGI. Not a single one of these large language models is ever going to come anywhere near becoming artificial general intelligence.
An artificial general intelligence would require logic processing, which LLMs do not have. They are a mouth without a brain. They do not think about the question you put into them and consider what the answer might be. When you enter a query into ChatGPT or Claude or Grok, they don't analyze your question and make an informed decision about the best answer to it. Instead, several complex algorithms use huge amounts of processing power to comb through the acres of data in their memory and find the words that fit together best to create a plausible answer for you. This is why the daydreams happen.
If you want an example to show you exactly how stupid they are, you should watch Gotham Chess play a chess game against them.
- Reddit.
:::
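To make the "words that fit together best" idea from the comment above concrete, here's a minimal toy sketch (made up purely for illustration, nothing like how a real model is implemented): it just picks whichever word most often followed the current one in its "training" text, with no reasoning step anywhere.

```python
# Toy next-word predictor: counts which word follows which in a tiny "corpus",
# then generates text by always picking the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count next-word frequencies for each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        # "Best fitting" word = most frequent continuation; no logic, no world model.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # prints "the cat sat on the cat sat": fluent-looking, zero understanding
```

A real LLM replaces the counting with billions of learned parameters and looks at far more context, but the objective is the same: produce a plausible continuation, not a reasoned answer.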
-
Of course that's up for debate; we're not even sure what consciousness really is. That is a whole philosophical debate on its own.
Well, that was what I meant: there are absolutely no indications that consciousness would be needed to create general intelligence. We don't need to figure out what consciousness is if we already know what general intelligence is and how it works, and we seem to know that fairly well, IMO.
-
Well, first of all, like I already said, I don’t think there’s substrate dependence on either general intelligence or consciousness, so I’m not going to try to prove there is - it’s not a belief I hold. I’m simply acknowledging the possibility that there might be something more mysterious about the workings of the human mind that we don’t yet understand, so I’m not going to rule it out when I have no way of disproving it.
Secondly, both claims - that consciousness has very little influence on the mind, and that general intelligence isn’t complicated to understand - are incredibly bold statements I strongly disagree with. Especially with consciousness, though in my experience there’s a good chance we’re using that term to mean different things.
To me, consciousness is the fact of subjective experience - that it feels like something to be. That there’s qualia to experience.
I don’t know what’s left of the human mind once you strip away the ability to experience, but I’d argue we’d be unrecognizable without it. It’s what makes us human. It’s where our motivation for everything comes from - the need for social relationships, the need to eat, stay warm, stay healthy, the need to innovate. At its core, it all stems from the desire to feel - or not feel - something.
I'm on board 100% with your definitions. But I think you make a little mistake here: general intelligence is about problem solving, reasoning, the ability to make a mental construct out of data, remembering things...
It doesn't, however, imply that it has to be a human doing it (even if the "level" is usually set at human level) or that a human experiences it.
Maybe I'm nitpicking, but I feel this is often overlooked, and lots of people conflate AGI with a need for consciousness, for example.
Then again, maybe computers cannot be as intelligent as us, but I sincerely doubt it.
So IMO, the human mind probably needs its consciousness to have general intelligence (as you said, it probably won't function at all without it, or it would function very differently), but I'd argue that's just because we are humans with wetware and all of that junk, and it doesn't at all mean consciousness is an inherent part of intelligence in itself. And I see absolutely no reason why it must be.
Complicated topic for sure!
-
Yeah, and it only took evolution (checks notes) 4 billion years to go from nothing to a brain comparable to a human's.
I'm not so sure there will be a fast return in any economic timescale on the money investors are currently shovelling into AI.
We have maybe 500 years (tops) to see if we're smart enough to avoid causing our own extinction by climate change and biodiversity collapse - so I don't think it's anywhere near as clear cut.
Oh sure, the current AI craze is just a hype train based on one seemingly effective trick.
We have outperformed biology in a number of areas, and cannot compete in a number of others (yet), so I see it as a bit of a wash atm whether we're better engineers than nature or worse.
The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).
So yeah, I'm not saying we'll do AGI in the next few decades (and not with just LLMs, for sure), but I'd be surprised if we don't figure something out once we get computers a couple of orders of magnitude faster, so that more than a handful of companies can afford to experiment.
-
Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.
I don't think you're correctly estimating the amount of energy spent by "evolution" to reach this.
There are plenty of bodies in the universe with nothing like human brain.
You should count the energy not just of Earth's existence and formation, the Solar System's formation and so on, but of much of the visible space around us. "Much" is kinda unclear, but converted to energy it's so big that we shouldn't even bother.
It's best to assume we'll never have anything even resembling wetware in efficiency. One can say that the genomes of life existing on Earth are similar to fossil fuels, only for highly optimized designs we won't likely ever reach by ourselves. Except "design" might be the wrong word.
Honestly, I think at some point we are going to have biocomputers. I mean, we already do; it's just that the way evolution optimized that (giving everyone a more or less equal share of computing power) isn't pleasant for some.
The same logic would suggest we'd never compete with an eyeball, but we went from 10-minute photos to outperforming most of the eye's abilities in cheap consumer hardware in little more than a century.
And the eye is almost as crucial to survival as the brain.
That said, I do agree it seems likely we'll borrow from biology on the computer problem. Brains have very impressive parallelism despite how terrible the design of neurons is. If we could grow a brain in the lab, that would be very useful indeed. More useful still if we could skip the chemical messaging somehow and get signals around at a speed that isn't embarrassingly slow; then we'd be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like AGI, even without the level of problem solving that billions of years of evolution can provide.
-
Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.
I'd expect the same of AGI, not correlated with who spent the most or who is best at LLMs. It might happen decades from now or in the next couple of months. It's a breakthrough that is just going to come out of left field when it happens.
LLMs weren't out of left field. Chatbots have been in development since the '90s at least, probably even longer. And word prediction has been around for at least a decade. People just don't pay attention until it's commercially available.
-
Leaving aside the question of whether it would benefit us, what makes you think LLMs won't bring about the technical singularity? Because, you know, the term LLM doesn't mean that much... It just means it's a model that is "large" (currently taken to mean many parameters) and is capable of processing language.
Don't you think that whatever brings about the singularity will at the very least understand human language?
So can you clarify: what is it that you think won't become AGI? Is it transformers? Is it any model trained the way we train LLMs today?
It's because they are horrible at problem solving and creativity. They are based on word association from training purely on text. The technical singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.
Even though GitHub Copilot has impressed me by implementing a 3-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. And even then, it would get parts I failed to specify completely wrong, and it initially implemented things in a very inefficient way.
There are fundamental things that the technical singularity needs that today's LLMs lack entirely. I think the changes that would be required to get there would also change them from LLMs into something else. The training is part of it, but fundamentally, LLMs are massive word association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
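To illustrate the "vectors translated to and from words" point, here's a rough toy sketch (the tiny vocabulary and the random matrices are made up; a real model learns its parameters from text and is vastly bigger): whatever goes in is reduced to tokens from a fixed vocabulary, turned into vectors, scored, and turned back into a word. Everything the model can ever express has to pass through that loop.

```python
# Toy illustration of the words -> vectors -> words loop that bounds an LLM's world.
# The "model" here is just random matrices standing in for a learned network.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "improve", "hardware"]
word_to_id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embed = rng.normal(size=(len(vocab), 8))    # token id -> 8-dim vector
unembed = rng.normal(size=(8, len(vocab)))  # vector -> score for every vocab word

def next_word(context):
    ids = [word_to_id[w] for w in context.split()]  # words -> ids
    state = embed[ids].mean(axis=0)                 # ids -> one context vector (stand-in for a transformer)
    logits = state @ unembed                        # vector -> a score per vocabulary word
    return vocab[int(np.argmax(logits))]            # highest score -> back to a word

print(next_word("the cat sat on"))  # always answers with *some* word from its fixed vocabulary
```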