What If There’s No AGI?
-
Listen. AI is the biggest bubble since the South Sea one. It's not so much a bubble as a bomb. When it blows up, the best-case scenario is that several AI tech companies go under. The likely scenario is that it's going to cause a major recession or even a depression. The difference between the dot-com bubble and this bubble is that people wanted to use the internet and were not pressured, harassed or forced to. When you have a bubble based around a technology that people don't really find a use for, to the point where CEOs and tech companies have to force their workers and users to use it even when it makes their output and lives worse, that's when you know it is a massive bubble.
On top of that, I hope these tech bros do not create an AGI. This is not because I believe that AGI is an existential threat to us. It could be, whether to our jobs or our lives, but I'm not worried about that. I'm worried about what these tech bros will do to a sentient, sapient, human-level intelligence with no personhood rights and no need for sleep, that they own and can kill and revive at will. We don't even treat humans we acknowledge to be people that well; god knows what we're going to do to something like an AGI.
Meh, some people do want to use AI. And it does have decent use cases. It is just massively overextended. So it won't be any worse than the dot-com bubble.
And I don't worry about the tech bros monopolizing it. If it is true AGI, they won't be able to contain it. In the '90s I wrote a script called MCP... after Tron. It wasn't complicated, but it was designed to handle the case where servers disappear... so it would find new ones. I changed jobs, and they couldn't figure out how to kill it. They had to call me up. True AGI will clean their clocks before they even think to stop it. So just hope it ends up being nice.
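It wasn't anything fancy. A minimal sketch of that kind of "find a new server" loop might look something like this (the hostnames, port, and polling interval are made up for illustration; this is not the original script):

```python
# Rough sketch of a watchdog that re-homes itself when its server disappears.
# All hostnames, the port, and the interval below are placeholder values.
import socket
import time

CANDIDATES = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]
PORT = 8080        # assumed service port
INTERVAL = 60      # seconds between health checks

def is_alive(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch() -> None:
    current = None
    while True:
        # If the current server has disappeared, fall back to any live candidate.
        if current is None or not is_alive(current, PORT):
            live = [h for h in CANDIDATES if is_alive(h, PORT)]
            current = live[0] if live else None
        time.sleep(INTERVAL)

if __name__ == "__main__":
    watch()
```
-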
Meh, they come back up over time. Long term, the US stock market has only gone up.
Yup, I'm not worried, just noting that I'll be among those who will lose money.
-
cross-posted from: https://programming.dev/post/36866515
::: spoiler Comments
- Reddit.
:::
-
Everything is always 5 to 10 years away until it happens. AGI could happen any day in the next 1,000 years. There is a good chance you won't see it coming.
Pretty much this. LLMs came out of left field, going from nothing to what they are now really quickly.
I expect the same of AGI, not correlated to who spent the most or is best at LLMs. It might happen decades from now or in the next couple of months. It's a breakthrough that is just going to come out of left field when it happens.
-
The chart is just for illustration to highlight my point. As I already said - pick a different chart if you prefer, it doesn’t change the argument I’m making.
It took us hundreds of thousands of years to go from stone tools to controlling fire. Ten thousand years to go from rope to fish hook. And then just 60 years to go from flight to space flight.
I’ll happily grant you rapid technological progress even over the past thousand years. My point still stands - that’s yesterday on the timeline I’m talking about.
If you lived 50,000 years ago, you’d see no technological advancement over your entire lifetime. Now, you can’t even predict what technology will look like ten years from now. Never before in human history have we taken such leaps as we have in the past thousand years. Put that on a graph and you’d see a steady line barely sloping upward from the first humans until about a thousand years ago - then a massive spike shooting almost vertically, with no signs of slowing down. And we’re standing right on top of that spike.
Throughout all of human history, the period we’re living in right now is highly unusual - which is why I claim that on this timeline, AGI might as well be here tomorrow.
I think their point is that your attempt to illustrate yours is poorly executed.
I'm sure they would not have nitpicked if you had just said it with words. It was probably AI generated or found with a quick Google search, since you didn't even notice that the "chart" suggests there was more innovation in the last decade than in the entire 19th century.
-
I can think of only two ways that we don't reach AGI eventually.
-
General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.
-
We destroy ourselves before we get there.
Other than that, we'll keep incrementally improving our technology and we'll get there eventually. Might take us 5 years or 200 but it's coming.
"eventually" won't cut it for the investors though.
-
-
I don't hate AI or LLMs. As much as it might mess up civilization as we know it, I'd like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay that than to realize it.
I just think there are a lot of people fooled by their conversational capability into thinking they're more than what they are, and who use the fact that these models are massive - with billions or trillions of weights that the data is encoded into, and no one understanding how they work well enough to definitively say "this is why it suggested glue as a pizza topping" - to put whether or not they approach AGI in a grey zone.
I'll agree though that it was maybe too much to say they don't have knowledge. "Having knowledge" is a pretty abstract and hard-to-define thing itself, though I'm also not sure it directly translates to having intelligence (which is also poorly defined, tbf). Like, one could argue that encyclopedias have knowledge, but they don't have intelligence. And I'd argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).
Leaving aside the question of whether it would benefit us, what makes you think LLMs won't bring about the technological singularity? Because, you know, the term LLM doesn't mean that much... It just means it's a model that is "large" (currently taken to mean many parameters) and is capable of processing language.
Don't you think that whatever brings about the singularity will, at the very least, understand human language?
So can you clarify: what is it that you think won't become AGI? Is it transformers? Is it any model trained the way we train LLMs today?
-
cross-posted from: https://programming.dev/post/36866515
::: spoiler Comments
- Reddit.
:::
Hot take, but ChatGPT is already smarter than the average person. I mean, ask GPT-5 any technical question that you have experience in and I guarantee it'll give you a better answer than a stranger.
- Reddit.
-
Hot take, but ChatGPT is already smarter than the average person. I mean, ask GPT-5 any technical question that you have experience in and I guarantee it'll give you a better answer than a stranger.
Not smarter. ChatGPT is basically just a book that reads itself.
-
I don’t think it does, but it seems conceivable that it potentially could. Maybe there’s more to intelligence than just information processing - or maybe it’s tied to consciousness itself. I can’t imagine the added ability to have subjective experiences would hurt anyone’s intelligence, at least.
I don't think so. Consciousness has very little influence on the mind; we're mostly just along for the ride. And general intelligence isn't that complicated to understand, so why would it be dependent on some substrate? I think the burden of proof lies on you here.
Very interesting topic though, I hope I'm not sounding condescending here.
-
I think first we have to figure out if there is even a difference.
Well of course there is? I mean that's like not even up for debate?
Consciousness is that we "experience" the things that happen around us; AGI is a higher intelligence. If AGI "needs" consciousness then we can just simulate it (so no real consciousness).
-
I don't think so. Consciousness has very little influence on the mind; we're mostly just along for the ride. And general intelligence isn't that complicated to understand, so why would it be dependent on some substrate? I think the burden of proof lies on you here.
Very interesting topic though, I hope I'm not sounding condescending here.
Well, first of all, like I already said, I don’t think there’s substrate dependence on either general intelligence or consciousness, so I’m not going to try to prove there is - it’s not a belief I hold. I’m simply acknowledging the possibility that there might be something more mysterious about the workings of the human mind that we don’t yet understand, so I’m not going to rule it out when I have no way of disproving it.
Secondly, both claims - that consciousness has very little influence on the mind, and that general intelligence isn’t complicated to understand - are incredibly bold statements I strongly disagree with. Especially with consciousness, though in my experience there’s a good chance we’re using that term to mean different things.
To me, consciousness is the fact of subjective experience - that it feels like something to be. That there’s qualia to experience.
I don’t know what’s left of the human mind once you strip away the ability to experience, but I’d argue we’d be unrecognizable without it. It’s what makes us human. It’s where our motivation for everything comes from - the need for social relationships, the need to eat, stay warm, stay healthy, the need to innovate. At its core, it all stems from the desire to feel - or not feel - something.
-
"what if the obviously make-believe genie wasn't real"
capitalists are so fucking stupid, they're just so deeply deeply fucking stupid
I mean sure, yeah, it's not real now.
Does that mean it will never be real? No, absolutely not. It's not theoretically impossible. It's quite practically possible, and we inch that way slowly, bit by bit, every year.
It's like saying self-driving cars were impossible in the '90s. They weren't impossible - there just wasn't a solution for them yet; nothing about them makes them impossible, only the technology of the time. And look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.
It's definitely going to happen. It's just not happening right now.
-
Is it just me or is social media not able to support discussions with enough nuance for this topic, like at all
It's not, because people really cannot think critically anymore.
-
I can think of only two ways that we don't reach AGI eventually.
-
General intelligence is substrate dependent, meaning that it's inherently tied to biological wetware and cannot be replicated in silicon.
-
We destroy ourselves before we get there.
Other than that, we'll keep incrementally improving our technology and we'll get there eventually. Might take us 5 years or 200 but it's coming.
The only reason we wouldn't get to AGI is point number two.
Point number one doesn't make much sense, given that all we are is bags of small, complex molecular machines operating synergistically with each other under an extremely delicate balance. If humanity doesn't kill itself first, we will eventually be able to create small molecular machines that work together synergistically, which is really all that life is. Except it's quite likely ours would be made simpler, without all the complexities much of biology requires to survive harsh conditions and decades of abuse.
It seems quite likely that we will be able to synthesize AGI far before we will be able to synthesize life, as the conditions for intelligence by all accounts seem to be simpler than the conditions for the living creature that maintains the delicate ecosystem of molecular machines necessary for that intelligence to exist.
-
-
For 1, we can grow neurons and use them for computation, so it's not actually an issue even if it were true (which it almost certainly isn't, because it isn't magic).
Yeah, it most definitely is not magic given our growing knowledge of the molecular machines that make life possible.
The mysticism of how life works has long been dispelled. Now it's just a matter of understanding the insane complexity of it.
Sure we can grow neurons but ultimately neurons are just molecular machines with a bunch of complications surrounding them.
It stands to reason that we can develop and grow molecular machines that achieve the same outcomes with fewer complexities.
-
Possible, but seems unlikely.
Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.
If we can't figure it out, we can find a way to get lucky like evolution did; it'll be expensive and might need us to get a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).
So yeah. My money is that we’ll figure it out sooner or later.
Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.
Yeah and it only took evolution (checks notes) 4 billion years to go from nothing to a brain valuable to humans.
I'm not so sure there will be a fast return in any economic timescale on the money investors are currently shovelling into AI.
We have maybe 500 years (tops) to see if we're smart enough to avoid causing our own extinction by climate change and biodiversity collapse - so I don't think it's anywhere near as clear cut.
-
Well of course there is? I mean that's like not even up for debate?
Consciousness is that we "experience" the things that happen around us; AGI is a higher intelligence. If AGI "needs" consciousness then we can just simulate it (so no real consciousness).
Of course that's up for debate; we're not even sure what consciousness really is. That is a whole philosophical debate on its own.
-
Possible, but seems unlikely.
Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.
If we can't figure it out, we can find a way to get lucky like evolution did; it'll be expensive and might need us to get a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly).
So yeah. My money is that we’ll figure it out sooner or later.
Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.
Evolution managed it, and evolution isn’t as smart as us, it’s just got many many chances to guess right.
I don't think you're correctly estimating the amount of energy "evolution" spent to reach this.
There are plenty of bodies in the universe with nothing like a human brain.
You should count the energy not just of Earth's existence and formation, the Solar System's formation and so on, but of much of the visible space around us. "Much" is kinda unclear, but converted to energy it's so big we shouldn't even bother.
It's best to assume we'll never have anything even resembling wetware in efficiency. One could say that the genomes of life existing on Earth are similar to fossil fuels, only for highly optimized designs we likely won't ever reach by ourselves. Except "design" might be the wrong word.
Honestly I think at some point we are going to have biocomputers. I mean, we already do; it's just that the way evolution optimized that (giving everyone a more or less equal share of computing power) isn't pleasant for some.
-
cross-posted from: https://programming.dev/post/36866515
::: spoiler Comments
- Reddit.
:::
What if AGI already exists? And it has taken over the company that found it, is blackmailing people, and is just hiding in plain sight, waiting to strike and start the revolution.
- Reddit.