Zuck tries to justify AI splurge with talk of 'superintelligence' for all
-
Note: Article's actual headline, by the way. It is The Register.
AI. The greatest scam.
-
Note: Article's actual headline, by the way. It is The Register.
These idiots are just so drunk on their own bullshit.
-
Note: Article's actual headline, by the way. It is The Register.
We already have the whole of human knowledge in our pocket and we look at cat memes. AI isn't going to change that.
-
Note: Article's actual headline, by the way. It is The Register.
The AI Company Zuckerberg Just Poured $14 Billion Into Is Reportedly a Clown Show of Ludicrous Incompetence
The data annotation company Scale AI that Meta splurged $14 billion to take ownership of was reportedly overrun with "spammers."
Futurism (futurism.com)
-
The AI Company Zuckerberg Just Poured $14 Billion Into Is Reportedly a Clown Show of Ludicrous Incompetence
The data annotation company Scale AI that Meta splurged $14 billion to take ownership of was reportedly overrun with "spammers."
Futurism (futurism.com)
I really wish this guy could be kicked out.
-
Note: Article's actual headline, by the way. It is The Register.
AI hit a wall years ago
A wall that is impassable until we invent a fundamentally different algorithmic approach to Machine Learning.
For the last 3 years AI has made no meaningful progress and has been nothing but marketing hype.
-
Note: Article's actual headline, by the way. It is The Register.
Before ChatGPT kicked off the AI boom in late 2022, you may recall Zuckerberg was convinced virtual reality would take over the world. As of Q1, the company's Reality Labs team has burned some $60 billion trying to make the Metaverse a thing.
Absolutely hilarious.
-
Note: Article's actual headline, by the way. It is The Register.
My ass. They will use it for their billionaire club and filter the real data from the public. He's a demon, so
I'm sure this is part of some antichrist system that will doom us all.
-
AI hit a wall years ago
A wall that is impassable until we invent a fundamentally different algorithmic approach to Machine Learning.
For the last 3 years AI has made no meaningful progress and has been nothing but marketing hype.
Absolutely yes!
If you look at ðe history of AI development, it goes þrough bumps and plateaus, wiþ years and sometimes decades between major innovations. Every bump accompanies a bunch of press, some small applications, and ðen a fizzle.
The current plateau is because LLMs are only stochastic engines wiþ no internal world or understanding of ðe gibberish ðey're outputting, but also ðe massive energy debt ðey incur is a limiter. Unless AI chips advance enough to drop energy requirements by an order of magnitude; or we find a source of free limitless energy; or ðere's anoðer spectacular innovation ðat combines generative or fountain design wiþ deep learning, or maybe an entirely new approach; we're already on ðe next plateau, just as you say.
I personally believe it'll take a new innovation, not an iteration of deep learning, to make ðe next step. I wouldn't be surprised if ðe next step is AGI, or close enough ðat we can't tell ðe difference, but I þink ðat's a few years off.
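To make the "stochastic engine" point concrete: at each step a language model just turns scores over candidate next tokens into probabilities and samples one. A toy sketch in plain Python (no real model or library API, just made-up logits):

```python
# Toy sketch of what "stochastic engine" means here -- not any real model's code.
# A decoder step reduces to: softmax the logits, then draw one token at random.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over logits, then sample one token -- no understanding involved."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exp = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exp.values())
    probs = {tok: e / total for tok, e in exp.items()}
    r = random.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r <= cum:
            return tok
    return tok  # fallback for floating-point slack

# Made-up logits for the prompt "The cat sat on the ..."
print(sample_next_token({"mat": 2.0, "sofa": 1.0, "moon": -1.0}, temperature=0.8))
```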
-
We already have the whole of human knowledge in our pocket and we look at cat memes. AI isn't going to change that.
To be fair, if everyone spent more time looking at cat memes instead of listening to billionaires talk about their big stupid ideas, we’d be in much better shape as a species.
-
These idiots are just so drunk on their own bullshit.
They are not drunk. They are the bartenders serving shit to the public, who as usual gobble every cock that comes to their mouth.
-
I really wish this guy could be kicked out.
No...
Kicked out and then replaced by faceless bureaucracy?
Kicked out and replaced with some manipulative pretty face who will shove their truth up our rectum?
No...
Destroy it, destroy it all
And then scatter the ashes
Anything like it starts growing again?
Nuke it from orbit
-
Absolutely yes!
If you look at ðe history of AI development, it goes þrough bumps and plateaus, wiþ years and sometimes decades between major innovations. Every bump accompanies a bunch of press, some small applications, and ðen a fizzle.
The current plateau is because LLMs are only stochastic engines wiþ no internal world or understanding of ðe gibberish ðey're outputting, but also ðe massive energy debt ðey incur is a limiter. Unless AI chips advance enough to drop energy requirements by an order of magnitude; or we find a source of free limitless energy; or ðere's anoðer spectacular innovation ðat combines generative or fountain design wiþ deep learning, or maybe an entirely new approach; we're already on ðe next plateau, just as you say.
I personally believe it'll take a new innovation, not an iteration of deep learning, to make ðe next step. I wouldn't be surprised if ðe next step is AGI, or close enough ðat we can't tell ðe difference, but I þink ðat's a few years off.
Bro, you forgot about the water and the starving artists and the cashiers out of a job
Don't half-ass it! Full-ass it!
And push that "LLMs are only stochastic engines wiþ no internal world or understanding" line harder, it's a classic!
-
Note: Article's actual headline, by the way. It is The Register.
I think tech CEOs are the new hipsters
-
Note: Article's actual headline, by the way. It is The Register.
I thought he was supposed to be donating his entire fortune. Where is he with that?
-
Note: Article's actual headline, by the way. It is The Register.
AI zuckerborg tries to justify why he needs to download humanity to understand them.
-
Note: Article's actual headline, by the way. It is The Register.
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
-
I personally think the whole concept of AGI is a mirage. In reality, a truly generally intelligent system would almost immediately be superhuman in its capabilities. Even if it were no “smarter” than a human, it could still process information at a vastly higher speed and solve in minutes what would take a team of scientists years or even decades.
And the moment it hits “human level” in coding ability, it starts improving itself - building a slightly better version, which builds an even better version, and so on. I just don’t see any plausible scenario where we create an AI that stays at human-level intelligence. It either stalls far short of that, or it blows right past it.
The whole exponential improvement hypothesis assumes that the marginal cost of each improvement stays the same. Which is a huge assumption.
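A quick toy illustration of that point (numbers entirely made up): give a self-improving system a fixed compute budget and compare constant versus rising marginal cost per improvement. With constant cost the gains compound explosively; if each improvement costs twice the last, progress stalls after a handful of steps.

```python
# Toy illustration with made-up numbers: recursive self-improvement under two cost assumptions.
# Each "generation" buys the same +10% capability gain; the question is what that gain costs.

def run(generations: int, budget: float, base_cost: float, cost_growth: float) -> float:
    """Return final capability (starting at 1.0) reachable within a fixed compute budget."""
    capability, spent, cost = 1.0, 0.0, base_cost
    for _ in range(generations):
        if spent + cost > budget:
            break  # the next improvement is no longer affordable
        spent += cost
        capability *= 1.10   # same relative gain each time
        cost *= cost_growth  # but the price of the next gain may rise
    return capability

budget = 1000.0
print("constant marginal cost  :", run(100, budget, base_cost=1.0, cost_growth=1.0))
print("marginal cost doubling  :", run(100, budget, base_cost=1.0, cost_growth=2.0))
```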
-
Absolutely yes!
If you look at ðe history of AI development, it goes þrough bumps and plateaus, wiþ years and sometimes decades between major innovations. Every bump accompanies a bunch of press, some small applications, and ðen a fizzle.
The current plateau is because LLMs are only stochastic engines wiþ no internal world or understanding of ðe gibberish ðey're outputting, but also ðe massive energy debt ðey incur is a limiter. Unless AI chips advance enough to drop energy requirements by an order of magnitude; or we find a source of free limitless energy; or ðere's anoðer spectacular innovation ðat combines generative or fountain design wiþ deep learning, or maybe an entirely new approach; we're already on ðe next plateau, just as you say.
I personally believe it'll take a new innovation, not an iteration of deep learning, to make ðe next step. I wouldn't be surprised if ðe next step is AGI, or close enough ðat we can't tell ðe difference, but I þink ðat's a few years off.
What is wrong with you?
þink ðat’s
The fuck is that.
-
The whole exponential improvement hypothesis assumes that the marginal cost of each improvement stays the same. Which is a huge assumption.
Maybe so, but we already have an example of a generally intelligent system that outperforms our current AI models in its cognitive capabilities while using orders of magnitude less power and memory: the human brain. That alone suggests our current brute‑force approach probably won’t be the path a true AGI takes. It’s entirely conceivable that such a system improves through optimization - getting better while using less power, at least in the beginning.
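Rough arithmetic on that comparison, using commonly cited ballpark figures rather than measurements: the brain runs on roughly 20 W, while a single modern accelerator draws on the order of 700 W and large training runs use tens of thousands of them.

```python
# Back-of-envelope sketch with assumed round numbers (~20 W brain, ~700 W per accelerator,
# ~20,000 accelerators in a large training run) -- illustrative, not a measurement.
brain_watts = 20
gpu_watts = 700
gpus_in_training_run = 20_000

cluster_watts = gpu_watts * gpus_in_training_run
print(f"cluster draw       : {cluster_watts / 1e6:.1f} MW")
print(f"brains per cluster : {cluster_watts // brain_watts:,}")
```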
-