Big tech has spent $155 billion on AI this year. It’s about to spend hundreds of billions more
-
This snippet summarizes my "AI forced into everything" experience, especially when I was prompted to have my text messages summarized. I said no, and the response was just "Ok."
What text messages are being sent that need to be summarized?
which mostly means that Apple aggressively introduced people to the features of generative AI by force, and it turns out that people don't really want to summarize documents, write emails, or make "custom emoji," and anyone who thinks they would is a fucking alien.
Great analysis. I still have no idea how they think they will ever make their money back.
Ironically, Apple has some of the best odds of coming out of this reasonably healthy. “Apple Intelligence” follows the trend of most Apple services products, in that it is really intended to lock people into their ecosystem and keep buying iPhones. I’m just waiting for someone in the big 7 or whatever they’re called to publicly bow out of AI. I suspect the first one to do it might benefit a lot.
-
Anyone remember the dot-com bubble?
They remember it too. It's why they try to keep the bubble going by jumping to the next shiny investor lure. Just a few years ago it was all deep learning, blockchain, and NFTs. One guess is that they'll go for humanoid robots next.
-
This post did not contain any content.
ChatGPT, what is the sunk cost fallacy, and why are rich people so profoundly stupid?
-
I can imagine one: maintaining adversarial interop with proprietary systems. Think of a self-adjusting Facebook connector for a multi-protocol chat client. Or, if there's ever going to be a Usenet-like system with global identities for users and posts, a mapping of Facebook onto it. Siloed services don't expose identifiers and aren't indexed, but that's only under our current capabilities. People do use them and do know whom they're interacting with, so it's possible to build an AI-assisted scraper that would expose Facebook like a newsgroup to such a system.
Ah. Profitable. I dunno.
This comment is underrated.
Make the internet 'net' again.
-
This post did not contain any content.
Suck it berg produced a platform for spreading hate and constant relationship drama. Nothing he produces is good or helpful. He jumped on the Trump bandwagon like a little bitch the second he could. He's a real piece of work.
That such wealth got into his hands is sick. It could have been used for some real good if it had gone to someone with some compassion. At least Gates is trying to save his soul.
-
It's the most inefficient technology, yet it's praised as the most efficient because it simply runs on investor money. That well will run dry eventually, and who will bear the cost then? Consumers without jobs?
I disagree a bit. Any money the ultra-rich invest in research is better spent than money spent on their next mega-yacht, even if AI can't meet the expectations of AGI and the like.
-
Anyone remember the dot-com bubble?
At least they've wasted their money on research into what doesn't work, instead of just building silly products as in the dot-com bubble.
Humanity will gain insights into which AI approaches don't work much faster than it would without all that money. It's just an allocation of human effort.
-
This post did not contain any content.
Please do, dump some more money in! No need to create jobs; just destroy them with AI.
-
At least they've wasted their money on research into what doesn't work, instead of just building silly products as in the dot-com bubble.
Humanity will gain insights into which AI approaches don't work much faster than it would without all that money. It's just an allocation of human effort.
Not really. None of what has been going on with transformer models has been anything but hyperscaling. It's not making fundamental advances in technology; they decided that what they had, at the scale they had, made convincing enough demos that the scam could start.
-
I hate these fucking articles. That’s how it works with every new tech/industry etc. Everyone spends billions and billions hoping it becomes profitable 10-20 years from now, maybe it does maybe it doesn’t, we can’t know but that’s how this shit has always and will always work for basically every new tech or research that happens.
Maybe complain about countries not taxing corporations enough, but not how new industries and technologies are funded.
Believe it or not I can complain about more than one thing at a time.
-
This snippet summarizes my AI forced into everything experience, especially when I was prompted to have my text message summarized. I said no, and the message was "Ok"
What text messages are being sent that need to be summarized?
which mostly means that Apple aggressively introduced people to the features of generative AI by force, and it turns out that people don't really want to summarize documents, write emails, or make "custom emoji," and anyone who thinks they would is a fucking alien.
Great analysis. I still have no idea how they think they will ever make their money back.
There are people who both get LLM summaries of their emails and have an LLM rewrite what they send. It's amazing that anyone can fail to see how incredibly stupid that is.
-
Not really. None of what has been going on with transformer models has been anything but hyperscaling. It's not making fundamental advances in technology; they decided that what they had, at the scale they had, made convincing enough demos that the scam could start.
It has been more than just hyperscaling. First of all, the invention of transformers would likely have been significantly delayed without the hype around CNNs in the first AI wave around 2014. OpenAI wouldn't have been founded, and its early contributions (like PPO) could have taken longer to be explored.
While I agree that the transformer architecture itself hasn't advanced far since 2018 apart from scaling, its success has significantly contributed to self-learning policies.
RLHF, Direct Preference Optimization (DPO), and in particular DeepSeek's Group Relative Policy Optimization (GRPO) are huge milestones for reinforcement learning, which is arguably the most promising trajectory toward actual intelligence. They are a direct consequence of the money pumped into AI and the appeal it has for many smart and talented people around the world.
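For context on why GRPO is seen as a milestone: instead of training a separate critic network (as PPO does), it samples several completions per prompt and scores each one against its own group, using the group mean as a critic-free baseline. A minimal sketch of that advantage computation (illustrative only; function name and inputs are my own, and the full method also involves a clipped policy-gradient objective and a KL penalty):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each reward against its group.

    `rewards` holds scores for G completions sampled from one prompt;
    the group mean serves as the baseline, so no value network is needed.
    """
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four sampled answers to one prompt, scored by a reward model:
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
# Answers above the group mean get a positive advantage, below get negative.
```

The appeal is exactly what the comment describes: dropping the critic makes large-scale RL on language models much cheaper, which is part of why DeepSeek's results drew so much attention.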
-
This post did not contain any content.
Go broke
-
ChatGPT, what is the sunk cost fallacy, and why are rich people so profoundly stupid?
This is not that. They're all hoping to be the next Google or Facebook. They know damned well most are going to lose. The gamble is that they won't be the one holding the bag when the bubble pops.
This is as high stakes as tech gets today.
-
Anyone remember the dot-com bubble?
This is no revelation. THEY KNOW. The play is obvious.
Not one of these investors wants to risk missing out on being the next Google, Facebook, Twitter, or Amazon. They know damned well the vast majority will fail. They're gambling on not being the one left holding the bag.
AI is here to stay, will continue to improve, and there will be a killer app, probably a dozen. My money is on life sciences, particularly medicine.
-
This post did not contain any content.
Unimaginable amounts of money spent just to provide a free service to help improve the human race by sharing knowledge. Such marvellous gentlemen.
-
This post did not contain any content.
Spend it all and fail
-
I think there was an inherent demand behind those examples though. Just the number of lives lost looking for the northwest passage showed how useful the Panama canal would be.
You're also comparing government spending, in a lot of those cases, vs. private capital. That fact shows how much power has already shifted in the world.
Well, in the Soviet example everything was government.
And governments seem so excited by the prospects of this "AI" that it's pretty clear it's still what they desire most of all.
EDIT: On the telegraph and Panama you're right (by the way, it's bloody weird that where it sounds like "canal" in my language it's usually "channel" in English, but in the particular case of Panama it's not), but they might perceive this as a similarly important direction. Remember how in the '20s and '30s the "colonization of space" was dreamed about: new settlements supporting new power bases, mining for resources and growing on Mars and Venus, FTL travel to Sirius, all of that. There are some very interesting things about the Soviet stagnation: those pictures of the future lived on longer there than in the West, against scientific knowledge.
So, back to the subject: the "AI" they want to reach is the thing that would generate knowledge and designs the way a production line makes chocolate bars. If that is built, the value of intelligent individuals will be tremendously reduced, or so they think. At least of the individuals on the "autistic" side, though not on the "psychopathic" side, because the latter will run things. It's literally a "quantity vs. quality" evolutionary battle within human diversity; all the distractions around us, and the legal mechanisms being fuzzied and undone, also fit here. So, for the record: I think quality is on our side even if I'm distracted right now, and sheer quantity thrown at the task doesn't solve complexity of such magnitude. It's a fundamental problem.
-
This comment is underrated.
Make the internet 'net' again.
I'm more interested in separating the addressing and modeling of data from the addressing and modeling of the services that store and process it, so that both become uniform: in uniformity lies efficiency, redundancy, and the ability to switch service models. Uniformity inside proprietary services is already achieved, so in this case uniformity would work for the people.
I mean, that's probably what you meant; I'm being this specific to fight my own distractions and fuzziness of thought.
-
This post did not contain any content.
All for it to fail and implode under its own weight.