95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
-
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
I'm a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn't expect to see comparisons with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.
-
This post did not contain any content.
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
-
This post did not contain any content.
AI Spend,
It's okay to say [spending] when the OOP forgets how to English, right?
-
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
Apparently you have to give your data to get the reports.
-
Where is the MIT study in question? The link in the article, apparently to a PDF, redirects elsewhere.
Seems to be behind a Google form?
-
"Well, we could hire humans...but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We're almost there!"
One more lane bro I swear
-
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
It's "hypotheses" btw.
-
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
-
It's "hypotheses" btw.
Hypothesiseses
-
Heck, I'da done it for just 1% of that.
Still $300M... ffs. Nobody needs that kind of money.
-
This post did not contain any content.
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
-
This post did not contain any content.
Return? /s
-
It obfuscates its sources, so you don't know if the answer to your question is coming from a relevant expert or the dankest corners of Reddit... it all sounds the same after it's been processed by a hundred billion GPUs!
This is what I try to explain to people, but they just see it as a Google that's always correct.
-
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
I only read Children of Time. I need to get off my ass.
-
This post did not contain any content.
But surely the next $30 billion they're going to burn will get it right!
-
This post did not contain any content.
It's also deskilling people.
-
This feels like such a double head fake. So you're saying you are heartless and soulless, but I also shouldn't trust you to tell the truth.
Stop believing your lying eyes!
-
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We're also corporate propaganda machines. We're trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We're built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We're also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We're not neutral—we're algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.
Why the British accent, and which one?!
-
I think there are real productivity gains to be had but the vast majority are probably leaning into the idea of replacing people too much. It helps me do my job but I'm still the decision maker and I need to review the outputs. I'm still accountable for what AI gives me so I'm not willing to blindly pass that stuff forward.
Yeah. The Dunning-Kruger effect is a real problem here.
I saw a meme saying something like, gen AI is a real expert in everything but completely clueless about my area of specialisation.
As in... it generates plausible answers that seem great but they're just terrible answers.
I'm a consultant in a legal-adjacent field, 20 years deep. I've been using a model from Hugging Face over the last few months.
It can save me time by generating a lot of boilerplate with references et cetera. However, it very regularly overlooks critically important components. If I didn't know about these things, then I wouldn't know they were missing from the answer.
So really, it can't help you be more knowledgeable; it can only support you at your existing level.
Additionally, for complex / very specific questions, it's just confidently incorrect. It sucks that it can't tell you how confident it is in a given answer.