95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
-
Over the last three years, companies worldwide have invested between 30 and 40 billion dollars into generative artificial intelligence projects. Yet most of these efforts have brought no real business…
The Daily Adda (thedailyadda.com)
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
-
So I'll be getting job interviews soon? Right?
Nope, they will be hiring outsourced employees instead. AI = ALWAYS Indians. On the very same post on Reddit, they already said that's happening. It's going to get worse.
-
The first problem is the name. It's NOT artificial intelligence, it's artificial stupidity.
People BOUGHT intelligence but GOT stupidity.
The CEOs and C-suites did; they hyped it all up and were excited about its innovation.
-
You can also use 9/11 + GWOT in place of the dotcom bubble, for 'society reshaping disaster crisis'
So, uh, silly me, living in the disaster-hypercapitalism era, so normalized to utterly world-redefining chaos at every level, so often, that I have lost count.
That is more American focused though. Sure I heard about 9/11 but I was 8 and didn't really care because I wanted to go play outside.
-
It obfuscates its sources, so you don't know if the answer to your question is coming from a relevant expert or the dankest corners of Reddit... it all sounds the same after it's been processed by a hundred billion GPUs!
Yup, I was looking up some terms and conditions, and it was USING stuff from a blog, and from sites that just stole from other sites.
-
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
We had that recently: 10% made redundant and a pay freeze because we were not profitable enough. Guess what, morale tanked, and they only slightly improved it by giving everyone +10 days of holiday.
-
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
They will rehire, but it will be outsourced at lower wages; at least that's what the Reddit posts on the same article are discussing.
-
This comment really exemplifies the ignorance around AI. It's not fancy autocorrect, it's fancy autocomplete.
It's fancy autoincorrect
-
Can you elaborate? How is this not reasoning? Define reasoning for me.
Deep research independently discovers, reasons about, and consolidates insights from across the web. To accomplish this, it was trained on real-world tasks requiring browser and Python tool use, using the same reinforcement learning methods behind OpenAI o1, our first reasoning model. While o1 demonstrates impressive capabilities in coding, math, and other technical domains, many real-world challenges demand extensive context and information gathering from diverse online sources. Deep research builds on these reasoning capabilities to bridge that gap, allowing it to take on the types of problems people face in work and everyday life.
While that contains the word "reasoning", that does not make it such. If this is about the new "reasoning" capabilities of the new LLMs: it was, if I recall correctly, found out that it's not actually reasoning, just doing fancy footwork to appear as if it were reasoning, just like it does fancy dice-rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means it's not actually "reasoning"; it's just applying another pattern.
With the current technology we've gone so far into this brute forcing the appearance of intelligence that it is becoming quite the challenge in diagnosing what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forwards. At least with our current computer technology, I suspect we'll need a breakthrough of some kind.
But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so when they tried the connectionist approach with hardware that could not parallel process, and had only datasets made by hand and not with stolen content. So, we're just using the same approach as we were before we tried to do "handcrafted" AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can solve the problem, ultimately. If this keeps on going, we'll get pretty convincing results, but I seriously doubt we'll get proper reasoning with this current approach.
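The name-and-number swap described above can be sketched as a tiny problem-template generator. This is my own hypothetical example, in the spirit of benchmarks like GSM-Symbolic; the template, names, and number ranges are all invented:

```python
import random

# Hypothetical sketch of the "swap the surface details" test: the same
# word problem is re-instantiated with different names and numbers, so
# the logic is unchanged but the token pattern is not.
TEMPLATE = "{name} has {a} apples and buys {b} more. How many does {name} have?"

def make_variant(rng):
    """Return one surface variant of the problem plus its gold answer."""
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    name = rng.choice(["Ava", "Bilal", "Chen", "Dana"])
    return TEMPLATE.format(name=name, a=a, b=b), a + b

# A model that truly reasons scores the same on every variant; one that
# pattern-matches the original phrasing degrades when the surface changes.
prompt, gold = make_variant(random.Random(0))
print(prompt, "->", gold)
```

Scoring a model across many such variants, instead of on one fixed phrasing, is what exposes the gap the comment is describing.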
-
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Either spell the word properly, or use something else, what the fuck are you doing? Don't just glibly strait-jacket language, you're part of the ongoing decline of the internet with this bullshit.
-
Note that I'm not one of the people talking about it on X, I don't know who they are. I just linked it with a simple "this looks like reasoning to me".
They can't reason. LLMs, the tech that all the latest and greatest (like GPT-5 or whatever) still are, generate output by taking every previous token (simplified) and using them to generate the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
Generating images works essentially the same way but is more easily described as reverse JPEG compression. You think I'm joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image-generation stuff are equally able to reason, that being not at all whatsoever.
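A toy sketch of that next-token loop, with a made-up one-line corpus and a bigram count. This is nothing like a real transformer, but the "emit the most likely next token, append it, repeat" step is the same idea:

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model, the simplest
# possible stand-in for "predict the most likely next token").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # pick the single most frequent successor seen in training
    return follows[token].most_common(1)[0][0]

def generate(start, n):
    out = [start]
    for _ in range(n):
        out.append(most_likely_next(out[-1]))
    return " ".join(out)

print(generate("the", 3))  # "the cat sat on"
```

Note that nothing here "knows" what a cat is; the continuation falls out of frequency counts, which is the simplified point the comment is making.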
-
Surprise, surprise, motherfxxxers. Now you'll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Investors and executives still show strong interest in AI, hoping that ongoing advances will close these gaps. But the short-term outlook points to slower progress than many expected.
Doesn't sound like that's gonna happen in the near future
-
While that contains the word "reasoning", that does not make it such. If this is about the new "reasoning" capabilities of the new LLMs: it was, if I recall correctly, found out that it's not actually reasoning, just doing fancy footwork to appear as if it were reasoning, just like it does fancy dice-rolling to appear to be talking like a human being.
As in, if you just change the underlying numbers and names on a test, the models will fail more often, even though the logic of the problem stays the same. This means it's not actually "reasoning"; it's just applying another pattern.
With the current technology we've gone so far into this brute forcing the appearance of intelligence that it is becoming quite the challenge in diagnosing what the model is even truly doing now. I personally doubt that the current approach, which is decades old and ultimately quite simple, is a viable way forwards. At least with our current computer technology, I suspect we'll need a breakthrough of some kind.
But besides the more powerful video cards, the basic principles of the current AI craze are the same as they were in the 70s or so when they tried the connectionist approach with hardware that could not parallel process, and had only datasets made by hand and not with stolen content. So, we're just using the same approach as we were before we tried to do "handcrafted" AI with LISP machines in the 80s. Which failed. I doubt this earlier and (very) inefficient approach can solve the problem, ultimately. If this keeps on going, we'll get pretty convincing results, but I seriously doubt we'll get proper reasoning with this current approach.
But pattern recognition is literally reasoning. Your argument sounds like "it reasons, but not as well as humans, therefore it does not reason."
I feel like you should take a look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
-
They can't reason. LLMs, the tech that all the latest and greatest (like GPT-5 or whatever) still are, generate output by taking every previous token (simplified) and using them to generate the most likely next token. Thanks to their training, this results in pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
Generating images works essentially the same way but is more easily described as reverse JPEG compression. You think I'm joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image-generation stuff are equally able to reason, that being not at all whatsoever.
You partly described reasoning tho
-
But pattern recognition is literally reasoning. Your argument sounds like "it reasons, but not as well as humans, therefore it does not reason."
I feel like you should take a look at this: https://en.m.wikipedia.org/wiki/Reasoning_system
If we're talking about Artificial INTELLIGENCE, then we should talk about "reasoning" as an ability to apply logic, not just match patterns. Pure pattern matching is decidedly NOT reasoning, because if the pattern changes even a little (change the names and numbers, keeping the logic intact), all the models start showing failures. So, yes, some people decided to reframe what "reasoning" means in this context (moving goalposts), but I'm pretty sure that 99% of people who use the term when referring to AI don't mean reasoning like that. Regardless, it's not actually that interesting a discussion, nor do I actually care that much. So, sure, I'll give you that point.
-
As a programmer, it's helping my productivity. And look, I'm an SDET; in theory I'll be the first to go, and I tried to make an agent do most of my job, but there are always things to correct.
But programming requires a lot of boilerplate code, and using an agent to make boilerplate files that I can then correct and adjust speeds up a lot of what I do.
I don't think I can be replaced so far, but my team is not looking to expand right now because we are doing more work.
Same here. I love it when Windsurf corrects nested syntax that's always a pain, or when I need it to refactor six similar functions into one, or write trivial tests and basic regex. It's so incredibly handy when it works right.
Sadly, other times it cheats and does the lazy thing, like when I ask it to write me an object but it chooses to derive it from the one I'm trying to rework. That's when I ask it to move over and I do it myself.
-
Go learn simple regression analysis (not necessarily the commenter, but anyone). Then you'll understand why it's simply a prediction machine. It's guessing probabilities for what the next character or word will be. It's guessing the average line, the likely follow-up. It's extrapolating from data.
This is why there will never be "sentient" machines. There is, and always will be, inherent programming and fancy-ass business rules behind it all.
We simply set it to max churn on all data.
Also, just the training of these models has already done the energy damage.
It's extrapolating from data.
AI is interpolating data. It's not great at extrapolation. That's why it struggles with things outside its training set.
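A minimal illustration of that interpolation-vs-extrapolation gap, with invented numbers of my own: fit a straight line to samples of y = x², then compare the prediction error inside the training range against the error far outside it.

```python
# Training data: samples from y = x^2 on x = 0..10.
xs = list(range(11))
ys = [x * x for x in xs]

# Ordinary least-squares slope and intercept, computed by hand.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx  # fitted line: y = 10x - 15

def predict(x):
    return slope * x + intercept

interp_err = abs(predict(5) - 5 ** 2)     # inside the range: off by 10
extrap_err = abs(predict(30) - 30 ** 2)   # far outside it: off by 615
print(interp_err, extrap_err)
```

Inside the range the line stays close to the curve; outside it the errors blow up, which is the sense in which a fitted model "struggles with things outside its training set".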
-
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have problems affording their next multi billion dollar wedding?
I really understand this is a reality, especially in the US, and that this is really happening, but is there really no one, anywhere in the world, taking advantage of the laid-off skilled workforce?
Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in a workforce that could prove useful for different purposes in the short AND long term?
I am quite sure that's what Novo Nordisk is doing with their hire push here in Denmark, as long as the money lasts, but I would be surprised no one is doing it in the US itself.
-
You partly described reasoning tho
This link is about reasoning systems, not reasoning. Reasoning involves actually understanding the knowledge, not just having it: testing or validating where knowledge is contradictory.
An LLM doesn't understand the difference between the hard and soft rules of the world. Everything is up for debate; everything is just text and words that can be ordered with some probabilities.
It cannot check whether something is true; it just 'knows' that someone on the internet talked about something, sometimes with, and often without or with contradicting, resolutions.
It is a gossip machine that tries to 'reason' about whatever it has heard people say.
-
You partly described reasoning tho
If you truly believe that, you fundamentally misunderstand the definition of that word, or you are being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second that you genuinely just don't understand: LLMs, the most advanced "AI" they are trying to sell everybody, are as capable of reasoning as any compression algorithm (jpg, png, webp, zip, tar, whatever you want). They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is because they put random shit in there, for complicated, important reasons.
Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, that being not at all. Your silly Wikipedia page is about a very specific term, "Reasoning System", which would include stuff like standard video-game NPC AI, such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.