95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds
-
Over the last three years, companies worldwide have invested between 30 and 40 billion dollars into generative artificial intelligence projects. Yet most of these efforts have brought no real business…
The Daily Adda (thedailyadda.com)
I would argue we have seen a return. Documentation is easier. Tools for PDF and Markdown have become more effective. Coding assistance alone has lowered the barrier to giving the masses building blocks and some understanding. If we could pair this with trusted, solid LLM data, it would make a lot of things easier for many people. Translation is another example.
I find it very hard to believe 95% got ZERO benefit. We’re still benefiting, and it’s forcing a lot of change in the real world. For example, more power use? More renewable energy, and even (yes, safe) nuclear, is expanding. Energy storage is next.
These ‘AI’ tools (using the term broadly) will also get better and improve the interface between the physical and the digital. This will become ubiquitous, and we’ll forget we couldn’t always just ‘talk’ to computers so easily.
I’ll end with this: I won’t argue about whether ‘AI’ is an overblown, overused buzzword these days, and I can’t speak to bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that is trickling into many areas of software. It’s easier to review code, participate, etc. Papers are published constantly about new, better, and more efficient ways to do things.
-
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
-
Oh, that reminds me: we've always lived in false bubbles, and when they burst, crises and other things follow. Eventually the biggest bubble of all, the one we call civilization and progress, will burst too, maybe in 2040-2050 or later.
-
I would argue we have seen a return. Documentation is easier. Tools for PDF and Markdown have become more effective. Coding assistance alone has lowered the barrier to giving the masses building blocks and some understanding. If we could pair this with trusted, solid LLM data, it would make a lot of things easier for many people. Translation is another example.
I find it very hard to believe 95% got ZERO benefit. We’re still benefiting, and it’s forcing a lot of change in the real world. For example, more power use? More renewable energy, and even (yes, safe) nuclear, is expanding. Energy storage is next.
These ‘AI’ tools (using the term broadly) will also get better and improve the interface between the physical and the digital. This will become ubiquitous, and we’ll forget we couldn’t always just ‘talk’ to computers so easily.
I’ll end with this: I won’t argue about whether ‘AI’ is an overblown, overused buzzword these days, and I can’t speak to bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that is trickling into many areas of software. It’s easier to review code, participate, etc. Papers are published constantly about new, better, and more efficient ways to do things.
Well-written response. There has been an undeniably huge improvement in LLMs over the last few years, and that already has many applications in day-to-day life, the workplace, and so on.
Between writing complicated Excel formulas, proofreading, and giving me quick, straightforward recipes based on what I have at hand, I'm already sold on AI assistants.
That being said, take a good look at the type of responses here (an open-source space with barely any shills or astroturfers, or so I'd like to believe) and compare them to the myriad of Reddit posts that questioned the same thing on subs like r/singularity and whatnot. It's anecdotal evidence of course, but the amount of BS answers saying "AI IS GONNA DOMINATE SOON", "NEXT YEAR NOBODY WILL HAVE A JOB", "THIS IS THE FUTURE", etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is ONE of the things that leads me to think we are in a bubble. The same thing happened, and is still happening, to crypto over the last ten years: too much money poured into a specific subject by billionaire whales, and within a few years they manage to convince the general population that EVERYBODY and their mother is missing out if they don't start using "X".
-
That is more America-focused, though. Sure, I heard about 9/11, but I was 8 and didn't really care because I wanted to go play outside.
True, true, sorry, my America-centrism is showing.
Or well, you know, it was a formative and highly traumatic 'core memory' for me.
And, at the time, we were the largest economy in the world, and that event broke our collective minds, and reoriented that economy, and our society, down a dark path that only ended up causing waste, death and destruction.
Imagine the timeline where Gore won, not Bush, and all the US really did was send a spec-ops team into Afghanistan to get Bin Laden, as opposed to occupying the whole country, and never did Iraq 2.
That's... a lot of political capital and money that could have been directed to... anything else. I dunno, maybe kickstarting a green energy push?
-
Well-written response. There has been an undeniably huge improvement in LLMs over the last few years, and that already has many applications in day-to-day life, the workplace, and so on.
Between writing complicated Excel formulas, proofreading, and giving me quick, straightforward recipes based on what I have at hand, I'm already sold on AI assistants.
That being said, take a good look at the type of responses here (an open-source space with barely any shills or astroturfers, or so I'd like to believe) and compare them to the myriad of Reddit posts that questioned the same thing on subs like r/singularity and whatnot. It's anecdotal evidence of course, but the amount of BS answers saying "AI IS GONNA DOMINATE SOON", "NEXT YEAR NOBODY WILL HAVE A JOB", "THIS IS THE FUTURE", etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is ONE of the things that leads me to think we are in a bubble. The same thing happened, and is still happening, to crypto over the last ten years: too much money poured into a specific subject by billionaire whales, and within a few years they manage to convince the general population that EVERYBODY and their mother is missing out if they don't start using "X".
Excel still struggles with correct formula suggestions: basic #REF errors when the cells above and below in the table work just fine, and the ever-present "this data is a formula error" warning when there is no longer a formula anywhere in the column.
And search, just like its predecessor the Google algorithm, gives you useless suggestions if anything remotely fashionable shares the scientific name.
-
Every technology invented is a double-edged sword. One edge propels a deluge of misinformation, LLM hallucinations, brainwashing of the masses, and exploitation for profit. The better edge advances progress in science, well-being, and the availability of useful knowledge. Like the nuclear bomb, LLM "AI" is currently in its infancy and is used as a weapon; there is a literal race over who makes the "biggest, best" fkn "AI" to dominate the world. Eventually the over-optimistic bubble bursts and the reality of the flaws and risks will kick in. (Hopefully...)
-
If you truly believe that, you fundamentally misunderstand the definition of that word, or you are being purposely disingenuous, as you AI brown-nose folk tend to be. To pretend for a second that you genuinely just don't understand: LLMs, the most advanced "AI" they are trying to sell everybody, are as capable of reasoning as any compression algorithm: jpg, png, webp, zip, tar, whatever you want. They cannot reason. They take some input and generate an output deterministically. The reason the output changes slightly is that they inject some randomness, for complicated but important reasons.
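(For what it's worth, the injected randomness being described is presumably sampling temperature. A minimal sketch of that idea in Python, with made-up token scores and nothing resembling any vendor's actual decoder: greedy decoding is fully deterministic, and the run-to-run variation comes entirely from the sampling step.)

```python
# Toy sketch: deterministic scores plus optional sampling randomness.
import numpy as np

def next_token(logits, temperature=1.0, rng=None):
    """Pick a next-token id from raw scores ("logits")."""
    logits = np.asarray(logits, dtype=float)
    if temperature == 0.0:
        return int(np.argmax(logits))            # greedy: same input, same output, every time
    probs = np.exp((logits - logits.max()) / temperature)
    probs /= probs.sum()                         # softmax over the scores
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(probs), p=probs))  # the randomness is injected here

logits = [2.0, 1.5, 0.3]                         # hypothetical scores for three tokens
print([next_token(logits, temperature=0.0) for _ in range(3)])  # always [0, 0, 0]
print([next_token(logits, temperature=1.0) for _ in range(3)])  # varies between runs
```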
Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, which is to say not at all. Your silly Wikipedia page is about a very specific term, "reasoning system", which would include things like standard video-game NPC AI, such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.
Wtf?
Do I even have to point out the parts you need to read? Go back and start reading at the sentence that says "In typical use in the Information Technology field however, the phrase is usually reserved for systems that perform more complex kinds of reasoning.", then check out the NLP page, or the part about machine learning, which are all separate/different reasoning systems, but we just tend to say "reasoning".
Not your hilarious NPC analogy.
-
I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn't necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
Blindsight by Peter Watts, right? Incredible story. Can recommend.
-
Note that I'm not one of the people talking about it on X; I don't know who they are. I just linked it with a simple "this looks like reasoning to me".
Yes, your confidence in something you apparently know nothing about is apparent.
Have you ever considered that OpenAI, and most Xitter influencers, are lying for profit?
-
Either spell the word properly or use something else; what the fuck are you doing? Don't just glibly straitjacket language; you're part of the ongoing decline of the internet with this bullshit.
You're absolutely right about that, motherfucker.
-
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
-
I would argue we have seen a return. Documentation is easier. Tools for PDF and Markdown have become more effective. Coding assistance alone has lowered the barrier to giving the masses building blocks and some understanding. If we could pair this with trusted, solid LLM data, it would make a lot of things easier for many people. Translation is another example.
I find it very hard to believe 95% got ZERO benefit. We’re still benefiting, and it’s forcing a lot of change in the real world. For example, more power use? More renewable energy, and even (yes, safe) nuclear, is expanding. Energy storage is next.
These ‘AI’ tools (using the term broadly) will also get better and improve the interface between the physical and the digital. This will become ubiquitous, and we’ll forget we couldn’t always just ‘talk’ to computers so easily.
I’ll end with this: I won’t argue about whether ‘AI’ is an overblown, overused buzzword these days, and I can’t speak to bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that is trickling into many areas of software. It’s easier to review code, participate, etc. Papers are published constantly about new, better, and more efficient ways to do things.
You know what I think it is? The title is misleading. These companies probably saw ZERO NET GAIN when investing in AI: the upfront costs simply haven't produced returns yet. That's like saying a new restaurant isn't profitable. If you know, you know.
Basically, I'm saying it likely didn't cost the companies anything on net either, and it will likely become profitable in the long run as this software is integrated and the workforce is reduced through automation.
The R&D returned as much value as it consumed, so you can technically say "zero return" and be correct from an accounting perspective. And since everyone hates AI, they'll believe it.
Don't get me wrong: AI is a bubble industry, but it's not going to go away when the bubble pops.
-
It's extrapolating from data.
AI is interpolating data. It's not great at extrapolation. That's why it struggles with things outside its training set.
I'd still call it extrapolation, it creates new stuff, based on previous data. Is it novel (like science) and creative? Nah, but it's new. Otherwise I couldn't give it simple stuff and let it extend it.
-
Blindsight by Peter Watts, right? Incredible story. Can recommend.
Yep that's it. Really enjoyed it, just starting Echopraxia.
-
Same here. I love it when Windsurf corrects nested syntax that's always a pain, or when I need it to refactor six similar functions into one, or to write trivial tests and basic regex. It's so incredibly handy when it works right.
Sadly, other times it cheats and does the lazy thing, like when I ask it to write me a new object but it chooses to derive it from the one I'm trying to rework. That's when I tell it to move over and I do it myself.
AI is not needed for any of the points you mentioned. That's just IntelliSense and autocomplete with extra pollution and fossil fuels.
Good luck when you need to link tests to requirements and you don't know what the tests are doing.
-
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Suppose many of the CEOs are just milking general venture capital, and those CEOs know it's a bubble and that it'll burst, but have a good enough way to predict when, so they can leave with a profit. Then again, a CEO's pay usually doesn't depend on the company's performance, so they don't even need to know.
Also suppose that some very good source of free/cheap computation is used for the initial hype. As a conspiracy theory: a backdoor in the most popular TCP/IP implementations that makes all of the Internet's major routers act as a VM running some limited bytecode, for anyone who knows about the backdoor and controls two machines talking to each other both over the Internet and directly.
Then the blockchain bubble and the AI bubble would be similar in relying on that kind of computation (convenient for workloads that are slow in latency but endlessly parallel), and those inflating the bubbles while knowing about the backdoor wouldn't risk anything; they would clear the field of plenty of competition with each iteration, making fortunes via hedge funds. They would spend very little on the initial stage of mining the first batch of bitcoins (what if Satoshi were actually Bill Joy or someone like that, who could in theory have planted such a backdoor) and on training the first generations of superficially impressive LLMs.
And then this perpetual process of bubble after bubble makes some group of people (narrow enough, if they can keep the secret at the heart of my conspiracy theory) richer and richer, quickly enough on a planetary scale to gradually own a bigger and bigger share of the world economy, indirectly of course, while regularly clearing the field of clueless normies.
Just a conspiracy theory; don't treat it too seriously. But if it were true, it would be both cartoonishly evil and cinematically epic.
-
Why do they keep throwing their money away on it?
-
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
If only there were some group of people with detailed knowledge of the company who would be informed enough to steer its direction wisely. /s