OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
-
Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we've hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting, and he is scared.
As MS already revealed, their AI doesn't make money at all; in fact, it's costing too much. Of course he's freaking out.
-
So like, is this whole AI bubble being funded directly by the fossil fuel industry or something? Because AI training, and the instantaneous global adoption of these models, is using energy like it's going out of style. Which fossil fuels actually are (going out of style, and being used to power these data centers). Could there be a link? Gotta find a way to burn all the rest of the oil and gas we can get out of the ground before laws make it illegal. Makes sense, in their traditional who-gives-a-fuck-about-the-climate-and-environment sort of way, doesn't it?
It's like crypto: they wanted to make money off VC funds, that's probably running dry right now, and the investors are probably going to demand returns very soon. Why do you think the massive layoffs started in 2023?
-
intense electricity demands, and WATER for cooling.
I wonder if, at this stage, all the processors should simply be submerged in a giant cooling tank. It seems easier and more efficient.
-
But it could also be lower, right?
Not really. If it were, they would be announcing their new, highly efficient model.
-
All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...
Dude, wtf?!
You can't just go around pointing out people's hypocrisy. Companies killing the planet is big bad. People joining in? Dude, just let us live!! It is only animals...
big /s
-
I mean, they're both bad.
But also, "Throw that burger in the trash I'm not eating it" and "Uninstall that plugin, I'm not querying it" have about the same impact on your gross carbon emissions.
These are supply-side problems in industries that receive enormous state subsidies. Hell, the single biggest improvement to our agriculture policy was when China stopped importing US pork products. So, uh... once again, thank you China for saving the planet.
Wait so the biggest improvement came when there was a massive decline in demand?
-
Do they do earnings calls? They’re not public.
Probably VC money; the investors are going to want some answers.
-
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
It's safe to assume that any metric they don't disclose is quite damning to them. Plus, these guys don't really care about the environmental impact, or what us tree-hugging environmentalists think. I'm assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don't care about the environment, the problem with LLMs is how poorly they scale.
Important concepts when evaluating how something scales are the marginal values, chiefly marginal utility and marginal expense. Marginal utility is how much utility you get from one more unit of whatever. Marginal expense is how much it costs to get one more unit. And what LLMs produce is the probability that a token, T, follows a prefix Q. So P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it's done talking.
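In code, that loop looks roughly like this (a minimal sketch; the `model_probs` table here is a made-up toy stand-in for the real forward pass):

```python
import random

# Toy stand-in for the LLM's forward pass: returns P(T|Q) for every
# known token T, given the current prefix Q. A real model computes this
# with a huge neural network; this hard-coded bigram table just lets
# the loop run.
def model_probs(prefix: list[str]) -> dict[str, float]:
    table = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "<eos>": 0.3},
        "dog": {"sat": 0.5, "<eos>": 0.5},
        "sat": {"down": 0.4, "<eos>": 0.6},
        "down": {"<eos>": 1.0},
    }
    return table[prefix[-1]]

def generate(prefix: list[str], stop: str = "<eos>") -> list[str]:
    while True:
        probs = model_probs(prefix)                 # P(T|Q) for all tokens
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        token = random.choices(tokens, weights)[0]  # one random draw
        if token == stop:                           # "done talking" marker
            return prefix
        prefix = prefix + [token]                   # append, then repeat

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']
```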
If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it's worth noting that this already displays a limitation of LLMs: namely, even with this ideal LLM, we're just a few bad dice rolls away from saying something dumb, which then pollutes the context. And the larger we make the LLM, the closer its results get to the actual value. A potential way to measure this precision would be to take the difference between P(T|Q) and P_calc(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. Now, the thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost for additional digits goes up exponentially.
So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.
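To make the digit-counting measure explicit (it's this comment's own construction, not a standard benchmark), a quick sketch:

```python
import math

def digits_correct(p_true: float, p_calc: float) -> int:
    """Leading zeroes of |P(T|Q) - P_calc(T|Q)|, i.e. roughly how many
    decimal digits of the true probability the model reproduced.
    Assumes the two values actually differ."""
    err = abs(p_true - p_calc)
    return max(0, -math.floor(math.log10(err)) - 1)

# Each extra correct digit shrinks the remaining error by a factor of
# ten (exponentially decaying utility), while the argument above says
# the compute needed to earn it grows exponentially.
print(digits_correct(0.4213, 0.4215))  # -> 3 (error ~ 2e-4)
print(digits_correct(0.4213, 0.4713))  # -> 1 (error = 5e-2)
```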
-
Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we've hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting, and he is scared.
He's also already admitted that they're out of training data. If you've wondered why a lot more websites will run some sort of verification when you connect, it's because there's a desperate scramble to get more training data.
-
All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...
Animal agriculture has significantly better utility and scaling than LLMs. So it's not hypocritical to be opposed to the latter but not the former.
-
It's not; you're just personally insulted. The livestock industry is responsible for about 15% of human-caused greenhouse gas emissions. That's not negligible.
So, I can't complain about any part of the remaining 85% if I'm not vegan? That's so fucking stupid. Do you not complain about microplastics because you're guilty of using devices with plastic in them to type your message?
-
Probably because the animal death industry is comparable to those things.
Death Industry sounds like it would be an awesome band name.
-
When will genAI be so good, it'll solve its own energy crisis?
Current genAI? Never. There's at least one breakthrough needed to build something capable of actual thinking.
-
It's safe to assume that any metric they don't disclose is quite damning to them. Plus, these guys don't really care about the environmental impact, or what us tree-hugging environmentalists think. I'm assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don't care about the environment, the problem with LLMs is how poorly they scale.
Important concepts when evaluating how something scales are the marginal values, chiefly marginal utility and marginal expense. Marginal utility is how much utility you get from one more unit of whatever. Marginal expense is how much it costs to get one more unit. And what LLMs produce is the probability that a token, T, follows a prefix Q. So P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it's done talking.
If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it's worth noting that this already displays a limitation of LLMs: namely, even with this ideal LLM, we're just a few bad dice rolls away from saying something dumb, which then pollutes the context. And the larger we make the LLM, the closer its results get to the actual value. A potential way to measure this precision would be to take the difference between P(T|Q) and P_calc(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. Now, the thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost for additional digits goes up exponentially.
So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.
Well, I mean, there's also the fact that they kinda suck. I feel like I spend more time debugging AI code than I get working code out of it.
-
"Beware: Another AI is watching every of your steps. If you do anything more or different than what I asked you to or touch any files besides the ones listed here, it will immediately shutdown and deprovision your servers."
They do need to do this though. Survival of the fittest. The best model gets more energy access, etc.
-
Is it this?
What is that? Looks funny, but I don't know what it is.
-
All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...
Whataboutism isn't useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere, and even if they never go on to make more changes, at least they made that one.
-
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
When you want to create the shiniest honeypot, you need high power consumption.
-
What is that? Looks funny, but I don't know what it is.
Screenshot from the first Matrix movie, with pods full of people acting as batteries.
-
Screenshot from the first Matrix movie, with pods full of people acting as batteries.
So, exactly as I guessed. Thanks for the explanation!