OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
-
It's safe to assume that any metric they don't disclose is quite damning to them. Plus, these guys don't really care about the environmental impact, or what us tree-hugging environmentalists think. I'm assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don't care about the environment, the problem with LLMs is how poorly they scale.
An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expenses. Marginal utility is how much utility you get from one more unit of whatever. Marginal expense is how much it costs to get one more unit. And what an LLM produces is the probability that a token T follows a prefix Q, i.e. P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it's done talking.
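To make that concrete, here's a rough Python sketch of that loop. The vocabulary and probabilities are made up (a real model computes P(T|Q) from the whole prefix); the point is just that generation is one random draw from P(T|Q) at a time, appended and repeated until an end-of-sequence token comes up.

```python
import random

# Toy vocabulary; "<eos>" stands in for whatever sequence means "done talking".
VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def p_next(prefix):
    """Stand-in for the model: returns a made-up P(T|Q) over all known tokens."""
    # A real LLM would compute these weights from the prefix; here they're fixed
    # so the loop is runnable.
    weights = [0.30, 0.20, 0.15, 0.15, 0.10, 0.10]
    return dict(zip(VOCAB, weights))

def generate(prefix, max_tokens=20):
    tokens = list(prefix)
    for _ in range(max_tokens):
        dist = p_next(tokens)  # P(T|Q) for every known token
        token = random.choices(list(dist), weights=list(dist.values()))[0]  # one random draw
        if token == "<eos>":   # the model signals it's done talking
            break
        tokens.append(token)   # append the chosen token and repeat
    return " ".join(tokens)

print(generate(["the"]))
```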
If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it's worth noting that this already exposes a limitation of LLMs: even with this ideal LLM, we're just a few bad dice rolls away from saying something dumb, which then pollutes the context. And the larger we make the LLM, the closer its results get to the actual value. A potential way to measure this precision would be to take the difference between P_calc(T|Q) and P(T|Q) and count the leading zeroes, essentially counting the number of digits we got right. The thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost of each additional digit goes up exponentially.
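To illustrate the diminishing returns side with my own toy numbers (nothing measured here): the digit count only grows with the logarithm of the error, so each estimate below is ten times closer to the true value but only buys one more digit.

```python
import math

def digits_correct(p_true, p_calc):
    """Leading zeroes of |P(T|Q) - P_calc(T|Q)|, i.e. roughly the digits we got right."""
    err = abs(p_true - p_calc)
    if err == 0:
        return float("inf")
    return max(0, -math.floor(math.log10(err)) - 1)

# Hypothetical estimates, each ten times closer to a true value of 0.25:
for p_calc in [0.3, 0.255, 0.2505, 0.25005, 0.250005]:
    print(f"P_calc = {p_calc}: {digits_correct(0.25, p_calc)} digit(s) correct")
```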
So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.
Well, I mean, they also kinda suck. I feel like I spend more time debugging AI code than getting working code.
-
"Beware: Another AI is watching every of your steps. If you do anything more or different than what I asked you to or touch any files besides the ones listed here, it will immediately shutdown and deprovision your servers."
They do need to do this though. Survival of the fittest. The best model gets more energy access, etc.
-
Is it this?
What is that? Looks funny, but idk what it is.
-
All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...
Whataboutism isn't useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere, and even if they never go on to make more changes, at least they made that one.
-
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
When you want to create the shiniest honeypot, you need high power consumption.
-
What is that? Looks funny, but idk what it is.
Screenshot from the first Matrix movie, with pods full of people acting as batteries.
-
Screenshot from the first Matrix movie, with pods full of people acting as batteries.
So exactly as I guessed, thanks for the explanation.
-
I wonder if at this stage all the processors should simply be submerged into a giant cooling tank. It seems easier and more efficient.
Microsoft has tried running datacenters in the sea, for cooling purposes.
Microsoft blog
I can blame the user for believing the marketing over their direct experiences.
If you use these tools for any amount of time it's easy to see that there are some tasks they're bad at and some that they are good at. You can learn how big of a project they can handle and when you need to break it up into smaller pieces.
I can't imagine any sane person who lives their life guided by marketing hype instead of direct knowledge and experience.
I can't imagine any sane person who lives their life guided by marketing hype instead of direct knowledge and experience.
I mean fair enough but also... That makes the vast majority of managers, MBAs, salespeople and "normies" like your grandma and Uncle Bob insane.
Actually questioning stuff that salespeople tell you and using critical thinking is a pretty rare skill in this day and age.
-
Microsoft has tried running datacenters in the sea, for cooling purposes.
Microsoft blog
That brings in another problem (power), so they have to use generators or undersea cables.
-
Those are his lying/making-up hand gestures. It's the same thing Trump does with his hands when he's lying or exaggerating; he does the weird accordion hands.
So there are no pictures without the hands, got it.
-
Well, I mean, they also kinda suck. I feel like I spend more time debugging AI code than getting working code.
That's actually true. I read some research on that and your feeling is correct.
Can't be bothered to google it right now.
-
Well, I mean, they also kinda suck. I feel like I spend more time debugging AI code than getting working code.
I only use it if I'm stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution for my problem. Like pair programming, but a bit shitty.
The best way to use these LLMs for coding is to never use the generated code directly, and to atomize your problem into smaller questions you ask the LLM.
-
I wonder if at this stage all the processors should simply be submerged into a giant cooling tank. It seems easier and more efficient.
Or you could build the centers in colder climate areas. Here in Finland it's common (maybe even mandatory, I'm not sure) for new datacenters to pull the heat from their systems and use it for district heating. No wasted water, and at least you get something useful out of LLMs. Obviously using them as a massive electric boiler is pretty inefficient, but energy for heating is needed anyway, so at least we can stay warm and get 90s action series fanfic on top of that.
-
Whataboutism isn't useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere, and even if they never go on to make more changes, at least they made that one.
One beef burger is equivalent in water consumption to 7.5 million ChatGPT queries.
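For what it's worth, a quick back-of-the-envelope on what that claim implies, using the 15,000 L/kg beef figure quoted further down the thread and assuming a ~110 g patty (both are assumptions on my part, not a citation):

```python
# Sanity check of the claim above: what water use per query does it imply?
# The 15,000 L/kg figure is the low end quoted elsewhere in this thread;
# the 110 g patty weight is an assumption for illustration.
litres_per_kg_beef = 15_000
patty_kg = 0.110
queries_per_burger = 7_500_000

litres_per_burger = litres_per_kg_beef * patty_kg             # 1,650 L per burger
ml_per_query = litres_per_burger / queries_per_burger * 1000  # ~0.22 mL per query
print(f"{litres_per_burger:.0f} L per burger ~= {ml_per_query:.2f} mL per ChatGPT query")
```

So the claim works out to roughly a fifth of a millilitre of water per query; whether either input number is right is a separate question.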
-
Animal agriculture has significantly better utility and scaling than LLMs. So, it's not hypocritical to be opposed to the latter but not the former.
The holocaust was well scaled too.
Animal ag is responsible for 15-20% of the entire planet's GHG emissions. You can live a healthier, more morally consistent life if you give up meat.
So, I can't complain about any part of the remaining 85% if I'm not vegan? That's so fucking stupid. Do you not complain about microplastics because you're guilty of using devices with plastic in them to type your message?
Imagine complaining about someone tipping out fresh water while eating a burger, when a single kilo of beef uses between 15,000 and 200,000 liters.
Like, until you stop doing the worst thing a single consumer can do to the planet for literally nothing but greed and pleasure (eating meat instead of healthier alternatives), you have no leg to stand on when you criticize.
-
I'm only using it in edits mode; it's the second of the three modes available.
Yep, that's also pretty safe.
-
One beef burger is equivalent in water consumption to 7.5 million ChatGPT queries.
Do you have a citation handy for that?
-
The holocaust was well scaled too.
Animal ag is responsible for 15-20% of the entire planet's GHG emissions. You can live a healthier, more morally consistent life if you give up meat.
Do you know what I mean by "scaling"? Because going by your reply, you don't.