OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
-
a similar amount of energy to running a hair dryer
We see a lot of these kinds of comparisons. Thing is, you run a hair dryer once a day at most. Or it gets compared to a Google search, and again, most people only do a handful of searches each day. A ChatGPT conversation can be hundreds of messages back and forth. A Claude Code session can go for hours and involve millions of tokens. An individual AI inference might be pretty tame, but the quantity of them is on another level.
If it were so efficient, they wouldn't be building Manhattan-sized datacenters.
OK, but running a hairdryer for 5 minutes is well up into the hundreds of queries, which is more than the vast majority of people will use in a week. The post I replied to was talking about it being 1–2% of energy usage, so that includes transport, heating and heavy industry. It just doesn't pass the smell test to me that something where a week's worth of usage is exceeded by a person drying their hair once is comparable with such vast users of energy.
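For what it's worth, the arithmetic behind that comparison only works out at the optimistic end of the per-query estimates. A rough sketch, assuming a ~1.5 kW dryer and the commonly cited 0.3–3 Wh per-query figures (neither number comes from the article):

```python
# Back-of-envelope only; both figures below are assumptions, not measurements.
hair_dryer_watts = 1500                       # typical hair dryer power draw
dryer_wh = hair_dryer_watts * 5 / 60          # ~125 Wh for 5 minutes of drying

for per_query_wh in (0.3, 3.0):               # low and high per-query estimates
    print(f"at {per_query_wh} Wh/query: ~{dryer_wh / per_query_wh:.0f} queries")

# at 0.3 Wh/query: ~417 queries
# at 3.0 Wh/query: ~42 queries
```

So "hundreds of queries per hair-dry" holds if you trust the low per-query estimate; at the higher estimates it's more like a few dozen.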
-
Photographer1: Sam, could you give us a goofier face?
*click* *click*
Photographer2: Goofier!!
*click* *click* *click* *click*
He looks like someone in a cult. Wide open eyes, thousand yard stare, not mentally in the same universe as the rest of the world.
-
I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions which are part of a feature, or one refactoring. I make manual edits fast and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels it's needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people, maybe it loves me. I only ever used 4.1 and possibly 4o in free mode in Copilot.
It's an issue of scope. People often give the AI too much to handle at once, myself (admittedly) included.
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
When will genAI be so good, it'll solve its own energy crisis?
-
Sam Altman looks like an SNL actor impersonating Sam Altman.
"Herr derr, AI. No, seriously."
-
Obviously it's higher. If it was any lower, they would've made a huge announcement out of it to prove they're better than the competition.
I get the distinct impression that most of the focus for GPT-5 was making it easier to divert their overflowing volume of queries to less expensive routes.
-
When will genAI be so good, it'll solve its own energy crisis?
Most certainly it won't happen until after AI has developed a self-preservation bias. It's too bad the solution is turning off the AI.
-
"If you do anything else then what i asked your mother dies"
"Beware: Another AI is watching every of your steps. If you do anything more or different than what I asked you to or touch any files besides the ones listed here, it will immediately shutdown and deprovision your servers."
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
Is it this?
-
Photographer1: Sam, could you give us a goofier face?
*click* *click*
Photographer2: Goofier!!
*click* *click* *click* *click*
Looks like he's going to eat his microphone
-
Obviously it's higher. If it was any lower, they would've made a huge announcement out of it to prove they're better than the competition.
-
Obviously it's higher. If it was any lower, they would've made a huge announcement out of it to prove they're better than the competition.
Unless it wasn't as low as they wanted it. It's at least cheap enough to run that they can afford to drop the pricing on the API compared to their older models.
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
But it also could be lower, right?
-
I believe in verifiable statements and so far, with few exceptions, I've seen nothing. We are now speculating on magical numbers that we can't see, but we know that AI is demanding and we know that even small models are not free. The only accessible data comes from Mistral; most other AI devs are not exactly happy to share the inner workings of their tools.
Even then, Mistral didn't release all their data, and even if they did, it would only apply to Mistral 7B and above, not to ChatGPT.
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
I really hate this cunt's face.
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we've hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.
-
"If you do anything else then what i asked your mother dies"
I've tried threats in prompt files, with results that are... OK. Honestly, I can't tell if they made a difference or not.
The only thing I've found that consistently works is writing good old-fashioned scripts to look for common errors by LLMs and then having them run those scripts after every action so they can somewhat clean up after themselves.
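Something in that spirit might look like the sketch below. The specific patterns, file glob and exit-code convention are made up for illustration rather than taken from the commenter's setup; the point is just that the checks are deterministic and cheap, so the model can be told to rerun them after every change.

```python
#!/usr/bin/env python3
"""Illustrative post-edit check script; the patterns below are examples only."""
import pathlib
import re
import sys

# Leftovers that often indicate an unfinished or sloppy LLM edit
SUSPICIOUS = [
    re.compile(r"TODO: implement"),              # stubbed-out bodies
    re.compile(r"\.\.\. existing code \.\.\."),  # placeholder never replaced
    re.compile(r"print\(['\"]debug"),            # forgotten debug prints
]

def check(path: pathlib.Path) -> list[str]:
    """Return one message per suspicious line in the given file."""
    problems = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SUSPICIOUS:
            if pattern.search(line):
                problems.append(f"{path}:{lineno}: matches {pattern.pattern!r}")
    return problems

if __name__ == "__main__":
    # Skip this script itself so its own pattern list isn't flagged.
    self_path = pathlib.Path(__file__).resolve()
    files = [f for f in pathlib.Path(".").rglob("*.py") if f.resolve() != self_path]
    issues = [msg for f in files for msg in check(f)]
    print("\n".join(issues) or "no obvious leftovers found")
    sys.exit(1 if issues else 0)  # non-zero exit so the agent notices failures
```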
-
I make it write entire functions for me: one prompt = one small feature, or sometimes one or two functions which are part of a feature, or one refactoring. I make manual edits fast and prompt the next step. It easily does things for me like parsing obscure binary formats, threading a new piece of state through the whole application to the levels it's needed, or doing massive refactorings. Idk why it works so well for me and so badly for other people, maybe it loves me. I only ever used 4.1 and possibly 4o in free mode in Copilot.
Are you using Copilot in agent mode? That's where it breaks shit. If you're using it in ask mode with the file you want to edit added to the chat context, then you're probably going to be fine.
-
This post did not contain any content.
OpenAI will not disclose GPT-5’s energy use. It could be higher than past models
Experts working to benchmark resource use of AI models say new version’s enhanced capabilities come at a steep cost
the Guardian (www.theguardian.com)
All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...
-
Are you using Copilot in agent mode? That's where it breaks shit. If you're using it in ask mode with the file you want to edit added to the chat context, then you're probably going to be fine.
I'm only using it in edits mode; it's the second of the three modes available.