ChatGPT-5 power consumption could be as much as eight times higher than GPT-4's — research institute estimates a medium-sized GPT-5 response can consume up to 40 watt-hours of electricity
-
And an LLM that you could run locally on a flash drive will do most of what it can do.
What do you use your usb drive llm for?
-
The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT's reported 2.5 billion requests a day through the model could see energy usage as high as 45 GWh.
A daily energy use of 45 GWh is enormous. A typical modern nuclear power plant produces between 1 and 1.6 GW of electricity per reactor, so data centers running OpenAI's GPT-5 at 18 Wh per query could require the power equivalent of two to three nuclear power reactors, an amount that could be enough to power a small country.
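For what it's worth, the arithmetic behind that quote is easy to check (it comes out closer to two reactors than three):

```python
# Checking the quoted figures: 18 Wh/query, 2.5 billion queries/day.
wh_per_query = 18.0
queries_per_day = 2.5e9

daily_gwh = wh_per_query * queries_per_day / 1e9   # Wh -> GWh: 45.0
avg_power_gw = daily_gwh / 24                      # continuous draw: ~1.88 GW

# At 1 to 1.6 GW of output per reactor:
low, high = avg_power_gw / 1.6, avg_power_gw / 1.0
print(f"{daily_gwh:.0f} GWh/day = {avg_power_gw:.2f} GW continuous")
print(f"= roughly {low:.1f} to {high:.1f} reactors running flat out")
```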
Help me out here. What designates the “response” type? Someone asking it to make a picture? Write a 20 page paper? Code a small app?
-
duckduckgo yes, but ... bing?
Bing is for porn.
-
Tech hasn't improved that much in the last decade. All that's happened is that more cores have been added. The single-thread speed of a CPU is stagnant.
My home PC consumes more power than my Pentium 3 consumed 25 years ago. All efficiency gains are lost to scaling for more processing power. All improvements in processing power are lost to shitty, bloated code.
We don't have the tech for AI. We're just scaling up to the electrical demand of a small country and pretending we have the tech for AI.
It's the muscle car era: can't make things more efficient to compete with Asia? MAKE IT BIGGER AND CONSUME MORE
-
duckduckgo yes, but ... bing?
ddg is bing
-
Help me out here. What designates the “response” type? Someone asking it to make a picture? Write a 20 page paper? Code a small app?
Response Type is decided by ChatGPT's new routing function based on your input. So yeah. Asking it to "think long and hard", which I have seen people advocating for recently to get better results, will trigger the thinking model and waste more resources.
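OpenAI hasn't published how the router actually works, but conceptually it's just a cheap classifier sitting in front of the expensive models. A hypothetical sketch, with invented model names and trigger phrases:

```python
# Hypothetical sketch of a prompt router: a cheap check decides whether a
# request goes to the fast default model or the expensive "thinking" model.
# Trigger phrases and the length heuristic are invented for illustration;
# OpenAI's actual routing logic is not public.
THINKING_TRIGGERS = ("think long and hard", "step by step", "reason carefully")

def route(prompt: str) -> str:
    """Return the name of the backend model that should handle the prompt."""
    lowered = prompt.lower()
    if any(trigger in lowered for trigger in THINKING_TRIGGERS):
        return "gpt-5-thinking"   # slow, expensive reasoning model
    if len(prompt) > 2000:        # assumed heuristic: long inputs escalate too
        return "gpt-5-thinking"
    return "gpt-5-main"           # fast, cheap default

print(route("thank you"))                                    # gpt-5-main
print(route("think long and hard about my thesis outline"))  # gpt-5-thinking
```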
-
And an LLM that you could run locally on a flash drive will do most of what it can do.
Probably not a flash drive, but you can get decent mileage out of 7B models that run on any old laptop for tasks like text generation, shortening, or summarizing.
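For example, with a local runner like Ollama serving a 7B model, summarization is a single local HTTP call. A minimal sketch, assuming `ollama serve` is running and a model has been pulled:

```python
# Minimal sketch: summarize text with a locally served 7B model through
# Ollama's HTTP API. Assumes `ollama serve` is running and a model has been
# pulled, e.g. `ollama pull mistral`.
import requests

def summarize(text: str, model: str = "mistral") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": f"Summarize this in two sentences:\n\n{text}",
            "stream": False,  # one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(summarize("Paste a long article here..."))
```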
-
Tech hasn't improved that much in the last decade. All that's happened is that more cores have been added. The single-thread speed of a CPU is stagnant.
My home PC consumes more power than my Pentium 3 consumed 25 years ago. All efficiency gains are lost to scaling for more processing power. All improvements in processing power are lost to shitty, bloated code.
We don't have the tech for AI. We're just scaling up to the electrical demand of a small country and pretending we have the tech for AI.
Not even the ai tech itself is enough for ai
-
Response Type is decided by ChatGPT's new routing function based on your input. So yeah. Asking it to "think long and hard", which I have seen people advocating for recently to get better results, will trigger the thinking model and waste more resources.
So instead of just saying "thank you" I now have to say "think long and hard about how much this means to me"?
-
That basically just sounds like Mixture of Experts
Basically, but with MCP and SLMs interacting rather than a singular model, with the coordinator model only doing the work of figuring out who to field the question to, and then continuously providing context to the other SLMs for more complex queries.
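A toy version of that coordinator pattern, with the specialists stubbed out (the dispatch keywords and model names are invented for illustration; a real setup would wire this up over MCP):

```python
# Toy version of the coordinator pattern: the coordinator only decides *who*
# answers; the specialist SLMs are stubbed out here. Keywords, names, and
# dispatch logic are invented for illustration, not any real product's design.
def math_slm(query: str) -> str:
    return f"[math SLM answers: {query}]"

def code_slm(query: str) -> str:
    return f"[code SLM answers: {query}]"

def general_slm(query: str) -> str:
    return f"[general SLM answers: {query}]"

SPECIALISTS = {"math": math_slm, "code": code_slm}

def coordinator(query: str) -> str:
    """Cheap routing step: pick a specialist, don't do the heavy work."""
    lowered = query.lower()
    for topic, slm in SPECIALISTS.items():
        if topic in lowered:
            return slm(query)
    return general_slm(query)

print(coordinator("write code to parse a CSV"))      # goes to the code SLM
print(coordinator("what's the capital of France?"))  # falls through to general
```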
-
It would only take one regulation to fix that:
Datacenters that use liquid cooling must use closed-loop systems.
The reason they don't, and why they set up in the desert, is that water is incredibly cheap and the energy to cool a closed-loop system is expensive. So they use evaporative open-loop systems.
Closed-loop systems require a large heat sink, like a cold-water lake, limiting them to locations that are not as tax-advantaged as dry red states.
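The physics behind that trade-off, roughly: evaporating water carries off its latent heat of vaporization (~2.45 MJ/kg at ambient temperature) for the price of the water, while a closed loop has to reject the same heat with powered chillers. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope: why evaporative (open-loop) cooling is so cheap.
# Latent heat of vaporization of water at ambient temperature is ~2.45 MJ/kg.
LATENT_HEAT_MJ_PER_KG = 2.45
MJ_PER_KWH = 3.6

# Heat carried away per litre (~1 kg) of water evaporated:
kwh_per_litre = LATENT_HEAT_MJ_PER_KG / MJ_PER_KWH   # ~0.68 kWh

# Water needed to reject 1 MWh of server heat by evaporation alone:
litres_per_mwh = 1000 / kwh_per_litre                # ~1470 L

print(f"~{kwh_per_litre:.2f} kWh of heat removed per litre evaporated")
print(f"~{litres_per_mwh:.0f} litres of water per MWh of heat rejected")
```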
-
So instead of just saying "thank you" I now have to say "think long and hard about how much this means to me"?
If you want it to really use a lot of energy on receiving your gratitude, sure I guess^^
-
Those users are not paying a sustainable price; they're using chatbots because they're kept artificially cheap to increase use rates.
Force them to pay enough to make these bots profitable and I guarantee they'll stop.
Or it will gatekeep them from poor people. It will mean a lot if the capabilities keep on improving.
That being said, open-source models will always be a thing, and with that in mind I think it will not go away unless it's replaced with something better.
-
I don't care how rough the estimate is: LLMs are using insane amounts of power, and the message I'm getting here is that the newest incarnation uses even more.
BTW a lot of it seems to be just inefficient coding, as Deepseek has shown.
For training, yes, but during operation, by this study's measure, Deepseek actually has an even higher power draw, according to the article. Even models with more efficient programming use insane amounts of electricity:
This was higher than all other tested models, except for OpenAI's o3 (25.35 Wh) and Deepseek's R1 (20.90 Wh).
-
And an LLM that you could run locally on a flash drive will do most of what it can do.
I mean, no, not at all, but local LLMs are a less energy-reckless way to use AI.
-
What do you use your usb drive llm for?
Porn. Obviously.
-
I have an extreme dislike for OpenAI, Altman, and people like him, but the reasoning behind this article is just stuff some guy has pulled from his backside. There are no facts here; it's just "I believe XYZ" with nothing to back it up.
We don't need to make up nonsense about the LLM bubble. There's plenty of valid enough criticisms as is.
By circulating a dumb figure like this, all you're doing is granting OpenAI the power to come out and say "actually, it only uses X amount of power. We're so great!", where X is a figure that on its own would seem bad, but compared to this inflated figure sounds great. Don't hand these shitty companies a marketing win.
That's actually a favorite rhetorical trick of mine when arguing with consummate bullshitters who have followers.
-
Or it will gatekeep them from poor people. It will mean a lot if the capabilities keep on improving.
That being said, open-source models will always be a thing, and with that in mind I think it will not go away unless it's replaced with something better.
I don't think they can survive if they gatekeep and make it unaffordable to most people. There's just not enough demand or revenue that can be generated from rich people asking ChatGPT to do their homework or pretend to be their friend. They need mass adoption to survive, which is why they're trying to keep it artificially cheap in the first place.
Why do you think they haven't raised prices yet? They're trying to make everyone use it and become reliant on it.
And it's not happening. The technology won't "go away" per se, but these expensive AI companies will fail.
-
The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query, so putting all of ChatGPT's reported 2.5 billion requests a day through the model could see energy usage as high as 45 GWh.
A daily energy use of 45 GWh is enormous. A typical modern nuclear power plant produces between 1 and 1.6 GW of electricity per reactor, so data centers running OpenAI's GPT-5 at 18 Wh per query could require the power equivalent of two to three nuclear power reactors, an amount that could be enough to power a small country.
That's 25 requests per kWh (using the headline's 40 Wh upper estimate).
At 10 to 25 cents per kWh, that's at most a cent per request. That doesn't seem too expensive.
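Spelled out (electricity only, ignoring hardware, cooling overhead, and staff):

```python
# Electricity cost per request, using the headline's 40 Wh upper estimate.
wh_per_request = 40
requests_per_kwh = 1000 / wh_per_request        # 25 requests per kWh

for price_per_kwh in (0.10, 0.25):              # $/kWh, the range above
    cost = price_per_kwh / requests_per_kwh     # $ per request
    print(f"at ${price_per_kwh:.2f}/kWh: ${cost:.3f} per request")
# at $0.10/kWh: $0.004 per request
# at $0.25/kWh: $0.010 per request
```
-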
that's a lot. remember to add "-noai" to your google searches.
This is my weekly time to tell lemmings about Kagi, the search engine that does not shove LLMs in your face (but still lets you use them when you explicitly want to) and that you pay for with your money, not your data.