A "Showerthought" is a simple term used to describe the thoughts that pop into your head while you're doing everyday things like taking a shower, driving, or just daydreaming. The most popular seem to be lighthearted clever little truths, hidden in daily life.
Here are some examples to inspire your own showerthoughts:
- Both “200” and “160” are 2 minutes in microwave math
- When you’re a kid, you don’t realize you’re also watching your mom and dad grow up.
- More dreams have been destroyed by alarm clocks than anything else
Rules
- All posts must be showerthoughts
- The entire showerthought must be in the title
- No politics
- If your topic is in a grey area, please phrase it to emphasize the fascinating aspects, not the dramatic aspects. You can do this by avoiding overly politicized terms such as "capitalism" and "communism". If you must make comparisons, you can say something is different without saying something is better/worse.
- A good place for politics is c/politicaldiscussion
- Posts must be original/unique
- Adhere to Lemmy's Code of Conduct and the TOS
If you made it this far, showerthoughts is accepting new mods. This community is generally tame so its not a lot of work, but having a few more mods would help reports get addressed a little sooner.
Whats it like to be a mod? Reports just show up as messages in your Lemmy inbox, and if a different mod has already addressed the report, the message goes away and you never worry about it.
founded 2 years ago
MODERATORS
This is the main reason I'm reluctant to use AI. I can get around its functional limitations, but I need to know they've brought the energy usage down.
It's not that bad when it's just you fucking around having it write fanfics instead of doing something more taxing, like playing an AAA video game or, idk, running a microwave or whatever it is normies do. Training a model is very taxing, but running one isn't, and the opportunity cost might even be net positive if you tend to use your GPU a lot.
It becomes more of a problem when everyone is doing it where it's not needed, like reading and writing emails. There's no net positive, it's very large-scale usage, and brains are a hell of a lot more efficient at it. That use case has gotta be one of the dumbest imaginable, all while making people legitimately dumber the more they use it.
Oh, you're talking about running it locally, I think. I play games on my Steam Deck since my laptop couldn't handle them at all.
Your Steam Deck at full power (15 W TDP by default) burns the equivalent of about 5 ChatGPT requests per hour. Do you feel guilty yet? No? And you shouldn't!
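For reference, a minimal sketch of the arithmetic behind that comparison, assuming the ~3 Wh-per-request upper bound cited further down in the thread:

```python
# Steam Deck vs. chatbot requests, assuming ~3 Wh per request
# (the upper-bound figure discussed later in the thread).

DECK_TDP_W = 15      # Steam Deck default power cap, in watts
REQUEST_WH = 3.0     # assumed energy per chatbot request, in watt-hours

deck_hour_wh = DECK_TDP_W * 1.0               # 15 W for one hour = 15 Wh
requests_per_hour = deck_hour_wh / REQUEST_WH

print(f"{deck_hour_wh:.0f} Wh per hour of gaming ~ {requests_per_hour:.0f} requests")
# -> 15 Wh per hour of gaming ~ 5 requests
```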
Yup, and the Deck can do stuff at an astoundingly low wattage, in the 3 W to 15 W range. Meanwhile there are GPUs that run at like 400-800 W, like when people used to run two 1080s in SLI. I always found it crazy to see a guy running a system that burns as much electricity as a weak microwave just to play a game, lol. Kept his house warm, tho.
The rule of thumb for data centers is that every watt of compute equals 3 watts of energy consumption: 1 to power the thing and 2 to remove that watt of heat again. So high-power components really add up.
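Taken at face value, that rule of thumb works out like this (a rough sketch; the 400 W accelerator is just an illustrative figure, not from the thread):

```python
# The rule of thumb above: every 1 W of compute costs ~3 W at the facility
# (1 W for the chip itself, ~2 W to move the resulting heat back out).

OVERHEAD_FACTOR = 3  # 1 W compute + 2 W cooling

def facility_draw_w(compute_w: float) -> float:
    """Total facility power implied by a given compute load, per the rule of thumb."""
    return compute_w * OVERHEAD_FACTOR

print(facility_draw_w(400))  # a 400 W accelerator -> ~1200 W at the facility
print(facility_draw_w(15))   # a 15 W handheld-class load -> ~45 W
```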
It's the same as playing a 3D game. It's a GPU, or a GPU equivalent, doing the work. It doesn't matter whether you're asking it to summarize an email or to play Red Dead Redemption.
I mean, if every web search I do is like playing a 3D game, then I'll stick with web searches. 3D gaming is the most energy-intensive thing I do on a computer.
You can run one on your PC locally, so you can see exactly how much power it's consuming.
I already stress my laptop with what I do, so I doubt I'll do that anytime soon. I tend to use pretty old hardware, though: 5+ years, honestly closer to 10.
Can't remember the last time my hardware was younger than 10 years old 😂...😭
Mine's actually just shy of that: 2017 manufacture.
How much further down than 3W/request can you go? I hope you don't let your microwave run 10 seconds longer than optimal, because that's exactly the amount of energy we're talking about. Or run a 5W nightlight for a bit over half an hour.
LLMs and image generation are not what's killing the climate. What does kill it are flights, cars, meat, and badly insulated houses driving up energy usage in winter. Even if we turned off all GenAI, it wouldn't leave a dent compared to those behemoths.
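As a minimal sketch, those equivalences check out if you take ~3 Wh per request and the appliance figures named above (the 1000 W microwave rating is an assumed typical value):

```python
# How long could you run a household device on the ~3 Wh of one request?
# The 1000 W microwave rating is an assumption; the 5 W nightlight is the
# figure from the comment above.

REQUEST_WH = 3.0

def runtime_seconds(power_w: float, energy_wh: float = REQUEST_WH) -> float:
    """Seconds a device drawing `power_w` watts runs on `energy_wh` watt-hours."""
    return energy_wh / power_w * 3600

print(f"1000 W microwave: {runtime_seconds(1000):.0f} s")    # ~11 s
print(f"5 W nightlight: {runtime_seconds(5) / 60:.0f} min")  # ~36 min
```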
Where is this 3W from? W isn't even an energy unit, but a power unit.
Sorry, it should be 3 Wh, you're correct of course. The 3 Wh figure comes from here:
Every source here stays below 3 Wh, so it's reasonable to use 3 Wh as an upper bound. (I wouldn't trust Altman's 0.3 Wh tho lol)
Thanks for linking the sources. I'll take a look into them.
Sorry, there were two conversations and I'm getting confused. Are you talking local, where I don't have the overhead? Or using them online, where I'm worried about the energy usage?
That's online, i.e. what's used in the data centers per request. Local is probably not so different, depending on the device: different devices have different architectures which might be more or less optimal, but the cooling is passive. If it cost more, it wouldn't be mostly free.
This is a pretty well-researched post; he made a cheat sheet too ;-)
Yeah, the thing is, it's not comparing each request to an airline flight, it's comparing each one to a web search. Its utility is not that much greater; it's just a convenience. It's like with Bitcoin, where the point is energy per transaction compared to a credit card transaction. I mean, I search the web a whole bunch every day, and way more when I'm working.
Whether it's useful or not is another discussion, but if you use an LLM to write an email in 2 minutes that would otherwise take you 10 minutes (including searches and whatever), you actually generate LESS CO2 than with the manual process:
Manual process: equals ~40 Wh
compared to:
LLM-assisted: equals ~17 Wh.
And that is excluding many other factors, like general energy costs for infrastructure, which tilt the calculation further in ChatGPT's favor.
EVERYTHING we do creates emissions one way or another; we create emissions simply by existing. It's important to put things into perspective. The two figures above are equivalent to running a 1000 W microwave for either 2:20 or 1:05. You wouldn't be shocked by those numbers.
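The microwave cross-check is easy to reproduce (a sketch; the PC wattage behind the ~40 Wh and ~17 Wh figures isn't spelled out in the thread, so only the conversion is shown):

```python
# Convert the ~40 Wh (manual email) and ~17 Wh (LLM-assisted) figures
# quoted above into runtime on a 1000 W microwave.

MICROWAVE_W = 1000

def microwave_minutes(energy_wh: float) -> float:
    """Minutes a 1000 W microwave runs on the given energy."""
    return energy_wh / MICROWAVE_W * 60

print(f"manual:       {microwave_minutes(40):.1f} min")  # ~2.4 min, roughly the 2:20 above
print(f"LLM-assisted: {microwave_minutes(17):.1f} min")  # ~1.0 min, roughly the 1:05 above
```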
This would not save anything, as you would not use your monitor and PC 8 minutes less in that scenario. Or at least I would not. It's sorta moot, as generating an email is definitely not something I would use AI for. Granted, I really doubt I would spend 10 minutes on an email unless it was complicated and I kept it open, putting it together while doing something else. Any savings would also assume the AI-generated email did not result in more activity than one you answered yourself. To actually save anything, you would genuinely have to use the hardware less that day or week.
Well, that depends on the workload and the employer. If you are one of the lucky ones where it's just important that shit gets done on time, it would result in lower usage. That's on the employer, not on the LLM.
3W/request (4W if you include training the model) is nothing compared to what we use in our everyday life, and it's even less when you look at what other activities consume. No one would have an issue with you running a blender for 30 seconds, even tho it's the same energy usage as a chatbot request.
See, again you're comparing something someone might do once a day, and most people hardly ever do (using a blender), with something used constantly throughout the day (web searching). Even before AI, data centers were a massive use of energy. Now, I'm not the type to say throw it all away, but I will be careful in my usage till I'm sure it's worth it, and that's going to require the vendors to put out data on energy usage. It's new enough that I'm sure newer and better chips will be able to reduce the energy it takes. You have to realize you're talking to someone who walks if I can, bikes as a second option, and takes public transit as a last resort; I avoid driving and planes unless I absolutely have to. I'm in tech, so I will be using it, and it will likely follow the same curve as previous technology, but maybe not, given that smartphones and apps were the most recent things before AI and I use those only if I absolutely have to.
I've had no car my entire life and have flown 4 times (I like trains lol).
OK, something nearly everyone does: washing clothes.
A 500 W washing machine uses this amount of energy in about 22 seconds during the spin cycle.
Using a dryer? 4.3 seconds @ 3000 W.
If we look at the equivalent mechanical energy, let's take your bike (you've gotta eat to offset the energy you burn, which causes emissions through agriculture and cooking!):
You can cover one chatbot response by pedaling for ~2 minutes @ 100 W!
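As a quick consistency check (a sketch using the figures above), multiplying each power by its duration puts all of these in the same ~3-3.6 Wh band as a single request:

```python
# Sanity-check the equivalences above: power (W) x time (h) = energy (Wh).
# Figures are the ones quoted in the comment; all land near 3-3.6 Wh.

examples = {
    "washing machine spin (500 W, ~22 s)": (500, 22),
    "dryer (3000 W, 4.3 s)":               (3000, 4.3),
    "pedaling a bike (100 W, ~2 min)":     (100, 120),
}

for name, (power_w, seconds) in examples.items():
    energy_wh = power_w * seconds / 3600
    print(f"{name}: {energy_wh:.1f} Wh")
# -> 3.1 Wh, 3.6 Wh, 3.3 Wh
```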
The chips are already pretty optimized, since they are in principle no different from high-end gaming GPUs. I have a 3070 Ti, and I can generate a complex response locally in under a minute @ 290 W TDP, or in 30-40 seconds if I use something like Qwen 2.5, which aligns pretty well with what I've said before. Playing a video game uses a lot more power.
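The same arithmetic for that local setup, assuming the card actually draws its full 290 W for the whole response:

```python
# Energy per locally generated response on the setup described above:
# a ~290 W GPU busy for 30-60 seconds per reply (full TDP assumed).

GPU_W = 290

for seconds in (30, 40, 60):
    energy_wh = GPU_W * seconds / 3600
    print(f"{seconds} s at {GPU_W} W -> {energy_wh:.1f} Wh")
# -> 2.4 Wh, 3.2 Wh, 4.8 Wh
```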
If it's 2 minutes for every query, I'm 100% not using it, and I think numbers like 2 minutes of biking aren't really based on anything. Again, it would help if the industry made a point of tracking and publishing energy usage.
A local LLM probably costs about as much to run as the online one, but that usage is so widely distributed that it's not as impactful on any specific location.