I know like half the facts I would need to estimate it... if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency you could get a ballpark number by looking at Nvidia GPU specs for power usage. For instance, if a short clip of video generation needs 90 GB VRAM, then maybe they are using an RTX 6000 Pro... https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ . Take the amount of time it takes in off hours, which shouldn't have a queue time, and you can guesstimate a number of watt-hours. Like if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $0.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing $0.03 to $0.06.
IDK how much GPU time you actually need though, I'm just wildly guessing. Like if they use many server-grade GPUs in parallel, that would multiply the cost even if it only takes them minutes per video generation.
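A minimal sketch of that arithmetic, assuming the single-GPU power draw, 20-minute generation time, and San Francisco electricity price guessed above (none of these are measured numbers):

```python
# Back-of-envelope electricity cost for one video generation.
# Every input here is a guess carried over from the comment above.

GPU_POWER_WATTS = (300, 600)   # assumed draw of one RTX 6000 Pro-class card under load
GENERATION_MINUTES = 20        # assumed wall-clock time per clip
PRICE_PER_KWH = 0.33           # San Francisco rate from the energysage link
NUM_GPUS = 1                   # raise this if generation is spread across several GPUs

for watts in GPU_POWER_WATTS:
    energy_kwh = watts * NUM_GPUS * (GENERATION_MINUTES / 60) / 1000
    cost_usd = energy_kwh * PRICE_PER_KWH
    print(f"{watts} W x {NUM_GPUS} GPU(s): {energy_kwh:.3f} kWh -> ${cost_usd:.3f}")

# Output: 0.100 kWh -> $0.033 and 0.200 kWh -> $0.066 per clip on a single GPU.
# Setting NUM_GPUS to 8 or 16 scales the cost linearly, which is the parallel-GPU caveat above.
```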
This does leave out the fixed cost of training the model itself, which has to be amortized across every video generated, right? Pro-genAI people would say you only have to pay it once, but we know everything online gets scraped repeatedly now, so there will be constant retraining. (I am mixing video with text here, so lots of big unknowns.)
If they got a lot of usage out of a model, this fixed cost would contribute little to the cost of each video in the long run... but considering they currently replace/retrain models every 6 months to 1 year, yeah, this cost should be factored in as well.
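For a feel of the amortization, here is a sketch with purely made-up placeholder numbers for the training bill and usage volume (neither figure is from the post):

```python
# Spreading a one-off training cost over a model's service life.
# Both the training cost and the usage rate are hypothetical placeholders.

TRAINING_COST_USD = 50_000_000   # hypothetical one-time training bill
VIDEOS_PER_DAY = 1_000_000       # hypothetical generation volume
SERVICE_LIFE_DAYS = 270          # roughly the 6-12 month replacement cycle mentioned above

videos_served = VIDEOS_PER_DAY * SERVICE_LIFE_DAYS
amortized_usd = TRAINING_COST_USD / videos_served
print(f"amortized training cost: ${amortized_usd:.3f} per video")

# Output: about $0.185 per video with these placeholders -- comparable in scale to
# the per-clip electricity guess, not something that vanishes in the long run.
```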
Also, training compute grows quadratically with model size, because it is the product of the amount of training data (which, under compute-optimal scaling, grows linearly with model size) and the model size itself.
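Written out, using the common approximation for transformer training FLOPs (N = parameter count, D = training tokens) and Chinchilla-style compute-optimal scaling where D is taken proportional to N:

```latex
C \approx 6ND, \qquad D \propto N \;\Longrightarrow\; C \propto N^2
```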
Well that’s certainly depressing. Having to come to terms with living post-gen AI even after the bubble bursts isn’t going to be easy.
Keep in mind I was wildly guessing with a lot of numbers... like I'm sure 90 GB VRAM is enough for decent-quality pictures generated in minutes, but I think you need a lot more compute to generate video at a reasonable speed? I wouldn't be surprised if my estimate is off by a few orders of magnitude. $0.30 (one order of magnitude up) is probably enough that people can't spam lazily generated images, and a true cost of $3.00 (two orders up) would keep it in the range of people that genuinely want/need the slop... but yeah, I don't think it is all going cleanly away once the bubble pops or fizzles.