[-] Saledovil@sh.itjust.works 17 points 3 hours ago

It's safe to assume that any metric they don't disclose is quite damning to them. Plus, these guys don't really care about the environmental impact, or what us tree-hugging environmentalists think. I'm assuming the only group they are scared of upsetting right now is investors. The thing is, even if you don't care about the environment, the problem with LLMs is how poorly they scale.

An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expenses. Marginal utility is how much utility you get from one more unit of whatever; marginal expense is how much it costs to get one more unit. And what an LLM produces is the probability that a token T follows a prefix Q, written P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it's done talking.
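
As a rough sketch (not any vendor's actual implementation; `model` here is a hypothetical function returning a `{token: probability}` dict), that sampling loop looks something like:

```python
import random

def generate(model, prefix, eos="<eos>", max_tokens=50):
    """Sample tokens autoregressively: get P(T | Q) for every known
    token T, pick one at random weighted by those probabilities,
    append it to the prefix Q, and repeat until an end marker."""
    tokens = list(prefix)
    for _ in range(max_tokens):
        dist = model(tokens)            # hypothetical: {token: P(token | tokens)}
        choices = list(dist.keys())
        weights = list(dist.values())
        token = random.choices(choices, weights=weights, k=1)[0]
        tokens.append(token)
        if token == eos:                # the sequence that means "done talking"
            break
    return tokens
```

Note the random draw at each step: even a perfect P(T|Q) can roll a low-probability token, which then sits in the prefix for every later step.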

If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. It's worth noting that even this ideal LLM displays a limitation: we're still just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make a real LLM, the closer its results get to that actual value. A potential way to measure this precision would be to subtract P_calc(T|Q) from P(T|Q) and count the leading zeroes of the difference, essentially counting the number of digits we got right. The thing is, each additional digit only provides a tenth of the utility of the digit before it, while the cost of each additional digit goes up exponentially.

So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.

[-] Jeremyward@lemmy.world 4 points 2 hours ago

Well I mean also that they kinda suck. I feel like I spend more time debugging AI code than I'd spend just writing working code myself.

[-] SkunkWorkz@lemmy.world 2 points 22 minutes ago

I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution for my problem. Like pair programming, but a bit shitty.

The best way to use these LLMs for coding is to never use the generated code directly and to atomize your problem into smaller questions you ask the LLM.

[-] squaresinger@lemmy.world 2 points 59 minutes ago

That's actually true. I read some research on that and your feeling is correct.

Can't be bothered to google it right now.

[-] TheObviousSolution@lemmy.ca 3 points 2 hours ago

When you want to create the shiniest honeypot, you need high power consumption.

[-] Tollana1234567@lemmy.today 9 points 3 hours ago

intense electricity demands, and WATER for cooling.

[-] Event_Horizon@lemmy.world 1 points 3 hours ago

I wonder if at this stage all the processors should simply be submerged into a giant cooling tank. It seems easier and more efficient.

[-] OADINC@feddit.nl 1 points 1 hour ago

Microsoft has tried running datacenters in the sea for cooling purposes (Project Natick; see the Microsoft blog).

[-] Tollana1234567@lemmy.today 1 points 1 hour ago

That brings in another problem, though: getting power down there, so they have to use generators or undersea cables.

[-] fuzzywombat@lemmy.world 20 points 6 hours ago

Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we've hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.

[-] Saledovil@sh.itjust.works 3 points 2 hours ago

He's also already admitted that they're out of training data. If you've wondered why a lot more websites will run some sort of verification when you connect, it's because there's a desperate scramble to get more training data.

[-] Tollana1234567@lemmy.today 3 points 3 hours ago

MS already revealed that their AI doesn't make money at all; in fact, it's costing too much. Of course he's freaking out.

[-] vrighter@discuss.tchncs.de 6 points 5 hours ago

is there any picture of the guy without his hand up like that?

[-] Tollana1234567@lemmy.today 2 points 3 hours ago

Those are his lying/making-things-up hand gestures. It's the same thing Trump does with his hands when he's lying or exaggerating, the weird accordion hands.

[-] cute_noker@feddit.dk 1 points 1 hour ago

So there are no pictures without the hands, got it.

[-] threeduck@aussie.zone 7 points 6 hours ago

All the people here chastising LLMs for resource wastage, I swear to god if you aren't vegan...

[-] Bunbury@feddit.nl 2 points 2 hours ago

Whataboutism isn’t useful. Nobody is living the perfect life. Every improvement we can make towards a more sustainable way of living is good. Everyone needs to start somewhere and even if they never move to make more changes at least they made the one.

[-] Saledovil@sh.itjust.works 3 points 2 hours ago

Animal agriculture has significantly better utility and scaling than LLMs. So it's not hypocritical to be opposed to the latter but not the former.

[-] k0e3@lemmy.ca 12 points 5 hours ago
[-] 3abas@lemmy.world 6 points 4 hours ago

It's not, you're just personally insulted. The livestock industry is responsible for about 15% of human-caused greenhouse gas emissions. That's not negligible.

[-] k0e3@lemmy.ca 5 points 2 hours ago

So, I can't complain about any part of the remaining 85% if I'm not vegan? That's so fucking stupid. Do you not complain about microplastics because you're guilty of using devices with plastic in them to type your message?

[-] stratoscaster@lemmy.world 8 points 5 hours ago

What is it with vegans and comparing literally everything to veganism? I was in another thread and it was compared to genocide, rape, and climate change all in the same thread. Insanity

[-] Strawberry 4 points 4 hours ago

probably because the animal death industry is comparable to those things

[-] Saledovil@sh.itjust.works 3 points 2 hours ago

Death Industry sounds like it would be an awesome band name.

[-] lowleekun@ani.social 1 points 3 hours ago

Dude, wtf?! You can't just go around pointing out peoples hypocrisy. Companies killing the planet is big bad.

People joining in? Dude just let us live!! It is only animals...

big /s

[-] UnderpantsWeevil@lemmy.world 3 points 4 hours ago

I mean, they're both bad.

But also, "Throw that burger in the trash I'm not eating it" and "Uninstall that plugin, I'm not querying it" have about the same impact on your gross carbon emissions.

These are supply-side problems in industries that receive enormous state subsidies. Hell, the single biggest improvement to our agriculture policy was when China stopped importing US pork products. So, uh... once again, thank you China for saving the planet.

[-] lowleekun@ani.social 1 points 3 hours ago

Wait so the biggest improvement came when there was a massive decline in demand?

[-] Bloomcole@lemmy.world 7 points 7 hours ago

i really hate this cunt's face.

[-] daveB@sh.itjust.works 37 points 11 hours ago
[-] seraphine 1 points 2 hours ago

what is that? looks funny but idk this

[-] Sheldan@lemmy.world 1 points 2 hours ago

Screenshot from the first matrix movie with pods full of people acting as batteries

[-] seraphine 1 points 2 hours ago

so exactly as I guessed, thanks for the explanation

[-] ZILtoid1991@lemmy.world 26 points 13 hours ago

When will genAI be so good, it'll solve its own energy crisis?

[-] Saledovil@sh.itjust.works 2 points 2 hours ago

Current genAI? Never. There's at least one breakthrough needed to build something capable of actual thinking.

[-] redsunrise@programming.dev 247 points 18 hours ago

Obviously it's higher. If it was any lower, they would've made a huge announcement out of it to prove they're better than the competition.

[-] T156@lemmy.world 3 points 8 hours ago

Unless it wasn't as low as they wanted it. It's at least cheap enough to run that they can afford to drop the pricing on the API compared to their older models.

[-] ChaoticEntropy@feddit.uk 16 points 13 hours ago

I get the distinct impression that most of the focus for GPT-5 was making it easier to divert their overflowing volume of queries to less expensive routes.

[-] fittedsyllabi@lemmy.world 2 points 7 hours ago

But it also could be lower, right?

[-] InnerScientist@lemmy.world 4 points 3 hours ago

Not really, if it were they would be announcing their new highly efficient model.

[-] MentalEdge@sopuli.xyz 3 points 4 hours ago

Unlikely. 5 is just all of OpenAI's previous models in a trenchcoat.

this post was submitted on 14 Aug 2025
586 points (100.0% liked)
