[-] shnizmuffin@lemmy.inbutts.lol 51 points 2 weeks ago

If I asked a PhD, "How many Bs are there in the word 'blueberry'?" they'd call an ambulance for my obvious, severe concussion. They wouldn't answer, "There are three Bs in the word blueberry! I know, it's super tricky!"

[-] panda_abyss@lemmy.ca 6 points 2 weeks ago* (last edited 2 weeks ago)

I don’t feel this is a good example of why LLMs shouldn’t be treated like PhDs.

My first interactions with GPT-5 have been pretty awful, and I'd test it, but it's not available to me anymore.

Edit: I am not having a stroke, I’m bad at typing and autocorrect hates me

[-] shnizmuffin@lemmy.inbutts.lol 4 points 2 weeks ago
[-] panda_abyss@lemmy.ca 3 points 2 weeks ago

BlackBerry toast

[-] GissaMittJobb@lemmy.ml 4 points 2 weeks ago

LLMs are fundamentally unsuitable for character counting on account of how they 'see' the world: as a sequence of tokens, which can split words in non-intuitive ways.

Regular programs already excel at counting characters in words, and LLMs can be used to generate such programs with ease.
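To illustrate the point, here is a minimal sketch of such a program in plain Python (no LLM involved); the function name is my own choice, not anything from the thread:

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a single letter in a word, case-insensitively."""
    return word.lower().count(letter.lower())

print(count_letter("blueberry", "b"))  # prints 2
```

Because the program sees individual characters rather than tokens, it can't get this wrong the way a language model can.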

[-] itslilith 20 points 2 weeks ago

But they don't recognize their inadequacies; instead, they spout confident misinformation.

[-] GissaMittJobb@lemmy.ml 9 points 2 weeks ago

This is true. They do not think, because they are next token predictors, not brains.

With this in mind, you can still harness a few usable properties from them. Nothing like the kind of hype the techbros and VCs imagine, but a few moderately beneficial use-cases exist.

[-] itslilith 8 points 2 weeks ago

Without a doubt. But PhD-level thinking requires a kind of introspection that LLMs (currently) just don't have. And the letter-counting thing is a funny example of that inaccuracy.

[-] chaos@beehaw.org 3 points 2 weeks ago

The tokenization is a low-level implementation detail; it shouldn't affect an LLM's ability to do basic reasoning. We don't do arithmetic by counting how many neurons we can feel firing in our brain; we have higher-level concepts of numbers, and LLMs are supposed to have something similar. Plus, in the """thinking""" models, you'll see them break words up into individual letters or even write them out in a numbered list, which should break the tokens up into individual letters as well.

[-] darreninthenet@piefed.social 2 points 2 weeks ago

FWIW, ChatGPT 5 gets this correct

[-] shnizmuffin@lemmy.inbutts.lol 3 points 2 weeks ago

[screenshot]
[-] limerod@reddthat.com 1 point 2 weeks ago

You appear to be using an older GPT model. The newer model calculates and answers correctly for most words, at least for the few I asked.

[-] mbtrhcs@feddit.org 1 point 2 weeks ago

It literally says 5 in the screenshot but ok

[-] limerod@reddthat.com 1 point 2 weeks ago

I saw that. I'm using the mobile app. There's a possibility the web version is using an inferior model.

[-] darreninthenet@piefed.social 1 point 2 weeks ago

It did for me 🤷🏻‍♂️

this post was submitted on 08 Aug 2025
58 points (100.0% liked)

Technology
