submitted 1 day ago* (last edited 1 day ago) by slaacaa@lemmy.world to c/fuck_ai@lemmy.world

I saw this on reddit, thinking it’s a joke, but it’s not. GPT 5 mini (and maybe sometimes 5) gives this answer. You can check it yourself, or see somebody else’s similar conversation here: https://chatgpt.com/share/689a4f6f-18a4-8013-bc2e-84d11c763a99

[-] jsomae@lemmy.ml 22 points 16 hours ago* (last edited 16 hours ago)

The knowledge cut-off for GPT5 is 2024 just so you know. Obviously, it would be better if it didn't hallucinate a response to fill in its own blanks. But it's software, so if you're going to use it then please use it like software and not like it's magic.

In general I'm not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is. Really though, the blame is on AI companies for trying to push AI onto everyone rather than only to domain experts.

[-] RedFrank24@lemmy.world 3 points 7 hours ago

That's funny though because I know Copilot can google things and talk about them.

Like, a news story can appear that day, and you go "Did you hear about the guy that did X and Y?" and Copilot will google it and be like "Oh yeah you're referring to the news story that came out today about the guy that did X and Y. It was reported in Newspaper that Z was also involved" and then send you a link to the article.

So like... GPT5 should be able to supplement its knowledge with basic searching, it just doesn't.
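The behavior being described is retrieval-augmented generation: detect that a question is about something past the model's knowledge cutoff, run a web search, and stuff the results into the prompt. A toy sketch of the routing logic (all names here are hypothetical, not a real API):

```python
# Toy sketch of the "supplement with search" pattern. The llm and
# web_search callables are stand-ins, not a real vendor API.
CUTOFF_YEAR = 2024  # per the comment above, GPT-5's knowledge cutoff

def answer(question, asked_year, llm, web_search):
    """Route questions about post-cutoff events through a search step."""
    if asked_year > CUTOFF_YEAR:
        snippets = web_search(question)  # fetch fresh sources first
        prompt = f"Using these sources: {snippets}\n\nAnswer: {question}"
    else:
        prompt = question  # model's baked-in knowledge may suffice
    return llm(prompt)
```

The gap the thread is complaining about is exactly the first branch: without it, the model falls through to the second branch and confidently answers from stale weights.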

[-] jsomae@lemmy.ml 1 points 20 minutes ago* (last edited 20 minutes ago)

Personally, as someone who prefers it when software only does things I direct it to, I'd rather that an LLM not automatically search for the answer online if I didn't ask it to.

[-] BluescreenOfDeath@lemmy.world 25 points 14 hours ago

This is the fundamental problem with LLMs and all the hype.

People with technology experience can understand the limitations of the tech, and will be more skeptical of the output from them.

But your average person?

If they go to Google and ask if vaccines cause autism, and Google's AI search slop trough contains an answer they like, accurate or not, there will be exactly no second-guessing. I mean, this is supposed to be a PhD-level person, and it was right about the other softball questions they asked, like what color the sky is. Surely it's right about that too, right?

[-] jsomae@lemmy.ml 2 points 14 hours ago

Yeah. The average person just doesn't have a good intuition about AI, at least not yet. Maybe in a few years people will be burned by it and they'll start to grok its limits, but idk. I still blame the AI companies here.

[-] faythofdragons@slrpnk.net 4 points 13 hours ago

start to grok its limits

Teehee

[-] jsomae@lemmy.ml 1 points 13 hours ago

that was unintentional

[-] pulsewidth@lemmy.world 11 points 13 hours ago

If the knowledge cutoff for GPT5 is 2024 it should absolutely not be commenting on current day events and claiming accuracy.

This is not the defence you think it is. It still shows ChatGPT in an accurate and very negative light.

[-] jsomae@lemmy.ml 5 points 13 hours ago

It seems as though you read the first sentence I wrote and not any of the sentences afterward.

[-] pulsewidth@lemmy.world 10 points 11 hours ago

Indeed I did. Especially the parts where you made excuses for it, saying:

it's software, so if you're going to use it then please use it like software and not like it's magic.

Nobody claimed it was magic. They gave it a very reasonable prompt that a grade 1 child could answer, and it failed. And this:

In general I'm not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is.

Again, you're claiming the prompt is misuse, "tHEyRe uSInG iT wRonG". Going on to say it's really the AI companies' fault for pushing it to everyone instead of just domain experts is again missing the point. The AI should never respond with a confident answer to a prompt it has no idea about. That has nothing to do with the user or the targeted audience; that's just shit programming.

[-] jsomae@lemmy.ml 1 points 10 hours ago* (last edited 10 hours ago)

The AI should never respond with a confident answer to a prompt it has no idea about.

Agreed. But the technology isn't there yet. It's not shit programming, because the theory of how to solve this problem doesn't even exist yet. I mean, there are some attempts, but nobody has a good solution. It's like complaining that cars can't go 500 miles per hour when the technology limits them to 200 mph or so, and blaming that on bad car design when it's actually the user's expectation that's the problem. The user has been misled by the way AI companies present things, so ultimately it's the AI companies' fault for overmarketing their product.

(Fuck cars btw).

They gave it a very reasonable prompt that a grade 1 child could answer, and it failed.

LLMs don't work like grade 1 children. The real problem is that AIs are being marketed in such a way that people expect them to be at least as good as a grade 1 child at anything. But AIs are not humans. They are able to do some things better than any human, yet on other tasks they can be outperformed by a kindergartner. This is just how the technology is.

Blame expectations, blame marketing, fuck AI in general, but you've been totally misled if you're expecting it to be able to, say, count the number of letters in a word or break a kanji into components, when all it sees are tokens: not letters, not characters.
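To make the tokens-vs-letters point concrete, here's a toy sketch (not a real tokenizer, and the vocabulary is made up) of how a BPE-style tokenizer turns a word into opaque IDs before the model ever sees it:

```python
# Toy illustration: an LLM sees token IDs, not letters. The vocabulary
# below is hypothetical; real tokenizers learn tens of thousands of merges.
vocab = {"straw": 101, "berry": 102}

def toy_tokenize(word):
    """Greedy longest-match tokenization with a byte-level fallback."""
    tokens = []
    i = 0
    while i < len(word):
        for piece, tid in sorted(vocab.items(), key=lambda kv: -len(kv[0])):
            if word.startswith(piece, i):
                tokens.append(tid)
                i += len(piece)
                break
        else:
            tokens.append(ord(word[i]))  # unknown text falls back to char codes
            i += 1
    return tokens

print(toy_tokenize("strawberry"))  # [101, 102]
```

"strawberry" becomes just two IDs, so a question like "how many r's are in this word?" asks the model about letters it literally never received as input.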

this post was submitted on 13 Aug 2025
373 points (100.0% liked)
