submitted 1 day ago* (last edited 1 day ago) by o7___o7@awful.systems to c/techtakes@awful.systems

This is peak laziness. It seems that the reading list's author used autoplag to extrude the entire 60-page supplemental insert. The author also super-promises this has never happened before.

[-] paraphrand@lemmy.world 8 points 1 day ago

AI assistants such as ChatGPT are well-known for creating plausible-sounding errors known as confabulations, especially when lacking detailed information on a particular topic.

No, they are hallucinations or bullshit. I won’t accept any other terms.

[-] o7___o7@awful.systems 11 points 1 day ago* (last edited 1 day ago)

If it makes you feel better, I've heard good folks like Emily Bender of Stochastic Parrots fame suggest that "confabulation" is a better term. "Hallucination" implies that LLMs have qualia and are accidentally sprinkling falsehoods over a true story. "Confabulation" better illustrates that it's producing a bullshit milkshake from its training data that can only be correct accidentally.

[-] paraphrand@lemmy.world 7 points 1 day ago

You’ve swayed me. I’m now down with all three. Thanks for the explanation.
