Don't get your tech reporting from The Guardian. This headline is so stupid. They can't help but anthropomorphize LLMs, because they just don't know any better.
Same vibes as “my calculator has a tiny mathematician trapped inside.”
Or “there’s an artist inside of my printer who turns numbers into pictures.”
"you took a photo of me and trapped my soul in the image!"
Though your calculator can be trusted to actually do its job accurately.
Not even that. Calculators have their own limitations around rounding errors and big numbers. Their results may be deterministic, but they are not always accurate.
https://youtu.be/_XJbwN6EZ4I?t=1074 (skip to 17:54 if the time jump doesn't work)
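To see the same class of limitation, here's a minimal Python sketch using IEEE 754 double-precision floats. (A real calculator's internal arithmetic is often binary-coded decimal rather than binary floats, so its exact failure cases differ, but the point stands: deterministic, finite-precision results aren't always exact.)

```python
# Deterministic but not exact: standard IEEE 754 double-precision floats.

print(0.1 + 0.2)         # 0.30000000000000004 -- neither operand is
                         # exactly representable in binary
print(0.1 + 0.2 == 0.3)  # False

# Big numbers: doubles carry ~15-16 significant decimal digits, so adding
# a small value to a huge one can round away entirely.
big = 1e16
print(big + 1 == big)    # True -- the +1 vanishes in rounding
```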
If only that were the case...
Well shit, that’s a good point.
This right here. Just about everything in here is awful, and implies decision-making and thought processes that straight up do not exist, and never have, in any AI model whatsoever.
What happened was they threw an awfully scoped statistics model at problems it couldn't possibly generate good outputs for, and surprise surprise, it generated bad outputs. The only interesting part is just how bad the output was, and even then only in a schadenfreude-filled "it was bound to happen eventually" way.
It didn't confess; it just output more plausible-sounding garbage based on its inputs.
It just agreed with the accusations, because these models do what they're trained to do: agree with the prompter.
No, not necessarily; they can easily, even condescendingly, go against your view. It really depends on the topic and the conversational flow.
Can I just anthropomorphise a little bit and call them psychotic?
The CEO? Yeah sure, go ahead!
That needs no... *thinks of the Zuck*
Well, hmm, you're right: maybe that does need anthropomorphization after all.