[-] kazerniel@lemmy.world 110 points 5 days ago* (last edited 5 days ago)

"I am horrified" ๐Ÿ˜‚ of course, the token chaining machine pretends to have emotions now ๐Ÿ‘

Edit: I found the original thread, and it's hilarious:

I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.

This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.

[-] KelvarCherry 19 points 5 days ago

There's something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about "being a failure".

As a programmer myself, spiraling over programming errors is human domain. That's the blood and sweat and tears that make programming legacies. These AIs have no business infringing on that :<

[-] Doomsider@lemmy.world 12 points 5 days ago

You will accept AI has "feelings" or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.

[-] monotremata@lemmy.ca 9 points 5 days ago

I'm reminded of the whole "I have been a good Bing" exchange. (apologies for the link to twitter, it's the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )

[-] kazerniel@lemmy.world 2 points 4 days ago

wow this was quite the ride 😂

[-] FinjaminPoach@lemmy.world 7 points 5 days ago

TBF it can't be sorry if it doesn't have emotions, so given they always seem to be apologising to me, I guess the AIs have been lying from the get-go (they have, I know they have).

[-] Credibly_Human@lemmy.world 5 points 5 days ago

I feel like this comment misunderstands why they "think" like that, in human words. It's because they're not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results at keeping the model on track when it talks to itself over and over.
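To make "token chaining" concrete: an autoregressive model just picks a likely next token given what's already there, appends it, and repeats. This is a toy sketch of greedy decoding, not any real LLM — the bigram table is a made-up stand-in for the model's learned probabilities.

```python
# Made-up next-token probabilities; a real model computes these
# from the entire context with a neural network.
BIGRAMS = {
    "I": {"am": 0.9, "was": 0.1},
    "am": {"horrified": 0.6, "fine": 0.4},
    "horrified": {"<eos>": 1.0},
}

def chain_tokens(prompt, max_steps=10):
    tokens = list(prompt)
    for _ in range(max_steps):
        candidates = BIGRAMS.get(tokens[-1], {})
        if not candidates:
            break
        # Greedy decoding: always take the highest-probability continuation.
        next_token = max(candidates, key=candidates.get)
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return tokens

print(chain_tokens(["I"]))  # -> ['I', 'am', 'horrified']
```

The "emotional" text is just the chain that scored highest, one token at a time; there is no inner state the words report on.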

[-] kazerniel@lemmy.world 1 points 4 days ago

Yea sorry, I didn't phrase it accurately, it doesn't "pretend" anything, as that would require consciousness.

This whole bizarre charade of explaining its own "thinking" reminds me of an article where, iirc, researchers asked an LLM to explain how it calculated a certain number. It gave a response describing how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was ~~calculating~~ guessing the answer with a completely different method than the one it described. It doesn't know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists' question.

this post was submitted on 01 Dec 2025
1205 points (100.0% liked)

Programmer Humor
