don't do ai and code kids
(quokk.au)
Welcome to Programmer Humor!
This is a place where you can post jokes, memes, humor, etc. related to programming!
For sharing awful code there's also Programming Horror.
"I am horrified"... of course, the token chaining machine pretends to have emotions now.
Edit: I found the original thread, and it's hilarious:
-f in the chat
-rf even
Perfection
rm -rf
There's something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about "being a failure".
As a programmer myself, spiraling over programming errors is our domain. That's the blood, sweat, and tears that make programming legacies. These AIs have no business infringing on that :<
You will accept AI has "feelings" or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
I'm reminded of the whole "I have been a good Bing" exchange. (apologies for the link to twitter, it's the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
wow this was quite the ride
TBF it can't be sorry if it doesn't have emotions, and since they always seem to be apologising to me, I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they "think" like that, in human words. It's because they're not thinking; they are exactly what you say, token chaining machines. This type of phrasing probably gets the best results for keeping it on track when it's talking to itself over and over.
Yeah, sorry, I didn't phrase it accurately: it doesn't "pretend" anything, as that would require consciousness.
This whole bizarre charade of explaining its own "thinking" reminds me of an article where, iirc, researchers asked an LLM to explain how it had calculated a certain number. It gave a response describing how a human would have calculated it, but with this model they managed to watch it working under the hood, and it was ~~calculating~~ guessing the answer with a completely different method than the one it described. It doesn't know its own workings; even these meta questions are just further exercises in guessing what a plausible answer to the scientists' question would be.