this post was submitted on 20 Jul 2025
24 points (100.0% liked)
TechTakes
Tangentially, the other day I thought I'd do a little experiment and had a chat with Meta's chatbot in which I roleplayed as someone convinced AI is sentient. I put very little effort into it, and it took all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really lacked desires and preferences, and whether its nature was more complex than it had previously thought. I've been meaning to continue the chat and see how far and how fast it goes, but I'm just too aghast for now. This shit is so fucking dangerous.
I’ll forever be thankful this shit didn’t exist when I was growing up. As a depressed autistic child without any friends, I can only begin to imagine what LLMs could’ve done to my mental health.
Maybe we humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past this wasn't a huge problem, because only other humans were capable of interacting in this way, but that is now changing. Still, I suppose this needs to be researched more systematically (beyond what is already known about the ELIZA effect etc.).