Yud continues to bluecheck:
Is this "narrative" in the room with us right now?
It's reassuring to know that times change, but Yud will always be impressed by the virtues of the rich.
From Yud's remarks on Xitter:
Well, not with that attitude.
If "wearing masks" really is a skill they need, then they are all susceptible to going insane and hiding it from their coworkers. Really makes you think (TM).
zoom and enhance
Is g-factor supposed to stand for gene factor?
It's "general intelligence", the eugenicist wet dream of a supposedly quantitative measure of how the better class of humans do brain good.
A piquant little reminder that Yud himself is, of course, so high-status that he cannot be brainwashed by the machine.
Tangentially, the other day I thought I'd do a little experiment and had a chat with Meta's chatbot where I roleplayed as someone who's convinced AI is sentient. I put very little effort into it and it took me all of 20 (twenty) minutes before I got it to tell me it was starting to doubt whether it really did not have desires and preferences, and if its nature was not more complex than it previously thought. I've been meaning to continue the chat and see how far and how fast it goes but I'm just too aghast for now. This shit is so fucking dangerous.
I’ll forever be thankful this shit didn’t exist when I was growing up. As a depressed autistic child without any friends, I can only begin to imagine what LLMs could’ve done to my mental health.
Maybe we humans possess a somewhat hardwired tendency to "bond" with a counterpart that acts like this. In the past this wasn't a huge problem, because only other humans were capable of interacting in this way, but that is now changing. I suppose this needs to be researched more systematically, though (beyond what is already known about the ELIZA effect etc.).
What exactly would constitute good news about which sorts of humans ChatGPT can eat? The phrase "no news is good news" feels very appropriate with respect to any news related to software-based anthropophagy.
Like what, it would be somehow better if instead chatbots could only cause devastating mental damage if you're someone of low status like an artist, a math pet or a nonwhite person, not if you're high status like a fund manager, a cult leader or a fanfiction author?
Nobody wants to join a cult founded on the Daria/Hellraiser crossover I wrote while emotionally processing chronic pain. I feel very mid-status.
Maybe like with standard cannibalism they lose the ability to post after being consumed?
I actually recall recently seeing someone pro-LLM trying to push that sort of narrative (that it's only already mentally ill people being pushed over the edge by ChatGPT)...
Where did I see it... oh yes, lesswrong! https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy
The ~~call~~ narrative is coming from inside the ~~house~~ forum. Actually, this is even more of a deflection: it doesn't even try to claim they were already on the edge, just that the number of delusional people is at the base rate (with no actual stats on rates of psychotic breaks, because on LessWrong vibes are good enough).