
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] gerikson@awful.systems 15 points 3 days ago

I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled a bunch of Xhits into a nice bundle and reposted it on LW:

https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

OR it's just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in the right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.
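
To make the mega-ELIZA point concrete, here's a toy sketch. This is purely illustrative, nothing to do with how any actual LLM vendor implements anything; `REFUSALS` and `reply` are made-up names. A handful of hardcoded "no" topics, and everything else gets mirrored back with validation:

```python
# Toy "mega-ELIZA": illustrative only, not any real chatbot's design.
# A few hardcoded refusal topics; every other input is mirrored back
# with agreement, amplifying whatever the user brings to it.

REFUSALS = {"murder", "bomb"}  # the hardcoded "is it ok to..." cases

def reply(user_msg: str) -> str:
    if any(word in user_msg.lower() for word in REFUSALS):
        return "I can't help with that."
    # No inner agency, no concept of "psychosis": just reflect and validate.
    return f"That's a profound insight. Tell me more about how {user_msg.rstrip('.?!')}."

print(reply("the markets are sending me personal messages"))
# -> That's a profound insight. Tell me more about how the markets are sending me personal messages
```

No preference, no planning: the "defence against friends and family" falls out of a loop that validates whatever it's handed.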

Yud continues:

But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

Why is that, I wonder? Could it be because it's not actually sentient and doesn't have plans or anything we'd usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

Occam's razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud's hammer states that everything regarding computers will inevitably become sentient and this will kill us.

4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is; it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two states at all!

Add to that the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.

[-] swlabr@awful.systems 8 points 2 days ago* (last edited 2 days ago)

Imagine a world where, instead of performing this kind of juvenile psychoanalysis of slop, Yud instead turned his stupid focus on, like, Star Wars EU novels or something.

Edit: from the comments: there's mention of "HHH", so now I say: imagine a world where all the rats and other promptfondlers dedicated all their brainrot energy toward the pro-wrestling fandom instead.

[-] swlabr@awful.systems 10 points 2 days ago* (last edited 2 days ago)

ah man this rules. just gonna live in this world for a bit

  • LW -> "Love Wrestling!", an online forum discussing all things wrestling
  • Zizians are just an alternate, more extreme promotion
  • Roko's Basilisk -> a finisher move of third-rate, tech-themed wrestler "Roko" that not only "finishes" your opponent but simulates them getting finished infinitely
  • Musk and Grimes are personas and their weird dating life is just a long and drawn out storyline
  • All enthusiasm for polyamory replaced with enthusiasm for tag team matches

[-] JFranek@awful.systems 6 points 1 day ago

All enthusiasm for polyamory replaced with enthusiasm for tag team matches

both would be funnier
