
Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] scruiser@awful.systems 12 points 1 day ago

Rationalist Infighting!

tl;dr: one of the MIRI-aligned rationalists (Rob Bensinger) complained that EA actually increased AI risk in the long run by promoting OpenAI and then Anthropic. Scott Alexander responded aggressively, basically saying they are entirely wrong and also bad at public communications! Various lesswrongers weigh in, seemingly blind to the irony and hypocrisy!

Some highlights from the quotes of the original tweets and the lesswronger comments on them:

  • Scott Alexander tries blaming Eliezer for hyping up AI and thus contributing to OpenAI in the first place. Just a reminder: Scott is one of the AI 2027 authors; he really doesn't have room to complain about rationalists creating crit-hype.

  • Scott Alexander tries claiming SBF was a unique one-off in the rationalist/EA community! (Anthropic's leadership has been called out on the EA forums and lesswrong for a similar pattern of repeated lying.)

  • Rob Bensinger indirectly claims Eliezer/MIRI have been serious, forthright, honest commentators on AI theory and policy, as opposed to Open Phil/EA/Anthropic, which have been "strategic" with their public communication to the point of dishonesty.

  • habryka is apparently on the verge of crashing out? I can't tell if they are planning on just quitting twitter or quitting their attempts at leadership within the rationalist community. Quitting twitter is probably a good call no matter what.

  • Loads of tediously long posts, mired in that long-winded rationalist way of talking, full of in-group jargon for conversations and conflict resolution.

  • Disagreement on whether Ilya Sutskever's $50 billion startup is going to contribute to AI safety or just continue the race to AGI.

  • Arguments over who is aligned with the EAs vs. Open Philanthropy vs. MIRI!

  • Argument over the definition of gaslighting!

To be clear, I agree with the complaints about EA and Anthropic; I just also think MIRI has its own similar set of problems. So they are both right: all of the rationalists are terrible at pursuing their nominal goal of stopping AI Doom.

I did sympathize with one lesswronger's comment:

More than any other group I've been a part of, rationalists love to develop extremely long and complicated social grievances with each other, taking pages and pages of text to articulate. Maybe I'm just too stupid to understand the high level strategic nuances of what's going on -- what are these people even arguing about? The exact flavor of comms presented over the last ten years?

[-] CinnasVerses@awful.systems 10 points 1 day ago

Old Twitter was terrible for people's souls. I can only imagine what it is like now that the well-meaning professionals are gone and catturd and Wall Street Apes are the leading accounts.

[-] scruiser@awful.systems 6 points 1 day ago

Old Twitter was terrible for people’s souls.

It almost makes me feel sorry for the way the rationalists are still so attached to it. But they literally have two different forums (lesswrong and the EA forum), so staying on twitter is entirely their choice; they have alternatives.

Fun fact! Over the past few years, Eliezer has deliberately cut his lesswrong posting in favor of posting on twitter, apparently (he's made a few comments about this choice) because lesswrong doesn't uncritically accept his ideas and nitpicks them more than twitter does. (How bad do you have to be to not even listen to critique on a website that basically loves you and takes your controversial foundational premises seriously?)

[-] istewart@awful.systems 6 points 1 day ago

I'm willing to go out on a limb and say that short-form social media in general (Twitter and imitators, Instagram, TikTok) is essentially a failed set of media. But I'll concede that's like cramming a Zyn pouch in my mouth while making fun of a guy chain-smoking Marlboros.

[-] scruiser@awful.systems 7 points 1 day ago

I've read speculation that in 30-50 years people will have an attitude towards social media that we have towards cigarettes now.

That would be really nice, but that scenario feels pretty optimistic to me on a few points. For one, scientists doing research were able to overcome the lobbying influence and paid think tanks of cigarette companies; I am worried science as a public institution isn't in good enough shape to do that nowadays. Likewise, part of the pushback against cigarettes included mandatory labeling and sin taxes, and it would take some pretty major shifts for the political will for that kind of action to be viable. Maybe these things are viable in the EU; the US is pretty screwed.

[-] blakestacey@awful.systems 9 points 1 day ago

The only people I trust as little as I trust the owners of corporate social media are the politicians who have decided to cash in on the moment by "regulating" them. I mean, here in progressive Massachusetts, the state house of representatives just this week passed a bill that, depending on the whims of the Attorney General, would require awful.systems to verify the ages of its users by gathering their government-issued IDs or biometrics. We are, you see, a "public website, online service, online application or mobile application that displays content primarily generated by users and allows users to create, share and view user-generated content with other users". And so we would have to "implement an age assurance or verification system to determine whether a current or prospective user on the social media platform" is 16 or older. (Or 14 or 15 with parental consent, but your humble mods lack the resources to parse divorce laws in all localities worldwide, sort out issues of disputed guardianship, etc., etc.) The meaning of what "practicable" age verification is supposed to be would depend upon regulations that the Attorney General has yet to write.

So, yeah, as an old-school listserv nerd who had the I am not on Facebook T-shirt 15 years ago, I don't trust any of these people.

[-] istewart@awful.systems 6 points 1 day ago

I'm not quite so pessimistic. It's important to remember that the actual practical purpose of the extant corporate social media* is to convey targeted advertising; i.e. an optimization (possibly the last optimization) on American management of global supply chains. Those supply chains were already being optimized past their breaking point: flooded with dissatisfactory junk, easily spoofed by low-quality sellers, on top of broader externalities. And now they have been blasted into fine dust by a failed presidency partially funded by the social media and online advertising barons. It may yet be something of a self-correcting problem, albeit one that does substantial damage in the meantime.

*Twitter is now a fully dedicated advertising campaign for Elon Musk's program of white supremacy, with financial returns no object. It's not quite going according to plan. By this time next decade, the Twitter microblogging permutation of the tech may be thoroughly killed, and if not it'll be disgustingly cringe. Who do you think you are posting like that, Baby Trump?!?!

[-] scruiser@awful.systems 4 points 1 day ago

The collapse of the current American management of global supply chains isn't exactly an optimistic expectation, but I guess it beats social media continuing as it is, and maybe a better global order will develop in the aftermath.

[-] nfultz@awful.systems 4 points 1 day ago

Haven't seen any estimates of the death toll due to social media, but the toll from cigarettes is/was pretty staggering (20-40m), way too big to hide - https://www.ucpress.edu/books/golden-holocaust/hardcover - so if it takes "only" 50 years to flip the consensus on social media, that would be a faster process. I do hope it's possible, though. Tobacco execs had the good sense to keep a relatively low profile compared to Zuck and Musk, so that might speed it up.

[-] CinnasVerses@awful.systems 8 points 1 day ago* (last edited 1 hour ago)

Bonus race pseudoscience quoted by No77e!

There is a phenomenon in which rationalists sometimes make predictions about the future, and they seem to completely forget their other belief that we're heading toward a singularity (good or bad) relatively soon. It's ubiquitous, and it kind of drives me insane. Consider these two tweets:

Richard Ngo @RichardMCNgo: Hypothesis: We'll look back on mass migration as being worse for Europe than WW 2 was. ... high-trust and homogeneous ... internal ethno-religious fractures.

Liv Boeree @Liv_Boeree: Would not be surprised if it turns out that everyone outsourcing their writing to LLMs will have a similar or worse effect on IQ as lead piping in the long run

(He shares these tweets as photos; I ain't transcribing them by hand or using a chatbot.)

[-] scruiser@awful.systems 7 points 1 day ago

No77e correctly notes the discrepancy between the rationalists' obsession with eugenics and their belief in an imminent technological singularity (or at least one within the next 40 years), but fails to realize that the general problem is the eugenics obsession itself. It is kind of frustrating how close and yet how far they are from seeing the problem.

Also, a reminder of the time Eliezer claimed Genesmith's insane genetic engineering plan was one of the most important projects in the world (after AI, obviously): https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=fxnhSv3n4aRjPQDwQ Apparently Eliezer's plan, if we aren't all doomed by LLMs, is to let the genetically engineered geniuses invent friendly AI instead.

this post was submitted on 05 Apr 2026
24 points (100.0% liked)

TechTakes
