Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context for people just joining, or at least those relatively new to tracking all this... stuff).
Some snarky comments follow, not because it wasn't a good summary or because it should have included these points (all the asides you could add would easily double the length and leave a casual listener/reader more confused), but because I think they're funny ~~and I need to vent~~
Or decision theorist! With a grand total of one decision theory paper, which he didn't bother getting through peer review because the reviewers wanted, like, actual context and an actual decision theory, not just hand-waves at paradoxes on the fringes of decision theory.
He also writes fanfiction!
Yeah this rabbit hole is deep.
Yeah in hindsight the large number of ex-Christians it attracts makes sense.
He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus instead, so if he'd had proper skepticism of Silicon Valley and startup/VC culture, he really should have seen this coming.
I was mildly, pleasantly surprised to see a solid half of the comments pushing back in response to the first Manifest, but it looks like the anti-racism faction didn't get enough traction to change anything, and the second Manifest conference was just as bad or worse.
decision theory is when there's a box with money but if you take the box it doesn't have money
If your decision theory can't address ~~weird~~ totally-plausible-in-the-near-future hypotheticals where omniscient God-AIs offer you money in boxes if you jump through enough cognitive hoops, what is it really good for?
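For the uninitiated, that's Newcomb's problem: a predictor fills an opaque box with $1,000,000 only if it predicts you'll take just that box, while a transparent box always holds $1,000. A back-of-the-envelope sketch of the payoff math (the 99% predictor accuracy is a made-up illustrative number, not from any canonical statement of the problem):

```python
# Newcomb's problem, the referent of the joke above.
# The 99% accuracy figure is purely illustrative.

ACCURACY = 0.99   # assumed predictor accuracy (made up)
BIG = 1_000_000   # opaque box payout, if filled
SMALL = 1_000     # transparent box payout, always there

# One-boxing: the opaque box is full iff the predictor guessed right.
ev_one_box = ACCURACY * BIG

# Two-boxing: the opaque box is empty iff the predictor guessed right,
# but you always pocket the transparent box.
ev_two_box = (1 - ACCURACY) * BIG + SMALL

print(f"one-box expected value: ${ev_one_box:,.0f}")  # $990,000
print(f"two-box expected value: ${ev_two_box:,.0f}")  # $11,000
```

The whole decades-long fight is just over whether you're allowed to compute the expectation conditional on the prediction like this, or whether you should treat the box contents as already fixed, in which case taking both boxes always nets you $1,000 more.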