
Want to wade into the ~~snowy~~ sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] CinnasVerses@awful.systems 7 points 1 day ago

While I tend to think Yudkowsky is sincere, some things, like his prediction market for P(doom), are hard to square with that: https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r (launched June 2023; it will resolve N/A on 1 January 2027 if the world has not ended yet, and it has not moved much since 1 January 2024).

[-] samvines@awful.systems 8 points 1 day ago

Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?

[-] BlueMonday1984@awful.systems 7 points 1 day ago

On the one hand, Yud's vision of AI doomsday is specifically "AI turns sentient/superintelligent and kills us all because reasons", not "Humanity wipes itself out because they trusted lying machines".

On the other hand, the absence of sentience/superintelligence hasn't stopped AI from causing untold damage anyways, as the past two to three years can attest.

[-] lurker@awful.systems 7 points 1 day ago

Technically yes, but Yud probably wouldn’t count that, since the AI didn’t have the express purpose of destroying everyone

[-] Soyweiser@awful.systems 4 points 18 hours ago

So if Bender took over, it wouldn’t count, since he wants to ‘kill all humans (except Fry)’. Seems like a loophole.

Bender really takes the "intelligence" out of "artificial superintelligence". "Yeah, kill all humans. Except Fry, he's my friend or pet or something. And I guess Leela because he'll be whiny about it and also I owe her for the thing. And Hermes because he still owes me money. And I guess the professor is okay..." And so on and so forth through all of humanity.

[-] lurker@awful.systems 7 points 1 day ago

I will never understand why people seriously bet “yes” on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.
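The payoff logic here is simple enough to write down. A minimal sketch, with illustrative stake numbers that are not from the actual Manifold market:

```python
# Payoff of a "yes" bet on an extinction market (illustrative numbers).
def expected_value(p_doom: float, stake: float) -> float:
    # If doom happens (probability p_doom), you "win" the bet, but
    # everyone is dead, so the payout is worth nothing to you.
    value_if_doom = 0.0
    # If doom does not happen, the market resolves against you and
    # you lose your stake.
    value_if_no_doom = -stake
    return p_doom * value_if_doom + (1 - p_doom) * value_if_no_doom

# Non-positive expected value no matter how likely you think doom is:
for p in (0.01, 0.5, 0.99):
    assert expected_value(p, 100) <= 0
```

However high your P(doom), the "win" branch is worth zero, so the bet can never come out ahead.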

[-] scruiser@awful.systems 8 points 19 hours ago

Eliezer is trying to get around that with some weird conditions and game on the prediction market question:

> This market resolves N/A on Jan 1st, 2027. All trades on this market will be rolled back on Jan 1st, 2027. However, up until that point, any profit or loss you make on this market will be reflected in your current wealth; which means that purely profit-interested traders can make temporary profits on this market, and use them to fund other permanent bets that may be profitable; via correctly anticipating future shifts in prices among people who do bet their beliefs on this important question, buying low from them and selling high to them.

I don't think that actually helps. But Eliezer is committed to prediction markets being useful on a nearly ideological level, so he has to come up with weird, complicated strategies to get around their fundamental limits.
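The mechanic those market rules describe can be sketched as a toy model. Class, names, and numbers here are assumptions for illustration, not Manifold's actual matching engine:

```python
# Toy model of the "resolves N/A, all trades rolled back" rule quoted
# above -- a simplification, not Manifold's real implementation.
class RollbackMarket:
    def __init__(self, price: int):
        self.price = price        # current price, in cents of play money
        self.trades = []          # (trader, shares, price_paid)

    def buy(self, trader: str, shares: int, price: int) -> None:
        # A buy records the position and moves the market price.
        self.trades.append((trader, shares, price))
        self.price = price

    def paper_profit(self, trader: str) -> int:
        # Mark-to-market profit at the current price: the "temporary
        # profits" that purely profit-interested traders are supposed
        # to redeploy into other markets before 2027.
        return sum(shares * (self.price - paid)
                   for who, shares, paid in self.trades if who == trader)

    def resolve_na(self) -> None:
        # On Jan 1st, 2027 every trade is rolled back, so positions --
        # and the paper profit that came with them -- evaporate.
        self.trades.clear()

m = RollbackMarket(price=10)
m.buy("alice", shares=100, price=10)   # alice buys low
m.buy("bob", shares=100, price=40)     # bob buys high, lifting the price
assert m.paper_profit("alice") == 3000  # 100 shares * 30 cents, on paper
m.resolve_na()
assert m.paper_profit("alice") == 0     # the profit was only ever temporary
```

The sketch makes the limitation visible: any gains exist only as interim mark-to-market wealth, which is exactly why the market price carries so little information about anyone's actual beliefs.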

[-] lurker@awful.systems 5 points 14 hours ago

If you have to set up that many rules to get around the inherent flaw of “gambling on everyone’s lives”, just run a normal-ass poll. It gets rid of the unnecessary financial incentives.

[-] CinnasVerses@awful.systems 8 points 17 hours ago

It feels like a teenaged argument about Batman vs. Superman, or the USS Enterprise vs. a Star Destroyer. I think many LessWrongers are not serious about the belief system as something to act on, but the problem is that when they are serious you get Ziz Lasota. It’s also similar to how they love markets in theory, but don’t want to start a business or make speculative investments.

[-] istewart@awful.systems 6 points 16 hours ago

> prediction markets being useful on a nearly ideological level

At this point, I would say prediction markets are now an explicit ideological plank of what's left of the libertarian movement. Darkly amusing that they're desperately trying to pump life and legitimacy into something the GW Bush administration thought was too corrupt to use.

this post was submitted on 29 Mar 2026
15 points (100.0% liked)

TechTakes

2528 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago