251
49

Indeed, you have nothing to fear.

252
18

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange site discussion: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today (it's after midnight, oh gosh), but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

253
37

Hounding the president of Harvard out of a job because you think she's a DEI hire is one thing, but going after a billionaire's wife? How dare these journalists! What big bullies.

Bonus downplaying of EA's faults. He of course phrases the Bostrom affair as someone being "accused" of sending a racist email, as if there were any question as to who sent it, or whether it was racist. And he acts like it's not just the cherry on top of a lifetime of Bostrom's work.

254
41
255
23
submitted 2 years ago* (last edited 2 years ago) by skillissuer@discuss.tchncs.de to c/sneerclub@awful.systems

cross-posted from: https://lemmy.world/post/11178564

Scientists Train AI to Be Evil, Find They Can't Reverse It

How hard would it be to train an AI model to be secretly evil? As it turns out, according to Anthropic researchers, not very.

256
38
submitted 2 years ago* (last edited 2 years ago) by Architeuthis@awful.systems to c/sneerclub@awful.systems

edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.

257
48
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems

258
23
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems

"glowfic", apparently. Written in a roleplay forum format.

This is not a story for kids, even less so than HPMOR. There is romance, there is sex, there are deliberately bad kink practices whose explicit purpose is to get people to actually hurt somebody else so that they'll end up damned to Hell, and also there's math.

start here. or don't, of course.

259
58
submitted 2 years ago* (last edited 2 years ago) by hirudiniformes@awful.systems to c/sneerclub@awful.systems

I did fake Bayesian math with some plausible numbers, and found that if I started out believing there was a 20% per decade chance of a lab leak pandemic, then if COVID was proven to be a lab leak, I should update to 27.5%, and if COVID was proven not to be a lab leak, I should stay around 19-20%

This is so confusing: why bother doing "fake" math? How does he justify these numbers? Let's look at the footnote:

Assume that before COVID, you were considering two theories:

  1. Lab Leaks Common: There is a 33% chance of a lab-leak-caused pandemic per decade.
  2. Lab Leaks Rare: There is a 10% chance of a lab-leak-caused pandemic per decade.

And suppose before COVID you were 50-50 about which of these were true. If your first decade of observations includes a lab-leak-caused pandemic, you should update your probability over theories to 76-24, which changes your overall probability of pandemic per decade from 21% to 27.5%.

Oh, he doesn't, he just made the numbers up! "I don't have actual evidence to support my claims, so I'll just make up data and call myself a 'good Bayesian' to look smart." Seriously, how could a reasonable person have been expected to be concerned about lab leaks before COVID? It simply wasn't something in the public consciousness. This looks like some serious hindsight bias to me.

I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before. But in a world without stupid people, no, it wouldn’t matter.

Ah, no need to make the numbers make sense, because stupid people wouldn't understand the argument anyway. Quite literally: "To be fair, you have to have a really high IQ to understand my shitty blog posts. The Bayesian math is extremely subtle..." And convince stupid people of what, exactly? He doesn't say, so what was the point of all the fake probabilities? What a prick.

260
12

The Techno-Optimist Manifesto by Marc Andreessen

261
19
"hey wait, EA sucks!" (www.lesswrong.com)
262
13
263
14
264
20
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems
265
17
266
19

Eliezer Yudkowsky @ESYudkowsky

If you're not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code -- which no human can know or obey -- and threatens to enforce it, via police reports and lawsuits, against anyone who doesn't comply with its orders.

Jan 3, 2024 · 7:29 PM UTC

267
71
submitted 2 years ago* (last edited 2 years ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Pass the popcorn, please.

(nitter link)

268
38
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems
269
94
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems

I'm called a Nazi because I happily am proud of white culture. But every day I think fondly of the brown king Cyrus the Great who invented the first ever empire, and the Japanese icon Murasaki Shikibu who wrote the first novel ever. What if humans just loved each other? History teaches us that we have all been, and always will be - great

read the whole thread, her responses are even worse

270
52
271
14
272
13
submitted 2 years ago* (last edited 2 years ago) by saucerwizard@awful.systems to c/sneerclub@awful.systems

Is uh, anyone else watching? This dude (chaos) was/is friends with Brent Dill.

273
25
submitted 2 years ago* (last edited 2 years ago) by dgerard@awful.systems to c/sneerclub@awful.systems

an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments

274
15

... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:

Andrew Ng wrote:

In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.

Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.

Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

EY replied:

I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.

275
55

I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don't think it got a proper post, and I think it deserves one.


SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit
