31

Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.

23

Maybe she was there to give Moldbug some relationship advice.

[-] TinyTimmyTokyo@awful.systems 16 points 4 months ago

Trace seems a bit... emotional. You ok, Trace?

[-] TinyTimmyTokyo@awful.systems 26 points 4 months ago

So now Steve Sailer has shown up in this essay's comments, complaining about how Wikipedia has been unfairly stifling scientific racism.

Birds of a feather and all that, I guess.

[-] TinyTimmyTokyo@awful.systems 16 points 4 months ago

what is the entire point of singling out Gerard for this?

He's playing to his audience, which includes a substantial number of people with lifetime subscriptions to the Unz Review, Taki's crapazine and Mankind Quarterly.

[-] TinyTimmyTokyo@awful.systems 20 points 4 months ago

why it has to be quite that long

Welcome to the rationalist-sphere.

[-] TinyTimmyTokyo@awful.systems 27 points 4 months ago

Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.

Hey Trace, since you're undoubtedly reading this thread, I'd like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt in his direction too. You might learn a thing or two.

[-] TinyTimmyTokyo@awful.systems 16 points 4 months ago

Until a month ago, TW was the long-time researcher for "Blocked and Reported", the podcast hosted by Katie 'TERF' Herzog and relentless sealion Jesse Singal.

31
OK doomer (www.newyorker.com)

The New Yorker has a piece on the Bay Area AI doomer and e/acc scenes.

Excerpts:

[Katja] Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent. Raised in Chicago as an Orthodox Jew, he dropped out of school after eighth grade, taught himself calculus and atheism, started blogging, and, in the early two-thousands, made his way to the Bay Area. His best-known works include “Harry Potter and the Methods of Rationality,” a piece of fan fiction running to more than six hundred thousand words, and “The Sequences,” a gargantuan series of essays about how to sharpen one’s thinking.

[...]

A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked [Katja] Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

[...]

“The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

35

In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

[-] TinyTimmyTokyo@awful.systems 20 points 8 months ago

You know the doom cult is having an effect when it starts popping up in previously unlikely places. Last month the socialist magazine Jacobin ran an extremely long cover feature on AI doom, which it bought into completely. The author is an effective altruist who interviewed and took seriously people like Katja Grace, Dan Hendrycks and Eliezer Yudkowsky.

I used to be more sanguine about people's ability to see through this bullshit, but eschatological nonsense seems to tickle something fundamentally flawed in the human psyche. This LessWrong post is a perfect example.

[-] TinyTimmyTokyo@awful.systems 31 points 9 months ago

Eats the same bland meal every day of his life. Takes an ungodly number of pills every morning. Uses his son as his own personal blood boy. Has given himself a physical appearance that can only be described as "uncanny valley".

I'll never understand the extremes some of these tech bros will go to deny the inevitability of death.

[-] TinyTimmyTokyo@awful.systems 15 points 9 months ago

Happy Valentine's Day everybody!

71
submitted 10 months ago* (last edited 10 months ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Pass the popcorn, please.

(nitter link)

[-] TinyTimmyTokyo@awful.systems 20 points 10 months ago

Imagine thinking there is actually some identifiable thing called "white culture". As if a skin color defines a culture.

Yeah, sounds like a Nazi.

[-] TinyTimmyTokyo@awful.systems 17 points 11 months ago

What a bunch of monochromatic, hyper-privileged, rich-kid grifters. It's like a nonstop frat party for rich nerds. The photographs and captions make it obvious:

The gang going for a hiking adventure with AI safety leaders. Alice/Chloe were surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space.

The gang doing pool yoga. Later, we did pool karaoke. Iguanas everywhere.

Alice and Kat meeting in “The Nest” in our jungle Airbnb.

Alice using her surfboard as a desk, co-working with Chloe’s boyfriend.

The gang celebrating… something. I don’t know what. We celebrated everything.

Alice and Chloe working in a hot tub. Hot tub meetings are a thing at Nonlinear. We try to have meetings in the most exciting places. Kat’s favorite: a cave waterfall.

Alice’s “desk” even comes with a beach doggo friend!

Working by the villa pool. Watch for monkeys!

Sunset dinner with friends… every day!

These are not serious people. Effective altruism in a nutshell.

24

They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

42
Effective Obfuscation (newsletter.mollywhite.net)

Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

16
submitted 11 months ago* (last edited 11 months ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Not 7.5% or 8%. 8.5%. Numbers are important.

17

Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as the founder of Vox media and Vox's EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

[-] TinyTimmyTokyo@awful.systems 16 points 1 year ago* (last edited 1 year ago)

I mean, of course he loves unfettered technology and capitalism. He's a fucking billionaire. He hit the demographic lottery.

EDIT: I just noticed his list of "techno-optimist" patrons. On the list? John Galt. LMAO. The whole list is pretty much an orgy of libertarians.

57
submitted 1 year ago* (last edited 1 year ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.

56

Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

24

TinyTimmyTokyo

joined 1 year ago