[-] TinyTimmyTokyo@awful.systems 18 points 1 week ago

Bay Area rationalist Sam Kirchner, cofounder of the Berkeley "Stop AI" group, claims "nonviolence isn't working anymore" and goes off the grid. Hasn't been heard from in weeks.

Article has some quotes from Emile Torres.

https://archive.is/20251205074622/https://www.theatlantic.com/technology/2025/12/sam-kirchner-missing-stop-ai/685144/

[-] TinyTimmyTokyo@awful.systems 19 points 3 months ago

With his all-consuming fear of death, Thiel is about as far from being a Christian as one can get. All his antichrist talk is a nakedly transparent attempt to gas up the rubes so they remain on the side of the billionaires even after Trump gives up the ghost. He doesn't believe a single thing he's saying.

[-] TinyTimmyTokyo@awful.systems 18 points 3 months ago

Given how often it shows up in his writings, this incel victim narrative is a linchpin of his personality. He even trots it out in the middle of this genocidal screed, in what at first glance seems to be an irrelevant detour. But it's really not irrelevant. His self-inflicted psychic damage is painfully real and manifests itself in all sorts of toxic and sociopathic ways, including the abject dehumanization of an entire population.

[-] TinyTimmyTokyo@awful.systems 18 points 3 months ago

You don't hear too many leftists saying things like this:

I think that the Democratic Party has two factions; they disagree on a ton of important stuff. I think that the neoliberals are right on nearly all of those disagreements, and the progressives are wrong on nearly all of the disagreements.

[-] TinyTimmyTokyo@awful.systems 17 points 4 months ago

Ozy Brennan tries to explain why "rationalism" spawns so many cults.

One of the reasons they give is "a dangerous sense of grandiosity".

the actual process of saving the world is not very glamorous. It involves filling out paperwork, making small tweaks to code, running A/B tests on Twitter posts.

Yep, you heard it right. Shitposting and inconsequential code are the proper way to save the world.

[-] TinyTimmyTokyo@awful.systems 18 points 4 months ago

It's a really good article. This part stuck out to me:

If you are seriously, legitimately concerned that an emergent technology is about to exterminate humanity within the next three years, wouldn’t you find yourself compelled to do more than argue with the converted about the particular elements of your end times scenario? Some folks were involved in pushing for SB 1047, but that stalled out; now what? Aren’t you starting an all-out effort to pressure those companies to shut down their operations ASAP? That all these folks are under the same roof for three days, and no one’s being confronted, or being made uncomfortable, or being protested—not even a little bit—is some of the best evidence I’ve seen that all the handwringing over AI Safety and x-risk really is just the sort of amped-up cosplaying its critics accuse it of being.

[-] TinyTimmyTokyo@awful.systems 18 points 4 months ago

It's happening.

Today Anthropic announced new weekly usage limits for their existing Pro plan subscribers. The chatbot makers are getting worried about the VC-supplied free lunch finally running out. Ed Zitron called this.

Naturally the orange site vibe coders are whinging.

[-] TinyTimmyTokyo@awful.systems 18 points 4 months ago

The rest of that guy's blog is a fucking neofascist mess. That'll teach me to post a link without first checking out the writer.

[-] TinyTimmyTokyo@awful.systems 19 points 5 months ago

What makes this worse than the financial crisis of 2008 is that you can't live in a GPU once the crash happens.

[-] TinyTimmyTokyo@awful.systems 18 points 5 months ago* (last edited 5 months ago)

People are often overly confident about their imperviousness to mental illness. In fact I think that, given the right cues, we're all more vulnerable to mental illness than we'd like to think.

Baldur Bjarnason wrote about this recently. He talked about how chatbots are incentivizing and encouraging a sort of "self-experimentation" that exposes us to psychological risks we aren't even aware of. Risks that no amount of willpower or intelligence will help you avoid. In fact, the more intelligent you are, the more likely you may be to fall into the traps laid in front of you, because your intelligence helps you rationalize your experiences.

[-] TinyTimmyTokyo@awful.systems 17 points 1 year ago

So it turns out the healthcare assassin has some... boutique... views. (Yeah, I know, shocker.) Things he seems to be into:

  • Lab-grown meat
  • The rottenness of modern architecture
  • Population decline as an existential threat
  • Elon Musk and Peter Thiel

How soon until someone finds his LessWrong profile?


In her sentencing submission to the judge in the FTX trial, Barbara Fried argues that her son is just a misunderstood altruist, who doesn't deserve to go to prison for very long.

Excerpt:

One day, when he was about twelve, he popped out of his room to ask me a question about an argument made by Derek Parfit, a well-known moral philosopher. As it happens, I am quite familiar with the academic literature Parfit's article is a part of, having written extensively on related questions myself. His question revealed a depth of understanding and critical thinking that is not all that common even among people who think about these issues for a living. "What on earth are you reading?" I asked. The answer, it turned out, was he was working his way through the vast literature on utilitarianism, a strain of moral philosophy that argues that each of us has a strong ethical obligation to live so as to alleviate the suffering of those less fortunate than ourselves. The premises of utilitarianism obviously resonated strongly with what Sam had already come to believe on his own, but gave him a more systematic way to think about the problem and connected him to an online community of like-minded people deeply engaged in the same intellectual and moral journey.

Yeah, that "online community" we all know and love.

submitted 2 years ago* (last edited 2 years ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Pass the popcorn, please.

(nitter link)

[-] TinyTimmyTokyo@awful.systems 17 points 2 years ago

What a bunch of monochromatic, hyper-privileged, rich-kid grifters. It's like a nonstop frat party for rich nerds. The photographs and captions make it obvious:

The gang going for a hiking adventure with AI safety leaders. Alice/Chloe were surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space.

The gang doing pool yoga. Later, we did pool karaoke. Iguanas everywhere.

Alice and Kat meeting in “The Nest” in our jungle Airbnb.

Alice using her surfboard as a desk, co-working with Chloe’s boyfriend.

The gang celebrating… something. I don’t know what. We celebrated everything.

Alice and Chloe working in a hot tub. Hot tub meetings are a thing at Nonlinear. We try to have meetings in the most exciting places. Kat’s favorite: a cave waterfall.

Alice’s “desk” even comes with a beach doggo friend!

Working by the villa pool. Watch for monkeys!

Sunset dinner with friends… every day!

These are not serious people. Effective altruism in a nutshell.


They've been pumping this bio-hacking startup on the Orange Site (TM) for the past few months. Now they've got Siskind shilling for them.

42
Effective Obfuscation (newsletter.mollywhite.net)
Effective Obfuscation (newsletter.mollywhite.net)

Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she's shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that's an excellent development. Molly's great.

submitted 2 years ago* (last edited 2 years ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Not 7.5% or 8%. 8.5%. Numbers are important.


Non-paywalled link: https://archive.ph/9Hihf

In his latest NYT column, Ezra Klein identifies the neoreactionary philosophy at the core of Marc Andreessen's recent excrescence on so-called "techno-optimism". It wasn't exactly a difficult analysis, given the way Andreessen outright lists a gaggle of neoreactionaries as the inspiration for his screed.

But when Andreessen included "existential risk" and transhumanism on his list of enemy ideas, I'm sure the rationalists and EAs were feeling at least a little bit offended. Klein, as a co-founder of Vox and its EA-promoting "Future Perfect" vertical, was probably among those who felt targeted. He has certainly bought into the rationalist AI doomer bullshit, so you know where he stands.

So have at it, Marc and Ezra. Fight. And maybe take each other out.

submitted 2 years ago* (last edited 2 years ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.


Representative take:

If you ask Stable Diffusion for a picture of a cat it always seems to produce images of healthy looking domestic cats. For the prompt "cat" to be unbiased Stable Diffusion would need to occasionally generate images of dead white tigers since this would also fit under the label of "cat".

submitted 2 years ago* (last edited 2 years ago) by TinyTimmyTokyo@awful.systems to c/sneerclub@awful.systems

[All non-sneerclub links below are archive.today links]

Diego Caleiro, who popped up on my radar after he commiserated with Roko's latest in a never-ending stream of denials that he's a sex pest, is worthy of a few sneers.

For example, he thinks Yud is the bestest, most awesomest, coolest person to ever breathe:

Yudkwosky is a genius and one of the best people in history. Not only he tried to save us by writing things unimaginably ahead of their time like LOGI. But he kind of invented Lesswrong. Wrote the sequences to train all of us mere mortals with 140-160IQs to think better. Then, not satisfied, he wrote Harry Potter and the Methods of Rationality to get the new generation to come play. And he founded the Singularity Institute, which became Miri. It is no overstatement that if we had pulled this off Eliezer could have been THE most important person in the history of the universe.

As you can see, he's really into superlatives. And Jordan Peterson:

Jordan is an intellectual titan who explores personality development and mythology using an evolutionary and neuroscientific lenses. He sifted through all the mythical and religious narratives, as well as the continental psychoanalysis and developmental psychology so you and I don’t have to.

At Burning Man, he dons a 7-year-old alter ego named "Evergreen". Perhaps he has an infantilization fetish like Elon Musk:

Evergreen exists ephemerally during Burning Man. He is 7 days old and still in a very exploratory stage of life.

As he hinted in his tweet to Roko, he has an enlightened view about women and gender:

Men were once useful to protect women and children from strangers, and to bring home the bacon. Now the supermarket brings the bacon, and women can make enough money to raise kids, which again, they like more in the early years. So men have become useless.

And:

That leaves us with, you guessed, a metric ton of men who are no longer in families.

Yep, I guessed about 12 men.


Excerpt:

Richard Hanania, a visiting scholar at the University of Texas, used the pen name “Richard Hoste” in the early 2010s to write articles where he identified himself as a “race realist.” He expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.

He's also a big eugenics supporter:

“There doesn’t seem to be a way to deal with low IQ breeding that doesn’t include coercion,” he wrote in a 2010 article for AlternativeRight.com. “Perhaps charities could be formed which paid those in the 70-85 range to be sterilized, but what to do with those below 70 who legally can’t even give consent and have a higher birthrate than the general population? In the same way we lock up criminals and the mentally ill in the interests of society at large, one could argue that we could on the exact same principle sterilize those who are bound to harm future generations through giving birth.”

(Reminds me a lot of the things Scott Siskind has written in the past.)

Some people who have been friendly with Hanania:

  • Marc Andreessen, Silicon Valley VC and co-founder of Andreessen Horowitz
  • Hamish McKenzie, CEO of Substack
  • Elon Musk, Chief Enshittification Officer of Tesla and Twitter
  • Tyler Cowen, libertarian econ blogger and George Mason University prof
  • J.D. Vance, US Senator from Ohio
  • Steve Sailer, race (pseudo)science promoter and all-around bigot
  • Amy Wax, racist law professor at UPenn
  • Christopher Rufo, right-wing agitator and architect of many of Florida governor Ron DeSantis's culture war efforts

TinyTimmyTokyo

joined 2 years ago