[-] scruiser@awful.systems 15 points 3 months ago

Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, decades after Drexler's vision was thoroughly debunked (even as it slightly inspired some real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

[-] scruiser@awful.systems 15 points 3 months ago* (last edited 3 months ago)

we cant do basic things

That's giving them too much credit! They've generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They've served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They've served as an entry point to the alt-right pipeline!

dath ilan?

As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also, Said would have long since been sent to a ~~reeducation camp~~ quiet city and ~~sterilized~~ denied UBI if he reproduces, for failing to conform to dath ilan's norms.

[-] scruiser@awful.systems 16 points 4 months ago* (last edited 4 months ago)

So... apparently Peter Thiel has taken to co-opting fundamentalist Christian terminology to go after Effective Altruism? At least it seems that way from this EA post (warning: I took psychic damage just skimming the lunacy). As far as I can tell, he's merely co-opting the terminology; Thiel's blather doesn't connect to any variant of Christian eschatology (mainstream, fundamentalist, or even obscure and wacky). But of course the majority of the EAs don't recognize that, or the fact that he is probably targeting them for their (kind of weak, to be honest) attempts at getting AI regulated at all; instead they charitably try to steelman him and figure out whether he has a legitimate point. ...I wish they could put a tenth of this effort into understanding leftist thought.

Some of the comments are... okay, actually, at least by EA standards, but there are still plenty of people willing to defend Thiel.

One comment notes some confusion:

I’m still confused about the overall shape of what Thiel believes.

He’s concerned about the antichrist opposing Jesus during Armageddon. But afaik standard theology says that Jesus will win for certain. And revelation says the world will be in disarray and moral decay when the Second Coming happens.

If chaos is inevitable and necessary for Jesus’ return, why is expanding the pre-apocalyptic era with growth/prosperity so important to him?

Yeah, it's because he is simply borrowing Christian fundamentalist eschatological terminology... possibly to try to turn the Christofascists against EA?

Someone actually gets it:

I'm dubious Thiel is actually an ally to anyone worried about permanent dictatorship. He has connections to openly anti-democratic neoreactionaries like Curtis Yarvin, he quotes Nazi lawyer and democracy critic Carl Schmitt on how moments of greatness in politics are when you see your enemy as an enemy, and one of the most famous things he ever said is "I no longer believe that freedom and democracy are compatible". Rather I think he is using "totalitarian" to refer to any situation where the government is less economically libertarian than he would like, or "woke" ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.

Note this commenter still uses non-confrontational language ("I'm dubious") even when directly calling Thiel out.

The top comment, though, is just like the main post, extending charity to complete technofascist insanity. (Warning for psychic damage)

Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc)

I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there's a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don't hear so much about "FDA delenda est" or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging's importance are a little weak, IMO (more on this later) -- in Thiel's view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.

As for my personal take on Thiel's views -- I'm often disappointed at the sloppiness (blunt-ness? or low-decoupling-ness?) of his criticisms, which attack the EA for having a problematic "vibe" and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.

[-] scruiser@awful.systems 15 points 6 months ago

Example #"I've lost count" of LLMs ignoring instructions and operating like the bullshit-spewing machines they are.

[-] scruiser@awful.systems 15 points 7 months ago* (last edited 7 months ago)

You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)

[-] scruiser@awful.systems 16 points 7 months ago* (last edited 7 months ago)

The latest twist I'm seeing isn't blaming your prompting (although they're still eager to do that); it's blaming your choice of LLM.

"Oh, you're using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren't trying the right models, so allow me to educate you with all my prompt fondling experience. You're trying to make some general point? Clearly you just need to try another model."

[-] scruiser@awful.systems 15 points 7 months ago

It starts out seeming like a funny but petty and irrelevant criticism of his kitchen skills and product choices, but then beautifully pivots into an accurate criticism of OpenAI.

[-] scruiser@awful.systems 15 points 7 months ago* (last edited 7 months ago)

nanomachines son

(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer's main scenario for the AGI to bootstrap to Godhood. He's been called out multiple times on how Drexler's vision for nanotech ignores physics, so he's since updated to "diamondoid bacteria" (but he still thinks in nanotech).)

[-] scruiser@awful.systems 15 points 7 months ago

You need to translate them out of lesswrongese before you try interpreting them together.

probability: he made up a number to go with his feelings about a topic

subjective: the number is even more made-up and feelings-based than is normal for lesswrong

noticeable: the number is really tiny, but big enough for Eliezer to fearmonger about!

No, you don't get to actually know what the number is; then you could penalize Eliezer for predicting it wrongly, or question why that number specifically. Just trust that the Bayesianified language shows Eliezer thought really hard about it.

[-] scruiser@awful.systems 15 points 8 months ago

No, he's in favor of human slavery, so he still wants to keep naming schemes evocative of it.

[-] scruiser@awful.systems 15 points 10 months ago* (last edited 10 months ago)

My favorite comment in the lesswrong discussion: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=oyDCbGtkvXtqMnNbK

It's not that eugenics is a magnet for white supremacists, or that rich people might give their children an even more artificially inflated sense of self-worth. No, the risk is that the superbabies might turn out to be Khan and kick-start the Eugenics Wars. Of course, this isn't a reason not to make superbabies; it just means the idea needs some more workshopping via Red Teaming (hacker lingo is applicable to everything).

[-] scruiser@awful.systems 16 points 2 years ago

Did you misread, or are you making a joke? (Sorry, the situation is so absurd it's hard to tell.) Curtis Yarvin is Moldbug, and he was the one hosting the afterparty (he didn't attend the Manifest conference himself). So apparently there were racists too cringy even for Moldbug-hosted parties!
