[-] scruiser@awful.systems 13 points 1 month ago* (last edited 1 month ago)

A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can't resist glazing him, even in the context of a blog post about not being too deferential:

Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

Another lesswronger pushes back on that and is highly upvoted (even among the doomers who think Eliezer is a genius, most still think he screwed up by inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

The OP gets mad because this is off topic from what they wanted to talk about (they still don't acknowledge the irony).

A few days later they write an entire post, ostensibly about communication norms but actually aimed at slamming the person who went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least us sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)

[-] scruiser@awful.systems 13 points 3 months ago

It's a microcosm of lesswrong's dysfunction: IQ veneration, elitism, and misunderstanding the problem in the first place. And even overlooking those problems, I think intellect only moderately correlates with an appreciation for science and an ability to understand it. Someone can think certain scientific subjects are really cool but have only a layman's grasp of the technical details. Someone can do decently in introductory college-level physics with nothing more than a willingness to work hard and decent math skills. And Eliezer could have avoided tangents about nuclear reactors or whatever to focus on stuff relevant to AI.

[-] scruiser@awful.systems 13 points 3 months ago

Chiming in to agree that your prediction write-ups aren't particularly good. Sure, they spark discussion, but the whole forecasting/prediction game is one we've seen the rationalists play many times, and it is very easy to overlook or at least undercount your misses and overhype your successes.

In general... I think your predictions are too specific and too optimistic...

[-] scruiser@awful.systems 13 points 3 months ago

That too.

And judging by how all the elegantly charitably written blog posts on the EA forums did jack shit to stop the second manifest conference from having even more racists, debate really doesn't help.

[-] scruiser@awful.systems 13 points 6 months ago* (last edited 6 months ago)

So us sneerclubbers correctly dismissed AI 2027 as bad sci-fi with a forecasting model basically amounting to "line goes up", but if you end up in any discussions with people who want more detail, titotal did a really thorough breakdown of why their model is bad, even granting their assumptions and their attempt to model "line goes up": https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models

tl;dr: the AI 2027 model, regardless of inputs and current state, has task time horizons going to infinity at some near-future date because they set it up weird. The authors also make a lot of other questionable choices and have a lot of other red flags in their modeling. And the task-time-horizon fit pictured in their fancy interactive webpage is unrelated to the model they actually used, and omits some earlier data points that make the fit look worse.

[-] scruiser@awful.systems 13 points 6 months ago

The space of possible evolved biological minds is far smaller than the space of possible ASI minds

Achkshually, Yudkowskian Orthodoxy says any truly super-intelligent minds will converge on Expected Value Maximization, Instrumental Goals, and Timeless-Decision Theory (as invented by Eliezer), so clearly the ASI mind space is actually quite narrow.

[-] scruiser@awful.systems 12 points 7 months ago

This post has prompted me to give a reminder that one of the authors of AI 2027 predicted back in 2021 that "prompt programming" would be a thing by now.

[-] scruiser@awful.systems 13 points 8 months ago

His fears are my hope, that Trump fucking up hard enough will send the pendulum of public opinion the other way (and then the Democrats use that to push some actually leftist policies through... it's a hope not an actual prediction).

He cultivated this incompetence and worshiped at the altar of the Silicon Valley CEO, so seeing him confronted with Elon's and Trump's clumsy incompetence is some nice schadenfreude.

[-] scruiser@awful.systems 12 points 8 months ago

The sequence of links hopefully lays things out well enough for normies? I think it does, but I've been aware of the scene since the mid 2010s, so I'm not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI, and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they are a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it's the rest of the world that doesn't deserve them (from the teacher dealing with LLM slop submitted as homework, to the website admin fending off scrapers, to legitimate ML researchers getting attention sucked away while another AI winter starts to loom, to the machine cultist not saving a retirement fund and having panic attacks over the upcoming salvation or doom).

[-] scruiser@awful.systems 13 points 10 months ago* (last edited 10 months ago)

That was literally the inflection point on my path to sneerclub. I had started to break from lesswrong before, but I hadn't reached the tipping point of saying it was all bs. And for SSC and Scott in particular, I had managed to overlook the real message buried under thousands of words of equivocating, bad analogies, and bad research in his earlier posts. But "you are still crying wolf" made me finally question what Scott's real intent was.

[-] scruiser@awful.systems 13 points 2 years ago* (last edited 2 years ago)

I don't think even that does it. Richard Hanania, one of Manifest's promoted speakers, wrote "Why Do I Hate Pronouns More Than Genocide?".

[-] scruiser@awful.systems 12 points 2 years ago* (last edited 2 years ago)

So, I was morbidly curious about what Zack has to say about the Brennan emails (as I think they've been under-discussed, if not outright deliberately ignored, in lesswrong discussion), and I found to my horror that I actually agree with a side point of Zack's. From the footnotes:

It seems notable (though I didn't note it at the time of my comment) that Brennan didn't break any promises. In Brennan's account, Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.

To see why the lack of a promise is potentially significant, imagine if someone were guilty of a serious crime (like murder or stealing billions of dollars of their customers' money) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the reporter.

Of course, Zack's ultimate conclusion on this subject is the exact opposite of the correct one I think:

I think that to people who have read and understood Alexander's work, there is nothing surprising or scandalous about the contents of the email.

I think the main reason someone would consider the email a scandalous revelation is if they hadn't read Slate Star Codex that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse

Gee Zack, I wonder why so many people misread Scott? ...It's almost like he is intentionally misleading about his true views in order to subtly shift the Overton window of rationalist discourse, and intentionally presents himself as simply committed to charitable discourse while actually having a hidden agenda! And the bloated length of Scott's writing doesn't help with clarity either. Of course Zack, who writes tens of thousands of words to indirectly complain about perceived hypocrisy of Eliezer's in order to indirectly push gender essentialist views, probably finds Scott's writings a perfectly reasonable length.

Edit: oh, and an added bonus on the Brennan emails... Seeing them brought up again, I connected some dots I had missed. I had seen (and sneered at) this Yud quote before:

I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly.

But somehow I had missed, or didn't realize, that the subtext was the emails that laid bare Scott's racism:

(Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)

Hmm... I'm not sure whether to update (usage of rationalist lingo is deliberate and ironic) in the direction of "Eliezer is stubbornly naive about Scott's racism" or "Eliezer is deliberately covering for Scott's racism". Since I'm not a rationalist, my probabilities don't have to sum to 1, so I'm gonna go with both.

