[-] scruiser@awful.systems 12 points 3 months ago* (last edited 3 months ago)

I'm feeling an effort sneer...

For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.

Every time I read about a case like this my conviction grows that sneerclub's vibe based moderation is the far superior method!

The key component of making good sneer club criticism is to never actually say out loud what your problem is.

We've said it multiple times, it's just a long list that is inconvenient to say all at once. The major things that keep coming up: The cult shit (including the promise of infinite AGI God heaven and infinite Roko's Basilisk hell; and including forming high demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn't have the other parts); and lately serving as crit-hype marketing for really damaging technology!

They don't need to develop protocols of communication that produce functional outcomes

Ahem... you just admitted to taking a hundred hours to ban someone, whereas dgerad and co kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.

For LessWrong to become a place that can't do much but to tear things down.

I've seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith's wild genetic engineering fantasies come to mind).

[-] scruiser@awful.systems 12 points 4 months ago* (last edited 4 months ago)

Weird rp wouldn't be sneer worthy on it's own (although it would still be at least a little cringe), it's contributing factors like...

  • the constant IQ fetishism (Int is superior to Charisma but tied with Wis and obviously a true IQ score would be both Int and Wis)

  • the fact that Eliezer cites it like serious academic writing (he's literally mentioned it to Yann LeCunn in twitter arguments)

  • the fact that in-character lectures are the only place Eliezer has written up many of his decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)

  • the fact that Eliezer think it's another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable, even authors and fans of glowfic usually acknowledge the format can be awkward to read and most glowfics require huge amounts of context to follow)

  • the fact that the story doubles down on the HPMOR flaw of confusion of which characters are supposed to be author mouthpieces (putting your polemics into the mouths of character's working for literal Hell... is certainly an authorial choice)

  • and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)

...At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.

And it shouldn't be news to people that KP supports eugenics given her defense of Scott Alexander or comments about super babies, but possibly it is and headliner of weird roleplay will draw attention to it.

[-] scruiser@awful.systems 12 points 5 months ago

The only question is who will get the blame.

Isn't it obvious? Us sneerers and the big name skeptics (like Gary Marcuses and Yann LeCuns) continuously cast doubt on LLM capabilities, even as they are getting within just a few more training runs and one more scaling of AGI Godhood. We'll clearly be the ones to blame for the VC funding drying up, not years of hype without delivery.

[-] scruiser@awful.systems 12 points 5 months ago

Unlike with coding, there are no simple “tests” to try out whether an AI’s answer is correct or not.

So for most actual practical software development, writing tests is in fact an entire job in and of itself and its a tricky one because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting the LLMs brute force trial-and-error their code through a bunch of tests won't actually get you good working code.

AlphaEvolve kind of did this, but it was testing very specific, well defined, well constrained algorithms that could have very specific evaluation written for them and it was using an evolutionary algorithm to guide the trial and error process. They don't say exactly in their paper, but that probably meant generating code hundreds or thousands or even tens of thousands of times to generate relatively short sections of code.

I've noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.

[-] scruiser@awful.systems 12 points 5 months ago* (last edited 5 months ago)

Following up because the talk page keeps providing good material..

Hand of Lixue keeps trying to throw around the Wikipedia rules like the other editors haven't seen people try to weaponize the rules to push their views many times before.

Particularly for the unflattering descriptions I included, I made sure they reflect the general view in multiple sources, which is why they might have multiple citations attached. Unfortunately, that has now led to complaints about overcitation from @Hand of Lixue. You can't win with some people...

Looking back on the original lesswrong ~~brigade organizing~~ discussion of how to improve the wikipedia article, someone tried explaining to Habyrka the rules then and they were dismissive.

I don’t think it counts as canvassing in the relevant sense, as I didn’t express any specific opinion on how the article should be edited.

Yes Habyrka, because you clearly have such a good understanding of the Wikipedia rules and norms...

Also, heavily downvoted on the lesswrong discussion is someone suggesting Wikipedia is irrelevant because LLMs will soon be the standard for "access to ground truth". I guess even lesswrong knows that is bullshit.

[-] scruiser@awful.systems 12 points 5 months ago

The wikipedia talk page is some solid sneering material. It's like Habryka and HandofLixue can't imagine any legitimate reason why Wikipedia has the norms it does, and they can't imagine how a neutral Wikipedian could come to write that article about lesswrong.

Eigenbra accurately calling them out...

"I also didn't call for any particular edits". You literally pointed to two sentences that you wanted edited.

Your twitter post also goes against Wikipedia practices by casting WP:ASPERSIONS. I can't speak for any of the other editors, but I can say I have never read nor edited RationalWiki, so you might be a little paranoid in that regard.

As to your question:

Was it intentional to try to pick a fight with Wikipedians?

It seems to be ignorance on Habyrka's part, but judging by the talk page, instead of acknowledging their ignorance of Wikipedia's reasonable policies, they seem to be doubling down.

[-] scruiser@awful.systems 12 points 6 months ago

This connection hadn't occured to me before, but the Starship Troopers scenes (in the book) where they claim to have mathematically rigorous proofs about various moral statements or actions or societal constructs reminds me of how Eliezer has a decision theory in mind with all sorts of counter intuitive claims (it's mathematically valid to never ever give into any blackmail or threats or anything adjacent to them), but hasn't actually written out his decision theory in rigorous well defined terms that can pass peer review or be used to figure out anything beyond some pre-selected toy problems.

[-] scruiser@awful.systems 12 points 7 months ago

"You claim to like unions, but seem strangely hostile to police unions. Curious."

  • Turning Point USA
[-] scruiser@awful.systems 12 points 7 months ago

This post has prompted me to give a reminder that one of the authors of AI 2027 predicted back in 2021 that "prompt programming" would be a thing by now.

[-] scruiser@awful.systems 12 points 9 months ago

Another episode in the continued saga of lesswrongers anthropomorphizing LLMs to an absurd extent: https://www.lesswrong.com/posts/MnYnCFgT3hF6LJPwn/why-white-box-redteaming-makes-me-feel-weird-1

[-] scruiser@awful.systems 12 points 10 months ago

One comment refuses to leave me: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk

The commenter makes and extended tortured analogy to machine learning... in order to say that maybe genes with correlations to IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralizing of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to actually state firm direct objections to blatantly stupid ideas.

[-] scruiser@awful.systems 12 points 1 year ago

I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Wow, just a few words off the 14 words.

I find it kind of irritating how someone that doesn't familiarize themselves with white supremacists rhetoric and methods might manage to view that phrase innocuously. But it really isn't that hard to see through the bullshit once you've familiarized themselves with the most basic dog whistles and slogans.

view more: ‹ prev next ›

scruiser

joined 2 years ago