[-] scruiser@awful.systems 7 points 15 hours ago

With a name like that and lesswrong to springboard its popularity, BayesCoin should be good for at least one cycle of pump-and-dump/rug-pull.

Do some actual programming work (or at least write a "white paper") on tying it into a prediction market on the blockchain and you've got rationalist catnip. They should be all over it; you could get a few cycles of pumping and dumping in before the final rug pull.

[-] scruiser@awful.systems 9 points 23 hours ago

I feel like some of the doomers are already setting things up to pivot when their most prominent recent prophecy (AI 2027) fails:

From here:

> (My modal timeline has loss of control of Earth mostly happening in 2028, rather than late 2027, but nitpicking at that scale hardly matters.)

It starts with some rationalist jargon to say the author agrees, just one year later...

> AI 2027 knows this. Their scenario is unrealistically smooth. If they added a couple weird, impactful events, it would be more realistic in its weirdness, but of course it would be simultaneously less realistic in that those particular events are unlikely to occur. This is why the modal narrative, which is more likely than any other particular story, centers around loss of human control the end of 2027, but the median narrative is probably around 2030 or 2031.

Further walking the timeline back, adding qualifiers and exceptions that the authors of AI 2027 somehow didn't mention before. Also, the reason AI 2027 made no mention of Trump blowing up the timeline with insane shit is that Scott (and maybe some of the other authors, idk) likes glazing Trump.

> I expect the bottlenecks to pinch harder, and for 4x algorithmic progress to be an overestimate...

No shit, that is what every software engineer blogging about LLMs (even the credulous ones) says, even granting that LLMs get better at raw code writing! Maybe this author is better in touch with reality than most lesswrongers...

> ...but not by much.

Nope, they still have insane expectations.

> Most of my disagreements are quibbles

Then why did you bother writing this? Anyway, I feel like this author has set themselves up to claim credit when it's December 2027 and none of AI 2027's predictions have come true. They'll exaggerate their "quibbles" into successful predictions of problems in the AI 2027 timeline, while overlooking the extent to which they agreed.

I'll give this author +10 bayes points for noticing Trump does unpredictable batshit stuff, and -100 for not realizing the real reason Scott didn't include any callout of that in AI 2027.

[-] scruiser@awful.systems 5 points 23 hours ago

> Doom feels really likely to me. […] But who knows, perhaps one of my assumptions is wrong. Perhaps there’s some luck better than humanity deserves. If this happens to be the case, I want to be in a position to make use of it.

This line actually really annoys me, because they are already set up for moving the end date on their doomsday prediction as needed while still maintaining their overall doomerism.

[-] scruiser@awful.systems 15 points 2 days ago

No, he's in favor of human slavery, so he still wants to keep naming schemes evocative of it.

[-] scruiser@awful.systems 7 points 3 days ago* (last edited 3 days ago)

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that doesn't actually have academic publishing behind it, so it's jargon.

Instrumental convergence... I think Bostrom coined that one?

The AI alignment forum has a claimed origin here. Is anyone listed on that article from CFAR?

[-] scruiser@awful.systems 8 points 3 days ago* (last edited 3 days ago)

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And they were pretty culty. And reaching the "correct" conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: they still host them, present tense. I had misremembered news of some other rationality-adjacent institution shutting down; nope, they are still going strong, offering regular 4-day ~~brainwashing sessions~~ workshops.

[-] scruiser@awful.systems 7 points 3 days ago

It's the sort of stuff that makes great material for science fiction! It's less fun when you see it in the NYT or quoted by mainstream politicians with plans that will wreck the country.

[-] scruiser@awful.systems 12 points 3 days ago

Yeah, the genocidal imagery was downright unhinged, much worse than I expected from what little I've previously read of his. I almost wonder how ideologically adjacent allies like Siskind can still stand to be associated with him (but not really; Siskind can normalize any odious insanity if it serves his purposes).

[-] scruiser@awful.systems 13 points 3 days ago

His fears are my hope: that Trump fucking up hard enough will send the pendulum of public opinion the other way (and that the Democrats then use that to push some actually leftist policies through... it's a hope, not an actual prediction).

He cultivated this incompetence and worshiped at the altar of the Silicon Valley CEO, so seeing him confronted with Elon's and Trump's clumsy incompetence is some nice schadenfreude.

[-] scruiser@awful.systems 13 points 3 days ago* (last edited 3 days ago)

I can use bad analogies also!

  • If airplanes can fly, why can't they fly to the moon? It is a straightforward extension of existing flight technology, and plotting airplane max altitude from 1900-1920 shows exponential improvement. People denying moon-plane potential just aren't looking at the hard quantitative numbers in the industry. In fact, with no atmosphere in the way, past a certain threshold airplanes should be able to get higher and higher and faster and faster without anything to slow them down. (Toy sketch of this kind of extrapolation below.)
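
If you want to see just how silly this style of straight-ruler extrapolation gets, here's a quick toy sketch (mine, not from any of the linked posts); the altitude series and growth rate are invented purely for illustration:

```python
# Toy illustration of naive exponential extrapolation (made-up numbers, not real aviation history).
import numpy as np

years = np.arange(1900, 1921)                  # "record altitude" years, 1900-1920
altitudes_m = 100.0 * 1.35 ** (years - 1900)   # invented altitudes growing ~35% per year

# Fit a straight line to log(altitude), i.e. assume the exponential trend holds forever.
slope, intercept = np.polyfit(years, np.log(altitudes_m), 1)

moon_distance_m = 384_400_000.0                # ~384,400 km to the Moon
year_moon = (np.log(moon_distance_m) - intercept) / slope
print(f"Naive trend line says planes reach the Moon around {year_moon:.0f}")
```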

I think Eliezer might have started the bad airplane analogies... let me see if I can find a link... and I found an analogy from the same author as the 2027 ~~fanfic~~ forecast: https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity

Eliezer used a tortured metaphor about rockets, so I still blame him for the tortured airplane metaphor: https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

[-] scruiser@awful.systems 17 points 3 days ago* (last edited 3 days ago)

This isn't debate club or men of science hour, this is a forum for making fun of idiocy around technology. If you don't like that you can leave (or post a few more times for us to laugh at before you're banned).

As to the particular paper that got linked, we've seen people hyping LLMs misrepresent their research as much more exciting than it actually is (all the research advertising "deceptive" LLMs, for example) many, many times already, so most of us weren't going to waste time tracking down the actual paper (and not just the marketing release) to pick apart the methods. You could say (raises sunglasses) our priors on it being bullshit were too strong.

[-] scruiser@awful.systems 9 points 3 days ago* (last edited 3 days ago)

Big effort post... reading it will still be less effort than listening to the full Behind the Bastards podcast, so I hope you appreciate it...

To summarize it from a personal angle...

In 2011, I was a high schooler who liked Harry Potter fanfics. I found Harry Potter and the Methods of Rationality a fun story, so I went to the lesswrong website and got hooked on all the neat pop-science explanations. The AGI stuff and cryonics and transhumanist stuff seemed a bit fanciful but neat (after all, the present would seem strange and exciting to someone from a hundred years ago).

Fast forward to 2015: HPMOR was finally finishing, I was finishing my undergraduate degree, and in the course of getting a college education I had actually taken some computer science and machine learning courses. Reconsidering lesswrong with that level of education... I noticed MIRI (the institute Eliezer founded) wasn't actually doing anything with neural nets, they were just playing around with math abstractions, they hadn't published much formal writing (well, not actually any, but at the time I didn't appreciate peer review vs. self-publishing and preprints), and even the informal lesswrong posts had basically stopped. I had gotten into a related blog, slatestarcodex (written by Scott Alexander), which filled some of the same niche, but in 2016 Scott published a defense of Trump that normalized him, and I realized Scott had an agenda at cross purposes with the "center-left" perspective he presented himself as having.

At around that point I found the reddit version of sneerclub, and it connected a lot of dots I had been missing. Far from being the AI expert he presented himself as, Eliezer had basically done nothing but write loose speculation on AGI and pop-science explanations. And Scott Alexander was actually trying to push "human biodiversity" (i.e. racism disguised in pseudoscience) and neoreactionary/libertarian beliefs. From there it became apparent to me that a lot of Eliezer's claims weren't just a bit fanciful, they were actually really, really ridiculous, and the community he had set up had a deeply embedded racist streak.

To summarize it focusing on Eliezer....

In the late 1990s Eliezer was on various mailing lists, speculating with bright-eyed optimism about nanotech and AGI and genetic engineering and cryonics. He tried his hand at getting in on it: first trying to write a stock trading bot... which didn't work; then trying to write a seed AI (an AI that would bootstrap to strong AGI and change the world)... which also didn't work; then trying to develop a new programming language for AI... which he never finished. Then he realized he had been reckless: an actually successful AI might have destroyed mankind, so really it was lucky he didn't succeed, and he needed to figure out how to align an AI first.

So from the mid 2000s on he started getting donors (this is where Thiel comes in) to fund his research. People kind of thought he was a crank, or just didn't seem concerned with his ideas, so he concluded they must not be rational enough, and set about, first on Overcoming Bias, then his own blog, lesswrong, writing a sequence of blog posts to fix that (and putting any actual AI research on hold). These got moderate attention, which exploded in the early 2010s when a side project of writing Harry Potter fanfiction took off. He used this fame to get more funding and spread his ideas further.

Finally, around the mid 2010s, he pivoted to actually trying to do AI research again... MIRI has a sparse collection of papers (compared to the number of researchers they hired and how productive good professors in academia are) focused on an abstract concept for AI called AIXI, which basically depends on having infinite computing power and isn't remotely implementable in the real world. Last I checked they didn't get any further than that. Eliezer was skeptical of neural network approaches, derisively thinking of them as voodoo science trying to blindly imitate biology with no proper understanding, so he wasn't prepared for neural nets taking off from 2012 onward and leading to GPT and LLM approaches. So when ChatGPT started looking impressive, he started panicking, leading to him going on a podcast circuit professing doom (after all, if he and his institute couldn't figure out AI alignment, no one can, and we're likely all doomed, for reasons he has written tens of thousands of words of blog posts about without being refuted at a quality he believes is valid).

To tie off some side points:

  • Peter Thiel was one of the original funders of Eliezer and his institution. It was probably a relatively cheap attempt to buy reputation, and it worked to some extent. Peter Thiel has cut funding since Eliezer went full doomer (Thiel probably wanted Eliezer as a silicon valley hype man, not an apocalypse cult).

  • As Scott continued to write posts defending the far-right while posturing as center-left, Slatestarcodex got an increasingly racist audience, culminating in a spin-off forum with full-on 14-words white supremacists. He has played a major role in the alt-right pipeline that produced some of Trump's most loyal supporters.

  • Lesswrong also attracted some of the neoreactionaries (libertarian wackjobs who want a return to monarchy), among them Mencius Moldbug (real name Curtis Yarvin). Yarvin has written about strategies for dismantling the federal government, which DOGE is now implementing.

  • Eliezer may not have been much of a researcher himself, but he inspired a bunch of people, so a lot of OpenAI researchers buy into the hype and/or doom. Sam Altman uses Eliezer's terminology as marketing hype.

  • As for lesswrong itself... what is original isn't good and what's good isn't original. Lots of the best sequences are just a remixed form of books like Kahneman's "Thinking, Fast and Slow". And the worst sequences demand you favor Eliezer's take on bayesianism over actual science, or are focused on the coming AI salvation/doom.

  • Other organizations have taken on the "AI safety" mantle. They are more productive than MIRI, in that they actually do stuff with actually implemented 'AI', but what they do is typically contrive (emphasis on contrive) scenarios where LLMs will "act" "deceptive" or "power-seeking" or whatever scary buzzword you can imagine, and then publish papers about it with titles and abstracts that imply the scenarios are much more natural than they really are.

Feel free to ask any follow-up questions if you genuinely want to know more. If you actually already know about this stuff and are looking for a chance to evangelize for lesswrong or the coming LLM God, the mods can smell that out and you will be shown the door, so don't bother (we get one or two people like that every couple of weeks).

[-] scruiser@awful.systems 67 points

So despite the nitpicking they did of the Guardian article, it seems blatantly clear now that Manifest 2024 was infested with racists. The post's author doesn't even count Scott Alexander as "racist" (although they do at least note his HBD sympathies) and still identifies a full 8 racists. They mention a talk discussing the Holocaust as a eugenics event (and added an edit apologizing for their simplistic framing). The post's author is painfully careful and apologetic in distinguishing what they personally experienced, what was "inaccurate" about the Guardian article, how they are using terminology, etc. Despite the author's caution, the comments are full of the classic SSC strategy of trying to reframe the issue (complaining that the post uses the word "controversial" in the title, complaining about the usage of the term "racist", complaining about the threat to their freeze peach and open discourse of ideas from banning racists, etc.).

[-] scruiser@awful.systems 2 points

This is a classic sequence post: (mis)appropriated Japanese phrases and cultural concepts, references to the AI box experiment, and links to other sequence posts. It is also especially ironic given Eliezer's recent switch to doomerism with his new phrases of "shut it all down" and "AI alignment is too hard" and "we're all going to die".

Indeed, with developments in NN interpretability and a use case of making LLMs not racist or otherwise horrible, it seems to me like there is finally actually tractable work to be done (that is at least vaguely related to AI alignment)... which is probably why Eliezer is declaring defeat and switching to the podcast circuit.

