Hyperstition is such a bad neologism, apparently doubleplus superstition equals self-fulfilling prophecy (transitive)? They don't even bother to verb it properly... Nick Land got a nonsense word stuck in his head and now there's a whole subculture of midwit thought leader wannabes parroting that shit.
Additionally he said something to the effect of "I don't blame you for not knowing this, it wasn't effectively communicated to the media," like it's no big deal, which isn't really helping to beat the allegations of don't-ask-don't-tell policies about SA in rat-related orgs.
[SBF's] psychiatrist, George Lerner, worked in the same office as Scott Alexander IIRC (I’ve lost track of the source, will post later if I can find it).
It was in an ACX blog post, siskind just admitted it out of nowhere. edit: Well ok, because he was obviously discussing him, but the possibility of any connections between them wasn't really on anyone's radar back then, I think.
edit: Got it: https://www.astralcodexten.com/p/the-psychopharmacology-of-the-ftx#footnote-anchor-1-84889532
OpenAI Declares ‘Code Red’ as Google Threatens AI Lead
I just wanted to point out this tidbit:
Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse.
Apparently a fortunate side effect of Google supposedly closing the gap is that it's a great opportunity to give up on agents without looking like complete clowns, and also to make Pulse even more vapory.
The kids were using Adobe for Education. This calls itself “the creative resource for K–12 and Higher Education” and it includes the Adobe Express AI image generator.
I feel the extent to which schooling in the USA is of the "this arts and crafts class brought to you by Carl's Jr™" variety is probably understated.
Could be part of its RLHF training; frequent emphasized headers maybe help the prediction engine stay on track over long passages.
"not on squeaking terms"

by the way I first saw this in the stubsack
I know this is about rationalism but the unexpanded uncapitalized "rat" name really makes this post. Imagining a world where this is a callout post about a community of rodents being racist. We're not on squeaking terms right now cause they're being problematic :/
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.
In every RAG guide I've seen, the suggested system prompts always tend to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
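For illustration, a minimal sketch of the standard "stuff the chunks into the prompt and beg" pattern those guides describe; the prompt wording, the model name, and the retrieved_chunks plumbing are all hypothetical, not taken from any particular guide:

```python
# Hypothetical RAG prompt sketch. How the chunks get retrieved is out of
# scope here; this just shows the begging part.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer ONLY using the provided context. "
    "If the answer is not in the context, say you don't know. "
    "Do NOT use outside knowledge. Please."
)

def answer(question: str, retrieved_chunks: list[str]) -> str:
    # Paste the retrieved text into the prompt and hope the model listens.
    context = "\n\n".join(retrieved_chunks)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Nothing in that setup actually constrains the model to the context; it's a request, not a guarantee, which is rather the point.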
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
This was such a chore to read, it's basically quirk-washing TREACLES. This is like a major publication deciding to take an uncritical look at Scientology, focusing on the positive vibes and the camaraderie while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions while fortifying a bubble of accepted truths that are strangely amenable to rich people doing whatever the hell they want.
Writing a 7,000-8,000-word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.

They made a pro-longtermist video in association with Open Philanthropy a few years back, The Last Human or something like that; the summary was pretty open about the connection.
I don't think the shadiness is specific to rationalism; see also that bizarre KG video claiming it's scientifically impossible to lose weight by exercising, which coincided with the height of the Ozempic hype.
edit: The Last Human came out in 2022, the same year the MacAskill book arguing for longtermism was published, what a coinkidink.