"Drinking alone tonight?" the bartender asks.
And if I do a simple calculation, (virtues - vices)
"...then I have shit for brains."
That picture... Harry Potter and the Hole of Ketamine.
First, learn the difference between scorn or disdain and hate.
Second, read the comments in the thread already made about those "'sort of' correct" predictions.
Hmm, a xitter link, I guess I'll take a moment to open that in a private tab in case it's passingly amusing...
To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—
OK, you have my attention now.
To the journalists contacting me about the AGI consensual non-consensual (cnc) sex parties—
During my twenties in Silicon Valley, I ran among elite tech/AI circles through the community house scene. I have seen some troubling things around social circles of early OpenAI employees, their friends, and adjacent entrepreneurs, which I have not previously spoken about publicly.
It is not my place to speak as to why Jan Leike and the superalignment team resigned. I have no idea why and cannot make any claims. However, I do believe my cultural observations of the SF AI scene are more broadly relevant to the AI industry.
I don't think events like the consensual non-consensual (cnc) sex parties and heavy LSD use of some elite AI researchers have been good for women. They create a climate that can be very bad for female AI researchers, with broader implications relevant to X-risk and AGI safety. I believe they are somewhat emblematic of broader problems: a coercive climate that normalizes recklessness and crossing boundaries, which we are seeing playing out more broadly in the industry today. Move fast and break things, applied to people.
There is nothing wrong imo with sex parties and heavy LSD use in theory, but combined with the shadow of $100B+ interest groups, they lead to some of the most coercive and fucked up social dynamics that I have ever seen. The climate was like a fratty LSD version of 2008 Wall Street bankers, which bodes ill for AI safety.
Women are like canaries in the coal mine. They are often the first to realize that something has gone horribly wrong, and to smell the cultural carbon monoxide in the air. For many women, Silicon Valley can be like Westworld, where violence is pay-to-play.
I have seen people repeatedly get shut down for pointing out these problems. Once, when I tried to point them out, three OpenAI and Anthropic researchers debated in a Google document whether I was mentally ill. I have no history of mental illness, and this incident stuck with me as an example of blind spots/groupthink.
I am not writing this on behalf of any interest group. Historically, many of the OpenAI-adjacent shenanigans have been blamed on groups with weaker PR teams, like Effective Altruism and rationalists. I actually feel bad for those two groups for taking so many undeserved hits. There are good and bad apples in every faction. There are so many brilliant, kind, amazing people at OpenAI, and there are so many brilliant, kind, and amazing people in Anthropic/EA/Google/[insert whatever group]. I’m agnostic. My one loyalty is to the respect and dignity of human life.
I'm not under an NDA. I never worked for OpenAI. I just observed the surrounding AI culture through the community house scene in SF, as a fly on the wall, hearing insider information and backroom deals, befriending dozens of women and allies and well-meaning parties, and watching many of them get burned. It’s likely these problems are not really specific to OpenAI but symptomatic of a much deeper rot in the Valley. I wish I could say more, but probably shouldn’t.
I will not pretend that my time among these circles didn’t do damage. I wish that 55% of my brain was not devoted to strategizing about the survival of me and my friends. I would like to devote my brain completely and totally to AI research: finding the first principles of visual circuits, and collecting maximally activating images of CLIP SAEs to send to my collaborators for publication.
I wandered over somehow from RationalWiki, which I had known of since the science-blogging days of yore, and found it more congenial to my tastes than other subreddits. E.g., it was friendlier to excursions into the wonky and erudite than r/badphilosophy, and generally had a justifiably low tolerance for superficial politeness while maintaining a level of empathy for serious matters.
I imagine it was an article about how "Sex workers are skeptical that crypto can answer all their financial problems".
I can barely get past the image caption. "An AI made this". OK, and what did you ask it for, "random shit"?
And then there's the section that seems implicitly to be arguing that we should take the risk estimates made on "internet rationality forums" seriously because they totally called the COVID crisis, you guys... Well, they did a better job than an economist, anyway.
Suppose there are five true heresies, but anyone who's on the record as believing more than one gets burned as a witch.
Two heresies leave Chicago traveling at 90 km/h and 100 km/h
Jessica asked if Yudkowsky denouncing neoreaction and the alt-right would still seem harmful if he were also to acknowledge, e.g., racial IQ differences.
uh
I agreed that that would be better, but realistically, I didn't see why Yudkowsky should want to poke that hornet's nest.
uhhhhhhhhh
“covalently bonded” bacteria
what an amazing theoretical possibility
From the "flipping through LessWrong for entertainment" department: