He probably paid a rationalist dating coach good money to tell him to do that.
the need to distribute sex to needy men
It always trips me up how this is about state-sponsored arranged marriages (preferably to virgins), instead of, like, pushing to decriminalize sex work in the United States.
Isn't this completely hypothetical though? As in having the various LLMs respond to a story prompt and calling it an experiment, AI safety research style?
Even Scott’s fantasy dream scenario for what prediction markets could be like and what questions they could answer feels… deliberately naive? …like libertarian brainrot? …disconnected from reality?
That's mostly because outright admitting that the point of prediction markets was to make having the prediction gene profitable, so they could get on with breeding a rationalist Kwisatz Haderach to fight the robot god on more equal terms, wouldn't fly with the lower-level thetans and other exoterics.
Well, you could maybe sort of train it not to generate “all men are cats”, but then that might also prevent it from making the more correct generalization “all cats are mortal”, or even completely valid generalizations like combining “all men are mortal” and “Socrates is a man” to get “Socrates is mortal”.
Just wanted to say that “'tal' comes after 'mor' when 'soc-rate-s' is in the near context and in agreement with the attention mechanism” is a very different type of logic than what this phrasing implies. Combine that with the peculiarities of word embeddings (the technique by which tokens are translated to numeric vectors), like how they have a hard time making something useful out of numbers, and it uh, gets uh, complicated.
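For anyone who hasn't seen the token-to-vector step up close, here's a minimal sketch of what "word embeddings" means mechanically. The vocabulary, dimensions, and values are all made up for illustration; real models learn these tables over tens of thousands of tokens and hundreds of dimensions.

```python
import numpy as np

# Toy vocabulary: note "Socrates" gets chopped into sub-word pieces,
# which is part of why the model's "logic" operates on fragments.
vocab = {"soc": 0, "rates": 1, "is": 2, "mor": 3, "tal": 4}

rng = np.random.default_rng(0)
# One row of floats per token; in a real model these are learned, not random.
embedding_table = rng.normal(size=(len(vocab), 4))

def embed(tokens):
    # The model never sees words, only these stacked rows of floats.
    return np.stack([embedding_table[vocab[t]] for t in tokens])

vectors = embed(["soc", "rates", "is", "mor"])
print(vectors.shape)  # (4, 4): four tokens, each mapped to a 4-dim vector
```

Everything downstream (attention included) is arithmetic over those vectors, which is the "very different type of logic" being pointed at.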
The monofacts thing seems very post hoc and way too abstracted in comparison, and the amount of text that can be categorized as strictly true or false isn't that big, all things considered.
Still, if the point was to formalize the very no-duh observation that a neural net isn't supposed to output its dataset verbatim at all times (hence hallucinations), then fine, I guess. Their proposed sort-of solution (controlled miscalibration) even amounts to forcing the model to generalize less by memorizing more, which used to be the opposite of why you would choose this type of architecture.
The newest addition to her polycule
Isn't this mostly a pretentious way of saying "someone I recently fucked"?
"not on squeaking terms"

by the way I first saw this in the stubsuck
I know this is about rationalism but the unexpanded uncapitalized "rat" name really makes this post. Imagining a world where this is a callout post about a community of rodents being racist. We're not on squeaking terms right now cause they're being problematic :/
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
So many low-hanging fruits. Unbelievable fruits. You wouldn’t believe how low they’re hanging.
In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
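For the uninitiated, the pattern being parodied looks roughly like this. The wording and the retrieved snippet are invented for illustration, not taken from any particular guide:

```python
# Hypothetical RAG prompt assembly: stuff retrieved text into the system
# prompt and beg the model to stay grounded in it.
retrieved_chunks = [
    "The warranty covers parts and labor for 12 months.",
    "Accidental damage is not covered under the standard warranty.",
]

system_prompt = (
    "Answer the user's question using ONLY the context below. "
    "If the answer is not in the context, say you don't know. "
    "Do not use any outside knowledge.\n\n"
    "Context:\n" + "\n".join(retrieved_chunks)
)

print(system_prompt)
```

The begging is load-bearing precisely because nothing in the architecture actually enforces it; the instruction is just more tokens the model may or may not attend to.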
Also, if reddit is any indication, a lot of people actually think that's all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
This was such a chore to read; it's basically quirk-washing TREACLES. This is like a major publication deciding to take an uncritical look at scientology, focusing on the positive vibes and the camaraderie, while smack in the middle of Operation Snow White, which in fact I bet happened a lot at the time.
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years
Fuck off.
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
The weirdness is eugenics and the repugnant conclusion, and abusing Bayes' rule to sidestep context and take epistemological shortcuts to cuckoo conclusions, while fortifying a bubble of accepted truths that are strangely amenable to letting rich people do whatever the hell they want.
Writing a 7–8,000-word insider exposé on TREACLES without mentioning eugenics even once should be all but impossible, yet here we are.

In other Scott Siskind news, he just posted an entirely unnecessary amount of words to aggressively push back against the adage that "all exponentials sooner or later turn into sigmoids", as if it were, by itself, a load-bearing claim of the side arguing against the direct imminence of the machine god.
It's just a bunch of arguing by analogy ("helping you build intuition") and you-can't-really-knows, while implying AI 2027 was very science, much rigorous. But it also feels kind of desperate, like, why are you bothering with this overperformative setting-the-record-straight thing? Have you been feeling inadequate as an AI-curious stats fondler of note lately?