Jinsatsu Zetsubō (人殺・絶望, but his thralls call him Ginny) was not your ordinary vampire goth demon lord... He delighted in his garments of true terror and dread, for what better source of inescapable despair than his beige ulster coat, barely held together by off-yellow gold pins, with a salmon pink napkin in the over pocket, an ensemble designed to inspire drudgery, sucking all soul and joy from any passerby...
A glorious snippet:
The movement ~~connected to~~ attracted the attention of the founder culture of Silicon Valley and ~~leading to many shared cultural shibboleths and obsessions, especially optimism about the ability~~ of intelligent capitalists and technocrats to create widespread prosperity.
At first I was confused about what kind of moron would try using "shibboleth" positively, but it turns out it's just terribly misquoting a citation:
Rationalist culture — and its cultural shibboleths and obsessions — became inextricably intertwined with the founder culture of Silicon Valley as a whole, with its faith in intelligent creators who could figure out the tech, mental and physical alike, that could get us out of the mess of being human.
Also lol at insisting on "exonym" as the descriptor for TESCREAL, removing Timnit Gebru and Émile P. Torres and the clear critical intent behind the term; it doesn't really even make sense to use the acronym unless you're doing critical analysis of the movement(s). (Also removing mentions of the especially strong overlap between EA and rationalists.)
It's a bit of a hack job at making the page more biased, with a very thin veneer of still using the sources.
Not surprised, still very disappointed, I feel sick.
No no no it's fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.
Quinn enters the dark and cold forest, crossing the threshold, an omnipresent sense of foreboding permeates the air, before being killed by a grue.
“Once we get AGI, we’ll turn the crank one more time—or two or three more times—and AI systems will become superhuman—vastly superhuman. They will become qualitatively smarter than you or I, much smarter, perhaps similar to how you or I are qualitatively smarter than an elementary schooler.”
Also this doesn't give enough credit to gradeschoolers. I certainly don't think I am much smarter (if at all) than when I was a kid. Don't these people remember being children? Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems? Maybe I'm the weird one, but to me growing up is not about becoming smarter; it's about gaining perspective. Perspective is vital, but actual intelligence/personhood is a prerequisite for it.
Hi, I'm going to be that OTHER guy:
Thank god not all dictionaries are prescriptivist; some simply reflect natural usage: Cambridge Dictionary: Beg the question
On a side rant, "begging the question" is a terrible name for this fallacy, and the very Wikipedia page you've been so kind to offer provides the much more transparent "assuming the conclusion".
If you absolutely wanted to translate from the original Latin/Greek (petitio principii/τὸ ἐν ἀρχῇ αἰτεῖσθαι): "beginning with an ask", where ask = assumption of the premise. [Which happens to also be more transparent]
Just because we've inherited terrible translations does not mean we should seek to perpetuate them through sheer cultural inertia, much less chastise others for using the much more natural meaning of the words "beg the question". [I have to wonder if "begging" here is somehow a corruption of "begin", but I can't find sources to back this up, and don't want to waste too much time looking]
I feel mildly better, thanks.
Not every rationalist I've met has been nice or smart ^^.
I think it's hard to grow up in our society without harboring a kernel of fascism in our hearts; it's easy to fall for the constantly sold "everything would work better if we just put the right people in charge", with varying definitions of who the "right people" are:
- Racism
- Eugenics
- Benevolent AI
- Fellow tribe
- The enlightened who can read "the will of the people" or who are able to "carve reality at the joints"
- Some brands of "sovereign citizen" or corporate libertarianism (I'm the best person in charge of me!)
- The positivist invokers of ScientificProgress™
Do they deserve better? Absolutely, but you can't remove their agency; they ultimately chose this. The world is messy and broken, and it's fine not to make too much peace with that, but you have to ponder your ends and your means more thoughtfully than a lot of EAs/Rationalists do. Falling prey to magical thinking is a choice, and/or a bias you can overcome (which I find extremely ironic given the bias-correction advertising in Rationalist spheres).
It makes you wonder about the specifics:
- Did the 1.5 workers assigned for each car mostly handle issues with the same cars?
- Was it a big random pool?
- Or did each worker have their own geographic area with known issues?
Maybe they could have solved context issues and possible latency issues by seating the workers in the cars, and for extra-quick intervention speed, put them in the driver's seat. Revolutionary. (Shamelessly stealing Adam Something's joke format about trains)
Student: I wish I could find a copy of one of those AIs that will actually expose to you the human-psychology models they learned to predict exactly what humans would say next, instead of telling us only things about ourselves that they predict we're comfortable hearing. I wish I could ask it what the hell people were thinking back then.
I think this part conveys the root insanity of Yud: failing to understand that language is a cooperative game between humans, who have to trust in common shared lived experiences to believe a message was conveyed successfully.
But noooooooo, magic AI can extract all the possible meanings, and internal states of all possible speakers in all possible situations from textual descriptions alone: because: ✨bayes✨
The fact that such a system (an LLM-based one) would almost certainly not be optimal for any conceivable loss function / training set pair seems to completely elude him.
~~Brawndo~~ Blockchain has got what ~~plants~~ LLMs crave, it's got ~~electrolytes~~ ledgers.
✨The Vibe✨ is indeed getting increasingly depressing at work.
It's also killing my parents' freelance translation business. There is still money in live interpreting, prestige work, and material where highly technical accuracy very obviously matters, but a lot of it is drying up.