this post was submitted on 06 Apr 2025
21 points (100.0% liked)
TechTakes
:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.
Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
that's CFAR cult jargon right?
Not sure! What is CFAR?
Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.
Edit: still hosts, present tense. I had misremembered news of some other rationality-adjacent institution shutting down; nope, they are still going strong, offering regular 4-day ~~brainwashing sessions~~ workshops.
Mesa-optimization? I'm not sure who in the lesswrong sphere coined it... but yeah, it's one of their "technical" terms that don't actually have any academic publishing behind them, so jargon.
Instrumental convergence.... I think Bostrom coined that one?
The AI alignment forum has a claimed origin here. Is anyone on the article here from CFAR?
Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask google translate to give you the clunkiest, most pedestrian, and also wrong-part-of-speech Greek term to use in place of 'in' instead?
Also natural selection is totally like gradient descent brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before the term just came to mean lying chatbot.
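And the distinction is easy to show: an evolutionary algorithm needs no gradient at all, just mutation and selection, while gradient descent follows an analytic derivative. A minimal sketch (the toy objective and all names are mine, not anyone's actual method):

```python
import random

def f(x):
    return x * x  # toy objective: minimize x^2

# Gradient descent: step along the analytic gradient f'(x) = 2x.
def gradient_descent(x, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * 2 * x
    return x

# (1+1) evolution strategy: mutate, keep the child only if it is fitter.
# No gradients anywhere -- selection does all the work, like natural selection.
def evolution_strategy(x, sigma=0.5, steps=200, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        child = x + rng.gauss(0, sigma)
        if f(child) < f(x):
            x = child
    return x

print(gradient_descent(5.0))    # converges toward 0
print(evolution_strategy(5.0))  # also heads toward 0, via mutation + selection
```

Both end up near the minimum here, but the evolutionary loop only ever compares fitness values, which is exactly why it was its own subcategory of AI rather than a flavor of gradient descent.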
I'm thinking they hired Jar-Jar Binks to the team.
Mesa-optimization... that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects, abrasions, deviations, contusions...
and you wrap it up with some linux 3d graphics lib hacking