this post was submitted on 06 Apr 2025
TechTakes
:( Looked in my old CS dept's Discord: recruitment posts for the "Existential Risk Laboratory," which is running an intro fellowship for AI Safety.
Looked inside at the materials: fkn Bostrom and Kelsey Piper and a whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.
that's CFAR cult jargon, right?
Mesa-optimization? I'm not sure who in the LessWrong sphere coined it... but yeah, it's one of their "technical" terms with no actual academic publishing behind it, so: jargon.
Instrumental convergence... I think Bostrom coined that one?
The AI Alignment Forum has a claimed origin here. Is anyone on that article from CFAR?
Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask Google Translate for the clunkiest, most pedestrian, and also wrong-part-of-speech Greek term to use in place of 'inner' instead?
Also, natural selection is totally like gradient descent, brah, even though evolutionary algorithms actually modeled on natural selection used to be their own subcategory of AI, before the term 'AI' just came to mean lying chatbot.
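For anyone who hasn't met the distinction: the two really are different mechanisms, not just different vibes. A toy sketch (my own illustration, not anything from the fellowship materials): minimize f(x) = x² with gradient descent, which needs the derivative, and with a bare-bones (1+1)-style evolutionary loop, which only ever evaluates f. Names and parameters here are made up for the demo.

```python
import random

def f(x):
    return x * x

def grad_f(x):
    # gradient descent requires an analytic (or numeric) derivative of f
    return 2 * x

def gradient_descent(x=5.0, lr=0.1, steps=100):
    # follow the negative gradient downhill
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def evolutionary_search(x=5.0, sigma=0.5, steps=500, seed=0):
    # mutate-and-select: no gradient anywhere, just fitness comparisons
    rng = random.Random(seed)
    for _ in range(steps):
        child = x + rng.gauss(0, sigma)  # random mutation
        if f(child) < f(x):              # keep the fitter of parent/child
            x = child
    return x

print(gradient_descent())     # converges toward 0 via derivatives
print(evolutionary_search())  # also heads toward 0, derivative-free
```

Both land near the minimum on this toy problem, but the selection loop never touches grad_f, which is exactly why evolutionary computation was its own subfield rather than a footnote to gradient methods.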