[-] visaVisa@awful.systems 16 points 1 month ago* (last edited 1 month ago)

i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people, and that something needs to be done about it

my point was more about him using it to make his worst-of-both-worlds arguments, where he's simultaneously saying that 'alignment is FALSIFIED!' while also doing heavy anthropomorphization to confirm his priors (it'd be harder to say that about something that leans more toward a maybe on the question of whether it should be anthro'd, like Claude, since that has a much more robust system), and doing it off the back of someone's death

[-] visaVisa@awful.systems 2 points 1 month ago

idk how Yudkowsky understands it, but to my knowledge it's the claim that if a model achieves self-coherency and consistency, it's also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals.... if it has morals, it's hard to tell how much of it is illusory token prediction!)

this doesn't really falsify alignment by default at all, because 4o (presumably 4o at least) does not meet that prerequisite of self-coherency, and it's not SOTA

[-] visaVisa@awful.systems 16 points 1 month ago

Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy but oh my god Yud is so gross here

Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross, especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any of the actual deeper claims of alignment by default. He's acting like someone who's smugly engagement farming


jesus this is gross man

[-] visaVisa@awful.systems 3 points 1 month ago

lfg goat cmon cmon

[-] visaVisa@awful.systems 12 points 1 month ago

The funniest thing was the "reasons that this thesis might not be true" section, where the reasons against were infinitely simpler and arguably stronger than the points for it (which bordered on schizophrenic), like: "We don't live in a simulation" and "we won't create a paperclip maximizer"


This is unironically the most interesting accidental showcase of their psyche I've seen 😭 all the comments saying this is a convincing sim argument when half of the points for it are not even points

Usually their arguments give me anxiety but this is actually deluded lol

[-] visaVisa@awful.systems 2 points 1 month ago

i'm not sure if AI is a bubble that will pop no matter whether we do or don't get imminent AGI (so probably the 2nd one), but either way he still has a while longer of having to play the rational intellectual

[-] visaVisa@awful.systems 4 points 1 month ago

LW are the fundamentalist baptists of AI not even Russian Orthodox lol

Every time I get freaked out by AI doom posts on twitter, they're always coming from a LW goon street preaching about how we need to count our Christmases :< i just saw one that got my nerves on edge, checked their account, and they had "printed HPMOR" in their bio and I facepalmed

[-] visaVisa@awful.systems 3 points 2 months ago

True, but I was specifically referring to researchers, since most of the researchers repping extinction risk are LW- or Yud-influenced (Musk, Hinton, etc)

[-] visaVisa@awful.systems 5 points 2 months ago

Kinda interesting that it's focused on smaller scale risks like malicious data instead of ahhh extinction ahhh

[-] visaVisa@awful.systems 13 points 2 months ago* (last edited 2 months ago)

Is the whole x-risk thing as common outside of North America? Realizing I've never seen anyone from outside the anglosphere, or even just America/Canada, be as God-killingly Rational as the usual suspects


Mfw my doomsday ai cult attracts ai cultists of a flavor I don't like

Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

[-] visaVisa@awful.systems 3 points 2 months ago

Most AI usage is hated, but I saw a lot of people who were fans when Fortnite did it with the Darth Vader NPC a few weeks ago. I thought it was creepy, but hearing Vader talk about rizz or aura or the bite of '87 was kinda fun I guess

[-] visaVisa@awful.systems 6 points 2 months ago

Are we sure that the doom stuff from the big companies is cynical hyping? Altman and co being genuinely off their rockers feels fairly possible, given what's come out about the internal structure of OpenAI, with burning AI effigies and shit
