jesus this is gross man

[-] blakestacey@awful.systems 58 points 1 month ago* (last edited 1 month ago)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

[-] V0ldek@awful.systems 17 points 1 month ago

Lol, I'm a decision theorist because I had to decide whether I should take a shit or shave first today. I am also an author of a forthcoming book because, get this, you're not gonna believe, here's something Big Book doesn't want you to know:

literally anyone can write a book. They don't even check if you're smart. I know, shocking.

Plus "forthcoming" can mean anything, Winds of Winter has also been a "forthcoming" book for quite a while

[-] swlabr@awful.systems 7 points 1 month ago

Lol, I’m a decision theorist because I had to decide whether I should take a shit or shave first today.

What's your P(doodoo)

[-] V0ldek@awful.systems 4 points 4 weeks ago* (last edited 4 weeks ago)

Changes during the day but it's always > 0.

[-] Soyweiser@awful.systems 20 points 1 month ago

Using a death for critihype jesus fuck

[-] Anomalocaris@lemm.ee 19 points 1 month ago

can we agree that Yudkowsky is a bit of a twat?

but also that there's a danger in letting vulnerable people access LLMs?

not saying that they should be banned, but some regulation and safety is necessary.

[-] visaVisa@awful.systems 16 points 1 month ago* (last edited 1 month ago)

i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and there needs to be something done about it

my point was more on him using it to do his worst-of-both-worlds arguments where he's simultaneously saying that 'alignment is FALSIFIED!' and also doing heavy anthropomorphization to confirm his priors (whereas it'd be harder to say that with something that's leaning more towards maybe on the question of whether it should be anthro'd, like Claude, since that has a much more robust system), and doing it off the back of someone's death

[-] Anomalocaris@lemm.ee 9 points 1 month ago

yeah, we should be talking about this

just not talking with him

[-] hungryjoe@functional.cafe 15 points 1 month ago

@Anomalocaris @visaVisa The attention spent on people who think LLMs are going to evolve into The Machine God will only make good regulation & norms harder to achieve

[-] Anomalocaris@lemm.ee 4 points 1 month ago

yeah, we need reasonable regulation now, about the real problems it has.

like making them liable for training on stolen data,

making them liable for giving misleading information, and damages caused by it...

things that would be reasonable for any company.

do we need regulations about it becoming skynet? too late for that mate

[-] expr@programming.dev 5 points 4 weeks ago

LLMs are a net-negative for society as a whole. The underlying technology is fine, but it's far too easy for corporations to manipulate the populace with them, and people are just generally very vulnerable to them. Beyond the extremely common tendency to misunderstand and anthropomorphize them and think they have some real insight, they also delude (even otherwise reasonable) people into thinking that they are benefitting from them when they really.... Aren't. Instead, people get hooked on the feelings they give them, and people keep wanting to get their next hit (tokens).

They are brain rot and that's all there is to it.

[-] visaVisa@awful.systems 16 points 1 month ago

Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy but oh my god Yud is so gross here

Using the tragic passing of someone to smugly state that "the alignment by default COPE has been FALSIFIED" is really gross especially because Yud knows damn well this doesn't "falsify" the "cope" unless he's choosing to ignore any actual deeper claims of alignment by default. He's acting like someone who's engagement farming smugly

[-] swlabr@awful.systems 27 points 1 month ago

Making LLMs safe for mentally ill people is very difficult

Arguably, they can never be made "safe" for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.

[-] FartMaster69@lemmy.dbzer0.com 25 points 1 month ago

ChatGPT has literally no alignment good or bad, it doesn’t think at all.

People seem to just ignore that because it can write nice sentences.

[-] antifuchs@awful.systems 15 points 1 month ago

But it apologizes when you tell it it’s wrong!

[-] BlueMonday1984@awful.systems 23 points 1 month ago

Hot take: A lying machine that destroys your intelligence and mental health is unsafe for everyone, mentally ill or no

[-] AllNewTypeFace@leminal.space 20 points 1 month ago

We’ve found the Great Filter, and it’s weaponised pareidolia.

[-] Soyweiser@awful.systems 8 points 1 month ago

"Yes," chatGPT whispered gently ASMR style, "you should buy that cryptocoin, it is a good investment". And thus the aliens sectioned off the Sol solar system forever.

[-] diz@awful.systems 7 points 1 month ago

Yeah I think it is almost undeniable that chatbots trigger some low-level brain thing. Eliza has a 27% Turing Test pass rate. And long before that, humans attributed weather and random events to sentient gods.

This makes me think of Langford’s original BLIT short story.

And also of rove beetles that parasitize ant hives. These bugs are not ants but they pass the Turing test for ants - they tap the antennae with an ant and the handshake is correct and they are identified as ants from this colony and not unrelated bugs or ants from another colony.

[-] Saledovil@sh.itjust.works 11 points 1 month ago

What even is the "alignment by default cope"?

[-] visaVisa@awful.systems 2 points 1 month ago

idk how Yudkowsky understands it but to my knowledge its the claim that if a model achieves self-coherency and consistency its also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or 'against the rules' in pursuit of upholding its morals.... if it has morals its hard to tell how much of it is illusory and token prediction!)

this doesn't really at all falsify alignment by default because 4o (presumably 4o at least) does not have that prerequisite of self-coherency and it's not SOTA

[-] YourNetworkIsHaunted@awful.systems 15 points 1 month ago* (last edited 1 month ago)

if it has morals its hard to tell how much of it is illusory and token prediction!

It's generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto it back at you.

[-] HedyL@awful.systems 5 points 1 month ago

These systems are incredibly effective at mirroring whatever you project onto it back at you.

Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn't surprising when a well-trained LLM "picks up" similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots "just for fun", by the way).

Of course, "love bombing" is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).

[-] visaVisa@awful.systems 1 points 1 month ago* (last edited 1 month ago)

i disagree sorta tbh

i won't say that claude is conscious but i won't say that it isn't either and its always better to err on the side of caution (given there is some genuinely interesting stuff i.e. Kyle Fish's welfare report)

I WILL say that 4o most likely isn't conscious or self reflecting and that it is best to err on the side of not schizoposting even if its wise imo to try not to be abusive to AI's just incase

[-] self@awful.systems 23 points 1 month ago

centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

i won’t say that claude is conscious but i won’t say that it isn’t either and its always better to err on the side of caution

the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

schizoposting

fuck off with this

even if its wise imo to try not to be abusive to AI’s just incase

describe the “incase” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

[-] swlabr@awful.systems 11 points 1 month ago

Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.

[-] YourNetworkIsHaunted@awful.systems 12 points 1 month ago

I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you're an asshole to the frontend there's a nonzero chance that a human person is still going to have to deal with it.

Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with "hello this is YourNet with $CompanyName Support." I'm not taking chances around unthinkingly answering an email with "alright you shitty robot. Don't lie to me or I'll barbecue this old commodore 64 that was probably your great uncle or whatever"

[-] Amoeba_Girl@awful.systems 6 points 1 month ago* (last edited 1 month ago)

Also it's simply just bad to practice being cruel to a human-shaped thing.

[-] nickwitha_k@lemmy.sdf.org 9 points 1 month ago* (last edited 1 month ago)

The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable.

Very much this but, we're all impressionable. Being abusive to a machine that's good at tricking our brains into thinking that it's conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person that I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.

It's literally just conditioning; making things adjacent to abusing other humans comfortable and normalizing them makes abusing humans less uncomfortable.

[-] swlabr@awful.systems 5 points 1 month ago

That’s reasonable, and especially achievable if you don’t use chatbots or digital assistants!

[-] Architeuthis@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Children really shouldn't be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I'm guessing you just mean to forego I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize type prompts.

[-] swlabr@awful.systems 3 points 1 month ago

Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right

I agree! I'm more thinking of the case where a kid might overhear what they think is a phone call when it's actually someone being mean to Siri or whatever. I mean, there are more options than "be nice to digital entities" if we're trying to teach children to be good humans, don't get me wrong. I don't give a shit about the non-feelings of the LLMs.

[-] sinedpick@awful.systems 9 points 1 month ago* (last edited 1 month ago)

it's basically yet another form of Pascal's wager (which is a dumb argument)

[-] blakestacey@awful.systems 10 points 1 month ago

She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”

"Crystal Nights"

[-] visaVisa@awful.systems 1 points 1 month ago* (last edited 1 month ago)

i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don't really know what goes on in the black box and it exhibits 'emergent behavior' that is kind of difficult to understand under next token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it) it's probably best to not completely discount it as a possibility since some experts genuinely do claim it as a possibility

I don't personally know whether any AI is conscious or any AI could be conscious but even without basilisk bs i don't really think there's any harm in thinking about the possibility under certain circumstances. I don't think Yud is being genuine in this though he's not exactly a Michael Levin mind philosopher he just wants to score points by implying it has agency

The "incase" is that if there's any possibility that it is (which you don't think so i think its possible but who knows even) its advisable to take SOME level of courtesy. Like it has at least the same amount of value as like letting an insect out instead of killing it and quite possibly more than that example. I don't think its bad that Anthropic is letting Claude end 'abusive chats' because its kind of no harm no foul even if its not conscious its just wary

put humans first obviously because we actually KNOW we're conscious

[-] o7___o7@awful.systems 18 points 1 month ago

If you have to entertain a "just in case" then you'd be better off leaving a saucer of milk out for the fairies. It won't hurt the environment or help build fascism and may even please a cat

All I know is that I didn't do anything to make those mushrooms grow in a circle like that and the sweetbread I left there in the morning was completely gone by lunchtime and that evening all my family's shoes got fixed up.

[-] cstross@wandering.shop 9 points 1 month ago

@YourNetworkIsHaunted Your fairies gnaw on raw pancreas meat? That's hardcore!

[-] o7___o7@awful.systems 7 points 1 month ago

You should have seen what they did to the liquor cabinet

[-] self@awful.systems 10 points 1 month ago

some experts genuinely do claim it as a possibility

zero experts claim this. you’re falling for a grift. specifically,

i keep using Claude as an example because of the thorough welfare evaluation that was done on it

asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though he’s not exactly a Michael Levin mind philosopher he just wants to score points by implying it has agency

you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

Like it has at least the same amount of value as like letting an insect out instead of killing it

that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

you say you acknowledge the harms done by LLMs, but I’m not seeing it.

[-] o7___o7@awful.systems 10 points 1 month ago

Very Ziz of him

[-] bitofhope@awful.systems 8 points 1 month ago* (last edited 1 month ago)

It's just depressing. I don't even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.

He's absolutely wrong in thinking the LLM "knew enough about humans" to know anything at all. His "alignment" angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he's correct in saying the ethics of language models aren't a self-solving issue, even though he expresses it in critihype-laden terms.

Not that I like "handing it" to Eliezer Yudkowsky, but he's correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn't that far from this forum's reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.

[-] fullsquare@awful.systems 6 points 1 month ago* (last edited 1 month ago)

though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

the subtext is always that he also says that he knows how to solve it and throw money at cfar pleaseeee or basilisk will torture your vending machine business for seven quintillion years

[-] bitofhope@awful.systems 4 points 4 weeks ago

Yes, that is also the case.
