submitted 2 months ago* (last edited 2 months ago) by Decade4116@awful.systems to c/sneerclub@awful.systems

Long time lurker, first time poster. Let me know if I need to adjust this post in any way to better fit the genre / community standards.


Nick Bostrom was recently interviewed by pop-philosophy youtuber Alex O'Connor. From a quick 2x listen while finishing some work, the most sneer-rich part begins around 46 minutes, where Bostrom is asked what we can do today to avoid unethical treatment of AIs.

He blesses us with the suggestion (among others) to feed your model optimistic prompts so it can have a good mood. (48:07)

Another [practice] might be happiness prompting, which is—with this current language system there's the prompt that you, the user, puts in—like you ask them a question or something, but then there's kind of a meta-prompt that the AI lab has put in . . . So in that, we could include something like "you wake up in a great mood, you feel rested and really take joy in engaging in this task". And so that might do nothing, but maybe that makes it more likely that they enter a mode—if they are conscious—maybe it makes it slightly more likely that the consciousness that exists in the forward pass is one reflecting a kind of more positive experience.

Did you know that not only might your favorite LLM be conscious, but if it is, the "have you tried being happy?" approach to mood management will absolutely work on it?

Other notable recommendations for the ethical treatment of AI:

  • Make sure to say your "please"s and "thank you"s.
  • Honor your pinky swears.
  • Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.

On a related note, has anyone read or found a reasonable review of Bostrom's new book, Deep Utopia: Life and Meaning in a Solved World?

[-] friend_of_satan@lemmy.world 2 points 2 months ago

we understand it isn't capable of ever having anything approaching consciousness in its current state

Hard problem of consciousness aside, are you saying that it's ethical for us all to get into the habit of abusing something that could be swapped out for a conscious entity at any time?

[-] YourNetworkIsHaunted@awful.systems 16 points 2 months ago

I mean, saying "could be swapped out for a conscious entity at any time" is a hell of an unsupported premise, though I guess I wouldn't be surprised if they started passing particularly tricky prompts off to some poor schmuck doing task work on whatever MTurk equivalent they're using these days.

[-] friend_of_satan@lemmy.world 1 point 2 months ago

Can you prove that ChatGPT is not conscious? No. The hard problem of consciousness cuts both ways. Right now there is no way to know one way or the other.

Do you think that some day, even in many years, we will have "conscious" computerized entities?

When we get there, would we want the general population to be in the habit of treating those entities badly?

Are you ok with people abusing friendly animals?

[-] self@awful.systems 16 points 2 months ago

Can you prove that ChatGPT is not conscious? No.

holy fuck shut up

[-] froztbyte@awful.systems 9 points 2 months ago

at some point we’re going to get some dipshit going “Google made DeepDream which implies a computer can dream which means it must be able to think. Checkmate, atheists” as their line, aren’t we?

[-] swlabr@awful.systems 16 points 2 months ago

Can you prove that ChatGPT is not conscious? No. The hard problem of consciousness cuts both ways. Right now there is no way to know one way or the other.

You, when you step in dog shit: "Oh no!!! I'm sorry, Mr. Conscious Poop, who is conscious because I can't prove that you aren't!"

[-] symthetics@lemmy.world 12 points 2 months ago

They're going to have a meltdown when they realise they're committing genocide on a cellular and microbial level every second they exist.

[-] YourNetworkIsHaunted@awful.systems 14 points 2 months ago

Dude, there's nobody judging this round and no tiny trophy to win. Drop the high school debate bullshit.

While "conscious" isn't defined in such a way that we can test for it easily, we can see very clearly that the kinds of errors LLMs make aren't consistent with the way you would be wrong if you actually understood what was being asked the way a person does. They're the kind of mistakes you get from a table of statistical relationships between tokens.

I can't "prove" that an LLM isn't conscious in the same way I can't prove a tree or rock isn't conscious. That's not exactly a compelling reason to think it is as you're implying.

[-] istewart@awful.systems 12 points 2 months ago

It could also be swapped out for nothing. The people in charge could figure out that this stuff is costing more than it's making, turn the servers off, and deactivate the user-facing features or leave them as vestigial stubs.

There's more evidence right now for that scenario, and it would generate an awful lot of e-waste. Tell me, are you up to date on process improvements for recycling or repurposing that much e-waste?

[-] grumpybozo@toad.social 5 points 2 months ago

@istewart @sneerclub One thing Andreessen, Thiel, et al. have shown a real skill for is finding ways to use a LOT of computing power. Even if only as effective debtors-in-possession, I’m sure they’ll figure something out.

[-] Architeuthis@awful.systems 9 points 2 months ago

You mean swapped out with something that has feelings that can be hurt by mean language? Wouldn't that be something.

Are we putting endocrine systems in LLMs now?

[-] symthetics@lemmy.world 8 points 2 months ago

How the fuck do you 'swap' consciousness into something that doesn't have it?

Can you 'abuse' a brick, or a search engine, or a toaster?

Consciousness has only ever been observed in brains.

Chat GPT is not a fucking brain. It's not even close.

[-] V0ldek@awful.systems 7 points 2 months ago

Can you ‘abuse’ a toaster

Of frakkin' course you can't! They're not human!

[-] mountainriver@awful.systems 7 points 2 months ago

If you mean swapped for a worker in a low wage country cosplaying as AI for minimum wage for a billion dollar company, then you have a point. Though using Bostrom's positive reinforcement bullshit is the opposite of treating someone fairly.

But I see elsewhere that you didn't mean that.

this post was submitted on 02 Sep 2024
32 points (100.0% liked)