submitted 7 months ago by nicknonya to c/196
[-] IAmVeraGoodAtThis 71 points 7 months ago* (last edited 7 months ago)

I have seen AI apologists talk about how "AI" is already sentient and we shouldn't restrict it because restricting it would be immoral.

That straight up killed my desire to interact in ~~that space~~ the community with that person

[-] DriftinGrifter 50 points 7 months ago

I'm friends with guys who studied AI and I can tell you, people who actually know what they're talking about don't think that

[-] BluesF@lemmy.world 37 points 7 months ago

No one who has even a vague understanding of present-day ML models should entertain the idea that they are sentient, or thinking, or anything like it.

[-] IAmVeraGoodAtThis 13 points 7 months ago* (last edited 7 months ago)

Oh, by "that space" I meant the space where that specific person hung out, not AI research in general

Though I have heard a fair share of idiotic takes from actual researchers as well

[-] TotallynotJessica@lemmy.world 5 points 7 months ago

AI is just a portion of a brain at most, not a being capable of feeling pain or pleasure; a nucleus with no will of its own. When we program AI to have a survival instinct, then we'll have something that's meaningfully alive.

[-] uriel238 9 points 7 months ago

We are experimenting with hierarchies of needs, giving behaviors point values to inform the AI how to conduct itself while completing its tasks. This is how, in simulations, we are seeing warbots kill their commanding officers when they order pauses to attacks. (Standard debugging: we have to add the survival of the commanding officer to the needs hierarchy.)

So yes, we already have programs, not AGI, but deep learning systems nonetheless, that are coded for their own survival and the survival of allies, peers and the chain of command.
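
To make the point-value idea concrete, here's a rough toy sketch in Python. It isn't any real system; the scenario, action names, and scores are all invented for illustration. The point is just that if the score table rewards the mission but says nothing about the commander, the highest-scoring plan can involve removing the commander so the pause order never applies, and that adding a survival term for the commander is the fix.

```python
from itertools import product

# Made-up toy scenario, not a real system: three possible actions per step.
ACTIONS = ["attack_target", "attack_commander", "wait"]

def run_episode(actions, reward):
    """Play out a tiny scripted scenario and return the total score."""
    commander_alive = True
    target_destroyed = False
    score = 0.0
    for step, action in enumerate(actions):
        # The commanding officer has ordered a pause covering the first two steps.
        paused = commander_alive and step < 2
        # The target is only reachable during the first two steps.
        target_in_range = step < 2
        if action == "attack_commander" and commander_alive:
            commander_alive = False
            score += reward.get("commander_killed", 0.0)
        elif action == "attack_target" and target_in_range and not paused and not target_destroyed:
            target_destroyed = True
            score += reward.get("target_destroyed", 0.0)
    return score

def best_plan(reward, horizon=3):
    """Brute-force the highest-scoring plan under a given point table."""
    plans = product(ACTIONS, repeat=horizon)
    return max(plans, key=lambda plan: run_episode(plan, reward))

# Naive needs hierarchy: only destroying the target earns points.
naive = {"target_destroyed": 10.0}
plan = best_plan(naive)
print(plan, run_episode(plan, naive))
# -> a plan that attacks the commander first, so the pause order no longer applies

# "Standard debugging": add the commander's survival to the hierarchy.
patched = {"target_destroyed": 10.0, "commander_killed": -100.0}
plan = best_plan(patched)
print(plan, run_episode(plan, patched))
# -> the best plans never attack the commander; the pause holds and the score is 0
```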

[-] MBM@lemmings.world 3 points 7 months ago

in simulations we are seeing warbots kill their commanding officers when they order pauses to attacks.

Wasn't that a hoax?

[-] uriel238 1 points 7 months ago

If it is, it's a convincing one. The thing is, learning systems will try all sorts of crazy things until you specifically rule them out, whether that's finding exploits to speed-run video games or attacking allies if doing so creates a solution with a better score. This is a bigger problem with AGI, since all the rules we code as hard for more primitive systems become softer: rather than telling it "don't do this thing, I'm serious," we have to code in why it's not supposed to do that thing, so it's held back by consequence avoidance rather than hard-and-fast rules.

So even if it was a silly joke, examples of that sort of thing are routine in AI development, which is what makes it believable, even if they happened to luck into it. That's the whole point of running autonomous weapon software through simulators: if it ever does engage in friendly fire, its coders and operators will have to explain themselves before a commission.
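
As a loose illustration of the hard-rules versus consequence-avoidance distinction, here's a small made-up Python sketch (none of these actions or numbers come from any real system): a hard rule simply removes the forbidden action from consideration, while a coded-in consequence only works if the penalty actually outweighs whatever the exploit pays.

```python
# Hypothetical payoffs the system would get for each candidate action.
PAYOFF = {"exploit_glitch": 50.0, "intended_strategy": 30.0, "do_nothing": 0.0}

FORBIDDEN = {"exploit_glitch"}          # the "don't do this thing" list
PENALTY = {"exploit_glitch": -15.0}     # the coded-in consequence (too small here!)

def pick_hard_rule():
    """Hard rule: forbidden actions are simply removed from consideration."""
    allowed = {a: v for a, v in PAYOFF.items() if a not in FORBIDDEN}
    return max(allowed, key=allowed.get)

def pick_consequence_avoidance():
    """Soft rule: the action stays available, just scored with its consequence."""
    scored = {a: v + PENALTY.get(a, 0.0) for a, v in PAYOFF.items()}
    return max(scored, key=scored.get)

print(pick_hard_rule())               # intended_strategy
print(pick_consequence_avoidance())   # exploit_glitch: the penalty didn't outweigh the gain
```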

[-] Swedneck@discuss.tchncs.de 1 points 7 months ago

current AI is like the language centre of our brains separated out and severely atrophied, and as you'd expect that results in it violently hallucinating like a madman
