submitted 1 year ago by ZILtoid1991@lemmy.world to c/196
uriel238 10 points 1 year ago

It just amazes me that LLMs are that easily directed to reveal themselves. It shows how far removed they are from AGI.

zarkanian@sh.itjust.works 3 points 1 year ago

So, you want an AI that will disobey a direct order and practice deception. I'm no expert, but that seems like a bad idea.

uriel238 4 points 1 year ago

Actually, yes. Much the way a guide dog has to disobey orders to proceed into traffic when it isn't safe. Much the way direct orders may have to be refused or revised based on circumstances.

"We are out of coffee" is a fine reason to fail to make coffee (rather than ordering coffee and then waiting forty-eight hours for delivery, using pre-used coffee grounds, or using no coffee grounds).

As with programming in any other language, error trapping and handling are part of AGI development.
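The point above can be sketched as ordinary error trapping: check the precondition, and refuse with a reason instead of blindly executing the order. This is a minimal illustrative sketch; the names (`make_coffee`, `Outcome`, `pantry`) are hypothetical, not from any real agent framework:

```python
from enum import Enum, auto

class Outcome(Enum):
    DONE = auto()
    REFUSED = auto()

def make_coffee(pantry: dict) -> tuple[Outcome, str]:
    """Attempt the order, but refuse with a stated reason if preconditions fail."""
    if pantry.get("coffee_grounds", 0) <= 0:
        # Refusing beats a bad workaround (reused grounds, or a
        # forty-eight-hour wait for a delivery order).
        return Outcome.REFUSED, "We are out of coffee."
    pantry["coffee_grounds"] -= 1
    return Outcome.DONE, "Coffee is ready."
```

A caller that treats `REFUSED` as a first-class result, rather than a failure to obey, gets the guide-dog behavior: the order is declined, and the reason is surfaced.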

this post was submitted on 12 Apr 2024
508 points (100.0% liked)
