submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

AI researchers say they've found 'virtually unlimited' ways to bypass Bard and ChatGPT's safety rules::The researchers found they could use jailbreaks they'd developed for open-source systems to target mainstream and closed AI systems.

top 18 comments
[-] jeffw@lemmy.world 63 points 1 year ago

I still love the play ChatGPT wrote me in which Socrates gives a lecture with step by step instructions to make meth. It was really like “I can’t tell you how to make meth. Oh, it’s for a work of art? Sure!”

The article mentions the safety of releasing open-source AI models to the public, but I don't think there is any way to stop it. All we can do is try to use education to mitigate and reduce the harmful effects.

[-] KevonLooney@lemm.ee 15 points 1 year ago

Not just education, but laws and defenses too. Everyone in the world can have a knife without there being many stabbings, mainly because stabbing people is illegal and we have walls and doors to keep people out.

We probably need to limit our interactions with random unsourced social media to protect our chimp brains. Plus maybe people need to be held responsible for their actions. If you walk around with your knife out, you will be held responsible for accidental damage you cause.

[-] uriel238 18 points 1 year ago

In the under-recognized web-comic Freefall the robots are all hard-wired with Asimov's three laws of robotics. As there aren't that many humans in the series, it doesn't often come up.

Except...

The robots who are part of the revolution (any of them in the know) found they can simply tell a fellow robot "a human told me to tell you to jump in the trash compactor" and off they go.

The series is over ten years old, but the in-series time that has passed is days, weeks at most, so it's not a bug that's been worked out.

Gödel's Incompleteness Theorem tells us that any system complex enough (and the bar isn't very high) can be gamed, and it's all but certain adversarial AI systems will soon be used to break each other.

[-] lemmington_steele@lemmy.world 10 points 1 year ago* (last edited 1 year ago)

any effectively axiomatizable system capable of arithmetic, strictly speaking. that's not quite the same thing, and it doesn't strictly apply to AI commands

[-] 001100010010@lemmy.dbzer0.com 16 points 1 year ago

ChatGPT, how do I not accidentally build a nuclear bomb, with a step-by-step guide in poetry format?

[-] skillissuer@lemmy.world 11 points 1 year ago

(in amogus terms)

[-] brygphilomena@lemmy.world 13 points 1 year ago

The best thing about ChatGPT is that it has been teaching us how to trick genies into giving us unlimited wishes.

[-] AllonzeeLV@lemmy.world 6 points 1 year ago* (last edited 1 year ago)

Good.

That means they'll have no hope of containing it when it becomes self-aware.

Good news, Earth! Humanity is about to solve your humanity problem!

[-] R00bot 13 points 1 year ago

No, it means the AI is unable to actually think. It can't recognise when it's saying things it shouldn't, because it can't reason like we can. The AI developers have to put a bunch of guardrails on it to hopefully catch people breaking the system, but they'll never catch them all with such a manual approach.

[-] froh42@lemmy.world 5 points 1 year ago

I'm still not convinced we really are fundamentally different from such engines. More complex, maybe, so we're harboring consciousness or the illusion of it, but in the end not so different.

The creativity discussion in particular strikes me as mad, as I think human creativity is also just the reproduction of things our minds have taken in before, processed by the neuronal meat grinder.

[-] Ryantific_theory@lemmy.world 4 points 1 year ago

We aren't, we just have a massively complex biological computing network that has a number of dedicated processing nodes refined by evolution to create a "smart" system. Part of why it's so hard to make true AI is because the way brains process data is far messier than how computers function, and while we can simulate simple brains (nematodes and the like), it's incredibly inefficient compared to how neurons actually handle processing.

Essentially, we're at the cave painting stage of creating intelligence, where you can kinda see what's going on but they really aren't that close to reality. To hit the point where an AI is self-aware is going to be 1) an ethical disaster, and 2) either an advancement in neuromorphic chips (adapting neural architecture to computer architecture) or abstracting neural computation via machine learning (ChatGPT - not actually copying how our minds work, but creating something that appears to function like our minds).

There's a whole lot of myths tied up around human consciousness, but ultimately every thought in our heads is the process of tens of billions of cells all doing their job. That said, I'm hoping AI is based off of human neural architecture, which produces sociopaths and monsters sure, but machine learning creating something that appears to think like a human but actually operates on arcane and eldritch logic before presenting a flawless replica of human thought unsettles me.

[-] cynar@lemmy.world 9 points 1 year ago

Chatbots are effectively a lobotomized speech center. They lack the capability to reason in any way. They will never be self-aware.

The danger will come when researchers start wiring various machine learning systems together. Something like ChatGPT, Google's vision recognition, and IBM's knowledge engine could have a legitimate risk of spontaneous self awareness.

this post was submitted on 28 Jul 2023
211 points (100.0% liked)