submitted 7 months ago* (last edited 7 months ago) by comfydecal@infosec.pub to c/privacy@lemmy.ml

Is it fairly easy to do? It seems useful for a public site like Lemmy and the fediverse.

https://nightshade.cs.uchicago.edu/whatis.html

https://decrypt.co/203153/ai-prompt-data-poisoning-nightshared
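For anyone wondering what "poisoning" an image actually involves, here is a minimal PyTorch sketch of the general idea only: optimize a barely visible perturbation so the image's features (under some pretrained encoder) drift toward an unrelated concept. This is an illustrative toy, not Nightshade's actual algorithm; the encoder choice, file names, step count, and epsilon are all assumptions, and ImageNet normalization is omitted for brevity.

```python
# Toy feature-space poisoning sketch (NOT Nightshade's real method).
# Nudges an image's encoder features toward an unrelated "target" image
# while keeping the pixel-level change small enough to be hard to notice.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
encoder = resnet50(weights=ResNet50_Weights.DEFAULT).eval().to(device)
encoder.fc = torch.nn.Identity()  # use penultimate-layer features

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

def load(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

source = load("my_artwork.png")         # placeholder: image you plan to post
target = load("unrelated_concept.png")  # placeholder: image of a different concept

with torch.no_grad():
    target_feat = encoder(target)

delta = torch.zeros_like(source, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255  # max per-pixel change; small enough to be near-imperceptible

for step in range(200):
    poisoned = (source + delta).clamp(0, 1)
    # pull the poisoned image's features toward the unrelated target's features
    loss = torch.nn.functional.mse_loss(encoder(poisoned), target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)  # keep the perturbation within the budget

poisoned = (source + delta).detach().clamp(0, 1)  # image you would upload instead
```

The real tool works against text-to-image training pipelines and is considerably more involved; this sketch is only meant to show why the change can be invisible to people while still misleading a model that trains on the scraped copy.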

[-] GrappleHat@lemmy.ml 22 points 7 months ago

I'm very skeptical that this "model poisoning" approach will work in practice. To pull it off would require a very high level of coordination among disparate people generating the training data (the images/text). I just can't imagine it happening. Add to that: big tech has A LOT of resources to play this cat & mouse game.

I hope I'm wrong, but I predict big tech wins here.

[-] General_Effort@lemmy.world 3 points 7 months ago

This attack doesn't target Big Tech at all. The model has to be open to pull off an attack like that.
