860 points - submitted 21 Jan 2024 (10 months ago) by throws_lemy@lemmy.nz to c/technology@lemmy.world
[-] Even_Adder@lemmy.dbzer0.com 221 points 10 months ago

Reminder that this is made by Ben Zhao, the University of Chicago professor who stole open source code for his last data poisoning scheme.

[-] ramenshaman@lemmy.world 71 points 10 months ago

Pardon my ignorance but how do you steal code if it's open source?

[-] hperrin@lemmy.world 220 points 10 months ago

You don’t follow the license that it was distributed under.

Commonly, if you use open source code in your project and that code is under a license that requires your project to be open source if you do that, but then you keep yours closed source.

[-] fidodo@lemmy.world 43 points 10 months ago

I still wouldn't call it stealing. I guess "broke an open source license" doesn't have the same impact, but I'd prefer accuracy.

[-] bamboo@lemm.ee 92 points 10 months ago

It’s piracy: distributing copyrighted works against the terms of their license. I agree stealing is not really the right word.

[-] thanks_shakey_snake@lemmy.ca 23 points 10 months ago

I think it makes the most sense to think of it like stealing the way plagiarism is stealing.

[-] Even_Adder@lemmy.dbzer0.com 76 points 10 months ago

He took code under the GPLv3, a copyleft license that requires you to share your source code and license your project under the same terms as the code you used. You also can't distribute your project as binary-only or proprietary software. When pressed, they only released the code for their front end, remaining in violation of the GPLv3.

[-] MargotRobbie@lemmy.world 26 points 10 months ago* (last edited 10 months ago)

And as I said there, it is utterly hypocritical for him to sell snake oil to artists, allegedly to help them fight copyright violations, while committing actual copyright violations.

[-] SPRUNT@lemmy.world 105 points 10 months ago

Is there a similar tool that will "poison" my personal tracked data? Like, I know I'm going to be tracked and have a profile built on me by nearly everywhere online. Is there a tool that I can use to muddy that profile so it doesn't know if I'm a trans Brazilian pet store owner, a Nigerian bowling alley systems engineer, or a Beverly Hills sanitation worker who moonlights as a practice subject for budding proctologists?

[-] Ghostalmedia@lemmy.world 125 points 10 months ago

The only way to taint your behavioral data so that you don’t get lumped into a targetable cohort is to behave like a maniac. As I’ve said in a past comment here, when you fill out forms, pretend your gender, race, and age are fluid. Also, pretend you’re nomadic. Then behave erratically as fuck when shopping online - pay for bibles, butt plugs, taxidermy, and PETA donations.

Your data will be absolute trash. You’ll also be miserable, because you’re going to be visiting the Amazon drop-off center every week with gag balls and porcelain Jesus figurines to return.

[-] Bonehead@kbin.social 42 points 10 months ago

Then behave erratically as fuck when shopping online - pay for bibles, butt plugs, taxidermy, and PETA donations.

...in the same transaction. It all needs to be bought and then shipped together. Not only to fuck with the algorithm, but also to fuck with the delivery guy. Because we usually know what you ordered. Especially when it's in the soft bag packaging. Might as well make everyone outside your personal circle think you're a bit psychologically disturbed, just to be safe.

[-] Neato@ttrpg.network 20 points 10 months ago

How? Aren't most items in boxes even in the bags? It's not like they just toss a butt plug into a bag and ship it...right?

[-] Australis13@fedia.io 35 points 10 months ago

The browser addon "AdNauseam" can help with that, although it's not a complete solution.

[-] capital@lemmy.world 26 points 10 months ago

That and TrackMeNot.

It searches random shit in the background.

https://www.trackmenot.io/

[-] TropicalDingdong@lemmy.world 20 points 10 months ago

Is there a similar tool that will “poison” my personal tracked data? Like, I know I’m going to be tracked and have a profile built on me by nearly everywhere online. Is there a tool that I can use to muddy that profile so it doesn’t know if I’m a trans Brazilian pet store owner, a Nigerian bowling alley systems engineer, or a Beverly Hills sanitation worker who moonlights as a practice subject for budding proctologists?

Have you considered just being utterly incoherent, and not making sense as a person? That could work.

[-] SPRUNT@lemmy.world 26 points 10 months ago

According to my exes, yes.

[-] gapbetweenus@feddit.de 95 points 10 months ago

The tool's creators are seeking to make it so that AI model developers must pay artists to train on data from them that is uncorrupted.

That's not something a technical solution will work for. We need copyright laws to be updated.

[-] Even_Adder@lemmy.dbzer0.com 27 points 10 months ago
[-] federatingIsTooHard@lemmy.world 24 points 10 months ago

copyright laws need to be abolished

[-] ForgotAboutDre@lemmy.world 29 points 10 months ago

That would make it harder for creative people to produce things and make money from them. Abolishing copyright isn't the answer; we still need a system like it.

A shorter copyright period would encourage more new content, since creative industries could no longer rely on old, outdated work.

[-] gapbetweenus@feddit.de 15 points 10 months ago

That would be an update, but I'm not sure it would be a good one. As an artist, I want to be able to say where my work can be used and where it can't. It would suck to find something of mine used in fascist propaganda or something.

[-] General_Effort@lemmy.world 63 points 10 months ago

Explanation of how this works.

These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).

A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed image (the latent representation), which means it takes a less powerful computer (and uses less energy). It’s that which makes it possible to run Stable Diffusion at home.
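
To make the compression concrete, here is a minimal sketch (assuming the standard Stable Diffusion 1.x VAE from Hugging Face diffusers; the model name and shapes are just illustrative):

```python
# A 512x512 RGB image becomes a 64x64x4 latent - roughly 48x less data for
# the U-Net to work on. Sketch only; assumes torch and diffusers are installed.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
image = torch.rand(1, 3, 512, 512) * 2 - 1        # stand-in image, values in [-1, 1]
with torch.no_grad():
    latent = vae.encode(image).latent_dist.mean   # the latent representation
print(image.shape, "->", latent.shape)            # (1, 3, 512, 512) -> (1, 4, 64, 64)
```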

This attack targets the VAE. The image is altered so that the latent representation is that of a very different image, but still roughly the same to humans. Say, you take images of a cat and of a dog. You put both of them through the VAE to get the latent representation. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar for humans. So, what the actual image maker AI "sees" is very different from the image the human sees.
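
In heavily simplified code, the idea reads something like this - a sketch of the latent-mismatch attack described above, not Nightshade's actual implementation; it assumes you have the open VAE (here via diffusers) and two image tensors scaled to [-1, 1]:

```python
# Perturb the cat image so its latent drifts toward the dog's latent, while an
# L-infinity bound keeps the change small to human eyes. Hyperparameters are
# illustrative; real attacks use stronger perceptual constraints.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()
vae.requires_grad_(False)

def latent(x):
    return vae.encode(x).latent_dist.mean

def poison(cat, dog, steps=200, lr=0.01, eps=0.03):
    # cat, dog: (1, 3, H, W) tensors scaled to [-1, 1]
    delta = torch.zeros_like(cat, requires_grad=True)
    target = latent(dog).detach()
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(latent(cat + delta), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # cap how far any pixel may move
    return (cat + delta).clamp(-1, 1)
```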

Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.


I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g. the method used to check that the images still look similar to humans).

It doesn’t seem to be a very effective attack but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists that threaten digital vandalism, you may be deterred. Well, my two cents.

[-] barsoap@lemm.ee 20 points 10 months ago* (last edited 10 months ago)

So, it only works against open source AI; basically only Stable Diffusion at this point.

I very much doubt it even works against the multitude of VAEs out there. There are not just the ones derived from StabilityAI's models, but also ones simply intended to be faster (at a loss of quality): TAESD can also encode, has a completely different architecture, and is thus completely unlikely to be fooled by the same attack vector. Failing that, you can use a simple affine transformation to convert between latent and RGB space (that's what "latent2rgb" is) and compare outputs to know whether the big VAE model got fooled into generating something unrelated. That thing just doesn't have any attack surface; there are several orders of magnitude too few weights in there.

Which means that there's an undefeatable way to detect that the VAE was defeated. Which means it's only a matter of processing power until Nightshade is defeated, no human input needed. They'll of course again train and try to fool the now hardened VAE, starting another round, ultimately achieving nothing but making the VAE harder and harder to defeat.
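
For reference, latent2rgb really is just a tiny linear map, which is why there is nothing to attack. A rough sketch of the comparison (the coefficients are the commonly circulated approximation for SD 1.x latents, and the threshold is made up):

```python
# Decode the latent with a 4x3 affine map and compare it to a downscaled copy
# of the original image. A large mismatch suggests the full VAE was fed a
# poisoned image. Sketch only; exact scaling of the coefficients varies.
import torch
import torch.nn.functional as F

LATENT2RGB = torch.tensor([
    [ 0.298,  0.207,  0.208],
    [ 0.187,  0.286,  0.173],
    [-0.158,  0.189,  0.264],
    [-0.184, -0.271, -0.473],
])  # 4 latent channels -> 3 RGB channels

def latent_preview(latent):
    # latent: (1, 4, H/8, W/8) -> rough RGB preview, clamped to [0, 1]
    rgb = torch.einsum("bchw,cd->bdhw", latent, LATENT2RGB)
    return ((rgb + 1) / 2).clamp(0, 1)

def looks_poisoned(image, latent, threshold=0.2):
    # image: (1, 3, H, W) in [0, 1]; threshold is purely illustrative
    small = F.interpolate(image, size=latent.shape[-2:], mode="area")
    return F.l1_loss(latent_preview(latent), small).item() > threshold
```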

It's like with Russia: They've already lost the war but they haven't noticed, yet -- though I wouldn't be too sure that Nightshade devs themselves aren't aware of that: What they're doing is a powerful way to grift a lot of money from artists without a technical bone in their body.

[-] mjhelto@lemm.ee 55 points 10 months ago
[-] UnderpantsWeevil@lemmy.world 29 points 10 months ago

Excited to see the guys that made Nightshade get sued in a Silicon Valley district court, because they're something something mumble mumble intellectual property national security.

[-] Even_Adder@lemmy.dbzer0.com 47 points 10 months ago

They already stole GPLv3 code for their last data poisoning scheme and remain in violation of that license. They're just grifters.

[-] kromem@lemmy.world 54 points 10 months ago

This doesn't work outside of laboratory conditions.

It's the equivalent of "doctors find cure for cancer (in mice)."

[-] bier@feddit.nl 18 points 10 months ago

I like that example. Every time you hear about some discovery that X kills 100% of cancer cells in a petri dish, you have to think: so does bleach.

[-] pavnilschanda@lemmy.world 44 points 10 months ago

Apparently people who specialize in AI/ML have a very hard time trying to replicate the desired results when training models with 'poisoned' data. Is that true?

[-] Even_Adder@lemmy.dbzer0.com 42 points 10 months ago* (last edited 10 months ago)

I've only heard that running images through a VAE just once seems to break the Nightshade effect, but no one's really published anything yet.
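
For what it's worth, the "run it through a VAE once" cleanup is just an encode/decode round trip - a sketch, assuming the standard SD 1.x VAE from diffusers, and with the caveat that nothing about its effect on Nightshade has been published:

```python
# Re-encode and decode a (possibly poisoned) image once; the lossy round trip
# tends to wash out small adversarial perturbations. Unverified against
# Nightshade specifically, as noted above.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

@torch.no_grad()
def vae_roundtrip(image):
    # image: (1, 3, H, W) scaled to [-1, 1]
    latent = vae.encode(image).latent_dist.mean
    return vae.decode(latent).sample.clamp(-1, 1)
```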

You can finetune models on known bad and incoherent images to help them output better images when the trained embedding is used in the negative prompt. So there's a chance that making a lot of purposefully bad data could actually make models better, by helping them recognize bad output and avoid it.

[-] KittyCat@lemmy.world 43 points 10 months ago

In the long run this will only make models stronger, as they adapt to the changes it introduces.

[-] vsis@feddit.cl 42 points 10 months ago

It's not FOSS, and I don't see a way to verify whether what they claim is actually true.

It may just be a way to help differentiate legitimate human-made work from machine-generated work, thus helping AI training models.

Can't demonstrate that either, because its license expressly forbids adapting the software to other uses.

Edit, alter, modify, adapt, translate or otherwise change the whole or any part of the Software nor permit the whole or any part of the Software to be combined with or become incorporated in any other software, nor decompile, disassemble or reverse engineer the Software or attempt to do any such things

sauce: https://nightshade.cs.uchicago.edu/downloads.html

[-] nybble41@programming.dev 19 points 10 months ago

The EULA also prohibits using Nightshade "for any commercial purpose", so arguably if you make money from your art—in any way—you're not allowed to use Nightshade to "poison" it.

[-] bonus_crab@lemmy.world 36 points 10 months ago

Big companies already have all your uncorrupted artwork; all this does is prevent any new competition from cropping up.

[-] Canadian_Cabinet@lemmy.ca 33 points 10 months ago

Ironic that they used an AI picture for the article...

[-] webghost0101@sopuli.xyz 27 points 10 months ago

I bet that before the end of this year this tool will be one of the things that helped improve the performance and quality of AI.

[-] reverendsteveii@lemm.ee 25 points 10 months ago

Is anyone else excited to see poisoned AI artwork? This might be the element that makes it weird enough.

Also, re: the guy lol'ing that someone says this is illegal - it might be. Is it wrong? Absolutely not. Does the woefully broad Computer Fraud and Abuse Act contain language that this might violate? It depends; the CFAA has two requirements for something to be in violation of it:

  1. the act in question affects a government computer, a financial institution's computer, OR a computer "which is used in or affecting interstate or foreign commerce or communication" (that last one is the biggie because it means that almost 100% of internet activity falls under its auspices)

  2. the act "knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;" (with 'protected computer' being defined in 1)

Quotes are from the law directly, as quoted at https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act

The poisoned artwork is information created with the intent that it be transmitted to computers across state or international borders and damage those computers. Using this technique to protect what's yours might therefore be a felony in the US: because it would be considered intentionally damaging a protected computer by knowingly transmitting information designed to cause damage, you could face up to 10 years in prison for it. Which is fun, because the people stealing from you face absolutely no retribution at all for their theft - they don't even have to give you some of the money they use your art to make - but if you try to stop them, you go to prison for a decade.

The CFAA is the same law that Reddit co-founder Aaron Swartz was prosecuted under. His crime was downloading things from JSTOR that he had a right to download as an account holder, but more quickly than they felt he should have. He was charged with 13 felonies and faced 50 years in prison, over a million dollars in fines, and a lifetime ban from ever using an internet-connected computer again. He died by suicide, and the charges were then dropped.

[-] captainthroatfuck@lemm.ee 20 points 10 months ago

It's not damaging a computer, it's poisoning the models AI uses to create images. The program will work just fine, and as expected given the model it has; the difference is that the model might not be accurate. It's like saying you're breaking a screen because you're now looking at a low-res version of an image.

[-] Zealousideal_Fox900@lemmy.world 24 points 10 months ago

As an artist, Nightshade is not something I will ever use. All my art is public domain, including AI. Let people generate as many pigeon pictures as they want, I say!

[-] nightwatch_admin@feddit.nl 32 points 10 months ago

That’s great for you, truly it is, but for others it’s not.

[-] neurogenesis@lemmy.dbzer0.com 24 points 10 months ago

Oily snakes slither such that back and forth looks like production..

[-] M0oP0o@mander.xyz 20 points 10 months ago* (last edited 10 months ago)

They claim a credit for using AI to make the thumbnail... The same people who did nothing more than ask ChatGPT to make a picture to represent an article about a tool that poisons AI models to protect people who make pictures for a living from having ChatGPT use their work to make, say, a picture to represent an article about a tool that poisons AI models...

[-] HexesofVexes@lemmy.world 17 points 10 months ago

Ah, another arms race has begun. Just be wary: what one person creates, another will circumvent.
