submitted 9 months ago by haxor@derp.foo to c/hackernews@derp.foo

There is a discussion on Hacker News, but feel free to comment here as well.

autotldr@lemmings.world 4 points 9 months ago

This is the best summary I could come up with:


University of Chicago boffins this week released Nightshade 1.0, a tool built to punish unscrupulous makers of machine learning models who train their systems on data without getting permission first.

"Nightshade is computed as a multi-objective optimization that minimizes visible changes to the original image," said the team responsible for the project.
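To make the quoted idea concrete: a "multi-objective optimization that minimizes visible changes" balances two competing goals, pulling the image's learned features toward a different concept while keeping the pixel-level change small. The sketch below is purely illustrative and is NOT Nightshade's actual algorithm; `poison_image`, the linear `feat` map standing in for a real feature extractor, and `target_feat` are all assumed names for the purpose of the example.

```python
import numpy as np

def poison_image(img, feat, target_feat, alpha=0.1, lr=0.01, steps=300):
    """Illustrative sketch only -- NOT Nightshade's actual method.
    Finds a perturbation `delta` that pulls a linear feature map `feat`
    of the image toward `target_feat` (the "wrong" concept), while an
    L2 penalty weighted by `alpha` keeps the visible change small."""
    delta = np.zeros_like(img)
    for _ in range(steps):
        # residual in feature space: distance still left to the target concept
        residual = feat @ (img + delta).ravel() - target_feat
        # gradient of ||residual||^2 + alpha * ||delta||^2 w.r.t. delta
        grad = 2.0 * (feat.T @ residual).reshape(img.shape) + 2.0 * alpha * delta
        delta -= lr * grad
    # a real tool would also clamp pixels to a valid range and use a
    # perceptual measure of visibility rather than a plain L2 penalty
    return img + delta
```

The `alpha` weight is what makes this "multi-objective": raising it favors invisibility of the change, lowering it favors a stronger shift in feature space.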

Nightshade was developed by University of Chicago doctoral students Shawn Shan, Wenxin Ding, and Josephine Passananti, and professors Heather Zheng and Ben Zhao, some of whom also helped with Glaze.

"Nightshade can provide a powerful tool for content owners to protect their intellectual property against model trainers that disregard or ignore copyright notices, do-not-scrape/crawl directives, and opt-out lists," the authors state in their paper.

The failure to consider the wishes of artwork creators and owners led to a lawsuit filed last year, part of a broader pushback against the permissionless harvesting of data for the benefit of AI businesses.

Matthew Guzdial, assistant professor of computer science at the University of Alberta, said in a social media post, "This is cool and timely work!"


The original article contains 704 words, the summary contains 174 words. Saved 75%. I'm a bot and I'm open source!

this post was submitted on 21 Jan 2024
31 points (100.0% liked)

Hacker News


This community serves to share top posts on Hacker News with the wider fediverse.

Rules

  0. Keep it legal
  1. Keep it civil and SFW
  2. Keep it safe for members of marginalised groups

founded 1 year ago