1874
submitted 2 months ago by FatCat@lemmy.world to c/technology@lemmy.world

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
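As a toy illustration of what "keeping only abstract representations in vector space" means (a deliberately crude sketch, nothing like a real transformer): hash each word of a text into a fixed-size count vector. The exact wording and word order are discarded, yet similar texts still land near each other.

```python
import math

def embed(text, dim=64):
    """Toy 'vector space' embedding: hash each word into a bucket of a
    fixed-size count vector, then normalize. The original text cannot be
    read back out of the vector; only aggregate word statistics survive."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b):
    # Both vectors are unit length, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

original = embed("the quick brown fox jumps over the lazy dog")
paraphrase = embed("a quick brown fox leaps over a lazy dog")
unrelated = embed("stock prices fell sharply in early trading")

# The paraphrase sits much closer to the original than unrelated text does,
# even though no copy of either sentence is stored anywhere.
assert cosine(original, paraphrase) > cosine(original, unrelated)
```

Real models learn their representations rather than hashing them, but the property this argument leans on is the same: the vector is a lossy summary, not a copy.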

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

[-] lettruthout@lemmy.world 223 points 2 months ago

If they can base their business on stealing, then we can steal their AI services, right?

[-] LibertyLizard@slrpnk.net 166 points 2 months ago

Pirating isn’t stealing but yes the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.

[-] sorghum@sh.itjust.works 35 points 2 months ago

Also, ingredients to a recipe aren't covered under copyright law.

[-] TommySoda@lemmy.world 175 points 2 months ago* (last edited 2 months ago)

Here's an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what it gives back, and paste it into a search engine. The results may surprise you.

And stop comparing AI to humans but then giving AI models more freedom. If I wrote a paper I'd need to cite my sources. Where the fuck are your sources ChatGPT? Oh right, we're not allowed to see that but you can take whatever you want from us. Sounds fair.

[-] EldritchFeminity 145 points 2 months ago

The argument that these models learn in a way that's similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.

And these things don't learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I've gotten to the point where I can guess what model of image generator was used based on the same repeated mistakes that they make every time. Take a look at any generated image, and you won't be able to identify where a light source is because the shadows come from all different directions. These things don't understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue and that some places have collections of lighter pixels.

I recently heard about an AI that scientists had trained to identify pictures of wolves, which was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn't looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether or not a picture was of wolves.
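That wolf/snow failure (often called a "Clever Hans" classifier) is easy to reproduce in miniature. Everything below is a hypothetical stand-in for the real study: fake 8x8 "photos" where wolf images are snowy and therefore bright, so a classifier that only looks at average brightness scores perfectly without ever looking at the animal.

```python
import random

random.seed(0)

def fake_image(snowy):
    # 8x8 grayscale "photo": snowy scenes are mostly bright pixels.
    base = 200 if snowy else 80
    return [min(255, max(0, base + random.randint(-40, 40))) for _ in range(64)]

# Training set with the spurious correlation baked in: every wolf is in snow.
train = ([(fake_image(snowy=True), "wolf") for _ in range(50)] +
         [(fake_image(snowy=False), "husky") for _ in range(50)])

# "Training" collapses to a single number: a brightness threshold.
threshold = 140

def classify(img):
    return "wolf" if sum(img) / len(img) > threshold else "husky"

accuracy = sum(classify(img) == label for img, label in train) / len(train)
assert accuracy == 1.0  # perfect score on the snowy-wolf data...

husky_in_snow = fake_image(snowy=True)   # same bright background, different animal
assert classify(husky_in_snow) == "wolf"  # ...misclassified: it only learned the snow
```

The punchline is the last line: a husky photographed in snow gets labeled a wolf, because snow was the only feature the model ever learned.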

[-] Riccosuave@lemmy.world 95 points 2 months ago

Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.

[-] v_krishna@lemmy.ml 36 points 2 months ago

That seems more like an argument for free higher education rather than restricting what corpuses a deep learning model can train on

[-] Malfeasant@lemm.ee 19 points 2 months ago

Tomato, tomato...

[-] ricecake@sh.itjust.works 26 points 2 months ago* (last edited 2 months ago)

Basing your argument around how the model or training system works doesn't seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don't work, how humans learn, and what "learning" and "knowledge" actually are.

I'm a human as far as I know, and it's trivial for me to regurgitate my training data. I regularly say things that are either directly references to things I've heard, or accidentally copy them, sometimes with errors.
Would you argue that I'm just a statistical collage of the things I've experienced, seen or read? My brain has as many copies of my training data in it as the AI model, namely zero, but "Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey Mouse and said 'to be or not to be, that is the question, for tis nobler in the heart' or something". Direct copies of someone else's work, as well as multiple copyright infringements.
I'm also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.

Arguing about how the model works or the deficiencies of it to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work how humans do in your opinion? Or it properly actually extracts the information in a way that isn't just statistically inferred patterns, whatever the distinction there is? Does that suddenly make it different?

You don't need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regards to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without consent of the author.
Arguing why it's bad for society for machines to mechanise the production of works inspired by others is more to the point.

Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don't want a shrimp boat in your swimming pool. I don't care why they're different, or that it technically did or didn't violate the "free swim" policy, I care that it ruins the whole thing for the people it exists for in the first place.

I think all the AI stuff is cool, fun and interesting. I also think that letting it train on everything regardless of the creators' wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn't labeled or cited.
If they can find a way to do and use the cool stuff without making things worse, they should focus on that.

[-] MentalEdge@sopuli.xyz 103 points 2 months ago* (last edited 2 months ago)

The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.

The idea of a "teensy" exception so that we can "advance" into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made "unnecessary" by the unstoppable progress of "thinking" machines, would be hilarious, if it weren't depressing as fuck.

[-] wagesj45@fedia.io 50 points 2 months ago

The whole point of copyright in the first place, is to encourage creative expression

...within a capitalistic framework.

Humans are creative creatures and will express themselves regardless of economic incentives. We don't have to transmute ideas into capital just because they have "value".

[-] wizardbeard@lemmy.dbzer0.com 37 points 2 months ago

Sorry buddy, but that capitalistic framework is where we all have to exist for the foreseeable future.

Giving corporations more power is not going to help us end that.

[-] calcopiritus@lemmy.world 84 points 2 months ago

I'll train my AI on just the Bee Movie. Then I'm going to ask it "can you make me a movie about bees?" When it spits out the whole movie, I can just watch it or sell it or whatever; it was a creation of my AI, which learned just like any human would! Of course I didn't even pay for the original copy to train my AI - it's for learning purposes, and learning should be a basic human right!

[-] mm_maybe@sh.itjust.works 73 points 2 months ago

The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works. This has been suppressed by OpenAI in a rather brute force kind of way, by prohibiting the prompts that have been found so far to do this (e.g. the infamous "poetry poetry poetry..." ad infinitum hack), but the possibility is still there, no matter how much they try to plaster over it. In fact there are some people, much smarter than me, who see technical similarities between compression technology and the process of training an LLM, calling it a "blurry JPEG of the Internet"... the point being, you wouldn't allow distribution of a copyrighted book just because you compressed it in a ZIP file first.
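The ZIP-file point is easy to make literal. A minimal sketch using Python's standard zlib (nothing model-specific): compress a public-domain passage, note that the original bytes appear nowhere in the compressed blob, then recover the text verbatim anyway.

```python
import zlib

# A highly repetitive public-domain passage compresses well.
passage = (b"It was the best of times, it was the worst of times, "
           b"it was the age of wisdom, it was the age of foolishness.")

blob = zlib.compress(passage)

# The blob is smaller than the passage and contains no verbatim copy of it...
assert len(blob) < len(passage)
assert passage not in blob
# ...yet the full work is perfectly recoverable. An opaque internal
# representation is not, by itself, proof that the work is "gone".
assert zlib.decompress(blob) == passage
```

Whether an LLM's weights are closer to this lossless case or to a very lossy summary is exactly the technical question the "blurry JPEG" phrase is gesturing at.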

[-] cum_hoc@lemmy.world 23 points 2 months ago

The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.

Exactly! This is the core of the argument The New York Times made against OpenAI. And I think they are right.

[-] dhork@lemmy.world 67 points 2 months ago

Bullshit. AI are not human. We shouldn't treat them as such. AI are not creative. They just regurgitate what they are trained on. We call what it does "learning", but that doesn't mean we should elevate what they do to be legally equal to human learning.

It's this same kind of twisted logic that makes people think Corporations are People.

[-] finley@lemm.ee 64 points 2 months ago* (last edited 2 months ago)

"but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!"

[-] Eccitaze@yiffit.net 62 points 2 months ago* (last edited 2 months ago)

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

Like fuck it is. An LLM "learns" by memorization and by breaking down training data into their component tokens, then calculating the weight between these tokens. This allows it to produce an output that resembles (but may or may not perfectly replicate) its training dataset, but produces no actual understanding or meaning--in other words, there's no actual intelligence, just really, really fancy fuzzy math.

Meanwhile, a human learns by memorizing training data, but also by parsing the underlying meaning and breaking it down into the underlying concepts, and then by applying and testing those concepts, and mastering them through practice and repetition. Where an LLM would learn "2+2 = 4" by ingesting tens or hundreds of thousands of instances of the string "2+2 = 4" and calculating a strong relationship between the tokens "2+2," "=," and "4," a human child would learn 2+2 = 4 by being given two apple slices, putting them down next to another pair of apple slices, and counting the total number of apple slices to see that they now have 4 slices. (And then being given a treat of delicious apple slices.)
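That "2+2 = 4" point can be shown with a toy next-token counter (a deliberately minimal sketch, not a real language model): it completes the pattern perfectly from adjacency statistics alone, with no concept of quantity behind it.

```python
from collections import Counter, defaultdict

# "Training data": the same string seen over and over, split into tokens.
corpus = "2 + 2 = 4 . 2 + 2 = 4 . 2 + 2 = 4 .".split()

# Count which token follows which -- this is the entire "model".
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(token):
    # Most frequent successor: pattern completion, not arithmetic.
    return follows[token].most_common(1)[0][0]

assert predict("=") == "4"   # it "knows" the answer...
assert predict("+") == "2"   # ...purely from token adjacency
assert "3" not in follows    # and it has nothing at all to say about "3 + 1"
```

A real transformer's learned weights are vastly richer than these counts, but the failure mode being described has the same shape: fluency on seen patterns, nothing underneath.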

Similarly, a human learns to draw by starting with basic shapes, then moving on to anatomy, studying light and shadow, shading, and color theory, all the while applying each new concept to their work, and developing muscle memory to allow them to more easily draw the lines and shapes that they combine to form a whole picture. A human may learn off other peoples' drawings during the process, but at most they may process a few thousand images. Meanwhile, an LLM learns to "draw" by ingesting millions of images--without obtaining the permission of the person or organization that created those images--and then breaking those images down to their component tokens, and calculating weights between those tokens. There's about as much similarity between how an LLM "learns" compared to human learning as there is between my cat and my refrigerator.

And YET FUCKING AGAIN, here's the fucking Google Books argument. To repeat: Google Books used a minimal portion of the copyrighted works, and was not building a service to compete with book publishers. Generative AI is using the ENTIRE COPYRIGHTED WORK for its training set, and is building a service TO DIRECTLY COMPETE WITH THE ORGANIZATIONS WHOSE WORKS THEY ARE USING. They have zero fucking relevance to one another as far as claims of fair use. I am sick and fucking tired of hearing about Google Books.

EDIT: I want to make another point: I've commissioned artists for work multiple times, featuring characters that I designed myself. And pretty much every time I have, the art they make for me comes with multiple restrictions: for example, they grant me a license to post it on my own art gallery, and they grant me permission to use portions of the art for non-commercial uses (e.g. cropping a portion out to use as a profile pic or avatar). But they all explicitly forbid me from using the work I commissioned for commercial purposes--in other words, I cannot slap the art I commissioned on a T-shirt and sell it at a convention, or make a mug out of it. If I did so, that artist would be well within their rights to sue the crap out of me, and artists charge several times as much to grant a license for commercial use.

In other words, there is already well-established precedent that even if something is publicly available on the Internet and free to download, there are acceptable and unacceptable use cases, and it's broadly accepted that using other peoples' work for commercial use without compensating them is not permitted, even if I directly paid someone to create that work myself.

[-] scottywh@lemmy.world 55 points 2 months ago

Look... All I have to say is... Support the Internet Archive!

(please)

[-] Varyk@sh.itjust.works 55 points 2 months ago

The tweet is good; your body argument is completely wrong.

[-] sentientity@lemm.ee 54 points 2 months ago* (last edited 2 months ago)

Disagree. These companies are exploiting an unfair power dynamic they created that people can't say no to, to make an ungodly amount of money for themselves without compensating the people whose data they took without telling them. They are not creating a cool creative project that collaboratively comments on or remixes what other people have made; they are seeking to gobble up and render irrelevant everything they can, for short-term greed. That's not the scenario these laws were made for. AI hurts people who have already been exploited and industries that have already been decimated. Copyright laws were not written with this kind of thing in mind. There are potentially cool and ethical uses for AI models, but OpenAI and Google are just greed machines.

Edited * THRICE because spelling. oof.

[-] LANIK2000@lemmy.world 50 points 2 months ago

This process is akin to how humans learn...

I'm so fucking sick of people saying that. We have no fucking clue how humans LEARN - aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn't very close to human memory/learning/cognition/sentience (or any other buzzword that stands in for things we don't understand yet), considering human memory is extremely lossy and tends to infer its own bias, as opposed to LLMs, which do neither and religiously follow patterns to their own fault.

It's quite literally a text prediction machine that started its life as a translator (and still does amazingly at that task), it just happens to turn out that general human language is a very powerful tool all on its own.

I could go on and on as I usually do on lemmy about AI, but your argument is literally "Neural network is theoretically like the nervous system, therefore human", I have no faith in getting through to you people.

[-] gcheliotis@lemmy.world 43 points 2 months ago* (last edited 2 months ago)

Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.

AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.

AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.

Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.

See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.

TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.

[-] mriormro@lemmy.world 43 points 2 months ago

You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone in every single one of their comments defending them.

It's always fun reading "but you just don't understand".

[-] lightnsfw@reddthat.com 40 points 2 months ago

If ChatGPT was free I might see their point but it's not so no. If you're making money from someone's work you should pay them.

[-] kibiz0r@midwest.social 38 points 2 months ago

Not even stealing cheese to run a sandwich shop.

Stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.

[-] rainynight65@feddit.org 36 points 2 months ago* (last edited 2 months ago)

Generative AI is not 'influenced' by other people's work the way humans are. A human musician might spend years covering songs they like and copying or emulating the style, until they find their own style, which may or may not be a blend of their influences, but crucially, they will usually add something. AI does not do that. The idea that AI functions the same as human artists, by absorbing influences and producing their own result, is not only fundamentally false, it is dangerously misleading. To portray it as 'not unethical' is even more misleading.

[-] joshcodes@programming.dev 35 points 2 months ago

Studied AI at uni. I'm also a cyber security professional. AI can be hacked or tricked into exposing training data. Therefore your claim about it disposing of the training material is totally wrong.

Ask your search engine of choice what happened when Gippity was asked to print the word "book" indefinitely. Answer: it printed training material after printing the word book a couple hundred times.

Also, my main tutor at uni was a neuroscientist. Dude straight up told us that current AI is only capable of accurately modelling something as complex as a dragonfly. For larger organisms it is nowhere near an accurate recreation of a brain. There are complexities in our brain chemistry that simply aren't accounted for in a statistical inference model, and definitely not in the current GPT models.

[-] auzy@lemmy.world 35 points 2 months ago

As others have said, it isn't always inspired; sometimes it literally just copies stuff.

This feels like it was written by someone who invested their money in AI companies because they're worried about their stocks

[-] nek0d3r@lemmy.world 31 points 2 months ago

Generative AI does not work like this. They're not like humans at all, it will regurgitate whatever input it receives, like how Google can't stop Gemini from telling people to put glue in their pizza. If it really worked like that, there wouldn't be these broad and extensive policies within tech companies about using it with company sensitive data like protection compliances. The day that a health insurance company manager says, "sure, you can feed Chat-GPT medical data" is the day I trust genAI.

[-] MeaanBeaan@lemmy.world 29 points 2 months ago

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

Machine learning algorithms are not people and are not ingesting these works the same way a person does. This argument is brought up all the time and just doesn't ring true. You're defending the unethical use of copyrighted works by a giant corporation with a metaphor that doesn't have any bearing on reality; in an age where artists are already shamefully undervalued. Creating art is a human process with the express intent of it being enjoyed by other humans. Having an algorithm do it is removing the most important part of art; the humanity.

[-] LarmyOfLone@lemm.ee 26 points 2 months ago

The joke is of course that "paying for copyright" is impossible in this case. ONLY the large social media companies that own all the comments and content accumulated by their communities have enough data to train AI models. Or sites like stock photo libraries or DeviantArt, which own the distribution rights for the content. That means all copyright arguments practically argue that AI should be owned by big corporations and inaccessible to normal people.

Basically the "means of generation" will be owned by the capitalists, since they are the only ones with the economic power to license these things.

That is basically the worst case scenario. Not only will the value of work diminish greatly, the advances in productivity will also be only accessible to big capitalists.

Of course, that is basically inevitable anyway. Why wouldn't they want this? It's just sad seeing the stupid morons arguing for this as if they had anything to gain.

[-] HereIAm@lemmy.world 26 points 2 months ago

"This process is akin to how humans learn... The AI discards the original text, keeping only abstract representations..."

Now I sail the high seas myself, but I don't think Paramount Studios would buy anyone's defence that they were only pirating their movies to learn the general content so they could produce their own knockoff.

Yes artists learn and inspire each other, but more often than not I'd imagine they consumed that art in an ethical way.

[-] fancyl@lemmy.world 24 points 2 months ago

Are the models that OpenAI creates open source? I don't know enough about LLMs, but if ChatGPT wants exemptions from the law, it should result in a public good (emphasis on public).

[-] graycube@lemmy.world 52 points 2 months ago

Nothing about OpenAI is open-source. The name is a misdirection.

If you use my IP without my permission and profit it from it, then that is IP theft, whether or not you republish a plagiarized version.

[-] Floey@lemm.ee 24 points 2 months ago

While I agree that using copyrighted material to train your model is not theft, text that model produces can very much be plagiarism and OpenAI should be on the hook when it occurs.

[-] Zacryon@feddit.org 23 points 2 months ago* (last edited 2 months ago)

When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

Okay.

[-] Roflmasterbigpimp@lemmy.world 22 points 2 months ago

Okay, that's just stupid. I'm really fond of AI, but this is just common greed.

"Free the Serfs?! We can't survive without their labor!!" "Stop Child labour?! We can't survive without them!" "40 Hour Work Week?! We can't survive without their 16 Hour work Days!"

If you can't make profit yet, then fucking stop.

[-] Capricorn_Geriatric@lemmy.world 22 points 2 months ago

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves.

Sure.

When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

Not really. Sure, they take input and garble it up, and it is "transformative" - but so is a human watching a TV series on a pirate site, for example. Hell, even when it's educational, it's treated as a copyright violation.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.

Perhaps. (Not an AI expert). But, as the law currently stands, only living and breathing persons can be educated, so the "educational" fair use protection doesn't stand.

The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.

It does and it doesn't discard the original. It isn't impossible to recreate the original (since all the data it gobbled up gets stored somewhere in some shape or form and can be faithfully recreated, at least judging by a few comments below and news reports). So AI can and does recreate (duplicate or distribute, perhaps) copyrighted works.

Besides, for a copyright violation, "substantial similarity" is needed, not one-for-one reproduction.

This is fundamentally different from copying a book or song.

Again, not really.

It's more like the long-standing artistic tradition of being influenced by others' work.

Sure. Except when it isn't, and the AI pumps out the original or something close enough to it.

The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

I'd be careful with the "always" part. There was a famous case involving Katy Perry where a single chord was sued over as copyright infringement. The case was thrown out on appeal, but I do not doubt that some pretty wild cases have been upheld as copyright violations (see "patent troll").

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

The problem is that Google Books only lets you search for some phrase and have it pop up as being from source X. It doesn't have the capability of reproducing the work (other than maybe the page the phrase was on) - well, it does have the capability, since it's in the index somewhere, but there are checks in place to make sure that doesn't happen, which seem to be as yet unachieved in AI.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate.

Yes. Just as labeling piracy as theft is.

We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

Yes, new legislation will be made to either let "Big AI" do as it pleases, or to prevent it from doing so. Or, as usual, it'll be somewhere in between and vary from jurisdiction to jurisdiction.

However,

that doesn't make the current use of copyrighted works for AI training illegal or unethical.

this doesn't really stand. Sure, morals are debatable, and while I'd say it is more unethical than private piracy (where there is no distribution), since distribution and dissemination are involved here, you do not seem to feel the same.

However, the law is clear. Private piracy (recording a song off the radio, recording a TV broadcast, screen-recording a Netflix movie, etc.) is legal. So is digitizing books and lending out the digital copy (as long as you have a physical copy, not lent out at the same time, representing the legal "original"). I think breaking DRM also isn't illegal (but someone please correct me if I'm wrong).

The problems arises when the pirated content is copied and distributed in an uncontrolled manner, which AI seems to be capable of, making the AI owner as liable of piracy if the AI reproduced not even the same, but "substantially similar" output, just as much as hosts of "classic" pirated content distributed on the Web.

Obligatory IANAL, and as far as the law goes, I focused on US law since the default country on here is the US. Similar or different laws are on the books in other places, although most are in fact substantially similar. Also, what the legislators come up with will definitely vary from place to place, even more so than copyright law, since copyright law is partially harmonised (see the Berne Convention).

[-] aTun@lemm.ee 21 points 2 months ago* (last edited 2 months ago)

labeling it "theft" is both legally and technically inaccurate.

Well, my understanding is that humans have intelligence: humans teach and learn from previous and other people's work, and make progressive or new work and ideas using their own intelligence. AI/machines don't have intelligence to start with, and don't have their own intelligence to create things. They just copy, remix, and apply the knowledge, personalities, and expressions they have been taught. So "theft" is technically accurate.

[-] Veneroso@lemmy.world 21 points 2 months ago

We have hundreds of years of out of copyright books and newspapers. I look forward to interacting with old-timey AI.

"Fiddle sticks! These mechanical horses will never catch on! They're far too loud and barely faster than a man can run!"

"A Woman's place is raising children and tending to the house! If they get the vote, what will they demand next!? To earn a Man's wage!?"

That last one is still relevant to today's discourse somehow!?

[-] derf82@lemmy.world 20 points 2 months ago

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space".

Citation needed. I’m pretty sure LLMs have exactly reproduced copyrighted passages. And considering it can create detailed summaries of copyrighted texts, it obviously has to save more than “abstract representations.”

this post was submitted on 06 Sep 2024