1874 points · submitted 2 months ago by FatCat@lemmy.world to c/technology@lemmy.world

Those claiming AI training on copyrighted works is "theft" misunderstand key aspects of copyright law and AI technology. Copyright protects specific expressions of ideas, not the ideas themselves. When AI systems ingest copyrighted works, they're extracting general patterns and concepts - the "Bob Dylan-ness" or "Hemingway-ness" - not copying specific text or images.

This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages. The AI discards the original text, keeping only abstract representations in "vector space". When generating new content, the AI isn't recreating copyrighted works, but producing new expressions inspired by the concepts it's learned.
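(For the technically curious, here is a minimal sketch of what "abstract representations in vector space" look like in practice. The library and model below, sentence-transformers and all-MiniLM-L6-v2, are my own illustrative choices rather than anything specific to a given AI system, and this shows an embedding model rather than a full generative LLM, but the core idea of reducing text to coordinates instead of storing the words is the same.)

```python
# Minimal sketch (illustrative library/model choice): text in, vector out.
# Requires: pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

passage = "It was the best of times, it was the worst of times..."
embedding = model.encode(passage)  # a fixed-length array of floats

print(embedding.shape)  # (384,) - 384 numbers, regardless of input length
print(embedding[:5])    # only coordinates; none of the original words are stored here
```

Generative models work differently under the hood (training adjusts billions of weights rather than storing a vector per document), but in both cases what persists is numbers, not the text itself.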

This is fundamentally different from copying a book or song. It's more like the long-standing artistic tradition of being influenced by others' work. The law has always recognized that ideas themselves can't be owned - only particular expressions of them.

Moreover, there's precedent for this kind of use being considered "transformative" and thus fair use. The Google Books project, which scanned millions of books to create a searchable index, was ruled legal despite protests from authors and publishers. AI training is arguably even more transformative.

While it's understandable that creators feel uneasy about this new technology, labeling it "theft" is both legally and technically inaccurate. We may need new ways to support and compensate creators in the AI age, but that doesn't make the current use of copyrighted works for AI training illegal or unethical.

For those interested, this argument is nicely laid out by Damien Riehl in FLOSS Weekly episode 744. https://twit.tv/shows/floss-weekly/episodes/744

50 comments
[-] spacesatan@lazysoci.al 19 points 2 months ago* (last edited 2 months ago)

Am I the only person who remembers that it was "you wouldn't steal a car", or has everyone just decided to pretend it was "you wouldn't download a car" because that's easier to dunk on?

[-] roguetrick@lemmy.world 16 points 2 months ago

You wouldn't shoot a policeman and then steal his helmet.

[-] knF@lemmy.world 19 points 2 months ago

"This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages."

Many people quote this part and say it isn't actually the case, and that this is the main reason the argument doesn't hold.

Let's take a step back and set aside the question of how current "AI" learns vs. how humans learn.

The key point for me here is that humans DO PAY (or at least are expected to...) to use and learn from copyrighted material. So if we're equating the "AI" method of learning with humans', both should be subject to the same rules and regulations. Meaning that "AI" should pay for using copyrighted material.

[-] makyo@lemmy.world 19 points 2 months ago

I thought the larger point was that they're using plenty of sources that do not lie in the public domain. Like if I download a textbook to read for a class instead of buying it - I could be prosecuted for stealing. And they've downloaded and read millions of books without paying for them.

[-] Loki@discuss.tchncs.de 17 points 2 months ago

Even if you come to the conclusion that these models should be allowed to "learn" from copyrighted material, the issue is that they can and will reproduce copyrighted material.

They might not recreate a picture of Mickey Mouse that exists already, but they will draw a picture of Mickey Mouse. Just like I could, except I'm aware that I can't monetize it in any way. Well, new Mickey Mouse.

[-] PixelProf@lemmy.ca 16 points 2 months ago

As someone who researched AI pre-GPT to enhance human creativity and aid in creative workflows, it's sad for me to see the direction it's been marketed in, but I'm not surprised. I'm personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break through the current applications of it, which seem more aimed at stock prices and wow-factoring the public than at using these models for what they're best at.

The whole exciting part of these was that they could convert unstructured inputs into natural language and structured outputs. Translation tasks (broad definition of translation), extracting key data points from unstructured data, language tasks. They're outstanding for the NLP tasks we struggled with previously, and these tasks are highly transformative of any inputs; they rely purely on structural patterns. I think few people would argue NLP tasks are infringing on the copyright owner.

But I can at least see how the shift (particularly with MoE approaches) toward using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code moves into the territory of affecting sales and using someone's IP to compete against them. From a technical perspective, I understand how LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people's data to compete against them, which is dubious at best.
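To make the "unstructured inputs into structured outputs" point above concrete, here is a minimal sketch of the kind of extraction task being described. The tooling (the OpenAI Python SDK), the model name, and the prompt are illustrative choices on my part, not anything the commenter specified.

```python
# Minimal sketch of structured extraction from unstructured text.
# Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
# Library, model name, and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

email = (
    "Hi team, the vendor confirmed delivery of 40 units on March 3rd "
    "for $12,500 total. Contact is Dana Reyes (dana@example.com)."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable chat model would do here
    messages=[
        {"role": "system", "content": "Extract quantity, delivery_date, total, and contact as JSON."},
        {"role": "user", "content": email},
    ],
)

print(response.choices[0].message.content)
# e.g. {"quantity": 40, "delivery_date": "March 3rd", "total": "$12,500", "contact": "Dana Reyes"}
```

Tasks like this lean on the structure of the input rather than reproducing any training text, which is the sense in which the comment above calls them highly transformative.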

[-] infinite_ass@leminal.space 16 points 2 months ago

AI has ideas? That's a bit of a philosophical stretch.

[-] LupertEverett@lemmy.world 15 points 2 months ago

The "you wouldn't download a car" statement is made against personal cases of piracy, which got rightfully clowned upon. It obviously doesn't work at all when you use its ridiculousness to defend big ass corporations that tries to profit from so many of the stuff they "downloaded".

Besides, it is not "theft". It is "plagiarism". And I'm glad to see that people who try to defend these plagiarism machines, which get humanised and inflated into something they can never be, get clowned. It warms my heart.

[-] assassin_aragorn@lemmy.world 14 points 2 months ago

There is an easy answer to this, but it's not being pursued by AI companies because it would make them less money, even though it would be totally ethical.

Make all LLM models free to use, regardless of sophistication, and be collaborative with sharing the algorithms. They don't have to be open to everyone, but they can look at requests and grant them on merit without charging for it.

So how do they make money? How does Google search make money? Advertisements. If you have a good, free product, advertisement space will follow. If it's impossible to make an AI product while also properly compensating people for training material, then don't make it a sold product. Use copyrighted training material freely to offer a free product with no premiums.

[-] gencha@lemm.ee 14 points 2 months ago

So if I watch all the Star Wars movies, then get a crew together to make a couple of identical movies inspired by my earlier watching, and then sell them, that's actually completely legal?

It doesn't matter if they stole the source material. They are selling a machine that can create copyright infringements at a click of a button, and that's a problem.

This is not the same as an artist looking at every single piece of art in the world and being able to replicate it to hang it in the living room. This is an army of artists that are enslaved by a single company to sell any copy of any artwork they want. That army works as long as you feed it electricity and free labor of actual artists.

Theft actually seems like a great word for what these scammers are doing.

If you run some open source model on your own machine, that's a different story.

[-] General_Effort@lemmy.world 14 points 2 months ago

Let's engage in a little fantasy. Someone invents a magic machine that is able to duplicate apartments, condos, houses, ... You want to live in New York? You can copy yourself a penthouse overlooking Central Park for just a few cents. It's magic. You don't need space. It's all in a pocket dimension like the Tardis or whatever. Awesome, right? Of course, not everyone would like that. The owner of that penthouse, for one. Their multi-million dollar investment is suddenly almost worthless. They would certainly demand that you must not copy their property without consent. And so would a lot of people. And what about the poor construction workers, ask the owners of construction companies? And who will pay to have any new house built?

So in this fantasy story, the government goes and bans the magic copy machine. Taxes are raised to create a big new police bureau to monitor the country and to make sure that no one uses such a machine without a license.

That's turned from magical wish fulfillment into a dystopian story. A society that rejects living in a rent-free wonderland but instead chooses to make itself poor. People work to ensure poverty, not to create wealth.

You get that I'm talking about data, information, knowledge. The first magic machine was the printing press. Now we have computers and the Internet.

I'm not talking about a utopian vision here. Facts, scientific theories, mathematical theorems, ... All of that is already free for everyone. Inventors can get patents, but only for 20 years and only if they publish them. They can keep their invention secret and take their chances. But if they want a government-enforced monopoly, they must publish their inventions so that others may learn from them.

In the US, that's how the Constitution demands it. The copyright clause: [The United States Congress shall have power] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

Cutting down on Fair Use makes everyone poorer and only a very few, very rich people richer. Have you ever thought about where the money goes if AI training requires a license?

For example, to Reddit, because Reddit has the rights to all those posts. So do Facebook and Xitter. Of course, there's also old money, like the NYT or Getty. The NYT has the rights to all their old issues going back about a century. If AI training requires a license, they can sell all their old newspapers again. That's pure profit. Do you think they will give their employees raises out of the pure goodness of their hearts if they win their lawsuits? They have no legal or economic reason to do so. The belief that this would happen is trickle-down economics.

[-] Otkaz@lemmy.world 14 points 2 months ago

Maybe if OpenAI hadn't suddenly decided not to be open when they got in bed with Micro$oft, they could just make it a community effort. I own a copyrighted work that the AI hasn't been fed yet, so I loan it for training, and you do the same. They could have made it an open initiative. Missed opportunity from a greedy company. Push the boundaries of technology, and we can all reap the rewards.

[-] TimeSquirrel@kbin.melroy.org 12 points 2 months ago

So, is the Internet caring about copyright now? Decades of Napster, Limewire, BitTorrent, Piratebay, bootleg ebooks, movies, music, etc, but we care now because it's a big corporation doing it?

Just trying to get it straight.

[-] kryptonianCodeMonkey@lemmy.world 13 points 2 months ago

There is a kernel of validity to your point, but let's not pretend like those things are at all the same. The difference between copyright violation for personal use and copyright violation for commercialization is many orders of magnitude.

[-] ManixT@lemmy.world 12 points 2 months ago

You tell me, was it people suing companies or companies suing people?

Is a company claiming it should be able to have free access to content or a person?

[-] A1kmm@lemmy.amxl.com 12 points 2 months ago

The argument seen most commonly from people on the fediverse (which I happen to agree with) is really not about what current copyright laws and treaties say / how they should be interpreted, but about how people think things should be (even if it requires changing laws to make it that way).

And it fundamentally comes down to economics - the study of how resources should be distributed. Apart from oligarchs and the wannabe oligarchs who serve as useful idiots for the real oligarchs, pretty much everyone wants a relatively fair and equal distribution of wealth amongst the people (differing between left and right in opinion on exactly how equal things should be, but there is still some common ground). Hardly anyone really wants serfdom or similar where all the wealth and power is concentrated in the hands of a few (obviously it's a spectrum of how concentrated, but very few people want the extreme position to the right).

Depending on how things go, AI technologies have the power to serve humanity and lift everyone up equally if they are widely distributed, removing barriers and breaking existing 'moats' that let a few oligarchs hoard a lot of resources. Or it could go the other way - oligarchs are the only ones that have access to the state of the art model weights, and use this to undercut whatever they want in the economy until they own everything and everyone else rents everything from them on their terms.

The first scenario is a utopia scenario, and the second is a dystopia, and the way AI is regulated is the fork in the road between the two. So of course people are going to want to cheer for regulation that steers towards the utopia.

That means things like:

  • Fighting back when the oligarchs try to talk about 'AI Safety' meaning that there should be no Open Source models, and that they should tightly control how and for what the models can be used. The biggest AI Safety issue is that we end up in a dystopian AI-fueled serfdom, and FLOSS models and freedom for the common people to use them actually helps to reduce the chances of this outcome.
  • Not allowing 'AI washing' where oligarchs can take humanity's collective work, put it through an algorithm, and produce a competing thing that they control - unless everyone has equal access to it. One policy that would work for this would be that if you create a model based on other people's work, and want to use that model for a commercial purpose, then you must publicly release the model and model weights. That would be a fair trade-off for letting them use that information for training purposes.

Fundamentally, all of this is just exacerbating cracks in the copyright system as a policy. I personally think that a better system would look like this:

  • Everyone gets a Universal Basic Income paid, and every organisation and individual making profit pays taxes in to fund the UBI (in proportion to their profits).
  • All forms of intellectual property rights (except trademarks) are abolished - copyright, patents, and trade secrets are no longer enforced by the law. The UBI replaces it as compensation to creators.
  • It is illegal to discriminate against someone for publicly disclosing a work they have access to, as long as they didn't accept valuable consideration to make that disclosure. So for example, if an OpenAI employee publicly released the model weights for one of OpenAI's models without permission from anyone, it would be illegal for OpenAI to demote / fire / refuse to promote / pay them differently on that basis, and for any other company to factor that into their hiring decision. There would be exceptions for personally identifiable information (e.g. you can't release the client list or photos of real people without consequences), and disclosure would have to be public (i.e. not just to a competitor, it has to be to everyone) and uncompensated (i.e. you can't take money from a competitor to release particular information).

If we had that policy, I'd be okay for AI companies to be slurping up everything and training model weights.

However, with the current policies, it is pushing us towards the dystopic path where AI companies take what they want and never give anything back.

[-] xenomor@lemmy.world 11 points 2 months ago* (last edited 2 months ago)

This take is correct although I would make one addition. It is true that copyright violation doesn’t happen when copyrighted material is inputted or when models are trained. While the outputs of these models are not necessarily copyright violations, it is possible for them to violate copyright. The same standards for violation that apply to humans should apply to these models.

I entirely reject the claims that there should be one standard for humans and another for these models. Every time this debate pops up, people claim some special status based on 'intelligence' or 'consciousness' or 'understanding' or 'awareness'. This is a meaningless argument because we have no clear understanding of what those things are. I'm not claiming anything about the nature of these models. I'm just pointing out that people love to apply an undefined standard to them.

We should apply the same copyright standards to people, models, corporations, and old-school algorithms.

[-] HexesofVexes@lemmy.world 11 points 2 months ago

I rather think the point is being missed here. Copyright is already causing huge issues, such as the troubles faced by the Internet Archive, and the fact that academics get nothing from their work.

Surely the argument here is that copyright law needs to change, as it acts as a barrier to education and human expression. Not, however, just for AI, but as a whole.

Copyright law needs to move with the times, as all laws do.

[-] pyre@lemmy.world 11 points 2 months ago

it's rich cunts asking for handouts again. hey, we call this feasibility, you should have thought about it before, not now. your business is not feasible. fuck off forever. thanks.
