Which of the following sounds more reasonable?

  • I shouldn't have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn't have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that's blatantly pro-corporate and anti-writer/artist, and trick people into supporting it under the guise of a technological development.

[-] Zeth0s@lemmy.world 13 points 1 year ago

That's absolutely not correct. AI is a field of computer science/scientific computing built on the idea that some capabilities of biological intelligences could be simulated or even reproduced "in silico", i.e. by using computers.

Nowadays it is an extremely broad term that covers a lot of computational methodologies. LLMs in particular are an evolution of methods born to simulate and act like human neural networks. Nowadays they work very differently, but they still provide great insights into how an "artificial" intelligence can be built. It is only one small corner of what will be a real general artificial intelligence, and a small step in that direction.
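To make "simulated in silico" a bit more concrete, here is a toy sketch of the kind of artificial neuron those early methods started from. It's purely illustrative (hand-picked numbers, nothing like a modern LLM), but it shows the basic idea of a unit that turns numeric inputs and learned weights into an output:

```python
import math

def neuron(inputs, weights, bias):
    """A single toy artificial neuron: weighted sum plus bias, squashed by a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked weights purely for illustration; a real network learns them from data.
print(neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=0.1))
```

Stack enough of these units and learn the weights from data, and you get the family of methods that LLMs grew out of.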

AI as a name is absolutely unrelated to how programs based on these methodologies are built.

Human intelligences are in charge of all the copyright part. AI and copyright are orthogonal; it's people who can't tell the two apart and keep talking about AI.

There is AI, and there is copyright; it is time for all of us to properly frame the discussion as a "copyright discussion related to 's product".

[-] assassin_aragorn@lemmy.world 6 points 1 year ago

What I'm getting at, rather, is that comparisons to humans for purposes of copyright law (e.g. likening it to students learning in school or reading library books) don't hold water just because it's called an AI. I don't see that as an actual defence for these companies, yet it seems to be somewhat prevalent.

[-] Zeth0s@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

You can absolutely compare AI with students. The problem is that, legally, in many Western countries students still have to pay the copyright holders of the books they use to learn.

It is purely a copyright discussion. How far does copyright apply? Should the law distinguish between human learning and machine learning? Can we retroactively change the copyright of material already available online?

For instance, Copilot is more at risk than an LLM that learned from 4chan, because the licenses are clearer there. The problem is that we have no idea which data the big LLMs were trained on, so we can't tell whether some copyright law already applies.

In the end it is just a legal dispute about companies making money out of AI trained on publicly available (but not necessarily copyright-free) data.

[-] assassin_aragorn@lemmy.world 2 points 1 year ago

My argument is that an LLM here is reading the content for different reasons than a student would. The LLM uses it to generate text and answer user queries, for cash. The student uses it to learn their field of study, and then applies it to make money. The difference is that the student internalizes the concepts, while the LLM internalizes the text. If you used a different book that covered the same content, the LLM would generate different output, but the student would learn the same thing.

I know it's splitting hairs, but I think it's an important point to consider.

My take is that an LLM shouldn't be free to consume any copyrighted work, even one that's been reproduced online with the consent of the author. The company would need the author's permission for the express purpose of training the AI. If there's a copyright, it should apply.

You have me thinking though about the student comparison. College students pay to attend lectures on material that can be found online or in their textbooks. Wouldn't paying for any copyrighted material be analogous to this?

[-] Zeth0s@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

Students and LLMs do the same thing with data, simply in different ways. An LLM can learn from more data; a student can understand more concepts, logic and context.

And students study to make money.

Both LLMs and students map the data into some internal representation. Those representations are, however, pretty different, because a biological mind is different from an AI.
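To give an idea of what "internal representation" means on the LLM side, here's a toy sketch (the numbers below are made up; a real model learns them from huge corpora): each token is mapped to a vector of numbers, and everything the model "knows" lives in those numbers.

```python
import random

random.seed(0)

vocabulary = ["copyright", "student", "book", "model"]

# Toy embeddings: each word becomes a small vector of numbers.
# In a real LLM these vectors are learned during training, not drawn at random.
embeddings = {word: [round(random.uniform(-1, 1), 2) for _ in range(4)]
              for word in vocabulary}

for word, vector in embeddings.items():
    print(word, vector)
```

A student's "representation" of a book is nothing like a table of vectors, which is part of why the comparison only goes so far.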

Regarding your last paragraph, this is exactly the point. What should OpenAI and Microsoft pay, given that they are making a lot of money out of other people's work? Currently it is unclear, as OpenAI hasn't disclosed what data they used, and because copyright laws do not cover generative AI. We need to wait for interpretations of the existing laws and for new ones. But it will surely change soon.
