Good, go away.
Seems like just yesterday Metallica was suing people for enjoying copyrighted materials
Okay, I can work with this. Hey Altman, you can train on anything that's public domain. Now go take that fuckton of billions and fight the copyright laws to make the public domain make sense again.
counterpoint: what if we just make an exception for tech companies and double fuck consumers?
This is the correct answer. Never forget that US copyright law originally allowed for a 14 year (renewable for 14 more years) term. Now copyright holders are able to:
- reach consumers more quickly and easily using the internet
- market on more fronts (merch didn't exist in 1710)
- form other business types to better hold/manage IP
So much in the modern world exists to enable copyright holders, but terms are longer than ever. It's insane.
If this passes, piracy websites can rebrand as AI training material websites and we can all run a crappy model locally to train on pirated material.
That would work if you were rich and friends with government officials. I don’t like your chances otherwise.
You are a glass half full sort of person!
Another win for the piracy community
Piracy is not theft.
When a corporation does it to get a competitive edge, it is.
It’s only theft if they support laws preventing their competitors from doing it too. Which is kind of what OpenAI did, and now they’re walking that idea back because they’re losing again.
No it's not.
It can be problematic behaviour, you can make it illegal if you want, but at a fundamental level, making a copy of something is not the same thing as stealing something.
it uses the result of your labor without compensation. it's not theft of the copyrighted material. it's theft of the payment.
it's different from piracy in that piracy doesn't equate to lost sales. someone who pirates a song or game probably does so because they wouldn't buy it otherwise: either they can't afford it, or they don't find it worth the price. so if they couldn't pirate it, they still wouldn't buy it.
but this is a company using labor without paying for it, something they would otherwise definitely have to do. he literally says it would be over if they couldn't get this data. they just don't want to pay for it.
That information is published freely online.
Do companies have to avoid hiring people who read and were influenced by copyrighted material?
I can regurgitate copyrighted works as well, and when someone hires me, places like Stackoverflow get fewer views to the pages that I've already read and trained on.
Are companies committing theft by letting me read the internet to develop my intelligence? Are they committing theft when they hire me so they don't have to do as much research themselves? Are they committing theft when they hire thousands of engineers who have read and trained on copyrighted material to build up internal knowledge bases?
What's actually happening is that the debates around AI are exposing a deeply and fundamentally flawed copyright system. It should not be based on scarcity and restriction but on rewarding use. Information has always been able to flow freely; the mistake was linking payment to restricting its movement.
it's ok if you don't know how copyright works. also maybe look into plagiarism. there's a difference between relaying information you've learned and stealing work.
Training on publicly available material is currently legal. It is how your search engine was built and it is considered fair use mostly due to its transformative nature. Google went to court about it and won.
What OpenAI is doing is not piracy.
Whatever it is, it isn't theft
Also true. It’s scraping.
In the words of Cory Doctorow:
Web-scraping is good, actually.
Scraping against the wishes of the scraped is good, actually.
Scraping when the scrapee suffers as a result of your scraping is good, actually.
Scraping to train machine-learning models is good, actually.
Scraping to violate the public’s privacy is bad, actually.
Scraping to alienate creative workers’ labor is bad, actually.
We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.
Let them. Copyright is bullshit. What's the issue. He's right
Oh it's "over"? Fine for me
Oh no, what will we do without degenerate generative AIs?!
Fuck Sam Altman, the fartsniffer who convinced himself & a few other dumb people that his company really has the leverage to make such demands.
"Oh, but democracy!" - saying that in the US of 2025 is a whole 'nother kind of dumb.
Anyhow, you don't give a single fuck about democracy, you're just scared because a chinese company offers what you offer for a fraction of the price/resources.
You're scared for your government money and basically begging for one more handout "to save democracy".
Yes, I've been listening to Ed Zitron.
Fartsniffer 🤣
gosh Ed Zitron is such an anodyne voice to hear, I felt like I was losing my mind until I listened to some of his stuff
Yeah, he has the ability to articulate what I was already thinking about LLMs and bring in hard data to back up his thesis that it’s all bullshit. Dangerous and expensive bullshit, but bullshit nonetheless.
It’s really sad that his willingness to say the tech industry is full of shit is such an unusual attribute in the tech journalism world.
But when China steals all their (arguably not copyright-able) work...
What I’m hearing between the lines here is the origin of a legal “argument.”
If a person’s mind is allowed to read copyrighted works, remember them, be inspired by them, and describe them to others, then surely a different type of “person’s” different type of “mind” must be allowed to do the same thing!
After all, corporations are people, right? Especially any worth trillions of dollars! They are more worthy as people than meatbags worth mere billions!
I don't think it's actually such a bad argument because to reject it you basically have to say that style should fall under copyright protections, at least conditionally, which is absurd and has obvious dystopian implications. This isn't what copyright was meant for. People want AI banned or inhibited for separate reasons and hope the copyright argument is a path to that, but even if successful wouldn't actually change much except to make the other large corporations that own most copyright stakeholders of AI systems. That's not actually a better circumstance.
Actually, I would just make the guardrails such that if the input can't be copyrighted, then the AI output can't be copyrighted either. Making anything it touches public domain would rein in the corporations' enthusiasm for replacing humans with it.
I think they would still try to go for it but yeah that option sounds good to me tbh
This has been the legal basis of all AI training sets since they began collecting datasets. The US copyright office heard these arguments in 2023: https://www.copyright.gov/ai/listening-sessions.html
MR. LEVEY: Hi there. I'm Curt Levey, President of the Committee for Justice. We're a nonprofit that focuses on a variety of legal and policy issues, including intellectual property, AI, tech policy. There certainly are a number of very interesting questions about AI and copyright. I'd like to focus on one of them, which is the intersection of AI and copyright infringement, which some of the other panelists have already alluded to.
That issue is at the forefront given recent high-profile lawsuits claiming that generative AI, such as DALL-E 2 or Stable Diffusion, are infringing by training their AI models on a set of copyrighted images, such as those owned by Getty Images, one of the plaintiffs in these suits. And I must admit there's some tension in what I think about the issue at the heart of these lawsuits. I and the Committee for Justice favor strong protection for creatives because that's the best way to encourage creativity and innovation.
But, at the same time, I was an AI scientist long ago in the 1990s before I was an attorney, and I have a lot of experience in how AI, that is, the neural networks at the heart of AI, learn from very large numbers of examples, and at a deep level, it's analogous to how human creators learn from a lifetime of examples. And we don't call that infringement when a human does it, so it's hard for me to conclude that it's infringement when done by AI.
Now some might say, why should we analogize to humans? And I would say, for one, we should be intellectually consistent about how we analyze copyright. And number two, I think it's better to borrow from precedents we know that assumed human authorship than to invent the wheel over again for AI. And, look, neither human nor machine learning depends on retaining specific examples that they learn from.
So the lawsuits that I'm alluding to argue that infringement springs from temporary copies made during learning. And I think my number one takeaway would be, like it or not, a distinction between man and machine based on temporary storage will ultimately fail maybe not now but in the near future. Not only are there relatively weak legal arguments in terms of temporary copies, the precedent on that, more importantly, temporary storage of training examples is the easiest way to train an AI model, but it's not fundamentally required and it's not fundamentally different from what humans do, and I'll get into that more later if time permits.
The "temporary storage" idea is pretty central for visual models like Midjourney or DALL-E, whose training sets are full of copyrighted works lol. There is a legal basis for temporary storage too:
The "Ephemeral Copy" Exception (17 U.S.C. § 112 & § 117)
U.S. copyright law recognizes temporary, incidental, and transitory copies as necessary for technological processes.
Section 117 allows temporary copies for software operation.
Section 112 permits temporary copies for broadcasting and streaming.
BTW, if anyone was interested - many visual models use the same training set, collected by a German non-profit: https://laion.ai/
It's "technically not copyright infringement" because the set is just a link to an image, paired with a text description of each image. Because they're just pointing to the image, they don't really have to respect any copyright.
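For anyone curious what "just a link plus a description" looks like in practice, here's a rough sketch of a LAION-style row (the field names are illustrative, not the exact published schema):

```python
# Hypothetical sketch of a LAION-style dataset row: it stores only a
# pointer to the image plus caption/metadata, never the image bytes.
record = {
    "url": "https://example.com/some-image.jpg",  # hypothetical pointer to the image
    "caption": "a red bicycle leaning against a brick wall",
    "width": 1024,
    "height": 768,
    "similarity": 0.31,  # text-image similarity score, as in LAION releases
}

# Anyone training a model has to fetch the actual pixels themselves,
# which is why distributing the dataset itself ships no copyrighted images.
def needs_download(row):
    return "image_bytes" not in row

print(needs_download(record))
```

So the copyright question gets pushed downstream to whoever runs the download step, not the people publishing the list of links.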
Too bad, so sad
Obligatory: I'm anti-AI, mostly anti-technology
That said, I can't say that I mind LLMs using copyrighted materials that it accesses legally/appropriately (lots of copyrighted content may be freely available to some extent, like news articles or song lyrics)
I'm open to arguments correcting me. I'd prefer to have another reason to be against this technology, not arguing on the side of frauds like Sam Altman. Here's my take:
All content created by humans follows consumption of other content. If I read lots of Vonnegut, I should be able to churn out prose that roughly (or precisely) includes his idiosyncrasies as a writer. We read more than one author; we read dozens or hundreds over our lifetimes. Likewise musicians, film directors, etc etc.
If an LLM consumes the same copyrighted content and learns how to copy its various characteristics, how is it meaningfully different from me doing it and becoming a successful writer?
Right. The problem is not the fact it consumes the information, the problem is if the user uses it to violate copyright. It’s just a tool after all.
Like, I’m capable of violating copyright in infinitely many ways, but I usually don’t.
and learns how to copy its various characteristics
Because you are a human. Not an immortal corporation.
I am tired of people trying to have iNtElLeCtUaL dIsCuSsIoN about/with entities that would feed you feet first into a wood chipper if it thought it could profit from it.
Yup. Violating IP licenses is a great reason to prevent it. Under current law, if they get a license for the book, they should be able to use it how they want.
I'm not permitted to pirate a book just because I only intend to read it and then give it back. AI shouldn't be able to either if people can't.
Beyond that, we need to accept that we might need to come up with new rules for new technology. There are a lot of people, notably artists, who object to art they put on their websites being used for training. Under current law, if you make it publicly available, people can download it and use it on their computer as long as they don't distribute it. That current law allows something we don't want doesn't mean we need to find a way to interpret current law as not allowing it; it just means we need new laws that say "fair use for people is not the same as fair use for AI training".
If an LLM consumes the same copyrighted content and learns how to copy its various characteristics, how is it meaningfully different from me doing it and becoming a successful writer?
That is the trillion-dollar question, isn’t it?
I’ve got two thoughts to frame the question, but I won’t give an answer.
- Laws are just social constructs, to help people get along with each other. They’re not supposed to be grand universal moral frameworks, or coherent/consistent philosophies. They’re always full of contradictions. So… does it even matter if it’s “meaningfully” different or not, if it’s socially useful to treat it as different (or not)?
- We’ve seen with digital locks, gig work, algorithmic market manipulation, and playing either side of Section 230 when convenient… that the ethos of big tech is pretty much “define what’s illegal, so I can colonize the precise border of illegality, to a fractal level of granularity”. I’m not super stoked to come with an objective quantitative framework for them to follow, cuz I know they’ll just flow around it like water and continue to find ways to do antisocial shit in ways that technically follow the rules.
Sam Altman is a lying hype-man. He deserves to see his company fail.
OpenAI can open their asses and go fuck themselves!
This is why they killed that former employee.
Bye
I feel like it would be okay if AI-generated images/text were clearly marked (but I don't think that's possible in the case of text).
Who would support something made by stealing the hard work of other people if they could tell instantly?