US rejects AI copyright for famous state fair-winning Midjourney art
(arstechnica.com)
If those people had ever actually tried using image generation software, they would know that significant human authorship is required to make something that isn't remotely dogshit. The most important skill in visual art is not knowing how to draw something but knowing what to draw.
Then why does all AI need to harvest the work of millions of artists in order to create one mediocre painting? Millions upon millions of hours of blood, sweat, and tears are hidden behind that algorithm. Thousands of people who started drawing when they were 5 and never stopped in order to get as good as they are.
All the big AI services refuse to disclose the training sets they use, and the ones we do know anything about absolutely use copyrighted material from artists who didn't consent to being part of the training set.
This is what fuels my contempt for AI. People who use literal billions of dollars of stolen time and talent and then pretend that actually having ideas is the important bit.
I mean, I agree that the developers of these AI tools need to be made to be more ethical in how they use stuff for training, but it is worth noting that that's kind of also how humans learn. Every human artist learns, in part, by absorbing the wealth of prior art that they experience. Copying existing pieces is even a common way to practice.
Yeah, that shrug you did about how it would be nice if AI didn't steal art is part of the problem. Shrugging and saying "yoink" doesn't work when you want to copyright stuff.
Humans learn by assimilating other people's work and working it into their own style, yes. That means the AI is the human in this scenario, and the AI owns the artistic works. Since AI does not yet have the right to own copyrights, any works produced by that AI are not copyrightable.
That is, if you accept that AI and humans learn art in the same way. I don't personally think the analogy holds, but it doesn't matter for this discussion.
There's a reason I said "they should be made to be more ethical" and not just "they should be more ethical". I know that they aren't going to do it themselves and I'll support well-written regulations on them.
Isn't that what almost your entire comment was about?
The argument was basically "that is how humans learn too". I accepted that analogy because it doesn't change my conclusion that AI work can't be copyrighted. Had the discussion been about something else, I wouldn't have accepted that argument.
The difference is that a human artist can then make new, unique art, contribute to the craft so it advances, and make a living off it. AI-made art isn't unique; it's a collage of other art. To get art from AI you have to feed it prompts of things it's seen before. So when AI is used for art, it takes jobs from artists and prevents the craft from advancing.
My point is that this description literally applies just as much to humans. Humans are also trained on vast quantities of things they've seen before and meanings associated with them.
This is genuinely a misunderstanding of how these programs work.
Because the only art anyone has ever done is when someone else paid them for it? There are a lot of art forms that generally aren't commercially viable, and it's very odd to insist that commercial viability is what advances an art form.
I do actually get regularly paid for a kind of work that is threatened by these things (although in my case it's LLMs, not images). For the time being I can out-perform ChatGPT and the like, but I don't expect that that will last forever. Either I'll end up incorporating it or I'll need to find something else to do. But I'm not going to stop doing my hobby versions of it.
Technology kills jobs all the time. We don't have many human calculators these days. If the work has value beyond the financial, people will keep doing it.
Human brains don’t have perfect recollection. Every time we retell a story or remember a memory or picture an image in our head it is distorted with our own imperfections.
When I prompt an AI to create an image it samples the images it learned from with perfect recollection.
AI does not learn the same way humans do.
This is incorrect, actually. The models these AIs run on by definition have imperfect recall; otherwise they would be ENORMOUS. No, that's actually exactly the opposite of how these work.
They train a statistically weighted model to predict outputs based on inputs. It has no actual image data stored internally; it can't.
This is incorrect actually. The models these AIs run from by definition have perfect recall and that is why they require ENORMOUS resources to run and why ChatGPT became less effective when the resources it was allocated were reduced.
-ChatGPT
No, they take exponentially increasing resources as a consequence of having imperfect recall. Smaller models have "worse" recall. They've been trained with smaller datasets (or pruned more).
As you increase the size of the model (number of "neurons" that can be weighted) you increase the ability of that model to retain and use information. But that information isn't retained in the same form as it was input. A model trained on the English language (an LLM, like ChatGPT) does not know every possible word, nor does it actually know ANY words.
All ChatGPT knows is what characters are statistically likely to go after another in a long sequence. With enough neurons and layers combined with large amounts of processing power and time for training, this results in a weighted model which is many orders of magnitude smaller than the dataset it was trained on.
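That "which character is statistically likely to follow which" idea can be sketched as a toy bigram model — nothing remotely like the scale or architecture of a real LLM, but the same flavor of statistical next-character prediction. The tiny corpus here is made up for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which character follows which,
# then sample new text from those next-character frequencies.
text = "the cat sat on the mat and the cat ran"

counts = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    counts[a][b] += 1

def generate(start, n, seed=0):
    """Extend `start` by n characters sampled from the bigram stats."""
    random.seed(seed)
    out = start
    for _ in range(n):
        nxt = counts[out[-1]]
        chars, weights = zip(*nxt.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("t", 20))
```

The output is plausible-looking gibberish built only from observed character statistics — the model never stores the training sentence itself, which is the point being made about recall.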
Since the model weighting itself is smaller than the input dataset, it is literally impossible for the model to have perfect recall of the input dataset. So by definition, these models have imperfect recall.
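As a back-of-envelope on that size argument (the round figures below are rough approximations in the ballpark of a Stable-Diffusion-class model and a LAION-scale dataset, not exact published numbers), the model has on the order of one byte of weights per training image:

```python
# Rough, illustrative figures only:
params = 1e9            # ~1 billion model weights
bytes_per_param = 2     # fp16 storage
model_bytes = params * bytes_per_param   # ~2 GB checkpoint

training_images = 2e9   # ~2 billion training images
capacity_per_image = model_bytes / training_images
print(capacity_per_image)  # -> 1.0 byte of model per training image
```

A single JPEG is typically tens or hundreds of kilobytes, so even under generous assumptions there is around five orders of magnitude too little capacity for the model to store its training images verbatim.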
I'm pretty sure the way they constantly fuck up hands is a solid demonstration that these AI tools do not have perfect recollection.
The reason they fuck up hands is that hands are usually in motion when pictures are taken and have far more configurations than any other body part.
So when these image AIs refer back to all the pictures of hands they've been fed and use them to create an 'average approximation' of what a hand looks like, they include the motion blur from some of their samples, a middle finger sticking up from another sample, or extra fingers from the sample pictures of people holding hands, and mismatch them together even when it doesn't fit the picture being created.
The AI doesn’t know what a hand is. It is just mixing together samples from its perfect recollection.
In which case the machine would get the copyright (which legally they can't now), not the prompter.
Artists. Famously part of the ruling class.
Jesus, you AI people are idiots.
I don't hate AI-assisted technologies. I just think it's hilarious that you've been ranting and raving about how artists are the true ruling class and AI is how we break the chains of their oppression.
You see these technologies as somehow a means of democratizing all creative endeavors. I see these technologies, as they stand, as just the latest attempt by those who own the tech and data to siphon even more control, autonomy, and wealth from the rest of us.
But yeah dude, have fun typing in prompts and feeling like you did something cool.
It is funny how that "one mediocre painting" won the award while the human art did not.
If I took a few hours to make an impressive AI-generated piece of art, that's still 0.0001% of the time a real artist would've spent developing the skill and then making the piece. I get to skip all that because AI stole the real artists' works.
What about photographers?
I don't think "amount of work" is a good measurement for copyright; if you scribble something in 2 seconds on a piece of paper, you still own the copyright, even if it's not a great piece of art.
I'm pretty specifically trying to bring to mind the time it takes to hone the skill. Photography is similar in that it takes many many hours to get to the point where you can produce a good work of art.
If an artist (or photographer) spends a couple hours on a piece, that's not the actual amount of time needed. It takes years to reach the point where they can make art in a few hours. That's what people are upset about, and that's why nobody cares about "it took me hours to generate a good piece!", because it takes an artist 10,000 hours.
What AI art is doing is distilling that 10,000 hours (per artist) into a training set of 99% stolen works to allow someone with zero skill to produce a work of art in a few hours.
What's most problematic isn't who the copyright of the AI-generated image belongs to; it's that artists who own their works are having them stolen to be used in a commercial product. Go to any AI image generator and you'll see "premium" options you can pay for. That product, that option to pay, only exists on the backs of artists who did not license their works and did not get paid to provide the training data.
People have made millions off of photographs despite having zero training and only casually snapping the photo. You can get lucky, or the subject of your photo might be especially interesting or rare (such as from a newsworthy event).
I think we need something more nuanced than "effort input".
Photographers must have downvoted you. You don't have to be skilled to take a really good photo. You do have to be skilled to do it regularly, though.
The law is about human expression, not human work. That which a human expressed (with sufficient creative originality) is protected; all else is not.
So if I tell someone else to draw something, who gets the copyright?
Depends on your agreement.
I think by default if there's no contract saying otherwise, the copyright stays with the original artist.
I would argue that the artist produces the copyright and transfers it to you. If the artist isn't human and can't produce copyrights, then it can't sell them to you. A lot of the argumentation here is that we should treat AI like we treat a human artist. That is an insane line to go down, because it would make any AI work effectively slavery.
If someone is doing work for you, you get the copyright. That's how it's always worked.
This isn't always the case. Tattoos, for example, are commissioned and paid for, but the actual copyright often resides with the artist, not the person who paid for the work.
Yes, the artist must agree that copyright transfer is part of the agreement. By default ownership is with the artist.
That's only with the artist's agreement though isn't it? Usually because you're paying them. In this case the artist isn't a person so can't grant you the copyright (I think)
Yes, in practice this would be a contract with the artist deciding whether the copyright is transferred or not.
Because by default, if you commission someone to draw something for you, they keep the copyright.
It's actually gotten significantly easier, which makes this artist's work even more impressive. There is a very real chance they spent more time on this piece than the other artists they were up against spent on theirs. I generate thousands of images a month, and sure, I can just take the first thing Midjourney throws at me and be satisfied with 80% accuracy, or I can work and rework, each generation with diminishing returns, until I get to 98% accuracy and just accept that it's not capable of 100% yet.
.... you've never actually made art, have you? The sort of stuff that you enter into contests takes months to make, from the actual painting to rough sketches to reference gathering, and that's just the basics
Clicking a button a thousand times isn't really comparable
I'm not at all disagreeing with the overall sentiment here, but having given it a go, I will say AI image generation is often a very tedious endeavor.
It's not just clicking a button. It's closer to trying to Google some very specific, but hard to find medical problem. You constantly tweak and retweak your search terms, both learning from what has been output so far and as you think of new ways to stop it from giving you crap you don't want. And each time you hit search the process takes forever, anywhere from 5 minutes to 5 hours.
I don't really feel like this constitutes skill, but it does represent a certain amount of brute force stubbornness to try to get AI image generation to do what you want.
Ok using your Google analogy - there's a reason why "librarian" is a job and "Googler" isn't. One requires years of skill and practice to interpret a request and find the right information and do all sorts of things, and the other is someone kinda bashing keys to make Google give them what they want. You wouldn't put them in remotely the same class
Maybe if you spent some of that time you spend tweaking settings on midjourney practicing art, you'd make something worthwhile and not just generated content slop. :)
Look, if I train a monkey to draw art, no matter how good my instructions or the resulting art is, I don't own that art, the monkey does.
As non-human animals cannot copyright their works, it thus defaults to the public domain.
The same applies to AI. You train it to make the art you want, but you're not the one making the art, the AI is. There's no human element in the creation itself, just like with the monkey.
You can edit or make changes as you like to the art, and you own those, but you don't own the art because the monkey/AI drew it.
Does my camera own my art, and not me?
No, because there's a fundamental difference between a tool that functions directly as a consequence of what you do and an independent thing that acts based on your instruction.
When you take a photo, you have a direct hand in making it - when you direct an AI to make art, it is the one making the art, you just choose what it makes.
It's as silly as asking if your paintbrush owns your art as a response to being told that you can't claim copyright over art you don't own.