[-] 200fifty@awful.systems 9 points 10 months ago* (last edited 10 months ago)

> Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?

This is kind of the central issue to me honestly. I'm not a lawyer, just a (non-professional) artist, but it seems to me like "using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work" is extremely not fair use. In fact it's kind of a prototypically unfair use.

Meanwhile Midjourney and OpenAI are over here like "uhh, no copyright infringement intended!!!" as though "fair use" is a magic word you say that makes the thing you're doing suddenly okay. They don't seem to have very solid arguments justifying them other than "AI learns like a person!" (false) and "well google books did something that's not really the same at all that one time".

I dunno, I know that legally we don't know which way this is going to go, because the AI people presumably have very good lawyers, but something about the way everyone seems to frame this as "oh, both sides have good points! who will turn out to be right in the end!" really bugs me. Like, it seems to me that there's a notable asymmetry here!

[-] 200fifty@awful.systems 9 points 1 year ago

Oh my god, I can't stop laughing out loud at "women evolved small heads because they kept falling over and hitting their big heads on rocks," based on the fact that his sister hit her head when she was younger. What's his explanation for why men didn't do this then?? Absolutely next-level moon logic I love it so much

[-] 200fifty@awful.systems 9 points 1 year ago

heck yeah I love ~~Physics Jenny Nicholson~~ Angela Collier

[-] 200fifty@awful.systems 9 points 1 year ago

Ah yes, pragmatists, well known for their constantly sunny and optimistic outlook on the future, consequences be damned (?)

[-] 200fifty@awful.systems 9 points 1 year ago

No no, it's "order of magnitudes". It's like "surgeons general."

[-] 200fifty@awful.systems 9 points 1 year ago* (last edited 1 year ago)

Yeah, it's such a braindead libertarian position to blame tech platforms blocking slurs on The Government. It's literally not illegal to say slurs! It's just not something most normal people want to be associated with

[-] 200fifty@awful.systems 9 points 1 year ago

Six fingers on the right hand

[-] 200fifty@awful.systems 9 points 1 year ago* (last edited 1 year ago)

I'm just wondering how exactly he goes about doing this. Like, if I wanted to slip the N word into a casual conversation (for... some reason), I'm not actually sure how I would go about setting it up?

Like, is he just randomly saying it at people to see how they react (which most normies rightfully would judge as very weird)? Is he using it to describe actual black people (in which case I feel like people dropping him as a friend aren't really doing it over "speech taboos", are they...)? Is he asking people "so how do you feel about the word 'n.....'?" Something else? My curiosity is piqued now.

[-] 200fifty@awful.systems 8 points 2 years ago

The problem is I guess you'd need a significant corpus of human-written stuff in that language to make the LLM work in the first place, right?

Actually this is something I've been thinking about more generally: the "ai makes programmers obsolete" take sort of implies everyone continues to use javascript and python for everything forever and ever (and also that those languages never add any new idioms or features in the future I guess.)

Like, I guess now that we have AI, all computer language progress is just supposed to be frozen at September 2021? Where are you gonna get the training data to keep the AI up to date with the latest language developments or libraries?

[-] 200fifty@awful.systems 9 points 2 years ago

there actually is a comment making this point now:

> Isn't this product kind of impossible? Like a compression program that compresses compressed files? If you have an algorithm for determining whether a generated image is good or bad, couldn't the same logic be incorporated into the network so that it doesn't generate bad images?

the reply is a work of art:

> We’re optimistic about using our own algorithms and models to evaluate another model. In theoretical computer science, it is easier to verify a correct solution than to generate a correct solution (P vs NP problem).

it's not even wrong, as they say

