[-] 200fifty@awful.systems 9 points 4 months ago* (last edited 4 months ago)

Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?

This is kind of the central issue to me honestly. I'm not a lawyer, just a (non-professional) artist, but it seems to me like "using artistic works without permission of the original creators in order to create commercial content that directly competes with and destroys the market for the original work" is extremely not fair use. In fact it's kind of a prototypically unfair use.

Meanwhile Midjourney and OpenAI are over here like "uhh, no copyright infringement intended!!!" as though "fair use" were a magic phrase you say that makes the thing you're doing suddenly okay. They don't seem to have very solid arguments justifying themselves other than "AI learns like a person!" (false) and "well, Google Books did something that's not really the same at all, that one time."

I dunno. I know we can't predict which way this will go legally, because the AI people presumably have very good lawyers, but something about the way everyone seems to frame this as "oh, both sides have good points! who will turn out to be right in the end!" really bugs me for some reason. Like, it seems to me that there's a notable asymmetry here!

[-] 200fifty@awful.systems 10 points 5 months ago

Not even -- it's a simplified Civilization clone for mobile. (It actually sounds like a pretty neat little game, but, uh, chess it is not!)

[-] 200fifty@awful.systems 9 points 6 months ago

heck yeah I love ~~Physics Jenny Nicholson~~ Angela Collier

[-] 200fifty@awful.systems 10 points 7 months ago* (last edited 7 months ago)

Thank god I can have a button on my mouse to open ChatGPT in Windows. It was so hard to open it with only the button in the taskbar, the start menu entry, the toolbar button in every piece of Microsoft software, the auto-completion in browser text fields, the website, the mobile app, the chatbot in Microsoft's search engine, the chatbot in Microsoft's chat software, and the button on the keyboard.

[-] 200fifty@awful.systems 10 points 7 months ago* (last edited 7 months ago)

I mean they do throw up a lot of legal garbage at you when you set stuff up, I'm pretty sure you technically do have to agree to a bunch of EULAs before you can use your phone.

I have to wonder though if the fact Google is generating this text themselves rather than just showing text from other sources means they might actually have to face some consequences in cases where the information they provide ends up hurting people. Like, does Section 230 protect websites from the consequences of just outright lying to their users? And if so, um... why does it do that?

Even if a computer generated the text, I feel like there ought to be some recourse there, because the alternative seems bad. I don't actually know anything about the law, though.

[-] 200fifty@awful.systems 10 points 9 months ago

yeah, I definitely think machine learning has obvious use cases to benefit the common good (youtube auto captions being Actually Pretty Decent Now is one that comes to mind easily) but I'm much less certain about most of the stuff being presently marketed as "AI"

[-] 200fifty@awful.systems 10 points 1 year ago

The problem is just transparency, you see -- if they could just show people the math that led them to determining that this would save X million more lives, then everyone would realize that it was actually a very good and sensible decision!

[-] 200fifty@awful.systems 10 points 1 year ago

yeah, my first thought was, what if you want to comment out code in this future? does that just not work anymore? lol

[-] 200fifty@awful.systems 10 points 1 year ago* (last edited 1 year ago)

I definitely think the youths are stressed because of 'environmental pollution,' but not in the way this commenter means...

[-] 200fifty@awful.systems 10 points 1 year ago* (last edited 1 year ago)

This is good! Though he neglects to mention the group of people (including myself) who have yet to be sold on AI's usefulness at all. All critics of practical AI harms get lumped under "reformers," implying they still see it as valuable, just currently misguided.

Like, ok, so what if China develops it first? Now they can... generate more convincing spam, write software slightly faster with more bugs, and starve all their artists to death? ... Oh no, we'd better hurry up and compete with that!

[-] 200fifty@awful.systems 10 points 1 year ago

I had the same thought as Emily Bender's first one there, lol. The map is interesting to me, but mostly as a demonstration of how anglosphere-centric these models are!

[-] 200fifty@awful.systems 10 points 1 year ago

The industry is still learning how to even use the tech.

Just like blockchain, right? That killer app's coming any day now!

