337 points · submitted 1 week ago* (last edited 1 week ago) by BennyTheExplorer@lemmy.world to c/asklemmy@lemmy.ml

In my opinion, AI just feels like the logical next step for capitalist exploitation and the destruction of culture. Generative AI is (in most cases) just a fancy way for corporations to steal art on a scale that hasn't been possible before. They then use AI to fill the internet with slop and misinformation, and actual artists get fired from their jobs because the company replaces them with an AI that was trained on their original art. For these reasons and some others, it just feels wrong to me to be using AI in such a manner, when this community should be about inclusion and kindness. Wouldn't it be much cooler if we commissioned an actual artist for the banner, or found a nice existing artwork (where the licence fits, of course)? I would love to hear your thoughts!

[-] jsomae@lemmy.ml 79 points 1 week ago

Wouldn’t it be much cooler, if we commissioned an actual artist for the banner

I hate it when AI is used to replace the work an artist would have been paid for. But uh, this is a random open-source forum; there's no funding for artists to make banners. Rejecting AI art -- which was voted for by the community -- just seems like baseless virtue signalling. No artist is going to get paid if we remove it.

But like if you want to commission an artist with your own money, by all means go ahead. You'll still most likely need another community vote to approve it though.

[-] sanguinepar@lemmy.world 75 points 1 week ago

That doesn't change that real artists who made real art will have had their work used without permission or payment to help generate the banner. I'm with OP.

[-] jsomae@lemmy.ml 22 points 1 week ago

If I drew something myself, those artists would also not be paid. I can understand a deontological argument against using AI trained on people's art, but for me, the utilitarian argument is much stronger -- don't use AI if it puts an artist out of work.

[-] BennyTheExplorer@lemmy.world 41 points 1 week ago

It's not about anyone getting paid; it's about affording basic respect and empathy to people and their work. Using AI sends a certain message of "I don't care about your consent or your opinion on me using your art", and I don't think that's a good thing for anyone.

[-] jsomae@lemmy.ml 17 points 1 week ago

Well yeah, I don't care about IP rights. Nothing has been materially stolen, and if AI improves, then the result could some day in theory be indistinguishable from a human who was merely "inspired" by an existing piece of art. At the end of the day, the artist is not harmed by AI plagiarism; the artist is harmed by AI taking what could have been their job.

[-] sanguinepar@lemmy.world 11 points 1 week ago
[-] jsomae@lemmy.ml 7 points 1 week ago
[-] patatas@sh.itjust.works 4 points 1 week ago

By systems positing human creativity as a computational exercise

[-] jsomae@lemmy.ml 6 points 1 week ago

the human brain follows the laws of physics; it therefore follows that human creativity is already computational.

[-] patatas@sh.itjust.works 4 points 1 week ago

Three problems with this:

  1. If computation means "anything that happens in the universe" then the term 'computation' is redundant and meaningless.
  2. We do not know or understand all of the physical laws of the universe, or if those laws indeed hold universally.
  3. Our consciousness does not operate at the level of atomic physics; see Daniel Dennett's 'compatibilism' defense of free will vs Robert Sapolsky's determinism. If we're vulgar materialists, then it follows that there is no free will, and thus no reason to advocate for societal change.
[-] jsomae@lemmy.ml 4 points 1 week ago* (last edited 1 week ago)
  1. Your argument shouldn't rest on an appeal to the desire for the word 'computation' to be less redundant. (I don't really think there's a meaningful difference between computation and physics; we just generally use the term computation to refer to physical processes that result in useful information.) But why don't we define computation as "anything that can be done on a conventional computer (with sufficient time and memory)" -- i.e. Turing-computable?
  2. It is not relevant that we may not know all the physical laws of the universe; what matters is only whether there are laws at all. A scientist cannot cause free will to disappear from the universe simply by learning new facts about the laws of physics. (I would argue that if this seemed to happen, then there was no free will to begin with.)
  3. My understanding of compatibilism is that free will and determinism are compatible; in other words, the laws of physics can give rise to free will (consciousness, as you put it). I think there are some additional twists in compatibilism I don't entirely understand, but that's the gist as far as I have seen. In any case, compatibilism seems to me to be compatible with the idea that one can simulate a human brain; since the simulation and the original would produce the same result, if one has free will, the other must have free will too. (Simulating it multiple times will always produce the same thing, which means it's the same conscious experience -- the same free will -- each time, not different instances of free will. In other words, consciousness is fungible with respect to simulation.) Simulation = computation, so human creativity is computable.

Please note that I'm not arguing that current AIs actually are on the level of human creativity, just that there's no law against that eventually being possible.

[-] patatas@sh.itjust.works 1 points 1 week ago

The fact that we do not know or understand all the laws of physics (and again, if these are even indeed universal!) means that we cannot be certain about equating computation and physics - assuming we define computation as deterministic, as you seem to be doing here.

Can you 'simulate' a human brain? Sure, easy, all you have to do is just build a human brain out of DNA and proteins and lipids and water and hormones etc, and put it in an exact replica of a human body built from that same stuff.

We have no evidence that consciousness can be separated from the material body that gives rise to it!

And even if we try to abstract that away and say "let's just model the entire physical brain & body digitally": that brain & body is not an island; it's constantly interacting with the entirety of the rest of the physical world.

So, you want to 'simulate' a brain with ones and zeroes? You'll need to simulate the entire universe too. That's likely to be difficult, unless you have an extra universe worth of material to build that computational device with.

[-] jsomae@lemmy.ml 1 points 1 week ago* (last edited 1 week ago)

Okay, I agree that the universe may not be Turing-computable, since we don't know the laws of physics. Indeed, it almost certainly isn't, since Turing machines are discrete and the universe is continuous -- there are integrals, for instance, that have no closed form but are physically present in our universe. However, I have no particularly good reason to believe that infinite precision is actually necessary to accurately simulate the human brain, since we can get arbitrarily close to an exact simulation of, say, Newtonian physics, or quantum physics minus gravity, using existing computers -- by "arbitrarily close," I mean that for any desired threshold of error, there exists some discretization constant for which the simulation will remain within that error threshold.
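To make the "arbitrarily close" claim concrete, here's a toy Python sketch (my own illustration, nothing brain-specific): forward-Euler integration of a unit harmonic oscillator, where shrinking the discretization step drives the error against the exact solution toward zero.

```python
import math

def simulate(dt, t_end=1.0):
    # Forward-Euler integration of x'' = -x with x(0)=1, v(0)=0.
    # The exact solution is x(t) = cos(t).
    x, v = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        x, v = x + dt * v, v - dt * x
    return x

# Error against the closed-form answer shrinks as dt shrinks
# (Euler is first-order: global error is roughly proportional to dt),
# so for any error threshold there is a dt small enough to meet it.
errors = [abs(simulate(dt) - math.cos(1.0)) for dt in (0.1, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]
```

Higher-order integrators converge faster, but first-order is enough to show the point: the threshold picks the step size, not the other way around.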

Sure, maybe there are more laws of the universe we don't know, and those turn out to be necessary for the human brain to work. But it seems quite unlikely, as we already have a working reductionist model of the brain -- it seems like we understand how all the component parts, like neurons and such, work, and we can even model how complex assemblages of neurons compute interesting things. Like, we've trained actual rat neurons to play Doom for some ungodly reason, and they behave the way our models predict. Yeah, maybe there's some critical missing law of physics, but the current model seems sufficient, so far as we can tell, to model the brain.

constantly interacting with the rest of the physical world

I feel like the rest of the world shouldn't actually matter for the purposes of free will. I mean, yes, obviously our free will responds to the environment. But if the environment disappeared, our free will shouldn't disappear along with it. In other words, the free will should be either entirely located in the mind, or if you're not a compatabilist/materialist, it's located in the mind plus some other metaphysical component. So, I don't agree that it requires simulating the whole universe in order to simulate a free will (though I do agree that you can't simulate an actual mind in the real world unless you can simulate all its inputs, e.g. placing the mind in some kind of completely walled-off sensory deprivation environment that has within-epsilon-of-zero interaction with the outside world. Obviously, it's not very practical, but for a thought experiment about free will I don't think this detail really matters.)

[-] patatas@sh.itjust.works 1 points 1 week ago

So would you agree that people should be locked up for crimes that a sufficiently advanced AI system predicts they will commit?

Or would you agree that these systems cannot calculate human behaviour?

[-] jsomae@lemmy.ml 1 points 1 week ago* (last edited 1 week ago)

Hahaha, I didn't expect that.

I saw Minority Report, and I think it has a plot hole. If you can see the future then you can change it, meaning that if there is any way to relay information from the oracle to the person who would commit the crime, then that could change whether or not the person will commit the crime.
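The plot hole can be put in a few lines of Python (a toy illustration, not anything from the film): if the person, once told the oracle's prediction, always does the opposite, then no prediction the oracle relays can be consistent.

```python
def contrarian(prediction: bool) -> bool:
    # The would-be criminal hears the oracle's prediction
    # and deliberately does the opposite.
    return not prediction

# Check both possible predictions: neither one comes true
# once it has been relayed to the person it's about.
consistent = [p for p in (True, False) if contrarian(p) == p]
assert consistent == []
```

It's the same diagonalization trick as the halting problem: an oracle whose output feeds back into the system it predicts can always be defeated.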

[-] sukhmel@programming.dev 3 points 1 week ago

If we're vulgar materialists, then it follows that there is no free will, and thus no reason to advocate for societal change.

No free will doesn't imply no change. Lifeless systems evolve over time; take rock formation as an example -- it was all cosmic dust at some point. So no, even if we accept that there is no free will, that shouldn't mean perfect stasis.

[-] patatas@sh.itjust.works 1 points 1 week ago

I never said that no change would occur. I said there was no reason to advocate for it if there is no free will.

[-] RaivoKulli@sopuli.xyz 12 points 1 week ago

I mean how many of us are pirating stuff

[-] Evotech@lemmy.world 3 points 1 week ago* (last edited 1 week ago)

Thank you. You can't both love piracy (which Lemmy overwhelmingly does) and hate AI.

[-] dil@lemmy.zip 1 points 1 week ago* (last edited 1 week ago)

There are plenty of examples where piracy harms no one: devs get paid no matter what, like the people working on shows like South Park that have 5-year deals, and many devs get fired right after a game is released, so they don't benefit if it does well. I never pirate indie games; I use the 2-hour Steam refund window instead to see if I want a game.

AI, on the other hand, lol -- it actively takes away jobs.

[-] Evotech@lemmy.world 1 points 1 week ago

There would be no job designing a lemmy banner

[-] dil@lemmy.zip 1 points 1 week ago

I'm glad I don't think like you, that'd be a confusing time

[-] dil@lemmy.zip 1 points 1 week ago

It's sad that you think that is what I was arguing

[-] sanguinepar@lemmy.world 11 points 1 week ago

Yeah, but if you drew it yourself then they wouldn't expect to be paid. Unless you plagiarised them to the degree that would trigger a copyright claim, they would (at worst) just see it as a job that they could have had, but didn't. Nothing of theirs was directly used, and at least something original of theirs was created. Whereas AI images are wholly based on other work and include no original ideas at all.

[-] jsomae@lemmy.ml 13 points 1 week ago

You're posting on lemmy.ml; we don't care much for intellectual property rights here. What we care about is that the working class not be deprived of their ability to make a living.

[-] sanguinepar@lemmy.world 3 points 1 week ago

Agree with that. I don't think the two are mutually exclusive though?

[-] jsomae@lemmy.ml 4 points 1 week ago

I agree that they are not mutually exclusive, which is why I usually side against AI. On this particular occasion however, there's a palpable difference, since no artist is materially harmed.

[-] yogthos@lemmy.ml 4 points 1 week ago

You haven't explained how it would be different in any way. Human artists learn by emulating other artists, and the vast majority of art is derivative in nature. Unless a specific style is specified in the user's prompt, AI images are also not plagiarised to the degree that would trigger a copyright claim. The only actual difference here is that the process is automated: a machine produces the image instead of a human drawing it by hand.

[-] rumba@lemmy.zip 12 points 1 week ago

Real artists use uncited reference art all the time. The person who drew a picture of Catherine the Great for a video game certainly didn't credit the artist of the source art they were looking at when they drew it. No royalties went to that source artist. People mostly stopped buying reference art books when Google image search became a thing.

And hell, a lot of professional graphic artists right now use AI for inspiration.

This isn't to say that the problem isn't real -- a lot of artists stand to lose their livelihood over it -- but nobody's paying someone to draw a banner for this forum. The best you're going to get is some artist doing it out of the goodness of their heart when they could be spending their time and effort on a paying job.

[-] sanguinepar@lemmy.world 20 points 1 week ago* (last edited 1 week ago)

Real artists may be influenced, but they still put something of themselves into what they make. AI only borrows from others, it creates nothing.

I realise no-one is paying someone to make a banner for this forum, it would need to be someone choosing to do it because they want there to be a banner. But the real artists whose work was used by the AI to make the banner had no choice in the matter, let alone any chance of recompense.

[-] rumba@lemmy.zip 6 points 1 week ago

So what's the solution for this board? Should they just put up a black image? Should they start a crowdfunding campaign to pay an artist?

If it really bothers an artist enough, they could make a banner for the board and ask the mods to swap out the AI one. But they'll have to make something that more people like than the AI.

[-] patatas@sh.itjust.works 13 points 1 week ago

The banner could be anything or nothing at all, and as long as it isn't AI generated, I would like it better

[-] yogthos@lemmy.ml 4 points 1 week ago

I, on the other hand, would not.

[-] patatas@sh.itjust.works 1 points 1 week ago

Perhaps we should ask ChatGPT what to do about this?

[-] yogthos@lemmy.ml 2 points 1 week ago

or perhaps you could stop perseverating

[-] patatas@sh.itjust.works 1 points 1 week ago

Not sure where I'm doing that - have been having some pretty interesting conversations with others tbh. My point is that you wouldn't outsource that decision to ChatGPT, so why is the creation of a banner image outsourced to one of these inherently dehumanizing systems?

[-] yogthos@lemmy.ml 1 points 1 week ago
[-] patatas@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago)

Will read your link, but when I saw the phrase "democratising creativity" I rolled my eyes hard and then grabbed this for you from my bookmarks. But I'll read the rest anyway

https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything

Edit: yeah so that piece starts out by saying how art is about the development of what I'm taking to be a sort of 'curatorial' ability, but ends up arguing that as long as the slop machines are nominally controlled by workers, that it's fine actually. I couldn't disagree more.

Elsewhere in a discussion with another user here, I attempted to bring up Ursula Franklin's distinction between holistic and prescriptive technologies. AI is, to me, exemplary of a prescriptive process, in that its entire function is to destroy opportunities for decision-making by the user. The piece you linked admits this is the goal:

"What distinguishes it is its capacity to automate aspects of cognitive and creative tasks such as writing, coding, and illustration that were once considered uniquely human."

I reject this as being worthwhile. The output of those human pursuits can be mimicked by this technology, but, because (as the link I posted makes clear) these systems do not think or understand, they cannot be said to perform those tasks any more than a camera can be said to be painting a picture.

And despite this piece arguing that the people using these processes are merely incorporating a 'tool' into their work, and that AI will open up avenues for incredible new modes of creativity, I struggle to think of an example where the message some GenAI output conveyed was anything other than "I do not really give a shit about the quality of the output".

These days our online environment suffers constantly from this stream of "good enough, I guess, who cares" stuff that insults the viewer by presuming they just want to see some sort of image at the top of a page, and don't care about anything beyond this crass consumptive requirement.

The banner image in question is a great example of this. The overall aesthetic is stereotypical of GenAI images, which supports the notion that control of the process was more or less ceded to the system (or, alternately, that these systems provide few opportunities for directing the process). There are bizarre glitches that the person writing the prompt couldn't be bothered to fix, the composition is directionless, the question-marks have a jarring crispness that clashes with the rest of the image, the tablets? signs? are made from some unknown material, perhaps the same indistinct stuff as the ground these critters are standing on.

It's all actively hostile to a sense of community, as it pretends that communication is something that can just as well be accomplished by a statistical process, because who cares about trying to create something from the heart?

These systems are an insult to human intelligence while also undermining it by automating our decision-making processes. I wrote an essay about this if you're interested, which I'll link here and sign off, because I don't want to be accused again of repeating myself unnecessarily: https://thedabbler.patatas.ca/pages/ai-is-dehumanization-technology.html

[-] yogthos@lemmy.ml 1 points 1 week ago

Feel free to keep tilting at windmills I guess.

[-] petrol_sniff_king 11 points 1 week ago

Considering AI is really unlikeable, I don't think that'll be too hard.

[-] rumba@lemmy.zip 4 points 1 week ago

Proof is when it happens.

[-] supersquirrel@sopuli.xyz 7 points 1 week ago

But, they’ll have to make something that more people like than the AI.

No, it does not have to be better than the AI image to be preferable.

[-] yogthos@lemmy.ml 4 points 1 week ago

Speak for yourself.

[-] rumba@lemmy.zip 3 points 1 week ago

Okay, we have your vote down. Now think about the other people who are also here. It needs to be preferable to the majority, not just you.

this post was submitted on 29 Jul 2025
337 points (100.0% liked)
