Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Starting this Stubsack off, I found a Substack post titled "Generative AI could have had a place in the arts", which attempts to play devil's advocate for the plagiarism-fueled slop machines.
Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try to make an argument:
Oh hey, that's my article actually, thanks for reading it! :D
Reading the part on motion capture back with your feedback in mind, I do see how it can give the impression of conflating generative AI with another form of machine learning (or "AI", as all of these are marketed). That's my mistake; I could have worded it better -- thanks for pointing it out.
I don't agree that I was playing devil's advocate for the slop machines, however. I spend the majority of the article talking about my explicit disdain for them and their users. The point of the piece was to point to what I believe to be genuine use-cases for ethical ML (including gen AI) in art -- not as a replacement for talent but as tools purpose-built for creatives, like the few that existed before the current bubble. I think the paragraph right after the one on mo-cap best summarizes my thoughts:
Did you have any other thoughts on my article? I'm still very much a novice writer, so every bit of feedback is invaluable to me.
I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it's common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn't priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.
From that angle, it's worth understanding that today's generative tooling will also be configured for capitalism. Indeed, that's basically what RLHF does to a language model; in the jargon, it creates an "agent", a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won't ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection ("filters") in selfie-sharing apps aren't added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they'll invest in Shmoo production in order to capture more of a potential future market.
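(As an aside, the weather example is worth making concrete, since it's the one case where raw generation plus statistics is the whole product. Here's a toy sketch of ensemble forecasting; `step` and `forecast` are made-up stand-ins for a real numerical model, not anyone's actual code.)

```python
import random
import statistics

def step(temp: float) -> float:
    # Stand-in for one timestep of a weather model: drift plus noise.
    return temp + random.gauss(0.0, 0.5)

def forecast(initial_temp: float, horizon: int = 24, members: int = 100):
    # Generate many possible futures from perturbed initial conditions,
    # then summarize them statistically -- the "analysis" half of the job.
    futures = []
    for _ in range(members):
        temp = initial_temp + random.gauss(0.0, 0.2)  # perturbed start
        for _ in range(horizon):
            temp = step(temp)
        futures.append(temp)
    return statistics.mean(futures), statistics.stdev(futures)

mean, spread = forecast(15.0)
print(f"expected temperature: {mean:.1f} ± {spread:.1f}")
```

Any single generated future is worthless on its own; the value comes entirely from the statistics computed over the whole ensemble.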
So, imagine that we build miniature cloned-voice text-to-speech models. We don't need to imagine what they're used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.
Finally, yes, you're completely right that e.g. smartphones revolutionized filmmaking. It's important to know that the film industry didn't intend for this to happen! This is just as much of an exaptation as capitalist recuperation, and we can't easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y'know, plan interference).
I think it's a piece in the long line of "AI means A and B, and A is bad and B can be good, so not all AI is bad", which isn't untrue in the general sense, but it serves the interests of AI guys who aren't interested in using B; they're interested in promoting AI wholesale.
We're not in a world where we should be offering AI people any carveout; as you mention in the second half, they aren't interested in being good actors, they just want a world where AI is societally acceptable and they can become the Borg.
More directly addressing your piece, I don't think the specific examples you bring up are all that compelling. Or at least, not when weighed against the cost of building an AI model, especially since you argue it'll be cheaper than traditional alternatives.