EDIT: The post's been deleted, and the Substack's seemingly been abandoned.
Starting this Stubsack off, I found a Substack post titled "Generative AI could have had a place in the arts", which attempts to play devil's advocate for the plagiarism-fueled slop machines.
Pointing to one particular lowlight, the author conflates AI with actually useful tech to try to make an argument:
I think it's a piece in the long line of "AI means A and B, and A is bad and B can be good, so not all AI is bad", which isn't untrue in the general sense, but it serves the interest of AI guys who aren't interested in using B; they're interested in promoting AI wholesale.
We're not in a world where we should be offering AI people any carveout. As you mention in the second half, they aren't interested in being good actors; they just want a world where AI is societally acceptable and they can become the Borg.
More directly addressing your piece, I don't think the specific examples you bring up are all that compelling. Or at least, not when weighed against the cost of building an AI model, especially since you argue it'll be cheaper than traditional alternatives.
I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it's common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn't priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.
From that angle, it's worth understanding that today's generative tooling will also be configured for capitalism. Indeed, that's basically what RLHF does to a language model; in the jargon, it creates an "agent", a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won't ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection ("filters") in selfie-sharing apps aren't added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they'll invest in Shmoo production in order to capture more of a potential future market.
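(As a toy illustration of that ensemble idea, and not any real forecasting system: imagine a made-up random-walk "model" from which we sample many possible futures, then summarize them statistically. Everything here, the model, the numbers, the names, is invented for the sketch.)

```python
import random
import statistics

def sample_future(start_temp, days, rng):
    """One possible future: a toy random walk over daily temperatures (purely illustrative)."""
    temps = [start_temp]
    for _ in range(days):
        temps.append(temps[-1] + rng.gauss(0, 1.5))  # made-up daily variation
    return temps

def ensemble_forecast(start_temp=15.0, days=7, members=500, seed=42):
    """Generate many possible futures, then do statistics on the final day."""
    rng = random.Random(seed)
    finals = sorted(sample_future(start_temp, days, rng)[-1] for _ in range(members))
    return {
        "mean": statistics.fmean(finals),
        "p10": finals[int(0.10 * members)],
        "p90": finals[int(0.90 * members)],
    }

print(ensemble_forecast())
```

Swap the random walk for an actual generative model and the shape of the pipeline stays the same: generate many futures, then do statistics over them.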
So, imagine that we build miniature cloned-voice text-to-speech models. We don't need to imagine what they're used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.
Finally, yes, you're completely right that e.g. smartphones completely revolutionized filmmaking. It's important to know that the film industry didn't intend for this to happen! This is just as much of an exaptation as capitalist recuperation, and we can't easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y'know, plan interference).