this thread has broken containment, and the median quality of the discussion has dropped to the point where some rando decided to start a subthread about how it’s not ok to celebrate hitler’s death and also two regulars had an extremely heated fight about who was the most not-mad about the word chat as a noun/pronoun/whatever in English of all fucking things so uhhhh that’s all folks
Meanwhile I've seen people justifying the power use of genAI with "but people also consume as much if not more energy through their lives".
This is called rationalizing because any relationship it has with reality is strictly rationed
idk, I’ve seen enough people call ai users
CW: racist bullshit but with words swapped
"clanker-loving species traitors"
to know that some anti-ai stuff is steeped in bigotry (and yes, it’s still bigotry if it’s ironic btw)
i don’t think most anti-ai people are like this. but some absolutely are and denying it helps no one, and it harms marginalized people
in every serious (ie not TikTok or any other right-wing or unmoderated hellhole) anti-AI community I know, bigotry gets you banned even if you’re trying to hide it behind nonsense words like a 12 year old
meanwhile the people who seem to have dreamt up the idea that AI critical spaces are full of masked bigotry appear to be mostly ~~Neil Gaiman~~ Warren Ellis (see replies), who has several credible sexual assault allegations leveled against him, and Jay Graber, bluesky’s CEO who deserves no introduction (search mastodon or take a look at bluesky right now if the name’s unfamiliar). I don’t trust either of those people’s judgement as to what harms marginalized people.
Gaiman? I thought it was Warren Ellis, the other "into the trebuchet with you" guy of comics.
you’re right, I even had Ellis’ Wikipedia page open to re-confirm the allegations but my fingers wanted it to be Gaiman for whatever reason
oh absolutely, fuck graber and fuck, fuuuuuuck gaiman to hell. i don’t have an inch of trust for either of them.
tho I will say that even here on lemmy, even if it didn’t reach the awfulness of what i quoted, i’ve seen a bunch of clanker memes that were seriously iffy… I wouldn’t qualify those as "serious discussions" but they still matter in the broader ai discourse
and I’d like to clarify my stance: fuck ai. it can have its uses sometimes but the dominant (and promoted) uses are awful for all the reasons everyone knows about. just wanted to make it clear that I am not an ai supporter
> oh absolutely, fuck graber and fuck, fuuuuuuck gaiman to hell. i don’t have an inch of trust for either of them.
>
> tho I will say that even here on lemmy, even if it didn’t reach the awfulness of what i quoted, i’ve seen a bunch of clanker memes that were seriously iffy… I wouldn’t qualify those as “serious discussions”
I agree with all of this
> but they still matter in the broader ai discourse
and disagree strongly with this. part of the mission of TechTakes and SneerClub is that they must remain a space where marginalized people are welcome, and we know from prior experience that the only way to guarantee that is to ensure that bigots and boosters (and sometimes they’re absolutely the same person — LLMs are fashtech after all) can’t ever control the discourse. I know through association that a lot of moderated AI-critical spaces, writers, and researchers follow a similar mission.
now, unmoderated and ineffectively moderated spaces are absolutely vulnerable to being turned into fascist pipelines, and inventing slurs is one way they do it (see also “waffles” quickly being picked up as an anti-trans slur on bluesky, which has moderation that’s hostile to its marginalized userbase). if that’s something that’s happening in a popular community and there’s enough examples to show a pattern, then I’d love to have it as a post in TechTakes or as a blog link we can share around the AI-critical community as a warning.
Any minimally competent critique of AI would make such bigotry ipso facto meaningless. Note that the cited phrase implicitly accepts the premise that “AIs” belong in the same category of sentient beings as humans, since it’s only possible to betray the latter for the former if they do; hence, for any genuinely AI-critical person, it makes about as much sense as talking about ‘anti-table bigotry’; it’s just a meaningless configuration of words if one understands what they mean.
from what i see, white people simply clamor for a context in which they're "allowed" to finally call someone the n-word, and are willing to accept substitute targets for their racism
add in a protective cloak of "it's ironic and a joke and YOU'RE the real racist for pointing this out" and you get a whole lot of people who are extremely okay slinging around barely modified racial slurs
right???
in retrospect, me making this about ai was a mistake, cause it is not the only place this shows up!
and people get very defensive about this one too. like i'm pretty confident that coolboy004 on reddit is not delivering a nuanced take on the ethics of a company running an ai-powered call center when he types "screws will not replace us" in all caps on /r/fuckai, and yet
i think it sucks that we're stuck with, say, bluesky engineers genuinely trying to pull the most moronic variant of "but what if the stochastic text generator might have feelings in the future too", but we still need to be able to talk about why people feel the need to make "clanka with the hard r" jokes (answer it's racism)
I'm really sorry to say this, I'm sure you're a lovely person, but fuck out of here with all that bullshit.
you’ve never posted on our instance before as far as I can tell and I’m pretty sure I didn’t ask you to fucking gatekeep one of our threads and start a shitty little fight that I have to clean up
How online do you have to be where "people dunking on AI "artists" is like Kristallnacht" doesn't sound completely fucking deranged?
This shows why it's so easy for conservatives to reverse Uno the language of social justice, painting themselves as the victims of oppression and liberals / women / minorities / immigrants / LGBTQ+ people / anyone else who exists without their consent as oppressors. They refuse to admit that words mean things, and that things are more important than words.
It's not a lack of reading comprehension. It's a lack of reality comprehension.
The 'change the subject' thing can be useful if you're changing like for like. Equating AI algorithms to the Jewish people is very far from that. To a disturbing degree.