Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] BlueMonday1984@awful.systems 6 points 12 hours ago* (last edited 12 hours ago)

GoToSocial recently put up a code of conduct that openly barred AI-"assisted" changes and fascist/capitalist involvement, prompting some concern trolling on the red site.

Got a promptfondler trying to paint basic human decency as ridiculous, and a Concerned Individual^tm^ who's pissed at GoToSocial refusing to become a Nazi bar.

[-] gerikson@awful.systems 1 point 1 hour ago

Yet another example of how acceptance of GenAI is increasingly coded as right wing

[-] mlen@awful.systems 6 points 12 hours ago

Signal is finally close to releasing a cross-platform backup system: https://signal.org/blog/introducing-secure-backups/

[-] nfultz@awful.systems 7 points 16 hours ago

For those of you in the (West) LA area, there's a panel with Brian Merchant happening tomorrow. Probably no food this school year but still looks good.

https://law.ucla.edu/events/democracy-technology-salon

If anyone does turn up, codeword is banana bread, otherwise I'll assume you're a lawyer (not derogatory).

[-] BigMuffN69@awful.systems 7 points 13 hours ago* (last edited 11 hours ago)

Until proven otherwise, I assume everyone I encounter is a fellow sneerer (derogatory)

[-] aio@awful.systems 7 points 14 hours ago

codeword is banana bread

Will there be statues to swap as well?

[-] gerikson@awful.systems 4 points 14 hours ago

MAGA hates AI (well, Big Tech):

https://archive.ph/mBo9I

(Originally The Verge: MAGA populists call for holy war against Big Tech)

https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon

[-] CinnasVerses@awful.systems 8 points 18 hours ago* (last edited 17 hours ago)

When it started in ’06, this blog was near the center of the origin of a “rationalist” movement, wherein idealistic youths tried to adapt rational styles and methods. While these habits did often impress, and bond this community together, they alas came to trust that their leaders had in fact achieved unusual rationality, and on that basis embraced many contrarian but not especially rational conclusions of those leaders. - Robin Hanson, 2025

I hear that even though Yud started blogging on Hanson's site, and even though George Mason University-type economics is trendy with EA and LessWrong, Hanson never identified himself with EA or LessWrong as movements. So this is like Gabriele D'Annunzio insisting he is a nationalist rather than a fascist, not Nassim Nicholas Taleb denouncing phrenology.

[-] scruiser@awful.systems 4 points 10 hours ago* (last edited 8 hours ago)

He had me in the first half: I thought he was calling out the rationalists' problems (even if dishonestly disassociating himself from them). But then his recommended solution was prediction markets (a concept which rationalists have in fact been trying to play around with, albeit at a toy-model level with fake money).

[-] blakestacey@awful.systems 3 points 4 hours ago* (last edited 4 hours ago)

Also a concept that Scott Aaronson praised Hanson for.

https://web.archive.org/web/20210425233250/https://twitter.com/arthur_affect/status/994112139420876800

(Crediting the "Great Filter" to Hanson, like Scott Computers there, sounds like some fuckin' bullshit to me. In Cosmos, Carl Sagan wrote, "Why are they not here? There are many possible answers. Although it runs contrary to the heritage of Aristarchus and Copernicus, perhaps we are the first. Some technical civilization must be the first to emerge in the history of the Galaxy. Perhaps we are mistaken in our belief that at least occasional civilizations avoid self-destruction." And in his discussion of abiogenesis: "Life had arisen almost immediately after the origin of the Earth, which suggests that life may be an inevitable chemical process on an Earth-like planet. But life did not evolve beyond blue-green algae for three billion years, which suggests that large lifeforms with specialized organs are hard to evolve, harder even than the origin of life. Perhaps there are many other planets that today have abundant microbes but no big beasts and vegetables." Boom! There it is, in only the most successful pop-science book of the century.)

[-] swlabr@awful.systems 4 points 3 hours ago* (last edited 2 hours ago)

Most famously, Robin is […] also the inventor of futarchy

A futarchy, you say? Tell me more, Robin Hanson

[-] sailor_sega_saturn@awful.systems 6 points 8 hours ago

Honestly Hanson is so awful the rationalists almost make him look better by association.

[-] scruiser@awful.systems 5 points 8 hours ago

He's the one that used the phrase "silent gentle rape"? Yeah, he's at least as bad as the worst evo-psych pseudoscience misogyny posted on LessWrong, with the added twist that he has a position in academia to lend him more legitimacy.

[-] swlabr@awful.systems 1 point 2 hours ago

I started reading his post with that title to refresh my memory. Just to get your feet wet:

DEC 01, 2010

Added Oct ’13:

Man, what happened in the three years it took for a content warning?

Anyway, I skimmed it; the rest of the post is a huge pile of shit that I don't want to read any more of. I'm sure it's been picked apart already. But JFC.

[-] Tar_alcaran@sh.itjust.works 5 points 16 hours ago

I deeply regret having made posts proclaiming LessWrong as amazing in the past.

They do still have a decent article here and there, but that's like digging for strawberries in a pile of shit. Even if you find one, it won't be great.

[-] CinnasVerses@awful.systems 5 points 12 hours ago* (last edited 11 hours ago)

We have some threads, like Vaccinations in Book/Article Form, which try to share good pop science and textbooks without the cult shit and Dunning-Kruger. People who think they know everything and are mysteriously underemployed tend to have the most time to post, though.

[-] EponymousBosh@awful.systems 15 points 1 day ago
[-] BlueMonday1984@awful.systems 14 points 22 hours ago

I genuinely thought therapists were gonna avoid the psychosis-inducing suicide machine after seeing it cause psychosis and suicide. Clearly, I was being too optimistic.

[-] fullsquare@awful.systems 5 points 20 hours ago

nah they're built different

[-] swlabr@awful.systems 3 points 18 hours ago

Yeah, that headline and its writer can kick rocks.

[-] zogwarg@awful.systems 9 points 1 day ago
The future is now, and it is awful. 
Would any still wonder why, I grow so ever mournful.
[-] froztbyte@awful.systems 7 points 1 day ago

irl winced at this

[-] wizardbeard@lemmy.dbzer0.com 15 points 1 day ago

Some poor souls who arguably have their hearts in the right place definitely don't have their heads screwed on right, and are trying to do hunger strikes outside Google's AI offices and Anthropic's offices.

https://programming.dev/post/37056928 contains links to a few posts on X by the folks doing it.

Imagine being so worried about AGI that you thought it was worth starving yourself over.

Now imagine feeling that strongly about it and not stopping to ask why none of the ideologues who originally sounded the alarm bells about it have tried anything even remotely as drastic.

On top of all that, imagine being this worried about what Anthropic and Google are doing in AI research, hopefully being aware of Google's military contracts, and somehow thinking they give a singular shit if you kill yourself over this.

And... where are the people outside fucking OpenAI? Bets on this being some corporate shadowplay shit?

I mean, I try not to go full conspiratorial everything-is-a-false-flag, but the fact that the biggest AI company that has been explicitly trying to create AGI isn't getting the business here is incredibly suspect. On the other hand, it feels like anything that publicly leans into the fears of evil computer God would be a self-own when they're in the middle of trying to completely ditch the "for the good of humanity, not just immediate profits" part of their organization.

[-] JFranek@awful.systems 5 points 14 hours ago

It's two guys in London and one guy in San Francisco. In London there's presumably no OpenAI office; in SF you can't be in two places at once, and Anthropic has more true believers/does more critihype.

Unrelated: a few minutes before writing this, a bona fide cultist replied to the programming.dev post, a cultist with the handle "BussyGyatt @feddit.org". Truly the dumbest timeline.

[-] bigfondue@lemmy.world 6 points 22 hours ago* (last edited 22 hours ago)

Didn't OpenAI just file court documents claiming that their opposition is funded by competitors? Accusing someone else of what they themselves are doing seems to be a pretty popular strategy these days.

[-] holdenweb@freeradical.zone 2 points 22 hours ago

@bigfondue @YourNetworkIsHaunted every accusation is a confession!

[-] Soyweiser@awful.systems 6 points 23 hours ago* (last edited 22 hours ago)

I don't know anything about the locations of any offices, but could it be that OpenAI just didn't have any local places? Asking them "why not all of them?" would be a good journalist question.

But otoh it is just ~~two~~ three of them, and the second one's photo gives off a weird vibe. Why is he smiling like it is a joke?

[-] froztbyte@awful.systems 4 points 1 day ago

gigabyte selling shovels (and not even just random shovels, specialty shovels that need a fixed type of mobo to use)

not gonna spend much effort on it now but if someone runs into an actual worthwhile review showing training performance numbers I'd be keen to see (my expectations are that it still does not do very much, and that runtime quality still underperforms relative to VC-subsidised platforms)

[-] nightsky@awful.systems 5 points 23 hours ago

Fascinating how that product page is full of marketing fluff, but nowhere does it say what this actually is...? What does it do? It's some kind of... memory expansion? But what's beneath the big heatsink then? All they say is that it's somehow amazing:

In the age of local AI, GIGABYTE AI TOP is the all-round solution to win advantages ahead of traditional AI training methods. It features a variety of groundbreaking technologies that can be easily adapted by beginners or experts, for most common open-source LLMs, in anyplace even on your desk.

A variety of groundbreaking technologies, uh huh, okay then. In so many ways this is the perfect companion product for AI.

[-] istewart@awful.systems 2 points 11 hours ago

Oh, it's a CXL board, Compute Express Link. Basically a way to attach DRAM to PCI Express. I know some people working on this stuff for one of the big vendors, but in that context it was a rack-scale box capable of handling multiple terabytes' worth of DIMMs. Having this as a desktop expansion card seems like a bit of a marginal application, but Gigabyte's done weird shit before. For instance, I have an AMD-compatible Thunderbolt 3 card that was only made in limited quantities by them and ASRock.
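For the curious, on Linux a CXL Type 3 memory expander typically just shows up as an extra CPU-less NUMA node, so ordinary NUMA tooling can target it. A minimal sketch with libnuma, assuming the card's memory appears as node 1 (that node number is an assumption; Gigabyte's product page doesn't document the software side at all):

```c
/* Toy sketch: treat a CXL memory expander as a far NUMA node.
 * Assumes Linux exposes the card as CPU-less NUMA node 1 and
 * that libnuma is installed. Build: gcc cxl_sketch.c -lnuma
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA not supported on this system\n");
        return EXIT_FAILURE;
    }

    int cxl_node = 1;        /* assumption: CXL DRAM shows up here */
    size_t len = 1UL << 30;  /* 1 GiB */

    /* Allocate pages bound to the (slower, bigger) CXL-backed node. */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    memset(buf, 0, len);     /* touch pages so they're actually placed */
    printf("1 GiB resident on node %d\n", cxl_node);

    numa_free(buf, len);
    return EXIT_SUCCESS;
}
```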

[-] BlueMonday1984@awful.systems 8 points 1 day ago

Starting this Stubsack off, I found a Substack post titled "Generative AI could have had a place in the arts", which attempts to play devil's advocate for the plagiarism-fueled slop machines.

Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try and make an argument:

While the idea of generative AI “democratizing” art is more or less a meme these days, there are in fact AI tools that do make certain artforms more accessible to low-budget productions. The first thing to come to mind is how computer vision-based motion capture give 3D animators access to clearer motion capture data from a live-action actor using as little as a smartphone camera and without requiring expensive mo-cap suits.

[-] shapeofquanta@lemmy.vg 6 points 22 hours ago

Oh hey, that's my article actually, thanks for reading it! :D

Reading the part on motion capture back with your feedback in mind, I do see how it can give the impression of conflating generative AI with another form of machine learning (or "AI", as all of these are marketed). That's my mistake; I could have worded it better -- thanks for pointing it out.

I don't agree that I was playing devil's advocate for the slop machines, however. I spend the majority of the article talking about my explicit disdain for them and their users. The point of the piece was to highlight what I believe to be genuine use-cases for ethical ML (including gen AI) in art -- not as a replacement for talent but as tools purpose-built for creatives, like the few that existed before the current bubble. I think the paragraph right after the one on mo-cap best summarizes my thoughts:

Imagine that [...] we purpose-built miniature voice cloning models to enhance voice artists’ performances. Not by replacing them with text-to-speech or voice changing algorithms, but by aiding their craft to venture places traditional voice work could not reach on its own. Take, for example, role-playing video games with self-insert protagonists allowing its characters to say the player’s chosen name without having to dance around it. We could have had voice artists and machine learning experts working together in designing minimalistic AI models to seamlessly weave computer-assisted voice lines into their human performances, creating something previously impossible.

Did you have any other thoughts on my article? I'm still very much a novice writer, so every bit of feedback is invaluable to me.

[-] corbin@awful.systems 4 points 8 hours ago

I think that you have useful food for thought. I think that you underestimate the degree to which capitalism recuperates technological advances, though. For example, it's common for singers supported by the music industry to have pitch correction which covers up slight mistakes or persistent tone-deafness, even when performing live in concert. This technology could also be used to allow amateurs to sing well, but it isn't priced for them; what is priced for amateurs is the gimmicky (and beloved) whammy pedal that allows guitarists to create squeaky dubstep squeals. The same underlying technology is configured for different parts of capitalism.

From that angle, it's worth understanding that today's generative tooling will also be configured for capitalism. Indeed, that's basically what RLHF does to a language model; in the jargon, it creates an "agent", a synthetic laborer, based on desired sales/marketing/support interactions. We also have uses for raw generation; in particular, we predict the weather by generating many possible futures and performing statistical analysis. Style transfer will always be useful because it allows capitalists to capture more of a person and exploit them more fully, but it won't ever be adopted purely so that the customer has a more pleasant experience. Composites with object detection ("filters") in selfie-sharing apps aren't added to allow people to express themselves and be cute, but to increase the total and average time that users spend in the apps. Capitalists can always use the Shmoo, or at least they'll invest in Shmoo production in order to capture more of a potential future market.

So, imagine that we build miniature cloned-voice text-to-speech models. We don't need to imagine what they're used for, because we already know; Disney is making movies and extending their copyright on old characters, and amateurs are making porn. For every blind person using such a model with a screen reader, there are dozens of streamers on Twitch using them to read out donations from chat in the voice of a breathy young woman or a wheezing old man. There are other uses, yes, but capitalism will go with what is safest and most profitable.

Finally, yes, you're completely right that e.g. smartphones completely revolutionized filmmaking. It's important to know that the film industry didn't intend for this to happen! This is just as much of an exaptation as capitalist recuperation, and we can't easily plan for it because of the same difficulty in understanding how subsystems of large systems interact (y'know, plan interference).

[-] FredFig@awful.systems 7 points 18 hours ago

I think it's a piece in the long line of "AI means A and B, and A is bad and B can be good, so not all AI is bad", which isn't untrue in the general sense, but serves the interest of AI guys who aren't interested in using B; they're interested in promoting AI wholesale.

We're not in a world where we should be offering AI people any carveout; as you mention in the second half, they aren't interested in being good actors. They just want a world where AI is societally acceptable and they can become the Borg.

More directly addressing your piece, I don't think the specific examples you bring up are all that compelling. Or at least, not compared to the cost of building an AI model, especially when you bring up how it'll be cheaper than traditional alternatives.
