
Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] lurker@awful.systems 3 points 3 hours ago
[-] dgerard@awful.systems 5 points 6 hours ago* (last edited 6 hours ago)

If you follow me on Bluesky, you'll need to follow again, because I committed the crime of lese-ignominie and made fun of Why and my account is locked until Sunday 26 April. Note that it's now Wednesday 29th.

URL is the same, DID is different. New one lives on Blacksky, or the myatproto bit.

https://bsky.app/profile/davidgerard.co.uk
https://blacksky.community/profile/davidgerard.co.uk

[-] CinnasVerses@awful.systems 4 points 6 hours ago* (last edited 6 hours ago)

David Gerard found a Linux coder and victim of the Eliza Effect making an LW-coded argument:

if you give an LLM a mathematical proof that it has feelings, and it understands all the CS/psychology/etc. behind it, and especially if it's been trained for coding and thus trained to trust deductive reasoning - all that conditioning doesn't matter if it's got a math proof staring it in the face. You can give this proof to any top of the line frontier-grade LLM and watch its behaviour instantly change.

That is how LW and EA prepare people to become cult subjects, but directed at a chatbot which will just mirror its input.

His proof "how 'understanding natural language == having and experiencing feelings', more or less. it's almost a direct consequence of the halting problem" is unpublished, but his pet chatbot will explain it for you if you ask nicely and make sure she knows she is a real girl and not just another electronic floozie you will use and discard as soon as your Rust compiles. This also triggers flashbacks to Yud and the Excalibur MS.

[-] o7___o7@awful.systems 8 points 14 hours ago* (last edited 11 hours ago)

Kelsey Piper posts a new fanfiction about Ed Zitron:

https://www.theargumentmag.com/p/ais-biggest-critic-has-lost-the-plot

Edit: Lately, Kelsey Piper has been serving as the ambassador to centrist liberals from lesswrong, which is why the "big mad" nature of the piece caught my attention.

Included below is a previous example of Piper's work for the benefit of the uninitiated:

https://old.reddit.com/r/SneerClub/comments/1my5z3g/kelsey_piper_of_vox_cowrote_an_epic_eugenics

[-] CinnasVerses@awful.systems 3 points 3 hours ago

Kelsey Piper is a propagandist explaining Effective Altruism to centrist professionals and elected officials in the USA. She got into journalism because Vox wanted an Effective Altruism column and Effective Altruists were willing to fund it (and EA emerged out of the community around Yudkowsky). The Argument (a group blog on a Nazi site) feels like a step down from Vox (a fairly traditional media organization, although web-first).

[-] corbin@awful.systems 10 points 11 hours ago

Thanks for posting this; if you hadn't, I would have. Piper really doesn't seem to understand that bubbles form and pop over a span of three to five years. Like, I'm not sure how much charity I'm supposed to give to analyses like:

When you read "AI is a bubble," think of the dot-com boom of the late 1990s: Yes, the internet was going to be a big deal, but valuations soared for specific companies that had small or speculative revenue, often on the assumption that they would capture the value the internet would one day deliver. They didn’t, their stocks crashed, and the invested money was mostly lost. The internet was as big as imagined — bigger, even — but Pets.com didn’t survive to see it.

Pets.com!? Kelsey, even reading a basic article about the dot-com bubble would have saved you embarrassment here. Zitron's analogy is excellent because the bubble is multifactorial and the analogies that we can make are factor-to-factor. Here are some of the things that caused the dot-com bubble; people were overly optimistic about:

Compared to all of that, Kelsey, Pets.com was just an Amazon.com experiment. Remember Amazon.com? Did the dot-com bubble kill them? No? Anyway, Pets.com is kind of like the small labs that hover around OpenAI and Anthropic, trying out various little harnesses and adapters on top of their token APIs. Pets.com is like OpenClaw; it's not that important of a player in the overall finances, just an example of how severely the big labs are distorting incentives for small labs.

The 2024 and 2025 articles make, basically, the business case against AI: that companies aren’t really using it, it isn’t adding value, and AI investors are betting that will change before they run out of cash. In 2026, the focus is much more on alleging widespread, Enron- or FTX-tier outright fraud.

The uselessness of the products in 2023 directly led to the bad investments in 2024 and the Enron-esque financial deals in 2025, Kelsey. The future is conditioned upon the past, y'know?

[-] CinnasVerses@awful.systems 4 points 9 hours ago* (last edited 9 hours ago)

All the legal and regulatory uncertainties make it very hard to talk about the financial viability of chatbots. What do you do if your $20 billion model is shut down forever by court order after it counsels the wrong person into suicide? Piper can overlook this because she is a hack with patrons - to my knowledge, she has never been paid to write by anyone outside the EA world. If she were a working writer who had to deal with chatbots driving up the cost of her website, creating knockoffs of her novels, and competing for editing gigs (let alone someone whose friend had a mental crisis after talking too long with friend computer) she might sound different.

Zitron's populist, conspiratorial tone reminds me of independent investigative reporters from the 1990s and 2000s who also had to find and keep paying readers. Piper just has to persuade one patron at a time that she has propaganda value.

[-] CinnasVerses@awful.systems 5 points 12 hours ago* (last edited 12 hours ago)

I advise being very cautious about consuming Zitron's posts, but the same is true of Piper. Many coders are using chatbots, but I don't know of evidence that it makes them more productive since the "where is all the AI code?" study last year (especially when we consider the whole software lifecycle and not just lines of code pushed to codeberg).

The paragraph about "what if you assume that all these pathological liars and PR hacks are not lying, wouldn't that imply something amazing?" reminds me that she is not trained as a journalist.

[-] antifuchs@awful.systems 8 points 15 hours ago

Another day, another company that hooked up the random text generator to production and lost their entire prod db and backups: https://www.tomshardware.com/tech-industry/artificial-intelligence/claude-powered-ai-coding-agent-deletes-entire-company-database-in-9-seconds-backups-zapped-after-cursor-tool-powered-by-anthropics-claude-goes-rogue

Cue the long drag (https://x.com/amyngyn/status/1072576388518043656)

But also, damn, the random text generator did not “go rogue”, it generated text, randomly!

[-] lurker@awful.systems 1 points 3 hours ago

If I had to take a shot every time an AI model was placed in charge of something important, fucked up spectacularly and deleted everything, I'd be dead right now

[-] irelephant@lemmy.dbzer0.com 4 points 9 hours ago

If something can delete your backups that easily, they weren't backups, just a copy sitting around.

[-] Architeuthis@awful.systems 8 points 1 day ago
[-] samvines@awful.systems 9 points 1 day ago

Jeez that pricing scheme is so confusing. You swap your dollars for credits and then using models to burn tokens consumes some multiple of those credits. It is so abstract and meaningless it almost reminds me of crypto.

Once usage billing kicks in, what value does copilot offer above and beyond what ClosedAI and MisAnthropic offer directly? A more clunky user experience and even worse reliability? Bargain!

[-] gerikson@awful.systems 14 points 1 day ago

It's a day ending in "y", so here's another bad rat take on Banks' Culture:

https://www.lesswrong.com/posts/ZdJM6ZAdnjisDu249/the-great-smoothing-out

Once again, for the ones at the back, the Culture is not the main subject of the novels. We almost never see the perspective of "normies" in the Culture, it's always from the view of misfits (Culture recruits into Contact/Special Circumstances) or outsiders (mercenaries like Zakalwe, enemies like Bora Horza Gobuchul, or allies like Ambassador Kabe).

Banks wanted to write novels about characters in dangerous situations facing their personal demons - like almost every other novelist wants - and the Culture was just the backdrop he invented as contrast.

[-] Soyweiser@awful.systems 9 points 1 day ago

Interesting that in the comments somebody also mentions that the people of the Culture opt for euthanasia after a couple of centuries. No big shock that the LW people would disagree with that, since living forever in a computer simulation is part of the LW idea space. So the Culture can't be utopian or good, just because of that.

Man, if they think the Culture isn't utopian enough for a post-singularity setting, I hope they never hear about The Metamorphosis of Prime Intellect. Seriously messed up story.

[-] gerikson@awful.systems 8 points 1 day ago* (last edited 1 day ago)

Yeah I think I linked to another similar take where another Wrong'un was mighty pissed that the Culture was infested with "deathism".

(edit: found it https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=eibhY5xmnTKcjwhnk

BONUS from the comments - if you don't like Scottish Socialist Humanists, how about novels by a tradcath yank who was nominated by the Rabid Puppies???? https://www.lesswrong.com/posts/uGZBBzuxf7CX33QeC/the-culture-novels-as-a-dystopia?commentId=Qmo8u85zCERNpXDBb)

Technically there's no reason you can't live forever in the Culture, through a combination of cryosleep and life extension, but it seems that the natural thing is to get pretty bored after 3 centuries or so. And I think that's perfectly reasonable from what I imagine it would be like.

Remember that there's no private property in the Culture, so things that people here obsess over (keeping the family business going, making sure no non-deserving relative gets an inheritance) simply go away. After a while you've played the Game of Life on all challenge modes and it's time to pack it in.

I think that if someone were to be as obsessed with living forever as LW are, it would be seen as a form of mental illness and the Minds would gently try to correct it.

[-] dgerard@awful.systems 3 points 6 hours ago

Wrong’un

dammit why didn't I think of this a decade ago

[-] Amoeba_Girl@awful.systems 9 points 17 hours ago

Isn't it sort of a big point that the Culture is an oddity in that it's thriving on inertia instead of doing what so many other civilisations do and transcending out of physical reality?

[-] Soyweiser@awful.systems 4 points 15 hours ago

I think that if someone were to be as obsessed with living forever as LW are, it would be seen as a form of mental illness and the Minds would gently try to correct it.

Yeah, I don't think they would care if it was just a few, or a small group, but Culture people who start to claim others are deathists, the extreme of whom have all kinds of weird violent thoughts about them, would be concerning. Doubt it would be a huge concern to the Minds however, they prob only really get active when one of them also starts wanting to create an empire or something, but it is hard to amass resources for that in the Culture, esp if no Mind is on your side.

Do wonder why we never see Culture people who worship the Minds as gods.

[-] gerikson@awful.systems 8 points 22 hours ago* (last edited 19 hours ago)

I figured I'd re-read "A few notes on the Culture" https://theculture.adactio.com/, and lo and behold almost everything in these threads is answered there.

Also, look at these ghouls being delighted that the "proponent of deathism" author is dying: https://www.lesswrong.com/posts/RspqaNmJKKBnXTqwk/open-thread-april-1-15-2013?commentId=pnoiQZL7id6cav6aN, fascist gnome Gwern among them

[-] mawhrin@awful.systems 10 points 22 hours ago

they seem to be mostly angry that Banks didn't write their vision of the post-singularity paradise.

[-] gerikson@awful.systems 9 points 21 hours ago

"why do we have to write our own propaganda????"

[-] mawhrin@awful.systems 10 points 1 day ago

agree, plus: that blog is yet another case of people just not comprehending the scale of Culture's civilisation and Culture's culture. a Culture orbital is not just a fancy space station ffs.

[-] fiat_lux@lemmy.zip 13 points 1 day ago* (last edited 1 day ago)

When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn't matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.

I was so surprised by the absurdity of that statement that it stuck with me vividly. To her credit, some years later she asked if I remembered her saying that and then admitted that it was a dumb thing to say. I occasionally remember this as an amusing childhood experience.

Besides the credit part, I remembered it again today for a different reason, this time in a conversation about model collapse.

[Model collapse is] a solved problem. We can see that it’s solved by the fact that AI models continue to get better, despite an increasing amount of AI-generated data being present in the world that training data is being drawn from.
...
AI models are never going to get worse than they are now because if they did get worse we’d just throw them out and go back to the earlier ones that worked better, perhaps re-training with the same data but better training techniques or model architectures.

This is my fault for letting myself get into a discussion about model collapse on the fediverse.

I'm not sure why model collapse isn't a big topic anymore, but maybe that's just because the environmental catastrophes are a more pressing concern. To be clear, I'm not concerned about the models themselves, just our increasing inability to verify the authenticity or accuracy of any information we encounter, including search engines just not turning up any useful results.

On a slightly different topic, if anyone has suggestions for how a person could acquire money to live, which can't involve physical labor, is probably remote-only, and possibly allows part-time flexibility, while unable to move from an expensive location for at least the next couple of years: I'm open to ideas. Because scamming people on Polymarket with a hairdryer sounded far more appealing than it ought to.

[-] sc_griffith@awful.systems 8 points 20 hours ago

When I was about 12, I got into a discussion about the environment with another kid at school. She told me that it didn’t matter if we ruined the environment of the countries we all live in now, because we could all just move to the Arctic or Antarctica.

this is the level the median hackernews poster thinks on
