Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] scruiser@awful.systems 7 points 5 hours ago

Lesswronger notices that all of the rationalists' attempts at making an "aligned" AI company keep failing: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate

Notably, the author doesn't realize Capitalism is the root problem misaligning the incentives, and it takes a comment directly pointing it out for them to get as far as noticing a link to the cycle of enshittification.

Others were alarmed and advocated internally against scaling large language models. But these were not AGI safety researchers, but critical AI researchers, like Dr. Timnit Gebru.

Here we see rationalists approaching dangerously close to self-awareness and recognizing their whole concept of "AI safety" as marketing copy.

[-] swlabr@awful.systems 8 points 3 hours ago
>50 min read  
>”why company has perverse incentives”
>no mention of capitalism

rationalism.mpeg

[-] fullsquare@awful.systems 5 points 11 hours ago
[-] froztbyte@awful.systems 4 points 8 hours ago

ah yes, that great mark of certainty and product security, when you have to unleash pitbulls to patrol the completely not dangerous park that everyone can totally feel at ease in

(and of course I bet the damn play is a resource exhaustion attack on critics, isn’t it)

I don't think it's a resource exhaustion attack so much as a combination of legitimate paranoia (the consequence of a worldview where only billionaires are capable of actual agency) and an attempt to impose that worldview on reality by reverse-astroturfing any opposition, tying it to other billionaire AI bros.

[-] BigMuffN69@awful.systems 5 points 11 hours ago* (last edited 11 hours ago)

Great piece on previous hype waves by P. Ball

https://aeon.co/essays/no-suffering-no-death-no-limits-the-nanobots-pipe-dream

It’s sad, my “thoroughly researched” “paper” greygoo-2027 just doesn’t seem to have that viral x-factor that lands me exclusive interviews w/ the Times 🫠

[-] scruiser@awful.systems 5 points 5 hours ago

Putting this into the current context of LLMs... Given how Eliezer still repeats the "diamondoid bacteria" line in his AI-doom scenarios, even multiple decades after Drexler was thoroughly debunked (while also slightly contributing to inspiring real science), I bet memes of LLM-AGI doom and utopia will last long after the LLM bubble pops.

[-] Soyweiser@awful.systems 5 points 8 hours ago

Indeed great piece, good to document the older history of that stuff as well.

[-] bitofhope@awful.systems 6 points 12 hours ago

Creator of NaCl publishes something even saltier.

"Am I being detained?" I scream as IETF politely asks me to stop throwing a tantrum over the concept of having moderation policy.

[-] BasiqueEvangelist@awful.systems 3 points 9 hours ago

Does somebody have a rundown or something on DJB? All the tantrum-throwing has me confused about what his deal is.

[-] froztbyte@awful.systems 4 points 8 hours ago

noted for advancements in cryptography, and “stayed impartial” (iirc not quite defending, but also not acknowledging nor distancing) when the jacob appelbaum shit hit wider knowledge

probably about all you need to know in a nutshell

the most recent shit before this that I recall seeing his name pop up over was him causing a slapfight around Kyber (ML-KEM) in the cryptography spaces, but I don’t have links at hand

[-] BlueMonday1984@awful.systems 4 points 14 hours ago

New Baldur Bjarnason: The melancholy of history rhyming, comparing the AI bubble with the Icelandic banking bubble, and talking about the impending fallout of its burst.

[-] corbin@awful.systems 4 points 13 hours ago

What is the Range Rover in this analogy? A common belief about the 2008 Iceland bubble, which may very well not be true but was widely reported, is that Iceland's credit was used to buy luxuries like high-end imported cars; when the bubble burst, many folks supposedly committed insurance fraud by deliberately destroying their own cars which they could no longer afford to finance. (I might suggest that credit bubbles are fundamentally distinct from investment bubbles.)

[-] BlueMonday1984@awful.systems 4 points 12 hours ago

By my guess, the servers and datacentres powering the LLMs will end up as the AI bubble's Range Rover equivalent - they're obscenely expensive for AI corps to build and operate, and are practically impossible to finance without VC billions. Once the bubble bursts and the billions stop rolling in, I expect the servers to be sold off for parts and the datacentres to be abandoned.

[-] lagrangeinterpolator@awful.systems 5 points 17 hours ago* (last edited 17 hours ago)

From the ChatGPT subreddit: Gemini offers to pay me for a developer to fix its mess

Who exactly pays for it? Google? Or does Google send one of their interns to fix the code? Maybe Gemini does have its own bank account. Wow, I really haven't been keeping up with these advances in agentic AI.

[-] fullsquare@awful.systems 8 points 15 hours ago

it's almost as funny as when one time chatbot told vibecoder to learn to code

[-] Soyweiser@awful.systems 3 points 16 hours ago

Out: 'getting paid in exposure'

In: 'when you are done, just send your invoice to chatgpt'

[-] swlabr@awful.systems 7 points 21 hours ago

Kind of tangential to the sneer sphere. TIL about the Gayfemboy malware

[-] e8d79@discuss.tchncs.de 7 points 20 hours ago* (last edited 20 hours ago)

The trigger for activating the backdoor in Gayfemboy is the character string "meowmeow".

Whisper meow meow to your femboy to get access to his backdoor... is this malware the blackhat equivalent of a shitpost?

[-] Soyweiser@awful.systems 5 points 19 hours ago

Such sophisticated methods, and then they drop crypto miners.

[-] BlueMonday1984@awful.systems 2 points 16 hours ago
[-] JFranek@awful.systems 7 points 22 hours ago

Shamelessly posting a link to my skeet thread (skeet trail?) on my experience with a (mandatory) AI chatbot workshop. Nothing that will surprise regulars here too much, but if you want to share the pain...

https://bsky.app/profile/jfranek.bsky.social/post/3lxtdvr4xyc2q

[-] bitofhope@awful.systems 3 points 14 hours ago

I love it giving the temperature in Europe. Down to a decimal, even.

[-] YourNetworkIsHaunted@awful.systems 3 points 21 hours ago

The blatant covering for the confabulated zip code is some peak boosterism. It knows what an address looks like and that some kind of postal code has to go there, and while it was pretty close I would still expect that to get returned to sender. Pretty close isn't good enough.

[-] JFranek@awful.systems 3 points 19 hours ago

Yeah, didn't even cross their mind that it could be wrong, because it looked ok.

[-] froztbyte@awful.systems 6 points 23 hours ago

in what seems to be a very popular theme of "maybe we can just live off defense money" for tech outfits, oura is planning to manufacture in texas for simping to the DoD

I'm struggling to sneer it, it's so fucking absurd

[-] DonPiano@feddit.org 9 points 1 day ago

Kind of generic: I am a researcher and recently started a third-party-funded project, so I won't teach for a while. I kinda dread what garbage fire I'll return to in a couple of years when I teach again, and how much AI slop will have become established among both teachers and students.

[-] o7___o7@awful.systems 8 points 1 day ago* (last edited 1 day ago)

DragonCon drops the ban hammer on a slop slinger. There was much rejoicing.

https://old.reddit.com/r/dragoncon/comments/1n5r2eu/a_warm_heartfelt_goodbye_to_a10_the_ai_artist_who/

Btw, the vibes were absolutely marvelous this year.

Edit: a shrine was built to shame the perpetrator

https://old.reddit.com/r/dragoncon/comments/1n60s10/to_shame_that_ai_stand_in_artist_alley_people/

[-] yetanotherduc@awful.systems 12 points 1 day ago

As a CS student, I wonder why we and artists are always the ones attacked the most whenever some new "insert tech stuff" comes out. And everyone's like: HOLY SHIT, PROGRAMMERS AND ARTISTS ARE DEAD, without realizing that most of these things are way too crappy to actually be... good enough to replace us?

[-] shapeofquanta@lemmy.vg 9 points 1 day ago

My guess would be because most people don’t understand what you all actually do so gen AI output looks to them like their impression of the work you do. Just look at the game studios replacing concept artists with Midjourney, not grasping what concept art even is for and screwing up everyone’s workflow as a result.

I’m neither a programmer nor an artist, so I can sorta understand how people get fooled. Show me a snippet of nonsense code or a nonsense image and I’ll nod along if you say it’s good. But as a writer (even if only a hobbyist) I am able to see how godawful gen AI writing is, whereas some non-writers won’t, and so I extrapolate from that: since it’s not good at the thing I have domain expertise in, it probably isn’t good at the things I don’t understand.

[-] stormeuh@lemmy.world 6 points 21 hours ago

I feel like aggressive proponents of genAI act that way because they are intimidated by and/or jealous of the people they say they will replace. They lack the skills and critical thinking to become good at the tasks they want to automate, but are also unwilling or unable to put in the work.

Instead of reckoning with this, they construct a phantasm where artists are "gatekeeping art", and genAI is going to disrupt that gatekeeping. Meanwhile I think deep down they know that what genAI produces is derivative by definition, and not "real art" by any means.

[-] swlabr@awful.systems 7 points 1 day ago

Show me a snippet of nonsense code or image and I’ll nod along if you say it’s good.

Smirk I’m in.

[-] shapeofquanta@lemmy.vg 3 points 18 hours ago

Look mom, I'm vibe-coding a SaaS!

[-] V0ldek@awful.systems 9 points 1 day ago

I was thinking about ethics ~~in game journalism~~ in software engineering and I think it might be easier to create a whitelist than a blacklist:

What are some serious software/hardware companies that have NOT participated in the AI bubble? No AI nonsense in their marketing slides, no mentions of AI on their landing page, etc.

[-] nightsky@awful.systems 7 points 1 day ago

Procreate, a digital painting software for iPads, has not just avoided AI, they even have an anti-GenAI statement on their website.

[-] BlueMonday1984@awful.systems 3 points 1 day ago

I was planning to mention Procreate as well, but felt like that'd be spamming the replies a bit.

On a wider note, I expect it'll be primarily art-related software/hardware companies that will have avoided AI participation - with how utterly artists have rejected the usage of AI, and resisted its intrusion into their spaces, the companies working with them likely view rejecting AI as an easy way of earning good PR with their users, and embracing it as a business liability at best, and a one-way trip past the trust thermocline at worst.

[-] BlueMonday1984@awful.systems 7 points 1 day ago

Gonna cheat a little bit and put one-woman consultancy firm/personal blog deadSimpleTech up as an example. The sole member is Iris Meredith, whose involvement with AI begins and ends at publicly lambasting its continued shittiness.

[-] BlueMonday1984@awful.systems 8 points 1 day ago

Where the fuck has that guy been for 20 years? I've seen that happen many times with junior programmers during my 20 years of experience.

[-] froztbyte@awful.systems 8 points 1 day ago

also from a number of devs who went borderline malicious compliance in "adopting tdd/testing" but didn't really grok the assignment

At a recent job, I definitely saw malicious compliance/incompetence when it came to writing tests. My team and I would work hard to retrofit tests onto older functionality, while adjacent teams, if they bothered to write tests at all, would avoid testing anything of consequence.

this post was submitted on 31 Aug 2025

TechTakes