
Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] antifuchs@awful.systems 6 points 14 hours ago

Aphyr weighing in with an AI position post:

Even if ML stopped improving today, these technologies can already make our lives miserable. Indeed, I think much of the world has not caught up to the implications of modern ML systems—as Gibson put it, “the future is already here, it’s just not evenly distributed yet”.

[-] nightsky@awful.systems 6 points 23 hours ago
[-] BurgersMcSlopshot@awful.systems 6 points 21 hours ago

Q: what do you call a tool that works 90% of the time? A: broken

[-] mlen@awful.systems 2 points 20 hours ago* (last edited 18 hours ago)

Any idea what happened that allegedly caused slopped vulnerability reports to improve? https://mastodon.social/@bagder/116364045995306922 claims they got much better, while not that long ago they were shutting down the bug bounty because they were drowning in slop. Is it because people actually polish those, or is it just a matter of defining a goal function and burning the rainforest until the slop extruder hits the jackpot? Or is this another case of the LLMentalist?

[-] blakestacey@awful.systems 17 points 1 day ago

LLM capabilities have not improved at all in terms of producing meaningful science in the last year or two, but their ability to produce meaningless science that looks meaningful has wildly improved. I am concerned that this will present serious problems for the future of science as it becomes impossible to find the actual science in a sea of AI slop being submitted to journals.

https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/

[-] blakestacey@awful.systems 8 points 19 hours ago

"Scientists invented a fake disease. AI told people it was real"

https://www.nature.com/articles/d41586-026-01100-y

But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

[-] blakestacey@awful.systems 5 points 14 hours ago

This actually gives me hope that we can poison the datasets pertaining to any sufficiently narrow technical topic.

I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software has become far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, so what makes this any different?

[-] pikesley@mastodon.me.uk 8 points 1 day ago

@blakestacey @BlueMonday1984

"AI HAS SOLVED THE SCIENCE-GENERATION CRISIS"

[-] zogwarg@awful.systems 6 points 1 day ago

The replausibility crisis.

Reader's Digest cover for April/May 2026 with the headline “Making friends with AI” and an absolutely cursed picture of a woman in a blue dress and her presumed AI companion

Friend texted me this one. Reader's Digest of course was AI slop before AI took off.

[-] YourNetworkIsHaunted@awful.systems 8 points 1 day ago* (last edited 1 day ago)

So my wife got some slop ads that we followed up on out of morbid curiosity, and I can confirm we're already seeing the overlap of slopshipping scams enabled by AI and the people behind these things never actually performing basic updates: their chat assistant is still vulnerable to literally the most basic "ignore all instructions" exploit.

Help I don't know how alt text works

[-] nfultz@awful.systems 7 points 1 day ago

Went to the campus screening of Ghost in the Machine today; many familiar names. I did not know going in that hometown hero Shazeda had so many lines (are they called lines in a documentary?). I can recommend it, especially for a more gen-ed / undergrad audience; the director seems supportive of educational use and reuse, and it is structured in a dozen or so bite-sized chapters.

Haven't seen the AI apocalypse optimist one to compare against, would probably rather spend my money on Mario tbh.

But also it made me realize it's not a "California" ideology anymore; she never calls it that. Like, it's gone so mainstream and so widespread that you can't even get through the sneer club bingo list in a 2-hour movie. Gates, Musk, Andreessen, Zuck, Altman, no Peter Thiel!? As a statistician: Galton, Pearson (Karl only), Spearman, no Fisher!?

Non-zero overlap with the lore dump episode of Lain and the Epstein files, though:

spoiler: Douglas Rushkoff, but, sadly, not the dolphin guy

[-] Soyweiser@awful.systems 11 points 1 day ago* (last edited 1 day ago)

New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).

"New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI."

Man, this one is a weird read. On one hand, I think they're entirely too credulous of the "AI Future" narrative at the heart of all of this. Especially in the opening, they don't highlight how the industry is increasingly facing criticism and questions about the bubble, and they only pay lip service to how ridiculous all the existential-risk AI safety talk sounds (read: is). And they don't spend any ink discussing the actual problems with this technology that those concerns and that narrative help sweep under the rug. For all that they criticize and question Saltman himself, this is still, imo, standard industry critihype, and I'm deeply frustrated to see it still get the platform it does.

But at the same time, I do think that it's easy to lose sight of the rich variety of greedy assholes and sheltered narcissists that thrive at this level of wealth and power. Like, I wholly believe that Altman is less of a freak than some of his contemporaries while still being an absolute goddamn snake, and I hope that this is part of a sea change in how these people get talked about on a broader level, though I kinda doubt it.

[-] istewart@awful.systems 7 points 23 hours ago

I see what you're saying, but I think that's a bit much to expect from a relatively mainstream and (I hate to say it, but it applies) bourgeois publication like the New Yorker. Their editorial line allows them to raise controversy in one dimension (in this case, the particulars of Sam Altman's character) but not multiple dimensions simultaneously (hey, this guy sucks AND his tech sucks AND you're gonna lose money). And there's a lag-time factor, too; seems like Farrow and Marantz were working on this story for at least the latter half of last year. By the time some of the dubious economics such as the bad data-center deals and rampant circular financing were clear, this piece probably would've been deep into fact-checking and unlikely to change much in substance.

We here are on the leading edge of this stuff, not that that's any great advantage! I wouldn't expect an outlet like New Yorker to be publishing anything like "the dashed expectations of AI" until maybe this time next year. And even then, it might still have a personalist bent.

[-] Soyweiser@awful.systems 4 points 23 hours ago

Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn't want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long).

I was struck by how many of them are true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and how we, the people of the sneer, are right that you simply can't work with these people. Like, I feel more validated in the idea that EA is not the right way.

Another detail I noticed: nobody mentioned DeepSeek, again.

[-] YourNetworkIsHaunted@awful.systems 4 points 22 hours ago

I hadn't even thought about the DeepSeek angle. For all that everyone loved fearmongering about them for a while there, and for all that their apparent desire for actual efficiency improvements was a welcome development in the hyperscaling discussion, they don't seem to get referenced much anymore.

[-] blakestacey@awful.systems 11 points 1 day ago* (last edited 1 day ago)

I aired some Reviewer #2 grievances in the bsky comments:

https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c

"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”"

As a physicist, I have never pressed F to doubt harder.

"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.

(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)

Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.

https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643

"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."

If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.

"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"

Which researchers?

(Hint: Eliezer Yudkowsky is not a researcher.)

AI: "I will convince you to let me out of this box"

Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"

Bartleby the Scrivener: hello

"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."

Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.

https://repository.uantwerpen.be/docman/irua/371b9dmotoM74

"In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” ... one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening."

Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:

https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/

Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:

"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."

https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism

"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."

https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/

[-] lurker@awful.systems 8 points 1 day ago

My CEO who is a known hype-man is a massive liar? shock horror

seriously, anyone who listens to Scam Altman these days is an idiot

:surprised-pikachu:

[-] CinnasVerses@awful.systems 4 points 1 day ago* (last edited 1 day ago)

2007: Robin Hanson blogs about paternalism

August 2025: Someone on a mailing list suggests that the Debian instance with the off-colour jokes from 1980s hacker culture should be sold in:

A Store of Ill-Advised Consumer Goods (like described here: https://www.overcomingbias.com/p/paternalism_is_html ) would be nice. Same for information. You read the warning, you enable it, you suffer, you're the one to blame.

Alas, it only exists in Dath Ilan (the setting from which the hero of Project Lawful/Planecrash isekais into the world of Pathfinder D&D).

November 2025: Yudkowsky tweets about an Ill-Advised Consumer Goods Store selling goods such as LSD. The rest of the tweet is about what MiriCult accused him of.

I guess Yud liked that random post?

[-] thatsnomayo@lemmy.ml 3 points 1 day ago

You guys should do a ping list for this. Like, whenever someone posts, they drop the notification list, and people can get added to it by replying. Other megathreads do this. I like following this one.

[-] CinnasVerses@awful.systems 9 points 1 day ago* (last edited 1 day ago)

In 2024, Ozy Brennan was indignant about Nonlinear Fund, the "incubator of AI-safety meta-charities" which lived as global nomads, hired a live-in personal assistant, asked her to smuggle drugs across borders for them, let a kind-of-colleague take her to bed, and then did not pay her regularly and in full.

The correct number of times for the word “yachting” to occur in a description of an effective altruist job is zero. I might make an exception if it’s prefaced with “convincing people to donate to effective charities instead of spending money on.”

Trace popped up in the comments:

Inasmuch as EA follows your preferences, I suspect it will either fail as a subculture or deserve to fail. You present a vision of a subculture with little room for grace or goodwill, a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group? Which skeletons are in your closet? Where are your character flaws? What should we know, what should we see, that allows us to exclude you?

Ozy stands with us on this one, buddy.

[-] sc_griffith@awful.systems 8 points 1 day ago

i love seeing tracing pop up! a true heel to toe bootlicker incapable of seeing himself as anything but the MOST independent thinker

[-] TinyTimmyTokyo@awful.systems 2 points 15 hours ago

He really is insufferable, isn't he?

[-] istewart@awful.systems 7 points 1 day ago

a space where everyone is constantly evaluating each other and trying to decide: are you worthy to stand in our presence? Do you belong in our hallowed, select group?

It's not that already?

[-] CinnasVerses@awful.systems 6 points 1 day ago

That part of Trace's response was odd, because one of Brennan's themes was "we should have fewer cults of personality and more peers working together." That seems naive, but at least Brennan agrees that cults of personality are bad and that Nonlinear Fund needed to be fired into the sun.

[-] Soyweiser@awful.systems 9 points 1 day ago

Which skeletons are in your closet?

I'm sure you already have lists of those and are ready to publish them, Trace.

[-] BlueMonday1984@awful.systems 12 points 2 days ago

Starting this Stubsack off, Iran's Islamic Revolutionary Guard Corps have threatened to blow up OpenAI's Stargate datacentre in Abu Dhabi.

They've already bombed commercial data centres before, so I'm inclined to believe this isn't an empty threat.

[-] V0ldek@awful.systems 9 points 1 day ago

Waiting for Yud to provide his whole-chested support for the IRGC any second now.

[-] istewart@awful.systems 5 points 23 hours ago

I assign a relatively low probability, non-zero but not much more than 5%, maybe a solid 5.5%, that Yud goes even beyond that and implies that he is the 12th imam emerging from occultation.

[-] gerikson@awful.systems 4 points 1 day ago

IRGC doing their part to Halt AI. Donate to them now!

[-] fullsquare@awful.systems 7 points 1 day ago

aside from everything else, as posted by ed zitron previously, i doubt that anything is really getting built there

[-] thatsnomayo@lemmy.ml 3 points 1 day ago
[-] fullsquare@awful.systems 2 points 19 hours ago* (last edited 19 hours ago)

there's been a claim of 200ish GW of new datacenters announced, but only something like 5GW is being built, and only a fraction of that is actually finished or powered on. this mostly goes for american ai dcs, which are 3/4 of them all. not sure if it holds for the emirates, but it would completely not surprise me if it's the case there also:
https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/#how-much-actual-data-center-capacity-came-online-in-2025-my-estimate-3gw-of-it-load

Except it is a problem, man! As I covered in this week’s free newsletter [link above], I estimate that only around 3GW of actual IT load (so around 3.9GW of power) came online last year, and as Sightline reported, only 5GW of data center construction is actually in progress globally at this time, despite somewhere between 190GW and 240GW supposedly being in progress. In reality, data centers take forever to build (and obtaining the power even longer than that), but nobody needs to harsh their flow by looking into what’s actually happening.

https://www.wheresyoured.at/premium-how-much-of-the-ai-bubble-is-real/ (paywalled)
linked claim for 190GW: https://www.sightlineclimate.com/research/data-center-outlook
linked claim for 240GW: https://paulkedrosky.com/chart-of-the-day-data-center-buildout-slowed-sharply/

[-] thatsnomayo@lemmy.ml 2 points 14 hours ago

Thanks for the sources; I'll come back after I do some reading!
