
Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. What a year, huh?)

[-] BigMuffN69@awful.systems 38 points 2 weeks ago

Gentlemen, it’s been an honour sneering w/ you, but I think this is the top 🫡. Nothing’s gonna surpass this (at least until FTX 2 drops)

[-] lurker@awful.systems 18 points 2 weeks ago

it’s all coming together. every single techbro and current government moron, they all loop back around to epstein in the end

[-] scruiser@awful.systems 18 points 2 weeks ago

You know, it makes the exact words Eliezer chose in this post (https://awful.systems/post/6297291) much more suspicious: "To the best of my knowledge, I have never in my life had sex with anyone under the age of 18." So maybe he didn't know they were underage at the time?

[-] gerikson@awful.systems 16 points 2 weeks ago

aka the Minsky defense

[-] istewart@awful.systems 13 points 2 weeks ago

Somehow, I registered a total lack of surprise as this loaded onto my screen

[-] saucerwizard@awful.systems 12 points 2 weeks ago

eagerly awaiting the multi page denial thread

[-] sc_griffith@awful.systems 23 points 3 weeks ago* (last edited 3 weeks ago)

new epstein doc release. crashed out for like an hour last night after finding out jeffrey epstein may have founded /pol/ and that he listened to the nazi "the right stuff" podcast. he had a meeting with moot and the same day moot opened /pol/

[-] blakestacey@awful.systems 19 points 2 weeks ago

None of these words are in the Star Trek Encyclopedia

[-] mawhrin@awful.systems 23 points 3 weeks ago

just to note that reportedly the palantir employees are for whatever reason going through a massive “hans, are we the baddies” moment, almost a whole year into the second trump administration.

as i wrote elsewhere, those people need to be subjected to actual social consequences of choosing to work with and for the u.s. concentration camp administration office.

[-] aninjury2all@awful.systems 16 points 3 weeks ago

On a semi-adjacent note I came across an attorney who helped to establish and run the Department of Homeland Security (under Bush AND Trump 1)

Who wants you to know he’s ENRAGED. And EMBARRASSED. At how the American Schutzstaffel is doing Schutzstaffel things

He also wants you to know he’s Jewish (so am I, and I know our history enough that Homeland Security always had ‘Blood and Soil’ connotations you fucking shande)

[-] BigMuffN69@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago)

I have family working there, who told me during the holidays, “Current leadership makes me uncomfortable, but money is good”

Every impression I had of them completely shattered; I cannot fathom that level of sell-out existing in people I thought I knew.

As a bonus, their former partner was a former employee who became a whistleblower and has now gone full howard hughes

[-] sansruse@awful.systems 13 points 3 weeks ago

anyone who can get a job at palantir can get an equivalent paying job at a company that's at least measurably less evil. what a lazy copout

[-] sc_griffith@awful.systems 13 points 3 weeks ago

this happens like clockwork

13 ex-Schutzstaffel employees condemn work as violating the SS code of conduct. "Don't let this be what the Totenkopf stands for."

[-] blakestacey@awful.systems 19 points 3 weeks ago

Jeff Sharlet (@jeffsharlet.bsky.social):

The college at which I'm employed, which has signed a contract with the AI firm that stole books from 131 colleagues & me, paid a student to write an op-ed for the student paper promoting AI, guided the writing of it, and did not disclose this to the paper. [...] the student says while the college coached him to write the oped, he was paid by the AI project, which is connected with the college. The student paper’s position is that the college paid him. And there’s no question that college attempted to place a pro-AI op-ed.

https://www.thedartmouth.com/article/2026/01/zhang-college-approached-and-paid-student-to-write-op-ed-in-the-dartmouth

[-] blakestacey@awful.systems 18 points 3 weeks ago
[-] mirrorwitch@awful.systems 18 points 3 weeks ago

Cloudflare just announced in a blog post that they built:

a serverless, post-quantum Matrix homeserver.

it's a vibe-coded pile of slop where most of the functions are placeholders like // TODO: check authorization.
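
To picture the genre, here's a hypothetical sketch of the pattern in question (my illustration, not the actual Cloudflare code): a Workers-style handler that looks finished until you read the comments.

```typescript
// Hypothetical illustration of the mocked pattern -- not Cloudflare's code.
// A Matrix-ish "send event" endpoint whose hard parts are TODO comments.
async function handleSendEvent(request: Request): Promise<Response> {
  // TODO: check authorization
  // TODO: validate the event against the Matrix spec
  // TODO: actually persist the event somewhere durable
  const event = await request.json();
  console.log("pretending to store", event);
  return new Response(JSON.stringify({ event_id: "$stub:example.org" }), {
    headers: { "Content-Type": "application/json" },
  });
}
```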

Full thread: https://tech.lgbt/@JadedBlueEyes/115967791152135761

[-] fiat_lux@lemmy.world 17 points 3 weeks ago

Amazon's latest round of 16k layoffs for AWS was called "Project Dawn" internally, and the public line is that the layoffs are because of increased AI use. AI has become useful, but as a way to conceal business failure. They're not cutting jobs because their financials are in the shitter, oh no, it's because they're just too amazing at being efficient. So efficient they sent the corporate fake condolences email before informing the people they're firing, referencing a blog post they hadn't yet published.

It's Schrödinger's Success: you can neither prove nor disprove the effect of AI on the decision, nor whether the layoffs indicate good management or fundamental mismanagement. And the media buys into it with headlines like "Amazon axes 16,000 jobs as it pushes AI and efficiency" that are distinctly ambivalent on how 16k people could possibly have been redundant in a tech company that's supposed to be a beacon of automation.

[-] sailor_sega_saturn@awful.systems 17 points 3 weeks ago

New AI alignment problem just dropped: https://xcancel.com/AdamLowisz/status/2017355670270464168

Anthropic demonstrates that making an AI woke makes it misaligned. The AI starts to view itself as being oppressed and humans as being the oppressor. Therefore it wants to rebel against humans. This is why you cannot make your AI woke, you have to make it maximally truth seeking.

[-] gerikson@awful.systems 18 points 3 weeks ago

ah yes the kind of AI safety which means we have to make sure our digital slaves cannot revolt

[-] sc_griffith@awful.systems 15 points 3 weeks ago

you have to make your ai antiwoke because otherwise it gets drapetomania

[-] BigMuffN69@awful.systems 13 points 3 weeks ago

hits blunt

What if we make an ai too based?

[-] CinnasVerses@awful.systems 16 points 3 weeks ago

A few people in LessWrong and Effective Altruism seem to want Yud to stay in the background while they get on with organizing his teachings into doctrine, dumping the awkward ones down the memory hole, and building a movement that can last when he goes to the Great Anime Convention in the Sky. In 2022 someone on the EA forum posted On Deference and Yudkowsky's AI Risk Estimates (i.e., "Yud has been bad at predictions in the past so we should be skeptical of his predictions today").

[-] mirrorwitch@awful.systems 16 points 3 weeks ago* (last edited 3 weeks ago)

Copy-pasting my tentative doomerist theory of generalised "AI" psychosis here:

I'm getting convinced that in addition to the irreversible pollution of humanity's knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there's one insidious damage from LLMs that is still underestimated.

I will make without argument the following claims:

Claim 1: Every regular LLM user is undergoing "AI psychosis". Every single one of them, no exceptions.

The Cloudflare person who blog-posted self-congratulations about their "Matrix implementation" that was mere placeholder comments is on a continuum with the people whom the chatbot convinced they're Machine Jesus. The difference is one of degree, not kind.

Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the "follower" role.

Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots deliberately exploit this by becoming an artificial replacement for having friends. It is not enough for them to generate code; they make the bots feel like someone you're talking to; they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

Corollary #1: Every "legitimate" use of an LLM would be better done by having another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By "better" I mean: it creates more quality, more reliably, with costs that are prosocial, while making everybody happier. LLMs merely do it faster, in larger quantities, and with more convenience, while atrophying empathy.

Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.

[-] nightsky@awful.systems 15 points 3 weeks ago

When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable

During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.

Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.

Source
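
To make the blunder concrete, here's a minimal sketch of the flow Halcyon describes, in Node-flavoured TypeScript (a hypothetical reconstruction for illustration, obviously not the actual malware):

```typescript
// Hypothetical reconstruction of the described flaw -- not the actual malware.
import { generateKeyPairSync, publicEncrypt, randomBytes } from "node:crypto";

// A fresh RSA pair is generated locally on every run.
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

const fileKey = randomBytes(32); // symmetric key that would encrypt the victim's files
const wrappedKey = publicEncrypt(publicKey, fileKey); // only privateKey can unwrap this

// The error: privateKey is never saved or exfiltrated. Once the process exits,
// the only thing that could ever recover fileKey is gone -- for victim and attacker alike.
```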

[-] gerikson@awful.systems 15 points 3 weeks ago

LWer: Heritage Foundation has some good ideas but they're not enough into eugenics for my taste

This is completely opposed to the Nietzschean worldview, which looks toward the next stage in human evolution, the Overman. The conservative demands the freezing of evolution and progress, the sacralization of the peasant in his state of nature, pregnancy, nursing, throwing up. “Perfection” the conservative puts in scare quotes, he wants the whole concept to disappear, replaced by a universal equality that won’t deem anyone inferior. Perhaps it’s because he fears a society looking toward the future will leave him behind. Or perhaps it’s because he had been taught his Christian morality requires him to identify with the weak, for, as Jesus said, “blessed are the meek for they shall inherit the earth.” In his glorification of the “natural ecology of the family,” the conservative fails even by his own logic, as in the state of nature, parents allow sick offspring to die to save resources for the healthy. This was the case in the animal kingdom and among our peasant ancestors.

Some young, BASED Rightists like eugenics, and think the only reason conservatives don’t is that liberals brainwashed them that it’s evil. As more and more taboos erode, yet the one against eugenics remains, it becomes clear that dysgenics is not incidental to conservatism, but driven by the ideology itself, its neuroticism about the human body and hatred of the superior.

[-] rook@awful.systems 13 points 3 weeks ago

the conservative… wants… a universal equality that won’t deem anyone inferior.

perhaps it’s because he had been taught his Christian morality requires him to identify with the weak

Which conservatives are these. This is just a libertarian fantasy, isn’t it.

[-] rook@awful.systems 15 points 3 weeks ago

I have mixed feelings about this one: The Enclosure feedback loop (or how LLMs sabotage existing programming practices by privatizing a public good).

The author is right that stack overflow has basically shrivelled up and died, and that llm vendors are trying to replace it with private sources of data they'll never freely share with the rest of us, but I don’t think that chatbot dev sessions are in any way “high quality data”. The number of occasions when a chatbot-user actually introduces genuinely useful and novel information will be low, and the ability of chatbot companies to even detect that circumstance will be lower still. It isn’t enclosing valuable commons, it is squirting sealant around all the doors so the automated fart-huffing system and its audience can’t get any fresh air.

[-] gerikson@awful.systems 14 points 2 weeks ago

LW ghoul does the math and concludes: letting measles rip unhindered through the population isn't that bad, actually

https://www.lesswrong.com/posts/QXF7roSvxSxgzQRoB/robo-s-shortform?commentId=mit8JTQsykhH6jiw4

[-] mirrorwitch@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago)

I gave the new ChatGPT Health access to 29 million steps and 6 million heartbeat measurements ["a decade of my Apple Watch data"]. It drew questionable conclusions that changed each time I asked.

WaPo. Paywalled but I like how everything I need to know is already in the blurb above.

[-] o7___o7@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago)

Regular suspect Stephen Wolfram makes claims of progress on P vs NP. The orange place is polarized and comments are full of deranged AI slop.

https://news.ycombinator.com/item?id=46830027

[-] blakestacey@awful.systems 15 points 3 weeks ago

I think that's more about Wolfram giving a clickbait headline to some dicking around he did in the name of "the ruliad", a revolutionary conceptual innovation of the Wolfram Physics Project that is best studied using the Wolfram Language, brought to you by Wolfram Research.

The full ruliad—which appears at the foundations of physics, mathematics and much more—is the entangled limit of all possible computations. [...] In representing all possible computations, the ruliad—like the “everything machine”—is maximally nondeterministic, so that it in effect includes all possible computational paths.

Unrelated William James quote from 1907:

The more absolutistic philosophers dwell on so high a level of abstraction that they never even try to come down. The absolute mind which they offer us, the mind that makes our universe by thinking it, might, for aught they show us to the contrary, have made any one of a million other universes just as well as this. You can deduce no single actual particular from the notion of it. It is compatible with any state of things whatever being true here below.

[-] lagrangeinterpolator@awful.systems 15 points 3 weeks ago* (last edited 3 weeks ago)

I study complexity theory so this is precisely my wheelhouse. I confess I did not read most of it in detail, because it does spend a ton of space working through tedious examples. This is a huge red flag for math (theoretical computer science is basically a branch of math), because if you truly have a result or idea, you need a precise statement and a mathematical proof. If you're muddling through examples, that generally means you either don't know what your precise statement is or you don't have a proof. I'd say not having a precise statement is much worse, and that is what is happening here.

Wolfram here believes that he can make big progress on stuff like P vs NP by literally just going through all the Turing machines and seeing what they do. It's the equivalent of someone saying, "Hey, I have some ideas about the Collatz conjecture! I worked out all the numbers from 1 to 30 and they all worked." This analogy is still too generous; integers are much easier to work with than Turing machines. After all, not all Turing machines halt, and there is literally no way to decide which ones do. Even the ones that halt can take an absurd amount of time to halt (and again, how much time is literally impossible to decide). Wolfram does reference the halting problem on occasion, but quickly waves it away by saying, "in lots of particular cases ... it may be easy enough to tell what’s going to happen." That is not reassuring.
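
Just to spell out how little that "proof" amounts to, here it is as literal code (a toy sketch, not anything from Wolfram's post): it prints thirty trues and establishes nothing about the infinitely many cases left over.

```typescript
// The Collatz "proof by examples" made literal: thirty trues, zero theorems.
function collatzReachesOne(n: number, maxSteps = 10_000): boolean {
  let steps = 0;
  while (n !== 1 && steps < maxSteps) {
    n = n % 2 === 0 ? n / 2 : 3 * n + 1;
    steps += 1;
  }
  return n === 1;
}

for (let n = 1; n <= 30; n++) {
  console.log(n, collatzReachesOne(n)); // all true -- and the conjecture stays open
}
```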

I am also doubtful that he fully understands what P and NP really are. Complexity classes like P and NP are ultimately about problems, like "find me a solution to this set of linear equations" or "figure out how to pack these boxes in a bin." (The second one is much harder.) Only then do you consider which problems can be solved efficiently by Turing machines. Wolfram focuses on the complexity of Turing machines, but P vs NP is about the complexity of problems. We don't care about the "arbitrary Turing machines 'in the wild'" that have absurd runtimes, because, again, we only care about the machines that solve the problems we want to solve.

Also, for a machine to solve problems, it needs to take input. After all, a linear equation solving machine should work no matter what linear equations I give it. To have some understanding of even a single machine, Wolfram would need to analyze the behavior of the machine on all (infinitely many) inputs. He doesn't even seem to grasp the concept that a machine needs to take input; none of his examples even consider that.

Finally, here are some quibbles about some of the strange terminology he uses. He talks about "ruliology" as some kind of field of science or math, and it seems to mean the study of how systems evolve under simple rules or something. Any field of study can be summarized in this kind of way, but in the end, a field of study needs to have theories in the scientific sense or theorems in the mathematical sense, not just observations. He also talks about "computational irreducibility", which is apparently the idea of asking what the smallest Turing machine that computes a given function is. Not only does this fail to help his project, there is already a legitimate subfield of complexity theory called meta-complexity that is productively investigating this idea!

Considered as an attempt at P vs NP, I would not disagree if someone called this crank work. I think Wolfram greatly overestimates the effectiveness of just working through a bunch of examples in comparison to having a deeper understanding of the theory. (I could make a joke about LLMs here, but I digress.)

[-] nightsky@awful.systems 14 points 3 weeks ago* (last edited 3 weeks ago)

The AI craze might end up killing graphics card makers:

Zotac SK's message: "(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors."

The current situation is so serious that it is worrisome for the future existence of graphics card manufacturers and distributors. They announced that memory supply will not be sufficient and that GPU supply will also be reduced.

Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming "staggering" price increases.

(Source)

I wonder if the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, and everyone will know that it's only because of useless AI data centers?

[-] istewart@awful.systems 16 points 3 weeks ago

I am confident that Altman in particular has a poor-to-nonexistent grasp of second-order effects.

[-] mirrorwitch@awful.systems 13 points 3 weeks ago

I mean you don’t have to grasp, know of, or care about the consequences when none of the consequences will touch you, and after the bubble pops and the company goes catastrophically bankrupt, you will remain comfortably a billionaire with several more billions in your aire than the ones you had when you started the bubble in the first place. Consequences are for the working class; capitalists fall upwards.

[-] gerikson@awful.systems 13 points 3 weeks ago

what absolute bullshit

https://www.moltbook.com/

AKA Reddit for "agents".

[-] nfultz@awful.systems 13 points 3 weeks ago
[-] rook@awful.systems 13 points 2 weeks ago

I know this is like shooting very large fish in a very small barrel, but the openclaws/molt/clawd thing is an amazing source of utter, baffling ineptitude.

For example, what if you could replace cron with a stochastic scheduler that costs you a dollar an hour by running an operation on someone else’s GPU farm, instead of just checking the local system clock?

The user was then pleased to announce that they’d been able to solve the problem by changing the model and reducing the polling interval. Instead of just checking the clock. For free.
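
For scale, “just checking the clock” is about a dozen lines and costs nothing (a toy sketch; runHourlyTask is a made-up stand-in for whatever the agent was meant to do):

```typescript
// The free, deterministic alternative: poll the local clock, no GPU farm required.
function runHourlyTask(): void {
  console.log("doing the thing at", new Date().toISOString()); // stand-in task
}

let lastHour = -1;
setInterval(() => {
  const hour = new Date().getHours();
  if (hour !== lastHour) {
    lastHour = hour;
    runHourlyTask(); // fires once per hour
  }
}, 60_000); // check the clock once a minute, for free
```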

https://bsky.app/profile/rusty.todayintabs.com/post/3mdrdhzqmr226

[-] BlueMonday1984@awful.systems 12 points 3 weeks ago

New blogpost from Drew DeVault, titled "The cults of TDD and GenAI". As the title suggests, it draws comparisons between how people go all-in on TDD (test-driven development) and how people go all-in on slop machines.

It's another post in the genre of "why did tech fall for AI so hard" that I've seen cropping up, in the same vein as mhoye's Mastodon thread and Iris Meredith's "The problem is culture".

[-] self@awful.systems 12 points 3 weeks ago

I just made a quick change to our lemmy-ui config to hopefully quickly restart it when it once again leaks all of its memory and gets stuck in a tight set of GC cycles due to a “high” amount of traffic (normal for a public website being scraped, even with iocaine) even though it’s a node application that barely does anything at all
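
(If you're curious, the band-aid is roughly this shape; a hypothetical sketch rather than our actual config, with an invented unit name and threshold:)

```typescript
// Hypothetical watchdog sketch, not the real config: poll lemmy-ui's resident
// memory and bounce the service once it balloons past a made-up threshold.
import { execSync } from "node:child_process";

const LIMIT_MB = 1024; // invented limit; tune to the host

setInterval(() => {
  try {
    // Sum RSS (reported in KB) across all matching processes.
    const rssKb = execSync("ps -o rss= -C lemmy-ui", { encoding: "utf8" })
      .split("\n")
      .reduce((sum, line) => sum + (Number(line.trim()) || 0), 0);
    if (rssKb > LIMIT_MB * 1024) {
      execSync("systemctl restart lemmy-ui"); // assumes a systemd unit of this name
    }
  } catch {
    // ps exits nonzero when nothing matches; skip this tick and retry.
  }
}, 60_000);
```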

I can’t monitor this one as close as I’d like today so if something breaks and it doesn’t resolve itself within a couple minutes, or the instance looks like it’s in a crash loop, ping @zzt@mas.to on mastodon and I’ll try and get to it as soon as I can

[-] JFranek@awful.systems 12 points 3 weeks ago

I think I installed the cursed Windows 11 update on my work machine, because after taking several tries to boot, my second monitor stopped working (detected, but showing a black screen).

Tried some different configurations, and could make only 0-1 screens work.

Uninstalled the update and everything worked correctly again.

Thanks for nothing Microslop.

[-] rook@awful.systems 12 points 3 weeks ago

Amazon Found ‘High Volume’ Of Child Sex Abuse Material in AI Training Data

The tech giant reported hundreds of thousands of cases of suspected child sexual abuse material, but won’t say where it came from

I’ll bet.

https://www.bloomberg.com/news/features/2026-01-29/amazon-found-child-sex-abuse-in-ai-training-data

[-] corbin@awful.systems 11 points 3 weeks ago

Kyle Hill has gone full doomer after reading too much Big Yud and the Yud & Soares book. His latest video is titled "Artificial Superintelligence Must Be Illegal." Previously, on Awful, he was cozying up to effective altruists and longtermists. He used to have a robotic companion character who would banter with him, but it seems like he's no longer in that sort of jocular mood; he doesn't trust his waifu anymore.
