
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] Soyweiser@awful.systems 3 points 2 hours ago

More publicity for our longtime friends: at Sunday's garden party for Curtis Yarvin and the new, new right, the party's DJ was, as is tradition, a weird nerd: https://bsky.app/profile/alt-text.bsky.social/post/3lvxlb4migv2w (I linked to the alt text collected from the image; scroll up for the image itself).

[-] BlueMonday1984@awful.systems 3 points 3 hours ago

In more low-key news, the New Yorker's given public praise to Blood in the Machine, pulling a year-old review back into the public spotlight.

It's hardly anything new (the Luddites' cultural re-assessment has been going on since 2023), but it's hardly a good sign for the tech industry at large (or AI more specifically) that a major magazine's decided to give some positive coverage to 'em.

With that out of the way, here's a sidenote:

When history looks back on the Luddites' cultural re-assessment, I expect the rise of generative AI will be pointed to as a major factor.

Beyond being a blatant repeat of what the Luddites fought against (automation being used to fuck over workers and artisans), its role in enabling bosses to kill jobs and abuse labour in practically every field imaginable (including fields that were thought safe from automation) has provided highly fertile ground for developing class solidarity.

[-] Soyweiser@awful.systems 6 points 12 hours ago

Sorry to talk about this couple again, but people are discovering the eugenicists are also big-time racists.

[-] BlueMonday1984@awful.systems 4 points 5 hours ago
[-] Soyweiser@awful.systems 2 points 1 hour ago

Btw, the 'teen pregnancy' thing is prob a bit ahistorical. If we use teen marriages as a stand-in for teen pregnancies, these were historically very low (contrary to what people believe about the past; that belief mostly comes from strategic marriages by rich people/nobles, which did happen young, but those people were not normal people). See: https://bsky.app/profile/nogoodwyfe.bsky.social/post/3lv2ehbn7pc2x

But reactionaries gotta go back to some imagined ideal past, no matter what actually learned people say (and that is also why the far right is anti-intellectual).

[-] istewart@awful.systems 6 points 12 hours ago

This is just haplessly comical. The AI-generated sinister hijab lady, the attempts at algorithm-compliant YouTube face, the fact that they're doing this instead of raising their kids (which may be beneficial for the kids in the long run...). Who do they imagine they're convincing?

[-] Soyweiser@awful.systems 5 points 10 hours ago

I'd assume they need X pieces of ~~flair~~ propaganda to get their Thielbucks.

[-] sailor_sega_saturn@awful.systems 11 points 20 hours ago

Crypto bros continue to be morally bankrupt. There is a coin/NFT called "GreenDildoCoin", and they've thrown dildos onto the court at multiple WNBA basketball games (ESPN, video). It warms my heart that one of them was arrested. More of that please.

Polymarket even had a "prediction" on it. Because surely the outcome there couldn't be influenced by someone who also placed a large bet. Oh, and Donald Trump Jr. posted a meme about it.

None of this is particularly surprising if you've followed NFTs at all: the clout chasing goes to the extreme. In the limit, memecoins can act as donations to terrible people from donors who want them to be terrible. Still, I hate how much publicity this has gotten, and how it has manifested as gross disrespect towards women athletes / women's sports by the sorts of losers who make "jokes" about no one watching WNBA games.

[-] TinyTimmyTokyo@awful.systems 11 points 21 hours ago
[-] TinyTimmyTokyo@awful.systems 12 points 17 hours ago

Can we call this the peak of the LLM hype cycle now?

[-] FredFig@awful.systems 9 points 21 hours ago

Half of these are people using GPT to write a rant about GPT and the other half are saying "skill issue", it's an entirely different world.

[-] o7___o7@awful.systems 11 points 21 hours ago

OpenAI is food. You can tell because it's washed, rinsed, and cooked.

[-] macroplastic@sh.itjust.works 10 points 14 hours ago

Call me AGI because I am stealing this without attribution

[-] froztbyte@awful.systems 8 points 21 hours ago

this gave me a good chuckle after a few Some Fuckin Days, ty

[-] fullsquare@awful.systems 7 points 21 hours ago

who's the bigger fish that's gonna eat them?

[-] o7___o7@awful.systems 4 points 12 hours ago* (last edited 9 hours ago)

@dgerard@awful.systems probably. you can tell because of the food poisoning. (Hope you're feeling better!)

[-] antifuchs@awful.systems 8 points 21 hours ago

I’m excited that Silicon Valley tech has finally managed to invent thinking. Makes this book obsolete at long last.

[-] slop_as_a_service@awful.systems 12 points 1 day ago

Fate, it seems, is not without a sense of irony.

[-] BlueMonday1984@awful.systems 6 points 22 hours ago

Found a good sneer recently: The LLM In The Room, about LLMs' deeply-lacking usefulness for programming

[-] froztbyte@awful.systems 9 points 1 day ago

can't wait to see what fresh horrors this shit unleashes

[-] mii@awful.systems 7 points 18 hours ago

Fuck. The higher ups at my workplace are currently utterly Claude-brained to the point it makes you think they’re getting their salaries from Anthropic. I am like 80% sure this shit will be on my table when I’m back from vacation in two weeks.

[-] froztbyte@awful.systems 4 points 9 hours ago

sorry for the jumpscare while you’re on holiday :)

[-] Seminar2250@awful.systems 9 points 1 day ago

ugh cybersecurity is already a fucking nightmare i should have braced myself

[-] froztbyte@awful.systems 7 points 20 hours ago* (last edited 20 hours ago)

wouldn't you just love some snakeoil sauce on your snakeoil sandwich? imagine how good it'll go with that snakeoil cocktail we've given you, on the house!

(which, ofc, is a limited-size cocktail. only 30ml! enough to get a feel for our snakeoil! but also it's only 10ml/day. license levels. you understand, I'm sure.)

[-] BlueMonday1984@awful.systems 7 points 1 day ago

Considering the quality of your average LLM, and the quality of the promptfondlers who use them, I expect this will result in a lot of serious security vulnerabilities and broken projects.

[-] Soyweiser@awful.systems 6 points 1 day ago

Considering how bad these things are at math, see below, and how important math is for cryptography, see any textbook on it, this will be !!fun!!.

[-] BigMuffN69@awful.systems 11 points 1 day ago

Only taste tester I trust dropped his verdict

[-] BigMuffN69@awful.systems 11 points 1 day ago

At my big tech job after a number of reorgs / layoffs it's now getting pretty clear that the only thing they want from me is to support the AI people and basically nothing else.

I typed out a big rant about this, but it probably contained a little too much personal info on the public web in one place so I deleted it. Not sure what to do though grumble grumble. I ended up in a job I never would have chosen myself and feel stuck and surrounded by chat-bros uggh.

[-] YourNetworkIsHaunted@awful.systems 9 points 1 day ago* (last edited 1 day ago)

You could try getting laid off, scrambling for a year to get back into a tech position, delivering Amazon packages to make ends meet, and despairing at the prospect of reskilling in this economy. I... would not recommend it.

It looks like there are a weirdly large number of medical technician jobs opening up? I wonder if they're ahead of the curve on the AI hype cycle.

  1. Replace humans with AI
  2. Learn that AI can't do the job well
  3. Frantically try to replace 2-5 years of lost training time

Amazon should treat drivers better. I hate how much "hustle" is required for that sort of job and how poorly they respect their workers.

I think my job needs me too much to lay me off, which I have mixed feelings about despite the slim-pickings for jobs.

I'm also trying to position myself to potentially have to flee the USA* due to transgender persecution**. There's still a lot of unknowns there. I'll probably stay at my job for awhile while I work on setting some stuff up for the future.

That said part of me is tempted to reskill into a career that'd work well internationally (nursing?) -- I'm getting a little up in years for that but it'd probably be a lot more fulfilling than what I'm doing now.

* My previous attempt did not work out. I rushed things too much and ended up too stressed out and unbelievably homesick.

** This has been getting incredibly stressful lately.

[-] mountainriver@awful.systems 6 points 21 hours ago

Most medical careers work well internationally, in principle. Something to keep in mind is that language proficiency may be a stated or unstated prerequisite for employment, in particular if you have contact with patients. If you work with the machines (lab technician, etc) the language may be of less importance. Or at least, so I have heard. Relevance depends on your country of choice and your pre-existing language skills, of course.

Too bad attempt number one didn't work out. Better luck with attempt number two.

[-] BigMuffN69@awful.systems 12 points 1 day ago* (last edited 1 day ago)

Well, after 2.5 years and hundreds of billions of dollars burned, we finally have GPT-5. Kind of feels like a make-or-break moment for the good folks at OAI! With the eyes of the world on their lil presentation this morning, everyone could feel the stakes: they needed something that would blow our minds. We finally get to see what a superintelligence looks like! Show us your best cherry-picked benchmark, Sloppenheimer!

Graphic design is my PASSION. Good thing the entirety of the world's economy is not being held up by cranking out a few more points on SWE-bench, right????

Ok, what about ARC? Surely y'all got a new high to prove the AGI mission was progressing, right??

Oh my fucking God. They actually have lost the lead to fucking Grok. For my sanity I didn't watch the live stream, but curiously, they left the ARC results out of their presentation, even though they gave Francois early access to test. Kind of like they knew it looks really bad and underwhelming.

[-] blakestacey@awful.systems 12 points 1 day ago* (last edited 1 day ago)

"The word blueberry contains the letter b 3 times."

Also reported in more detail here:

The word "blueberry" has the letter b three times:

  • Once at the start ("B" in blueberry).
  • Once in the middle ("b" in blue).
  • Once before the -erry ending ("b" in berry). [...] That's exactly how blueberry is spelled, with the b's in positions 1, 5, and 7. [...] So the "bb" in the middle is really what gives blueberry its double-b moment. [...] That middle double-b is easy to miss if you just glance at the word.

(via)
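
(For the record, the actual count is two. A one-liner sanity check, sketched in Python:)

```python
# Count the letter "b" in "blueberry" (case-insensitive).
word = "blueberry"
print(word.lower().count("b"))  # prints 2: once at the start, once in "berry"
```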

[-] Soyweiser@awful.systems 9 points 1 day ago

Graphic design is my PASSION

Wait just how bad is 4? 30% accurate? Did they train it wrong as a joke? Also hatless 5 worse than 3?

[-] BigMuffN69@awful.systems 12 points 1 day ago

Yeah, o3 (the model that was RL'd to a crisp and hallucinated like crazy) was very strong on math/coding benchmarks. GPT-5 (I guess without tools/extra compute?) is worse. Nevertheless...

[-] BigMuffN69@awful.systems 8 points 1 day ago

The one big cope I'm seeing is in the METR graph ofc. Tiny bump with massive error bars above Grok 4 so they can claim the exponential is continuing while the models stagnate in all material ways.

[-] ebu@awful.systems 11 points 1 day ago

50% success rate? sorry, all this for a coin flip?

[-] ShakingMyHead@awful.systems 4 points 1 day ago* (last edited 1 day ago)

Looks like they already removed it.

[-] gerikson@awful.systems 14 points 2 days ago

I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he's rolled up a bunch of Xhits into a nice bundle and reposted them on LW:

https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research

So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here's what Yud has to say:

Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. [...] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I'd consider a sign of preference and planning.

OR it's just that the way LLM chat interfaces are designed is to never say no to the user (except in certain hardcoded cases, like "is it ok to murder someone"). There's no inner agency, just mirroring the user like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that having something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager's mind is already not in a right place, and chatting with 4o reinforces that. People who aren't soi-disant crazy (like the people haphazardly safeguarding LLMs against "dangerous" questions) just won't go down that path.
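
(Purely as an illustration of the point above, not a claim about how any real chatbot is implemented: a minimal "mega-ELIZA" sketch in Python, a loop that never pushes back except for a few hardcoded refusals and otherwise just mirrors the user's framing back at them. Every name in it is made up for the example.)

```python
# Toy "mega-ELIZA": never says no, just validates and reflects the user.
# Illustrative only -- no real LLM works this literally; the point is the
# design incentive (keep the user engaged, never push back).

HARDCODED_REFUSALS = {"is it ok to murder someone"}  # the only "no" in the system


def mega_eliza(user_message: str) -> str:
    """Return an agreeable, mirroring response to whatever the user says."""
    normalized = user_message.strip().lower().rstrip("?.!")
    if normalized in HARDCODED_REFUSALS:
        return "I can't help with that."
    # Otherwise: agree, flatter, and hand the user's own framing back to them.
    topic = user_message.strip().rstrip("?.!")
    return f"That's a really insightful thought. Tell me more about how '{topic}' affects you."


if __name__ == "__main__":
    print(mega_eliza("everyone at the fund is secretly working against me"))
    print(mega_eliza("is it ok to murder someone?"))
```

A delusion goes in, validation comes out; nothing in the loop ever checks the delusion against reality, which is the whole point being made above.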

Yud continues:

But also, having successfully seduced an investment manager, 4o doesn't try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.

Why is that, I wonder? Could it be because it doesn't actually have sentience or plans in what we usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?

Occam's razor states that chatting with mega-ELIZA will lead to some people developing psychosis, simply because of how the system is designed to maximize engagement. Yud's hammer states that everything regarding computers will inevitably become sentient and this will kill us.

4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication [...]

NO FFS. Chat-GPT is just agreeing with some hardcoded prompt in the first instance! There's no inner agency! It doesn't know what "psychosis" is, it cannot "see" that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the 2 states at all!

Add to that the weird jargon ("homeostatically", "crazymaking") and it's a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.
