Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] blakestacey@awful.systems 1 points 38 minutes ago

While most bullish outlooks are premised on economic reacceleration, it’s difficult to ignore the market’s reliance on AI capex. In market-pricing terms, we believe we’re closer to the seventh inning than the first, and several developments indicate we may be entering the later phases of the boom. First, AI hyperscaler free-cash-flow growth has turned negative. Second, price competition in the “monopoly-feeder businesses” seems to be accelerating. Finally, recent deal-making smacks of speculation and vendor-financing strategies of old.

https://www.morganstanley.com/pub/content/dam/mscampaign/wealth-management/wmir-assets/gic-weekly.pdf

[-] blakestacey@awful.systems 3 points 3 hours ago
[-] dgerard@awful.systems 1 points 37 minutes ago

I predict Sam will lose big on the S&C claims. They have made fuckin bank on this bankruptcy, but also their fees have already been ruled entirely reasonable given the shitshow in question, and the near complete recovery for creditors will make them look even better.

[-] gerikson@awful.systems 6 points 5 hours ago

Guys, according to LW you’re reading Omelas all wrong (just like Le Guin was wrong)

https://www.lesswrong.com/posts/n83HssLfFicx3JnKT/omelas-is-perfectly-misread

[-] corbin@awful.systems 6 points 4 hours ago

Choice sneer from the comments:

Omelas: how we talk about utopia [by Big Joel, a patient and straightforward Youtube humanist,] [has a] pretty much identical thesis, does this count?

Another solid one which aligns with my local knowledge:

It's also about literal child molesters living in Salem Oregon.

The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It's not intended for libertarian gotchas because it wasn't written in a philosophical style; it's a narrative that conveys a mood and an ethical framing.

[-] o7___o7@awful.systems 4 points 7 hours ago

LLMs are the Dippin Dots of technology.

[-] BlueMonday1984@awful.systems 7 points 6 hours ago

That's an unfair comparison, Dippin Dots don't slowly ruin the world by existing (also, they're delicious)

[-] o7___o7@awful.systems 6 points 4 hours ago

It will always be the ice cream of the future, never today

[-] Soyweiser@awful.systems 8 points 12 hours ago* (last edited 11 hours ago)

So, a bit of a counter to our usual stuff. A migrant worker here won a case against his employer, who had tied his living space to his employment contract (which is forbidden); the worker used ChatGPT as an aid (how much is not told). So there actually was a case where it helped.

Interesting note on it: these sorts of cases have no jurisprudence yet, so that might have been a factor. No good links for it sadly, as it was all in Dutch. (Can't even find a proper writeup on a bigger news site, as a foreigner defending their rights against abuse is less interesting than some other country having a new bishop.) Skeets congratulating the guy here: https://bsky.app/profile/isgoedhoor.bsky.social/post/3m27aqkyjjk2c (in Dutch). Nothing much about the genAI usage.

But this does fit a pattern: as with blind or low-vision people, these tools are being used by people who have no other recourse because we refuse to help them (this is bad tbh; I'm happy they are getting more help, don't get me wrong, but it shouldn't be this substandard).

[-] BlueMonday1984@awful.systems 3 points 9 hours ago

The Guardian shat out its latest piece of AI hype, violating Betteridge's Law of headlines by asking "Parents are letting little kids play with AI. Are they wrong?"

[-] swlabr@awful.systems 9 points 17 hours ago* (last edited 14 hours ago)

Do we have a word for people that are kind of like… AI concern trolls? Like they say they are critical of AI, or even against AI, but only ever really put forward pro-AI propaganda, especially in response to actual criticisms of AI. Kind of centrists or (neo) libs. But for AI.

Bonus points if they also for some reason say we should pivot to more nuclear power, because in their words, even though AI doesn’t use as much electricity as we think, we should still start using more nuclear power to meet the energy demands. (ofc this is bullshit)

E: Maybe it's just sealion

[-] blakestacey@awful.systems 4 points 5 hours ago
[-] Soyweiser@awful.systems 10 points 11 hours ago* (last edited 11 hours ago)

Sealioning is a bit more specific, as they do not stop, and demand way more evidence than is normal. Scott had a term for this, forgot it already (one of those more useful Rationalist ideas, which they only employ asymmetrically themselves). Noticed it recently on reddit: some person was mad I didn't properly counter Yud's arguments, while misrepresenting my position (which wasn't that strong tbh, I just quickly typed them up before I had other things to do). But it is very important to take Yud's arguments seriously for some reason; reminds me of creationists.

Think just calling them AI concern trolls works.

[-] antifuchs@awful.systems 10 points 23 hours ago* (last edited 22 hours ago)

Been tangentially involved in a discussion about how much LLMs have improved in the past year (I don’t care), but now that same space has a discussion of how annoying the stupid pop-up chat boxes are on websites. Don’t know what the problem is, they’ve gotten so much better in the past year?

[-] YourNetworkIsHaunted@awful.systems 7 points 14 hours ago

I mean that's the fundamental problem, right? No matter how much better it gets, the things it's able to do aren't really anything people need or want. Like, if I'm going to a website looking for information it's largely because I don't want to deal with asking somebody for the answer. Even a flawless chatbot that can always provide the information I need - something that is far beyond the state of the art and possibly precluded by some fundamental limitation of the LLM structure - wouldn't actually be preferable to just navigating a smooth and well-structured site.

[-] antifuchs@awful.systems 6 points 11 hours ago

Yup, exactly. The chatbot, no matter how helpful, exists in a context of dark patterned web design, incredibly bad resource usage and theft. Its purpose is to make the customer’s question go away, so not even the fanatics are interested.

[-] Soyweiser@awful.systems 4 points 10 hours ago* (last edited 10 hours ago)

See also how youtube tutorials have mostly killed (*) text based tutorials/wikis while being just inferior to good wikis/text based ones, both because listening to a person talk is a linear experience while text allows for easy scrolling, and because most people are just bad at yt tutorials (shoutout to the one which had annoyingly long random pauses in/between sentences even at 2x speed).

This is not helped by the fact that youtube is now a source of revenue, and updating a wiki/tutorial often is not. So the incentives are all wrong. A good example of this is the gaming wiki fextralife: see this page on Dragon's Dogma 2 NPCs, https://dragonsdogma2.wiki.fextralife.com/NPCs (the game has been out for over a year, if the weirdness doesn't jump out at you). But the big thing for fextralife is their youtube tutorials, and the wiki used to have an autoplaying link to their streams. This isn't a wiki, it is an advertisement for their youtube and livestreams.

And while this is a big example, the problem persists with smaller youtubers, who suffer from an extreme publish-or-perish (and do not deviate from your niche) dynamic. They can't put in the time to update things, because they need to publish a new video soon (on their niche; branching out is punished) or not pay rent. (For people who play videogames and/or watch youtube: this is also why somebody like the Spiffing Brit long ago went from 'I exploit games' to 'I grind, and if you grind enough in this single player game you become op'. The content must flow, but eventually you will run out of good new ideas. It's also why he tried to push his followers into doing risky cryptocurrency related 'cheats' (follow Elon; if he posts a word that can be cryptocoined, pump and dump it for a half hour).)

*: They still exist but tend to be very bad quality, even worse now people are using genAI to seed/update them.

[-] o7___o7@awful.systems 4 points 9 hours ago

People can't just have a hobby anymore, can they?

[-] Soyweiser@awful.systems 3 points 5 hours ago* (last edited 5 hours ago)

Nope. And tbh, did some dd2 recently, and for a very short while I was tempted to push the edit button, but then I remembered that fextralife just tries to profit off my wiki editing labor. (I still like the idea of wikis, but do not have the fortitude and social calm to edit a mainstream one like wikipedia). (I did a quick check, and yeah I also really hate the license fextralife/valnet uses "All contributions to any Fextralife.com Wiki fall under the below Contribution Agreement and become the exclusive copyrighted property of Valnet.", and their editor sucks ass (show me the actual code not this wysiwyg shit)).

[-] o7___o7@awful.systems 2 points 4 hours ago
[-] Soyweiser@awful.systems 12 points 1 day ago* (last edited 1 day ago)

Tyler Cowen saying some really weird shit about an AI 'actress'.

(For people who might wonder why he is relevant: see his 'see also' section on wikipedia.)

E: And you might think, rightfully imho, that this cannot be real, that this must be an edit. I have bad news: https://archive.is/vPr1B

[-] blakestacey@awful.systems 13 points 21 hours ago* (last edited 21 hours ago)

The Wikipedia editors are on it.

Image description: screenshot of Tyler Cowen's Wikipedia article, specifically the "Personal life" section. The concluding sentence is "He also prefers virgin actresses."

[-] gerikson@awful.systems 9 points 1 day ago

Some Rat content got shared on HN, and the rats there are surprised and outraged that not everyone shares their deathly fear of the AI god:

https://news.ycombinator.com/item?id=45451971

"Stop bringing up Roko's Basilisk!!!" they sputter https://news.ycombinator.com/item?id=45452426

"The usual suspects are very very worried!!!" - https://news.ycombinator.com/item?id=45452348 (username 'reducesuffering checks out!)

"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

[-] dgerard@awful.systems 9 points 14 hours ago* (last edited 14 hours ago)

https://news.ycombinator.com/item?id=45453386

nobody mentioned this particular incident, dude just threw it into the discussion himself

[-] TinyTimmyTokyo@awful.systems 3 points 6 hours ago

Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.

[-] sc_griffith@awful.systems 7 points 9 hours ago

incredible how he rushes to assure us that this was "a really hot 17 year old"

[-] gerikson@awful.systems 5 points 13 hours ago

Always fun trawling thru comments

Government banning GPUs: absolutely necessary: https://news.ycombinator.com/item?id=45452400

Government banning ICE vehicles - eh, a step too far: https://news.ycombinator.com/item?id=45440664

[-] corbin@awful.systems 6 points 17 hours ago

The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
  2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
  3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erroring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like RPM and SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.
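For the record, "pulling the plug" on a hosted chatbot really is sysadmin-grade work rather than a battle of wits. A minimal sketch of what that looks like, assuming a hypothetical systemd unit named chatbot.service (the unit name and the use of Python's subprocess module are my own illustration, not anything claimed in the thread):

```python
import subprocess

def contain_chatbot(unit: str = "chatbot.service") -> None:
    """Stop a hypothetical chatbot service and keep it from restarting."""
    # Stop the running service now.
    subprocess.run(["systemctl", "stop", unit], check=True)
    # Prevent it from coming back on the next boot.
    subprocess.run(["systemctl", "disable", unit], check=True)

if __name__ == "__main__":
    contain_chatbot()
```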

[-] blakestacey@awful.systems 14 points 17 hours ago* (last edited 17 hours ago)

If a race of aliens with an IQ of 300 came to Earth

Oh noes, the aliens scored a meaningless number on the eugenicist bullshit scale, whatever shall we do

Next you'll be telling me that the aliens can file their TPS reports in under 12 parsecs

[-] BlueMonday1984@awful.systems 6 points 1 day ago* (last edited 1 day ago)

"Think for at least 5 seconds before typing." - on the subject of pulling the plug on a hostile AI - https://news.ycombinator.com/item?id=45452743

Read that last one against my better judgment, and found a particularly sneerable line:

And in this case we're talking about a system that's smarter than you.

Now, I'm not particularly smart, but I am capable of a lot of things AI will never achieve. Like knowing something is true, or working out a problem, or making something which isn't slop.

Between this rat and Saltman spewing similar shit on Politico, I have seen two people try to claim text extruders are smarter than living, thinking human beings. Saltman I can understand (he is a monorail salesman who lies constantly), but seeing someone who genuinely believes this shit is just baffling. Probably a consequence of chatbots destroying their critical thinking and mental acuity.

[-] gerikson@awful.systems 7 points 1 day ago

Let's not forget the perennial favorite "humans are just stochastic parrots too durr" https://news.ycombinator.com/item?id=45452238

to be scrupulously fair, the submission is flagged, and most of the explicit rat comments are downvoted

[-] Soyweiser@awful.systems 6 points 1 day ago

There have been a lot of cases in history of smart people being bested by the dumbest people around who just had more guns/a gun/copious amounts of meth/a stupid idea but they got lucky once, etc.

I mean, if they are so smart, why are they stuck in a locker?

[-] blakestacey@awful.systems 12 points 17 hours ago

It's practically a proverb that you don't ask a scientist to explain how a "psychic" is pulling off their con, because scientists are accustomed to fair play; you call a magician.

[-] corbin@awful.systems 10 points 1 day ago* (last edited 1 day ago)

Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:

bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:

a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

Um hello‽ Maybe Jeff doesn't have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks and they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme but they can't even eat it because printer paper tastes awful.

[-] TinyTimmyTokyo@awful.systems 7 points 17 hours ago

The "unhoused friend" story is about as likely to be true as the proverbial Canadian girlfriend story. "You wouldn't know her."

[-] self@awful.systems 1 points 2 hours ago

jeff’s follow-up after the backlash clarifies: you wouldn’t know her because he donated just under the limit that would incur a taxable event and didn’t establish a trust like a normal millionaire, and also the LLM printout only came pointlessly after months of research and financially supporting the unhoused friend, and also you’re no longer allowed to ask publicly about the person he brought up in public, take it to email

[-] misterbngo@awful.systems 7 points 18 hours ago

there's been something that's really rubbed me the wrong way about jeff in the last few years. he was annoying before and had some insights, but lately I've been using him as a sort of jim crameresque tech-take-barometer.

What really soured me was when he started picking fights with some python people a few years back, because someone dared post that a web framework (couldn't dig up the link) was a greater contribution to the world than S/O. His response was pretty horrid, to the point where various python leaders were telling him to stop being a massive dick, because he was trying to be a bully with this "do you know who I am" attitude: he personally had not heard of the framework, so it wasn't acshually at all that relevant compared to S/O.

and now this, combined with his stupid "teehee I am giving away my wealth guise, look how altruistic I am", really is a bit eugh

[-] self@awful.systems 9 points 1 day ago

so I got angry

I really hope atwood’s unhoused friend got the actual infrastructural support you mentioned (a temporary mailing address and an introduction letter emailed to an employer is only slightly more effort than generating slop, jeff, please) but from direct experience with philanthropists like him, I’m fairly sure Jeff now considers the matter solved forever

[-] o7___o7@awful.systems 5 points 20 hours ago

Thanks for this.

[-] ShakingMyHead@awful.systems 9 points 1 day ago

A bit odd to start out throwing shade at the Segway considering that the concept has been somewhat redeemed with e-bikes and e-scooters.

[-] CinnasVerses@awful.systems 10 points 1 day ago* (last edited 1 day ago)

"Provide an overview of local homeless services" sounds like a standard task for a volunteer or a search engine, but yes "you can use my address for mail and store some things in my garage and I will email some contacts about setting you up with contract work" would be a better answer than just handing out secondhand information! Many "amazing things AI can do" are things the Internet + search engines could do ten years ago.

I would also like to hear from the friend "was this actually helpful?"

[-] Soyweiser@awful.systems 9 points 1 day ago* (last edited 1 day ago)

Friend: "I have a problem"

Me, with a stack of google printouts: "My time to shine!".

E: oh god, I thought the examples were multiple and the friend one was just a random one. No, it was the first example: 'I gave my friend a printout, which saved me time'. Also, as I assume the friend is still unhoused and hasn't actually used the printout yet, he doesn't know if this actually helped. Atwood isn't a 'helping the unhoused' expert; he just assumed it was a good source. The story ends when he hands over the paper.

Also very funny that he is also going 'you just need to know how to ask questions the right way, which I learned by building stackoverflow'. Yeah euh, that is not a path a lot of people can follow.

[-] CinnasVerses@awful.systems 7 points 16 hours ago

Soyweiser

It's even worse when I read the whole thread: Atwood claims to have $140 million, and the best he can do for "a friend" who is homeless is handing out some printouts with a few sections highlighted? And he thinks this makes him look good because he promises to give away half his wealth one day?

[-] Soyweiser@awful.systems 4 points 10 hours ago

Like Clinton starting a go fund me for a coworker with cancer, the rich and their money are not voluntarily parted.

[-] CinnasVerses@awful.systems 2 points 3 hours ago

This also shows problems with the "effective altruist" approach. Donating to the local theater or "to raise awareness of $badThing" might not be the best way of using funds, but when a friend needs help now, and you have the resources to help them, and you say "no, that might not be as efficient as creating a giant charity to help strangers one day", something is wrong.
