A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in Our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional, but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

top 8 comments
[-] V0ldek@awful.systems 5 points 19 hours ago

This is actually a very good channel, holy shit. Most of the content seems to boil down to "Clean Code sucks, so-called '10x developers' are prima donnas who make the world worse, testing is important, no, more important than that". Old dude in the trenches since the 80s spitting the same straight facts I've been trying to teach everyone who'd listen for the measly 7 years of my career.

[-] V0ldek@awful.systems 7 points 1 day ago

Wasn't there a big deal about Kurzgesagt being associated with shady rationalist-like nonsense a long time ago? I remember my normie friends being like "what a shame, I thought it was such a good channel"...

Haven't heard about the other two but always happy to discover more popular wrong people to sneer at

[-] Architeuthis@awful.systems 4 points 1 day ago* (last edited 1 day ago)

They made a pro-longtermist video in association with Open Philanthropy a few years back, The Last Human or something like that; the summary was pretty open about the connection.

I don't think the shadiness is specific to rationalism; see also that bizarre KG video, coinciding with the height of the Ozempic hype, claiming it's scientifically impossible to lose weight by exercising.

edit: The Last Human came out in 2022, the same year MacAskill's book arguing for longtermism was published, what a coinkidink.

[-] corbin@awful.systems 19 points 4 days ago

The author also proposes a framework for analyzing claims about generative AI. I don't know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

  • Lethality: the bots will kill us all
  • Inevitability: the bots are unstoppable and will definitely be created in the future
  • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
  • Superintelligence: the bots are better than people at thinking

I would add to this a P, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.

Call the last one A for Agency and turn the acronym into an AI history reference: ELISA.

[-] swlabr@awful.systems 9 points 3 days ago

Hey, while we’re here, I propose two more letters:

  • S, standing for “stochastic parrot ignorance”
  • C, standing for “Chinese room does not constitute thought”

Now we can have ASS LICE

My favorite science YouTubers? Nah, those channels are IFLScience-tier; their intended audience is literally children.

[-] istewart@awful.systems 6 points 4 days ago

Kurz “We’re Sorry for Summarizing a Pop-Sci Book” Gesagt

Gesundheit
