
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

TinyTimmyTokyo@awful.systems 6 points 1 month ago

This one's been making the rounds, so people have probably already seen it. But just in case...

Meta did a live "demo" of their ~~recording~~ new AI.

gerikson@awful.systems 6 points 1 month ago
swlabr@awful.systems 6 points 1 month ago

Hmm, it’s still on the funny side of the graph for me. I think it could go on for at least another week.

Soyweiser@awful.systems 5 points 1 month ago (last edited 1 month ago)

I almost wanna use some reverse psychology to try and make him stop.

'hey im from sneerclub and we are loving this please dont stop this strike'

(I mean he clearly mentally prepped against arguments and even force (and billionaires), but not against someone just making fun of him. Of course he prob doesn’t know about any of these places and hasn’t built us up to Boogeyman status, but imagine it worked)

corbin@awful.systems 6 points 1 month ago

There's an ACX guest post rehashing the history of Project Xanadu, an important example of historical vaporware that influenced computing primarily through opinions and memes. This particular take is focused on Great Men and isn't really up to the task of humanizing the participants, but they do put a good spotlight on the cults that affected some of those Great Men. They link to a 1995 article in Wired that tells the same story in a better way, including the "six months" joke. The orange site points out a key weakness that neither narrative quite gets around to admitting: Xanadu's micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd. My choice sneer is to examine a comment from one of the ACX regulars:

The details lie in the devil, for sure...you'd want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

Ah yes, low enough to allow our heroic wiki-builders, wiki-citers, and wiki-correctors; and high enough to forbid their brutish wiki-pedants, wiki-lawyers, and wiki-deleters.

Disclaimer: I know Miller and Tribble from the capability-theory community. My language Monte is literally a Python-flavored version of Miller's E (WP, esolangs), which is itself a Java-flavored version of Tribble's Joule. I'm in the minority of a community split over the concept of agoric programming, where a program can expand to use additional resources on demand. To me, an agoric program is flexible about the resources allocated to it and designed to dynamically reconfigure itself; to Miller and others, an agoric program is run on a blockchain and uses micropayments to expand. Maybe more pointedly, to me a smart contract is what a vending machine proffers (see How to interpret a vending machine: smart contracts and contract law for more words); to them, a smart contract is how a social network or augmented/virtual reality allows its inhabitants to construct non-primitive objects.
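
A minimal sketch of that vending-machine reading, in Python rather than Monte, with every name invented for illustration: the machine's public interface *is* the contract, and holding a reference to it grants exactly those rights and no more.

```python
class VendingMachine:
    """The public methods below are the entire contract this machine proffers."""

    def __init__(self, stock, price_cents):
        self._stock = dict(stock)    # item -> count (private state)
        self._price = price_cents
        self._escrow = 0             # coins inserted, not yet spent

    def insert_coins(self, cents):
        self._escrow += cents

    def vend(self, item):
        # The contractual terms, checked mechanically:
        # sufficient payment and available stock, or no deal.
        if self._escrow < self._price:
            raise ValueError("insufficient payment")
        if self._stock.get(item, 0) == 0:
            raise KeyError("out of stock")
        self._stock[item] -= 1
        change, self._escrow = self._escrow - self._price, 0
        return item, change

machine = VendingMachine({"cola": 3}, price_cents=150)
machine.insert_coins(200)
print(machine.vend("cola"))  # ('cola', 50)
```

No court and no blockchain in sight; the object's interface is the whole agreement, which is the capability-theory reading of "smart contract".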

Soyweiser@awful.systems 5 points 1 month ago (last edited 1 month ago)

The 17 rules also seem to have abuse built in. Documents need to be stored redundantly (without any mention of how many copies that means), and there is a system where people are billed for the data they store. Combine these, and storing your data anywhere runs the risk of a malicious actor emptying your accounts. As in: 'it costs ten bucks to store a file here', followed by 'sorry, we had to securely store ten copies of your file, 100 bucks please'. Weird sort of rules. Feels a lot like it never figured out what it wants to be: a centralized or a distributed system, a system where writers can make money or one they have to pay to use. And a lot of technical solutions for social problems.
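
To make that billing hole concrete, a toy sketch with invented numbers (not taken from the actual Xanadu rules): if the redundancy factor is chosen by the storage provider rather than fixed in the contract, the provider writes its own invoice.

```python
RATE_PER_COPY = 10  # "it costs ten bucks to store a file here"

def storage_bill(copies: int) -> int:
    # `copies` is decided by the (possibly malicious) provider, since the
    # rules mandate redundancy without ever bounding it.
    return RATE_PER_COPY * copies

print(storage_bill(1))   # 10  -- what the user thought they agreed to
print(storage_bill(10))  # 100 -- "sorry, we had to securely store ten copies"
```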

Soyweiser@awful.systems 6 points 1 month ago

Was reading some science fiction from the 90s, and the AI/AGI said 'im an analog computer, just like you, im actually really bad at math.' And I wonder how much damage each of these ideas did (the other being that there are computer types that can do more/different things. Not sure if analog Turing machines provide any new capabilities that digital TMs don't, but I leave that question to the smarter people in theoretical computer science).

The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective, because a smart AI that can also do math super well is going to be hard to write) now leads people who have read enough science fiction to see a machine that can't count or run Doom and go 'this is what they predicted!'.

Not a sneer just a random thought.

corbin@awful.systems 6 points 1 month ago

It's because of research in the mid-80s leading to Moravec's paradox — sensorimotor stuff takes more neurons than basic maths — and Sharp's 1983 international release of the PC-1401, the first modern pocket computer, along with everybody suddenly learning about Piaget's research with children. By the end of the 80s, AI research had accepted that the difficulty with basic arithmetic tasks must be in learning simple circuitry which expresses those tasks; actually performing the arithmetic is easy, but discovering a working circuit can't be done without some sort of process that reduces intermediate circuits, so the effort must also be recursive in the sense that there are meta-circuits which also express those tasks. This seemed to line up with how children learn arithmetic: a child first learns to add by counting piles, then by abstracting to symbols, then by internalizing addition tables, and finally by specializing some brain structures to intuitively make leaps of addition. But sometimes these steps result in wrong intuition, and so a human-like brain-like computer will also sometimes be wrong about arithmetic too.

As usual, this is unproblematic when applied to understanding humans or computation, but not a reasonable basis for designing a product. Who would pay for wrong arithmetic when they could pay for a Sharp or Casio instead?

Bonus: Everybody in the industry knew how many transistors were in Casio and Sharp's products. Moravec's paradox can be numerically estimated. Moore's law gives an estimate for how many transistors can be fit onto a chip. This is why so much sci-fi of the 80s and 90s suggests that we will have a robotics breakthrough around 2020. We didn't actually get the breakthrough IMO; Moravec's paradox is mostly about kinematics and moving a robot around in the world, and we are still using the same kinematic paradigms from the 80s. But this is why bros think that scaling is so important.

lagrangeinterpolator@awful.systems 5 points 1 month ago (last edited 1 month ago)

Not sure if analog Turing machines provide any new capabilities that digital TMs don't, but I leave that question to the smarter people in theoretical computer science

The general idea among computer scientists is that analog TMs are not more powerful than digital TMs. The supposed advantage of an analog machine is that it can store real numbers that vary continuously while digital machines can only store discrete values, and a real number would require an infinite number of discrete values to simulate. However, each real number "stored" by an analog machine can only be measured up to a certain precision, due to noise, quantum effects, or just the fact that nothing is infinitely precise in real life. So, in any reasonable model of analog machines, a digital machine can simulate an analog value just fine by using enough precision.
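
A minimal numerical illustration of that argument (a toy model with assumed noise figures, not a proof): once readout noise is in the picture, quantizing at a step below the noise floor changes nothing measurable.

```python
import random

NOISE = 1e-3   # assumed readout noise of the analog machine
STEP = 1e-4    # digital quantization step, finer than the noise floor

def read_analog(true_value):
    # An analog register can only ever be observed through noise.
    return true_value + random.gauss(0, NOISE)

def read_digital(true_value):
    # A digital register stores finitely many bits, then sees the same noise.
    stored = round(true_value / STEP) * STEP
    return stored + random.gauss(0, NOISE)

def mean(xs):
    return sum(xs) / len(xs)

x = 0.123456789  # the real number the analog machine supposedly holds
analog = [read_analog(x) for _ in range(100_000)]
digital = [read_digital(x) for _ in range(100_000)]
print(mean(analog), mean(digital))  # differ by far less than the noise floor
```

Quantization discards at most STEP/2 of precision, which is already buried under the noise, so no experiment on the outputs can tell the two machines apart.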

There aren't many formal proofs that digital and analog are equivalent, since any such proof would depend on exactly how you model an analog machine. Here is one example.

Quantum computers are in fact (believed to be) more powerful than classical digital TMs in terms of efficiency, but the reasons for why they are more powerful are not easy to explain without a fair bit of math. This causes techbros to get some interesting ideas on what they think quantum computers are capable of. I've seen enough nonsense about quantum machine learning for a lifetime. Also, there is the issue of when practical quantum computers will be built.
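
For the smallest honest example of the kind of advantage meant here, a statevector sketch of Deutsch's algorithm in plain numpy (no quantum library; a toy, not a practical machine): deciding whether f: {0,1} → {0,1} is constant or balanced takes one query to f, where any classical algorithm needs two.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def deutsch(f):
    # Basis order |x, y> with index 2*x + y; start in |0, 1>.
    state = np.zeros(4)
    state[1] = 1.0
    state = np.kron(H, H) @ state         # superpose both registers
    U_f = np.zeros((4, 4))                # oracle |x, y> -> |x, y XOR f(x)>
    for x in (0, 1):
        for y in (0, 1):
            U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    state = U_f @ state                   # the single query
    state = np.kron(H, I) @ state         # interfere on the input qubit
    p_x1 = state[2] ** 2 + state[3] ** 2  # probability of measuring x = 1
    return "balanced" if p_x1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
print(deutsch(lambda x: 1 - x))  # balanced
```

The whole trick is interference after a single oracle call; nothing about it generalizes to "quantum makes your machine learning magic", which is rather the point.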

froztbyte@awful.systems 5 points 1 month ago

hot on the heels of months of “agentic! it can do things for you!” llm hype, they have to make special APIs for the chatbots, I guess because otherwise they make too many whoopsies?

BlueMonday1984@awful.systems 5 points 1 month ago

Quick PSA for anyone who's still on LinkedIn: the site's stealing your data to train the slop machines

V0ldek@awful.systems 5 points 1 month ago
BigMuffN69@awful.systems 4 points 1 month ago

Nice result, not too shocking after the IMO performance. A friend of mine told me that this particular competition is highly time-constrained for human competitors, i.e., the questions aren’t impossibly difficult per se, but some are time sinks that you simply avoid to get points elsewhere. (5 hours on 12 Qs is tight…)

So when you are competing against a data center using a nuclear reactor vs 3 humans running on broccoli, the claims of superhuman performance definitely require an * attached to them.
