[-] blakestacey@awful.systems 6 points 3 days ago

Did you ever read Mad Mazes by Robert Abbott? That was a book of 20 mazes that were practically lessons in graph theory. I remember one involved navigating a public transit map where you could make free transfers of the same type (bus to bus or train to train) or to the same color (e.g., a red bus line to a red train line). Another involved using a die to mark your position on a grid; you could only move to a square if tilting the die over in that direction brought the number printed on the square to the top of the die.
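That die puzzle is a nice little state-space search in disguise. Here's a minimal sketch of the mechanic (the representation and function names are my own reconstruction, not Abbott's exact puzzle): the die's orientation is a (top, north, east) triple, opposite faces sum to 7, and tilting it over an edge permutes the faces.

```python
def tilt(die, direction):
    """Roll the die one square; return its new (top, north, east) faces."""
    top, north, east = die
    bottom, south, west = 7 - top, 7 - north, 7 - east
    if direction == "N":   # tilting north brings the south face up
        return (south, top, east)
    if direction == "S":
        return (north, bottom, east)
    if direction == "E":   # tilting east brings the west face up
        return (west, north, top)
    if direction == "W":
        return (east, north, bottom)
    raise ValueError(direction)

def legal_move(die, direction, square_number):
    """A move is allowed only if tilting puts the square's number on top."""
    return tilt(die, direction)[0] == square_number
```

From there, solving the maze is just breadth-first search over (position, orientation) states — which is presumably why the book felt like a graph theory lesson.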

[-] blakestacey@awful.systems 7 points 3 days ago

I also grew up with some sumptuously illustrated science books, like Roy A. Gallant's National Geographic Picture Atlas of Our Universe, and the Eyewitness Science series. And everything I could find by David Macaulay.

[-] blakestacey@awful.systems 8 points 3 days ago

Do the Zizians fit in the "rationalist/EA/risk community"? Gosh and golly gee.

Yuddites and Zizians are a better example of the "narcissism of small differences" than any of the ones that Siskind propped up.

[-] blakestacey@awful.systems 5 points 4 days ago

yeah, DeepSeek LLMs are probably still an environmental disaster for the same reason most supposedly more efficient blockchains are — perverse financial incentives across the entire industry.

  1. the waste generation will expand to fill the available data centers

  2. oops all data centers are full, we need to build more data centers

[-] blakestacey@awful.systems 11 points 4 days ago

This is much more a TechTakes story than a NotAwfulTech one; let's keep the discussion over on the other thread:

https://awful.systems/post/3400636

[-] blakestacey@awful.systems 7 points 4 days ago

Perhaps the most successful "sequel to chess" is actually the genre of chess problems, i.e., the puzzles about how Black can achieve mate in 3 (or whatever) from a contrived starting position that couldn't be seen in ordinary ("real") gameplay.

There are also various ways of randomizing the starting positions in order to make the memorized knowledge of opening strategies irrelevant.
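The best-known of those is Fischer Random / Chess960. A rough sketch of one way to generate a legal starting back rank (rejection sampling; the real numbering scheme enumerates all 960 positions directly, which this doesn't attempt): shuffle the pieces until the bishops land on opposite-colored squares and the king sits between the rooks.

```python
import random

def chess960_back_rank():
    """Return a random Chess960-legal back rank as a string like 'RNBQKBNR'."""
    while True:
        rank = list("RNBQKBNR")
        random.shuffle(rank)
        bishops = [i for i, p in enumerate(rank) if p == "B"]
        rooks = [i for i, p in enumerate(rank) if p == "R"]
        king = rank.index("K")
        # Bishops on opposite colors: one even index, one odd.
        # King between the rooks (for castling to stay definable).
        if (bishops[0] + bishops[1]) % 2 == 1 and rooks[0] < king < rooks[1]:
            return "".join(rank)
```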

Oh, and Bughouse.

[-] blakestacey@awful.systems 21 points 6 days ago

Pouring one out for the local-news reporters who have to figure out what the fuck "timeless decision theory" could possibly mean.

[-] blakestacey@awful.systems 15 points 6 days ago* (last edited 6 days ago)

The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.

And people believe this ... why? I mean, shouldn't the default assumption about anything anyone in AI says be that it's a lie?


[-] blakestacey@awful.systems 45 points 1 month ago

Wojciakowski took the critiques on board. “Wow, tough crowd … I’ve learned today that you are sensitive to ensuring human readability.”

Christ, what an asshole.

[-] blakestacey@awful.systems 43 points 2 months ago* (last edited 2 months ago)

Those are the actors who played Duncan Idaho in the David Lynch adaptation and in the two Syfy miniseries. So, yeah, it's not wrong, just incomplete — though I have no idea why it only serves up those three. There's certainly no limitation to three images, as can be verified by searching for "Sherlock Holmes actor" or the like.

[-] blakestacey@awful.systems 74 points 3 months ago

"When I have a disagreement with a girl, I hit my balls with a hammer. There is absolutely nothing she can do; it's a brutal mog."

[-] blakestacey@awful.systems 46 points 8 months ago

To date, the largest working nuclear reactor constructed entirely of cheese is the 160 MWe Unit 1 reactor of the French nuclear plant École nationale de technologie supérieure (ENTS).

"That's it! Gromit, we'll make the reactor out of cheese!"

17

a lesswrong: 47-minute read extolling the ambition and insights of Christopher Langan's "CTMU"

a science blogger back in the day: not so impressed

[I]t’s sort of like saying “I’m going to fix the sink in my bathroom by replacing the leaky washer with the color blue”, or “I’m going to fly to the moon by correctly spelling my left leg.”

Langan, incidentally, is a 9/11 truther, a believer in the "white genocide" conspiracy theory and much more besides.

24

Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid!

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota here, and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.

59
submitted 10 months ago* (last edited 10 months ago) by blakestacey@awful.systems to c/techtakes@awful.systems

If you've been around, you may know Elsevier for surveillance publishing. Old hands will recall their running arms fairs. To this storied history we can add "automated bullshit pipeline".

In Surfaces and Interfaces, online 17 February 2024:

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2].

In Radiology Case Reports, online 8 March 2024:

In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about managing hepatic artery, portal vein, and bile duct injuries, but for specific cases, it is essential to consult with a medical professional who has access to the patient's medical records and can provide personalized advice.

Edit to add this erratum:

The authors apologize for including the AI language model statement on page 4 of the above-named article, below Table 3, and for failing to include the Declaration of Generative AI and AI-assisted Technologies in Scientific Writing, as required by the journal’s policies and recommended by reviewers during revision.

Edit again to add this article in Urban Climate:

The World Health Organization (WHO) defines HW as “Sustained periods of uncharacteristically high temperatures that increase morbidity and mortality”. Certainly, here are a few examples of evidence supporting the WHO definition of heatwaves as periods of uncharacteristically high temperatures that increase morbidity and mortality

And this one in Energy:

Certainly, here are some potential areas for future research that could be explored.

Can't forget this one in TrAC Trends in Analytical Chemistry:

Certainly, here are some key research gaps in the current field of MNPs research

Or this one in Trends in Food Science & Technology:

Certainly, here are some areas for future research regarding eggplant peel anthocyanins,

And we mustn't ignore this item in Waste Management Bulletin:

When all the information is combined, this report will assist us in making more informed decisions for a more sustainable and brighter future. Certainly, here are some matters of potential concern to consider.

The authors of this article in Journal of Energy Storage seem to have used GlurgeBot as a replacement for basic formatting:

Certainly, here's the text without bullet points:
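For anyone who wants to go fishing in their own field's literature, a minimal sketch of the search that surfaces papers like these — scan full text for telltale chatbot boilerplate. The phrase list is my own guess at useful markers, not whatever query the people who found these actually used:

```python
import re

CHATBOT_TELLS = [
    r"certainly, here (is|are)",
    r"as an ai language model",
    r"i don't have access to real-time information",
]

def flag_glurge(text):
    """Return every boilerplate pattern that matches the given text."""
    return [p for p in CHATBOT_TELLS if re.search(p, text, re.IGNORECASE)]
```

A plain phrase search on a publisher's own site turns up plenty, which says something about how "peer review" went for these.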

18

In which a man disappearing up his own asshole somehow fails to be interesting.

22
submitted 1 year ago* (last edited 1 year ago) by blakestacey@awful.systems to c/techtakes@awful.systems

So, there I was, trying to remember the title of a book I had read bits of, and I thought to check a Wikipedia article that might have referred to it. And there, in "External links", was ... "Wikiversity hosts a discussion with the Bard chatbot on Quantum mechanics".

How much carbon did you have to burn, and how many Kenyan workers did you have to call the N-word, in order to get a garbled and confused "history" of science? (There's a lot wrong and even self-contradictory with what the stochastic parrot says, which isn't worth unweaving in detail; perhaps the worst part is that its statement of the uncertainty principle is a blurry JPEG of the average over all verbal statements of the uncertainty principle, most of which are wrong.) So, a mediocre but mostly unremarkable page gets supplemented with a "resource" that is actively harmful. Hooray.

Meanwhile, over in this discussion thread, we've been taking a look at the Wikipedia article Super-recursive algorithm. It's rambling and unclear, throwing together all sorts of things that somebody somewhere called an exotic kind of computation, while seemingly not grasping the basics of the ordinary theory the new thing is supposedly moving beyond.

So: What's the worst/weirdest Wikipedia article in your field of specialization?

93

The day just isn't complete without a tiresome retread of freeze peach rhetorical tropes. Oh, it's "important to engage with and understand" white supremacy. That's why we need to boost the voices of white supremacists! And give them money!

28

With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes).

6
submitted 1 year ago* (last edited 1 year ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"

6

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script; so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

24

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

2

Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."

