[-] blakestacey@awful.systems 20 points 1 month ago

The lead-in to that is even "better":

This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

"The reason for optimism is that we can cozy up to fascists!"

[-] blakestacey@awful.systems 20 points 1 month ago

An interesting thing came through the arXiv-o-tube this evening: "The Illusion-Illusion: Vision Language Models See Illusions Where There are None".

Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.
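
The probing setup is simple enough to sketch. Here's roughly what one trial looks like, as a minimal sketch assuming the OpenAI Python SDK; the model choice, stimulus image, and prompt are all placeholders of mine, not the paper's actual materials:

```python
# Sketch of one "illusion-illusion" probe, assuming the OpenAI Python SDK.
# The image file, model, and question are hypothetical placeholders.
import base64
from openai import OpenAI

client = OpenAI()

# Hypothetical stimulus: two circles that really are different sizes,
# a neighbor of the Ebbinghaus illusion that should NOT fool anyone.
with open("circles_actually_different.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Are the two orange circles the same size, or different sizes?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)

# A model that pattern-matches on the famous illusion will answer "they're
# the same size, it's an illusion" even though these circles genuinely differ.
print(response.choices[0].message.content)
```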

[-] blakestacey@awful.systems 19 points 2 months ago* (last edited 2 months ago)

From the linked Andrew Molitor item:

Why Extropic insists on talking about thermodynamics at all is a mystery, especially since “thermodynamic computing” is an established term that means something quite different from what Extropic is trying to do. This is one of several red flags.

I have a feeling this is related to wanking about physics in the e/acc holy gospels. They invoke thermodynamics the way that people trying to sell you healing crystals for your chakras invoke quantum mechanics.

[-] blakestacey@awful.systems 20 points 3 months ago

Silicon Valley is proud to announce the man who taught his asshole to talk, based on the hit William S. Burroughs story, "Don't be the man who taught his asshole to talk."

[-] blakestacey@awful.systems 20 points 4 months ago* (last edited 4 months ago)

From the documentation:

While reasoning tokens are not visible via the API, they still occupy space in the model's context window and are billed as output tokens.

Huh.
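
You can watch the invisible tokens hit the meter, too. A minimal sketch, assuming the OpenAI Python SDK and an o1-class model (the prompt is a placeholder):

```python
# Sketch of inspecting hidden reasoning-token billing via the usage field.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
)

usage = response.usage
# completion_tokens includes the hidden reasoning tokens: you are billed for
# them as output even though they never appear in the response text.
print("completion tokens billed:", usage.completion_tokens)
print("of which hidden reasoning:", usage.completion_tokens_details.reasoning_tokens)
```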

[-] blakestacey@awful.systems 20 points 6 months ago

The list of diatribes about forum drama that are interesting and edifying for the outsider is not long, and this one is not on it.

[-] blakestacey@awful.systems 20 points 6 months ago

I regret to inform you that Trace is hate-reading awful.systems too & has posted this comment on their Twitter.

Their writing is so boring I can't even summon up the enthusiasm to make a "senpai has noticed us" joke.

[-] blakestacey@awful.systems 20 points 7 months ago

Yeah, that juxtaposition makes no sense to me. How does the machine that remixes existing text and makes it worse become anything that can "recursively self-improve"? Show your work.

[-] blakestacey@awful.systems 20 points 7 months ago

Quoth Yud:

There is a way of seeing the world where you look at a blade of grass and see "a solar-powered self-replicating factory". I've never figured out how to explain how hard a superintelligence can hit us, to someone who does not see from that angle. It's not just the one fact.

It's almost as if basing an entire worldview upon a literal reading of metaphors in grade-school science books and whatever Carl Sagan said just after "these edibles ain't shit" is, I dunno, bad?

[-] blakestacey@awful.systems 20 points 1 year ago

Shot, in the post:

Gina and I eventually decided that the data collection process was too time-consuming, and we stopped partway through.

Chaser, from the comments:

Josh You and I wrote a Python script that searches Google for a list of keywords, saves the text of the web pages in the search results, and shows them to GPT and asks it questions about them from a prompt. This would quickly automate the rest of your data collection.
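
(For the record, the pipeline being described really is only a dozen lines or so. A minimal sketch, assuming the googlesearch-python, requests, beautifulsoup4, and openai packages; the keywords, question, and model are placeholders:

```python
# Sketch of the search -> scrape -> ask-GPT pipeline described above.
import requests
from bs4 import BeautifulSoup
from googlesearch import search
from openai import OpenAI

client = OpenAI()
keywords = ["example keyword"]           # placeholder keyword list
question = "What does this page claim?"  # placeholder question prompt

for keyword in keywords:
    # Search Google and walk the result URLs.
    for url in search(keyword, num_results=5):
        # Fetch each page and strip it down to visible text.
        page = requests.get(url, timeout=10)
        text = BeautifulSoup(page.text, "html.parser").get_text(" ", strip=True)

        # Show the page text to GPT and ask the question about it.
        answer = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"{question}\n\n{text[:8000]}"}],
        )
        print(url, "->", answer.choices[0].message.content)
```

Whether the resulting "data" is worth anything is, of course, the other half of the joke.)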

[-] blakestacey@awful.systems 20 points 1 year ago

The case for the importance of IQ for numerous real-world outcomes was made in the controversial book The Bell Curve (1994) by psychologist Richard Herrnstein and political scientist Charles Murray. They cogently argued

No, they didn't.

[-] blakestacey@awful.systems 20 points 1 year ago

I just can't get over the "struggling with a flour sifter" bit. Like ... what's there to struggle with? What accessory would help a person locked in combat with a flour sifter? Another flour sifter, to intimidate the first with the knowledge that it can be replaced?
