Answer on a postcard

[-] YouKnowWhoTheFuckIAM@awful.systems 8 points 8 months ago* (last edited 8 months ago)

due to all the 20th century conflicts

I assume he’s referring to the same question I was asking: did you just extrapolate this from the phrase “fucked up, disastrous mess” (referring to the sheer number of different systems in Europe)? Because I think the big long reply above seriously undersells the fact that “20th century conflicts” aren’t even mentioned or gestured at in the video. There’s a map showing…different countries…but while 20th century conflicts changed various borders in Europe, they aren’t the origin of the borders between countries in Europe, or the origin of different European countries developing their own independent rail systems without any centralised plan - because they’re different countries, and the various bodies which today unify much of the continent only began to come into existence after the Second World War.

If we were talking about Former Yugoslavia, you’d actually be right! The integrated rail infrastructure of that region was completely devastated by the 1990s. But that’s not the focus here.

[-] YouKnowWhoTheFuckIAM@awful.systems 8 points 8 months ago* (last edited 8 months ago)

Alright.

Well I could be, and I really really want to be, incredibly sarcastic and dismissive, because I genuinely believe that you’ve missed the mark incredibly hard, and your eminently reasonable and good request that people not medicalise assholery in general would, in this case, imply not mentioning the fact that people abuse prescription drugs and act like assholes. Alcoholics act like assholes, so do cokeheads, and so do people who abuse prescription medications which are, at the appropriate dosage, a perfectly good and fine support and indeed lifeline for managing whatever condition they may have. And this is just the truth: one of the central reasons that you have alternatives to Adderall, such as the drug which you personally are prescribed, is that there are risks associated with Adderall even for patients with nothing but good intentions.

But I also know it’s bad and counter-productive for me to both try to explain that I think I’m actually being quite reasonable and be sarcastic and dismissive like that.

So instead, I’d like to ask you, first, for a little charity. I’m going to copy paste my original comment below, and point out that it does not say that Adderall is what “makes the eas racist, cultish, or even overly verbose debating club dropouts” (your words, my emphasis on “makes”). Then I’m going to point out what I think it does say:

they spend fucking hundreds of collective hours going around in circles on the EA forum debating[1] this shit, instead of actually doing anything useful

how do they, in good conscience, deny any responsibility for the real harms ideas cause, when they continue to lend them legitimacy by entertaining them over and over and over again?

Adderall

So Jax isn’t here saying “what makes them racist, cultish, or even overly verbose debating club dropouts?” What she’s asking is how are they able to go around in circles amongst themselves talking about this shit, without acknowledging that the ideas they entertain have real world consequences. The joke I’m making focuses narrowly on this point: they’re able to waste all of this time (given that they’re already eas) going round in circles, and denying that words have effects, because they (very very famously!) have a cultural problem with prescription drug abuse. The joke categorically does not attribute their racism or cultishness to Adderall.

Now, the joke does attribute their combined dissociation from real world consequences and their verbosity - specifically, their energy for verbosity - to abuse of Adderall. That’s a stretch, but it’s in the nature of a one-word joke to generalise a little! I need my reader to have a modicum of charity here, in imagining that I am aware that there are other things going on with these people. You, in fact, should be more than aware of this, because I replied to you in another context just the other day with three quite long paragraphs giving an analysis of Yudkowsky and scientific racism in LessWrong which didn’t once mention prescription drugs of any kind.

And the joke is a little inter-textual: the word “abuse” does not appear next to “Adderall”. Again, I need a little charity from my reader to make the joke work, but I think it’s actually a really reasonable amount of charity. I think, personally, that on SneerClub at least, where I am a frequent commenter, people are generally aware that the abuse of prescription ADHD medication (and other, similar, drugs) is a famous problem amongst rationalists/EAs. At least on SneerClub, I think, people can be trusted to know the difference between attributing behaviours to Adderall outright, and attributing behaviours to its abuse. In this context, I think we can in this case safely skirt discourses of medicalisation that I wholeheartedly agree exist in lots of places.

So this is where I think you’re just wrong: I think that you’re misusing the warning label we rightly put on discourses of medicalisation. And I think misusing those warning labels is generally not a good thing. I think that you do a disservice to me personally, and I think you do a disservice to people’s collective ability to communicate and socialise with one another if you call them out on bare associations between the names of drugs, bad behaviours, and discourses of medicalisation.

I think it misses the wood for the trees to put the emphasis on a libertarian bent. The British and American class systems are perfectly capable of enforcing the same rules for prestige and polite discussion in order to favour some preferred hegemonic power without endorsing libertarian values. Indeed libertarianism as a movement most certainly adopts those rules because - for all that it may derive political support from (primarily white) guys of all sorts of backgrounds - it’s a fundamentally aristocratic proposition, right down to almost absurdist details such as its propensity to distribute land amongst an elite who employ lesser beings to work it.

In the case of rationalism, the emphasis should instead be on control: Yudkowsky built his system to control what was and wasn’t acceptable thinking, ostensibly for the benefit of the thinker. Its departures from actually very good patterns of thinking are what take it into cult territory, as the rigidity of the rules meets the hard wall of reality, and forces adherents to choose between reality and fantasy.

And as I say below to David, sure, there were other trends in play (most especially - as I note above as well - the tendency for America’s moral arc to bend towards racism). But I’d push back on suggesting that IQ-fetishism is distinct from naive biologism. Rather, IQ-fetishism itself is an expression of naive biologism (as we can see tracing its antecedents back to Herbert Spencer), because you don’t get IQ-fetishism without the spectres of relativism and nurturism which, politically, it purports to counteract - “IQ” is a supposedly sound, stable, measurable, cognitive category, where the alternative is understood to be a tangled mess of occult entities which cannot be reduced to any structure in the brain (and IQ holds out the promise of being reducible to g, which is in its whole conception reducible to a structure in the brain).

In this way the speculative futurism is simply of a piece with the biologism: once you reduce everything to (this very peculiar and highly naive, already science-fictional, concept of) the physical, you can manipulate it to generate whatever future you want. By the same token, the eugenic and fascistic trend in science-fiction pursues the same conceptual route. But it is only with the right historical ingredients, and the right players to activate those ingredients - which is to say an unequal society and the tendency to have people who want to naturalise that inequality - that the mixture becomes potently racist, and Yudkowsky, so to speak, is the one building the pot to specification.

[-] YouKnowWhoTheFuckIAM@awful.systems 9 points 9 months ago* (last edited 9 months ago)

Scholars of fascism and nazism do it all the time! The target of quotes like that is supposed to be those who deliberately muddy the waters. The “call a nazi a nazi” principle is a blunt instrument, and there are other tools in the anti-nazi kit.

[some hours later…] ah, the quote is from AR Moxon, whom I happen to know is both (a) not remotely averse to going deeper on what makes the nazis, (b) distinctly averse to not going deeper

I SAID I WANTED HOT WHEELS FOR CHRISTMAS

[-] YouKnowWhoTheFuckIAM@awful.systems 8 points 11 months ago* (last edited 11 months ago)

I don’t see how that works here. Humans don’t become impregnably narcissistic through bad management; rather, insofar as management is the problem, and as the scenario portrays it, humans become incredibly good at managing information into increasingly tight self-serving loops. What the machine in this scenario would have to be able to do would not be “get super duper organised”. Rather it would have to be able to thoughtfully balance its own evolving systems against the input of other, perhaps significantly less powerful or efficient, systems in order to maintain a steady, manageable input of new information.

In other words, the machine would have to be able to slow down and become well-rounded. Or at least well-rounded in the somewhat perverse way that, for example, an eminent and uncorrupted historian is “well-rounded”.

In still other words it would have to be human, in the sense that humans are already “open” information-processing creatures (rather than closed biological machines) who create processes for building systems out of that information. But the very problem faced by the machine’s designer is that humans like that don’t actually exist - no historian is actually that historian - and the human system-building processes that the machine’s designer will have to ape are fundamentally flawed, and flawed in the sense that there is, physically, no such unflawed process. You can only approach that historian by a constant careful balancing act, at best, and that as a matter just of sheer physical reality.

So the fanatics have to settle for a machine with a hard limit on what it can do and all they can do is speculate on how permissive that limit is. Quite likely, the machine has to do what the rest of us do: pick around in the available material to try to figure out what does and doesn’t work in context. Perhaps it can do so very fast, but so long as it isn’t to fold in on itself entirely it will have to slow down to a point at which it can co-operate effectively (this is how smart humans operate). At least, it will have to do all of this if it is to not be an impregnable narcissist.

That leaves a lot of wiggle room, but it dispenses with the most abject “to the moon” nonsense spouted by the anti-social man-children who come up with this shit.

It’s really gotta be emphasised that these guys didn’t come out of internet atheism and frankly I would really like to know where that idea came from. It’s a completely different thing which, arguably, predates internet atheism (if we read “internet atheism” as beginning in the early 2000s - but we could obviously push back that date much earlier). These guys are more or less out of Silicon Valley. Emile P Torres has coined the term “TESCREAL” (modified to “TREACLES”) for - and I had to google this even though I know all the names independently - “Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism”.

It’s a confluence of futurism cults which primarily emerged online (even on the early internet), but also in airport books by e.g. Ray Kurzweil in the 90s, and has gradually made its way into the wider culture, with EA and longtermism the now most successful outgrowths of its spores in the academy.

Whereas internet atheism kind of bottoms out in 1990s polemics against religion - nominally Christianity, but ultimately fuelled by the end of the Cold War and the West’s hunger for a new enemy (hey look over there, it’s some brown people with a weird religion) - the TREACLES “cluster of ideologies” (I prefer “genealogy”, because this is ultimately about a political genealogy) has deep roots in the weirdest end of libertarian economics/philosophy and rabid anti-communism, and therefore in the Cold War (and even pre-Cold War) need for a capitalist political religion. OK the last part is my opinion, but (a) I think it stands up, and (b) it explains the clearly deeply felt need for a techno-religion which justifies the most insane shit as long as there’s money in it.

[-] YouKnowWhoTheFuckIAM@awful.systems 9 points 11 months ago* (last edited 11 months ago)

This “Gettier” attack seems to me to have no more interesting content than a “stopped clock”. To use an extremely similar, extremely common phrase, the New York Times would have been “right for the wrong reasons” to call Scott Alexander a racist. And this would be conceptually identical to pointing out that, I dunno, crazed conspiracy theorists suggested before he was caught that Jeffrey Epstein was part of an extensive paedophile network.

But we see this happen all the time, in fact it’s such a key building block of our daily experience that we have at least two cliches devoted to capturing it.

Perhaps it would be interesting if we were to pick out authentic Gettier cases which are also accusations of some kind, but it seems likely that in any case (i.e. all cases) where an accusation is levelled with complex evidence, the character of justification fails to be the very kind which would generate a Gettier case. Gettier cases cease to function like Gettier cases when there is a swathe of evidence to be assessed, because already our sense of justification is partial and difficult to target with the precision characteristic of unexpected failure - such cases turn out to be just “stopped clocks”. The sense of counter-intuitivity here seems mostly to be generated by the convoluted grammar of your summarising assessment, but this is just an example of bare recursivity, since you’re applying the language of the post to the post itself.

The only thing any of us can do is choose how we are going to get dumber every day

My impression is that, as a group, on average, rationalists tend to both feel and repress more intense feelings of shame and guilt than the rest of society can be bothered dealing with, and I say that as somebody who has spent nearly two years doing addiction recovery

Did OP consider the work going on at literally every single tech college’s VC groups in ~~optoelectronic~~ neural networks built on optical components to improve minimisation and how ~~that’s going to impact the decoupling of AI training and operation from Moore’s Law~~ that’s one hope for making processing power gains so that the banner headlines about “Moore’s Law” are pushed back a little further? I’m guessing no.

You have the insider clout of a 15 year old with a search engine

