[-] YouKnowWhoTheFuckIAM@awful.systems 6 points 2 years ago* (last edited 2 years ago)

As far as I know Siskind has never deleted any idea because the contrarianism motivating it got out of hand. It’s just against his character. Much more likely he considered it to reveal his power level (even if he recognised that he had never really endorsed the idea beyond contrarianism in the first place).

Less charitably, and more plausibly, at the outside he recognised that it’s a stupid fucking thing to say, one that makes him look just not smart.

I would guess that their personal reach over the name is pretty limited by a number of factors, including that the town has quite a significant claim of its own. “Oxford Brookes” university, for example, is not part of Oxford the Ancient University, but it certainly helps their brand to be next door (and as far as I know it’s a perfectly fine institution as these things go).

The issue with the Future of Humanity Institute would be almost the other way around: that as long as it’s in-house, the university can hardly dissociate themselves from it.

My dear boy…what the fuck are you talking about

Yeah man but it’s sold for thousands of years, and the last hundred? Oh you’d better believe we know it sells

It doesn’t do a bad job of cashing out a fairly strong corollary of utilitarianism, one generally taken to be characteristic of any utilitarian theory worth its salt: since each of us is only one person, and the utilitarian calculus calls for us to maximise happiness (or similar), then insofar as each of us bears moral weight equal to only one (presumably equal-sized) fraction of that whole, our obligations to others (insofar as the happiness of others obliges us) swamp our own personal preferences. Furthermore, insofar as suffering is very bad (without our even being negative utilitarians), the alleviation of suffering is a particularly powerful such obligation once our responsibilities to each individual sufferer are counted up.

This is generally taken to be sufficiently characteristic of utilitarianism that objections against utilitarianism frequently cite this “demandingness” as an implausible consequence of any moral theory worth having.

So in isolation it makes some sense as shorthand for a profound consequence of utilitarianism as a theory, one which utilitarians themselves frequently stand up as a major advantage of their position, even as opponents of utilitarianism stand it up as being “too good” and not a practical theory of action.

In reality it’s a poor description of utilitarian beliefs, as you say, because the theory is not the person, and utilitarians are, on average, slightly more petty and dishonest than the average person who just gives away something to Oxfam here and there.

Yeah, I kind of used you to grandstand about a broader point that I hoped other people who had the “yuck” reaction would see, and I still haven’t figured out how to tag people (i.e. the person above) on this janky site

[-] YouKnowWhoTheFuckIAM@awful.systems 6 points 2 years ago* (last edited 2 years ago)

Edit: I should here add that “utility” as Hume understands it is not yet the full-fledged utility of “utilitarianism” or “utilons”, which innovation is due to Bentham (only a few decades later). For Hume, “utility” is just what you’d expect from normal language, i.e. “use”, or “usefulness”. The utility of things, including principles, is in their being good or bad for us, i.e. not formally in the sense of a hedonic calculus or the satisfaction of preferences (we don’t “count up” either of these things to get an account of Humean utility).

Hume isn’t an anti-realist! The notorious “is-ought” passage in the Treatise, which people often take for an expression of anti-realism, only goes so far as to point out what it says: that evaluative conclusions cannot logically follow from factual premises alone, so that to conclude “eating grapes is good” we also need some evaluative premise such as “grapes are good” alongside “grapes are red” and “grapes are edible”, or whatever.

Contemporary accounts of Hume are muddled by his long and undeserved reputation as a thoroughgoing radical sceptic, but his philosophy has two sides: the destructive and the reconstructive, where the latter is perfectly comfortable with drawing all sorts of conclusions so long as they are limited by an awareness of the limits of our powers of judgement.

For morality, Hume finds its source in our “sentiments”, but indeed not totally unlike our friend over there, he does not think that this is cause to think our sentiments don’t have force. Again not unlike our friend, he thinks sentiments may be compared for their “utility”. However, his arguments (a) unlike those of our friend, do not attempt to bridge the essentially logical gap he has merely pointed out, (b) unlike the anti-realist, take reflective judgements about utility to have force, alongside the force of those sentiments we reflect on, of an essentially real character.

Insofar as there is a resemblance, the important distinction between what Hume is doing and what our guy is doing is that Hume doesn’t try to find any master-category (implicitly, “the species” above, although e/accs place this underneath another category, “consciousness”) which would ground fact judgements in science to give them force. Rather, he (a) basically asks us: what else do you plan on doing, if you don’t intend to prefer good things over bad? and (b) identifies the particular sources of goodness and badness in real life, and then evaluates them. By contrast, the e/acc view attempts to argue that whatever our cultural judgements are, they are good, insofar as they have been refined evolutionarily/memetically. Hume thinks culture frequently gets these things wrong and frequently gets them right; that culture is a flux, not a progressive development; and he locates the essential truth by looking at individuals, not at group-level “selection” over a set of competing propositions.

Hume isn’t tied to the inherent conservatism of a pseudo-Bayesian model. Curiously enough he is a political conservative, which is arguably what makes it possible for him to (lightly) rest his semi-realist account on what he takes to be a relatively stable human sentimental substrate. But this only gives him further cause to take a genial view of the stakes of what we now call “realism vs anti-realism”: it isn’t as important as trying to be nice.

Radioactive Wolf Twinks? My God, what have we done…

[-] YouKnowWhoTheFuckIAM@awful.systems 6 points 2 years ago* (last edited 2 years ago)

or confusing GWAS’ current inability to detect a gene with the gene not existing

This remarkable sleight of hand sticks out. The argument from the (or rather this particular) GWAS camp goes “we are detecting the genes, contrary to expectations”. There isn’t any positive presumption in favour of that camp, so the failure, thus far, to detect the gene is supposed to count against its existence.

I like the implication that if LLMs are, as we all know to be true, near perfect models of human cognition, then human behaviour of all kinds turns out to be irreducibly social, even behaviour that appears to be “fixed” from an early stage

While I agree with you about the economics, I’m trying to point out that physical reality also has constraints other than economic, many of them unknown, some of them discovered in the process of development.

Birds’ flight isn’t magic, or unknowable, or non-reproducible.

No. But it is unreproducible if you already have arms with shoulders, elbows, hands, and five stubby fingers. Human and bird bodies are sufficiently different that there are no close approximations for humans which will reproduce flight for humans as it is found in birds.

If it was, we’d have no sense of awe at learning about it, studying it. Imagine if human like behavior of intelligence was completely unknowable. How would we go about teaching things? Communicating at all? Sharing our experiences?

To me, this is a series of non-sequiturs. It’s obvious that you can have awe for something without having a genuine understanding of it, but that’s beside the point. Similarly, the kind of knowledge required for humans to communicate with one another isn’t relevant - what we want to know is the kind of knowledge which goes into the physical task of making artificial humans. And you ride roughshod over one of the most interesting aspects of the human experience: human communication and mutual understanding is possible across vast gulfs of the unknown, which is itself rather beautiful.

But again I can’t work out what makes that particularly relevant. I think there’s a clue here though:

…but I also take care not to put humanity, or intelligence in a broad sense, in some special magical untouchable place, either.

Right, but this would be a common (and mistaken) move that some people make, which I’m not making and have no desire to make. You’re replying here to people who affirm either an implicit or explicit dualism about human consciousness, and say that the answers to some questions are just out of reach forever. I’m not one of those people. I’m referring specifically to the words I used to make the point that I made: there exist real physical constraints, repeatedly approached and arrived at in the history of technology, which demonstrate that not every problem has an ideal solution (and I refer you back to my earlier point about aircraft to show how that cashes out in practice).

I’ve just dipped in and out of it all day - I can’t look away! It’s better than a car crash: you can slow down multiple times
