[-] Collectivist@awful.systems 1 points 8 months ago

Well, naive Bayesianism as practiced by the rationalists. Bayesianism itself can be reformed to get rid of most of its problems, though I've yet to see a good solution to the absent-minded driver problem.
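
For readers unfamiliar with the absent-minded driver problem, here is a minimal sketch of the standard formulation (the 0/4/1 payoffs are Piccione and Rubinstein's textbook numbers, not something from this thread):

```latex
% Planning stage: the driver will pass two indistinguishable
% intersections; exiting at the first pays 0, exiting at the
% second pays 4, continuing past both pays 1. He commits to
% continuing with probability p at each:
\[
  V(p) = 4p(1-p) + p^2 = 4p - 3p^2,
  \qquad V'(p) = 4 - 6p = 0 \;\Rightarrow\; p^* = \tfrac{2}{3}.
\]
% Action stage: waking up at an intersection, the naive Bayesian
% assigns credence \alpha = 1/(1+p) to being at the first one
% (reached with probability 1, versus p for the second) and
% re-optimizes his continue-probability q:
\[
  E(q) = \alpha\bigl[4q(1-q) + q^2\bigr]
       + (1-\alpha)\bigl[4(1-q) + q\bigr].
\]
% At p = 2/3 we get \alpha = 3/5 and E(q) = (8 + 6q - 9q^2)/5,
% maximized at q = 1/3, not 2/3: merely updating on "I am at an
% intersection" makes the driver want to abandon a plan he can
% verify is optimal.
```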

[-] Collectivist@awful.systems 1 points 1 year ago

I don't know; when I googled it, this 80,000 Hours article was one of the first results. It seems reasonable at first glance, but I haven't looked into it.

[-] Collectivist@awful.systems 1 points 1 year ago* (last edited 1 year ago)

The way this is categorized, that 18.2% also covers things like climate change and pandemics.

[-] Collectivist@awful.systems 2 points 1 year ago

the data presented on that page is incredibly noisy

Yes, that's why I said it's "less comprehensive", and why I first gave the better 2019 source, which also points in the same direction. If there is a better source, or really any source, for the majority claim, I would be interested in seeing it.

Speaking of which,

AI charities (which is not equivalent to simulated humans, because the category also includes climate change, near-term AI problems, pandemics, etc.)

AI is to climate change as indoor smoking is to fire safety. "Near-term AI problems" is an incredibly vague and broad category, and I would need someone to explain to me why they believe AI has anything to do with pandemics; any answer I can think of would reflect poorly on whoever holds that belief.

You misread; it's 18.2% for long-term and AI charities [emphasis added]

[-] Collectivist@awful.systems 2 points 1 year ago

The linked stats are already way out of date

Do you have a source for this 'majority' claim? I tried searching for more up-to-date data, but this less comprehensive 2020 data is even more skewed towards global development (62%) and animal welfare (27.3%), with 18.2% for long-term and AI charities (which is not equivalent to simulated humans, because the category also includes climate change, near-term AI problems, pandemics, etc.). The utility of existential risk reduction is basically always based on population growth/future generations (i.e. humans), not simulations. 'Digital person' has only 25 posts on the EA forum (by comparison, global health and development has 2,097 posts). It seems unlikely to me that this is a majority belief.

[-] Collectivist@awful.systems 2 points 1 year ago

I'm not that good at sneering. 'EA is when you make Fordlândia'? Idk; you found the original post and you're much better at it, so it's better if you do it.

[-] Collectivist@awful.systems 2 points 1 year ago

When he posted the finished video on YouTube yesterday, there were some quite critical comments on YouTube, the EA forum, and even LessWrong. Unfortunately they got few to no upvotes, while the video itself got enough karma to remain on the front page of both forums.

[-] Collectivist@awful.systems 3 points 2 years ago

Since refusing a bet is seen as an admission of dishonesty, proposing one is also a way to disadvantage an interlocutor with less money:

The marginal value of money decreases as you get more of it. A hundred dollars might be a vitally important amount of money for a poor person and not even noticeable for a rich person, so if you bet against a person with less money, you are wagering less of your happiness than they are. If they have health problems (and live in a country with bad healthcare), this bet increases their risk of death in a way it doesn't for you. It seems to me that betting against someone who is poorer than you is morally dubious.
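
A toy calculation makes the asymmetry concrete. This is a sketch under an assumed logarithmic utility of wealth (the standard toy model of diminishing marginal value); the wealth figures are hypothetical:

```python
import math

def utility(wealth: float) -> float:
    # Assumed log utility: each extra dollar matters less the more you have.
    return math.log(wealth)

STAKE = 100  # both parties wager the same dollar amount

for wealth in (1_000, 1_000_000):  # hypothetical poor vs. rich bettor
    utility_cost = utility(wealth) - utility(wealth - STAKE)
    print(f"wealth ${wealth:>9,}: utility cost of losing ${STAKE} = {utility_cost:.5f}")
```

Losing the same $100 costs the $1,000 bettor about 0.105 utils versus about 0.0001 for the $1,000,000 bettor, a roughly thousandfold difference: equal dollar stakes are nowhere near equal stakes in well-being.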

[-] Collectivist@awful.systems 3 points 2 years ago* (last edited 2 years ago)

No mention of the second castle either. And then Jan Kulveit says in this comment section:

For me, unfortunately, the discourse surrounding Wytham Abbey, seems like a sign of epistemic decline of the community, or at least on the EA forum.

While lying through his teeth in his comments on the post about the second castle.
