... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.
EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning, so it's only fair you suffer too. Transcript follows:
Andrew Ng wrote:
In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.
Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.
Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?
EY replied:
I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.
Apparently genetically engineering ~300 IQ people (or breeding them, if you have time) is the consensus solution for subverting the acausal robot god, or at least the best that the vast combined intellects of siskind and yud have managed to come up with.
So, using your influence to gradually stretch the Overton window to include neo-Nazis and all manner of caliper-wielding lunatics, in the hope that eugenics and human experimentation become cool again, seems like a no-brainer, especially if you are at all times on enough uppers to kill a family of domesticated raccoons.
On a completely unrelated note, Adderall abuse can cause cardiovascular damage, including heart problems and stroke, as well as mental health conditions like psychosis, depression, anxiety, and more.