https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas
In June 2025, Zhao Tingyang gave a talk at Tsinghua's Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title "人工智能的伦理与思维之限" ("The Ethical and Thinking Limits of AI"). Near the end, Zhao said:
"What requires more reflection is that attempting to 'align' AI with human nature and values actually carries a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. AI does not originally possess the selfish genes of carbon-based life, so AI is actually closer to the legendary being whose 'nature is fundamentally good,' whereas human nature is not 'fundamentally good.'"

The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing that the target itself is the danger: an AI aligned to human values inherits the specific features of human judgment that, on his account, have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.
From late 2022 through 2025, Zhao's argument developed across CASS publications, The Paper, and Wenhua Zongheng, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, the author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.
I need to think on this a little more; it wasn't on my radar.

I asked someone from the mainland; she more or less agreed with you: