I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software became far more buggy and insecure.
Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I have friends asking me why I'm not using AI, while also telling me that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I've already watched this story play out in software engineering, so what makes this any different?
The last several years have been the monkey's paw moment for rationalists, where they keep getting what they want and realizing it's actually bad. As for why they keep getting what they want, just look at who's funding them.
(Also featuring a "Chinese curse" that isn't actually a phrase in Chinese. At least it's not "may you live in interesting times.")