Fun fact: I looked at that article, and my monitor exploded. No joke. I was in sudden darkness, and the mains were off. PC survived, thankfully, and I have a secondary monitor, but lol wtf. (I need to go to bed).
Wait, the dusting attack problem has not been fixed?
Every time they go 'this wasn't in the data', it turns out it was. A while back they did the same with translating rare-ish languages; turns out the model was trained on them. Fucked up. But also, wtf, how do they expect this to stay secret with no backlash? This world needs a better class of criminals.
XML was also a tech hype for a bit.
And I still remember how media outlets hyped up Second Life, forgot about it, and a few months later discovered it again and the hype started over. It was fun.
They were the first hype men of tech – didn’t actually do very much themselves but gave other people ideas.
This is a bit unfair, I think Nick Land also sold drugs. Not sure, however.
Another thing from the comments:
"Now non-technical people with ideas can start prototyping and raise money"
Didn't people used to say stuff like this about HTML? (Also, paper prototyping exists, and I would think that for almost all interesting ideas you actually need a technical person up front to know whether the thing is even technically possible/viable. (Wait, looking at the computational requirements of LLMs and the hype about that, I'm taking that last statement back ;) )).
People treating a random ass-pulled number, made up purely to illustrate a point, as if it were actually important is also very HN.
It would also be great if the article he talks about didn't start with "I no longer endorse all the statements in this document.[emphasis mine] I think many of the conclusions are still correct, but especially section 1 is weaker than it should be, and many reactionaries complain I am pigeonholing all of them as agreeing with Michael Anissimov, which they do not; this complaint seems reasonable. This document needs extensive revision to stay fair and correct, but such revision is currently lower priority than other major projects. Until then, I apologize for any inaccuracies or misrepresentations."
Thank god you are real, acausalrobotgod, or else we would have been forced to create you.
Lol, this is what counts as a footnote by Rationalist/EA standards.
Just once I would like to see an explanation from the AI doomers of how, given the limited capacities of Turing-style machines and P != NP (assuming it holds; if it doesn't, the limited-capacities argument falls apart, but then we don't need AGI for stuff to go to shit, as that probably breaks a lot of encryption methods), AGI can be an existential risk. By definition it cannot surpass the limits of Turing machines via any of the proposed hypercomputational methods (because then Turing machines would themselves be hyper-Turing and the whole classification structure crashes down).
I'm not a smart computer scientist myself (though I did learn some of the theory, as evidenced above), but I'm constantly amazed at how our hyper-hyped tech scene nowadays seems not to know that our computing paradigm has fundamental limits. Everything touched by Musk has this problem in the extreme: capacity problems in Starlink, Shannon-theoretically impossible compression demands for Neuralink, everything related to his Tesla/AI autonomous driving and robots thing. (To further make this an anti-Musk rant: he also claimed AI would solve chess. Solving chess is a computational problem (it has been done for a 7x7 board iirc) that just costs a lot of computation time, more than we have. If AI 'solved' chess it would be sidestepping that time, making it a super-Turing thing, which in turn makes Turing machines super-Turing (I also can't believe that of all the theoretical hypercomputing methods, we are going with the oracle method, where the machine just conjures up the right answer, no idea how, the one I have always mocked personally), which is theoretically impossible and would have massive implications for all of computer science. Sorry, rant over.)
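Quick back-of-envelope version of the "more time than we have" bit, with my own rough order-of-magnitude numbers (roughly 10^44 legal chess positions, an exascale machine doing one position per operation, which is absurdly generous), just to show the shape of the problem:

```python
# Rough sketch: brute-force "solving" chess vs. available compute.
# All numbers are order-of-magnitude estimates, not precise figures.

legal_positions = 5e44       # rough estimate of legal chess positions
ops_per_second = 1e18        # ~exascale supercomputer, one position per op (very generous)
seconds_per_year = 3.15e7

years_needed = legal_positions / (ops_per_second * seconds_per_year)
age_of_universe_years = 1.4e10

print(f"years to visit every position once: {years_needed:.1e}")
print(f"that's ~{years_needed / age_of_universe_years:.1e} times the age of the universe")
```

Which spits out something like 10^19 years, so "the AI will just figure it out" is doing a lot of heavy lifting there.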
Anyway, these people are not engineers or computer scientists; they are bad science fiction writers. Sorry for the slightly unrelated rant, it has been stuck like a splinter in my mind for a while now. And I guess that typing it out and 'telling it to earth' like this makes me feel less ranty about it.
E: of course the fundamental limits apply to both sides of the argument, so both the 'AGI will kill the world' shit and 'AGI will bring us a post-scarcity posthuman utopia of a googol humans' seem unlikely. Unprecedented benefits? No. (Also, I'm ignoring physical limits here, a secondary problem which would severely limit the singularity even if P=NP).
E2: looks at title of OP's post, looks at my post. Shit, the loons ARE at it again.
Scott "I didn't actually read the book I reviewed" Alexander.
Ah yes, tracingwoodgrains, the more left-wing TheMotte user. They place their windows in really strange places over there.
(For people not up to date on The Lore: he is one of the people behind TheSchism, an attempt to create TheMotte (sorry, not going to explain that one today) but with fewer right-wing assholes, the result)