the loons are at it again
(righttowarn.ai)
Hurling ordure at the TREACLES, especially those closely related to LessWrong.
AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)
This is sneer club, not debate club. Unless it's amusing debate.
[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]
Building a scifi apocalypse cult around LLMs seems like a missed opportunity when there are much more interesting computer science toys lying around. Like you pointed out, there's the remote possibility that P=NP, which is also largely unexplored in fiction. There is a fun little low-budget movie called The Traveling Salesman about this exact scenario, where several scientists are locked in a room deciding what to do with their discovery when the government tries to squeeze them for it. Very 12 Angry Men.
My fav example of the micro-genre is The Laundry Files book series by Charles Stross (who visits these parts!). In the first book, The Atrocity Archives, it turns out that any mathematical proof that P=NP is a closely guarded state secret; so much so that the British government has an entire MoD agency dedicated to rounding up and permanently employing people who discover The Truth. This is because drawing a graph that summons horrors from beyond space-time (brain-eating parasites, hungry ghosts, Cthulhu, a competent Tory politician, etc) is an NP-complete problem. You really don't want an efficient algorithm for solving 3SAT to show up on reddit.
I mean, you could also use it to steal bitcoin and make robots, but pfft.
I'm not doing the series justice. I love how Bob, Mo, Mhari, and co grow and change, and their character arcs really hit home for me, as someone who more-or-less grew up alongside the series, not to mention the spot-on social commentary.
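(Tangent, for anyone wondering why an efficient 3SAT algorithm showing up on reddit would be an extinction-level event: the only general approach anyone knows is brute force over all 2^n truth assignments. A toy Python sketch, purely illustrative, with a made-up literal encoding:)

```python
from itertools import product

def brute_force_3sat(n_vars, clauses):
    """Try all 2^n truth assignments. Each clause is a tuple of
    literals: literal k means variable |k| (1-indexed), negated
    if k < 0. Returns a satisfying assignment, or None."""
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            val = assignment[abs(lit) - 1]
            return val if lit > 0 else not val
        if all(any(lit_true(l) for l in clause) for clause in clauses):
            return assignment  # found a satisfying assignment
    return None  # exhausted all 2^n assignments: unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
print(brute_force_3sat(3, [(1, 2, -3), (-1, 3, 2)]))
```

Twenty variables is a million assignments; a few hundred variables outlasts the heat death of the universe. Hence the eldritch horror of a polynomial-time version.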
Computer scientist accidentally ruins the world by having his P=NP algorithm iterate over automatically generated programs, asking of each one 'does this program halt or not?'
That's basically how the point-of-view character gets roped in, except instead of threatening the whole world he only threatened Wolverhampton, he was still in grad school, and it was a graphics algorithm.
AH THE TSP MOVIE IS SO FUN :)
btw, as a shill for big MIP, I am compelled to share this site which has solutions for real world TSPs!
https://www.math.uwaterloo.ca/tsp/world/
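(For a sense of why those real-world solves are impressive: the naive approach checks every possible tour, which blows up factorially. A toy sketch with made-up city coordinates, purely illustrative:)

```python
from itertools import permutations
from math import dist

# Hypothetical city coordinates, purely illustrative
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    """Total length of the closed tour visiting cities in this order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Fix city 0 as the start; brute-force all (n-1)! orderings of the rest
best = min(permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
print((0,) + best, tour_length((0,) + best))
```

Five cities is 24 tours; fifty cities is ~10^62. The UWaterloo instances have tens of thousands of cities, which is why serious solvers lean on MIP machinery instead of enumeration.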
Rad as heck!
btw, (sorry if this is prying!) considering your line of work, is all of this acausal robot god stuff especially weird and off-putting for you? Do your coworkers seem to be resistant to it?
Not prying! Thankful to say, none of my coworkers have ever brought up ye olde basilisk; the closest anyone has gotten is jokes about the LLMs taking over, but never too seriously.
No, I don't find the acausal robot god stuff too weird, because we already had Pascal's wager. But holy shit, people actually full-throatedly believing it, to the point that they're having panic attacks? wtf. Like:
Full human body simulation -> my brother-in-law is a computational chemist; they spend huge amounts of compute modeling simple few-atom systems. To build a complete human simulation, you'd be computing every force interaction for roughly 10^28 atoms. This is ludicrous.
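(Back-of-envelope in Python, since the numbers are fun. All figures here are rough assumptions for illustration: naive pairwise classical force evaluations, femtosecond timesteps, and we're generously ignoring quantum effects entirely.)

```python
atoms = 1e28                      # rough atom count of a human body (assumption)
pairs = atoms * (atoms - 1) / 2   # naive pairwise force evaluations per step
timesteps_per_sec = 1e15          # ~1 fs timesteps, typical for MD simulation
flops_per_sim_sec = pairs * timesteps_per_sec

# Compare against a hypothetical exaflop (1e18 FLOP/s) machine:
years = flops_per_sim_sec / 1e18 / (3600 * 24 * 365)
print(f"{flops_per_sim_sec:.1e} force evaluations per simulated second")
print(f"~{years:.1e} years on an exaflop machine")
```

That's on the order of 10^45 years of exascale compute per simulated second of you, before the robot god has even started optimizing anything.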
The chucklefucks who are proposing this are suggesting that, once the robot god can sim you (which, again, doubt), it's going to be able to use that simulation of you to model your decisions and optimize against you.
So we have an optimization problem like:
min_{x,y} f(x,y)  s.t.  y ∈ argmin{ g(x,y') : y' ∈ Y },  x ∈ X
where x and f(x,y) are the decision variables and objective function 🐍 is trying to minimize, and y and g(x,y) are the decisions and objective of me, the simulated human, who has his own goals (don't get turned into paperclips).
This is a bilevel optimization problem, and it's very, very nasty to solve. Even in the nicest case possible, where somehow f and g are convex functions and X, Y are convex sets (which is an insane ask, considering y and g entail a complete human sim), this problem is provably NP-hard.
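For flavor, here's what even a toy discrete bilevel problem looks like when solved by brute force (hypothetical tiny f and g, made up for illustration): for every leader candidate x, you have to fully solve the follower's inner problem before you can so much as evaluate f.

```python
# Toy bilevel problem over small finite sets, solved by enumeration.
# The leader picks x to minimize f(x, y), but y is whatever the
# follower picks to minimize g(x, y) -- so each leader candidate
# requires solving a full inner optimization first.
X = range(5)
Y = range(5)

def f(x, y):  # leader's (robot god's) objective -- made up
    return (x - 3) ** 2 + x * y

def g(x, y):  # follower's (simulated human's) objective -- made up
    return (y - x) ** 2 + y

best = None
for x in X:
    # Inner problem: the follower best-responds to this x
    y_star = min(Y, key=lambda y: g(x, y))
    if best is None or f(x, y_star) < f(*best):
        best = (x, y_star)
print(best, f(*best))
```

With 5-element sets this is trivial; with a continuous "complete human sim" as the inner problem, every outer iteration costs a universe.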
Basically, to build the acausal god, first you need a computer larger than the known universe, and even that probably isn't sufficient.
Weird note: while I was in academia, I actually did some work on modeling the constraint that y is a minimizer of a follower problem by training an ANN to act as a proxy for g(x,·), then encoding a representation of the trained network into a single-level optimization problem... we got some nice results for some special low-dim problems where we had lots of data 🦍 🦍 🦍 🦍 🦍