submitted 5 days ago* (last edited 5 days ago) by scruiser@awful.systems to c/sneerclub@awful.systems

I found a neat essay discussing the history of Doug Lenat, Eurisko, and Cyc here. The essay is pretty cool: Doug Lenat made one of the largest and most systematic efforts to get Good Old Fashioned Symbolic AI to AGI through sheer volume and detail of expert system entries. It didn't work (obviously), but what's interesting (especially in contrast to LLMs) is that Doug made his business, Cycorp, actually profitable, actually producing useful products in the form of custom-built expert systems for various customers over the decades, with a steady level of employees and effort spent (as opposed to LLM companies sucking up massive VC capital to generate crappy products that will probably go bust).

This sparked memories of lesswrong discussion of Eurisko... which leads to some choice sneerable classic lines.

In a classic Sequences post, Eliezer discusses Eurisko. Having read an essay that explains Eurisko more clearly, I find a lot of Eliezer's discussion seems a lot emptier now.

To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc. EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.

This line is classic Eliezer Dunning-Kruger arrogance. The lessons from Cyc were used in useful expert systems, and the effort spent building those expert systems was used to continue advancing Cyc, so I would actually call Doug really successful, much more successful than many AGI efforts (including Eliezer's). And it didn't depend on endless VC funding or hype cycles.

EURISKO used "heuristics" to, for example, design potential space fleets. It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics. E.g. EURISKO started with the heuristic "investigate extreme cases" but moved on to "investigate cases close to extremes". The heuristics were written in RLL, which stands for Representation Language Language. According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.

...

EURISKO lacked what I called "insight" - that is, the type of abstract knowledge that lets humans fly through the search space. And so its recursive access to its own heuristics proved to be for nought. Unless, y'know, you're counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.

Eliezer mocks Doug's big achievements while simultaneously exaggerating this one. The detailed essay I linked at the beginning actually explains it properly: Traveller's rules inadvertently encouraged a narrow, degenerate (in the mathematical sense) strategy. The second-place finisher actually found the same broken strategy Doug (using Eurisko) did; Doug just executed it slightly better because he had gamed it out more and included a few ship designs that countered an opponent running the same broken strategy. It was a nice feat of a human leveraging a computer to mathematically explore a game, but it wasn't an AI independently exploring a game.
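As an aside, if you want a concrete picture of the heuristic-modifying-heuristics mechanism Eliezer describes above, here's a toy Python sketch. To be clear, this is my own illustration of the general idea, not Lenat's actual RLL representation or anything from the essay, and all the names in it are made up:

```python
# Toy illustration only (my sketch, not Lenat's actual RLL and not from the
# essay): heuristics are plain functions that propose candidate values, and
# a "metaheuristic" is a rule that rewrites the pool of heuristics.

def extreme_cases(lo, hi):
    """Heuristic: 'investigate extreme cases'."""
    return [lo, hi]

def near_extremes(lo, hi, eps=0.05):
    """Heuristic: 'investigate cases close to extremes'."""
    span = hi - lo
    return [lo + eps * span, hi - eps * span]

def metaheuristic(pool):
    """Metaheuristic: derive a perturbed variant of an existing heuristic.
    (In Eurisko such rules could apply to any heuristic, metaheuristics
    included; here it only knows one trick.)"""
    if extreme_cases in pool and near_extremes not in pool:
        pool = pool + [near_extremes]
    return pool

def score(x):
    # Toy objective whose optimum sits just inside the boundary, so the
    # derived heuristic beats the one it was derived from.
    return -(x - 9.5) ** 2

pool = metaheuristic([extreme_cases])
candidates = [x for h in pool for x in h(0.0, 10.0)]
print(max(candidates, key=score))  # 9.5, found by near_extremes, not 0 or 10
```

Per the quote, the genuinely hard part of Eurisko was representing heuristics so they could modify themselves without "always just breaking"; a toy like this dodges that problem entirely.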

Another lesswronger brings up Eurisko here. Eliezer is of course worried:

This is a road that does not lead to Friendly AI, only to AGI. I doubt this has anything to do with Lenat's motives - but I'm glad the source code isn't published and I don't think you'd be doing a service to the human species by trying to reimplement it.

And yes, Eliezer actually is worried that a 1970s dead end in AI might lead to FOOM and AGI doom. To a comment here:

Are you really afraid that AI is so easy that it's a very short distance between "ooh, cool" and "oh, shit"?

Eliezer responds:

Depends how cool. I don't know the space of self-modifying programs very well. Anything cooler than anything that's been tried before, even marginally cooler, has a noticeable subjective probability of going to shit. I mean, if you kept on making it marginally cooler and cooler, it'd go to "oh, shit" one day after a sequence of "ooh, cools" and I don't know how long that sequence is.

Fearmongering back in 2008 even before he had given up and gone full doomer.

And this reminds me: Eliezer did not actually predict which paths would lead to better AI. In 2008 he was pretty convinced neural networks were not a path to AGI.

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now. I don't think this particular raw fact licenses any conclusions in particular. But at least don't tell me it's still the new revolutionary idea in AI.

Apparently it took all the way until AlphaGo (sometime between 2015 and 2017) for Eliezer to start to realize he was wrong. (He never made a major post about changing his mind; I had to reconstruct the process and estimate the date from other lesswrongers discussing it and from small comments of his here and there.) Of course, even as late as 2017, MIRI was still neglecting neural networks to focus on abstract frameworks like "Highly Reliable Agent Design".

So yeah. Puts things into context, doesn't it?

Bonus: one of Doug's last papers, which lists a lot of lessons LLMs could take from Cyc and expert systems. You might recognize the co-author, Gary Marcus, from one of the blogs critical of LLMs: https://garymarcus.substack.com/

[-] blakestacey@awful.systems 6 points 4 days ago

One thing I've been missing is takedowns of Rationalist ideology about theoretical computer science. The physics, I can do, along with assorted other topics.

[-] blakestacey@awful.systems 5 points 1 day ago

Some thoughts of what might be helpful in that vein:

  • What is a Turing machine? (Described in enough detail that one could, you know, prove theorems.)

  • What is the halting problem?

  • Why is Kolmogorov complexity/algorithmic information content uncomputable?

  • Pursuant to the above, what's up with Solomonoff induction?

  • Why is the lambda calculus not magically super-Turing?

[-] blakestacey@awful.systems 6 points 20 hours ago

The under-acknowledged Rule Zero for all this is that the Sequences were always cult shit. They were not intended to explain Solomonoff induction in the way that a textbook would, so that the reader might learn to reason about the concept. Instead, the ploy was to rig the game: Present the desired conclusion as the "simplest", pretend that "simplicity" is quantifiable, assert that scientists are insufficiently Rational(TM) because they reject the quantifiably "simplest" answer... School bad, blog posts good, tithe to MIRI.

[-] aio@awful.systems 3 points 1 day ago

i might try writing such a post!

[-] dgerard@awful.systems 3 points 1 day ago

i'm not digging for cites, but i recall it was extremely hard to convince rationalists that even the ASI couldn't just break cryptography, because they don't understand maths either

[-] scruiser@awful.systems 4 points 1 day ago

It's worse than you are remembering! Eliezer has claimed deep neural networks (maybe even something along the lines of LLMs) could learn to break hashes just by being trained on hash/plaintext pairs in the training data set.

The original discussion: here about a lesswrong post and here about a tweet. And the original lesswrong post if you want to go back to the source.

[-] dgerard@awful.systems 3 points 23 hours ago

I think that LW post is the example I remember, Veedrac insisting that mathematics holds even in the face of a "sufficiently intelligent" AGI

tho i think there was also discussion on rat-tumblr

[-] scruiser@awful.systems 2 points 1 day ago

I'd add:

  • examples of problems equivalent to the halting problem, examples of problems that are intractable

  • computational complexity, e.g. the Schrödinger equation and DFT, and why the ASI can't invent new materials/nanotech (if that was even possible in the first place) just by simulating stuff really well.

titotal has written some good stuff on computational complexity before. Oh wait, you said you can do physics so maybe you're already familiar with the material science stuff?

[-] blakestacey@awful.systems 4 points 21 hours ago

On a bulletin board in a grad-student lounge, I once saw a saying thumbtacked up: "One electron is physics. Two electrons is perturbation theory. Three or more electrons, that's chemistry."
