this post was submitted on 07 Oct 2024
115 points (100.0% liked)
Technology
That's assuming that we are a general intelligence. I'm actually unsure if that's even true.
True, they've only calculated that it'd take perhaps millions of years. Which might be accurate; I'm not sure what kind of computer global evolution, across trillions of organisms over millions of years, adds up to. And yes, perhaps some breakthrough will happen, but it's still very unlikely, and definitely not "right around the corner" as the AI bros claim (and that near-future claim is what the paper set out to disprove).
But it's easy to just define general intelligence as something approximating what humans already do. The paper itself only analyzed whether it was feasible to have a computational system that produces outputs approximately similar to humans, whatever that is.
No, you're missing my point, at least how I read the paper. They're saying that the method of using training data to computationally develop a neural network is a conceptual dead end. Throwing more resources at the NP-hard problem isn't going to solve it.
What they didn't prove, at least by my reading of this paper, is that achieving general intelligence itself is an NP-hard problem. It's just that this particular method of inferential training, what they call "AI-by-Learning," is an NP-hard computational problem.
This is exactly what they've proven. They found that if you can solve AI-by-Learning in polynomial time, you can also solve random-vs-chance (or whatever it was called) in tractable time, and that's a known NP-hard problem. Ergo, the current learning techniques, which are tractable, will never result in AGI, and any technique that could would necessarily be considerably slower (otherwise you could reuse the exact same proof presented in the paper).
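The shape of that argument is a standard polynomial-time reduction. Sketching it from memory (my own notation, not the paper's, with A standing for AI-by-Learning and B for the known NP-hard problem):

```latex
% Sketch of the reduction argument (my paraphrase, not the paper's notation).
% B <=_p A means: every instance of B can be transformed in polynomial
% time into an instance of A whose answer decides the B instance.
\[
  B \le_p A
  \quad\Longrightarrow\quad
  \bigl( A \in \mathrm{P} \;\Rightarrow\; B \in \mathrm{P} \bigr)
\]
% Since B is NP-hard, B \in P would force P = NP. So, unless P = NP,
% no polynomial-time (i.e. tractable) procedure solves A at all.
```

That's why the result applies to any tractable learning method, not just the ones the paper happens to name.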
They merely mentioned these methods to show that it doesn't matter which method you pick. The explicit point is to show that it doesn't matter if you use LLMs or RNNs or whatever; it will never be able to turn into a true AGI. It could be a good AI of course, but that G is pretty important here.
No, General Intelligence has a set definition that the paper's authors stick with. It's not as simple as "it's a human-like intelligence" or something that merely approximates it.
This isn't my field, and some undergraduate philosophy classes I took more than 20 years ago might not be leaving me well equipped to understand this paper. So I'll admit I'm probably out of my element, and want to understand.
That being said, I'm not reading this paper with your interpretation.
But they've defined the AI-by-Learning problem in a specific way (here's the informal definition):
I read this definition of the problem to be defined by needing to sample from D, that is, to "learn."
But the caveat I'm reading, implicit in the paper's definition of the AI-by-Learning problem, is that it covers an entire class of methods: learning from a perfect sample of intelligent outputs in order to mimic intelligent outputs itself.
The paper defines it:
It's just defining an approximation of human behavior, and saying that achieving that formalized approximation through inference from training data is intractable. So I'm still seeing a definition of human-like behavior, which would by definition be satisfied by human behavior itself. That's the circular reasoning here, and whether human behavior fits some other definition of AGI doesn't actually affect the proof. They're proving that learning to be human-like is intractable, not that achieving AGI is itself intractable.
I think it's an important distinction, if I'm reading it correctly. But if I'm not, I'm also happy to be proven wrong.