submitted 1 week ago* (last edited 1 week ago) by JaymesRS@literature.cafe to c/memes@lemmy.world

Alt Text: an image of Agent Smith from The Matrix with the following text superimposed, "1999 was described as being the peak of human civilization in 'The Matrix' and I laughed because that obviously wouldn't age well and then the next 25 years happened and I realized that yeah maybe the machines had a point."

[-] masterspace@lemmy.ca 8 points 1 week ago* (last edited 1 week ago)

When I heard that line I was like "Yeah, sure. We'll never have AI in my lifespan" and you know what? I was right.

Unless you just died or are about to, you can't really confidently make that statement.

There's no technical reason to think we won't in the next ~20-50 years. We may not, and there may turn out to be a technical reason why we can't. But the previous big technical hurdles were the amount of compute needed and that computers couldn't handle fuzzy pattern matching. Modern AI has effectively found a way of solving the pattern matching problem, and current large models like ChatGPT model more "neurons" than are in the human brain, let alone the power that will be available to them in 30 years.

[-] 10001110101@lemm.ee 9 points 6 days ago

current large models like ChatGPT model more “neurons” than are in the human brain

I don't think that's true. Parameter counts are more akin to neural connections, and the human brain has something like 100 trillion connections.
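To put rough numbers on that (both figures are ballpark public estimates, not official specs: GPT-3's ~175 billion parameters is published, GPT-4's count is undisclosed, and ~100 trillion synapses is a common neuroscience estimate):

```python
# Rough scale comparison. Both numbers are ballpark estimates:
# GPT-3 reportedly has ~175 billion parameters, while the human
# brain is commonly estimated at ~100 trillion synaptic connections.
gpt3_params = 175e9
brain_synapses = 100e12

ratio = brain_synapses / gpt3_params
print(f"~{ratio:.0f}x more synapses than GPT-3 parameters")
```

So even on the most generous reading, parameter counts are still a few hundred times short of synapse counts.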

[-] deranger@sh.itjust.works 19 points 1 week ago

There's no technical reason to think we will in the next ~20-50 years, either.

[-] match@pawb.social 6 points 6 days ago

there's plenty of reason to believe that, whether we have it or not, some billionaire asshole is going to force you to believe and respect his corporate AI as if it's sentient (while simultaneously treating it like slave labor)

[-] masterspace@lemmy.ca 3 points 6 days ago

There's plenty of economic reasons to think we will as long as it's technically possible.

[-] lowleveldata@programming.dev 13 points 1 week ago

the previous big technical hurdles were the amount of compute needed and that computers couldn’t handle fuzzy pattern matching

Was it? I thought it was always that we haven't quite figured out what thinking really is

[-] masterspace@lemmy.ca 3 points 6 days ago* (last edited 6 days ago)

I mean, no, not really. We know what thinking is. It's neurons firing in your brain in varying patterns.

What we don't know is the exact wiring of those neurons in our brain. So that's the current challenge.

But previously, we couldn't even effectively simulate neurons firing in a brain. Modern AI algorithms are called "neural networks" because they can effectively simulate the way that neurons fire (just using silicon), and that makes them really good at all the fuzzy pattern matching problems that computers used to be really bad at.

So now the challenge is figuring out the wiring of our brains, and/or figuring out a way of creating intelligence that doesn't use the wiring of our brains. Both are entirely possible now that we can experiment and build and combine simulated neurons at ballpark the same scale as the human brain.
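For what it's worth, the "simulated neuron" being described here is usually this abstraction (a minimal sketch of a classic artificial neuron, not any particular framework's implementation):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed through a sigmoid --
    # a crude stand-in for a biological neuron's firing rate.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

print(artificial_neuron([1.0, 0.5], [0.8, -0.4], 0.1))  # a value in (0, 1)
```

Whether that abstraction captures enough of what real neurons do is exactly the open question being argued in this thread.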

[-] lowleveldata@programming.dev 5 points 6 days ago

Aren't you just saying the same thing? We know it has something to do with neurons but haven't figured out exactly how

[-] masterspace@lemmy.ca 3 points 6 days ago* (last edited 6 days ago)

The distinction is that it's not 'something to do with neurons', it's 'neurons firing and signalling each other'.

Like, we know the exact mechanism by which thinking happens, we just don't know the precise wiring pattern necessary to recreate the way that we think in particular.

And previously, we couldn't effectively simulate that mechanism with computer chips, now we can.

[-] lunarul@lemmy.world 9 points 1 week ago

There's no technical reason to think we won't in the next ~20-50 years

Other than that nobody has any idea how to go about it? The things called "AI" today are not precursors to AGI. The search for strong AI is still nowhere close to any breakthroughs.

[-] masterspace@lemmy.ca 2 points 6 days ago

Assuming that the path to AGI involves something akin to all the intelligence we see in nature (i.e. brains and neurons), then modern AI algorithms' ability to simulate neurons using silicon and math is inarguably and objectively a precursor.

[-] lunarul@lemmy.world 2 points 6 days ago* (last edited 6 days ago)

Machine learning, renamed "AI" with the LLM boom, does not simulate intelligence. It integrates feedback loops, which is kind of like learning, and it uses a network of nodes that kind of look like neurons if you squint from a distance. These networks have been around for many decades, I've built a bunch myself in college, and at their core they're just parameterized functions (weighted sums fed through simple nonlinearities) with a lot of parameters. Current technology allows very large networks and networks of networks, but it's still not in any way similar to brains.
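The "network of nodes" being described is, concretely, just nested parameterized functions, something like this (an illustrative sketch, not how a real framework like PyTorch stores or runs it):

```python
import math

def dense_layer(xs, weights, biases):
    # One layer: each output node is a weighted sum of the inputs plus a
    # bias, passed through a nonlinearity (tanh here).
    return [math.tanh(sum(x * w for x, w in zip(xs, row)) + b)
            for row, b in zip(weights, biases)]

# A two-layer "network": the whole thing is just function composition
# with a lot of tunable numbers (the parameters).
hidden = dense_layer([0.5, -1.0], [[0.2, 0.7], [-0.3, 0.1]], [0.0, 0.1])
output = dense_layer(hidden, [[1.0, -1.0]], [0.0])
print(output)
```

Training is then just nudging those numbers to reduce error, which is the "feedback loop" part.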

There is separate research into simulating neurons and brains, but that is separate from machine learning.

Also, we don't actually understand how our brains work at the level where we could copy them. We understand some things and have educated guesses about others, but overall it's still pretty much a mystery.

this post was submitted on 03 May 2025
1415 points (100.0% liked)
