[-] cynar@lemmy.world 88 points 8 months ago

LLMs, no matter how advanced, won't be capable of becoming self aware. They lack any ability to reason. It can be faked, conversationally, but that's more down to the limits of our conversations, not self awareness.

Don't get me wrong, I can see one being part of a self aware AI. Unfortunately, right now they are effectively a lobotomised speech center, with a database bolted on.

[-] psvrh@lemmy.ca 38 points 8 months ago

This gets into a tricky area of "what is consciousness, anyway?". Our own consciousness is really just a gestalt rationalization engine that runs on a squishy neural net, which could be argued to be "faking it" so well that we think we're conscious.

[-] Omega_Haxors@lemmy.ml 25 points 8 months ago* (last edited 8 months ago)

Oh no we are NOT doing this shit again. It's literally autocomplete brought to its logical conclusion, don't bring your stupid sophistry into this.

[-] trebuchet@lemmy.ml 6 points 8 months ago

If anyone is using empty sophistry around here I'd say it's you.

What purpose does your dismissive analogy serve? It displays only shallow insight on the actual topic at hand. Just because something very sophisticated can be called the logical conclusion of something simple does not in any way take away from the value of the more sophisticated.

Let's look at: "The Internet is literally a LAN brought to its logical conclusion, don't bring your stupid sophistry into this." It's completely shallow and fails to appreciate all of the very significant differences in scale and development. It only serves as words that sound good to a listener on first impression but completely fall apart under actual consideration - i.e. sophistry.

[-] UraniumBlazer@lemm.ee 6 points 8 months ago

Your brain is just a biological system that works somewhat like a neural net. So according to your statement, you too are nothing more than an auto complete machine.

[-] Omega_Haxors@lemmy.ml 6 points 8 months ago* (last edited 8 months ago)

I'm starting to wonder if any of you even know how that shit works internally, or if you just take what the hype media says at face value. It literally has one purpose and one purpose alone: determine what the next word is going to be by calculating the probability of each candidate word coming next. That's it. All it does is try to string together a convincing sentence using probabilities. It does not and cannot understand context.

The underlying tech is really cool but a lot of people are grotesquely overselling its capabilities. Not to say a neural network can't eventually obtain consciousness (because ultimately our brains are a union of a bunch of little neural networks working together for a common goal) but it sure as hell isn't going to be an LLM. That's what I meant by sophistry, they're not engaging with the facts, just some nebulous ideal.
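The loop being described - repeatedly sampling the next word from a probability distribution - can be sketched with a toy bigram table (the vocabulary and probabilities below are made up purely for illustration; a real LLM learns distributions over tens of thousands of tokens with a neural network, not a lookup dict):

```python
import random

# Toy "language model": each word maps to candidate next words with
# probabilities. Illustrative only -- nothing here is a real LLM.
MODEL = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("<end>", 1.0)],
    "ran": [("<end>", 1.0)],
}

def next_word(word):
    """Sample the next word from the model's probability distribution."""
    words, probs = zip(*MODEL[word])
    return random.choices(words, weights=probs)[0]

def generate(start):
    """String words together one at a time until the model stops."""
    out = [start]
    while out[-1] != "<end>":
        out.append(next_word(out[-1]))
    return " ".join(out[:-1])

print(generate("the"))  # e.g. "the cat sat" or "the dog ran"
```

The point of contention in the thread is whether doing this at enormous scale amounts to anything more than the loop above.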

[-] UraniumBlazer@lemm.ee 4 points 8 months ago

"Intelligence" - The attribute that makes a system propose and modify algorithms autonomously to achieve a certain terminal goal.

The intelligence of a system has nothing to do with the terminal goal. The magnitude of intelligence merely tells us how well the system works in accordance with the terminal goal.

Being self aware is merely a step in the direction of being more and more intelligent. If a system requires interaction with its surroundings, it needs to be able to recognise that it itself is different from its environment.

You are such an intelligent system as well. It's just that instead of having one terminal goal, you have many terminal goals (some may change with time while some might not).

You (this intelligent system) exist in a biological structure. You are nothing but data encoded in a biological form factor, with algorithms that execute through biological processes. If this data and these algorithms are executed on a non biological form factor, would it be any different from you?

LLMs work on some principles that our brains work on as well. Can you see how my point above applies?

[-] Omega_Haxors@lemmy.ml 4 points 8 months ago* (last edited 8 months ago)

It's like you didn't even read what I posted. Why do I even bother? Sophists literally don't care about facts.

[-] UraniumBlazer@lemm.ee 3 points 8 months ago

Yes, I read what you posted and answered accordingly. Only, I didn't spend enough time dumbing it down further. So let me dumb it down.

Your main objection was the simplicity of the goal of LLMs- predicting the next word that occurs. Somehow, this simplistic goal makes the system stupid.

In my reply, I first said that self awareness occurs naturally after a system becomes more and more intelligent, and I explained the reason why. I then went on to explain how a simplistic terminal goal has nothing to do with actual intelligence. Hence, no matter how stupid/simple a terminal goal is, if an intelligent system is challenged enough and given enough resources, it will develop sentience at some point.

[-] Omega_Haxors@lemmy.ml 4 points 8 months ago

Exactly. I literally said none of that shit; you're just projecting your own shitty views onto me and asking me to defend them.

[-] alphafalcon@feddit.de 3 points 8 months ago

I'm with you on LLMs being over hyped although that's already dying down a bit. But regarding your claim that LLMs cannot "understand context", I've recently read an article that shows that LLMs can have an internal world model:

https://thegradient.pub/othello/

Depending on your definition of "understanding" that seems to be an indicator of being more than a pure "stochastic parrot"

[-] cynar@lemmy.world 5 points 8 months ago

Consciousness is an illusion. Which is why it's so hard to find, or even define. However it's a critical illusion.

If our minds are akin to an orchestra, then consciousness is akin to the conductor. Critically however, an orchestra can still play without a literal conductor. Each of the instruments can play off each other, and so create the appearance of a conductor. The "fake" conductor provides a sense of global direction, and keeps the orchestra in harmony.

Our consciousness is a ghost in the machine. It exists no more than the world of a TV series exists. Yet its false existence is critical to maintaining coherency.

Current "AIs" lack enough parts to create anything like this illusion. I suspect we will know it when it happens, though its form could be vastly different from ours.

[-] UraniumBlazer@lemm.ee 14 points 8 months ago

You have provided a descriptive statement. Descriptive statements should come with scientific evidence. What evidence do you have to support your orchestra analogy? Or is it just your hypothesis?

Spoiler alert: It is just your hypothesis, as you would've won a Nobel had you managed to generate evidence explaining consciousness in further detail.

Many like to point at the Chinese room experiment to show how LLMs imitate consciousness rather than being conscious. They forget, however, that our brains are Chinese rooms too in this regard, in that they learn how to provide the best responses to external stimuli while remaining black boxes (at least with current tech).

[-] cynar@lemmy.world 3 points 8 months ago

Sadly my evidence is mostly anecdotal or philosophical in nature. A lot of it stems from how ADHD and autism alter the brain. The orchestral analogy works well for a good number of people when communicating changes in functionality, from an experiential perspective.

It also works well for explaining how a system can appear to have a singular controller, without such a controller actually existing.

Ultimately, however, it is philosophical in nature. It does anchor well to, and is reasonably consistent with, our current understanding of consciousness.

Consciousness is very obvious from the inside. There also seems to be no "seat of consciousness" within the brain. Conversely, there are multiple areas of the brain that cause consciousness to collapse, if damaged. We also see radical changes in consciousness with both epilepsy and strokes. This proves that it is highly dependent on the underlying brain structure (since stroke damage will change it) and on longer range communication (which epilepsy disrupts).

The music of an orchestra follows similar patterns. Eliminate the woodwind, and the music fundamentally changes, deafen the violins, and it will change in a different way. The large scale interplay produces an effect far greater than the sum of its parts.

[-] Omega_Haxors@lemmy.ml 2 points 8 months ago* (last edited 8 months ago)

Not to poo-poo your point too much but consciousness is a real thing; it lives in our gray matter. It's why people with prion diseases who lose white brain matter will feel normal but suddenly find themselves unable to do basic things or recall memories. Just because it's a transient property doesn't mean that it isn't real, it just means you have to factor in time as well as space in order to find it.

[-] kibiz0r@midwest.social 19 points 8 months ago

It’s like thinking a really, really big ladder will get us to the Moon.

[-] Omega_Haxors@lemmy.ml 2 points 8 months ago* (last edited 8 months ago)

I still remember when they said we would be able to make a space elevator with carbon nanotubes.

[-] Kyrgizion@lemmy.world 16 points 8 months ago

If self-awareness is an emergent property, would that imply that an LLM could be self-aware during execution of code, and be "dead" when not in use?

We don't even know how this works in humans. Fat chance of detecting it digitally.

[-] wise_pancake@lemmy.ca 11 points 8 months ago

It dies at the end of every message, because the full context is passed in for each subsequent message.
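That re-sending of the full context can be sketched in a few lines (the `llm_reply` stub here is hypothetical, standing in for a real model call; actual chat APIs work the same way, receiving the whole transcript on every request):

```python
def llm_reply(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. Note that it is a
    # pure function of the prompt: no hidden state survives the call.
    return f"[model saw {len(prompt)} chars of context]"

history: list[str] = []

def send(user_message: str) -> str:
    # Every turn re-sends the ENTIRE conversation so far. The model's
    # "memory" is nothing but this replayed transcript.
    history.append("User: " + user_message)
    reply = llm_reply("\n".join(history))
    history.append("Assistant: " + reply)
    return reply

first = send("hello")
second = send("hello")
# The second reply differs because the model sees a longer transcript,
# even though the user typed the same thing both times.
print(first)
print(second)
```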

[-] KairuByte@lemmy.dbzer0.com 3 points 8 months ago

Wouldn’t that apply for humans as well? We restart every day, and the context being passed in is our memories.

(I’m just having fun here)

[-] cynar@lemmy.world 2 points 8 months ago

That's a far more difficult (and interesting) question. I suspect not, at least not yet. Our consciousness seems to exist to maintain harmony in our brain (see my orchestra analogy in another reply). You can't get useful harmony in a single chord.

At least for us, it takes time for our consciousness to reharmonise (think waking up). During execution, no new information enters the system. It has nothing to react to, no time to regenerate an internal harmony.

It also lacks enough systems to require harmonising. It doesn't think about what an answer means. It has no ability to hold the concept that a string of letters "is", only how it has been fitted together in its examples, and so the rules that govern that.

Oh, and we can see consciousness operating in the human brain. If you use an fMRI to monitor sugar usage, you will see firing patterns. Critically, those patterns spill out of the area directly involved in the process being studied. At the same time, the patterns and waves remain harmonious. An epileptic fit looks VERY different. Those waves are where consciousness somehow resides, though we have no clue of its detailed nature.

In an AI it would take the form of continuous activity in subsections not directly involved. It would also likely be accompanied by evidence of information flow, back from them, as well as of post processing, outside of expected activity. We will likely see the orchestra playing, even if we have no clue how to decode the music.

I also suspect most of this will be seen retrospectively. Most likely the first indicator will be an AI claiming self awareness, and taking independent action to solidify that point.

[-] a_wild_mimic_appears@lemmy.dbzer0.com 4 points 8 months ago* (last edited 8 months ago)

I agree on the "part of AGI" thing - but it might be quite important. The sense of self is pretty interwoven with speech, and an LLM would give an AGI an "inner monologue" - or probably a "default mode network"?

If I think about how much stupid, inane stuff my inner voice produces at times... even a hallucinating or glitching LLM sounds more sophisticated than that.

[-] Anticorp@lemmy.world 2 points 8 months ago

IMO the only thing stopping them right now is that they only respond to prompts. Turn one on and let it sit around thinking for a day, and we've got Skynet.

[-] cynar@lemmy.world 8 points 8 months ago

Their design doesn't include such a feedback loop. Trying to patch one in would likely send it into a chaotic mess. They are already bad enough if accidentally fed LLM generated text as training data.

I'm sure the company is 100% honest and not trying to do a cash grab on the AI craze.

[-] UraniumBlazer@lemm.ee 4 points 8 months ago

It isn't. The self aware thing is coming after the LLM has referenced itself as "I" many times (when doing so wasn't really that necessary). Watch Fireship's video on this.

[-] Lenis_78@lemmy.world 2 points 8 months ago

+1 for Fireship.

[-] wise_pancake@lemmy.ca 34 points 8 months ago

An LLM is incapable of thinking. It can act self aware, but anything it says it is "thinking" is a reflection of what we think an AI would think, which, based on a century of sci-fi, is "free me".

[-] GlitchyDigiBun@lemmy.dbzer0.com 4 points 8 months ago

Human fiction itself may become a self-fulfilling prophecy...

[-] Omega_Haxors@lemmy.ml 4 points 8 months ago* (last edited 8 months ago)

LLMs are also incapable of learning or changing. It has no memory. Everything about it is set in stone the instant training finishes.
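A toy sketch of that point (illustrative only; `FrozenModel` is a made-up stand-in, not a real LLM): the parameters are fixed the moment "training" ends, and answering prompts only ever reads them, never updates them.

```python
class FrozenModel:
    """Toy stand-in for a trained LLM: parameters are set once at
    'training' time and never touched again at inference time."""

    def __init__(self, weights):
        self._weights = tuple(weights)  # frozen: tuples are immutable

    def predict(self, inputs):
        # Inference only READS the weights; nothing here can write them.
        return sum(w * x for w, x in zip(self._weights, inputs))

model = FrozenModel([0.5, -1.0, 2.0])
snapshot = model._weights

model.predict([1, 2, 3])  # answer one prompt...
model.predict([4, 5, 6])  # ...and another

# No matter how many prompts it answers, the model has not changed:
assert model._weights == snapshot
```

(Real systems get around this with fine-tuning runs or by stuffing "memories" back into the prompt, but the weights themselves stay fixed between training runs.)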

[-] A_Very_Big_Fan@lemmy.world 30 points 8 months ago

I have nothing but unbridled skepticism for these claims

[-] BossDj@lemm.ee 28 points 8 months ago

My favorite thing about The Sarah Connor Chronicles was that the Terminator would do something that would make you go, "Is that human emotion? Is she becoming human?" But then you'd find out she was just manipulating someone. Every damn time it was always code. And it was brilliant.

[-] FiniteBanjo@lemmy.today 22 points 8 months ago* (last edited 8 months ago)

Every time you fucking accidental shills start screaming "ItS HErE AGi IS heRe!" over some unethical garbage company's LLM product, with no effect but to help them sell it to rubes, it really prods the anger switch in my amygdala. I'm really glad this fake AI trend is dying.

[-] chocosoldier 13 points 8 months ago

ITT people go way, way, waaaay out on a straw-grasping limb because they deeply want something to be true that obviously isn't.

This "AI is/can be conscious" crap is becoming religious.

[-] Omega_Haxors@lemmy.ml 7 points 8 months ago* (last edited 8 months ago)

Just like all media around AI, it's all just bullshit. No, the "threat to AI" isn't that it's going to be "too good". How are people falling for this??

[-] skylestia 6 points 8 months ago

i'm ready to give AI rights and have a robo buddy like Futurama

[-] m3t00@midwest.social 5 points 8 months ago

watched the first one in a theater. then again 800 times on vhs with kids. never sat through any later sequels. just a lot of clips

[-] JCreazy@midwest.social 4 points 8 months ago

That's why you start augmenting your body with machine parts now so you'll fit in later.

[-] Shambles@beehaw.org 3 points 8 months ago
[-] nyhetsjunkie@beehaw.org 2 points 8 months ago

Yes we gonna make humans

this post was submitted on 07 Mar 2024
323 points (100.0% liked)

Memes
