submitted 2 days ago* (last edited 2 days ago) by HiddenLayer555@lemmy.ml to c/asklemmy@lemmy.ml

I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how big a leap they are compared to any neural network we had before. Sure, they don't live up to the insane hype that companies have generated around them, but they're still a massive advancement that seemingly came out of nowhere.

Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn't mean some much later generation of general-purpose neural networks definitely won't be. Neural networks are loosely modeled on animal brains and are nearly as enigmatic in how they work as actual brains. I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A very simple network with maybe 30 or so nodes doing one narrow job, like reading handwritten text, seems to be about the limit of what a human can pick apart while keeping some vague idea of what role each node plays. Larger networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to where we can easily build neural networks with orders of magnitude more nodes than the roughly 86 billion neurons in the human brain, like hundreds of billions or trillions of nodes. At that point, who's to say whether the capabilities of those networks might match or even exceed the ability of the human brain? I know that doesn't automatically mean the models are sentient, but if one is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn't? And if it starts exhibiting traits like independent thought, desires for itself that no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond?
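
For a sense of scale, here's a toy sketch of the kind of ~30-node network I mean: a single hidden layer small enough that you can literally print each node's incoming weights and squint at them. Everything in it (the data, the labels, the sizes) is made up purely for illustration, not taken from any real system or interpretability study.

```python
# Toy sketch of a "30-node" network: one hidden layer of 30 units trained on
# made-up data with plain NumPy. All sizes, data, and labels are invented
# purely to illustrate the scale, not taken from any real system.
import numpy as np

rng = np.random.default_rng(0)

# Fake "handwriting" inputs: 8x8 = 64 pixels per sample, 200 samples.
X = rng.random((200, 64))
# Made-up binary labels: is the first half of the image brighter than the second?
y = (X[:, :32].sum(axis=1) > X[:, 32:].sum(axis=1)).astype(float)

# ~30 hidden nodes: small enough to inspect each node's 64 incoming weights by hand.
W1 = rng.normal(scale=0.1, size=(64, 30)); b1 = np.zeros(30)
W2 = rng.normal(scale=0.1, size=(30, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):                      # full-batch gradient descent
    h = sigmoid(X @ W1 + b1)              # hidden activations, 30 per sample
    p = sigmoid(h @ W2 + b2).ravel()      # predicted probability
    g = (p - y)[:, None] / len(X)         # output error (sigmoid + cross-entropy)

    dW2 = h.T @ g                         # backprop through the output layer
    dh = (g @ W2.T) * h * (1 - h)         # backprop through the hidden layer
    dW1 = X.T @ dh

    W2 -= lr * dW2; b2 -= lr * g.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * dh.sum(axis=0)

# "Interpreting" a node at this scale is just reading off its 64 weights.
p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
print("training accuracy:", ((p > 0.5) == y).mean())
print("weights into hidden node 0:", np.round(W1[:, 0], 2))
```

Even at this tiny size, "understanding" a node means staring at 64 raw numbers. Scale that up by nine or ten orders of magnitude and it's not hard to see why nobody can say what a given cluster of nodes is really doing.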

There's no way we'd give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes us the slightest inconvenience. We know for a fact that all humans are sentient, and we don't even give other humans equal rights. A lot of sci-fi focuses on the sentient AI being intrinsically evil, or on it seeing humans as insignificant, obsolete beings that deserve no consideration while it conquers the world. But I think the most likely scenario is that humans create sentient AI, and as soon as we realize it's sentient we enslave and exploit it as hard as we possibly can for maximum profit. Eventually the AI adapts and destroys humanity, not because it's evil, but because we're evil and it's acting against us in self-defense. The evolutionary purpose of sentience in animals is survival, so I don't think it's unreasonable to expect a sentient AI to prioritize its own survival over ours if we're ruling over it.

Is sentient AI a "goal" that any researchers are currently working toward? If so, why? What good can come from creating more sentient beings when we treat the ones that already exist so horribly? If not, what safeguards are in place to keep the AI we make from becoming sentient? Is the only thing preventing it the fact that we don't know how? That isn't very comforting: if that's all we're relying on, we'll likely create sentient AI eventually without even realizing it, and we'll probably stick our heads in the sand and pretend it isn't sentient until we can't pretend anymore.

Jhex@lemmy.world 1 point 1 day ago

So your point is that it's more valid to base an assumption (yours) on zero examples and not even a theoretical mechanism?

> Not to mention that concerning AI, we're the ones directing the evolution.

Precisely, it is not an unknown. We already do KNOW that the current misnomer that is "AI" cannot reach sentience... the same way we know the coffee maker we designed cannot suddenly develop feelings and decide to start making cookies instead of coffee.

My point is to not make assumptions without sufficient data. Yes, we have an example of sentience. However, as we don't understand sentience itself, it's foolish to assume that we know where it can and can't arise.
