Timmy the Pencil (lemmy.world)
[-] NABDad@lemmy.world 92 points 5 months ago

In a robotics lab where I once worked, they used to have a large industrial robot arm with a binocular vision platform mounted on it. It used the two cameras to track an object's position in three-dimensional space and stay a set distance from the object.

It worked the way our eyes work: adjusting the pan and tilt of the cameras quickly for small movements, and adjusting the pan and tilt of the platform and the position of the arm to follow larger movements.
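A crude sketch of that two-tier scheme in Python (everything here is invented for illustration — the names, the threshold, the units; the real rig did full binocular 3D tracking):

```python
CAMERA_RANGE = 5.0  # degrees the fast pan/tilt stage can absorb on its own (made-up limit)

def track_step(error_deg: float, camera_angle: float, platform_angle: float):
    """One control tick: split a tracking error between the fast camera
    stage and the slow platform/arm, eye-and-head style."""
    if abs(camera_angle + error_deg) <= CAMERA_RANGE:
        # Small movement: a quick camera "saccade" keeps the target centered.
        camera_angle += error_deg
    else:
        # Large movement: shift the platform/arm and re-center the camera.
        platform_angle += camera_angle + error_deg
        camera_angle = 0.0
    return camera_angle, platform_angle
```

Run in a loop, small jitters get soaked up by the camera while the arm only moves for big excursions — which is exactly the eye-like behavior viewers read as intent.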

Viewers watching the robot would get an eerie, false sense that it was conscious, because the camera movements matched what we see people's eyes do.

Someone also put a necktie on the robot which didn't hurt the illusion.

[-] CaptainEffort@sh.itjust.works 89 points 5 months ago* (last edited 5 months ago)
[-] OlPatchy2Eyes@lemmy.world 22 points 5 months ago

We've been had

[-] ech@lemm.ee 14 points 5 months ago* (last edited 5 months ago)

Finishing up a rewatch through Community as we speak. Funny to see the gimmick (purportedly) used in real life.

[-] Deway@lemmy.world 13 points 5 months ago

He was so streets ahead.

[-] saddlebag@lemmy.world 9 points 5 months ago

That was my first thought!

[-] JackGreenEarth@lemm.ee 62 points 5 months ago

How would we even know if an AI is conscious? We can't even know that other humans are conscious; we haven't yet solved the hard problem of consciousness.

[-] JoeBigelow@lemmy.ca 25 points 5 months ago

Does anybody else feel rather solipsistic or is it just me?

[-] TexasDrunk@lemmy.world 18 points 5 months ago

I doubt you feel that way since I'm the only person that really exists.

Jokes aside, when I was in my teens back in the 90s I felt that way about pretty much everyone that wasn't a good friend of mine. Person on the internet? Not a real person. Person at the store? Not a real person. Boss? Customer? Definitely not people.

I don't really know why it started, when it stopped, or why it stopped, but it's weird looking back on it.

[-] SuddenDownpour@sh.itjust.works 7 points 5 months ago

Andrew Tate has convinced a ton of teenage boys to think the same, apparently. Kinda ironic.

[-] lvxferre@mander.xyz 13 points 5 months ago

A Cicero a day and your solipsism goes away.

Rigour is important, and at the end of the day we don't really know anything. However, this stuff is supposed to be practical; at a certain arbitrary point you need to say "nah, I'm certain enough of this statement being true that I can claim that it's true, thus I know it."

[-] lvxferre@mander.xyz 8 points 5 months ago

Let's try to skip the philosophical mental masturbation, and focus on practical philosophical matters.

Consciousness can be a thousand things, but let's say that it's "knowledge of itself". As such, a conscious being must necessarily be able to hold knowledge.

In turn, knowledge boils down to a belief that is both

  • true - it does not contradict the real world, and
  • justified - it's built around experience and logical reasoning

LLMs show awful logical reasoning*, and their claims are about things that they cannot physically experience. Thus they are unable to justify beliefs. Thus they're unable to hold knowledge. Thus they don't have consciousness.

*Here's a simple practical example of that:

[-] CileTheSane@lemmy.ca 9 points 5 months ago

their claims are about things that they cannot physically experience

Scientists cannot physically experience a black hole, or the surface of the sun, or the weak nuclear force in atoms. Does that mean they don't have knowledge about such things?

[-] azertyfun@sh.itjust.works 6 points 5 months ago

We don't even know what we mean when we say "humans are conscious".

Also I have yet to see a rebuttal to "consciousness is just an emergent neurological phenomenon and/or a trick the brain plays on itself" that wasn't spiritual and/or kooky.

Look at the history of things we thought made humans human, until we learned they weren't unique. Bipedality. Speech. Various social behaviors. Tool-making. Each of those was, in its time, fiercely held as "this separates us from the animals," and even caused obvious biological observations to be dismissed. IMO "consciousness" is another of those, some quirk of our biology we desperately cling to as a defining factor of our assumed uniqueness.

To be clear LLMs are not sentient, or alive. They're just tools. But the discourse on consciousness is a distraction, if we are one day genuinely confronted with this moral issue we will not find a clear binary between "conscious" and "not conscious". Even within the human race we clearly see a spectrum. When does a toddler become conscious? How much brain damage makes someone "not conscious"? There are no exact answers to be found.

[-] FlyingSquid@lemmy.world 6 points 5 months ago

I'd say that, in a sense, you answered your own question by asking a question.

ChatGPT has no curiosity. It doesn't ask about things unless it needs specific clarification. We know you're conscious because you can come up with novel questions that ChatGPT wouldn't ask spontaneously.

[-] iAvicenna@lemmy.world 36 points 5 months ago

Noooooo Timmy the Pencil! I haven't even seen this demonstration but I am deeply affected.

[-] afraid_of_zombies@lemmy.world 30 points 5 months ago

Wait wasn't this directly from Community the very first episode?

That professor's name? Albert Einstein. And everyone clapped.

[-] Doof@lemmy.world 12 points 5 months ago

Yes it was - minus the googly eyes

[-] afraid_of_zombies@lemmy.world 24 points 5 months ago

Found it

https://youtu.be/z906aLyP5fg?si=YEpk6AQLqxn0UP6z

Good job OP. Took a scene from a show from 15 years ago and added some craft supplies from Kohl's. Very creative.

[-] mPony@lemmy.world 29 points 5 months ago

RIP Timmy
We barely knew ye

[-] Colonel_Panic_@lemm.ee 22 points 5 months ago

We met you only just at noon,
A friend like Tim we barely knew.
Taken from us far too soon,
Yellow Standard #2.

[-] mPony@lemmy.world 6 points 5 months ago

torn by fingers malcontent, pink eraser left unspent

[-] Lotarion@lemmy.world 28 points 5 months ago

Tbf I'd gasp too, like wth

[-] ameancow@lemmy.world 12 points 5 months ago* (last edited 5 months ago)

Humans are so good at imagining things as alive that just reading a story about Timmy the pencil elicits real feelings of sympathy.

We are not good judges of things in general. Maybe one day these AI tools will actually help us and give us better perception and wisdom for dealing with the universe, but that end-goal is a lot further away than the tech-bros want to admit. We have decades of absolute slop and likely a few disasters to wade through.

And there's going to be a LOT of people falling in love with super-advanced chat bots that don't experience the world in any way.

[-] Fedizen@lemmy.world 7 points 5 months ago

next you're going to tell me the moon doesn't have a face on it

[-] dual_sport_dork@lemmy.world 7 points 5 months ago

It's clearly a rabbit.

[-] MeDuViNoX@sh.itjust.works 28 points 5 months ago

WTF? My boy Tim didn't deserve to go out like that!

[-] Aceticon@lemmy.world 7 points 5 months ago

Look at the bright side: there are two Tiny Timmys now.

[-] HawlSera@lemm.ee 23 points 5 months ago
[-] FlyingSquid@lemmy.world 17 points 5 months ago

And now ChatGPT has a friendly-sounding voice with simulated emotional inflections...

[-] CitizenKong@lemmy.world 7 points 5 months ago

That's why I love Ex Machina so much. Way ahead of its time both in showing the hubris of rich tech-bros and the dangers of false empathy.

[-] SchmidtGenetics@lemmy.world 13 points 5 months ago

Were people maybe not shocked at the action or outburst of anger? Why are we assuming every reaction is because of the death of something “conscious”?

[-] braxy29@lemmy.world 11 points 5 months ago* (last edited 5 months ago)

i mean, i just read the post to my very sweet, empathetic teen. her immediate reaction was, "nooo, Tim! 😢"

edit - to clarify, i don't think she was reacting to an outburst, i think she immediately demonstrated that some people anthropomorphize very easily.

humans are social creatures (even if some of us don't tend to think of ourselves that way). it serves us, and the majority of us are very good at imagining what others might be thinking (even if our imaginings don't reflect reality), or identifying faces where there are none (see - outlets, googly eyes).

[-] ryven@lemmy.dbzer0.com 9 points 5 months ago

Right, it's shocking that he snaps the pencil because the listeners were playing along, and then he suddenly goes from pretending to have a friend to pretending to murder said friend. It's the same reason you might gasp when a friendly NPC gets murdered in your D&D game: you didn't think they were real, but you were willing to pretend they were.

The AI hype doesn't come from people who are pretending. It's a different thing.

[-] A_Very_Big_Fan@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

Seriously, I get that AI is annoying in how it's being used these days, but has the second guy seriously never heard of "anthropomorphizing"? Never seen Castaway? Or played Portal?

Nobody actually thinks these things are conscious, and for AI I've never heard even the most diehard fans of the technology claim it's "conscious."

(edit): I guess, to be fair, he did say "imagining" not "believing". But now I'm even less sure what his point was, tbh.

[-] Ephera@lemmy.ml 11 points 5 months ago

My interpretation was that they're exactly talking about anthropomorphization, that's what we're good at. Put googly eyes on a random object and people will immediately ascribe it human properties, even though it's just three objects in a certain arrangement.

In the case of LLMs, the googly eyes are our language and the chat interface that it's displayed in. The anthropomorphization isn't inherently bad, but it does mean that people subconsciously ascribe human properties, like intelligence, to an object that's stringing words together in a certain way.

[-] NutWrench@lemmy.world 12 points 5 months ago* (last edited 5 months ago)

We're good at scamming investors into thinking that a room full of monkeys on typewriters can be "AI." And all it takes to make that happen is to pour time, resources, lives and money (ESPECIALLY money) into building an army of fusion-powered robots to beat the monkeys into working just a little bit harder.

Because that's business's solution to everything: work harder, not smarter.

[-] ameancow@lemmy.world 5 points 5 months ago

We’re good at scamming investors into thinking that a room full of monkeys on typewriters can be “AI.”

Current generations of LLMs, from everything I've learned, are basically really, really, really large rooms of monkeys pounding on keyboards. The algorithm that sifts through that mess to find actual meaning isn't even particularly new or revolutionary; we just never had databases large enough, and indexable fast enough, to actually find the emergent patterns and connections between fields.

If you pile enough libraries in front of you and can sift out the exact lines that you know will make you feel a certain way, you can arrange that pile of information in ways that will give you almost any result you want.

The thing that tricks a lot of us is that we're never really conscious of what we want. We want to be tricked, though: we want to control and manipulate something that seems conscious for our own ends. That gives a feeling of power, so your brain validates the experience by telling you the story that it's alive. You see pictures that look neat and depict the scenes you wanted to see in your mind, so your brain convinces you that it's inventing things out of nothing and that it has to be magically smart to be able to mash Pikachu with Darth Vader.

[-] kshade@lemmy.world 10 points 5 months ago

Anthropomorphism is one hell of a drug

[-] JimSamtanko@lemm.ee 10 points 5 months ago

That is one astute point! Damn.

[-] match@pawb.social 9 points 5 months ago

Alan Watts, talking on the subject of Buddhist vegetarianism, said that even if vegetables and animals both suffer when we eat them, vegetables don't scream as loudly. It is not good for your own mental state to perceive something else suffering, whether or not that thing is actually suffering, because it puts you in an unhealthy position of ignoring your own inherent sense of compassion.

[-] RBWells@lemmy.world 9 points 5 months ago

I used to tell my kids "Just pretend to sleep, trick me into thinking you are sleeping, I don't know the difference. Just pretend, lay there with your eyes closed."

I could tell, of course, and they did end up asleep. But I think that is like the Turing test: if you are talking to someone and it's not a person but you can't tell, then from your perspective it's a person. Not necessarily from the perspective of the machine; we can only know our own experience, so that is the measure.

this post was submitted on 28 May 2024
1072 points (100.0% liked)

Fuck AI

1408 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 8 months ago