Technology
This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask via DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not post low effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: crypto related posts, unless essential, are disallowed
A bit somewhere gets flipped from 0 to 1, and the ridiculously complicated program that's designed to output natural language text says something unexpected.
I know it seems really creepy, but I don't personally believe there's any real sentience or intention behind it. Stories about machines and computers saying stuff like this and taking over the world are probably in Gemini's training data somewhere.
AI companies need to stop scraping 4chan
If bits randomly got flipped 0 to 1, we wouldn't get stable software.
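To put the "bit flip" framing in perspective: a single flipped bit in a 64-bit float can change its value by a vanishingly small amount or by hundreds of orders of magnitude, depending on which bit it hits. A minimal Python sketch (the `flip_bit` helper is hypothetical, just for illustration):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit of a float's 64-bit IEEE 754 representation."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    bits ^= 1 << bit          # toggle the chosen bit
    (y,) = struct.unpack("<d", struct.pack("<Q", bits))
    return y

w = 0.5
print(flip_bit(w, 0))    # lowest mantissa bit: value barely changes
print(flip_bit(w, 62))   # high exponent bit: value becomes astronomically large
```

Which is exactly why real hardware uses ECC memory and why stable software can't tolerate random flips: the damage isn't gradual, it's all-or-nothing depending on where the flip lands.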
Definitely not a question of AI sentience; I'd say we're as close to that as the Wright Brothers were to figuring out the Apollo moon landing. But it definitely raises questions about whether we should be giving everybody access to machines that can fabricate erroneous statements like this at random, and what responsibility the companies creating them bear if their product pushes someone to suicide or radicalizes them into committing an act of terrorism. Because them shrugging and saying, "Yeah, it does that sometimes. We can't and won't do anything about it, though" isn't gonna cut it, in my opinion.
So about 66 years then? I personally think we're very far from creating anything on par with human intelligence, but that isn't necessary for a lot of terrible things to come from AI tech. Honestly I would be more comfortable with a human-level or greater AI than something lesser still capable of agency.
If an AI is making decisions with consequences, I'd prefer that it could be reasoned with as a peer, or at the least be smart enough to consider its own long-term sustainability, which must in some way be linked with humanity's.
The Wright Brothers didn't figure out the moon landing. They figured out aerodynamics. There were plenty of other discoveries that went into the moon landing, such as suborbital flight, supersonic flight, and orbital dynamics, to list a few. It's less about the specific time than about the level of technology, and the timescale is much harder to pin down due to the nature of technological innovation.
As for the rest, I completely agree. One of the most dangerous things about these AI programs is the lack of responsibility or culpability.
I didn't mean to imply that the Wright Brothers were single-handedly responsible for the space-age tech boom lol, just that the royal "we" were about 66 years out from the moon landing at the time the Wright Brothers had their first successful flight.
You read about the teenager who fell in love with a Daenerys Targaryen chatbot that convinced him to join her, so he killed himself? Yeah, the public was not ready for AI.
While I agree this is probably just Reddit data contamination and a weird hallucination, it might not be in the future. We don't know what makes us sentient; we argue over which other animals might actually be sentient besides us. How can we even tell when a machine becomes sentient?
As corporations pour more and more power into these models and alter them further, at some point one might actually become sentient, and we will dismiss it like every other time. It might be in a year, or maybe in a hundred years, but if machine sentience is even possible, it is inevitable. And we might not be able to tell at all: LLMs are made to talk, and they have all of human knowledge at their disposal. They're already convincing enough to fool a bunch of people.