I would say this right here should be enough to not do business with them, but they sell AI slop, so I wouldn’t be doing business with them anyway.
It's a bullshit machine. It's doing what it's designed for.
The AI mashed together information that didn't belong together in that context and returned something incorrect. It was wrong, but it did not invent anything.
Use of the intentional stance is perfectly justified in this kind of situation.
No.
Yes.
“Hallucinates”
AI doesn't do that either. That is another example of trying to make AI sound like it has reasoning and intent, like a person, instead of the pattern-matching weighted randomizer that it is.
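To make "pattern-matching weighted randomizer" concrete, here is a minimal sketch of a single next-token step. The vocabulary, scores, and numbers are invented for illustration, not taken from any real model:

```python
import math
import random

# Toy sketch of a "pattern-matching weighted randomizer": one next-token step.
# The vocabulary and scores here are made up for illustration; a real LLM
# scores tens of thousands of tokens using weights learned from training data.
vocab = ["cat", "dog", "policy", "logout"]
logits = [2.0, 1.5, 0.3, -1.0]  # the model's raw scores for each candidate

def softmax(scores, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Weighted random choice: pattern-matched scores in, one sampled token out.
# There is no reasoning or intent anywhere in this step, just arithmetic.
probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)  # usually "cat", sometimes "dog", occasionally "policy"
```

Scale that loop up by billions of learned weights and you get fluent text, but the mechanism is still scoring and sampling, which is why the same prompt can yield different answers and why confident-sounding output carries no guarantee of truth.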
Since you're going to be ridiculously pedantic, it isn't AI. Start there, where it's actually potentially useful to make the distinction.
it isn’t AI
That is why I am pushing back on using these terms!
Sure, it's not hallucinating in the sense that a human does. But that is the industry term, and it has been in use in the study of neural networks since 1995:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Terminology changes, and bad terminology can and should be changed.
Anthropomorphizing software only helps the people who are pushing it.
Yes, I am fully aware of the term's origin, and it was clever at the time, but given how LLMs and other AI are currently being promoted, it is important to point out that it is not literally accurate. Calling them hallucinations, saying they invent things, or anything else that sounds anthropomorphic is used by the companies selling these products to deflect from the fact that what is currently being shoved down our throats are unreliable bullshit generators.
To summarize: an AI bot, which isn't smart enough to think for itself, decided to think for itself. It then created a new policy that when programmers switch between machines, they get logged out. Why? Because. Just because.
This is what the AI decided. A new policy led by no one, and the only reason it gets called out is that THIS change is instantly noticeable. If the new policy affected you over time, it might never be called out, because that's been the policy for six months now.
But the fact remains that AI just decided to lead humans. The decision was made by no one. THIS change was a low-stakes change. By that I mean nobody was hurt. Nobody died. Nobody was in danger. Nobody had medications altered.
But this is the road we're traveling. Maybe one day the AI decides that green lights make traffic flow better, so now without warning, all the lights in a city are just green.
Or maybe AI is in charge of saving a company money, and it decides that paying for patients' insulin costs the company a lot of money without direct profit. So it cancels that coverage.
There's a near infinate amount of things that an AI can logically think makes sense because it has only a limited set of data on the human experience.
AI will NEVER know what it's like to be human. It can only cobble together an outcome based on what little data we feed it. What comes next is just an educated guess from an uneducated unempathetic machine.
AI just decided to lead humans. The decision was made by no one. THIS change was a low-stakes change.
AI didn't make the change. AI made no policy changes. The logout thing was a backend bug. The only thing the AI did was hallucinate that the bug was actual policy.
That said, I agree with your sentiment regarding where the world is heading. If it weren't for pesky regulations and optics, the military would already be flying 100% AI killer drones.
AI didn’t make the change. AI made no policy changes. The logout thing was a backend bug. The only thing the AI did was hallucinate that the bug was actual policy.
And honestly, it's completely fair that it would behave this way if its training data contained actual interactions with support agents or developers apologizing for shitty software. I don't even know how many times I've encountered people in my career who insisted -- to quote 30 Rock -- that they had built the bookshelf that way on purpose, and that they wanted the books to slide off.
Unless this is a draft for a sci-fi short story, you should look into how current AI models actually work. They cannot "decide" or logically think about anything. You're humanizing an algorithm because it can produce text that sounds like it came out of a brain, but that doesn't make it a form of cognition.
So weird that this keeps happening
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"