racist ai
(lemmy.blahaj.zone)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Is the developer also culpable? How about the data scientist? How about the data engineer? How about the BI Analyst? And the janitor?
How about the manufacturer of the knife / pill / gas they used to kill themselves?
As a developer: yes to the developer and data scientist and data engineer. Scientists and engineers should be responsible for their work.
The BI analyst: maybe, if they're responsible for collecting data that ignores the impact of the service on teens. If they're doing sales comparisons between Anthropic and OpenAI... eh, I dunno.
The janitor: probably not since I don't feel like the deaths are widely publicized and they probably work for a contracting company that handles the building.
That's a lot of people that are going to die for doing data mining...
In most cases suicide isn't anyone's fault. People like to find someone to blame, and I get that, but people who are even remotely close to doing that were always going to find a way and a justification.
No AI is going to convince me to kill myself if I didn't already want to. Equally, the inverse must be true.
That's not to say that the companies are completely off the hook, it's utterly ridiculous that these conversations weren't flagged and sent to a human, but I think it's daft to suggest that these people would necessarily still be alive had the AI not existed.
I completely agree. Not off the hook. There should be better guardrails (like recipes for bombs and other dangerous things) but from there to accuse the CEO of murder there's quite a stretch.
If you manufacture a knife that convinces children to kill themselves, yeah, you're culpable. Everyone else can be charged according to their level of culpability, but any time a company is found liable for killing someone the CEO should be sentenced for their murder. Maybe that would incentivize CEOs to stop getting people killed.
What about a knife that does the slicing of the body, the killing itself?
I don't think there's a difference. Children are not culpable, which means grooming children to kill themselves is murder.
Selling knives to children is murder too?
Selling knives to families with children?
Selling knives to women who are pregnant?
Selling knives that talk and tell you to kill yourself to children is murder.
You're refusing to recognize the grooming angle to this.
You're refusing to recognize the tool angle to this, so that makes two of us.
Selling tools that kill people, knowing that they are dangerous, should have consequences.
Would the world really be a worse place if Sam Altman were tried for murder? What's the problem?
You are describing knives, and forks, and cars, and pools, and...
I'm losing patience. I'm obviously fucking not talking about regular fucking objects; a knife doesn't fucking talk and convince you to kill yourself. There's an obvious categorical difference between objects and tools designed to trick you into thinking they're intelligent. It's murder. Someone needs to face consequences.
Why would it be bad if Sam Altman went to prison? Would the world be a worse place? Why are you protecting him?
No, a knife just severs arteries and makes you bleed out.
A knife doesn't pretend to be your friend and convince you to sever your arteries. Categorically different.
Answer my question. Why would it be bad for Sam Altman to be tried for murder? If we decided that the owners of AI companies were culpable for the behavior of their chatbots and the consequences of their actions, wouldn't that solve the problem?
Categorically different, one kills you, the other just talks. (it doesn't talk, but I'll humor you)