This was already budgeted for when they decided to use a chatbot instead of paying employees to do that job.
Trying to blame the bot is just lame.
Corporate IT here. You're assuming they're smart enough to budget for this. They aren't. They never are. Things are rarely if ever implemented with any thought put into any scenario that isn't the happy path.
As a corporate IT person also. Hello.
But we do put thought into what can go wrong. We just don't budget for it, and as far as we're concerned a 99% success rate is 100% correct 100% of the time. Never mind that the 1% left over from 7 billion transactions per year is a fuck ton of failures.
Amen. Fwiw at my work we have an AI steering committee. No idea what they're actually doing, though, because you'd think the steady stream of articles and lawsuits against OpenAI and Microsoft over shady practices, most recently allowing AI to be used by militaries, potentially to kill people, would give them pause. I love knowing my org supports companies that enable the war machine.
Great! Please make sure that your server system is un-racked and physically present in court for cross examination.
Better put Ryan Gosling on standby in case he needs to "retire" the rouge Air Canada chatbot Blade Runner style.
Rogue*. I'm not usually that guy, but this particular typo makes me see red.
I know what you mean, except for me it makes me see rouge ever since I spent some time in France.
Yeah, that is indeed the joke I was making.
Ah, I no longer even Bat an eye at Rouge.
“Airline tried arguing virtual assistant was solely responsible for its own actions”
that’s not how corporations work. that’s not how ai works. that’s not how any of this works.
Oh, it is if they're using a dumb integration of an LLM in their chatbot that's given more or less free rein. Lots of companies do that, unfortunately.
If it's integrated in their service, unless they have a disclaimer and the customer has to accept it to use the bot, they are the ones telling the customer that whatever the bot says is true.
If I contract a company to do X and one of their employees fucks shit up, I will ask the company for damages, and they will have to deal with the worker internally. The bot is the worker in this instance.
So what you're saying is that companies will start hiring LLMs as "independent contractors"?
No, the company contracted the service from another company, but that's irrelevant. I'm saying that in any case, the company is responsible for any service it provides unless there's a disclaimer, whether that service is a chatbot, a ticketing system, a store, or workers.
If an Accenture contractor fucks up, the one liable to the client is Accenture. Now, Accenture may sue the worker, but that's beside the point. If a store mismanages products and sells the wrong stuff or enters incorrect prices, you go after the store chain, not the individual store, nor the worker. If a ticketing system takes your money but sends you an invalid ticket, you complain to the company that manages it, not the ones that programmed it.
It's pretty simple actually.
Oh, I believe it.
My 2024 bingo card didn't have a major corporation litigating in favor of AI rights in order to avoid liability, but I'm not disappointed to see it.
That's an important precedent. Many companies turned to LLMs to cut costs and dodge any liability for whatever the model might say. It's great that they get rekt in court.
Lol. “It wasn’t us - it was the bot! The bot did it! Yeah!”
"See officer, we didn't make these deepfakes, the AI did. Arrest the AI instead"
Making deepfakes isn't illegal.
That seems like a stupid argument?
Even if a human employee did that, aren't organisations normally vicariously liable?
That's what I thought of, at first. Interestingly, the judge went with the angle of the chatbot being part of their web site, and they're responsible for that info. When they tried to argue that the bot mentioned a link to a page with contradicting info, the judge said users can't be expected to check one part of the site against another part to determine which part is more accurate. Still works in favor of the common person, just a different approach than how I thought about it.
I like this. LLMs are powerful tools, but being rebranded as "AI" and crammed into ~everything is just bullshit.
The more decisions like this, where the deploying entity is responsible for the (lack of) accuracy, the better. At some point they'll notice they cannot guarantee that the correct information is the only information provided, because that's not how LLMs work in their function as stochastic parrots, and they'll stop using them for a lot of things. Hopefully sooner rather than later.
This is actually a very good outcome, if achievable: leave LLMs for uses where nothing important is on the line, or have humans supervising them.
A computer can never be held responsible so a computer must never make management decisions
- IBM in the 80s and 90s
A computer can never be held responsible so a computer must make all management decisions
- Corporations in 2025
Hey dumbasses, maybe don't let a loose LLM represent your company if you can't control what it's saying. It's not a real person; you can't shift the blame onto a non-sentient being.
Oh good, we've entered into the "we can't be held responsible for what our machines do" age of late-stage capitalism.
Nice that the legal precedent is now "Yes you can be" though.
Sure. In Canada.
Conveniently I live in Canada :D
But yeah, a similar US ruling would be nice
Not just the U.S. I'm seeing this as being something corporations will argue the world over, especially with AI.
Your honor, I'm not responsible for the petabytes of pirated content that my computer downloaded!
I can’t wait for something like this to hit SCOTUS. We’ve already declared corporations are people and money is free speech, why wouldn’t we declare chatbots solely responsible for their own actions? Lmao 😂😂💀💀😭😭
money is free speech
Can someone explain this to me? I assume this is in relation to campaign finance, but what was the actual argument that makes "(spending/accepting/?) money is free speech"?
Maybe something along the lines of "if you can afford fines you can say whatever you want including but not limited to offence, lies, hate speech, and slander"
Par for the course for this airline, in my experience. They're allergic to responsibility.
This isn't fair to Steve, but corps shouldn't have to bear this burden either. We could levy a tax on non-corporate citizens and use the revenue to create a fund to insure against situations like these. The fund would probably be best administered by corporate citizens.
Found the guy with the ~~AI~~ LLM girlfriend.
If big companies want to give jobs to bots instead of humans, they need to reap the consequences.
Side note: Personally, I've never found a chatbot helpful. They typically only provide information that I can find for myself on the web site. If I'm asking someone for help, it's solely because I can't find it myself.
Suck it feds, I hope your shares plummet