submitted 9 months ago by ylai@lemmy.ml to c/nottheonion@lemmy.world
all 43 comments
[-] InEnduringGrowStrong@sh.itjust.works 169 points 9 months ago

This was already budgeted for when they decided to use a chatbot instead of paying employees to do that job.
Trying to blame the bot is just lame.

[-] whoisearth@lemmy.ca 24 points 9 months ago

Corporate IT here. You're assuming they're smart enough to budget for this. They aren't. They never are. Things are rarely if ever implemented with any thought put into any scenario other than the happy path.

[-] Patches@sh.itjust.works 10 points 9 months ago* (last edited 9 months ago)

As a corporate IT person also. Hello.

But we do put thought into what can go wrong. We just don't budget for it, and as far as we are concerned a 99% success rate is 100% correct 100% of the time. Never mind that 1% of 7 billion transactions per year is a fuck ton of failures.

[-] whoisearth@lemmy.ca 5 points 9 months ago

Amen. FWIW, at my work we have an AI steering committee. No idea what they're actually doing, though. You'd think the steady stream of articles and lawsuits against OpenAI and Microsoft over shady practices, most recently allowing AI to be used by militaries, potentially to kill people, would give them something to discuss. I love knowing my org supports companies that enable the war machine.

[-] TehWorld@lemmy.world 128 points 9 months ago

Great! Please make sure that your server system is un-racked and physically present in court for cross examination.

[-] ImplyingImplications@lemmy.ca 14 points 9 months ago

Better put Ryan Gosling on standby in case he needs to "retire" the rouge Air Canada chatbot Blade Runner style.

[-] Skyhighatrist@lemmy.ca 40 points 9 months ago* (last edited 9 months ago)

Rogue*. I'm not usually that guy, but this particular typo makes me see red.

[-] Robdor@lemmy.world 8 points 9 months ago

I know what you mean, except for me it makes me see rouge ever since I spent some time in France.

[-] Skyhighatrist@lemmy.ca 17 points 9 months ago

Yeah, that is indeed the joke I was making.

[-] Carighan@lemmy.world 2 points 9 months ago

Ah, I no longer even Bat an eye at Rouge.

[-] dipshit@lemmy.world 124 points 9 months ago

“Airline tried arguing virtual assistant was solely responsible for its own actions”

that’s not how corporations work. that’s not how ai works. that’s not how any of this works.

[-] Pantoffel@feddit.de 21 points 9 months ago

Oh, it is if they're using a dumb integration of an LLM in their chatbot that's given more or less free rein. Lots of companies do that, unfortunately.

[-] fushuan@lemm.ee 16 points 9 months ago

If it's integrated in their service, unless they have a disclaimer and the customer has to accept it to use the bot, they are the ones telling the customer that whatever the bot says is true.

If I contract a company to do X and one of their employees fucks shit up, I will claim damages from the company, and they internally will have to deal with the worker. The bot is the worker in this instance.

[-] lunar17@lemmy.world 1 points 9 months ago

So what you're saying is that companies will start hiring LLMs as "independent contractors"?

[-] fushuan@lemm.ee 7 points 9 months ago

No, the company contracted the service from another company, but that's irrelevant. I'm saying that in any case, the company is responsible for any service it provides unless there's a disclaimer, whether that service is a chatbot, a ticketing system, a store, or workers.

If an Accenture contractor fucks up, the one liable to the client is Accenture. Now, Accenture may sue the worker, but that's beside the point. If a store mismanages products and sells the wrong stuff or inputs incorrect prices, you go after the store chain, not the individual store, nor the worker. If a ticketing system takes your money but sends you an invalid ticket, you complain to the company that manages it, not the ones that programmed it.

It's pretty simple actually.

[-] dipshit@lemmy.world 5 points 9 months ago

Oh, I believe it.

[-] Delta_V@lemmy.world 8 points 9 months ago

My 2024 bingo card didn't have a major corporation litigating in favor of AI rights in order to avoid liability, but I'm not disappointed to see it.

[-] andrew_bidlaw@sh.itjust.works 102 points 9 months ago

That's an important precedent. Many companies turned to LLMs to cut costs and dodge any liability for whatever the model says. It's great that they get rekt in court.

[-] homesweethomeMrL@lemmy.world 84 points 9 months ago

Lol. “It wasn’t us - it was the bot! The bot did it! Yeah!”

[-] TxzK@lemmy.zip 30 points 9 months ago

"See officer, we didn't make these deepfakes, the AI did. Arrest the AI instead"

[-] JackGreenEarth@lemm.ee 1 points 9 months ago

Making deepfakes isn't illegal

[-] RobotToaster@mander.xyz 81 points 9 months ago

That seems like a stupid argument?

Even if a human employee did that, aren't organisations normally vicariously liable?

[-] atx_aquarian@lemmy.world 74 points 9 months ago

That's what I thought of, at first. Interestingly, the judge went with the angle of the chatbot being part of their web site, and they're responsible for that info. When they tried to argue that the bot mentioned a link to a page with contradicting info, the judge said users can't be expected to check one part of the site against another part to determine which part is more accurate. Still works in favor of the common person, just a different approach than how I thought about it.

[-] Carighan@lemmy.world 24 points 9 months ago

I like this. LLMs are powerful tools, but being rebranded as "AI" and crammed into ~everything is just bullshit.

The more rulings like this, where the deploying entity is responsible for the (lack of) accuracy, the better. At some point they'll notice they cannot guarantee that only correct information is provided, because that's not how LLMs work in their function as stochastic parrots, and they'll stop using them for a lot of things. Hopefully sooner rather than later.

[-] sukhmel@programming.dev 2 points 9 months ago

This is actually a very good outcome, if achievable: leave LLMs to be used where nothing important is on the line, or have humans control them.

[-] kandoh@reddthat.com 60 points 9 months ago

A computer can never be held responsible so a computer must never make management decisions

  • IBM in the 80s and 90s

A computer can never be held responsible so a computer must make all management decisions

  • Corporations in 2025

[-] Baggie@lemmy.zip 42 points 9 months ago

Hey dumbasses, maybe don't let a loose LLM represent your company if you can't control what it's saying. It's not a real person; you can't shift blame onto a non-sentient being.

[-] FlyingSquid@lemmy.world 35 points 9 months ago

Oh good, we've entered into the "we can't be held responsible for what our machines do" age of late-stage capitalism.

[-] Revan343@lemmy.ca 7 points 9 months ago

Nice that the legal precedent is now "Yes you can be" though.

[-] FlyingSquid@lemmy.world 4 points 9 months ago
[-] Revan343@lemmy.ca 3 points 9 months ago

Conveniently I live in Canada :D

But yeah, a similar US ruling would be nice

[-] FlyingSquid@lemmy.world 3 points 9 months ago

Not just the U.S. I'm seeing this as being something corporations will argue the world over, especially with AI.

[-] Masamune@lemmy.world 2 points 9 months ago

Your honor, I'm not responsible for the petabytes of pirated content that my computer downloaded!

[-] shiroininja@lemmy.world 23 points 9 months ago

I can’t wait for something like this to hit SCOTUS. We’ve already declared corporations are people and money is free speech, why wouldn’t we declare chatbots solely responsible for their own actions? Lmao 😂😂💀💀😭😭

[-] Bob_Robertson_IX@lemmy.world 3 points 9 months ago

money is free speech

Can someone explain this to me? I assume this is in relation to campaign finance, but what was the actual argument that makes "(spending/accepting/?) money is free speech"?

[-] sukhmel@programming.dev 1 points 9 months ago

Maybe something along the lines of "if you can afford fines you can say whatever you want including but not limited to offence, lies, hate speech, and slander"

[-] rbesfe@lemmy.ca 21 points 9 months ago

Par for the course for this airline, in my experience. They're allergic to responsibility.

[-] lntl@lemmy.ml 5 points 9 months ago* (last edited 9 months ago)

This isn't fair to Steve, but corps shouldn't have to bear this burden either. We could levy a tax on non-corporate citizens and use the revenue to create a fund to insure against situations like these. The fund would probably be best administered by corporate citizens.

[-] Patches@sh.itjust.works 7 points 9 months ago* (last edited 9 months ago)

Found the guy with the ~~AI~~ LLM girlfriend.

[-] LinkOpensChest_wav 3 points 9 months ago

If big companies want to give jobs to bots instead of humans, they need to reap the consequences.

Side note: Personally, I've never found a chatbot helpful. They typically only provide information that I can find for myself on the website. If I'm asking someone for help, it's precisely because I couldn't find it myself.

[-] Shake747@lemmy.dbzer0.com 1 points 9 months ago

Suck it feds, I hope your shares plummet

this post was submitted on 15 Feb 2024
618 points (100.0% liked)
