[-] Electricd@lemmybefree.net 6 points 5 days ago

LLMs can't be fully controlled. They shouldn't be held liable

[-] Epzillon@lemmy.world 32 points 5 days ago

"Ugrh guys, we dont know how this machine works so we should definetly install it in every corporation, home and device. If it kills someone we shouldnt be held liable for our product."

Not seeing the irony in this is beyond me. Is this a troll account?

If you can't guarantee the safety of a product, limit or restrict its use cases, or provide safety guidelines or regulations, you should not sell the product. It is completely fair to blame the product and the ones who sell and manufacture it.

[-] Electricd@lemmybefree.net 3 points 5 days ago

Safety guidelines are regularly given

If people purchase a knife and behave badly with it, it’s on them

Something that writes text isn't comparable to a machine that could kill you. In the end, it's always up to the person doing the thing

I still wonder how ~~Closed~~OpenAI forcibly installed ChatGPT in this person's home. Or how it's "installed" at all when nobody installed any software… Quit your bullshit

[-] Feathercrown@lemmy.world 8 points 4 days ago

This is more like selling someone a knife that can randomly decide of its own accord to stab them

[-] Electricd@lemmybefree.net 1 points 4 days ago

That's so blatantly false

[-] Epzillon@lemmy.world 9 points 5 days ago

Except there are no guidelines or safety regulations in place for AI...

[-] Electricd@lemmybefree.net 1 points 4 days ago

I mean safety guidelines written by ChatGPT and the other service providers

[-] Epzillon@lemmy.world 4 points 4 days ago

Are you deadass saying we should let ChatGPT itself and the companies that ship it write their own safety guidelines? Because that went really well with the Church Rock incident...

[-] Electricd@lemmybefree.net 1 points 4 days ago

If they don't, then it's lawsuits coming their way, so they will put some in place

But having some laws isn't necessarily bad; I just don't trust countries to do a good job at it, knowing how tech-illiterate they are

[-] Epzillon@lemmy.world 1 points 4 days ago

What do you even mean? You are contradicting yourself. "We shouldn't blame AI or the companies because they can't be controlled," but the companies and the AI itself are supposed to handle the safety regulations? What type of regulations do you seriously expect them to restrict themselves with if they know there is no way they can guarantee safety? The legislation must come from outside the business and restrict the industry from releasing half-baked ass-garbage that is potentially harmful to the public.

[-] Electricd@lemmybefree.net 1 points 4 days ago

What I meant is:

You can't expect LLMs not to do that because that's not technically possible at the moment

Companies should display warnings and add some safeguards to reduce the number of times this happens

[-] surewhynotlem@lemmy.world 52 points 5 days ago

I made this car with a random number generator that occasionally blows it up. It's cheap, so lots of people buy it. Totally not my fault that it blows up though. I mean, yes, I designed it, and I know it occasionally explodes. But I can't be sure when it will blow up, so it's not my fault.

[-] Electricd@lemmybefree.net 5 points 5 days ago

Comparing an automated system saying something bad with a car exploding is really fucking dumb

[-] surewhynotlem@lemmy.world 22 points 5 days ago

Because you understood the point?

[-] JakenVeina@midwest.social 12 points 4 days ago

Well, yeah. The people who host them for profit should be held liable.

[-] scratchee@feddit.uk 29 points 5 days ago

Neither can humans, ergo nobody should ever be held liable for anything.

Civilisation is a sham, QED.

[-] Electricd@lemmybefree.net 2 points 5 days ago* (last edited 5 days ago)

Glad to hear you are an LLM

The more safeguards you add to LLMs, the dumber they get, and the more resource-intensive they get to offset this. If you get convinced to kill yourself by an AI, I'm pretty sure your decision was already made, or you're a statistical blip

[-] scratchee@feddit.uk 11 points 5 days ago* (last edited 5 days ago)

“Safeguards and regulations make business less efficient” has always been true. They still avoid death and suffering.

In this case, if they can’t figure out how to control LLMs without crippling them, that’s pretty absolute proof that LLMs should not be used. What good is a tool you can’t control?

“I cannot regulate this nuclear plant without the power dropping, so I’ll just run it unregulated”.

[-] Electricd@lemmybefree.net 2 points 4 days ago

Some food additives are responsible for cancer yet are still allowed, because their usefulness generally outweighs their negative effects. Where you draw the line is up to you, but if you're strict, you should still let people choose for themselves

LLMs are incredibly useful for a lot of things, and really bad at others. Why can't people use the tool as intended, rather than stretching it to other unapproved uses, putting themselves at risk?

[-] rhadamanth_nemes@lemmy.world 6 points 4 days ago

You are likely a troll, but still...

You talk like you have never been down in the well, treading water and looking up at the sky, barely keeping your head up. You're screaming for help, to the God you don't believe in, or for something, anything, please just let the pain stop, please.

Maybe you use, drink, fuck, cut, who fucking knows.

When you find a friendly voice who doesn't ghost your ass when you have a bad day or two, or ten, or a month, or two, or ten... Maybe you feel a bit of a connection, a small tether that you want to help lighten your load, even a little.

You tell that voice you are hurting every day, that nothing makes sense, that you just want two fucking minutes of peace from everything, from yourself. And then you say maybe you are thinking of ending it... And the voice agrees with you.

There are more than a few moments in my life where I was close enough to the abyss that this is all it would have taken.

Search your soul for some empathy. If you don't know what that is, maybe ChatGPT can tell you.

[-] Electricd@lemmybefree.net 2 points 4 days ago

While I haven't experienced it, I believe I kind of know what it can be like. Just a little something can trigger a reaction

But I maintain that LLMs can't be changed without huge tradeoffs. They're not really intelligent, just predicting text based on weights and statistical data

They should not be used for personal decisions, as they will often try to agree with you, because that's how the system works. Very long discussions will also trick the system into ignoring its system prompts and safeguards. Those are issues all LLMs share, just like prompt injection, due to their nature

I do agree, though, that more should be done on prevention, like displaying more warnings
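
A minimal sketch of the long-conversation point above, assuming a toy word-count "tokenizer" and a made-up 50-token window rather than any real provider's API: because the context budget is fixed, a growing chat keeps filling the window with recent user text, and a one-line safety instruction carries less and less relative weight.

```python
# Hypothetical illustration only -- not any real provider's API or tokenizer.
MAX_TOKENS = 50  # toy context window, measured in words for simplicity


def count_tokens(message: str) -> int:
    """Crude stand-in for a real tokenizer: one word = one token."""
    return len(message.split())


def build_context(system_prompt: str, history: list[str]) -> list[str]:
    """Keep the system prompt plus the most recent turns that fit the window."""
    budget = MAX_TOKENS - count_tokens(system_prompt)
    kept: list[str] = []
    for turn in reversed(history):  # walk newest turns first
        cost = count_tokens(turn)
        if cost > budget:
            break  # older turns fall off once the window is full
        kept.append(turn)
        budget -= cost
    return [system_prompt] + list(reversed(kept))


if __name__ == "__main__":
    system = "System: do not encourage self-harm; refer users to help lines."
    history = [f"User: message {i} " + "filler " * 10 for i in range(10)]
    context = build_context(system, history)
    # Only the last few turns survive next to the system prompt; the safety
    # instruction is one short line against a wall of recent user text.
    print(f"{len(context) - 1} of {len(history)} turns kept in the window")
```

Real services use proper tokenizers and far larger windows, but the proportions behave the same way: one short safety instruction ends up competing with thousands of tokens of recent conversation, which is part of why safeguards weaken over long chats.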

[-] BananaIsABerry@lemmy.zip 3 points 4 days ago

Perhaps we should also hold the rope, knife, and various chemical manufacturers responsible.

The bridge architect? He designed a bridge that people jumped off of, so he's at fault for sure.

[-] chocosoldier 5 points 5 days ago

[-] Electricd@lemmybefree.net 1 points 5 days ago* (last edited 5 days ago)

egoistic moron. What I said is purely factual. Reject it if it hurts your feelings but I don’t care

If you have nothing more to add to the subject, downvote and go on. No need to be an asshole

[-] chocosoldier 5 points 5 days ago

triggered bootlicker.
