[-] slazer2au@lemmy.world 6 points 5 days ago

The more artificial intelligence is used within a law firm, the more lawyers are needed to vet the technology’s outputs.

I mean, trust but verify is a thing for a reason.

You cannot honestly call it "trust" if you still have to go through the output with a magnifying glass and make sure it didn't tell anyone to put glue on their pizza.

When any other technology fails to achieve its stated purpose, we call it flawed and unreliable. But AI is so magical! It receives credit for everything it happens to get right, and it's my fault when it gets something wrong.

[-] slazer2au@lemmy.world 2 points 4 days ago

The business must have some level of trust to deploy the tool.

[-] BlueMonday1984@awful.systems 2 points 4 days ago

They are trusting a "tool" that categorically cannot be trusted. They are fools to trust it.

[-] slazer2au@lemmy.world 1 point 4 days ago

Yes they are fools.

[-] dgerard@awful.systems 7 points 4 days ago

Distrust but verify

[-] snooggums@piefed.world 6 points 5 days ago

The fact that it needs repeating is confirmation that AI output is dogshit that cannot be trusted. Using AI as anything other than a starting point, like how search engines are used, is dangerous for anything where accuracy matters.

[-] BlueMonday1984@awful.systems 5 points 5 days ago

I mean, trust but verify is a thing for a reason.

And it just so happens that chatbots discourage the "verify" part by design...

this post was submitted on 02 Dec 2025
39 points (100.0% liked)

TechTakes
