[-] slazer2au@lemmy.world 6 points 4 days ago

The more artificial intelligence is used within a law firm, the more lawyers are needed to vet the technology’s outputs.

I mean, trust but verify is a thing for a reason.

You cannot honestly call it "trust" if you still have to go through the output with a magnifying glass and make sure it didn't tell anyone to put glue on their pizza.

When any other technology fails to achieve its stated purpose, we call it flawed and unreliable. But AI is so magical! It receives credit for everything it happens to get right, and it's my fault when it gets something wrong.

[-] slazer2au@lemmy.world 2 points 4 days ago

The business must have some level of trust to deploy the tool.

[-] BlueMonday1984@awful.systems 2 points 4 days ago

They are trusting a "tool" that categorically cannot be trusted. They are fools to trust it.

[-] slazer2au@lemmy.world 1 point 4 days ago

Yes, they are fools.
