[-] HedyL@awful.systems 37 points 1 week ago

Refusing to use AI tools or output. Sabotage!

Definitely guilty of this. I've refused to use AI-generated output when it was clearly hallucinated BS from start to finish (repeatedly!).

I work in the field of law/accounting/compliance, btw.

[-] HedyL@awful.systems 15 points 1 week ago* (last edited 1 week ago)

Maybe it's also considered sabotage if people (like me) try prompting the AI with 5 to 10 different questions they're knowledgeable about, get wrong (but smart-sounding) answers every time despite clearly worded prompts, and then refuse to keep trying. I guess we're expected to try and try again with different questions until one correct answer comes out, then use that one to "evangelize" about the virtues of AI.

[-] Slatlun@lemmy.ml 9 points 6 days ago

This is how I tested too. It failed. Why would I believe it on anything else?

[-] tazeycrazy@feddit.uk 2 points 1 week ago

You can definitely ask the AI for more jargon and add information about irrelevant details to make it practically unreadable. Pass this through the LLM to add more vocabulary, deep fry it and send it to management.
