top 13 comments
[-] it_depends_man@lemmy.world 91 points 1 day ago

A new report

BY THE AI COMPANY "WRITER"

and research firm Workplace Intelligence found a massive portion of workers across the US, UK, and Europe are intentionally trying to sabotage their bosses’ AI initiatives.

Please don't spread obviously doctored "reports".

[-] greyscale@lemmy.grey.ooo 63 points 1 day ago

Good.

It is morally and ethically the right thing to do.

Also, did you know it is ethically and morally correct to firebomb datacenters? They're being used for structural violence, and are basically piñatas.

[-] Bluegrass_Addict@lemmy.ca 39 points 1 day ago

...workers admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.

tbh it just reads like people are just using AI, not actually sabotaging it. lol it's such trash

[-] cabbage@piefed.social 24 points 1 day ago

"workers admitted to sabotaging their company’s AI by [...] intentionally using low-quality AI output in their work without fixing it"

Lol. Sounds an awful lot like the company is sabotaging itself in this case.

[-] Monument@piefed.world 8 points 1 day ago

“We have poor customer data safeguards, confidently present subpar work as acceptable, and have failed to adequately train our intended users, but would like you to believe it’s all the users' fault.”

[-] greyscale@lemmy.grey.ooo 12 points 1 day ago

The sabotage narrative did feel weak when I was listening to Natasha Bernal talking. It's probably not sabotage, it's just that their data is wank and the employees aren't paid enough to care to fix it.

[-] WanderingThoughts@europe.pub 6 points 1 day ago

Just like just doing your job is quiet quitting, AI sabotage means not spending unpaid overtime to completely redo the slop.

[-] ctry21@sh.itjust.works 21 points 1 day ago

Can't sabotage what's already broken. The rare time I've been asked to use it for a piece of work, the output is so shit and full of errors that it would be easier to have done it by hand as a human.

[-] Gsus4@mander.xyz 11 points 1 day ago

Turns out when you're told to increase your output to replace 5 colleagues with LLMs...there is no time to find and fix all the bugs.

[-] T156@lemmy.world 15 points 1 day ago

The categories that they used for "sabotage" (Entering proprietary information into a different AI, using unapproved chatbots, and using low-quality AI responses as-is) seem like they're just put together so they can blame employees for sabotage for the failure of the AI rollout, rather than employers trying to wedge it onto a bad use case, or not rolling it out properly.

The first two just seem like the company having issues with people going straight to ChatGPT, and using that as-is, and the third seems to be more people not really caring and using the AI output as required.

None of that comes across as outright sabotage like the organisation or the article try to imply. All three seem like reasonable end-points of telling people to use AI and giving them metrics they need to meet, or a not-great interface, so they just go off and use a different AI thing, because it's all AI, and basically the same thing, right?

[-] brsrklf@jlai.lu 5 points 1 day ago

sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools,

This is counter-productive and can get you in big trouble IMO. I don't even get what these are protesting.

or intentionally using low-quality AI output in their work without fixing it.

This is better and I think I would totally do this if management forced me to use AI. If they want to pretend using this thing is a better use of my time, I'll give them what they want.

Fortunately I am working for an administration that has had rather tame expectations for gen AI use till now. They're basically just like "experiment if you want, be careful and use what works for you". So I just keep doing what I always did.

[-] theunknownmuncher@lemmy.world 10 points 1 day ago* (last edited 1 day ago)

I don't even get what these are protesting.

It doesn't make sense because the protest is an invention.

or intentionally using low-quality AI output in their work without fixing it.

Translated: "our software tool works poorly and produces bad output. If workers do not work to manually fix the output, then they are InTeNtIoNaLlY sAbOtAgInG our business. Responsibility should be on the workers to fix our product's flaw."

[-] brsrklf@jlai.lu 4 points 1 day ago* (last edited 1 day ago)

That would certainly explain it.

I guess the story they're trying to push is "People intentionally use bad AI just to give officially supported, good AI a bad name!". And that's quite the ridiculous claim.

this post was submitted on 17 Apr 2026
163 points (100.0% liked)

Technology