[-] uuj8za@piefed.social 119 points 1 week ago* (last edited 1 week ago)

Fuck this ad for AI. It's trying to make it seem like workers don't use AI because they're scared. Only 8% said they were scared. The other 92% of us aren't scared we're going to be replaced by AI. We see how shitty AI is and we don't like it because it sucks and makes things slower, not faster.

The super-users we surveyed were around 3x more likely to have received both a promotion and pay raise in the past year, compared to employees who have been slow to adopt these tools

I do agree with this point. One of my team members recently got a lot of brownie points because he's been doing AI demos. The execs love him because he's visibly following orders. Does he generate way more code than everyone else? YES, this is actually a horrible thing, but execs are clueless and think more code == more better. Is he more productive than others? Definitely not. The hot garbage he's generating is just bug-ridden tech debt.

I guess I'm sabotaging our AI rollout by getting out of the way. You wanna inject AI everywhere? Fine, do it. I'm not gonna review it though. If you can't take the time to write something, I'm not going to spend my time reading it.

[-] jaybone@lemmy.zip 26 points 1 week ago

In about five years from now, there will be so much garbage code with unfixable bugs. It's difficult for me to imagine what kind of collapse this will cause, or how we will recover from it, which might take another decade. Fortunately we might be fighting each other with spears over fresh water by then, so we will have bigger problems to not solve.

[-] drcobaltjedi@programming.dev 9 points 1 week ago

I'd say be hopeful, but I don't know.

I am a software developer, and there have absolutely been times where a temp fix becomes permanent, but I've also had times where my boss has told me to clean up tech debt, or I've been able to say "look, this whole chunk of code is both wrong and unmaintainable" (wrong as in it didn't do the thing correctly, but it looked correct-ish) and I've been allowed to just rewrite the broken code from scratch.

Idk, I feel like at a certain point the code's bugs might be so obvious and troublesome that companies are forced to actually deal with the problem code, and when that happens will be different for every company and every program.

[-] quick_snail@feddit.nl 7 points 1 week ago

God, I can't imagine having to review code from that guy. What an ass

[-] CombatWombat@feddit.online 95 points 1 week ago

Oh no, it's so irresponsible of europesays.com to publish this practical list of ways to sabotage your company's AI rollout. Hopefully no other outlets include longer, more detailed lists, or we might see this kind of behavior start to spread:

The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.

[-] searabbit@piefed.social 63 points 1 week ago

This is amateur work. I've seen someone volunteer to head the staff AI training, outline in the presentation how bad AI is (i.e., terrible for the environment, not reliable, all true things), and also put out the most half-assed training rollout. The effect was that half the staff, intentionally or unintentionally, took up other forms of sabotage.

[-] man_wtfhappenedtoyou@lemmy.world 21 points 1 week ago

The balls on that guy, damn.

[-] quick_snail@feddit.nl 11 points 1 week ago

Balls? That's just doing your job

[-] Hackworth@piefed.ca 35 points 1 week ago

The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.

Not sure how that one sabotages the company's AI strategy. That's just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.

[-] CombatWombat@feddit.online 32 points 1 week ago

If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.

[-] dreamkeeper@literature.cafe 3 points 1 week ago

Not really imo. People will blame the leakers, not the llm, and they wouldn't be wrong. There's nothing you can do to stop people from leaking info into the public other than the threat of job loss and a massive lawsuit.

What would discredit the llm is if the llm provider violated their contract and used the data for something their customers didn't agree to.

[-] Lost_My_Mind@lemmy.world 13 points 1 week ago

And the CEO's phone number is 867-5309. I got it!

[-] noxypaws@pawb.social 1 points 1 week ago

same number that i enter at grocery store checkouts!

[-] lime@feddit.nu 10 points 1 week ago

if it's output by an ai, it can't be copyrighted.

[-] T156@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

That just sounds like the employees are using AI as asked of them, but the company's own offerings/tools are bad, or they're given bad goals, so they just turn to one of the major AI companies like ChatGPT, since it's all AI anyway. That's not overt sabotage.

[-] sundray@lemmus.org 93 points 1 week ago

I see they're laying the groundwork for a "stabbed in the back" narrative for when the AI bubble inevitably bursts.

[-] beirut_bootleg@programming.dev 13 points 1 week ago

DING-DING-DING

[-] Signtist@bookwyr.me 74 points 1 week ago

My boss asked people with tech skills to chime in about AI, so I made a quick report of a bunch of things that AI sounds like it'd be useful for in my line of work, and why it wouldn't be, with examples of times when a real human made the same kind of mistake while I've been working there, and how much that mistake cost the company. He decided not to pursue AI. You can call it sabotaging the AI strategy, or you can call it helping keep the company from making a major fuckup, take your pick.

[-] kkj@lemmy.dbzer0.com 31 points 1 week ago

I'd call it helping form the AI strategy. Sabotaging it would be waiting until they make one and then not following it.

[-] Slotos@feddit.nl 73 points 1 week ago

Some employees report outright refusing to use AI tools.

So having morals is sabotage now?

[-] SeeMarkFly@lemmy.ml 42 points 1 week ago* (last edited 1 week ago)

Only a living wage can prevent warehouse fires.

Too soon?

[-] Aberration13@lemmy.world 18 points 1 week ago

unfortunately too late, if only that poor ceo had known, he could have prevented this 😂

[-] ThePowerOfGeek@lemmy.world 20 points 1 week ago

From the article:

An Anthropic study released last month found AI is already theoretically capable of completing the majority of tasks associated with computer science, law, business, and finance, and other major white-collar fields

There's a huge difference between "capable of completing the majority of tasks" and "capable of completing the majority of tasks WELL".

Sure, you can have an AI code your web app or mobile app, for example. But it will be riddled with bugs and bloated with inefficient code.

And from what I've seen, it's not getting noticeably better at that.

But the AI companies won't acknowledge that, of course. They will continue selling the snake oil that cures everything that ails you.

[-] gravitas_deficiency@sh.itjust.works 18 points 1 week ago* (last edited 1 week ago)

An Anthropic study says AI can do everything

Crack dealer says crack is awesome

🙄

Honestly, the only reason I use that LLM shit now is because the job market in tech is getting a bit like the hunger games and my employer strongly encourages its use - and even then, the vast majority of my usage is “fancy search engine”. Even the boilerplate it gives me sometimes is like… really weirdly styled and quite often has to be corrected to be fit for purpose. I cannot understand people who just slap that shit in without even bothering to check it.

[-] TheDoctorDonna@piefed.ca 19 points 1 week ago

The company I work for is pushing Claude and, after noticing the lack of use, has set up a bunch of training sessions and instructed us to plan to attend one. I keep ignoring every push they make. Why the hell would I train this hallucinating liar to take my job? It's crazy to expect us to use the thing they want to eventually replace us with.

[-] Endymion_Mallorn@kbin.melroy.org 18 points 1 week ago

Only 29% admit to it. Most of the other 71% have the sense to keep their mouths shut.

[-] dreamkeeper@literature.cafe 6 points 1 week ago

There are plenty of AI shills out there, they're just a minority.

[-] Jankatarch@lemmy.world 17 points 1 week ago* (last edited 1 week ago)

Read the article. Their definition of "sabotage" includes not using AI tools.

I guess the wording says "sabotaging AI strategy," so it's our fault for falling for the intended misunderstanding?

[-] mrgoosmoos@lemmy.ca 5 points 1 week ago* (last edited 1 week ago)

yeah I guess I fall under that definition as well

sure, I am an AI skeptic. I work in engineering. I should be critical of any tools. that doesn't mean that I'm sabotaging the company's strategy, unless their strategy is to blindly implement AI tools. in which case yeah sure, but like surely that's not the actual strategy, right?

anyways, short story time:

  • our company had a demo for an AI drawing creation tool. it was not very impressive and their team couldn't answer many questions about how it works. it didn't seem like it would provide any value to us since it couldn't do complex drawings and simple drawings take little time and effort to create. so the moderate complexity stuff is where it could shine, which coincidentally is also where junior people train their skills to become intermediates and seniors. and like I'm not going to choose to turn our jobs into reviewing AI output, because that's bad for the company - it leads to poor job satisfaction, poor work quality, and inexperienced team members

  • our CTO is vibe coding a bunch of tools for us right now. his approach is to basically not validate anything and let people find issues. I report these issues when I find them, as well as go looking for them if I suspect something is wrong. does that make me a saboteur? trying to correct something?

  • same guy also used Claude to create some technical specifications/document summaries and sent them out externally. the external team had many questions because the documents didn't make sense. bad look for our company, I think, and lots of wasted time trying to figure out that information and then going back and correcting it later. am I a saboteur because I don't blindly adopt AI document generation and keep asking who validated certain information instead of just using it and proceeding with my work?

[-] Jankatarch@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Funny enough they count even "using unapproved AI tools" as sabotage. So I would say your actions fell under high treason?

[-] cloudy1999@sh.itjust.works 14 points 1 week ago

The AI First companies will reap what they sow.

[-] kurmudgeon@lemmy.world 13 points 1 week ago

The rest are telling that 29% to shut the fuck up.

[-] leoj@piefed.social 11 points 1 week ago

had an AI training this morning... Asked a very pointed question about how to prevent hallucinations and bad responses, their response was hilarious.

[-] axexrx@lemmy.world 10 points 1 week ago

I think i headed off AI at my job.

Back when ChatGPT was still new, the DOO of the 20-person company I worked at sent out an employment contract as a PDF asking us to e-sign. I replied-all asking if it was intentional that there was a 3-year non-compete in the middle of a list of terms under a header stating

' is an at will state. The following terms of employment are revokeable by either party upon notice of termination of employment:'

The company owner replied LOL back to the group, and that was the last of it. They never did actually end up asking us to sign an employment contract. I have yet to see any AI adoption, besides the office director occasionally responding to my emails with mistakes she blames on the AI summary.

[-] No1@aussie.zone 5 points 1 week ago

I was wondering the other day whether AI companies were polluting their competitors' AI sources to make their own AI look better.

[-] puppinstuff@lemmy.ca 4 points 1 week ago

You gotta pump those up! Those are rookie numbers.

[-] makyo@lemmy.world 3 points 1 week ago

Look what modernity has done to our Xwing pilots

[-] sheetzoos@lemmy.world 2 points 1 week ago

We did it Reddit! We stopped AI!

[-] Diurnambule@jlai.lu 2 points 1 week ago

Seeing the dumbfuckery everywhere in corpo, they don't need any sabotage. Some people still don't know how to write a Jira ticket 10 years after a reorganization, or can't create a git repo on the first try after 10 years in a post. Train an AI on them and you fuck up all your data. Most smart people don't want to be the first to go because they have objectives, and they know a new tool fucks up the rhythm at first, and AI is blingbling for most use cases for now.

[-] ramenshaman@lemmy.world 1 points 1 week ago

And we'll do it again!

this post was submitted on 21 Apr 2026
474 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago