My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people about integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets being done, I guess?)

The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it's obvious how google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded, IMO, are spam.

And that isn't even getting into the massive amounts of theft behind the training data, or the immense amounts of electricity it takes to train, run, and serve all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.

I'm literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only fault and flaw is my (i.e. any given SE's) unwillingness to adapt and onboard.

Looking for advice from people who have had to navigate similar crap. Because I feel like I'm at a point where I must adapt or eventually get fired.

[-] Sunsofold@lemmings.world 2 points 13 hours ago

The key to 'liking' AI is to turn corporation-brained and stop caring about any sense of quality, customer satisfaction, or even your own satisfaction.

Minimum. Viable. Product.

That's why they love AI. It promises to make it so that you can always BS something out the door in 10 minutes, collect the check, and move on before it breaks. When you have no conception of pride in your product, the money you get paid to sell cheap trash is just as spendable as what you get selling artisanal creations, and if you can BS out something in half the time, you can do it to twice as many customers. Double revenue. And by the time the lawsuits come in, you're supposed to be already working somewhere else. That's the company's problem, not yours, right?

sigh

[-] gwl 3 points 1 day ago

Just don't.

Don't change your morals just cause of peer pressure, especially not corporate pressure

[-] olafurp@lemmy.world 1 points 1 day ago

AI is pretty bad at most of the things you do that are actually valuable, so your critique definitely holds. It's bad for the environment, drives tech consolidation, and all round creates about as many problems as it claims to solve.

AI in the sense of neural networks is genuinely good at narrow tasks like playing chess and detecting melanomas, but I'm going to give some tips specifically for LLMs.

Treat it as a dumb intern. You ask it to find research papers but you have to read them yourself to actually assess them. You can use it to draft an email but you still have to proofread it. You can use it to write code but expect bugs and unhandled edge cases.

I'm a software developer and I use an LLM to create code generators and internal tooling, like a tool that takes a JSON file and outputs SQL insert statements, or to look up docs. The AI has not increased my productivity per se, but the tooling I created with it has.
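For flavor, here's a minimal sketch of what that kind of JSON-to-SQL generator might look like. This is my own illustration, not the commenter's actual tool; the table name, flat-row assumption, and naive quoting are all hypothetical, and a real version should use parameterized queries rather than string building:

```python
import json

def json_to_inserts(table, json_text):
    """Turn a JSON array of flat objects into SQL INSERT statements.

    Illustrative sketch only: assumes flat rows and uses naive string
    quoting. Real code should use parameterized queries instead.
    """
    statements = []
    for row in json.loads(json_text):
        cols = ", ".join(row.keys())
        vals = ", ".join(
            "NULL" if v is None
            # numbers pass through unquoted; bools fall to the string branch
            else str(v) if isinstance(v, (int, float)) and not isinstance(v, bool)
            # strings: double embedded single quotes, then wrap in quotes
            else "'" + str(v).replace("'", "''") + "'"
            for v in row.values()
        )
        statements.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return statements

# json_to_inserts("users", '[{"id": 1, "name": "O\'Brien"}]')
# → ["INSERT INTO users (id, name) VALUES (1, 'O''Brien');"]
```

This is exactly the kind of small, checkable tool where an LLM draft is easy to verify by eye before trusting it.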

Another use case is asking for critique: you paste in a code block and ask it to review performance, for example, and it can spot the "use a hash map there" cases pretty easily.
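As a made-up illustration of the kind of thing such a review pass catches (hypothetical code, not from the commenter):

```python
# Before: the pattern a review pass tends to flag -- a linear scan
# inside a loop, making the duplicate check O(n^2) overall.
def has_duplicates_slow(items):
    for i, x in enumerate(items):
        if x in items[i + 1:]:
            return True
    return False

# After applying the "use a hash map there" suggestion: a set gives
# O(1) average-case membership tests, so the whole check is O(n).
def has_duplicates_fast(items):
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```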

That's my 2 cents on the topic.

[-] MojoMcJojo@lemmy.world 7 points 1 day ago

My company does annual reviews. You have to write your own review, then they will read it over and then sit down to talk to you about it.

Last year, I just had ChatGPT write it for me based on all of my past conversations with it. Turned it in. The first question they asked me was, 'Did you use AI to write this?' Without hesitation, I said absolutely. They loved it so much, they had me show everyone else how to do it and made them redo theirs. I couldn't frikin believe it. Everyone is still pissed they have to use ChatGPT this year, but the bosses love that corporate hogwash so much.

They're about to receive a stack of AI-generated drivel so bad that I bet they have everyone go back to handwriting them.

[-] GnuLinuxDude@lemmy.ml 1 points 1 hour ago

That’s especially saddening because writing the review is specifically meant for you to contemplate what went well and perhaps what can go better next time. You would think managers would want you to reflect on that. For the benefit of the company, at minimum.

But with stories like yours it is becoming more clear that the only objective is to “use ai” or deliver ai-generated results. Why even bother caring or trying when management does not?

[-] PeriodicallyPedantic@lemmy.ca 1 points 1 day ago

Do we work at the same place? Lol

I totally get you, despite some pedantic disagreement about what "spam" means. I'm in the same boat a bit.
I'm embarrassed to admit that I've kinda leaned into it, in the sense that "if they're gonna make us use it, then I want to be in a position to steer the direction". While I discourage its use and preach to people about its evils, I also try to improve the AI tools and processes that we're using. That probably makes me part of the problem, but it relieves a bit of the pressure of how shit it is day to day.

I'm actually kinda bearish on AI.
Or rather I think that either it's a bubble that will pop, or before too long it's gonna cause a global depression. Maybe a bit of paranoia and doomerism.

[-] rImITywR@lemmy.world 63 points 3 days ago

Ask ChatGPT "How do I unionize my workplace to protect my job against AI obsessed management?"

[-] LaMouette@jlai.lu 7 points 2 days ago

+1. Also look up "reverse centaur"; it's a metaphor by Cory Doctorow which you may find interesting.

[-] LiveLM@lemmy.zip 3 points 1 day ago* (last edited 1 day ago)

I'm literally being told at my job that I should view myself basically as an AI babysitter

Feel you 100%.
I dunno why but my entire career everyone always talks like doing IT is simply a stepping stone to becoming a manager, so stupid. Like god forbid you're not the lEaDeRsHiP type.
And now with the rise of "Agentic IDEs" it's even fucking worse, I don't want to be managing people let alone herding a pack of ~~blind cats~~ autonomous agents.

Unfortunately the only solution is to stop caring. Yes, really.
I know it hurts producing sub-par garbage when you know you're capable of much more, but unfortunately there's no other way.
If upper management doesn't care about delivering quality products to their consumers anymore, you shouldn't either. You'll stress and burn yourself out while those responsible won't lose a wink of sleep over it.
Do exactly what they want. Slop it all. Fuck it. Save your energy for what really matters.

That or start looking for another job, but you might struggle to find one that isn't doing the same shit.

[-] UnspecificGravity@piefed.social 39 points 3 days ago

Stop caring about the quality of your output and just copy and paste the slop back and forth. It's what they want.

[-] GnuLinuxDude@lemmy.ml 9 points 2 days ago

The slop being copied back and forth is actually what they want. At the recent all-hands they basically said this without exaggeration. Quality and correctness were demoted to secondary importance.

[-] ashughes@feddit.uk 15 points 2 days ago

This actually made something click for me: why I haven’t been able to find work for 3 years in software QA. It’s not that AI came for my job or that it replaced me. At some point people stopped caring about quality so the assurance became moot.

[-] DrDystopia@lemy.lol 16 points 2 days ago

Faster, not better.

[-] helix@feddit.org 9 points 2 days ago

Try to distance yourself from the quality of your work.

Produce AI slop like your overlords fetishise, then have a mouse jiggler wiggle the cursor and an AI answer your Teams messages.

[-] Feyd@programming.dev 29 points 3 days ago

You're the sane one.

[-] Brokkr@lemmy.world 30 points 3 days ago

AI is a tool, just like a hammer. You could use a rock, but that doesn't give you the leverage that a hammer does.

AI is also a machine, it can get you to your destination faster, like a car or train.

Evil people have used hammers, cars, and trains to do evil and horrible things. These things can also be used for useless stupid things, like advertising.

But they can also be used for good, like an ambulance or to transport food. They also make us more efficient and can be used to save resources and effort. It depends on who uses it and how they use it.

You can't control how other people may misuse these things, but you can control how much you know, how you use it, and what you use it for.

[-] gwl 1 points 1 day ago

AI is a tool more like a gun: specifically designed for a terrible purpose, used for a terrible purpose, and with no way to use it for good except by deconstructing it and finding a new use for it.

[-] thatsTheCatch@lemmy.nz 14 points 3 days ago

One aspect that analogy doesn't work for is that hammers and cars weren't built on the mass theft of intellectual property, they aren't being leveraged to put people out of jobs, and they aren't the driving force behind building insane numbers of data centres that raise power bills for locals and ravage their water supply.

It's not necessarily the pure usage of AI that I don't like, as much as what has been and is being used to create it.

Cars have their own problems of course, and cause more issues with the direct use of them than what went into building them.

I read a different comment where someone said something like "if human meat were the healthiest, least environmentally damaging, and cheapest food, I still wouldn't eat it." In this case AI doesn't really offer those benefits anyway.

[-] helix@feddit.org 4 points 2 days ago

Combustion engine cars cause cancer, AI doesn't. Checkmate, Atheist.

[-] Cowbee@lemmy.ml 10 points 3 days ago

IP itself needs to be abolished, so that part isn't as important. Further, cars did put people out of jobs that used to draw horse carriages and maintain them. The original commenter is correct with their analysis.

[-] thericofactor@sh.itjust.works 4 points 2 days ago

If IP is abolished, that would to me imply that use of AI should be free for everyone, as it's based on everyones collective knowledge.

[-] Cowbee@lemmy.ml 7 points 2 days ago

Sure, I don't see a problem with that.

[-] helix@feddit.org 8 points 2 days ago

Many people think they're 20% more productive with AI, but they're actually 20% less productive.

https://fortune.com/2025/07/20/ai-hampers-productivity-software-developers-productivity-study/

[-] LordCrom@lemmy.world 9 points 2 days ago

I remind my boss that giving AI full access to our codebase and access to environments, including prod, is the exact plot of the Silicon Valley episode where Gilfoyle gave Son of Anton access. His AI deleted the codebase after being asked to clean up the bugs... deleting the entire codebase was the most efficient way of doing that.

[-] anime_ted@lemmy.world 12 points 3 days ago* (last edited 3 days ago)

I am also encouraged to use AI at work and also hate it. I agree with your points. I just had to learn to live with it. I've realized that I'm not going to make it go away. All I can do is recognize its limited strengths and significant weaknesses and only use it for limited tasks where it shines. I still avoid using it as much as possible. I also think "improved productivity" is a myth but fortunately that's not a metric I have to worry about.

My rules for myself, in case they help:

  • Use it as a tool only for appropriate tasks.
  • Learn its strengths and use it for those things and nothing else. You have to keep thinking and exploring and researching for yourself. Don't let it "think" for you. It's easy to let it make you a lazy thinker.
  • Quality check everything it gives you. It will often get things flat wrong and you will have to spend time correcting it.
  • Take lots of deep breaths.

[Edit: punctuation]

[-] trilobite@lemmy.ml 2 points 2 days ago

I agree with all your points. The problem is that quality checking AI outputs is something that only a few will do. The other day my son did a search with ChatGPT. He was doing an analysis of his competitors within a 20 km radius of home, and he took all the results as granted and true. Then I looked at the list and found that many business names looked strange. When I asked for links to their websites, I found that some were in different countries. My son said "you can't trust this". When I pointed it out to ChatGPT, the damn thing replied "oh, I'm sorry, I got it wrong". Then you realise that these AI things are not accountable. So quality checking is fundamental; the accountability will always sit with the user. I'd like to see the day when managers take accountability for AI crap. That won't happen, so jobs for now are secure.

[-] cadekat@pawb.social 10 points 3 days ago

It definitely sounds like you've already decided to find AI useless regardless of what it can do, but on the chance that I can maybe change your mind a little bit...

I'm a huge AI skeptic myself, at least compared to my coworkers. Almost everything it spits out has been incorrect for me, except in very narrow use cases.

First, I find it useful to find links to actual documentation for tools/libraries/languages I am completely unfamiliar with. The examples and text it generates are usually poor, but it can do a decent job of finding webpages.

Second, I've found it good at guided code reviews. It is no substitute for a real human review, but adding an AI pass before you open a pull request can knock off some of the low hanging fruit.

[-] Thisiswritteningerman@midwest.social 6 points 2 days ago* (last edited 2 days ago)

If you don't mind me asking, what do you do, and what kind of AI? Maybe it's the autism, but I find LLMs a bit limited and useless, while other use cases aren't quite as bad. Training image recognition is a legitimately great use of AI and extremely helpful, and it's already being used for such cases. I just installed a vision system on a few of my manufacturing lines. A bottling operation detects cap presence, as well as cross-threaded or un-torqued caps, based on how the neck vs cap bottom angle and distance looks as each bottle passes the camera. Checking 10,000 bottles a day as they scroll past would be a mind-numbing task for a human. The other line makes Fresnel lenses. Operators make the lenses and personally check each lens for defects and power. Using a known background and training the AI on what distortion good lenses should create is showing good progress at screening just as well as my operators. In this case it's doing what the human eye can't: determining magnification and diffraction visually.

[-] GnuLinuxDude@lemmy.ml 5 points 2 days ago

The AI in this case is, for all intents and purposes, using Copilot to write all the code. It is basically beginning to be promoted as being the first resort, rather than a supplement.

I don't know enough about Copilot, as work has made it optional for mostly accessibility-related tasks: digging through the mass of Microsoft files spread across Teams, Outlook, and OneDrive to find and summarize topics; recording meeting notes (not that they're overly helpful compared to human-taken notes, due to a lack of context); and normalizing data, as every Power BI report is formatted however its owner saw fit.

Given its ability to make ridiculous errors confidently, I don't suppose it has the memory to be used more like a toddler helper? Small, frequent tasks that are pretty hard to fuck up, and once it can reliably do those through repetition and guidance on what a passing result looks like, tying more of them together?

[-] paequ2@lemmy.today 7 points 3 days ago

AI tooling producing some things fast

This isn't necessarily a good thing. Yeah, maybe AI wrote a new microservice and generated 100s of new files and 1000s of lines of new code... but... there's a big assumption there that you actually needed 100s of new files and 1000s of lines of new code. What it tends to generate is tech debt. That's also ignoring the benefits of your workforce upskilling by learning more about the system, where things are, how they're pieced together, why they're like that, etc.

AI just adds tech debt in a blackbox. It's gonna lower velocity in the long term.

[-] helix@feddit.org 3 points 2 days ago

What it tends to generate is tech debt.

Just like my coworkers.

[-] Melobol@lemmy.ml 7 points 3 days ago* (last edited 3 days ago)

I do know that I am in the minority on Lemmy. I believe there is a space for AI in our lives, though of course not in the way big corporations are trying to shove it down our throats.
The ethical and moral problems of AI are not part of your question.

If you decide to work for your company that forces you to use AI, then you either use AI or get a new job.
That's how capitalism works.

You don't have to like it. You just have to accept the "must use AI" as part of the things they are paying you for.
There is no metric aside from: they are paying you to do it this way.
If you don't want to do that way - the ball is in your court.

Edit: Browsing Lemmy I just saw this post. Maybe it will help you:
https://lemmy.ml/post/40233766

[-] artifex@piefed.social 5 points 3 days ago

Make a list of the tasks you hate doing, or that are repetitive, time-consuming, and not normally automatable. Then see if any of them are a good fit for an AI workflow.

this post was submitted on 12 Dec 2025
66 points (100.0% liked)

General Programming Discussion
