submitted 19 hours ago by rain_lover@lemmy.ml to c/asklemmy@lemmy.ml

I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT-generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

[-] sideponcho69@piefed.social 13 points 17 hours ago

I can only speak for my use of it in software development. I work with a large, relatively complex CRUD system, so take the following as you will, but we have Claude integrated with MCPs and agent skills and it's honestly been phenomenal.

Initially we were told to "just use it" (Copilot at the time). We essentially used it as an enhanced Google search. It wasn't great. It never had enough context, and as a result the logic it produced often didn't make sense, but it was handy for fixing bugs.

The advent of MCPs and agent skills really brings it to another level. It has far more context. It can pull tickets from Jira, read the requirements, propose a plan and then implement it once you've approved it. You can talk the plan through, ask it to explain some of the decisions it made and alter the plan as it's implemented. It's not perfect, but what it can achieve when you have MCPs, skills and md files all set up is crazy.
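For anyone who hasn't seen what that wiring looks like under the hood, here's a rough sketch of the non-MCP equivalent: fetch a ticket over Jira's REST API and hand it to Claude as context for a plan. The Jira URL, ticket key and model name below are placeholders, not our actual setup.

```python
import os
import requests   # pip install requests
import anthropic  # pip install anthropic

# Placeholder Jira details; a real setup would point an MCP server at this instead.
JIRA_BASE = "https://example.atlassian.net"
ISSUE_KEY = "PROJ-123"

# Fetch the ticket via Jira's REST API (basic auth with email + API token).
resp = requests.get(
    f"{JIRA_BASE}/rest/api/2/issue/{ISSUE_KEY}",
    auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
    timeout=30,
)
resp.raise_for_status()
fields = resp.json()["fields"]

# Hand the requirements to Claude and ask for a plan that a human approves
# before any code gets written.
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            f"Ticket {ISSUE_KEY}: {fields['summary']}\n\n"
            f"{fields.get('description') or '(no description)'}\n\n"
            "Propose an implementation plan for this ticket. "
            "Do not write code yet; wait for approval of the plan."
        ),
    }],
)
print(message.content[0].text)
```

With the MCP servers in place the agent does this fetching itself; the point is just that the requirements travel with the prompt instead of the model guessing at context.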

The push for this was from non-tech management who are most definitely driven by hype/FOMO, so much so that they actually updated our contracts to include AI use. In our case, it paid off. I think it's a night-and-day difference between using base Copilot to ask questions and using it with context sources.

[-] rain_lover@lemmy.ml 8 points 17 hours ago

What happens when Anthropic triples their prices and your company is totally dependent on them for any development work? You can't just stop using it, because no in-house developers, if there are even any left, will understand the codebase.

[-] sideponcho69@piefed.social 6 points 17 hours ago* (last edited 16 hours ago)

To the same point as lepinkainen, we are fully responsible for the code we commit. We are expected to understand what we've committed as if we wrote it ourselves. We treat it as a speed booster. The fact that Claude does a good job of maintaining the same structure as the rest of the codebase makes it no different from trying to understand changes made by a co-worker.

On your point about dependency, the same applies. If AI support were to drop tomorrow, we would be slower, but the work would get done all the same.

I do agree with you, though. I can tell we are getting more relaxed about the changes Claude makes and putting more blind trust in it. I'm curious as to how we will be in a year's time.

As a disclaimer, I'm just a developer and I've no attachment to my company; this is just my take on the subject.

[-] lepinkainen@lemmy.world 3 points 17 hours ago

Not OP but:

In our company programmers are still fully responsible for the code they commit and must be able to explain it in a PR review.

It just speeds up things, it doesn’t replace anyone.

Simple things that would've taken a day can be done before lunch now, because the loop is just prompt + read code + PR (full unit and integration test suites ofc, written by humans).
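To make the "written by humans" part concrete, here's a toy example of the kind of test that gates anything the AI writes before a PR is even reviewed. The function and its rules are invented purely for illustration, not taken from any real codebase:

```python
# test_discount.py -- run with `pytest`. Purely illustrative example.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, clamped at zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)


def test_half_off():
    assert apply_discount(80.0, 50) == 40.0


def test_invalid_percent_rejected():
    with pytest.raises(ValueError):
        apply_discount(80.0, 150)
```

If a generated change can't pass tests like these, it doesn't get merged.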

[-] toor@lemmy.ml 3 points 17 hours ago

> It just speeds up things, it doesn’t replace anyone.

Oh, my sweet summer child.

[-] lepinkainen@lemmy.world 1 point 16 hours ago

Let’s see, we’re understaffed even now and with AI we can kinda keep up.

But I’ll eat my liquorice shoe like Chaplin if this turns into massive layoffs 🫠
