submitted 17 hours ago by rain_lover@lemmy.ml to c/asklemmy@lemmy.ml

I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they "put together" that are clearly ChatGPT generated, with no shame. They tell us that if we aren't doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

[-] Atlas_@lemmy.world 1 points 1 hour ago

I am building it! Or, well, not it anymore but a product that is heavily based on it.

I think we as a company recognize that, like, 95% of AI products right now are shit. And that the default path from here is that power continues to concentrate at the large labs like OpenAI, which haven't been behaving particularly well.

But we also believe that there are good ways to use it. We hope to build one.

The thing your boss is asking you to do is shitty. However, TBQH humanity doesn't really know what LLMs are useful for yet. It's going to be a long process to find that out, and trying it in places where it isn't helpful is part of that.

Lastly, even if LLMs don't turn out to be useful for real work, there is something interesting and worth exploring there. Look at researcher/writers like Nostalgebrist and Janus - they're exploring what LLMs are like as beings. Not that they're conscious, but rather that there are interesting and complex things going on in there that we don't understand yet. The overarching feeling in my workplace is that we're in a frontier time, where clever thinking and exploration will be rewarded.

[-] paequ2@lemmy.today 5 points 2 hours ago* (last edited 2 hours ago)

Uff. That sounds like a nightmare. I'm glad my job doesn't force us to use AI. It's encouraged, but also my managers say "Use whatever makes you the most productive." AI makes me slower because I'm experienced and already know what I want and how I want it. So instead of fighting with the AI or fact-checking it, I can just do shit right the first time.

For tasks that I don't have experience in, a web search is just as fast: search, click the first link. Or sure, I'll click and read a few pages, but that's not wasted time. That's called learning.

I have a friend who works at a company where they have AI usage quotas that affect their performance review. I would fucking quit that job immediately. Not all jobs are this crazy.

AI tends to generate tech debt. I have some coworkers who generate nasty, tech-debt-ridden, AI-slop merge requests for review. My policy is: if you're not gonna take the time to use your brain and write something, then I'm not gonna waste my time reviewing your slop. In those cases, I use AI to "review" the code and decide whether to approve or not. IDGAF.

[-] thatradomguy@lemmy.world 3 points 3 hours ago

The dumbass senior contracts person and the program managers are all for using Copilot, and I've caught several people using ChatGPT as a search engine, or at least that's what they tell me they think it is.

[-] VinesNFluff@pawb.social 4 points 4 hours ago* (last edited 4 hours ago)

Surprisingly reasonable?

I was terrified that entering the corporate world would mean being surrounded by people who are obsessed with AI.

Instead like... The higher-ups seem to be bullish on it and how much money it'll make them (... And I don't mind because we get bonuses if the corp does well), but even they talk about how "if you just let AI do the job for you, you'll turn in bad quality work" and "AI just gets you started, don't rely on it"

We use some machine learning stuff in places, and we have a local chatbot model for searching through internal regulations. I've used Copilot to get some raw ideas which I cooked up into something decent later.

It's been a'ight.

[-] some_kind_of_guy@lemmy.world 2 points 1 hour ago

This is the way. I honestly don't care how the execs think about AI or if they use it themselves, but don't force its usage on me. I've been touching computers since before some of them existed. For me it's just one extra tool that gets pulled out in very specific scenarios and used for a short amount of time. It's like the electric start on my snowblower: you don't technically need it, and it won't do the work for you (so don't expect it to), but at the right time it's extremely nice to have.

[-] Godnroc@lemmy.world 5 points 4 hours ago

So far it's a glorified search engine, which it is mildly competent at. It just speeds up collecting the information I would have gathered anyway, and then I can get to sorting the useful from the useless faster.

That said, I've seen emails from people that were written with AI, and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how best to get there, instead of regurgitating some slop that wouldn't get us there in the first place!

[-] echodot@feddit.uk 2 points 4 hours ago

One of my managers is like that, I've known him for about 5 years and he's been the biggest idiot I've ever met the entire time. But ever since AI came out he's turned it up to 11.

Fortunately my other manager can't stand him, and they have blazing arguments, so generally speaking if he tells me to do something I don't like or don't want to do, I go and tattle.

[-] muxika@lemmy.world 6 points 6 hours ago* (last edited 6 hours ago)

I feel like giving AI our information on a regular basis is just training AI to do our jobs.

I'm a teacher and we're constantly encouraged to use Copilot for creating questions, feedback, writing samples, etc.

You can use AI to grade papers. That sure as shit shouldn't happen.

[-] NomenCumLitteris@lemmy.ml 4 points 6 hours ago

My subordinate is quite proud of the code AI produces based on his prompts. I don't use AI personally, but it is surely a tool. I don't know why one would be proud of work they didn't do and can't explain, though. I have to manage the AI use to a "keep it simple" level. Use AI if there is a use case, not just because it is there to be used...

[-] prettygorgeous@aussie.zone 6 points 8 hours ago

I vibe code from time to time because people sometimes demand quick results on an unachievable timeline. That said, I may use an LLM to generate the base code that provides a basic solution to what is needed, and then I go over the code and review/refactor it line by line. Sometimes, if time is severely pressed and the code is waaaay off even a bare minimum, I'll have the LLM revise the code to solve some of the problem, and then I review, adjust, and amend where needed.

I treat AI as a tool and (frustrating and annoying) companion in my work, but ultimately I review and adjust and amend (and sometimes refactor) everything. It's similar to reading code samples from websites, copying them if you can use them, and refactoring them for your app, except tailored a bit more to what you need already.

By the same token, I also prefer to do it all myself if I can, so if I'm not pressed for time, or I know it's something that I can do quickly, I'll do it myself.

[-] TheImpressiveX@lemmy.today 62 points 17 hours ago

I am reminded of this article.

The future of web development is AI. Get on or get left behind.

5/5/2025

Editor’s Note: previous titles for this article have been added here for posterity.

~~The future of web development is blockchain. Get on or get left behind.~~

~~The future of web development is CSS-in-JS. Get on or get left behind.~~

~~The future of web development is Progressive Web Apps. Get on or get left behind.~~

~~The future of web development is Silverlight. Get on or get left behind.~~

~~The future of web development is XHTML. Get on or get left behind.~~

~~The future of web development is Flash. Get on or get left behind.~~

~~The future of web development is ActiveX. Get on or get left behind.~~

~~The future of web development is Java applets. Get on or get left behind.~~

If you aren’t using this technology, then you are shooting yourself in the foot. There is no future where this technology is not dominant and relevant. If you are not using this, you will be unemployable. This technology solves every development problem we have had. I can teach you how with my $5000 course.

lol Silverlight.

In fairness, a lot of those did take over the web for a time and led to some cool stuff (and also some wild security exploits).

[-] chisel@piefed.social 16 points 16 hours ago

PWAs are cool af and widely used for publishing apps on the App/Play stores. It's a shame they haven't been adopted more widely for their original purpose of installing apps outside of those stores, but you can't get everything you want.

[-] Catalyst_A@lemmy.ml 7 points 14 hours ago

Holy shit... XD

[-] mub@lemmy.ml 3 points 9 hours ago
[-] clay_pidgin@sh.itjust.works 10 points 12 hours ago

Our devs are implementing some ML for anomaly detection, which seems promising.

There's also an LLM with MCP etc. that is writing the pull requests and some documentation at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don't mind. Stuff like "make a graph of usage on weekdays" and it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said "I can regenerate this report with actual data if it is made available to me".

[-] pebbles@sh.itjust.works 37 points 17 hours ago

My company is doing small trial runs and trying to get feedback on whether the stuff is helpful. They are obviously pushing things because they are hopeful, but most people report that AI is helpful about 45% of the time. I'm sorry your leadership just dove in head first. That sounds like such a pain.

[-] rain_lover@lemmy.ml 20 points 17 hours ago

Sounds like your company is run by people who are a bit more sensible and not driven by hype and fomo.

[-] earlgrey0@sh.itjust.works 18 points 17 hours ago

Hype and FOMO are the main drivers of the Silicon Valley economy! I hate it here.

[-] fizzle@quokk.au 2 points 9 hours ago

My "company" is tiny, and only employs myself, one colleague, and an assistant. We're accountants.

We self host some models from huggingface.

We don't really use these as part of any established workflow. Thinking of some examples ...

This week my colleague used a model to prep a simple contract between herself and her daughter, whereby her daughter would perform certain chores and she would pay for cello lessons.

My assistant used an AI thing to parse some scanned bank statements, so this one is work related. The alternative is bashing out the dates, descriptions, and amounts manually. Using traditional OCR for this purpose doesn't really save any time because hunting down all the mistakes and missed decimal places takes a lot of effort. Parsing this way takes about a third of the time, and it's less mentally taxing. However, this isn't a task we regularly perform because obviously in the vast majority of cases we can get the data instead of printed statements.

I was trying to think of the proper term for an English word which has evolved from some phrase or whatever, like "steering board" became "starboard". The gen AI suggested portmanteau, but I actually think there's a better word I just haven't remembered yet.

I had it create a bash one-liner to extract a specific section from a README.md.
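Something in this spirit, for illustration (the section name and demo file here are made up, not the actual one-liner it gave me):

```shell
# Sketch of that kind of one-liner: print the body of one "## "-level
# section of a Markdown file, stopping at the next "## " heading.
extract_section() {
  awk -v sec="$1" '$0 == "## " sec {f=1; next} /^## /{f=0} f' "$2"
}

# Demo against a throwaway file:
printf '## Intro\nhello\n## Usage\nrun make\n## License\nMIT\n' > /tmp/demo_readme.md
extract_section Usage /tmp/demo_readme.md
```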

I asked it to explain the mechanism of action of diazepam.

My feelings about AI are that it's pretty great for specific niche tasks like this. Like the bash one-liner: it took 30 seconds to ask and I got an immediate, working solution. Without gen AI I just wouldn't be able to grep whatever section from a README - not exactly a life-changing superpower, but a small improvement to whatever project I was working on.

In terms of our ability to do our work and deliver results for clients, it's a 10% bump to efficiency and productivity when used correctly. Gen AI is not going to put us out of a job.

[-] Lettuceeatlettuce@lemmy.ml 6 points 12 hours ago

I work in IT, and many of the managers are pushing it. Nothing draconian; there are a few true believers, but the general vibe is that everybody pushes it because they feel they'll be judged if they don't.

Two of my coworkers are true believers in the slop, one of them is constantly saying he's been, "consulting with ChatGPT" like it's an oracle or something. Ironically, he's the least productive member of the team. It takes him days to do stuff that takes us a few hours.

[-] teawrecks@sopuli.xyz 8 points 13 hours ago

I'm in software. The company gives us access and broadly states they'd like people to find uses for it, but no mandates. People on my team occasionally find uses for it, but we understand what it is, what it can do, and what it would need to be able to do for it to be useful. And usually it's not.

If I thought anyone sent me an email written with AI, I would ask them politely but firmly to never waste my time like that again. I find using AI for writing email to be highly disrespectful. If I worked at a company making a habit out of that, I would leave.

[-] Witchfire@lemmy.world 2 points 10 hours ago* (last edited 10 hours ago)

I work at a large tech company. It's in our expectations. I hate it so much

It's very obvious half my team's code base is AI written garbage

[-] HobbitFoot@thelemmy.club 2 points 10 hours ago

Some people are using it for work purposes when there isn't a major policy on it.

You can tell because the work is shit.

[-] UnspecificGravity@piefed.social 13 points 15 hours ago

The most technically illiterate leaders are pushing the hell out of using it for things that don't make sense, while the workers who know what they are doing are finding some limited utility.

Our biggest concern is that people are going to be using it for the wrong stuff and fail to account for the errors and limitations.

[-] TragicNotCute@lemmy.world 21 points 17 hours ago

This is all of tech right now.

[-] sideponcho69@piefed.social 13 points 16 hours ago

I can only speak for my use of it in software development. I work with a large, relatively complex CRUD system so take the following as you will, but we have Claude integrated with MCPs and agent skills and it's honestly been phenomenal.

Initially we were told to "just use it" (Copilot at the time). We essentially used it as an enhanced Google search. It wasn't great. It never had enough context, and as such the logic it produced would not make sense, but it was handy for fixing bugs.

The advent of MCPs and agent skills really brings it to another level. It has far more context. It can pull tickets from Jira, read the requirements, propose a plan and then implement it once you've approved it. You can talk it through, ask it to explain some of the decisions it made and alter the plan as it's implemented. It's not perfect, but what it can achieve when you have MCPs, skills, and md files all set up is crazy.

The push for this was from non-tech management who are most definitely driven by hype/FOMO. So much so they actually updated our contracts to include AI use. In our case, it paid off. I think it's a night and day difference between using base Copilot to ask questions vs using it with context sources.

[-] rain_lover@lemmy.ml 8 points 15 hours ago

What happens when Anthropic triples their prices and your company is totally dependent on them for any development work? You can't just stop using it, because no in-house developers, if there are even any left, will understand the codebase.

[-] sideponcho69@piefed.social 6 points 15 hours ago* (last edited 15 hours ago)

To the same point as lepinkainen, we are fully responsible for the code we commit. We are expected to understand what we've committed as if we wrote it ourselves. We treat it as a speed booster. The fact that Claude does a good job at maintaining the same structure as the rest of the codebase makes it no different than trying to understand changes made by a co-worker.

On your topic of dependency, the same point as above applies. If AI support were to drop tomorrow, we would be slower, but the work would get done all the same.

I do agree with you though. I can tell we are getting more relaxed with the changes Claude makes and putting more blind trust in it. I'm curious as to how we will be in a year's time.

As a disclaimer, I'm just a developer, I've no attachment to my company. This is just my take on the subject.

[-] lepinkainen@lemmy.world 3 points 15 hours ago

Not OP but:

In our company programmers are still fully responsible for the code they commit and must be able to explain it in a PR review.

It just speeds up things, it doesn’t replace anyone.

Simple things that would’ve taken a day can be done before lunch now, because it’s just prompt + read code + PR (full unit and integration test suites ofc, made by humans)

[-] toor@lemmy.ml 3 points 15 hours ago

It just speeds up things, it doesn’t replace anyone.

Oh, my sweet summer child.

[-] TheAlbatross 16 points 17 hours ago* (last edited 17 hours ago)

My boss has mentioned AI once, and it was when I was asking for guidance on how to take over a twice annual presentation he used to run. He said I could just get the LLM to generate the presentation then edit it as needed.

I did not do that. I just made the damn thing myself because, by him saying that, I realized he had no fucking clue how to do any of this so I didn't have to either.

Corporate has introduced an LLM we're supposed to be able to use in lieu of asking an HR representative basic questions, like vacation policy and how to get resources for open enrollment. It... barely works. If you happen to use the magic words, it'll lead you to a PDF of the policy you're looking for. I just call or email someone in HR. It's better to build rapport and awareness between home office and the satellite offices, anyway. Makes things smoother when there's an actual issue to deal with and people help people they like, even just a little, faster.

[-] GissaMittJobb@lemmy.ml 7 points 14 hours ago

We get encouraged to try out AI tools for various purposes to see where we can find value out of them, if any. There are some use-cases where the tech makes sense when wielded correctly, and in those cases I make use of it. In other cases, I don't.

So far, I suspect we may be striking a decent balance. I have however noticed a concerning trend of people copy-pasting unfiltered slop as a response to various scenarios, which is obviously not helpful.

[-] skooma_king@piefed.social 13 points 16 hours ago

My boss was curious about it, so she asked me to show her how to use it. She wanted to have it summarize some data from a spreadsheet. It hallucinated with the very first prompt, and she lost all interest in it (win). Later, I disabled everything copilot I can in our tenant and implemented policies to hide/remove it from our PCs. I’ve only had one employee (one of our fire captains) complain about it so far, but I don’t feel bad as his boss was complaining about the AI slop he was turning in on fire reports.

I don’t really see ChatGPT or others popping up in our network traffic either, though I don’t know how/if they are using it on personal phones.

My situation might be different than the average fedi user's though... not a very technical group other than me (a one-guy IT department).

[-] abbadon420@sh.itjust.works 6 points 15 hours ago

The solution seems obvious. Let ChatGPT do the work while you look for employment elsewhere.

You get an LLM-generated email? You let the LLM respond to that email....

[-] PonyOfWar@pawb.social 11 points 17 hours ago* (last edited 17 hours ago)

My company has 2 CEOs. One of them doesn't ever really talk about AI. The other one is personally obsessed with the topic. His picture on Teams is AI generated, and every other day he posts some random AI tutorial or news into work channels. His client presentations are mostly written by ChatGPT. But luckily, nothing is being forced on us developers. The company is very hands-off in general (some may say disorganized) and we can pretty much use any tools and methods we choose, as long as we deliver good results. I personally use AI only occasionally, mostly for quick prototyping in languages and frameworks I'm unfamiliar with. None of the other devs are very enthusiastic about AI either.

[-] Pipster 9 points 17 hours ago

Intolerable

[-] howrar@lemmy.ca 2 points 12 hours ago

I work in AI research, so naturally, AI is part of our day-to-day work. But when it comes to things like LLM tools and other generative models, we rarely hear anyone talk about those. Sometimes people will share their workflow, and that may involve LLMs to supplement traditional search engines for literature reviews, for example. That's about the extent of it. No one really cares to talk about them much. No one pushed those tools on us. We just do our work with whatever tools we think are best.

[-] Unquote0270@programming.dev 8 points 17 hours ago

Not quite that extreme where I am, but it is being thrust into any kind of strategy scenario with absolutely nothing to back it up. They are desperate to incorporate it.

[-] TheOneCurly@feddit.online 5 points 15 hours ago

I hear there's some sort of AI mandate coming but no idea what it is yet. A few coworkers poke at chatGPT for basic coding questions. I will not use it for anything at this point. IT here is mostly useless contractors so they can't tell what we're doing and they can't make us do anything. My direct reporting chain will back me working how I want to work so I don't foresee any issues.

I fully recognize I'm in a highly privileged position that many others aren't. But I'm going to take full advantage and keep my sanity.

[-] Passerby6497@lemmy.world 7 points 17 hours ago

I use ChatGPT when I care to, and while I was given a subscription by work, I'm not actively encouraged to use it. I really only use it for researching problems that Google search is too SEO-poisoned to help me with, or for debugging scripts. Past that it doesn't have much professional use for me, given how much time I spend validating output and insulting the AI for hallucinations and for just generally being terrible at moderate tasks.

Basic data interpretation can be pretty great though. I've had it find a couple problems I missed after having it parse log files.

[-] Tyrq@lemmy.dbzer0.com 7 points 17 hours ago

Just use it to generate the kind of work he does, so that you can prove his own worthlessness

[-] dwemthy@lemmy.world 3 points 14 hours ago

We are definitely encouraged to use it where I work. There are regular sessions for engineers to share workflows they're using with LLMs.
What surprised me recently is the objective metrics that the company is trying to gather about usage. Not just in usage amount, but also quality. We put labels on our PRs to indicate to what extent we used AI and which tools, with a "wasn't helpful" option.

It's a lot better than my previous job that went full build-a-new-product-on-top-of-ChatGPT

[-] knightly@pawb.social 4 points 15 hours ago

I got laid off because I didn't use Copilot after discovering that it wasn't helpful for any of my job duties.

[-] BannedVoice@lemmy.zip 6 points 16 hours ago

The organization I work for uses it, but they’re taking a very cautious approach: we are instructed to double- and triple-check everything AI generated, to use only the specific tools they approve for work-related matters so as not to train LLMs on company data, and they're slowly rolling out AI in specific areas before adopting it more widely.

[-] KingGordon@lemmy.world 7 points 16 hours ago

Double and triple checking everything takes longer than just doing the work.

[-] golden_zealot@lemmy.ml 3 points 15 hours ago

The owner of the software company I work at openly said to a room full of clients that he believes AI is a bubble and that it is going to fail. Nonetheless, he let them know the business would be adding an optional AI feature to one aspect of the software product for those who want it, and even at that it's not an LLM or anything; it's intended to speed up the re-creation of specific types of diagrams based on an input of the original diagrams.

There is no requirement or suggestion to use AI as an employee at my company, personal preference for how each person works is generally respected and everything goes through a few layers of review regardless. All the management cares about is that the work gets done somehow.

There's one dev who uses it for 1 or 2 things on rare occasions, no one else ever uses it.

[-] DaleGribble88@programming.dev 4 points 16 hours ago

I teach at a state university in the US, and AI use is encouraged for tasks that LLMs are actually good at: generating wrong answers for multiple-choice questions, formatting LaTeX documents, writing Excel formulas, etc. We've also used it during some brainstorming sessions to generate ideas and check for any obvious holes in them.

[-] kamenlady@lemmy.world 4 points 16 hours ago

I'm exactly in the same boat.

I'm a developer and my boss has no idea about development. If I take too long with something, he consults ChatGPT and gives me the solutions it came up with.

It's exhausting...

this post was submitted on 12 Dec 2025
110 points (100.0% liked)
