My company is strongly pushing AI. There are a lot of experiments, demos, and effort from decently smart people aimed at integrating it into our workflows. There have been some impressive wins, with AI tooling producing some things fast. I am not in denial about this. And the SE department is tracking improved productivity (as measured by # of tickets closed, I guess?)

The problem is I hate AI. I hate every fucking thing about it. Its primary purpose, regardless of what utility is gained, is spam. I think it's obvious how google search results are spam, how spam songs and videos are being produced, etc. But even bad results from AI that have to be discarded, IMO, are spam.

And that isn't even getting into the massive amounts of theft that went into the training data, or the immense amounts of electricity it takes to train, run inference on, and host all this crap. Nor the psychosis being inflicted on people who place their trust in these systems. Nor the fact that these tools are being used to empower authoritarian regimes to track vulnerable populations, both here (in the USA) and abroad. And all this AI shit serves to enrich the worst tech moguls and to displace people like artists and people like myself, a programmer.

I'm literally being told at my job that I should view myself basically as an AI babysitter, and that AI has been unambiguously proven out in the industry, so the time for wondering about it, experimenting with it, or opposing it is over. The only remaining fault or flaw is my (i.e. any given SE's) unwillingness to adapt and get on board.

Looking for advice from people who have had to navigate similar crap. Because I feel like I'm at a point where I must adapt or eventually get fired.

[-] paequ2@lemmy.today 7 points 3 days ago

AI tooling producing some things fast

This isn't necessarily a good thing. Yeah, maybe AI wrote a new microservice and generated 100s of new files and 1000s of lines of new code... but... there's a big assumption there that you actually needed 100s of new files and 1000s of lines of new code. What it tends to generate is tech debt. That's also ignoring the benefit of your workforce upskilling by learning more about the system: where things are, how they're pieced together, why they're like that, etc.

AI just adds tech debt in a black box. It's gonna lower velocity in the long term.

[-] helix@feddit.org 3 points 2 days ago

What it tends to generate is tech debt.

Just like my coworkers.

[-] digdilem@lemmy.ml 3 points 3 days ago* (last edited 3 days ago)

I know I'm not reading the room here, but you mentioned "long term" and I think that's an important term.

AI tools will improve, and in the near future I'm pretty confident that one of the things they'll be able to do is clean up the tech debt their previous generations caused.

"Hey, ChatGPT 8.0, go fix the fucking mess ChatGPT 5.0 created"... and it will do it. It will understand security, and reliance and all the context it needs and it will work and be good. There is no reason why it won't.

That doesn't help us if things break before that point, of course, so let's keep a copy of the code that we knew worked okay.

[-] helix@feddit.org 3 points 2 days ago

It will understand

Hey ChatGPT, show me you don't know what LLMs do without telling me.

LLMs are basically autocorrect on steroids. They get deterministic algorithms bolted on in the background, cobbled together via glue code: every time you ask a math question, the LLM forwards it to Wolfram Alpha and just spits out the result.
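Something like this minimal sketch of the glue-code pattern (my own illustration in C, with made-up names, not how any actual product is wired up): a deterministic evaluator handles the arithmetic, and the model only gets asked when that fails.

```c
#include <stdio.h>

/* Deterministic backend: evaluate a simple "a <op> b" expression.
 * Returns 1 on success, 0 if the question isn't plain arithmetic. */
static int eval_math(const char *question, double *result) {
    double a, b;
    char op;
    if (sscanf(question, "%lf %c %lf", &a, &op, &b) != 3)
        return 0;
    switch (op) {
        case '+': *result = a + b; return 1;
        case '-': *result = a - b; return 1;
        case '*': *result = a * b; return 1;
        case '/': if (b == 0) return 0; *result = a / b; return 1;
        default:  return 0;
    }
}

/* Stand-in for the actual model call (hypothetical). */
static const char *ask_llm(const char *question) {
    (void)question;
    return "(whatever text the model predicts)";
}

int main(void) {
    const char *question = "6 * 7";
    double result;

    /* The glue code: try the deterministic tool first,
     * fall back to the model for everything else. */
    if (eval_math(question, &result))
        printf("%g\n", result);
    else
        printf("%s\n", ask_llm(question));
    return 0;
}
```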

LLMs don't "understand" things, it's just pattern matching and autocomplete on steroids. There's no thinking involved here, however much the AI companies add "thinking..." to their output.

[-] digdilem@lemmy.ml 4 points 2 days ago

That's a fair point about defining them as LLMs.

But it's wrong to assume those algorithms don't change. They do change, improving with each iteration, and they'll keep getting harder to distinguish from real intelligence over time. (Clarke's quote about "sufficiently advanced technology being indistinguishable from magic" springs to mind.)

As for my point - writing good code is exactly the sort of task that LLMs will be good at. They're just not always there /yet/. Their context histories are short, their references are still small (in comparison), and they're slow compared to what they will be. I'm an old coder and I've known many others. Some define their code as art, and there is some truth in that; art is of course something any AI will struggle with. But code doesn't need to be artistic to work well.

There's also the possibility there will be a real milestone and true AI will emerge. That's a scary thought and we've no way of telling if that's close or far away.

[-] helix@feddit.org 2 points 2 days ago

That’s a fair point about defining them as LLMs.

But it’s wrong to assume those algorithms don’t change.

Sure, but current LLMs have flaws inherent in the very concept of being, well, supercharged autocorrect.

It's impressive that we can basically brute-force language and distill knowledge into a model. To really advance AI you'd have to come up with a different class of algorithms than deep learning and LLMs. You'd probably need to combine that with adversarial networks, algorithmic (deterministic!) decisions and so on.

A teacher once told me "a computer is only as intelligent as the people programming it" and that sentence holds true even 30 years later.

LLMs are already "true" AI in the sense that they're a subclass of models produced by a subclass of machine learning algorithms. I'd argue that there will be many different kinds of AI cobbled together into a more potent chatbot or agentic system.

And code definitely needs to be artistic to work well in some cases. You need to really understand the subject matter to write proper tests, for example. There will always be an issue of man-machine interfaces.

You're dead right about them being able to produce better code than the average software dev. The skill floor to work as a dev will be raised.

These LLMs can take your job as a software dev. They can already translate instructions into code. But wait! They only work when the user knows what they want. I think your job is safe after all.

There's a difference between programming and software development, after all.

[-] digdilem@lemmy.ml 2 points 2 days ago

All good points and well argued. Thank you.

There’s a difference between programming and software development, after all.

Yes, absolutely, but only because we're the customers.

The art in software design (imo) comes from understanding the problem and creating a clever, efficient and cost-effective solution that is durable and secure. (This hardly ever happens in practice, which is why we're constantly rewriting stuff.) This is good and useful, and in this case Art is Good. The artist has ascended to seeing the whole problem from the beginning and a short path from A to B, not just starting to code and seeing where it goes, as so many of us do.

A human programmer writing "artistic code" is often someone showing off by doing something in an unusual or clever way. In that case, I think boring, non-artistic code is better since it's easier to maintain. Once smarty-pants has gone elsewhere, someone else has to pick up their "art" and try to figure it out. In this case, Art is Bad. Boring is Good. LLMs are good at boring.

So the customer thing - by that I mean, we set the targets. We tell coders (AI or human) what we want, so it's us that judge what's good and if it meets our spec. The difficulty for the coders is not so much writing the code, but understanding the target, and that barrier is one that's mostly our fault. We struggle to tell other humans what we want, let alone machines, which is why development meetings can go on for hours and a lot of time is wasted showing progress for approval. Once the computers are defining the targets, they'll be fixing them before we're even aware. This means a change from the LLM prompt -> answer methodology, and a number of guardrails being removed, but that's going to happen sometime.

At the moment it's all new and we're watching changes carefully. But we'll tire of doing that and get complacent; after all, we're only human. Our focus is limited and we're sometimes lazy. We'll relax those guardrails. We'll get AIs to tell other AIs what to do to save ourselves even the work of prompting. We'll let them work in our codebase without checking every line. It'll go wrong, probably spectacularly. But we won't stop using it.

[-] helix@feddit.org 2 points 1 day ago

Good points as well. I agree with most of them, except one: that AI writes good code because it's boring.

There's fancy code which is too artful to maintain and artful code which is easy and beautiful and good to maintain. Artful code doesn't have to be fancy and hard to read. Artful code can be boring and stupidly simple.

LLMs tend to take something a skilled programmer could write in 10 lines and write it in 50 instead. Think of it unrolling loops into sequential statements [++var; ++var; ++var... instead of while(++var)] or turning case statements and nested ifs into if.. if.. if.. chains.
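A toy contrast of what I mean (a made-up example of mine, not actual LLM output):

```c
#include <stdio.h>

#define N 4

/* The unrolled, 50-lines-where-10-would-do style: the same statement
 * repeated instead of a loop. It works, but every new element means
 * another copy-pasted line. */
static int sum_unrolled(const int r[N]) {
    int total = 0;
    total += r[0];
    total += r[1];
    total += r[2];
    total += r[3];
    return total;
}

/* The boring-but-artful version: same behaviour, fewer lines,
 * and it doesn't care how long the array gets. */
static int sum_loop(const int *r, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += r[i];
    return total;
}

int main(void) {
    int readings[N] = {3, 1, 4, 1};
    printf("%d %d\n", sum_unrolled(readings), sum_loop(readings, N));
    return 0;
}
```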

Sure, such code works, but it's hard to maintain, and the alternative is more beautiful: fewer lines of code, easier to read and to understand. That's what artful code is to me.

Most code in companies tends to be less than optimal. Most companies mostly employ workers who aren't that skilled. If you compare regular business code with the super clean code of open source frameworks (e.g. Spring), you tend to hit your head against the wall.

LLM code is way harder to maintain than human code, even worse than lifeless, artless, "boring" business code. I doubt it'll get better because it copies shit code from the average and less-than-average programmers doing a busy-ness.

I mean, you could easily throw lots and lots of already solved and documented problems at an LLM and it'll do better than humans, because it's essentially autocorrect with context from Stack Overflow and interview question books.

Over time, LLMs will get better input data and produce better output, which will lead to better code quality. You still need to know how to prompt, and it still won't solve any new problem you encounter, only problems others have encountered and solved thousands of times.

In that regard, the shit that programmers in companies usually churn out can and will be replaced with LLM-generated output, which, on average, is better than what the median business programmer produces. I'll give you that. I guess it will make bad programmers less obvious and harmful, which might be good. Or bad, if your company only employs prompt monkeys and not a single sane developer.

it’s us that judge what’s good and if it meets our spec.

I'd argue that most people in companies can't even judge what's good and meets the specs 🤓

[-] helix@feddit.org 1 points 2 days ago

That’s a scary thought and we’ve no way of telling if that’s close or far away.

AI is always 5 years away, no matter the year.

[-] digdilem@lemmy.ml 2 points 2 days ago

I still think it's going to be discovered by some guy working at home one evening.

The first most of us will know about it is when the sky goes dark.

(I've possibly read too much scifi)

[-] Feyd@programming.dev 4 points 3 days ago

AI tools will improve and in the near future

There isn't a good reason to believe they'll be as good as you're saying.

[-] digdilem@lemmy.ml 1 points 2 days ago

You sure?

Every iteration of the major models is better, faster, with more context. They're improving at an increasing rate. They're already relied upon to write code for production systems in thousands of companies. Today's reality is already as good as I'm saying. Tomorrow's will be better.

Give it, what, ten or twenty years and the thought of a human being writing computer code will be anachronistic.

[-] Feyd@programming.dev 1 points 2 days ago* (last edited 2 days ago)

The major thing holding LLMs back is that they don't actually understand or reason. They purely predict in the dimension of text. That is a fundamental aspect of the technology that isn't going to change. To be as good as you're saying requires a different technology.

Also, a lot of what you see people saying they're doing is strongly exaggerated...

[-] digdilem@lemmy.ml 1 points 2 days ago

I think it's... not wise to underplay or try to predict the growth of LLMs and AI. Five years ago we couldn't have predicted their impact on many roles today. In another five years it will be different again.

[-] helix@feddit.org 1 points 2 days ago

Yeah, I think we'll hit the model collapse issue soon. As most of the dead internet is generated by AI, the effort to figure out what is real and what is a hallucination will inevitably fail, and the LLM Ouroboros will end up eating its own tail.
