submitted 1 week ago* (last edited 1 week ago) by AutistoMephisto@lemmy.world to c/technology@lemmy.world

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[-] InvalidName2@lemmy.zip 28 points 1 week ago

And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having LLMs build out some of your more mundane code like unit/integration tests, help you update your deployment pipeline, or generate boilerplate that's not already covered by your framework. That it can't write 100% of your codebase perfectly from the get-go doesn't mean it's entirely useless.
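For instance, the sort of mechanical test scaffolding that's easy to delegate and easy to review. A minimal sketch (pytest here; `slugify` and its module are made up for illustration):

```python
# The kind of mundane, reviewable test boilerplate being described.
# `myapp.text.slugify` is a hypothetical function, used only as an example.
import pytest

from myapp.text import slugify  # hypothetical module under test

@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Hello World", "hello-world"),
        ("  spaced   out  ", "spaced-out"),
        ("Already-Slugged", "already-slugged"),
        ("", ""),
    ],
)
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```

Writing cases like these is rote; checking them takes seconds. That's the kind of delegation being argued for.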

[-] Soggy@lemmy.world 31 points 1 week ago

Except that it's work junior coders could be doing, to develop the next generation of actual good developers.

[-] SreudianFlip@sh.itjust.works 19 points 1 week ago* (last edited 1 week ago)

Yes, and that's exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.

If you have no junior developers, who will turn into senior developers later on?

[-] pinball_wizard@lemmy.zip 7 points 1 week ago

If you have no junior developers, who will turn into senior developers later on?

At least it isn't my problem. As long as I have CrowdStrike, Cloudflare, Windows 11, AWS us-east-1 and log4j... I can just keep enjoying today's version of the Internet, unchanged.

[-] MisterOwl@lemmy.world 2 points 1 week ago
[-] SreudianFlip@sh.itjust.works 2 points 1 week ago

Al is a pretty good guy but he can't be everywhere. Maybe he can use some A.I. to help!

[-] JcbAzPx@lemmy.world 4 points 1 week ago

If it's boilerplate, copy/paste and find/replace work just as well, without needing data centers in the desert to develop it.
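To illustrate the point (module and class names invented): plain string substitution stamps out the same boilerplate with zero model involvement.

```python
# Sketch: generating boilerplate test skeletons with find/replace-style
# templating, no model required. Module and class names are made up.
TEMPLATE = """\
import unittest
from {module} import {cls}

class Test{cls}(unittest.TestCase):
    def setUp(self):
        self.obj = {cls}()
"""

for module, cls in [("cart", "Cart"), ("invoice", "Invoice")]:
    with open(f"test_{module}.py", "w", encoding="utf-8") as f:
        f.write(TEMPLATE.format(module=module, cls=cls))
```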

[-] raspberriesareyummy@lemmy.world 2 points 1 week ago

And then there are actual good developers who could or would tell you that LLMs can be useful for coding

The only people who believe that are managers and bad developers.

[-] keegomatic@lemmy.world 7 points 1 week ago

You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.

[-] raspberriesareyummy@lemmy.world 3 points 1 week ago

There’s a difference between vibe coding and responsible use.

There's also a difference between the occasional evening getting drunk and alcoholism. That doesn't make an occasional event healthy, nor does it mean you are qualified to drive a car in that state.

People who use LLMs in production code are - by definition - not "good developers". Because:

  • a good developer has a clear grasp of every single instruction in the code - and critically reviewing code generated by someone else is more effort than writing it yourself
  • pushing code to production without critical review is grossly negligent and compromises data & security

This already means the net gain from using LLMs is negative. Can you use them to quickly push out some production code & impress your manager? Possibly. Will it be efficient? It might be. Will it be bug-free and secure? You'll never know until shit hits the fan.

Also: a dev using LLMs to generate code will likely be violating open-source copyrights left and right, effectively copy-pasting licensed code from other people without attributing authorship, i.e. exhibiting parasitic behavior and outright violating the law. Furthermore, the stuff that applies to all users of LLMs applies:

  • they contribute to the hype, fucking up our planet, causing brain rot and skill loss on average, and pumping hardware prices to insane heights.
[-] theterrasque@infosec.pub 1 point 6 days ago

You're pushing code to prod without PRs and code reviews? What kind of jank-ass cowboy shop are you running?

It doesn't matter if an LLM or a human wrote it; it needs peer review, unit tests, and QA before it gets anywhere near production.

[-] keegomatic@lemmy.world 1 point 1 week ago

We have substantially similar opinions, actually. I agree on your points of good developers having a clear grasp over all of their code, ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.

However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.

Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny. It’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…

  1. force close attention to edits as they are being written,
  2. facilitate handholding and constant instruction while the model is making decisions, and
  3. ensure thorough review at the time of design/writing/conclusion of the change.

When it comes to making safe and correct changes via LLM, specifically, I have seen plenty of “good developers” in real life, now, who have engineered their workflows to use AI cautiously like this.
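One concrete example of such a roadblock, as a sketch only (the 40-line threshold and the trailer name are invented): a git commit-msg hook that rejects large staged diffs unless the message carries an explicit human-review trailer, forcing changes to land in pieces someone has actually read.

```python
#!/usr/bin/env python3
# Sketch of a review "roadblock": a git commit-msg hook that refuses
# commits whose staged diff exceeds a size threshold unless the commit
# message carries an explicit review trailer. The threshold and trailer
# name are invented for illustration.
import subprocess
import sys

MAX_UNREVIEWED_LINES = 40  # arbitrary; tune per team

numstat = subprocess.run(
    ["git", "diff", "--cached", "--numstat"],
    capture_output=True, text=True, check=True,
).stdout

changed = 0
for line in numstat.splitlines():
    added, deleted, _path = line.split("\t", 2)
    if added != "-":  # binary files report "-" for line counts
        changed += int(added) + int(deleted)

message = open(sys.argv[1], encoding="utf-8").read()  # git passes the message file path
if changed > MAX_UNREVIEWED_LINES and "Reviewed-by-human:" not in message:
    print(f"{changed} changed lines and no review trailer; split the change up.")
    sys.exit(1)
```

The point isn't this particular hook; it's that the friction is deliberate and sits between the model's output and anything that ships.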

Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black and white/all or nothing.

[-] raspberriesareyummy@lemmy.world 2 points 6 days ago

While I appreciate your nuanced opinion, I strongly disagree. As long as there is no actual AI involved (and considering that humanity is dumb enough to throw hundreds of billions at a gigantic parrot, I doubt we'd stand a chance of developing true AI, even if it were possible to create), the output has no reasoning behind it.

  • it violates licenses and denies authorship - if everyone were indeed equal before the law, this alone would disqualify the output of such a model, because it's simply illegal to use code in violation of license restrictions and stripped of licensing/authorship information
  • there is no point. Developing code is 95-99% solving the problem in your mind, and 1-5% actual code writing. You can't have an algorithm do the writing for you and then skip the thinking part. And if you do the thinking anyway, you have gained nothing.

A good developer has zero need for non-deterministic tools.

As for potential use in brainstorming ideas / looking at potential solutions: that's what Usenet was good for, before those very corporations fucked it up for everyone; the same corporations are now force-feeding everyone snake oil and pretending it has some semblance of intelligence.

[-] keegomatic@lemmy.world 1 point 5 days ago

violates licenses

Not a problem if you believe all code should be free. Being cheeky, but this has nothing to do with code quality, despite being true.

do the thinking

This argument can be used equally well in favor of AI assistance, and it’s already covered by my previous reply

non-deterministic

It’s deterministic

brainstorming

This is not what a “good developer” uses it for
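On the determinism point, to be concrete: temperature sampling is stochastic, but greedy decoding at temperature 0 is reproducible in principle (GPU floating-point and batching quirks aside). A toy sketch with a made-up next-token distribution:

```python
# Toy illustration: the decoding rule, not the model, decides whether
# output varies between runs. The "model" here is just a fixed,
# made-up next-token distribution.
import random

VOCAB = ["foo", "bar", "baz"]
PROBS = [0.5, 0.3, 0.2]  # pretend model output for some prompt

def sample_token(rng: random.Random) -> str:
    """Temperature-style sampling: different runs can pick different tokens."""
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

def greedy_token() -> str:
    """Greedy (temperature 0) decoding: always the argmax, hence repeatable."""
    return VOCAB[max(range(len(VOCAB)), key=PROBS.__getitem__)]

print([sample_token(random.Random()) for _ in range(5)])  # varies run to run
print([greedy_token() for _ in range(5)])  # always "foo"
```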

[-] raspberriesareyummy@lemmy.world 2 points 5 days ago

  • you have no clue about licenses
  • you have no clue what deterministic means

I can't keep you from doing what you want, but I will continue to view software developers using LLMs as script kiddies playing with fire.
