submitted 3 days ago* (last edited 3 days ago) by AutistoMephisto@lemmy.world to c/technology@lemmy.world

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

top 50 comments
[-] DupaCycki@lemmy.world 8 points 1 day ago

Personally, I tried using LLMs for reading error logs and summarizing what's going on, and I can say that even with somewhat complex errors they were almost always right and very helpful. So the general consensus holds: use them as assistants within a narrow scope.

Though it should also be noted that I only did this at work. While it seems to work well, I think I'd still limit such use in personal projects, since I want to keep learning more, and private projects are generally much more enjoyable to work on.

Another interesting use case I can highlight is using a chatbot as documentation when the actual documentation is horrible. However, this only works within the same ecosystem - for instance, Copilot with MS software. Microsoft definitely trained Copilot on its own stuff, and it's often considerably more helpful than the docs.
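
For anyone who wants to try the error-log workflow described at the top of this comment, a minimal sketch follows. It assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment; the model name and log file are placeholders, not anything the comment specifies.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_error_log(log_text: str) -> str:
    """Ask the model to explain an error log in plain terms."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a debugging assistant. Summarize this error log, "
                    "identify the most likely root cause, and say explicitly "
                    "if you are unsure."
                ),
            },
            {"role": "user", "content": log_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("app-error.log") as f:  # hypothetical log file
        print(summarize_error_log(f.read()))
```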

[-] phed@lemmy.ml 24 points 2 days ago

I do a lot with AI, but it is not good enough to replace humans - not even close. It repeats the same mistakes after you tell it no, and it doesn't remember things from 3 messages ago when it should. You have to keep re-explaining the goal to it. It's wholly incompetent. And yeah, when you have it build stuff you aren't familiar with or didn't write yourself, definitely - I either have it write a commentary, or I take the time right then to ask it what x or y does and add a comment.

[-] kahnclusions@lemmy.ca 16 points 1 day ago* (last edited 1 day ago)

Even worse, the ones I’ve evaluated (like Claude) constantly fail to even compile because, for example, they mix usages of different SDK versions. When instructed to use version 3 of some package, it will add the right version as a dependency but then still code with missing or deprecated APIs from the previous version that are obviously unavailable.

More time (and money, and electricity) is wasted trying to prompt it toward correct code than simply writing it yourself, and at the end of the day you have a smoking turd that no one even understands.

LLMs are a dead end.
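
To make the version-mixing failure mode above concrete, here is a hypothetical Python equivalent (the comment names no specific SDK; pydantic is used purely as an illustration). The dependency pin says v2, but the code is v1-style; depending on how far apart the versions are, the result ranges from deprecation warnings to outright errors.

```python
# requirements.txt pins the new major version:
#   pydantic>=2.0

from pydantic import BaseModel, validator  # v1-style import; v2 renamed
                                           # this to field_validator

class User(BaseModel):
    name: str

    @validator("name")  # deprecated in v2; warns at class-creation time
    def name_not_empty(cls, v):
        if not v:
            raise ValueError("name must not be empty")
        return v

user = User(name="Ada")
print(user.dict())  # v1 API; v2 deprecates .dict() in favor of .model_dump()
```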

[-] MangoCats@feddit.it 5 points 1 day ago

constantly fail to even compile because, for example, they mix usages of different SDK versions

Try an agentic tool like Claude Code - it closes the loop by testing the compilation for you and fixing its mistakes (like human programmers do) before bothering you for another prompt; a rough sketch of that loop is at the end of this comment. I was where you are six months ago; the tools have improved dramatically since then.

From TFS:

I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

That sounds like a "fractional CTO problem" to me (IMO a fractional CTO is a guy who convinces several small companies that he's a brilliant tech genius who will help them make their important tech decisions without actually paying full-time attention to any of them. Actual tech experience: optional.)

If you have lost confidence in your ability to modify your own creation, that's not a tools problem - you are the tool, and that's a you problem. It doesn't matter whether you're using an LLM coding tool, a team of human developers, or a pack of monkeys to write your applications: if you don't document, test, and formally develop an "understanding" of your product that not only you but all stakeholders can grasp to the extent they need to, you're just letting development run wild - you lack software development process maturity. LLMs can do that faster than a pack of monkeys or a bunch of kids you hired off Craigslist, but it's the exact same problem no matter how you slice it.
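
The promised sketch of the compile-test-fix loop, for the curious. This is not Claude Code's actual implementation; ask_llm() and apply_patch() are hypothetical stand-ins for the model call and the file edit - only the loop structure is the point.

```python
import subprocess

def build() -> tuple[bool, str]:
    """Run the project's build and capture any compiler errors."""
    proc = subprocess.run(["make", "build"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical: call your model here")

def apply_patch(patch: str) -> None:
    raise NotImplementedError("hypothetical: apply the suggested edit here")

# Keep building and repairing until the build passes (bounded, not forever);
# only then is the human asked for another prompt.
for attempt in range(5):
    ok, errors = build()
    if ok:
        break
    patch = ask_llm(f"The build failed with:\n{errors}\nPropose a fix.")
    apply_patch(patch)
```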

[-] kahnclusions@lemmy.ca 1 points 22 hours ago* (last edited 22 hours ago)

If you mean I have to install Claude’s software on my own computer, no thanks.

[-] III@lemmy.world 2 points 1 day ago

The comparison of an LLM to a team of human developers is a great example. Like outsourcing your development, an LLM is less a tool and more just delegation. And yes, you can dig in deep to understand everything the LLM has been delegated to do, the same way you can get deeply involved with a human development team to maintain understanding. But most of the time the sell is that you save time - which means you aren't expected to micromanage your development team.

It is a fractional-CTO problem, but the actual issue is that developers are effectively being asked to become fractional CTOs by using LLMs, because they are measured against expected productivity increases that leave no time for understanding.

[-] echodot@feddit.uk 10 points 1 day ago

There's no point telling it not to do x, because as soon as you mention x it goes into its context window.

It has no filter. It's as if you had no choice in your actions and had to act on every thought that came into your head: if you were told not to do a thing, you'd immediately start thinking about doing it.

[-] kahnclusions@lemmy.ca 4 points 1 day ago* (last edited 1 day ago)

I’ve noticed this too, it’s hilarious(ly bad).

Especially with image generation, which we were using to make some quick avatars for a D&D game. “Draw a picture of an elf.” Generates images of elves that all have one weird earring. “Draw a picture of an elf without an earring.” Great, now the elves have even more earrings.

[-] MangoCats@feddit.it 2 points 1 day ago

I find this kind of performance to vary from one model to the next. I definitely have experienced the bad image getting worse phenomenon - especially with MS Copilot - but different models will perform differently.

[-] MangoCats@feddit.it 2 points 1 day ago

There’s no point telling it not to do x, because as soon as you mention x it goes into its context window.

Reminds me of the Sonny Bono high-speed downhill skiing problem: don't fixate on that tree - if you fixate on the tree, you're going to hit the tree. Fixate on the open space to the side of it.

LLMs do "understand" words like not, and don't, but they also seem to work better with positive examples than negative ones.
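
A concrete (hypothetical) illustration of that last point - restating the instruction positively keeps the unwanted concept out of the prompt entirely:

```python
# Negative instruction: puts "earring" straight into the context window,
# which tends to make the concept more salient, not less.
negative_prompt = "Draw a picture of an elf. Do not give the elf an earring."

# Positive restatement: describes only what should be present.
positive_prompt = "Draw a picture of an elf with bare, unadorned ears."
```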

[-] Nalivai@lemmy.world 31 points 2 days ago

They never actually say what "product" they make; it's always "shipped product", like they're a fucking Amazon warehouse. I suspect it's some trivial webpage that would take a student an afternoon to whip up, and that they spent three days arguing with an autocomplete to shit out.

[-] e461h@sh.itjust.works 7 points 2 days ago

Cloudflare, AWS, and other recent major service outages are what come to mind re: AI code. I’ve no doubt it is getting forced into critical infrastructure without proper diligence.

Humans are prone to error, so imagine the errors our digital progeny are capable of!

[-] dejected_warp_core@lemmy.world 48 points 2 days ago

To quote your quote:

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

I think the author just independently rediscovered "middle management". Indeed, when you delegate the gruntwork under your responsibility, the people you delegated it to are the ones you go to about bugs and new requirements. It's not on you to effect repairs: it's on your team. I am Jack's complete lack of surprise. The idea that you can rely on AI to do nuanced work like this and arrive at the exact correct answer is naive at best. I'd be sweating too.

The problem, though, with AI compared to humans: the human team learns, i.e. at some point they probably know what the mistake was and avoid repeating it. With AI instead of humans: well, maybe the next or a different model will fix it... maybe.

And what is very clear to me after trying to use these models: the larger the code-base, the worse the AI gets, to the point of not helping at all or even being destructive. The exception is dissecting small, isolatable pieces of independent code (i.e. keeping the context small for the AI).

Humans likely get slower with a larger code-base, but they (usually) don't arrive at a point where they can't progress any further.

[-] MangoCats@feddit.it 3 points 1 day ago

Humans likely get slower with a larger code-base, but they (usually) don’t arrive at a point where they can’t progress any further.

Notable exceptions like: https://peimpact.com/the-denver-international-airport-automated-baggage-handling-system/

[-] Evotech@lemmy.world 28 points 2 days ago

Just ask the AI to make the change?

[-] theneverfox@pawb.social 21 points 2 days ago

AI isn't good at changing code, or really even understanding it... It's good at writing it, ideally 50-250 lines at a time

[-] Agent641@lemmy.world 48 points 2 days ago

I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.

Let's just call it even.

[-] edgemaster72@lemmy.world 201 points 3 days ago

Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

And all they'll hear is "not failure, metrics great, ship faster, productive" and go against your advice, because who cares about three months later - that's next quarter, and line must go up now. I also found this bit funny:

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me... I was proud of what I’d created.

Well, you didn't create it - you said so yourself - so I'm not sure why you'd be proud. It's almost like the conclusion should've been blindingly obvious right there.

[-] raspberriesareyummy@lemmy.world 106 points 3 days ago

So there are actual developers who could have told you from the start that LLMs are useless for coding, and then there's this moron and similar people who first have to fuck up an ecosystem before believing the obvious. Thanks, fuckhead, for driving RAM prices through the ceiling... and for wasting energy and water.

[-] psycotica0@lemmy.ca 105 points 3 days ago

I can at least kinda appreciate this guy's approach. If we assume that AI is a magic bullet, then it's not crazy to assume that we, the existing programmers, would resist it just to save our own jobs. Or that we'd complain because it doesn't do things our way - but we're the old way and this is the new way. So maybe we're just being whiny and can be ignored.

So he tested it to see for himself, and what he found was that he agreed with us, that it's not worth it.

Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn't always a bad idea.

[-] lepinkainen@lemmy.world 16 points 2 days ago* (last edited 12 hours ago)

Same thing would happen if they were a non-coder project manager or designer working with a team of actual human programmers.

Stuff done, shipped, and working.

“But I can’t understand the code 😭” - yes. You were the project manager; why should you?

[-] JcbAzPx@lemmy.world 36 points 2 days ago

I think the point is that someone should understand the code. In this case, no one does.

[-] Suffa@lemmy.wtf 37 points 2 days ago

AI is really great for small apps. I've saved so many weekend hours that would otherwise have gone into coding some small thing I only need a few times; now I can just get an AI to spit it out for me.

But anything big and it's fucking stupid - it cannot track large projects at all.
