submitted 3 days ago* (last edited 3 days ago) by AutistoMephisto@lemmy.world to c/technology@lemmy.world

Just want to clarify, this is not my Substack, I'm just sharing this because I found it insightful.

The author describes himself as a "fractional CTO" (no clue what that means, don't ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

[-] ignirtoq@feddit.online 133 points 3 days ago

We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

Except we are talking about that, and the tech bro response is "in 10 years we'll have AGI and it will do all these things all the time permanently." In their roadmap, there won't be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

What's most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

"Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders."

[-] grue@lemmy.world 65 points 3 days ago

That's why they're all-in on authoritarianism.

[-] pdxfed@lemmy.world 58 points 3 days ago

Great article, brave and correct. Good luck convincing the same leaders who blindly believe in a magical trend for this or next quarter's numbers; they don't care about things a year away, let alone 10.

I work in HR and was struck by the parallel with management jobs being gutted at major corps starting in the 80s and 90s during "downsizing", where they either never replaced those roles or offshored them. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, processes, high turnover, etc.? They take $ on the way in and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or your employees.

Hope leaders can be a bit braver and wiser this go 'round so we don't get to a cliff's edge in software.

[-] Unlearned9545@lemmy.world 51 points 3 days ago

Fractional CTO: Some small companies benefit from the senior experience of this kind of executive but don't have the money or the need to hire one full time, so these execs serve in the C-suite for several companies, a fraction of their time each.

[-] vpol@feddit.uk 59 points 3 days ago

The developers can’t debug code they didn’t write.

This is a bit of a stretch.

[-] Xyphius@lemmy.ca 47 points 3 days ago

Agreed. 50% of my job is debugging code I didn't write.

[-] deathbird@mander.xyz 25 points 2 days ago

I think this kinda points to why AI is pretty decent for short videos, photos, and texts. It produces outputs that one applies meaning to, and humans are meaning making animals. A computer can't overlook or rationalize a coding error the same way.

[-] JustTesting@lemmy.hogru.ch 1 points 1 day ago

so the obvious solution is to just have humans execute our code manually. Grab a pen and some crayons, go through it step by step and write variable values on the paper and draw the interface with the crayons and show it on a webcam or something. And they can fill in the gaps with what they think the code in question is supposed to do. easy!

[-] rimu@piefed.social 54 points 3 days ago* (last edited 3 days ago)

FYI this article is written with an LLM.

[image]

Don't believe a story just because it confirms your view!

[-] CarbonatedPastaSauce@lemmy.world 64 points 3 days ago* (last edited 3 days ago)

Something any (real, trained, educated) developer who has even touched AI in their career could have told you. Without a 3-month study.

[-] AutistoMephisto@lemmy.world 74 points 3 days ago* (last edited 3 days ago)

What's funny is this guy has 25 years of experience as a software developer. But three months was all it took to make it worthless. He also said it was harder than if he'd just written the code himself. Claude would make a mistake, he would correct it. Claude would make the same mistake again, having learned nothing, and he'd fix it again. Constant firefighting, he called it.

this post was submitted on 07 Dec 2025
1064 points (100.0% liked)