
"No Duh," say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

[-] Lettuceeatlettuce@lemmy.ml 24 points 6 hours ago* (last edited 6 hours ago)

You mean relying blindly on a statistical prediction engine to attempt to produce sophisticated software without any understanding of the underlying principles or concepts doesn't magically replace years of actual study and real-world experience?

But trust me, bro, the singularity is imminent, LLMs are the future of human evolution, true AGI is nigh!

I can't wait for this idiotic "AI" bubble to burst.

[-] Tollana1234567@lemmy.today 4 points 5 hours ago

So is the profit it was foretold to generate; in reality it costs more money than it's actually generating.

[-] melfie@lemy.lol 12 points 11 hours ago* (last edited 10 hours ago)

This article sums up a Stanford study of AI and developer productivity. TL;DR: the net productivity boost is a modest 15-20%, dropping to somewhere between negative and 10% in complex, brownfield codebases. This tracks with my own experience as a dev.

https://www.linkedin.com/pulse/does-ai-actually-boost-developer-productivity-striking-%C3%A7elebi-tcp8f

[-] ready_for_qa@programming.dev 5 points 10 hours ago

These types of articles always fail to mention how well trained the developers were on techniques and tools. In my experience that makes a big difference.

My employer mandates that we use AI and provides us with any model, IDE, or service we ask for. Where it falls short is providing training or direction on ways to use it. Most developers seem to go straight for results-style prompting and get a terrible experience.

I, on the other hand, provide a lot of context through documents and various MCP tooling, talk through the existing patterns in the codebase, and provide other repositories as examples; then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.

So if a developer is just prompting "Do XYZ", it's no wonder they're spending more time untangling a random mess.

Another aspect is that everyone seems to always be working under the gun and they just don't have the time to figure out all the best practices and techniques on their own.

I think this should be considered when we hear things like this.

[-] korazail@lemmy.myserv.one 4 points 8 hours ago

I have 3 questions, and I'm coming from a heavily AI-skeptic position, but am open:

  1. Do you believe that providing all that context, describing the existing patterns, creating an implementation plan, etc., lets the AI write better code, and write it faster, than if you just did it yourself? To me, this just seems like having to re-write your technical documentation in prose each time you want to do something. You say this is better than 'Do XYZ', but how much twiddling of your existing codebase do you need to do before an AI can understand the business context of it? I don't currently do development on an existing codebase, but every time I try to get these tools to do something fairly simple from scratch, they just flail. Maybe I'm just not spending the hours to build my AI-parsable functional spec. Every time I've tried this, asking something as simple as (paraphrased for brevity) "write an Asteroids clone using JavaScript and HTML5 Canvas" results in a full failure, even with multiple retries chasing errors. I wrote something like that a few years ago to learn JavaScript, and it took me a day-ish to get something that mostly worked.

  2. Speaking of that context. Are you running your models locally, or do you have some cloud service? If you give your entire codebase to a 3rd party as context, how much of your company's secret sauce have you disclosed? I'd imagine most sane companies are doing something to make their models local, but we see regular news articles about how ChatGPT is training on user input and leaking sensitive data if you ask it nicely and I can't imagine all the pro-AI CEOs are aware of the risks here.

  3. How much pen-testing time are you spending on this code: error handling, edge cases, race conditions, data sanitization? An experienced dev understands these things innately, having fixed these kinds of issues in the past, and knows the anti-patterns and how to avoid them. In all seriousness, I think this is going to be the thing that actually kills AI vibe coding, but it won't be fast enough. There will be tons of new exploits in what used to be solidly safe places. Your new web front-end? It has a really simple SQL injection vulnerability. Your phone app? You can tell it your username is admin'joe@google.com and it'll let you order stuff for free since you're an admin.
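The SQL injection worry in point 3 is easy to make concrete. Below is a minimal sketch, with a hypothetical schema and admin check made up for illustration (using Python's built-in sqlite3), contrasting the string-pasting anti-pattern that a naive code generator can emit with the parameterized fix:

```python
import sqlite3

# Hypothetical table and user, invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('joe', 0)")

def is_admin_unsafe(username: str) -> bool:
    # Anti-pattern: user input pasted straight into the SQL string.
    query = f"SELECT 1 FROM users WHERE name = '{username}' AND is_admin = 1"
    return conn.execute(query).fetchone() is not None

def is_admin_safe(username: str) -> bool:
    # Fix: a parameterized query; the driver treats input as a literal value.
    query = "SELECT 1 FROM users WHERE name = ? AND is_admin = 1"
    return conn.execute(query, (username,)).fetchone() is not None

payload = "x' OR 1=1 --"
print(is_admin_unsafe(payload))  # True: the injected OR 1=1 bypasses the admin check
print(is_admin_safe(payload))    # False: the payload matches no real username
```

The unsafe variant turns the payload into `... WHERE name = 'x' OR 1=1 --'`, where `--` comments out the admin condition entirely; exactly the class of bug a reviewer has to be able to spot in generated code.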

I see a place for AI-generated code, for instant functions that do something blending simple and complex. "Hey claude, write a function to take a string and split it at the end of every sentence containing an uppercase A". I had to write weird functions like that constantly as a sysadmin, and transforming data seems like a thing an AI could help me accelerate. I just don't see that working on a larger scale, though, or trusting an AI enough to allow it to integrate a new function like that into an existing codebase.
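A sketch of what such a throwaway transform might look like, under one plausible reading of that request (the function name and the naive sentence-boundary rule are my own assumptions):

```python
import re

def split_after_a_sentences(text: str) -> list[str]:
    """Split text at the end of every sentence containing an uppercase 'A'.

    Sentences are naively assumed to end at '.', '!' or '?'; boundaries of
    sentences without an 'A' are left inside the current chunk.
    """
    parts, buf = [], ""
    # Each match is one sentence plus its terminator and trailing whitespace.
    for sentence in re.findall(r"[^.!?]*[.!?]?\s*", text):
        if not sentence:
            continue  # skip the empty match findall yields at end of input
        buf += sentence
        if "A" in sentence:
            parts.append(buf)
            buf = ""
    if buf:
        parts.append(buf)
    return parts

print(split_after_a_sentences("Alpha is first. beta next. All done."))
# ['Alpha is first. ', 'beta next. All done.']
```

Exactly the kind of one-off data-massaging function where generated code is quick to verify by eye and cheap to throw away.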

[-] Aljernon@lemmy.today 11 points 12 hours ago

Senior Management in much of Corporate America is like a kind of modern Nobility in which looking and sounding the part is more important than strong competence in the field. It's why buzzwords catch like wildfire.

[-] ChaoticEntropy@feddit.uk 13 points 12 hours ago* (last edited 12 hours ago)

Are you trying to tell me that the people wanting to sell me their universal panacea for all human endeavours were... lying...? Say it ain't so.

[-] SparroHawc@lemmy.zip 3 points 12 hours ago

I mean, originally they thought they had come upon a magic bullet. Turns out it wasn't the case, and now they're going to suffer for it.

[-] Feyd@programming.dev 1 points 1 hour ago* (last edited 1 hour ago)

You're assuming honesty and they've earned the opposite posture.

[-] sadness_nexus@lemmy.ml 6 points 11 hours ago

I'm not a programmer in any sense. Recently, I made a project where I used Python and a Raspberry Pi and had to train some small models on the KITTI data set. I used AI to write the broad structure of the code, but in the end it took me a lot of time going through the Python documentation, as well as the documentation of the specific tools/modules I used, to actually get the code working. Would an experienced programmer get the same work done in an afternoon? Probably. But the code the AI output still had a lot of flaws. Someone who knows more than me would probably input better prompts and better follow-up requirements, and probably get a better structure from the AI, but I doubt they'd get complete code. In the end, even to use AI efficiently, you have to know what you're doing, and you still have to polish the code into something that actually works.

[-] spicehoarder@lemmy.zip 3 points 8 hours ago* (last edited 8 hours ago)

From my experience, AI just seems to be a lesson in overfitting. You can't use it to do things nobody has done before. Furthermore, you only really get good responses from prompts related to JavaScript.

[-] MrSulu@lemmy.ml 4 points 11 hours ago

Perhaps it should read "All AI is over hyped, over done and we should be over it"

[-] badgermurphy@lemmy.world 12 points 16 hours ago

I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don't understand, though, is the magnitude of this bubble then.

Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

So, I guess my question is, "What specific LLM tools are generating profits or productivity at a substantial level well exceeding their operating costs?" If there really are none, or if the gains are only incremental, then my question becomes an incredulous, "Is this biggest in history tech bubble really composed entirely of unfounded hype?"

[-] brunchyvirus@fedia.io 1 points 52 minutes ago

I think right now companies are competing until there are only 1 or 2 that clearly own the majority of the market.

Afterwards they will devolve back into the same thing search engines are now. A cesspool of sponsored ads and links to useless SEO blogs.

They'll just become gate keepers of information again and the only ones that will be heard are the ones who pay a fee or game the system.

Maybe not though, I'm usually pretty cynical when it comes to what the incentives of businesses are.

[-] JcbAzPx@lemmy.world 9 points 11 hours ago

This strikes upon one of the greatest wishes of all corporations: a way to get work without having to pay people for it.

[-] SparroHawc@lemmy.zip 21 points 15 hours ago

From what I've seen and heard, there are a few factors to this.

One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they're at the forefront of the Next Big Thing in order to keep bringing investment money in.

Another is that LLMs are uniquely suited to extending the honeymoon period.

The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. A VC mogul sitting down to have a conversation with ChatGPT, when it was new, was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out - which a VC, and mid/upper management, does a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person - which means a lot of savings for companies who can shed expensive knowledge workers.

The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs still are bad at actually being reliable. This makes LLMs worse at practically any knowledge work than the lowest, greenest intern - because at least the intern can be taught to say they don't know something instead of feeding you BS.

It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to continue to build steam for the last couple years.

Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone's eyes, the more money continues to roll in. So, the bubble keeps building.

[-] badgermurphy@lemmy.world 6 points 12 hours ago

The upshot of this and a lot of the other replies, here and elsewhere, seems to be that one big difference between this bubble and past ones is that so much of the global economy is now tied to its fate that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.

[-] leastaction@lemmy.ca 8 points 14 hours ago

AI is a financial scam. Basically companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.

[-] TipsyMcGee@lemmy.dbzer0.com 7 points 16 hours ago

When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

[-] arc99@lemmy.world 11 points 16 hours ago* (last edited 16 hours ago)

I have never seen AI-generated code that is correct. Not once. I've certainly seen it be broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.

I sure as hell wouldn't trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through "vibe" programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.

[-] bountygiver@lemmy.ml 1 points 2 hours ago

For me it typically doesn't cause syntax errors; the main thing it fucks up is what you specifically told it to do, where the output straight up does not perform the way your specification requires. If it's just syntax errors, at least the compiler can catch them; this way, you won't even know unless you bother testing the output.

[-] theterrasque@infosec.pub 5 points 12 hours ago* (last edited 11 hours ago)

I've used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can't do myself, but things I can't be arsed to sit down and actually do.

It's actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server/webapp with the same functionality, and was able to extend it with a few new features I've been thinking of adding.

These are not complex things, but a few of them were 20+ files big, and it managed to not only navigate the code, but understand it well enough to add features with changes touching multiple files (model, logic, and view layers, for example, or refactoring a too-big class and updating all references to use the new classes).

So it's absolutely useful and capable of writing good code.

[-] chicagohuman@lemmy.zip 3 points 12 hours ago

This is the truth. It has tremendous value but it isn't a solution -- it's a tool. And if you don't know how to code or what good code looks like, then it is a tool you can't use!

[-] Corridor8031@lemmy.ml 1 points 11 hours ago

would you deploy this server?

[-] ikirin@feddit.org 5 points 14 hours ago* (last edited 14 hours ago)

I've seen and used AI for snippets of code and it's pretty decent at that.

With my colleagues I always compare it to a battery-powered drill. It's very powerful and can make shit a lot easier. But you wouldn't try to build furniture from scratch with only a battery-powered drill.

You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.

[-] setsubyou@lemmy.world 4 points 13 hours ago* (last edited 13 hours ago)

What bothers me the most is the amount of tech debt it adds by using outdated approaches.

For example, recently I used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just for passing them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.

This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just only uses pandas in the first place.

The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.

It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.

I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily immediately realize how much of that extra LoC/time goes into outdated code and old fashioned verbosity. But it will eventually come back to bite us.

[-] andros_rex@lemmy.world 9 points 18 hours ago

So when the AI bubble bursts, will there be coding jobs available to clean up the mess?

[-] Alaknar@sopuli.xyz 6 points 16 hours ago

There already are. People all over LinkedIn are changing their titles to "AI Code Cleanup Specialist".

[-] Corridor8031@lemmy.ml 1 points 11 hours ago

I think maybe a good comparison is to written papers/assignments. It can generate those just like it can generate code.

But it is not about the words themselves; it is about the content.

[-] JackbyDev@programming.dev 6 points 17 hours ago

The people talking about AI coding the most at my job are architects and it drives me insane.

[-] Deflated0ne@lemmy.world 7 points 17 hours ago* (last edited 13 hours ago)

According to Deutsche Bank the AI bubble is ~~a~~ the pillar of our economy now.

So when it pops. I guess that's kinda apocalyptic.

Edit - strikethrough

[-] hroderic@lemmy.world 7 points 17 hours ago

Only for taxpayers ☝️

this post was submitted on 30 Sep 2025
1082 points (100.0% liked)
