[-] Katzelle3@lemmy.world 177 points 1 week ago

Almost as if it was made to simulate human output but without the ability to scrutinize itself.

[-] mushroommunk@lemmy.today 85 points 1 week ago* (last edited 1 week ago)

To be fair most humans don't scrutinize themselves either.

(Fuck AI though. Planet burning trash)

[-] atomicbocks@sh.itjust.works 27 points 1 week ago

The number of times I have received an un-proofread two-sentence email is too damn high.

[-] galaxy_nova@lemmy.world 10 points 1 week ago

And then the follow-up email, because they didn't actually finish a complete thought.

[-] PetteriPano@lemmy.world 130 points 1 week ago

It's like having a lightning-fast junior developer at your disposal. If you're vague, he'll go on shitty side quests. If you overspecify, he'll get overwhelmed. You need to break tasks down into manageable chunks, and you'll need to ask follow-up questions about every corner case.

A real junior developer will have improved a lot in a year. Your AI agent won't have improved.

[-] mcv@lemmy.zip 32 points 1 week ago

This is the real thing. You can absolutely get good code out of AI, but it requires a lot of hand-holding. It helps me speed up some tasks, especially boring ones, but I don't see it ever replacing me. It makes far too many errors, and requires me to point them out and steer it toward the solution.

They are great at churning out massive amounts of code. They're also great at completely missing the point. And the massive amount of code needs to be checked and reviewed. Personally I'd rather write the code and have the AI review it. That's a much more pleasant way to work, and that way it actually enhances quality.

[-] Grimy@lemmy.world 10 points 1 week ago

They are improving, and probably faster than junior devs. The models we had two years ago would struggle with a simple blackjack app. I don't think the ceiling has been hit.
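
For scale, a hypothetical Python sketch of roughly what "a simple blackjack app" amounts to (all names invented for illustration):

```python
import random

# 2-10 at face value; jack/queen/king as 10; ace starts as 11.
RANKS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]

def hand_value(cards):
    """Score a hand, downgrading aces from 11 to 1 while the hand busts."""
    value, aces = sum(cards), cards.count(11)
    while value > 21 and aces:
        value -= 10
        aces -= 1
    return value

def deal(deck, n=2):
    """Pop n cards off the top of the deck."""
    return [deck.pop() for _ in range(n)]

deck = RANKS * 4  # four suits
random.shuffle(deck)
player, dealer = deal(deck), deal(deck)
print(f"player: {player} -> {hand_value(player)}")
print(f"dealer: {dealer} -> {hand_value(dealer)}")
```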

[-] lividweasel@lemmy.world 67 points 1 week ago

Just a few trillion more dollars, bro. We’re almost there. Bro, if you give up a few showers, the AI datacenter will be able to work perfectly.

Bro.

[-] Grimy@lemmy.world 13 points 1 week ago* (last edited 1 week ago)

The cost of the improvement doesn't change the fact that it's happening. I guess we could all play pretend instead if it makes you feel better about it. Don't worry bro, the models are getting dumber!

[-] underisk@lemmy.ml 23 points 1 week ago

Don’t worry bro, the models are getting dumber!

That would be pretty impressive, given that they already lack any intelligence at all.

[-] UnderpantsWeevil@lemmy.world 64 points 1 week ago

A computer is a machine that makes human errors at the speed of electricity.

[-] MountingSuspicion@reddthat.com 29 points 1 week ago

I think one of the big issues is that it often makes nonhuman errors. Sometimes I forget a semicolon or make a typo, but I'm well equipped to handle that; in fact, most tooling can already catch that kind of issue. AI is more likely to generate code that's hard to follow and therefore harder to check, which makes debugging more difficult.
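
A hypothetical Python illustration of the two error classes (the function and its bug are invented for the example):

```python
# Human-shaped error: a stray typo. Any parser or linter flags it before it runs.
#   def add(a, b)    <- missing colon: SyntaxError, caught immediately.

# AI-shaped error: syntactically clean, reviews as plausible, and only
# misbehaves quietly on the tail of the input.
def moving_average(xs, window):
    # Bug: the range should be len(xs) - window + 1; as written, the final
    # windows are truncated slices still divided by the full window size.
    return [sum(xs[i:i + window]) / window for i in range(len(xs))]

print(moving_average([1, 2, 3, 4, 5], 2))  # last value 2.5 is silently wrong
```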

[-] MyMindIsLikeAnOcean@piefed.world 51 points 1 week ago

No shit. 

I actually believed somebody when they told me it was great at writing code, and asked it to write a very simple Lua mod for me. It made several errors and ended up wasting my time because I had to rewrite it.

[-] morto@piefed.social 18 points 1 week ago

In a postgraduate class, everyone was praising AI, calling it nicknames and even their friend (yes, friend). One day the professor and a colleague were discussing some code as I approached, and they started their routine of bullying me for being dumb for not using AI. Then I looked at his code and asked to test his core algorithm, which he had converted from Fortran code and "enhanced". I ran it with some test data, compared it to the original code, and the result was different! They blindly trusted AI code that deviated from their theoretical methodology, and they're publishing papers with those results!

Even after I showed them the differing result, they weren't convinced of anything, and they still bully me for not using AI. Seriously, this has become some sort of cult at this point. People are becoming irrational. If people at other universities are behaving the same way and publishing like this, I'm seriously concerned for the future of science and humanity itself. Maybe we should archive everything published up to 2022, to leave as a base for the survivors of our downfall.

[-] Deestan@lemmy.world 40 points 1 week ago

I've been coding for a while. I made an honest, eager attempt at building a real, functioning thing with all code written by AI: a Breakout clone using SDL2, with music.

The game should look good, play well, have cool effects, and be balanced. It should have an attract screen, scoring, a win state, and a lose state.

I also required the code to be maintainable. Meaning I should be able to look at every single line and understand it enough to defend its existence.

I did make it work. And honestly Claude did better than expected. The game ran well and was fun.

But: The process was shit.

I spent two days and several hundred dollars babysitting the AI, to get something I could have done in one day, including learning SDL2.

Everything that turned out well did so because I brought years of skill to the table and could see when Claude was coding itself into a corner, and I told it to break the code up into modules, collate globals, remove duplication, pull out abstractions, etc. I had to detect all of that and instruct it on how to fix it. Until I did, it kept adding and re-adding bugs, because it had made so much shittily structured code that it was confusing itself.
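
(A hypothetical Python sketch of what the "collate globals" cleanup looks like, with invented names; the actual project used SDL2, so this is only the shape of the fix:)

```python
from dataclasses import dataclass

# Before: tuning values scattered as loose globals across agent-written modules.
#   PADDLE_SPEED = 6
#   BALL_SPEED = 4
#   LIVES = 3

# After: one frozen config object that gets passed around explicitly.
@dataclass(frozen=True)
class GameConfig:
    paddle_speed: int = 6
    ball_speed: int = 4
    lives: int = 3
    screen_size: tuple[int, int] = (800, 600)

def new_game(config: GameConfig = GameConfig()):
    """Everything reads tuning values from config instead of module globals."""
    return {"lives": config.lives, "score": 0}

print(new_game())
```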

TL;DR: An LLM can write maintainable code if given the full, constant attention of a skilled coder, at 40% of that coder's speed.

[-] thundermoose@lemmy.world 19 points 1 week ago

It depends on the subject area and your workflow. I am not an AI fanboy by any stretch of the imagination, but I have found the chatbot interface to be a better substitute for the "search for how to do X with library/language Y" loop. Even though it's wrong a lot, it gives me a better starting place faster than reading through years-old SO posts. Being able to talk to your search interface is great.

The agentic stuff is also really good when the subject is something that has been done a million times over. Most web UI areas are so well trodden that JS devs have already invented a thousand frameworks to do it. I'm not a UI dev, so being able to give the agent a prompt like, "make a configuration UI with a sidebar that uses the graphql API specified here" is quite nice.

In my experience, though, AI is trash at anything it hasn't been trained on. Do anything niche or domain-specific, and it feels like flipping a coin with a bash script: it just throws shit at the wall and runs tests until the tests pass (or it sneakily changes the tests, because the error stack trace repeatedly points at the same test line as the problem).

[-] Deestan@lemmy.world 9 points 1 week ago

Yeah, what you say makes sense to me. Having it make a "wrong start" in something new is useful: it gives you a lot of the typical structure, introduces the terminology, and maybe gets something sorta moving that you can see working before you mess with it.

[-] pleaseletmein@lemmy.zip 35 points 1 week ago

Water makes things wetter than fire does.

[-] Ledivin@lemmy.world 31 points 1 week ago* (last edited 1 week ago)

Anyone blindly having AI write their code is an absolute moron.

Anyone with decent experience (5-10 years, maybe 10+?) can absolutely fucking skyrocket their output if they properly set up their environments and treat their agents as junior devs instead of competent programmers. You shouldn't trust generated code any more than you'd trust someone fresh out of college, but the agent produces code in seconds instead of weeks.

I have tripled my output while producing more secure code (based on my security audits), safer code (based on code coverage and security audits), and less error-prone code (based on production logs and our unchanged QA process).

Now, the ethical issues and environmental issues, I 100% can get behind. And I have no idea what companies are going to do in 10 years when they have to replace people like me and haven't been hiring or training replacements. But the productivity and quality debates are absolutely ridiculous, as long as a strong dev is behind the wheel and has been trained to use the tools.

[-] skibidi@lemmy.world 28 points 1 week ago* (last edited 1 week ago)

Consider: the facts

People are very bad at judging their own productivity, and AI consistently makes devs feel like they are working faster, while in fact slowing them down.

I've experienced it myself - it feels fucking great to prompt a skeleton and have something brand new up and running in under an hour. The good chemicals come flooding in because I'm doing something new and interesting.

Then I need to take a scalpel to a hundred scattered lines to get CI to pass. Then I need to write tests that actually test functionality. Then I start extending things and realize the implementation is too rigid and I need to change the architecture.

It is at this point that I admit to myself that going in intentionally with a plan and building it myself, the slow way, would have saved all that pain and probably gotten the final product shipped sooner, even if the prototype shipped later.

[-] myfunnyaccountname@lemmy.zip 27 points 1 week ago

Did they compare it to the code from that outsourced company that provided the lowest bid? My company hasn't used AI to write code yet. They outsource/offshore. The code is held together with hopes and dreams. They remove features that exist, only to have to release a hotfix to add them back. I wish I was making that up.

[-] SpicyTaint@lemmy.world 20 points 1 week ago

...is this supposed to be news?

[-] azvasKvklenko@sh.itjust.works 20 points 1 week ago

Oh, so my sceptical, uneducated guesses about AI are mostly spot on.

As a computer science experiment, making a program that can pass the Turing test is a monumental step forward.

However, as a productivity tool it is useless for practically everything it gets implemented in. It is incapable of performing the very basic "sanity check" that is so important in programming.

[-] robobrain@programming.dev 9 points 1 week ago

The Turing test says more about the side administering the test than about the side trying to pass it.

Just because something can mimic text well enough to trick someone doesn't mean it is capable of anything more than that.

[-] WanderingThoughts@europe.pub 18 points 1 week ago

And even worse, it doesn't realise it and can't fix the errors.

[-] kalkulat@lemmy.world 16 points 1 week ago* (last edited 1 week ago)

I'd never ask a friggin machine to do coding for me, that's MY blast.

That said, I've had good luck asking GPT specific questions about obscure features of JavaScript and of various browsers. It'll often feed me a sample script using a feature as it explains it ... a lot more helpful than many of the wordy websites like MDN ... saving me shit-tons of time that I'd otherwise spend bouncing around a half-dozen 'help' pages.

[-] termaxima@slrpnk.net 15 points 1 week ago

ChatGPT is great at generating a one line example use of a function. I would never trust its output any further than that.

[-] diabetic_porcupine@lemmy.world 11 points 1 week ago

So much this. People who say AI can't write code are just using it wrong. You need to break things down into bite-sized problems and just let it autocomplete a few lines at a time. It increases your productivity by like 200%. And don't get me started on not having to search through a bunch of garbage Google results to find the documentation I'm actually looking for.

[-] termaxima@slrpnk.net 2 points 6 days ago

Personally, I only do the "not searching through garbage Google results" part (especially now that search is clogged up with AI articles that don't even answer the question).

ChatGPT is great for that; I never have to spend 15 minutes searching for what the function that does X is called.

I really recommend setting the answers to be as brief and terse as possible. The default settings of a sycophant that generates a full article for every question are super annoying when you're doing actual work.

[-] nutsack@lemmy.dbzer0.com 15 points 1 week ago* (last edited 1 week ago)

This is expected, isn't it? You shit-fart code from your ass as fast as you can, and then whoever buys out the company has to rewrite it. Or they fire everyone to increase the theoretical margins and sell it again immediately.

[-] Tigeroovy@lemmy.ca 14 points 1 week ago

And then it takes human coders way longer to figure out what's wrong and fix it than it would have taken to just write it themselves.

[-] RampantParanoia2365@lemmy.world 12 points 1 week ago

I'm not a programmer, but I've dabbled with Blender for 3D modeling, and it uses node trees for a lot of different things, which is pretty much a programming GUI. I googled how to make a shader, and the AI gave me instructions. About half of them were complete nonsense, but I did make my shader.
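
(For the curious: the same kind of node tree can also be built with Blender's Python API; a minimal sketch, assuming it runs inside Blender's bundled interpreter:)

```python
import bpy

# Build a simple red Principled BSDF material entirely from script,
# mirroring what the node-editor GUI steps would produce.
mat = bpy.data.materials.new(name="DemoShader")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

output = nodes.new("ShaderNodeOutputMaterial")
bsdf = nodes.new("ShaderNodeBsdfPrincipled")
bsdf.inputs["Base Color"].default_value = (0.8, 0.2, 0.2, 1.0)  # RGBA
links.new(bsdf.outputs["BSDF"], output.inputs["Surface"])
```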

[-] fox2263@lemmy.world 10 points 1 week ago

You need to babysit it and double-check everything it does. You can't just let it loose and trust its output.

[-] kent_eh@lemmy.ca 10 points 1 week ago* (last edited 1 week ago)

AI-generated code produces 1.7x more issues than human code

As expected

[-] Affidavit@lemmy.world 9 points 1 week ago

I really, really, want to stop seeing posts about:

  • Musk
  • Trump
  • Israel
  • Microsoft
  • AI

I swear these are the only things that the entire Lemmy world wants to talk about.

Maybe I should just go back to Reddit... Fuck Spez, but at least there is some variety.

[-] themagzuz 11 points 1 week ago

your frontend of choice probably has some option to hide posts containing specific keywords

[-] Minizarbi@jlai.lu 9 points 1 week ago

Not my code, though. It contains a shit-ton of bugs. When I'm able to write some, of course.

[-] jj4211@lemmy.world 12 points 1 week ago

Nah, AI codegen bugs are weird. As someone used to reviewing human code, even from wildly incompetent people, AI messes up things that my mind never even thought needed to be double-checked.

[-] antihumanitarian@lemmy.world 9 points 1 week ago

So this article is basically a puff piece for CodeRabbit, a company that sells AI code-review tooling/services. They studied 470 merge/pull requests: 320 AI and a 150-PR human control group. They don't specify which projects, which model, or when, at least not without signing up for their full "white paper". For all that's said, this could be GPT-4 from 2024.

I'm a professional developer, and currently, by volume, I'm confident the latest models (Claude 4.5 Opus, GPT 5.2, Gemini 3 Pro) can write better, cleaner code than me. They still need high-level and architectural guidance, and sometimes overt intervention, but on average they can do it better, faster, and cheaper than I can.

A lot of articles and forum posts like this feel like cope. I'm not happy about it, but pretending it's not happening isn't gonna keep me employed.

Source of the article: https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report

[-] iglou@programming.dev 11 points 1 week ago

I am a professional software engineer, and my experience is the complete opposite. It does things faster and cheaper, yes, but also noticeably worse, and having to proofread the output, fix it, and refactor ends up taking more time than I would have spent writing it myself.

[-] MonkderVierte@lemmy.zip 8 points 1 week ago

This is news?
