submitted 3 days ago by not_IO to c/opensource@lemmy.ml
[-] BlameTheAntifa@lemmy.world 17 points 2 days ago* (last edited 2 days ago)

Studies continue to show that AI routinely generates unsafe code, and even human code review often fails to catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.

[-] yucandu@lemmy.world 4 points 2 days ago

Alright, well, I use Claude in my code, and just from feeding an LLM a PDF of the module's datasheet, it produced a better library than anything that was publicly available on GitHub.

I'm all for not blindly trusting AI: give it limits, review and test everything it makes. But flat-out rejecting any AI-generated code as "compromised" feels reactionary to me.

[-] BlameTheAntifa@lemmy.world 7 points 2 days ago* (last edited 2 days ago)
[-] yucandu@lemmy.world 2 points 1 day ago

I understand the problems, but I don't think they amount to something as simple and close-minded as "all LLM-generated code is bad and evil", unless thinking critically takes too much time and energy, I guess? Some people just have to make blanket decisions because it's easier for them.

this post was submitted on 26 Feb 2026
322 points (100.0% liked)
