submitted 2 weeks ago* (last edited 2 weeks ago) by General_Effort@lemmy.world to c/programmer_humor@programming.dev
[-] sun_is_ra@sh.itjust.works 28 points 2 weeks ago

Maybe he meant the code quality was so good it's like a human wrote it.

After all, if the code is good and follows all the best practices of the project, why reject it just because it was an AI who wrote it? That's racism against machines.

[-] endless_nameless@lemmy.world 89 points 2 weeks ago

It's not possible to be racist toward inanimate objects. Computers are not a race. LLMs are not people.

[-] Samsy@lemmy.ml 40 points 2 weeks ago

That was rude against my wife-chatbot. Apologize to her, here: https://...

[-] imsufferableninja@sh.itjust.works 4 points 2 weeks ago

More like http://localhost:8000/wifebot

[-] Samsy@lemmy.ml 3 points 2 weeks ago

Noo. Friends don't let friends alone with a generic Port 8000 (or 3000) wife. Go and find your > 60999.
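Playing along with the joke: 8000 and 3000 are the usual dev-server defaults, and 60999 is the upper end of Linux's default ephemeral port range (`net.ipv4.ip_local_port_range`), so "find your > 60999" means binding above it. A minimal sketch (the `wifebot` name and endpoint are just the joke, not a real service):

```python
# Serve a stub on a port above 60999, past the defaults (8000, 3000)
# and past the top of Linux's default ephemeral range.
import http.server
import random
import threading

def start_wifebot():
    """Bind a throwaway HTTP server to a port in 61000-65535."""
    port = random.randint(61000, 65535)  # beyond the ephemeral range
    server = http.server.ThreadingHTTPServer(
        ("127.0.0.1", port), http.server.SimpleHTTPRequestHandler
    )
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, port

server, port = start_wifebot()
print(f"http://localhost:{port}/wifebot")
server.shutdown()
```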

[-] Appoxo@lemmy.dbzer0.com 2 points 2 weeks ago

Stay-at-home bot?

[-] ThePantser@sh.itjust.works 4 points 2 weeks ago
[-] BremboTheFourth@piefed.ca 20 points 2 weeks ago* (last edited 2 weeks ago)

LLMs will never be people. Computers might be, one day in the very distant future. But literally every piece of the current AI hype train is just hype. LLMs could, maybe, at best, be a single piece of a much larger puzzle for bringing consciousness into being. But the "Just Add More Compute Bro!" mantra is just tech bros doing their market hype thing. It has as much chance of giving rise to consciousness as my PC has whenever I add another hard drive.

[-] obelisk_complex@piefed.ca 4 points 2 weeks ago* (last edited 2 weeks ago)

LLMs will never be people

Boy oh boy, you're not gonna like this one bit: https://www.npr.org/2014/07/28/335288388/when-did-companies-become-people-excavating-the-legal-evolution

(To be clear, I understand you think you covered this with "computers may be" but my point is different: the law is often dumb and you would be amazed at what politicians who don't understand tech - or get paid not to understand it - will pull off)

Edit: Downvotes from people who missed the point. You can't say "LLMs will never be people" because you simply can't guarantee your/our lawmakers won't be that stupid.

[-] lIlIlIlIlIlIl@lemmy.world 4 points 2 weeks ago

It’s possible to leverage the same human quality called “hate,” which underpins racism. It’s the same ugly human behavior. You can call it whatever you want; it’s still ugly.

[-] Gold_E_Lox@lemmy.dbzer0.com 6 points 2 weeks ago

eye of the holder or some shit

[-] zarkanian@sh.itjust.works 3 points 2 weeks ago

Humans have been hating software since the dawn of computing. Do you get upset when people say bad things about Windows? And if not, why is it different with LLMs?

[-] lIlIlIlIlIlIl@lemmy.world 1 points 2 weeks ago
[-] imsufferableninja@sh.itjust.works 1 points 2 weeks ago

We have a word for the concept you're thinking of. It's called bigotry. Racism is race-based bigotry. Anti-AI bigotry is reasonable and awesome, and is just called bigotry.

[-] zarkanian@sh.itjust.works 5 points 2 weeks ago* (last edited 2 weeks ago)

No, you can't have bigotry against software. At least, not currently.

Maybe in the future somebody will figure out how to make a sapient AI, like you see in science fiction, and then you can say that somebody is bigoted against it. We don't have sapient AI, though, so this is simply prejudice.

[-] lIlIlIlIlIlIl@lemmy.world 4 points 2 weeks ago

bigotry is reasonable

OK well thanks for the chat bye now

[-] lath@lemmy.world 52 points 2 weeks ago

If it's racism, it's also slavery. Can't have one without the other here.

[-] sun_is_ra@sh.itjust.works 17 points 2 weeks ago

I am sure that discussion will be taken a lot more seriously in the coming years

[-] markz@suppo.fi 45 points 2 weeks ago

One big reason people outright reject AI-generated code is that it shifts the work from the author to the reviewer. AI makes it easier to make low-effort commits that look good on the surface but are deeply flawed. So far, LLMs don't match the wisdom of an experienced software dev.

[-] bamboo 11 points 2 weeks ago

This is what happened with FFmpeg when Google was trying the same thing to promote their models. If the code is good and doesn't put unnecessary burden on the reviewer, then that's great. But when the patches are sloppy or the reviews are overwhelming, it doesn't help the project, it hinders it.

[-] Serinus@lemmy.world 3 points 2 weeks ago

It's almost like there should be a human in the loop to guide and review what the AI is doing.

The thing works a lot better when I give it smaller chunks of work that I know are possible. Works best when I know how to implement it myself and it just saves me from looking up all the syntax.

[-] sun_is_ra@sh.itjust.works 7 points 2 weeks ago

Totally agree, and it's the same problem with published scientific papers.

I just assume that since this code submission was done by Anthropic itself - probably to demonstrate how good their AI has become (I don't know the actual background to this story) - the FFmpeg team gave it more consideration than they would a random amateur.

[-] markz@suppo.fi 7 points 2 weeks ago

I think the band's name was a little bit different

[-] Chaphasilor@lemmy.ml 4 points 2 weeks ago

Don't listen to MJ Rathbun here

[-] SomethingBurger@jlai.lu 2 points 2 weeks ago

racism against machines

Based.

[-] Appoxo@lemmy.dbzer0.com 2 points 2 weeks ago

Rage against machines

this post was submitted on 08 Apr 2026
483 points (100.0% liked)