top 44 comments
[-] kayzeekayzee 217 points 1 week ago

Tech guy invents the concept of giving instructions

[-] ripcord@lemmy.world 115 points 1 week ago

With clear requirements and outcome expected

Why did no one think of this before

[-] wtckt@lemm.ee 24 points 1 week ago

Who does that? What if they do everything right and it doesn't work and then it turns out it's my fault?

[-] bjoern_tantau@swg-empire.de 120 points 1 week ago

It would be nice if it was possible to describe perfectly what a program is supposed to do.

[-] Venator@lemmy.nz 3 points 6 days ago

Yeah but that's a lot of writing. Much less effort to get the plagiarism machine to write it instead.

[-] orvorn@slrpnk.net 81 points 1 week ago

Someone should invent some kind of database of syntax, like a... code

[-] heavydust@sh.itjust.works 39 points 1 week ago

But it would need to be reliable with a syntax, like some kind of grammar.

[-] peoplebeproblems@midwest.social 26 points 1 week ago

That's great, but then how do we know that the grammar matches what we want to do - with some sort of test?

[-] Natanael@infosec.pub 21 points 1 week ago

How do we know what to test? Maybe with some kind of specification?

[-] maiskanzler@feddit.nl 2 points 1 week ago* (last edited 1 week ago)

People could give things a name and write down what type of thing it is.

[-] monkeyslikebananas2@lemmy.world 5 points 1 week ago
[-] Knock_Knock_Lemmy_In@lemmy.world 11 points 1 week ago

We don't want anything amateur. It has to be a professional codegrammar.

[-] spankmonkey@lemmy.world 19 points 1 week ago* (last edited 1 week ago)

What, like some kind of design requirements?

Heresy!

[-] bjoern_tantau@swg-empire.de 9 points 1 week ago

Design requirements are too ambiguous.

[-] spankmonkey@lemmy.world 10 points 1 week ago

Design requirements are what it should do, not how it does it.

[-] heavydust@sh.itjust.works 5 points 1 week ago

That's why you must negotiate or clarify what is being asked. Once it has been accepted, it is not ambiguous anymore as long as you respect it.

[-] psud@aussie.zone 1 points 1 week ago

I'm a systems analyst, or, in agile terminology, "a designer", since I'm responsible for "design artifacts"

Our designs are usually unambiguous

[-] drew_belloc@programming.dev 15 points 1 week ago
[-] rayquetzalcoatl@lemmy.world 8 points 1 week ago

I think our man meant in terms of real-world situations

[-] heavydust@sh.itjust.works 2 points 1 week ago

And NOT yet another front page written in ReactJS.

[-] peoplebeproblems@midwest.social 1 points 1 week ago

Oh, well, that's good, because I have a ton of people who work with Angular and not React.

[-] xthexder@l.sw0.com 5 points 1 week ago* (last edited 1 week ago)

This still isn't specific enough to pin down exactly what the computer will do. There are an infinite number of Python programs that could print Hello World in the terminal.
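For instance, here's a completely made-up pair, two separate toy programs that behave identically:

```python
# Two of infinitely many programs with the same observable output.

# Program 1: the obvious one
print("Hello World")

# Program 2: same output, built from character codes
print("".join(chr(c) for c in [72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100]))
```

And you can keep generating variants like that forever, which is the point: the output alone doesn't determine the program.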

[-] drew_belloc@programming.dev 2 points 1 week ago

I knew it, I should've asked for assembly

[-] peoplebeproblems@midwest.social 1 points 1 week ago

Ha

None of us would have jobs

[-] MentalEdge@sopuli.xyz 9 points 1 week ago

I think the joke is that that is literally what coding is.

[-] Lime66@lemmy.world 72 points 1 week ago
[-] undefinedValue@programming.dev 29 points 1 week ago

OP just chatting with themselves so they can screenshot it?

[-] mexicancartel@lemmy.dbzer0.com 16 points 1 week ago

That is some Telegram group, and both messages show on the left with profile icons (which got cropped). The person who took the screenshot sent the last message, which shows double ticks.

[-] andrybak@startrek.website 4 points 1 week ago

In the desktop client the positions of bubbles also depend on the width of the window.

[-] bleistift2@sopuli.xyz 4 points 1 week ago

Great attention to detail!

[-] Talia@feddit.it 2 points 1 week ago

That's just a fake conversation in general; look at the timestamps on the interlocutor's messages. Several minutes to type a complete sentence?

[-] StellarSt0rm@lemmy.world 2 points 1 week ago

Hey, I can take a few hours to reply sometimes :c

[-] pufferfisherpowder@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

Could be a group chat but we all know they're a twat

[-] Appoxo@lemmy.dbzer0.com 8 points 1 week ago

I wrote a shell script like this (IT admin, not a dev) for private use.
The prompt took me like 5 hours of rewriting the instructions.
Don't even know yet if it works (lol)

[-] kn0wmad1c@programming.dev 7 points 1 week ago

Calling GPT a neural network is pretty generous. It's more like a Markov chain

[-] brian@programming.dev 35 points 1 week ago

It legitimately is a neural network; I'm not sure what you're trying to say here. https://en.wikipedia.org/wiki/Generative_pre-trained_transformer

[-] kn0wmad1c@programming.dev 6 points 1 week ago

You're right, my bad.

[-] peoplebeproblems@midwest.social 3 points 1 week ago

The core language model isn't a neural network? I agree that the full application is more Markov-chainy, but I had no idea the LLM wasn't.

Now I'm wondering if there are any models that are actual neural networks

[-] kn0wmad1c@programming.dev 2 points 1 week ago

I'm not an expert. I'd just expect a neural network to follow the core principle of self-improvement. GPT is fundamentally unable to do this. The way it "learns" is closer to the same tech behind predictive text in your phone.

It's the reason why it can't understand why telling you to put glue on pizza is a bad idea.
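For a sense of what that phone-keyboard tech looks like, here's a toy sketch (the corpus and all names are invented for illustration) of a bigram predictor that only knows which word tends to follow which:

```python
import random
from collections import defaultdict

# Invented toy corpus: the model only ever sees word-to-word transitions.
corpus = "put glue on paper put glue on wood put cheese on pizza".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

word, output = "put", ["put"]
for _ in range(4):
    choices = next_words.get(word)
    if not choices:  # dead end: this word was never seen mid-sentence
        break
    word = random.choice(choices)
    output.append(word)

# Might print "put glue on pizza": statistically plausible, semantically clueless.
print(" ".join(output))
```

The scale and architecture of an LLM are wildly different, but the "pick a likely next token" framing is the same.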

[-] lime@feddit.nu 6 points 1 week ago

The main thing is that the system end-users interact with is static: it's a snapshot of all the weights of the "neurons" at a particular point in the training process. You could keep training from that snapshot after every conversation, but nobody does that live because the raw result wouldn't be useful; it needs to be cleaned up first. So it learns nothing from you, but it could.
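A minimal PyTorch-flavored sketch of that split (the checkpoint filename and the toy network are stand-ins, not anything real):

```python
import torch

# A toy two-layer net standing in for the real model.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 8),
)
# model.load_state_dict(torch.load("snapshot.pt"))  # hypothetical frozen snapshot

# Serving: weights are read, never written.
model.eval()
with torch.no_grad():  # no gradients, so nothing can update from this input
    reply = model(torch.randn(1, 8))

# Continuing training *from* the snapshot is a separate, offline job:
# optimizer = torch.optim.AdamW(model.parameters())
# ...training loop, clean-up, then a new snapshot gets published...
```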

[-] frezik@midwest.social 2 points 1 week ago* (last edited 1 week ago)

"Improvement" is an open ended term. Would having longer or shorter toes be beneficial? Depends on the evolutionary environment.

ChatGPT does have a feedback loop. Every prompt you give it affects its internal state. That's why it won't give you the same response next time you give the same prompt. Will it be better or worse? Depends on what you want.

[-] frezik@midwest.social 3 points 1 week ago

I've played with Markov chains. They never produce serious results. ChatGPT is right just often enough that people think it's right all the time.

[-] TrickDacy@lemmy.world 2 points 1 week ago

Neural network: for when saying LLM doesn't sound smart enough

[-] raina@sopuli.xyz 9 points 1 week ago

It's just what it was called in the nineties.

this post was submitted on 01 Apr 2025
723 points (100.0% liked)

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.
