we are safe (discuss.tchncs.de)
top 50 comments
[-] xmunk@sh.itjust.works 198 points 10 months ago

ChatGPT is hilariously incompetent... but on a serious note, I still firmly reject tools like copilot outside demos and the like because they drastically reduce code quality for short term acceleration. That's a terrible trade-off in terms of cost.

[-] ToothlessFairy@lemmy.world 123 points 10 months ago

I enjoy using copilot, but it is not made to think for you. It's a better autocomplete, but don't ever let it do more than a line at once.

[-] EatYouWell@lemmy.world 63 points 10 months ago

Yup, AI is a tool, not a complete solution.

[-] gravitas_deficiency@sh.itjust.works 12 points 10 months ago

As a software engineer, the number of people I encounter in a given week who either refuse to or are incapable of understanding that distinction baffles and concerns me.

[-] takeda@lemmy.world 46 points 10 months ago

The problem I have with it is that all the time it saves me, I have to spend on reading the code. I probably spend more time on that, since once in a while the code it produces is broken in a subtle way.

I see some people swearing by it, which is the opposite of my experience. I suspect that if your coding consisted of copying code from Stack Overflow, then it really did improve your experience, since that process is now streamlined.

[-] stjobe@lemmy.world 56 points 10 months ago

Biggest problem with it is that it lies with the exact same confidence it tells the truth. Or, put another way, it's confidently incorrect as often as it is confidently correct - and there's no way to tell the difference unless you already know the answer.

[-] Swedneck@discuss.tchncs.de 21 points 10 months ago

It's kinda hilarious to me, because one of the FIRST things AI researchers did was get models to identify things and output answers together with a confidence for each potential ID, and now we've somehow regressed from that point.

[-] tryptaminev@feddit.de 23 points 10 months ago

Did we really regress from that?

I mean, giving a confidence for recognizing a certain object in a picture is relatively straightforward.

But LLMs put words together by how likely they are to belong together given your input (terribly oversimplified). The confidence behind that has no direct relation to how likely the resulting statements are to be true. I remember an example where someone made ChatGPT say that 2+2 equals 5 because his wife said so. ChatGPT was confident that something is right when the wife says it, simply because it considers those words likely to belong together.
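
(A minimal sketch of the distinction being drawn here; all labels, logits, and numbers below are invented purely for illustration and are not taken from any real model. The point: a classifier's softmax gives one confidence per label for a single input, while an LLM's softmax gives a probability per next token, which measures how plausible a continuation is given the context, not how true the resulting statement is.)

```python
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A classifier head ends in exactly this kind of distribution:
# one confidence per candidate label for the same input image.
labels = ["cat", "dog", "toaster"]
image_logits = [4.1, 1.3, -2.0]  # made-up scores from a hypothetical vision model
for label, p in zip(labels, softmax(image_logits)):
    print(f"image label {label!r}: {p:.1%}")  # "cat" comes out around 94%

# An LLM produces the same kind of distribution, but over next tokens,
# not over "is this statement true". The probability measures how likely
# a continuation is given the context, so a persuasive context ("my wife
# says...") can shift it without anything becoming more correct.
next_tokens = ["4", "5", "fish"]
token_logits = [6.0, 1.0, -3.0]  # made-up scores for the prompt "2 + 2 ="
for tok, p in zip(next_tokens, softmax(token_logits)):
    print(f"next token {tok!r}: {p:.1%}")
```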

[-] DudeDudenson@lemmings.world 39 points 10 months ago* (last edited 10 months ago)

they drastically reduce code quality for short term acceleration.

Oh boy, do I have news for you: that's basically the only thing middle managers care about, short term acceleration.

[-] Poggervania@kbin.social 37 points 10 months ago

But LinkedIn bros and corporate people are gonna gobble it up anyways because it has the right buzzwords (including “AI”) and they can squeeze more (low quality) work from devs to brag about how many things they (the corporate owners) are doing.

[-] lurch@sh.itjust.works 39 points 10 months ago

It's just a fad. Only a small bit of it will stay after the hype is gone. You know, like blockchain, AR, the metaverse, NFTs and whatever came before that. In a few years there will be another breakthrough with it and we'll hear about it again for a short while, but for now it's a one-trick pony.

[-] EatYouWell@lemmy.world 19 points 10 months ago

Yeah, they think it can turn a beginner dev into an advanced dev, but really it's more like having a team of beginner devs.

[-] PlexSheep@feddit.de 32 points 10 months ago* (last edited 10 months ago)

I'm still convinced that GitHub Copilot is actively violating copyleft licenses. If not in letter, then in spirit.

[-] TonyTonyChopper@mander.xyz 29 points 10 months ago

they drastically reduce ... quality for short term acceleration

Western society is built on this principle

[-] MajorHavoc@lemmy.world 167 points 10 months ago* (last edited 10 months ago)

I predict that, within the year, AI will be doing 100% of the development work that isn't total and utter bullshit pain-in-the-ass complexity, layered on obfuscations, composed of needlessly complex bullshit.

That's right, within a year, AI will be doing .001% of programming tasks.

[-] otp@sh.itjust.works 18 points 10 months ago

Can we just get it to attend meetings for us?

[-] oce@jlai.lu 13 points 10 months ago

Big companies will take 5 years just to get there.

[-] cupcakezealot 144 points 10 months ago

"look i registered my own domain name all by myself!"

the domain: "localhost"

[-] superduperenigma@lemmy.world 66 points 10 months ago

I'm an elite hacker and I grabbed your IP address from this post. It's 192.168.0.1 just so you know I'm not bluffing.

[-] Rambi@lemm.ee 67 points 10 months ago

Heheh, I'm DDoSing them right now. Unfortunately the computer I'm doing it from is having a few connection issues

[-] scarilog@lemmy.world 49 points 10 months ago

Haha punk it's actually 192.168.1.1. you dun goofed

[-] hglman@lemmy.ml 91 points 10 months ago

Engineering is about trust. In other, generally more formalized engineering disciplines, the actual job of an engineer is to provide confidence that something works. Software engineering may employ fewer people because the tools are better and make people much more productive, but until everyone else trusts the computer more, the job will exist.

If the world trusts AI over engineers then the fact that you don't have a job will be moot.

[-] Rodeo@lemmy.ca 12 points 10 months ago

People don't have anywhere near enough knowledge of how things work to make their choices based on trust. People aren't getting on the subway because they trust the engineers did a good job; they're doing it because it's what they can afford and they need to get to work.

Similarly, people aren't using Reddit or Adobe or choosing their car's firmware based on trust. People choose what is affordable and convenient.

[-] hglman@lemmy.ml 15 points 10 months ago

In civil engineering, public works are certified by an engineer; it's literally them saying "if this fails, I am at fault". The public is trusting the engineer to say it's safe.

[-] netburnr@lemmy.world 66 points 10 months ago

I just used Copilot for the first time. It made me a ton of call-to-action text and website page copy for various service pages I was creating for a home builder. It was surprisingly useful; of course I modified the output a bit, but overall it saved me a ton of time.

[-] Daxtron2@lemmy.ml 31 points 10 months ago

Copilot has cut my workload by about 40% freeing me up for personal projects

[-] uplusion23@lemm.ee 33 points 10 months ago

Copilot is only dangerous in the hands of people who couldn't program otherwise. I love it, it's helped a ton on tedious tasks and really is like a pair programmer

[-] Daxtron2@lemmy.ml 13 points 10 months ago

Yeah, it's perfect if you can distinguish between good and bad generations. Sometimes it tries to run on an empty text file in VS Code and just outputs an ingredients list lol

[-] Jerkface@lemmy.world 23 points 10 months ago

Copilot has cut my personal projects by about 40% freeing me up for work

[-] ook_the_librarian@lemmy.world 63 points 10 months ago

I think the correct response is "Wow. Has your mom seen it? Send her the link."

[-] c0mbatbag3l@lemmy.world 48 points 10 months ago

AI is only as good as the person using it, like literally any other tool in human existence.

It's meant to amplify what the professional can do, not replace them with a layman armed with an LLM.

[-] rockSlayer@lemmy.world 20 points 10 months ago* (last edited 10 months ago)

This AI thing will certainly replace my MD to HTML converter and definitely not misplace my CSS and JS headers

[-] StereoTrespasser@lemmy.world 20 points 10 months ago

Wow, there is a lot of pearl-clutching and gatekeeping ITT. It's delicious!

[-] PeriodicallyPedantic@lemmy.ca 18 points 10 months ago

Tbf I don't really wanna do ops work. I barely even wanna do DevOps. Let me just dev

[-] BautAufWasEuchAufbaut 13 points 10 months ago

Me too 😭
I don't want to "kubectl", I want to "make" 😭

[-] BrianTheeBiscuiteer@lemmy.world 18 points 10 months ago

You don't need to convince the devs, you need to convince the managers.

[-] Gabu@lemmy.world 17 points 10 months ago

On a more serious note, ChatGPT, ironically, does suck at webdev frontend. The one task that pretty much everyone agrees could be done by a monkey (given enough time) is the one it doesn't understand at all.

[-] Strawberry 16 points 10 months ago

The one task that pretty much everyone agrees could be done by a monkey

A phrase commonly uttered about web dev by mediocre programmers who spend 99% of their time writing the same copy-paste Spring Boot mid-tier code.

[-] 30p87@feddit.de 14 points 10 months ago

The only thing ChatGPT etc. is useful for, in any language, is getting ideas on how to solve a problem in an area you don't know anything about.

ChatGPT, how can I do xy in C++?
You can use the library ab, like ...

That's when I usually search for the library and check the docs to see whether it's actually possible to do it that way. And often, it's not.

[-] Littleborat@feddit.de 13 points 10 months ago

These morons are probably going to train AI wrong, so that's job security for the next 100 years.

[-] csm10495@sh.itjust.works 13 points 10 months ago

Don't forget that GPT-4 was getting dumber the more it learned from people.

this post was submitted on 20 Nov 2023
1748 points (100.0% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)
