[-] mii@awful.systems 29 points 10 months ago

Let's be real here: when people hear the word AI or LLM, they don't think of any of the applications of ML that you might slap the label "potentially useful" on (notwithstanding the fact that many of those are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.

[-] 9point6@lemmy.world 14 points 10 months ago

I'm saying ChatGPT is not useless.

I'm a senior software engineer and I make use of it several times a week, either directly or via things built on top of it. Yes, you can't trust it to be perfect, but I can't trust a junior engineer to be perfect either; code review is something I did long before AI and will continue to do long into the future.

I empirically work quicker with it than without, and the engineers I know who are still avoiding it work noticeably slower. If it were useless, this would not be the case.

[-] pipes@sh.itjust.works 3 points 10 months ago

In this and other use cases I'd call it a pretty effective search engine: instead of scrolling through Stack Exchange after clicking between Google ads, you get the cleaned-up example code you needed. Not a chat with any intelligence, though.

[-] froztbyte@awful.systems 25 points 10 months ago

"despite the many people who have shown time and time and time again that it definitely does not do fine detail well and will often present shit that just 10000% was not in the source material, I still believe that it is right all the time and gives me perfectly clean code. it is them, not I, that are the rubes"

[-] Soyweiser@awful.systems 16 points 10 months ago

The problem with stuff like this is not knowing when you don't know. People who hadn't read the books SSC Scott was reviewing didn't know he had missed the point (or not read the book at all) till people pointed it out in the comments. But the reviews stay up.

Anyway, this stuff always feels like a huge motte-and-bailey, where we go from "it has some uses" to "it has some uses if you are a domain expert who checks the output diligently" and back to "some general use."

[-] V0ldek@awful.systems 9 points 10 months ago

A lot of the "I'm a senior engineer and it's useful" people seem to assume that they're just so fucking good that they'll obviously know when the machine lies to them, so it's fine. Which is, one, hubris, and two, why the fuck are you even using it if you already have to be omniscient to verify the output??

[-] blakestacey@awful.systems 7 points 10 months ago

"If you don't know the subject, you can't tell if the summary is good" is a basic lesson that so many people refuse to learn.

[-] pipes@sh.itjust.works 2 points 10 months ago

Ahah, I'm totally with you. I just personally know people who love it because they have never learned how to use a search engine. And these generalist generative AIs are basically trained on gobbled-up internet content, while also generating so many dangerous mistakes; I've read enough horror stories.

I'm in science and I'm not interested in ChatGPT; I wouldn't trust it with a pancake recipe. Even if it were useful to me, I wouldn't trust the vendor lock-in or the enshittification that's gonna come after I get dependent on a tool in the cloud.

A local LLM on cheap or widely available hardware with reproducible input/output? Then I'm interested.
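For what it's worth, something close to that reproducibility is achievable today with llama.cpp: greedy decoding (temperature 0) with a fixed seed on a local GGUF model gives repeatable output for the same input on the same build and hardware. A minimal sketch, assuming llama.cpp is already built and the model path is a placeholder (exact determinism can still vary across versions and GPU backends):

```shell
# Assumes llama.cpp is built and a quantized GGUF model is downloaded locally.
# --temp 0 forces greedy decoding (no sampling randomness); --seed pins any
# remaining RNG use, so the same prompt repeats the same output on this setup.
./llama-cli \
  -m ./models/model-q4_k_m.gguf \
  -p "Write a pancake recipe." \
  -n 256 \
  --temp 0 \
  --seed 42
```

Running the same command twice should then produce identical text, which is exactly the kind of auditable behavior the cloud chatbots don't offer.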

this post was submitted on 17 Dec 2024
219 points (100.0% liked)

TechTakes
