top 50 comments
[-] homesweethomeMrL@lemmy.world 83 points 1 month ago

in response to Bender pointing out that ChatGPT and its competitors simply encode relationships between words and have no concept of referent or meaning, which is a devastating critique of what the technology actually does, the absolute best response he can muster for his work is "yeah, but humans don't do anything more complicated than that". I mean, speak for yourself Sam: the rest of us have some concept of semiotics, and we can do things like identify anagrams or count the number of letters in a word, which requires a level of recursivity that's beyond what ChatGPT can muster.

Boom Shanka (emphasis added)
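(For contrast, the letter-counting and anagram checks the quote mentions are one-liners in ordinary code — a throwaway Python sketch, with "strawberry" purely as an illustrative example:)

```python
from collections import Counter

def letter_count(word: str, letter: str) -> int:
    # "how many r's are in strawberry" — the kind of question tokenized models fumble
    return word.lower().count(letter.lower())

def is_anagram(a: str, b: str) -> bool:
    # two words are anagrams if they contain exactly the same letters
    return Counter(a.lower()) == Counter(b.lower())

print(letter_count("strawberry", "r"))  # 3
print(is_anagram("listen", "silent"))   # True
```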

[-] Soyweiser@awful.systems 52 points 1 month ago* (last edited 1 month ago)

'i am a stochastic parrot and so are u'

reminds me of

"In his desperation to have produced reality through computation, he denigrates actual reality by equating it to computation"

(from this review/analysis of the Devs series). A pattern annoyingly common among the LLM AI fans.

E: Wow, I did not like the reactionary great man theory spin this article took there. I don't think replacing the Altmans with Yarvins would be much of a solution there. (At least that is how the NRx people would read this article). Quite a lot of the 'we need more well-read renaissance men' people turned into hardcore Trump supporters (and racists, and sexists and...). (Note this edit is after I already got 45 upvotes).

[-] YourNetworkIsHaunted@awful.systems 28 points 1 month ago

First and foremost, the dunce is incapable of valuing knowledge that they don't personally understand or agree with. If they don't know something, then that thing clearly isn't worth knowing.

There is a corollary to this that I've seen as well, and it dovetails with the way so many of these guys get obsessed with IQ. Anything they can't immediately understand must be nonsense not worth knowing. Anything they can understand (or think they understand) that you don't is clearly an arcane secret of the universe that they can only grasp because of their innate superiority. I think that this is the combination that explains how so many of these dunces believe themselves to be the ubermensch who must exercise authoritarian power over the rest of us for the good of everyone.

See also the commenter(s) on this thread who insist that their lack of reading comprehension is evidence that they're clearly correct and are in no way part of the problem.

[-] dgerard@awful.systems 24 points 1 month ago

someone sent out the batpromptfondler signal and the mods are in shooting gallery mode

please refrain from commenting unaccordingly

[-] self@awful.systems 18 points 1 month ago

b-but David, they’ve been so reasonable and here we are getting emotional about the fucking garbage technology they’ve come here to shove down our throats alongside a heaping serving of capitalist brainrot from the same types of self-described geniuses who gave us OKRs

[-] 9point6@lemmy.world 23 points 1 month ago

After all, there's almost nothing that ChatGPT is actually useful for.

It's takes like this that just discredit the rest of the text.

You can dislike LLM AI for its environmental impact or questionable interpretation of fair use when it comes to intellectual property. But pretending it's actually useless just makes someone seem not dissimilar to a Drama YouTuber jumping on whatever the latest on-trend thing to hate is.

[-] spankmonkey@lemmy.world 44 points 1 month ago

"Almost nothing" is not the same as "actually useless". The former is saying the applications are limited, which is true.

LLMs are fine for fictional interactions, as in things that appear to be real but aren't. They suck at anything that involves being reliably factual, which is most things, including all the stupid places LLMs and other AI are being jammed into despite being consistently wrong, which tech bros love to call hallucinations.

They have LIMITED applications, but are being implemented as useful for everything.

[-] Amoeba_Girl@awful.systems 29 points 1 month ago* (last edited 1 month ago)

To be honest, as someone who's very interested in computer generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains because they're too good at reproducing clichés to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. Probably a homegrown neural network would have better results.
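(If you've never played with one: a word-level Markov chain generator is only a couple dozen lines — a rough Python sketch, with corpus.txt standing in for whatever text you'd train it on:)

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    # map each `order`-word window to the words that follow it in the corpus
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    # start from a random key and keep sampling one of its observed followers
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:
            break
        out.append(random.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)

corpus = open("corpus.txt").read()  # any pile of chat logs, poetry, whatever
print(generate(build_chain(corpus)))
```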

[-] dgerard@awful.systems 19 points 1 month ago

GPT-2 was peak LLM because it was bad enough to be interesting, it was all downhill from there

[-] Amoeba_Girl@awful.systems 13 points 1 month ago

Absolutely, every single one of these tools has got less interesting as they refine it so it can only output the platonic ideal of kitsch.

[-] bitwolf@sh.itjust.works 16 points 1 month ago

Agreed, our chat server ran a Markov chain bot for fun.

In comparison to ChatGPT on a 2nd server I frequent, it had much funnier and more random responses.

ChatGPT tends to just agree with whatever it chose to respond to.

As for real world use. ChatGPT 90% of the time produces the wrong answer. I've enjoyed Circuit AI however. While it also produces incorrect responses, it shares its sources so I can more easily get the right answer.

All I really want from a chatbot is a gremlin that finds the hard things to Google on my behalf.

[-] mii@awful.systems 29 points 1 month ago

Let's be real here: when people hear the word AI or LLM they don't think of any of the applications of ML that you might slap the label "potentially useful" on (notwithstanding the fact that many of them are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.

[-] 9point6@lemmy.world 14 points 1 month ago

I'm saying ChatGPT is not useless.

I'm a senior software engineer and I make use of it several times a week, either directly or via things built on top of it. Yes, you can't trust it to be perfect, but I can't trust a junior engineer to be perfect either—code review is something I've done since long before AI and will continue to do long into the future.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower. If it was useless this would not be the case.

[-] froztbyte@awful.systems 44 points 1 month ago* (last edited 1 month ago)

I’m a senior software engineer

ah, a señor software engineer. excusé-moi monsoir, let me back up and try once more to respect your opinion

uh, wait:

but I can’t trust a junior engineer to be perfect either

whoops no, sorry, can't do it.

jesus fuck I hope the poor bastards that are under you find some other place real soon, you sound like a godawful leader

and the engineers I know who are still avoiding it work noticeably slower

yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!

fucking christ almighty. step away from the keyboard. go become a logger instead. your opinions (and/or the shit you're saying) are a big part of everything that's wrong with the industry.

[-] froztbyte@awful.systems 24 points 1 month ago* (last edited 1 month ago)

and the engineers I know who are still avoiding it work noticeably slower

yep yep! as we all know, velocity is all that matters! crank that handle, produce those features! the factory must flow!!

and you fucking know what? it's not even just me being a snide motherfucker, this rant is literally fucking supported by data:

The survey found that 75.9% of respondents (of roughly 3,000* people surveyed) are relying on AI for at least part of their job responsibilities, with code writing, summarizing information, code explanation, code optimization, and documentation taking the top five types of tasks that rely on AI assistance. Furthermore, 75% of respondents reported productivity gains from using AI.

...

As we just discussed in the above findings, roughly 75% of people report using AI as part of their jobs and report that AI makes them more productive.

And yet, in this same survey we get these findings:

if AI adoption increases by 25%, time spent doing valuable work is estimated to decrease 2.6%

if AI adoption increases by 25%, estimated throughput delivery is expected to decrease by 1.5%

if AI adoption increases by 25%, estimated delivery stability is expected to decrease by 7.2%

and that's a report sponsored and managed right from the fucking lying cloud company, no less. a report they sponsor, run, manage, and publish is openly admitting this shit. that is how much this shit doesn't fucking work the way you sell it to be doing.

but no, we should trust your driveby bullshit. motherfucker.

[-] raspberriesareyummy@lemmy.world 17 points 1 month ago

Thank you for saving me the breath to shit on that person's attitude :)

[-] froztbyte@awful.systems 19 points 1 month ago

yw

these arseslugs are so fucking tedious, and for almost 2 decades they've been dragging everything and everyone around them down to their level instead of finding some spine and getting better

[-] raspberriesareyummy@lemmy.world 19 points 1 month ago

word. When I hear someone say "I'm a SW developer and LLM xy helps me in my work" I always have to stop myself from being socially unacceptably open about my thoughts on their skillset.

[-] froztbyte@awful.systems 17 points 1 month ago

and that’s the pernicious bit: it’s not just their skillset, it also goes right to their fucking respect for their team. “I don’t care about just barfing some shit into the codebase, and I don’t think my team will mind either!”

utter goddamn clownery

[-] swlabr@awful.systems 14 points 1 month ago

Please, señor software engineer was my father. Call me Bob.

[-] mii@awful.systems 27 points 1 month ago* (last edited 1 month ago)

I’m a senior software engineer

Nice, me too, and whenever some tech-brained C-suite bozo tries to mansplain to me why LLMs will make me more efficient, I smile, nod politely, and move on, because at this point I don't think I can make the case that pasting AI slop into prod is objectively a worse idea than pasting Stack Overflow answers into prod.

At the end of the day, if I want to insert a snippet (which I don't have to double-check, mind you), auto-format my code, or organize my imports, which are all things I might use ChatGPT for if I didn't mind all the other baggage that comes along with it, Emacs (or Vim, if you swing that way) does this just fine and has done so for over 20 years.

I empirically work quicker with it than without and the engineers I know who are still avoiding it work noticeably slower.

If LOC/min or a similar metric is used to measure efficiency at your company, I am genuinely sorry.

[-] BlueMonday1984@awful.systems 20 points 1 month ago

I’m a senior software engineer

Good. Thanks for telling us your opinion's worthless.

[-] sailor_sega_saturn@awful.systems 19 points 1 month ago* (last edited 1 month ago)

~~Senior software engineer~~ programmer here. I have had to tell coworkers "don't trust anything chat-gpt tells you about text encoding" after it made something up about text encoding.

[-] 000@lemmy.dbzer0.com 11 points 1 month ago

Oh my god, an actual senior software engineer????? Amidst all of us mortals??

[-] Architeuthis@awful.systems 29 points 1 month ago

It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly comb its output instead of doing original work, and don't mind putting your name on low-quality derivative slop in the first place.

[-] dgerard@awful.systems 17 points 1 month ago

actually you know what? with all the motte and baileying, you can take a month off. bye!

[-] homesweethomeMrL@lemmy.world 21 points 1 month ago

MRW 38 of the 39 comments have almost nothing to do with the article

this post was submitted on 17 Dec 2024
217 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community
