[-] homesweethomeMrL@lemmy.world 82 points 6 days ago

in response to Bender pointing out that ChatGPT and its competitors simply encode relationships between words and have no concept of referent or meaning, which is a devastating critique of what the technology actually does, the absolute best response he can muster for his work is "yeah, but humans don't do anything more complicated than that". I mean, speak for yourself Sam: the rest of us have some concept of semiotics, and we can do things like identify anagrams or count the number of letters in a word, which requires a level of recursivity that's beyond what ChatGPT can muster.

Boom Shanka (emphasis added)

First and foremost, the dunce is incapable of valuing knowledge that they don't personally understand or agree with. If they don't know something, then that thing clearly isn't worth knowing.

There is a corollary to this that I've seen as well, and it dovetails with the way so many of these guys get obsessed with IQ. Anything they can't immediately understand must be nonsense not worth knowing. Anything they can understand (or think they understand) that you don't is clearly an arcane secret of the universe that they can only grasp because of their innate superiority. I think that this is the combination that explains how so many of these dunces believe themselves to be the ubermensch who must exercise authoritarian power over the rest of us for the good of everyone.

See also the commenter(s) on this thread who insist that their lack of reading comprehension is evidence that they're clearly correct and are in no way part of the problem.

[-] WeirdWriter@caneandable.social 10 points 5 days ago

@YourNetworkIsHaunted I honestly was wondering why they were obsessing over IQ so much, but this comment actually made it all click

[-] Soyweiser@awful.systems 52 points 6 days ago* (last edited 5 days ago)

'i am a stochastic parrot and so are u'

reminds me of

"In his desperation to have produced reality through computation, he denigrates actual reality by equating it to computation"

(from this review/analysis of the devs series). A pattern annoyingly common among LLM AI fans.

E: Wow, I did not like the reactionary great man theory spin this article took there. I don't think replacing the Altmans with Yarvins would be much of a solution (at least, that is how the NRx people would read this article). Quite a lot of the 'we need more well-read renaissance men' people turned into hardcore Trump supporters (and racists, and sexists, and...). (Note this edit is after I already got 45 upvotes).

I'm glad I'm not the only one who picked up on that turn. The implication that what we need is an actual Bismarck instead of the wannabes we keep getting makes sense (I too would prefer if the levers of power were wielded by someone halfway competent who listens to and cares about the people around them), but there are also some pretty strong reasons why we went from Bismarck and Lincoln to Merkel and Trump, and some pretty strong reasons why the road there led through Hitler and Wilson.

Along with my comments elsewhere about how the dunce believes their area of hypothetical expertise to be some kind of arcane gift revealed to the worthy, I feel like I should clarify that not only do the current crop of dolts not have it, but there is no secret wisdom beyond the ken of normal men. That is a lie told by the powerful to stop you from questioning their position; it's the "because I'm your Dad and I said so" for adults. Learning things is hard, and hard means expensive, so people with wealth and power have more opportunities to study things, but that lack of opportunity is not the same as lacking the ability to understand things and to contribute to a truly democratic process.

[-] dgerard@awful.systems 24 points 6 days ago

someone sent out the batpromptfondler signal and the mods are in shooting gallery mode

please refrain from commenting unaccordingly

[-] self@awful.systems 18 points 6 days ago

b-but David, they’ve been so reasonable and here we are getting emotional about the fucking garbage technology they’ve come here to shove down our throats alongside a heaping serving of capitalist brainrot from the same types of self-described geniuses who gave us OKRs

[-] homesweethomeMrL@lemmy.world 21 points 6 days ago

MRW 38 of the 39 comments have almost nothing to do with the article

[-] 9point6@lemmy.world 23 points 6 days ago

After all, there's almost nothing that ChatGPT is actually useful for.

It's takes like this that just discredit the rest of the text.

You can dislike LLM AI for its environmental impact or questionable interpretation of fair use when it comes to intellectual property. But pretending it's actually useless just makes someone seem like they aren't dissimilar to a Drama YouTuber jumping in on whatever the latest on-trend thing to hate is.

[-] spankmonkey@lemmy.world 44 points 6 days ago

"Almost nothing" is not the same as "actually useless". The former is saying the applications are limited, which is true.

LLMs are fine for fictional interactions, as in things that appear to be real but aren't. They suck at anything that involves being reliably factual, which is most things, including all the stupid places LLMs and other AI are being jammed into despite being consistently wrong, which tech bros love to call hallucinations.

They have LIMITED applications, but are being deployed as if they were useful for everything.

[-] Amoeba_Girl@awful.systems 29 points 6 days ago* (last edited 6 days ago)

To be honest, as someone who's very interested in computer-generated text and poetry and the like, I find generic LLMs far less interesting than more traditional Markov chains, because they're too good at reproducing clichés, to the exclusion of anything surprising or whimsical. So I don't think they're very good for the unfactual either. A homegrown neural network would probably give better results.

[-] hrrrngh@awful.systems 6 points 4 days ago* (last edited 4 days ago)

I'm in the same boat. Markov chains are a lot of fun, but LLMs are way too formulaic. It's one of those things where AI bros will go, "Look, it's so good at poetry!!" but they have no taste and can't even tell that it sucks; LLMs just generate ABAB poems and getting anything else is like pulling teeth. It's a little more garbled and broken, but the output from a MCG is a lot more interesting in my experience. Interesting content that's a little rough around the edges always wins over smooth, featureless AI slop in my book.


slight tangent: I was interested in seeing how they'd work for open-ended text adventures a few years ago (back around GPT-2, when AI Dungeon launched), but the mystique did not last very long. Their output is awfully formulaic, and that has not changed at all in the years since. (Of course, the tech-optimist goodthink way of framing this is "small LLMs are really good at creative writing for their size!")

I don't think most people can even tell the difference between a lot of these models. There was a snake oil LLM (more snake oil than usual) called Reflection 70b, and people could not tell it was a placebo. They thought it was higher quality and invented reasons why that had to be true.

Orange site example:

Like other comments, I was also initially surprised. But I think the gains are both real and easy to understand where the improvements are coming from. [ . . . ]

I had a similar idea, interesting to see that it actually works. [ . . . ]

Reddit:

I think that's cool, if you use a regular system prompt it behaves like regular llama-70b. (??!!!)

It's the first time I've used a local model and did [not] just say wow this is neat, or that was impressive, but rather, wow, this is finally good enough for business settings (at least for my needs). I'm very excited to keep pushing on it. Llama 3.1 failed miserably, as did any other model I tried.

For storytelling or creative writing, I would rather have the more interesting broken-English output of a Markov chain generator, or maybe a tarot deck or a D100 table. Markov chains are also genuinely great for random name generators. I've actually laughed at Markov chain output with friends when we throw a group chat into one and see what comes out. I can't imagine ever getting something like that from an LLM.
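(Editor's aside: the "throw a group chat into one" trick above is easy to reproduce. Here is a minimal word-level Markov chain text generator in Python; this is a generic sketch, not any particular bot from the thread, and the function names are my own.)

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word n-gram to the list of words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Random-walk the chain from a random starting n-gram."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break  # dead end: the last n-gram never continues in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)
```

Feed it a chat log as one big string and raise `order` for more coherent (but less unhinged) output; `order=1` gives the maximally garbled, maximally funny version.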

[-] dgerard@awful.systems 18 points 6 days ago

GPT-2 was peak LLM because it was bad enough to be interesting, it was all downhill from there

[-] bitwolf@sh.itjust.works 16 points 6 days ago

Agreed, our chat server ran a Markov chain bot for fun.

Compared to ChatGPT on a second server I frequent, it had much funnier and more random responses.

ChatGPT tends to just agree with whatever it chose to respond to.

As for real world use. ChatGPT 90% of the time produces the wrong answer. I've enjoyed Circuit AI however. While it also produces incorrect responses, it shares its sources so I can more easily get the right answer.

All I really want from a chatbot is a gremlin that finds the hard things to Google on my behalf.

[-] mii@awful.systems 29 points 6 days ago

Let's be real here: when people hear the words AI or LLM, they don't think of any of the applications of ML that you might slap the label "potentially useful" on (notwithstanding the fact that many of them are also in an all-that-glitters-is-not-gold kind of situation). The first thing that comes to mind for almost everyone is shitty autoplag like ChatGPT, which is also what the author explicitly mentions.

[-] Architeuthis@awful.systems 29 points 6 days ago

It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don't mind putting your name on low quality derivative slop in the first place.

[-] dgerard@awful.systems 17 points 6 days ago

actually you know what? with all the motte and baileying, you can take a month off. bye!

this post was submitted on 17 Dec 2024
216 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago