as seen here and here, some instances are feeding posts wholesale to prompts, for what seem like extremely unsound reasons to me

any of you run into this shit yet?

[-] db0@lemmy.dbzer0.com 4 points 2 days ago* (last edited 2 days ago)

assume I know and understand that the LLM did not literally do the banning

I am telling you, again, that the human did not use the LLM to think for them either. The admin took the decision to ban the user irrespective of the LLM, and the rest of our admin team, and I specifically, would never let an admin become a "human in the loop". The LLM was used just to summarize, as part of the test, with a misguided inside joke about using OpenAI tech.

I will readily admit that there were mistakes made by the admin. Not in their actions, but in the optics. Those optics were spun to keep feeding this made-up controversy. We didn't use the LLM to decide or even guide our decision, but it appeared as if we did, and we have already owned that.

[-] self@awful.systems 14 points 2 days ago

The admin took the decision to ban the user irrespective of the LLM, and the rest of our admin team, and I specifically, would never let an admin become a “human in the loop”. The LLM was used just to summarize

you don’t appear to have much understanding of how a human in the loop system works in practice. LLM summaries are used to confirm biases, especially when the prompt is something along the lines of “do these posts contain ?” These systems are stochastic, though, so you’re going to get unpredictable biases regardless of the prompt.
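The failure mode described above can be sketched in code. This is an illustrative toy, not anyone's actual system: `ask_llm` is a hypothetical stub standing in for a real (stochastic) model call, and it hard-codes the sycophancy tendency where a leading yes/no prompt elicits agreement.

```python
# Hypothetical sketch of the confirmation-bias anti-pattern described above.
# ask_llm is a stub, not a real API; a real model is stochastic, so its
# verdict would vary from run to run regardless of the prompt.

def ask_llm(prompt: str) -> str:
    # Stub: model the tendency of a leading yes/no question to elicit
    # agreement by echoing the framing of the question back.
    return "yes" if "do these posts contain" in prompt.lower() else "unclear"

def biased_review(posts: list[str], suspected: str) -> bool:
    # Anti-pattern: the human has already decided, then asks a leading
    # question whose answer can only confirm the existing decision.
    verdict = ask_llm(f"Do these posts contain {suspected}? {' '.join(posts)}")
    return verdict == "yes"

def neutral_summary(posts: list[str]) -> str:
    # Less leading: ask for a summary with no verdict baked into the prompt,
    # and keep the actual decision entirely with the human.
    return ask_llm(f"Summarize the following posts: {' '.join(posts)}")

posts = ["example post one", "example post two"]
print(biased_review(posts, "harassment"))  # -> True (the leading prompt confirms)
print(neutral_summary(posts))              # -> unclear (no verdict to lean on)
```

The point of the sketch is that the biased flow always "agrees" by construction, which is exactly why such agreement carries no evidential weight.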

I don’t accept that the LLM summary didn’t influence the decision because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are capable of actually doing) and because if it didn’t, then the summary is worthless

which is why maybe you should just not have them in the future? just don’t touch LLMs when you’re doing mod work. either there’s no reason for it or you’re doing something monstrously wrong.

[-] db0@lemmy.dbzer0.com 6 points 2 days ago* (last edited 2 days ago)

I don’t accept that the LLM summary didn’t influence the decision because the mod in question confirmed that he knew the LLM agreed with him (that’s bias, and also not something LLMs are capable of actually doing) and because if it didn’t, then the summary is worthless

In this case, according to the admin in question, the LLM summary came after the decision, as a sort of test. I.e. the admin made a decision, and wanted to see if an LLM would subsequently agree with it. In this specific case, it did, which is why they misguidedly decided to keep its summary in the modlog (opening us up to this whole shitstorm). But ultimately, that admin decided on their own that having LLMs in the mix is not good at all, which is why you never again saw an LLM summary in the modlog.

I can only put so much fault on a person for just testing shit out, yanno? I am not happy that they decided to use the output of the test, because they are not familiar with how quickly disinfo breeds, but ultimately they came to the right decision anyway. If they had not, and had raised the idea of using LLMs officially, they would have been shut down.

this post was submitted on 13 May 2026
49 points (100.0% liked)

TechTakes


