politics
Welcome to the discussion of US Politics!
Rules:
- Post only links to articles; titles must fairly describe link contents. If your title differs from the site's, it should only add context or be more descriptive. Do not post entire articles in the body or in the comments. Links must be to the original source, not an aggregator like Google AMP, MSN, or Yahoo.
- Articles must be relevant to politics. Links must be to quality and original content. Articles should be worth reading. Clickbait, stub articles, and rehosted or stolen content are not allowed. Check your source for Reliability and Bias here.
- Be civil; no violations of the TOS. It's OK to say the subject of an article is behaving like a (pejorative, pejorative). It's NOT OK to say another USER is (pejorative). Strong language is fine, just not directed at other members. Engage in good faith and with respect! Accusing another user of being a bot or paid actor is also uncivil. Trolling is uncivil and is grounds for removal and/or a community ban.
- No memes, trolling, or low-effort comments. Reposts, misinformation, off-topic posts, trolling, and offensive content will be removed. If you see posts along these lines, do not engage: report them, block them, and live a happier life than they do. We see too many slapfights that boil down to "Mom! He's bugging me!" and "I'm not touching you!" Going forward, slapfights will result in removed comments and temp bans to cool off.
- Vote based on comment quality, not agreement. This community aims to foster discussion; please reward people for putting effort into articulating their viewpoint, even if you disagree with it.
- No hate speech, slurs, celebrating death, advocating violence, or abusive language. This will result in a ban. Usernames containing racist or otherwise inappropriate slurs will be banned without warning.
We ask that users report any comment or post that violates the rules and use critical thinking when reading, posting, or commenting. Users who post off-topic spam, advocate violence, have multiple comments or posts removed, weaponize reports, or violate the code of conduct will be banned.
All posts and comments will be reviewed on a case-by-case basis. This means that some content that violates the rules may be allowed, while other content that does not violate the rules may be removed. The moderators retain the right to remove any content and ban users.
That's all the rules!
Civic Links
• Congressional Awards Program
• Library of Congress Legislative Resources
• U.S. House of Representatives
Partnered Communities:
• News
Reported as "AI Slop Post"
but 1) we don't have a rule against that,
and 2) OP clearly noted they used Co-Pilot to generate it; they aren't trying to pass it off as their own.
I'm actually OK with this. Obviously we'll remove AI generated ARTICLES that get posted, same as we'd remove videos and such, but in a comment? Clearly noted as AI? I think I'm OK with that.
If y'all WANT a rule about it, hit me up. I'll bring it up with the other mods and admins.
I've got three arguments for you on why you should make a rule against LLM comments, even those publicly marked as AI. And I'm going to refer to AI as LLM because large language models are what we are dealing with here.
First, LLMs aren't a reliable source of information, especially for recent events. They regurgitate training data based on weights calibrated during training. Those weights produce results that can look plausible for the topic, but numbers in particular are often simply wrong. For recent events, they will lack the relevant data because it won't have been in the data set they were trained on. So until that data is added, the LLMs are answering questions about something they don't know, for lack of a better phrasing. These are commonly known limitations of the LLMs we are discussing.
If people start using LLMs to argue, then comment sections are going to fill up with pages of made-up LLM garbage. LLMs will generate more misinformation than anyone can keep up with debunking, especially when misinformation could do the most damage, like in the weeks leading up to the special election this November 4th in California.
I find it unlikely that all of the statistics the LLM listed, without sources, are accurate. But regardless, if a user were to respond by feeding that comment into another LLM, it's not likely the LLM would keep those numbers consistent. These errors would compound the longer the discussion went on between two LLMs.
At best, this all wastes people's time and Lemmy becomes an extension of the LLM misinformation machine. At worst, it becomes an attack vector: bad actors fill up comment sections with LLM discussions that promote one viewpoint and bury the rest. Knowing the comments are LLM-generated doesn't solve these problems on its own.
Second, we shouldn't want to automate thinking. Tools are supposed to save time while retaining agency. My laptop saves me the time of having to send you a letter in the mail and having to wait for the response. My laptop doesn't deny me agency when it does this. I get to decide what I value and how that is communicated to you. The LLM saved OP's time, if all OP wanted was text that looks correct at a glance, but it removed OP's agency to think.
Facts and data, purportedly accurate, are assembled into a structure to deliver a central point, but none of that is done with the agency of OP. It's not the OP's thoughts or values being delivered to any of us. It's not even a position held for the sake of a debate. This is the LLM regurgitating the position it received in the prompt in the affirmative, because that's what the LLMs we have access to do. Like shouting into a cave and getting the echo back out.
We aren't getting what we want faster with LLM content; we are being denied it. The LLM takes away our ability to have a discussion with each other. Anyone using an LLM to think for them is, by definition, not participating in the discussion. No one can have a conversation, argument, or debate with this OP because, despite OP having commented, OP didn't write it. For lack of a better analogy, I might as well have a discussion with a parrot.
What are we doing on this website if we are all going to roll out our LLMs and have them talk to each other for us? We can all open two windows, position them side by side, and copy and paste prompts back and forth without needing a decentralized social media website as the middleman. The goal of social media and Lemmy is to talk to other people.
Third, do you really want to volunteer to moderate LLM content? ChatGPT prose gets repetitive and it can never come up with anything new. I would not want to be stuck reading that all day.
I can definitely see the argument. OTOH, if someone actually owns up to it and says something along the lines of "I dunno, so I asked ChatGPT and it says..."
I think the admission/disclosure model is fine, AND it actually opens up discussion for "OK, here's why ChatGPT is wrong...", which is a healthy discussion to have.
But I can definitely bring it up with the group and see what people think!
The issue is the scale. One comment can be fact-checked in under an hour. Thousands, not so much.
Also, it's not purely about accuracy. I want to be having discussions with other humans. Not software.
Thanks for bringing this up to the group, I appreciate it! edit: typo
Scale is always a problem, and if someone is using it to spam, we'd ban it for spam.
I see a LOT of generative spam posts, those get removed with a quickness, but it's because of the spam, not because it's generated.
Discussion is open now, so far it's leaning on "hey as long as they disclose it..." which still leaves it open to remove undisclosed generated comments.
But then you have the trap of "Well, how do you prove it if they don't disclose it?" 🤔 There really is no LLM detector yet.
Bots could be used to spam LLM comments, but users can effectively act as a manual bot with an LLM assisting them.
Unless the prompter goes out of their way to obfuscate the text manually, which sort of defeats the purpose, LLM comments tend to be very samey. The generated text would stand out if multiple users were using the same or even similar prompts. And OP's stands out even without the admission.
edit: to clarify, I mean stand out to the human eye; human mods would have to be the ones removing the comments
As you see... Automod has issues from time to time. LOL.