Unpopular Opinion
Welcome to the Unpopular Opinion community!
How voting works:
Vote the opposite of the norm.
If you agree that the opinion is unpopular, give it an upvote. If it's something that's widely accepted, give it a downvote.
Guidelines:
Tag your post, if possible (not required)
- If your post is a "General" unpopular opinion, start the subject with [GENERAL].
- If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].
Rules:
1. NO POLITICS
Politics is everywhere. Let's keep this community focused on [GENERAL]- and [LEMMY]-specific topics, and keep politics out of it.
2. Be civil.
Disagreements happen, but that doesn't give anyone the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Shitposts and memes are allowed but...
Only until they prove to be a problem. They can and will be removed at moderator discretion.
5. No trolling.
This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community for one or more days to touch grass. Repeat offenses will result in a permanent ban.
6. Defend your opinion
This is a bit of a mix of rules 4 and 5 to help foster higher quality posts. You are expected to defend your unpopular opinion in the post body. We don't expect a whole manifesto (please, no manifestos), but you should at least provide some details as to why you hold the position you do.
Instance-wide rules always apply. https://legal.lemmy.world/tos/
Generally I agree that it can be an incredible tool for learning, but a big problem is that one needs a baseline ability to think critically, or to recognize when new information may be flawed. That often means having at least a little existing knowledge about the subject in question. For younger people with less education and life experience, that can be really difficult, if not impossible.
The 10% of information that's incorrect could be really critical or contextually important. Also (anecdotally) it's often way more than 10%, or that 10% is distributed such that 9 out of 10 prompts are flawless, and the 10th is 100% nonsense.
And then you have people out there creating AI chat bots with the sole intention of spreading disinformation, or more commonly, with the intention of keeping people engaged or even emotionally dependent on their service — information accuracy often isn't the priority.
AI-generated content is populating the internet at an ever-increasing rate, and that's where most LLM training data currently comes from, so it's hard to see how the accuracy problem improves going forward.
All that said, like most things, when AI is used in moderation by responsible people, it's a fantastic tool. Unfortunately, the people in charge are incentivized to be unscrupulous and irresponsible, and we live in a decadent society that doesn't exactly promote moderation, to massively understate things...
(yeah, I used an em-dash, you wanna fight bro? 😘)
Good point. As an adult who grew up long before LLMs and social media, I feel it's an incredible tool; I just don't trust it fully. Critical thinking and fact-checking are reflexes at this point, though I must admit I don't always fact-check unless something seems shocking or unexpected to me. The accuracy problem is something I doubt they can fix in the short term.