From the "flipping through LessWrong for entertainment" department:
> What effect does LLM use have on the quality of people's thinking / knowledge?
> - I'd expect a large positive effect from just making people more informed / enabling them to interpret things correctly / pointing out fallacies etc.
And people believe this ... why? I mean, shouldn't the default assumption about anything anyone in AI says be that it's a lie?