this post was submitted on 06 Oct 2025
919 points (100.0% liked)
Political Humor
I must confess I don't understand your anecdote here. Pure chance might give you p < 0.05 when your sample size is low, but that disappears as the sample size grows larger.
I don't want to dig out the math, because god knows the figures are hard enough to scribble freehand, but as you add more samples, the difference between your null hypothesis and your sample average needed to establish p < 0.05 shrinks. To use some made-up numbers: if a sample of 100 people has a difference of 5 units from the null hypothesis and a p-value of 0.1, a sample of 10,000 with a difference of 0.1 units might have a p-value of 0.02.

The essential wisdom of the quote (which I can't seem to find now) is that if you drag in enough samples, you can always find a statistically significant difference, because your null hypothesis will never be exactly right, so even the smallest of differences will generate a low p-value. That's why, whenever you see a p-value, you should also see an effect size estimate nearby, such as Cohen's d.
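To make the point concrete, here's a minimal sketch with made-up numbers (a one-sample z-test under a normal approximation, with an assumed standard deviation of 1): the same tiny 0.1-unit difference is nowhere near significant at n = 100, but blows past p < 0.05 at n = 10,000, while the effect size (Cohen's d) stays identically small.

```python
import math

def z_test_p(diff, sd, n):
    """Two-sided p-value for a one-sample z-test (normal approximation).

    diff: observed difference between sample mean and the null value.
    """
    z = diff / (sd / math.sqrt(n))
    # Standard normal CDF via the error function; doubled for a two-sided test.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def cohens_d(diff, sd):
    """Effect size: the standardized mean difference. Note: no n anywhere."""
    return diff / sd

# Same 0.1-unit difference from the null, sd = 1, at two sample sizes:
for n in (100, 10_000):
    print(f"n={n}: p={z_test_p(0.1, 1.0, n):.4f}, d={cohens_d(0.1, 1.0)}")
```

The p-value collapses as n grows because the standard error shrinks like 1/sqrt(n), but Cohen's d never changes, which is exactly why the two numbers should be reported together.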
Here's a paper outlining some of this in much better words than I can manage.
Thank you for the link - that's a very interesting paper. I've taken Statistics twice (for two different engineering degrees) and I'll still need to reread that a few times to "get it"!