Yep. Also, famously, a statistics/psychology professor was once quoted as saying that the only reason you ever fail to find a statistically significant difference is that we're "too damn lazy to drag enough people in." The larger the sample size, the smaller the difference needed to hit that 5% mark. So if you aren't "lazy," you can just add more people to your study and be more likely to find a 'significant difference' that you can then publish.
My statistics professor would rerun experiments that hit the 5% (p < 0.05) mark until the p-value came in below 0.005 or even 0.001, just to waggle his dick at others, claiming his findings were far more reliable than theirs.
I must confess to not understanding your anecdote here. Pure chance might give you a p < 0.05 when your sample size is low - but that risk shrinks as the sample size grows larger.
I don't want to dig out the actual math, because god knows the formulas are hard enough to scribble freehand, but as you add more samples, the difference from the null hypothesis needed to reach p < 0.05 shrinks. Made-up numbers to illustrate: if a sample of 100 people differs from the null hypothesis by 5 units and gives p = 0.1, a sample of 10,000 differing by only 0.1 units might give p = 0.02. The essential wisdom in the quote (which I can't seem to find now) is that if you dragged in enough samples, you would eventually find a statistically significant difference, because your null hypothesis is never exactly true, so even the smallest difference will generate a low p-value with enough data. That's why, whenever you see a p-value, you should expect an effect size estimate nearby, such as Cohen's d.
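To make that concrete, here's a quick simulation sketch in Python (my own toy example with made-up numbers, not from the paper below): hold a small, fixed true effect constant, grow the sample, and watch the p-value collapse toward zero while Cohen's d stays tiny.

```python
# Toy illustration: a fixed, tiny true effect (0.1 SD) becomes
# "statistically significant" purely by adding more samples,
# while the effect size estimate barely moves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 0.1  # true mean difference from the null, in units of SD
null_mean = 0.0

for n in (100, 1_000, 10_000, 100_000):
    # Draw n observations centered 0.1 units above the null value.
    sample = rng.normal(loc=true_effect, scale=1.0, size=n)

    # One-sample t-test against the null hypothesis mean.
    t_stat, p_value = stats.ttest_1samp(sample, popmean=null_mean)

    # Cohen's d: mean difference divided by the sample standard deviation.
    cohens_d = (sample.mean() - null_mean) / sample.std(ddof=1)

    print(f"n={n:>7,}  p = {p_value:.3g}   Cohen's d = {cohens_d:+.3f}")
```

With these settings, the p-value typically drops below 0.05 somewhere around n = 1,000 and becomes vanishingly small after that, even though Cohen's d hovers near 0.1 the whole time: the "significance" is coming from the sample size, not from the effect getting any bigger.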
Here's a paper outlining some of this in much better words than I have.
Thank you for the link - that's a very interesting paper. I've taken Statistics twice (two different engineering degrees) and still need to reread that a few times to "get it"!