Why Are P-values Uniformly Distributed in Hypothesis Testing?

  • Thread starter: alan2
  • Tags: Distributed

Summary:
P-values in hypothesis testing are uniformly distributed when the test statistic has a continuous null distribution, because a p-value is the probability of observing a test statistic at least as extreme as the one calculated. One way to see this: generate a uniform random number, apply the inverse cumulative distribution function (CDF) of the null distribution to obtain a value of the test statistic, and note that applying the CDF to that value returns the original uniform number, which is the (left-tail) p-value. This uniformity need not hold for discrete distributions or for certain acceptance/rejection regions. The discussion emphasizes the need for an intuitive explanation of the fact, especially for audiences outside statistics.
alan2
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed. I was talking to a social scientist and got a blank stare. I couldn't come up with anything except the proof. Thanks.
 
tnich
alan2 said:
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed. I was talking to a social scientist and got a blank stare. I couldn't come up with anything except the proof. Thanks.
If you wanted to generate random numbers from whatever the null distribution is, you would first generate a uniform random number and then apply the inverse cdf of the null distribution to get a random value. That value would be a (one-sided) p-value.
 
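This construction can be sketched in a few lines (an editor's illustration, not from the thread; it assumes a standard normal null distribution via Python's `statistics.NormalDist`):

```python
import random
from statistics import NormalDist

null = NormalDist(0, 1)   # assumed null distribution for illustration

u = random.random()       # uniform draw on (0, 1)
t = null.inv_cdf(u)       # inverse CDF turns it into a draw of the test statistic
p_left = null.cdf(t)      # left-tail p-value of that draw

# cdf undoes inv_cdf, so the p-value equals the original uniform number
# (up to floating-point error) and is therefore uniformly distributed.
assert abs(p_left - u) < 1e-9
```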
alan2 said:
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed.

That won't be true if the test statistic has a discrete distribution.
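With a discrete test statistic the CDF jumps, so a p-value can take only finitely many values. A small sketch (editor's illustration, assuming a Binomial(5, 0.5) null distribution):

```python
from math import comb

n, p = 5, 0.5  # assumed Binomial(5, 0.5) null distribution

# P(T = k) for k = 0..n
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# Left-tail p-values P(T <= k): the only values a p-value can take
cdf = []
total = 0.0
for prob in pmf:
    total += prob
    cdf.append(total)

print(cdf)
# → [0.03125, 0.1875, 0.5, 0.8125, 0.96875, 1.0]
```

Only six p-values are possible, with unequal gaps between them, so their distribution under the null cannot be uniform on (0, 1); in general for discrete statistics P(p ≤ α) ≤ α rather than equality.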
tnich said:
If you wanted to generate random numbers from whatever the null distribution is, you would first generate a uniform random number and then apply the inverse cdf of the null distribution to get a random value. That value would be a (one-sided) p-value.

That value would be a value ##t_0## of the test statistic. The p-value corresponding to ##t_0## would be (for a left tail test) the cdf evaluated at ##t_0## so you get back the original random number that you chose from a uniform distribution.

It looks like we're ok for a left tail test from a continuous distribution. Are things really going to work out for other types of acceptance/rejection regions?
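One way to probe that question empirically (an editor's simulation sketch, assuming a standard normal null): simulate many draws, compute left-tail and two-sided p-values, and check whether each decile holds about 10% of them.

```python
import random
from statistics import NormalDist

random.seed(0)
null = NormalDist(0, 1)  # assumed null distribution
draws = [null.inv_cdf(random.random()) for _ in range(100_000)]

p_left = [null.cdf(t) for t in draws]                # left-tail test
p_two = [2 * (1 - null.cdf(abs(t))) for t in draws]  # two-sided, symmetric null

def decile_counts(ps):
    """Count how many p-values land in each of ten equal-width bins."""
    counts = [0] * 10
    for p in ps:
        counts[min(int(p * 10), 9)] += 1
    return counts

# Each bin should hold roughly 10_000 p-values if they are uniform.
print(decile_counts(p_left))
print(decile_counts(p_two))
```

For the two-sided p-value of a continuous symmetric null, ##2(1 - F(|T|))## is again uniform, and the simulation's roughly equal bin counts are consistent with that.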
 