Why Are P-values Uniformly Distributed in Hypothesis Testing?

AI Thread Summary
P-values in hypothesis testing are uniformly distributed under the null hypothesis when the test statistic has a continuous null distribution, because a p-value is the null CDF (or survival function) evaluated at the observed statistic. Generating a uniform random number and applying the inverse cumulative distribution function (CDF) of the null distribution produces a value of the test statistic; applying the CDF to that value recovers the original uniform number, which is the left-tail p-value. This uniformity can fail for discrete test statistics and requires care for other acceptance/rejection regions. The discussion emphasizes the need for an intuitive, non-mathematical explanation, especially for audiences outside statistics. Overall, the uniform distribution of p-values depends on the nature of the test statistic and the underlying null distribution.
alan2
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed. I was talking to a social scientist and got a blank stare. I couldn't come up with anything except the proof. Thanks.
 
tnich
alan2 said:
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed. I was talking to a social scientist and got a blank stare. I couldn't come up with anything except the proof. Thanks.
If you wanted to generate random numbers from whatever the null distribution is, you would first generate a uniform random number and then apply the inverse cdf of the null distribution to get a random value. That value would be a (one-sided) p-value.
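For concreteness, here is a minimal sketch of that inverse-CDF construction (my own illustration, not from the thread), assuming a standard normal null distribution and using numpy/scipy; the particular null and the sample size are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm  # standard normal chosen here as an illustrative null

rng = np.random.default_rng(0)

u = rng.uniform(size=100_000)   # uniform random numbers on (0, 1)
t = norm.ppf(u)                 # inverse CDF of the null maps them into draws from the null

# t is now a sample from the null distribution (mean ~ 0, sd ~ 1 for this choice)
print(t.mean(), t.std())
```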
 
alan2 said:
Help. I need an intuitive, non-mathematical explanation of why p-values from hypothesis testing are uniformly distributed.

That won't be true if the test statistic has a discrete distribution.
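A quick way to see this is to simulate a discrete test statistic; the binomial null below is my own illustrative choice, not something from the thread.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)

# Test statistic drawn under a discrete null, here T ~ Binomial(10, 0.5)
t = rng.binomial(n=10, p=0.5, size=100_000)
p_left = binom.cdf(t, 10, 0.5)   # left-tail p-values P(T <= t)

# Only 11 distinct p-values are possible, so their distribution cannot be Uniform(0, 1)
print(np.unique(p_left))
```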
tnich said:
If you wanted to generate random numbers from whatever the null distribution is, you would first generate a uniform random number and then apply the inverse cdf of the null distribution to get a random value. That value would be a (one-sided) p-value.

That value would be a value ##t_0## of the test statistic. The p-value corresponding to ##t_0## would be (for a left-tail test) the cdf evaluated at ##t_0##, so you get back the original random number that you chose from a uniform distribution.

It looks like we're OK for a left-tail test from a continuous distribution. Are things really going to work out for other types of acceptance/rejection regions?
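One way to probe that question is a small simulation (again my own sketch, with a standard normal null assumed purely for illustration): draw test statistics under the null, form left-tail, right-tail, and two-sided p-values, and compare each against Uniform(0, 1) with a Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy.stats import norm, kstest

rng = np.random.default_rng(2)
t = rng.standard_normal(100_000)            # test statistics drawn under the null

p_left  = norm.cdf(t)                       # left-tail:  P(T <= t)
p_right = norm.sf(t)                        # right-tail: P(T >= t)
p_two   = 2 * np.minimum(p_left, p_right)   # two-sided (equal-tail) p-value

for name, p in [("left", p_left), ("right", p_right), ("two-sided", p_two)]:
    ks = kstest(p, "uniform")               # compare against Uniform(0, 1)
    print(f"{name:9s} KS statistic = {ks.statistic:.4f}")
```

All three KS statistics come out near zero. For a continuous null, ##U = F(T)## is uniform, so ##1 - U## is uniform, and the equal-tail two-sided p-value ##2\min(U, 1-U)## is again uniform because ##P(2\min(U, 1-U) \le x) = x/2 + x/2 = x##.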
 