Using a one-sided P-value test in a selection

  • Context: Undergrad
  • Thread starter: natski
  • Tags: P-value, Test
SUMMARY

This discussion focuses on the use of a one-sided P-value test to select between the null hypothesis (H0) and the "signal" hypothesis (H1). Participants highlight the limitations of P-values, emphasizing that while rejecting H0 at the 5% significance level makes a strong case for H1, it does not equate to 95% confidence in H1, because the Type I and Type II error rates are distinct quantities. The Neyman-Pearson Lemma is referenced as the way to achieve the lowest Type II error rate (beta) for a given Type I error rate (alpha) via likelihood ratios, and both alpha and beta are noted to depend on the chosen statistical test and the sample size.

PREREQUISITES
  • Understanding of one-sided P-value tests
  • Familiarity with null and alternative hypotheses
  • Knowledge of Type I and Type II errors
  • Basic concepts of the Neyman-Pearson Lemma
NEXT STEPS
  • Explore Bayesian model selection techniques
  • Study the Neyman-Pearson Lemma in detail
  • Learn about likelihood ratios in hypothesis testing
  • Investigate the implications of sample size on alpha and beta errors
USEFUL FOR

Statisticians, data analysts, and researchers involved in hypothesis testing and model selection will benefit from this discussion, particularly those interested in the nuances of P-values and Bayesian approaches.

natski
Hey guys,

I am currently using a one-sided P-value test to select between two hypotheses:

H0: null hypothesis
H1: "signal" hypothesis

I know that there are only two possible hypotheses. I have read a lot in the literature about how the P-value is a bit crappy and Bayesian model selection can offer the true probabilities, whereas the P-value only lets you reject H0 at a certain significance level.

However, if I reject H0 at, say, the 95% level, then since there are only two possible hypotheses, does this not mean that H1 must be accepted at the 95% confidence level too? After all, what other possibility is there?

Natski
 
I'm not really a statistician, but the key idea isn't the confidence level alone; the power of the test also matters when deciding which test to use. There are two types of errors you can run into: you can reject H0 when H0 is true (a Type I error, with probability alpha), or accept H0 when H0 is false (a Type II error, with probability beta).

When you say 95% confidence, you are really talking about alpha: a 95% confidence level means that the probability of rejecting H0 when it is true is 0.05. But that says nothing about the probability of accepting H0 when H0 is false. The Neyman-Pearson Lemma tells you that, for a given alpha, the lowest possible beta is obtained by using a likelihood-ratio test. The p-value comes in here: it is the smallest significance level at which you would still reject H0 (say your p-value is 0.02; then you reject H0 for any alpha of 0.02 or more, and accept H0 if you take alpha to be less than 0.02).
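To make the alpha/beta distinction concrete, here is a minimal sketch in Python of a one-sided z-test of H0: mu = 0 against H1: mu = mu1 > 0 with known unit variance. This example is my own, not something from the original problem, and every number in it (n, alpha, the effect size mu1) is hypothetical:

import numpy as np
from scipy.stats import norm

# One-sided z-test of H0: mu = 0 vs H1: mu = mu1 > 0, known sigma = 1.
n = 25          # sample size (hypothetical)
alpha = 0.05    # chosen Type I error rate
mu1 = 0.5       # assumed effect size under H1 (hypothetical)

# Decision rule: reject H0 when Z = sqrt(n) * xbar exceeds z_crit.
# By construction, P(reject H0 | H0 true) = alpha.
z_crit = norm.ppf(1 - alpha)

# Under H1, Z ~ N(mu1 * sqrt(n), 1), so the Type II error rate is
# beta = P(Z <= z_crit | H1 true).
beta = norm.cdf(z_crit - mu1 * np.sqrt(n))
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, power = {1 - beta:.3f}")

# One-sided p-value for a data set simulated from H1.
rng = np.random.default_rng(0)
x = rng.normal(mu1, 1.0, size=n)
z = np.sqrt(n) * x.mean()
p_value = norm.sf(z)   # P(Z >= z | H0 true)
print(f"p-value = {p_value:.4f}; reject at alpha = {alpha}: {p_value < alpha}")

With these made-up numbers, rejecting H0 caps the false-alarm rate at 5%, but the probability of detecting a true signal is 1 - beta, roughly 0.8, not 0.95. That is exactly why rejecting H0 at the 95% level is not the same thing as 95% confidence in H1.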

Again, I don't know much about Bayesian statistics, but I believe alpha and beta depend not just on the test you use but also on the sample size.
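On that last point, the same sketch (same hypothetical effect size and significance level as above) shows beta shrinking as the sample size grows while alpha stays fixed:

import numpy as np
from scipy.stats import norm

alpha, mu1 = 0.05, 0.5          # hypothetical significance level and effect size
z_crit = norm.ppf(1 - alpha)    # one-sided critical value, fixed by alpha
for n in (10, 25, 50, 100):
    # Type II error rate of the one-sided z-test at sample size n
    beta = norm.cdf(z_crit - mu1 * np.sqrt(n))
    print(f"n = {n:4d}: beta = {beta:.3f}, power = {1 - beta:.3f}")

Holding alpha at 0.05, beta drops from roughly 0.5 at n = 10 to well under 0.01 at n = 100 in this toy setup, so the two error rates really are controlled by different knobs.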
 
