# Using a one-sided P-value test in a selection

Hey guys,

I am currently using a one-sided P-value test in a selection between two hypotheses:

H0: null hypothesis
H1: "signal" hypothesis

I know that there are only two possible hypotheses. I have read a lot in the literature about how the P-value is a bit crappy and how Bayesian model selection can offer actual posterior probabilities, whereas the P-value only lets you reject H0 at a certain significance level.

However, if I reject H0 at, say, the 95% level, then since there are only 2 possible hypotheses, does this not mean that H1 must be accepted at the 95% confidence level too? After all, what other possibility is there?
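Just to pin down what I mean by the one-sided test: here is a minimal Python sketch, assuming (purely for illustration, this is not my actual analysis) that the test statistic is standard normal under H0 and that "signal" pushes it upward:

```python
import math

def one_sided_p_value(z):
    """Upper-tail p-value for a test statistic that is standard
    normal under H0: P(Z >= z | H0), via the complementary
    error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# An observed statistic of z = 1.645 gives p close to 0.05,
# i.e. borderline rejection of H0 at the 95% level.
p = one_sided_p_value(1.645)
```

So rejecting H0 "at 95%" just means the observed p fell below 0.05.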

Natski


I'm not really a statistician, but the key idea isn't the confidence level alone; the power of the test also matters when deciding which test to use. There are two types of error you can run into: you can reject H_0 when H_0 is true (a type I error, with probability alpha), or accept H_0 when H_0 is false (a type II error, with probability beta).

When you say 95% confidence, you are talking about alpha: it means the probability of rejecting H_0 when it is true is 0.05. But this doesn't say anything about the probability of accepting H_0 when H_0 is false. The Neyman-Pearson lemma tells you that, for a given alpha, the lowest beta is obtained by using a likelihood-ratio test. The P-value comes in here: it is the smallest significance level at which you can reject H_0 (say you have a P-value of 0.02; then you reject H_0 for any alpha above 0.02, and you accept H_0 if you take alpha to be less than 0.02).
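To see why alpha alone doesn't determine beta, here's a toy Python sketch for a one-sided z-test, assuming (as an illustration only) that the statistic is standard normal under H_0 and normal with a hypothetical mean mu1 = 3.0 under H_1:

```python
import math

def phi(x):
    """Standard normal CDF, via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

z_crit = 1.645            # rejection threshold: reject H_0 if z > z_crit
alpha = 1 - phi(z_crit)   # P(reject H_0 | H_0 true), about 0.05

mu1 = 3.0                 # assumed signal strength under H_1 (made up here)
beta = phi(z_crit - mu1)  # P(accept H_0 | H_1 true)
power = 1 - beta          # P(reject H_0 | H_1 true)
```

With the same alpha, a weaker assumed signal (smaller mu1) gives a larger beta, which is exactly why quoting "95% confidence" by itself says nothing about how often you'd miss a real signal.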

Again, I have no idea about Bayesian statistics, but note that alpha and beta are not fixed by the test alone: beta also depends on the true alternative (how strong the signal actually is) and on the sample size.