# Alternative to the p-value

1. Sep 8, 2008

### natski

Dear all,

If one has a distribution f(x,mean,sigma), for example the normal distribution, is there a better alternative to the p-value?

I have heard many comments that claim the p-value is not to be taken too seriously.

Consider that I wish to calculate the probability that x could take a value x1, where x1 > mean. How do I do this more reliably? Is there a Bayesian method perhaps?

Cheers,
Natski

2. Sep 8, 2008

### Focus

Sorry, I don't quite understand this. What do you need the p-value for? If you want to know the probability density at a value x1 (when x is distributed with pdf f), then that's just f(x1). Or do you mean when you don't know the mean or sigma (or both)? In that case you can get an estimate and a confidence interval.

3. Sep 8, 2008

### natski

Firstly, yes we do know the mean and variance, we know everything about the distribution f.

So I understand that I could simply integrate f from x1 to infinity to get the one-tailed p-value.
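For a normal f that tail integral has a closed form via the complementary error function. A minimal sketch (the example parameter values are assumptions for illustration, not from the thread):

```python
import math

def one_tailed_p(x1, mean, sigma):
    """One-tailed p-value: integral of the Normal(mean, sigma) pdf from x1 to infinity."""
    z = (x1 - mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Assumed example: standard normal, observation two sigma above the mean.
p = one_tailed_p(2.0, mean=0.0, sigma=1.0)  # about 0.0228, the familiar one-tailed two-sigma value
```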

So the one-sided p-value gives me the probability that the value x1 could have been acquired simply by chance. But I have read that (1-p) is NOT the probability that the alternative hypothesis (i.e. that it did not occur by chance) is true (see the Wikipedia entry on the p-value).

Hypothesis H0: The value x1 was acquired by chance.
Alternative hypothesis H1: The value x1 was not acquired by chance (e.g. a signal is present or something)

What I wanted was something a little more sophisticated than a p-value, something that could give the probability that the alternative hypothesis is correct.

Natski

4. Sep 8, 2008

### Focus

So you have a distribution and you want to know whether the observation x1 is from the distribution? What you said is correct: the integral gives you the probability of seeing a value at least as extreme as x1 if it really did come from the distribution. To test a hypothesis you need to pick a significance level, usually 0.05. Thus if that probability is less than the significance level, you reject the hypothesis that x1 is from the distribution; if it is more, you fail to reject it.

You don't do 1-p for the alternative because you assume the null hypothesis until you have enough evidence to reject it.
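The reject/fail-to-reject rule described above can be sketched as follows (the 0.05 threshold matches the post; the example numbers are assumptions):

```python
import math

def one_tailed_p(x1, mean, sigma):
    # P(X >= x1) for X ~ Normal(mean, sigma)
    return 0.5 * math.erfc((x1 - mean) / (sigma * math.sqrt(2.0)))

def reject_null(x1, mean, sigma, alpha=0.05):
    """Reject H0 ('x1 came from f') when the tail probability falls below alpha."""
    return one_tailed_p(x1, mean, sigma) < alpha

# Assumed example: p ~ 0.023 < 0.05 at two sigma, so H0 is rejected there.
decision = reject_null(2.0, mean=0.0, sigma=1.0)
```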

5. Sep 8, 2008

### natski

OK, but this is all classical p-value testing really. I was hoping for some kind of Bayesian counterpart to the p-value test which would be more concrete.

Say that H0: x1 is a member of the f distribution
and H1: x1 is a member of a new distribution, g, which has a mean at x1.

And we also know:
- the mean of g > mean of f since x1 > the mean of f

So is there a Bayesian way of comparing the probability that the value x1 was a result of f or g?

Natski

6. Sep 8, 2008

### rbeale98

There is a Bayesian method for hypothesis testing, but I do not know it well enough to explain it here. It may or may not be better than the p-value approach for your specific application.

7. Sep 8, 2008

### Count Iblis

You need the prior probabilities for the distributions f and g.

8. Sep 8, 2008

### Count Iblis

So, consider this example. Suppose you have two populations, one distributed according to f, the other according to g. Someone randomly chooses one of the two populations (with 50% probability) and then samples x. He then tells you the value x he sampled, but doesn't tell you what population he sampled x from. The question then is: what is the probability it was sampled from f given the value x?

This is a straightforward application of Bayes's theorem. So, try to solve this problem!
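The exercise can be worked through directly. A sketch, assuming both populations are normal and using the 50/50 choice described above (all parameter values are made-up examples):

```python
import math

def normal_pdf(x, mean, sigma):
    """Density of Normal(mean, sigma) at x."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def prob_from_f(x, mean_f, sigma_f, mean_g, sigma_g, prior_f=0.5):
    """Bayes's theorem: P(f | x) = P(x|f) P(f) / [P(x|f) P(f) + P(x|g) P(g)]."""
    num = prior_f * normal_pdf(x, mean_f, sigma_f)
    den = num + (1.0 - prior_f) * normal_pdf(x, mean_g, sigma_g)
    return num / den

# Assumed example: f = N(0, 1), g = N(2, 1), observed value x = 0.
p_f = prob_from_f(0.0, 0.0, 1.0, 2.0, 1.0)
```

With the observation sitting at the mean of f, the posterior works out to 1/(1 + e^-2), about 0.88 in favour of f.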

9. Sep 9, 2008

### natski

OK, I can solve this problem quite easily by taking the odds ratio, $O_{10}$, of the two probability distributions evaluated at x = x1.

If there are only two possible hypotheses then $P(H_1) = 1/[1 + O_{10}^{-1}]$.
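With equal priors the odds ratio reduces to the likelihood ratio $O_{10} = g(x_1)/f(x_1)$. A numeric check of the formula above, with assumed normal distributions and g centred at the observation as described:

```python
import math

def normal_pdf(x, mean, sigma):
    """Density of Normal(mean, sigma) at x."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Assumed example: f = N(0, 1), alternative g centred at the observed x1 = 2.
x1, sigma = 2.0, 1.0
O10 = normal_pdf(x1, x1, sigma) / normal_pdf(x1, 0.0, sigma)  # likelihood ratio g/f
P_H1 = 1.0 / (1.0 + 1.0 / O10)                                # equivalently O10 / (1 + O10)
```

Here $O_{10} = e^2 \approx 7.39$, so the posterior probability of $H_1$ comes out around 0.88.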

This is all well and good. But is it really allowed?

I mean, I have chosen my alternative hypothesis to have a pdf centred at the observed value of the data, which feels like cheating in a way.... but I have no prior information as to what the value could be otherwise....