Is There a Reliable Bayesian Alternative to the P-Value?

  • Thread starter natski
  • Start date
  • Tags
    P-value
In summary (Natski): This is a tricky question. On one hand, you are essentially choosing to believe the hypothesis that x1 was acquired by chance. On the other hand, you don't have any information about what the value could otherwise be, so you could be arbitrarily choosing to believe in any value. I think it is okay to choose the alternative hypothesis, but it feels a bit like cheating.
  • #1
natski
267
2
Dear all,

If one has a distribution f(x,mean,sigma), for example the normal distribution, is there a better alternative to the p-value?

I have heard many comments that claim the p-value is not to be taken too seriously.

Suppose I wish to calculate the probability that x could take a value x1, where x1 > mean. How do I do this more reliably? Is there a Bayesian method, perhaps?

Cheers,
Natski
 
  • #2
Sorry, I don't quite understand this. What do you need the p-value for? If you want to know the probability density of x at a value x1 (when x is distributed with pdf f), then that's just f(x1). Or do you mean when you don't know the mean or sigma (or both)? In that case you can get an estimate and a confidence interval.
 
  • #3
Firstly, yes, we do know the mean and variance; we know everything about the distribution f.

So I understand that I could simply integrate f from x1 to infinity to get the one-tailed p-value.

So the one-sided p-value gives me the probability that the value x1 could have been acquired simply by chance. But I have read that (1 - p) is NOT the probability that the alternative hypothesis (i.e. that it did not occur by chance) is true (see the Wikipedia entry on the p-value).

Hypothesis H0: The value x1 was acquired by chance.
Alternative hypothesis H1: The value x1 was not acquired by chance (e.g. a signal is present or something)

What I wanted was something a little more sophisticated than a p-value, something that could give the probability that the alternative hypothesis is correct.

Natski
 
  • #4
natski said:
So I understand that I could simply integrate f from x1 to infinity to get the one-tailed p-value.

So you have a distribution and you want to know whether the observation x1 came from that distribution? What you said is correct: the integral gives you the probability of obtaining a value at least as large as x1 if it really did come from the distribution (the one-tailed p-value). To test a hypothesis you need to pick a significance level, usually 0.05. Thus, if that probability is less than the significance level, you reject the hypothesis that x1 is from the distribution; if it is more, you accept (i.e. fail to reject) that x1 is from the distribution.

You don't do 1-p for the alternative because you assume the null hypothesis until you have enough evidence to reject it.
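
For concreteness, here is a minimal sketch of that one-tailed test for a normal f, with purely illustrative values for the mean, sigma, observed value, and significance level:

```python
# One-tailed p-value for a normal distribution with known parameters.
# All numerical values are illustrative, not from this thread.
from scipy.stats import norm

mean, sigma = 0.0, 1.0   # assumed, fully known parameters of f
x1 = 2.3                 # the observed value

# P(X >= x1 | H0): integral of f from x1 to infinity, i.e. the survival function
p_value = norm.sf(x1, loc=mean, scale=sigma)

alpha = 0.05             # the conventional significance level mentioned above
print(f"one-tailed p-value = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```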
 
  • #5
OK, but this is all p-value testing really. I was hoping for some kind of Bayesian version of the p-test that would be more concrete.

Say that H0: x1 is a member of the f distribution
and H1: x1 is a member of a new distribution, g, which has a mean at x1.

And we also know:
- the mean of g > mean of f since x1 > the mean of f

So is there a Bayesian way of comparing the probability that the value x1 was a result of f or g?

Natski
 
  • #6
There is a Bayesian method for hypothesis testing, but I do not know it well enough to explain it here. It may or may not be better than the p-test for your specific application.
 
  • #7
You need the prior probabilities for the distributions f and g.
 
  • #8
So, consider this example. Suppose you have two populations, one distributed according to f, the other according to g. Someone randomly chooses one of the two populations (with 50% probability) and then samples x. He then tells you the value x he sampled, but doesn't tell you what population he sampled x from. The question then is: what is the probability it was sampled from f given the value x?

This is a straightforward application of Bayes's theorem. So, try to solve this problem!
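
A minimal sketch of this two-population exercise, assuming both populations are normal and using the 50/50 prior from the example (the particular parameters and observed value are illustrative):

```python
# Posterior probability that x was sampled from f, via Bayes' theorem.
from scipy.stats import norm

f = norm(loc=0.0, scale=1.0)   # population f (assumed parameters)
g = norm(loc=3.0, scale=1.0)   # population g (assumed parameters)
prior_f, prior_g = 0.5, 0.5    # the 50% sampling probabilities in the example

x = 2.3                        # the value the sampler reports

# Bayes' theorem: P(f | x) = P(x | f) P(f) / [P(x | f) P(f) + P(x | g) P(g)],
# with the likelihoods given by the densities evaluated at x.
posterior_f = f.pdf(x) * prior_f / (f.pdf(x) * prior_f + g.pdf(x) * prior_g)
print(f"P(sampled from f | x) = {posterior_f:.4f}")
```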
 
  • #9
OK, I can solve this problem quite easily by taking the odds ratio, $O_{10}$, of the two probability distributions evaluated at x = x1.

If there are only two possible hypotheses, then $P(H_1) = 1/[1 + O_{10}^{-1}]$.

This is all fine and good. But is it really allowed?

I mean, I have chosen my alternative hypothesis to have a pdf centred at the observed value of the data, which feels like cheating in a way... but I have no prior information as to what the value could otherwise be...
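
For concreteness, a minimal sketch of that odds-ratio calculation, assuming equal prior odds for the two hypotheses and normal forms for f and g, with g centred at the observed value as described above (all numerical values are illustrative):

```python
# Odds ratio O_10 and the resulting P(H_1) = 1 / [1 + O_10^{-1}].
from scipy.stats import norm

mean_f, sigma = 0.0, 1.0
x1 = 2.3

f = norm(loc=mean_f, scale=sigma)  # H0: x1 drawn from f
g = norm(loc=x1, scale=sigma)      # H1: x1 drawn from g, centred at x1

O_10 = g.pdf(x1) / f.pdf(x1)       # odds ratio of the two hypotheses at x = x1
P_H1 = 1.0 / (1.0 + 1.0 / O_10)    # assumes equal prior odds and only two hypotheses
print(f"O_10 = {O_10:.2f}, P(H1) = {P_H1:.4f}")
```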
 

1. What is an alternative to the p-value?

An alternative to the p-value is the confidence interval. Instead of providing a single value, the confidence interval gives a range of values within which the true value is likely to fall. This can provide a more informative and comprehensive understanding of the data.

2. How is the confidence interval calculated?

The confidence interval is calculated using the sample mean, standard deviation, and sample size. It is based on the assumption that the data follows a normal distribution. The formula for a 95% confidence interval is mean ± 1.96 * (standard deviation / √sample size).
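
A minimal sketch of this calculation on a hypothetical sample, using the normal-approximation multiplier 1.96 quoted above:

```python
# 95% confidence interval for the mean: mean +/- 1.96 * (std / sqrt(n)).
import numpy as np

data = np.array([52.1, 54.3, 55.0, 56.2, 53.8, 57.1])  # hypothetical sample
mean = data.mean()
std = data.std(ddof=1)          # sample standard deviation
n = len(data)

half_width = 1.96 * std / np.sqrt(n)
print(f"95% CI: {mean - half_width:.2f} to {mean + half_width:.2f}")
```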

3. How is the confidence interval interpreted?

The confidence interval represents the range of values within which the true population mean is likely to fall. For example, a 95% confidence interval of 50-60 for a sample mean of 55 would mean that we are 95% confident that the true population mean lies between 50 and 60.

4. What are the advantages of using a confidence interval over a p-value?

A confidence interval provides a more comprehensive understanding of the data by giving a range of plausible values rather than a single point estimate. It also takes into account the sample size and provides a measure of uncertainty. Additionally, a confidence interval is expressed in the units of the quantity being estimated, which often makes practical significance easier to judge than a bare p-value.

5. Are there any limitations to using a confidence interval?

One limitation of a confidence interval is that it assumes a normal distribution of data. If the data is not normally distributed, the confidence interval may not accurately represent the true range of values. Additionally, the interpretation of a confidence interval may vary depending on the sample size and confidence level chosen.
