Is There a Reliable Bayesian Alternative to the P-Value?

  • Context: Graduate
  • Thread starter: natski
  • Tags: P-value

Discussion Overview

The discussion revolves around the exploration of alternatives to the p-value in statistical hypothesis testing, particularly through Bayesian methods. Participants examine the reliability of p-values and seek a more sophisticated approach to assess the probability of hypotheses related to a given distribution.

Discussion Character

  • Exploratory
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Natski questions the reliability of p-values and seeks a Bayesian alternative to assess the probability that a value x1 is greater than the mean of a known distribution f.
  • One participant suggests that if the mean and variance of the distribution are known, the probability of x1 can be directly calculated using the distribution's probability density function (pdf).
  • Natski clarifies that while integrating the distribution from x1 to infinity provides a one-tailed p-value, it does not equate to the probability that the alternative hypothesis is true.
  • Another participant emphasizes the need to establish a significance level for hypothesis testing, which is typically set at 0.05, and discusses the assumption of the null hypothesis until evidence suggests otherwise.
  • Natski expresses a desire for a Bayesian approach that compares the probabilities of x1 being from distribution f versus a new distribution g, which has a mean greater than that of f.
  • A participant mentions that a Bayesian method exists for hypothesis testing but lacks sufficient knowledge to explain it in detail.
  • Another participant notes the necessity of prior probabilities for the distributions f and g in a Bayesian framework.
  • A hypothetical scenario is presented involving two populations, prompting a discussion on applying Bayes's theorem to determine the probability of sampling from one distribution given a value x.
  • Natski raises a concern about the validity of choosing an alternative hypothesis centered at the observed value, questioning whether this approach is legitimate without prior information.

Areas of Agreement / Disagreement

Participants express differing views on the reliability of p-values and the appropriateness of Bayesian methods. There is no consensus on the best approach, and multiple competing perspectives on hypothesis testing remain present.

Contextual Notes

Participants discuss the implications of prior probabilities and the choice of hypotheses in Bayesian analysis, indicating potential limitations in the assumptions made during the discussion.

natski
Dear all,

If one has a distribution f(x,mean,sigma), for example the normal distribution, is there a better alternative to the p-value?

I have heard many comments that claim the p-value is not to be taken too seriously.

Consider that I wish to calculate the probability that x could take a value x1, where x1 > mean. How do I do this more reliably? Is there a Bayesian method perhaps?

Cheers,
Natski
 
Sorry, I don't quite understand this. What do you need the p-value for? If you want to know the probability density at a value x1 (when x is distributed with pdf f), then that's just f(x1). Or do you mean when you don't know the mean or sigma (or both)? In that case you can get an estimate and a confidence interval.
 
Firstly, yes we do know the mean and variance, we know everything about the distribution f.

So I understand that I could simply integrate f from x1 to infinity to get the one-tailed p-value.

So the one-sided p-value gives me the probability that the value x1 could have been acquired simply by chance. But I have read that (1-p) is NOT the probability that the alternative hypothesis (i.e. that it did not occur by chance) is true (see Wikipedia entry on the p-value).
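As a concrete check of that integral: for a normal f, the tail integral from x1 to infinity can be written in closed form via the complementary error function. A minimal sketch (the values mean = 0, sigma = 1, x1 = 2 are illustrative, not from the thread):

```python
import math

def one_tailed_p(x1, mean, sigma):
    """P(X >= x1) for X ~ Normal(mean, sigma), i.e. the
    integral of f from x1 to infinity (the one-tailed p-value)."""
    z = (x1 - mean) / (sigma * math.sqrt(2))
    return 0.5 * math.erfc(z)

print(one_tailed_p(2.0, 0.0, 1.0))  # ~0.0228, the familiar "2 sigma" tail
```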

Hypothesis H0: The value x1 was acquired by chance.
Alternative hypothesis H1: The value x1 was not acquired by chance (e.g. a signal is present or something)

What I wanted was something a little more sophisticated than a p-value, something that could give the probability that the alternative hypothesis is correct.

Natski
 
natski said:
So I understand that I could simply integrate f from x1 to infinity to get the one-tailed p-value.

So you have a distribution and you want to know if the observation x1 is from the distribution? Well, what you said is correct: the integral gives you the probability of seeing a value at least as extreme as x1 if it came from the distribution. To test a hypothesis you need to pick a significance level, usually 0.05. Thus if the probability is less than that, you reject the hypothesis that x1 is from the distribution; if it is more, you do not reject it.

You don't do 1-p for the alternative because you assume the null hypothesis until you have enough evidence to reject it.
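The decision rule described above can be sketched in a few lines (the p-values passed in are illustrative numbers, not from the thread; note the asymmetry, retaining H0 is not the same as proving it):

```python
def decide(p, alpha=0.05):
    """Classical test: assume H0 until the evidence (a small p)
    warrants rejecting it. A large p merely fails to reject H0;
    it is NOT the probability that H0 (or H1) is true."""
    return "reject H0" if p < alpha else "retain H0"

print(decide(0.0228))  # reject H0
print(decide(0.30))    # retain H0
```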
 
Ok but this is all p-testing really. I was hoping for some kind of Bayesian version of the p-test which would be more concrete.

Say that H0: x1 is a member of the f distribution
and H1: x1 is a member of a new distribution, g, which has a mean at x1.

And we also know:
- the mean of g > mean of f since x1 > the mean of f

So is there a Bayesian way of comparing the probability that the value x1 was a result of f or g?

Natski
 
There is a Bayesian method for hypothesis testing but I do not know it well enough to explain here. it may or may not be better than the p-test for your specific application.
 
You need the prior probabilities for the distributions f and g.
 
So, consider this example. Suppose you have two populations, one distributed according to f, the other according to g. Someone randomly chooses one of the two populations (with 50% probability) and then samples x. He then tells you the value x he sampled, but doesn't tell you what population he sampled x from. The question then is: what is the probability it was sampled from f given the value x?

This is a straightforward application of Bayes's theorem. So, try to solve this problem!
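The two-population exercise above can be sketched directly. All the numbers here are hypothetical choices for illustration (f = N(0, 1), g = N(3, 1), a 50/50 prior, and an observed x = 1.0); only the structure of the Bayes update is the point:

```python
import math

def normal_pdf(x, mean, sigma):
    """Density of Normal(mean, sigma) at x."""
    z = (x - mean) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical setup: two populations, each picked with 50% probability.
x = 1.0                       # the value the sampler reports
prior_f = prior_g = 0.5       # 50/50 choice of population
like_f = normal_pdf(x, 0.0, 1.0)   # P(x | f), with f = N(0, 1)
like_g = normal_pdf(x, 3.0, 1.0)   # P(x | g), with g = N(3, 1)

# Bayes's theorem:
# P(f | x) = P(x | f) P(f) / [P(x | f) P(f) + P(x | g) P(g)]
post_f = like_f * prior_f / (like_f * prior_f + like_g * prior_g)
print(post_f)  # ~0.818: x = 1.0 is much more plausible under f
```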
 
OK, I can solve this problem quite easily by taking the odds ratio, $O_{10}$, of the two probability distributions evaluated at $x = x_1$.

If there are only two possible hypotheses then $P(H_1) = 1/(1 + O_{10}^{-1})$.

This is all fine and good. But is it really allowed?

I mean, I have chosen my alternative hypothesis to have a pdf centred at the observed value of the data. This feels like cheating in a way... but I have no prior information as to what the value could be otherwise...
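The odds-ratio formula above can be sketched for the data-centred choice of g being questioned here. The numbers (f = N(0, 1), sigma = 1, x1 = 2) are illustrative; note that because g is centred on the data, its likelihood at x1 is as large as it can possibly be, which is exactly why this choice inflates $P(H_1)$ and "feels like cheating":

```python
import math

def normal_pdf(x, mean, sigma):
    """Density of Normal(mean, sigma) at x."""
    z = (x - mean) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

mean_f, sigma = 0.0, 1.0   # assumed parameters of f
x1 = 2.0                   # assumed observation, x1 > mean_f

# H0: x1 drawn from f = N(mean_f, sigma)
# H1: x1 drawn from g = N(x1, sigma), i.e. g centred on the data itself
O10 = normal_pdf(x1, x1, sigma) / normal_pdf(x1, mean_f, sigma)

# With equal priors: P(H1) = 1 / (1 + O10^(-1))
P_H1 = 1.0 / (1.0 + 1.0 / O10)
print(O10, P_H1)  # O10 = e^2 ~7.39, so P(H1) ~0.88
```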
 
