Comparing probabilities of Binomially-distributed RVs

  • Thread starter: TaPaKaH
  • Tags: Probabilities
TaPaKaH
Suppose we have two series of independent Bernoulli experiments with unknown success probabilities ##p_1## and ##p_2##. The first series registers ##x_1## successes in ##n_1## trials, while the second series registers ##x_2## successes in ##n_2## trials. Is there a way we can compute the probability ##\mathbb{P}(p_1\geq p_2)##?
 
I have seen a two-sample test with null hypothesis ##H_0: p_1 = p_2## and alternative hypothesis ##H_a: p_1 > p_2##. This calls for a one-tailed test using a pooled variance estimate (for proportions, a z-test based on the pooled sample proportion).
This question seems like a variation on that standard problem.
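For concreteness, here is a minimal sketch of that pooled one-tailed test (a two-proportion z-test), assuming the usual normal approximation to the binomial; the function name and the counts passed in at the bottom are made up for illustration and are not from this thread.

```python
# Minimal sketch: one-tailed two-proportion z-test with a pooled proportion.
# Assumes the normal approximation is adequate (large n1 and n2).
from scipy.stats import norm

def pooled_one_tailed_test(x1, n1, x2, n2):
    """Test H0: p1 = p2 against Ha: p1 > p2."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled estimate under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1_hat - p2_hat) / se
    return z, norm.sf(z)                                  # one-tailed p-value P(Z >= z)

# Illustrative counts only, not from the thread:
print(pooled_one_tailed_test(30, 100, 20, 100))
```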
 
Isn't this equivalent to testing whether the two distributions are equal? If the two ratios are equal, then ##\mathbb{E}\left[\frac{X_1}{n_1}-\frac{X_2}{n_2}\right]=0##, right?
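A quick sanity check of that expectation, assuming ##X_i \sim \mathrm{Bin}(n_i, p_i)## so that ##\mathbb{E}[X_i/n_i] = p_i##:
$$\mathbb{E}\left[\frac{X_1}{n_1} - \frac{X_2}{n_2}\right] = p_1 - p_2 = 0 \quad \text{when } p_1 = p_2.$$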
 
TaPaKaH said:
Is there a way we can compute the probability ##\mathbb{P}(p_1\geq p_2)##?

No, not unless you have some assumption or knowledge about the "prior" probability of that happening.

If you make specific assumptions about the probabilities, you can compute the probability of various aspects of the data being what was observed. However, the assumption that ##p_1 \geq p_2## isn't specific enough to do that. I think you must at least assume that ##p_1 = p_2##.
 
TaPaKaH said:
Suppose we have two series of independent Bernoulli experiments with unknown success probabilities ##p_1## and ##p_2##. The first series registers ##x_1## successes in ##n_1## trials, while the second series registers ##x_2## successes in ##n_2## trials. Is there a way we can compute the probability ##\mathbb{P}(p_1\geq p_2)##?
Yes, it is called Bayes' theorem. You must start by assuming some joint prior probability distribution for ##p_1## and ##p_2##, representing your (lack of) knowledge about ##p_1## and ##p_2## before doing the experiment. Then you observe ##x_1## and ##x_2##. Now you compute the probability distribution of ##p_1## and ##p_2## given ##x_1## and ##x_2##, using the rule that the posterior density is proportional to the prior density times the likelihood. Normalise it to integrate to 1, then compute the probability that ##p_1## is greater than or equal to ##p_2##.

Trouble is, the result will depend on your prior probability distribution for ##p_1## and ##p_2##. According to the principles of Bayesian statistics, this prior distribution should represent your beliefs about ##p_1## and ##p_2## before doing the experiment.

The likelihood for ##p_1## and ##p_2## is proportional to ##\text{lik}(p_1, p_2) = p_1^{x_1} (1 - p_1)^{n_1 - x_1} \cdot p_2^{x_2} (1 - p_2)^{n_2 - x_2}##. So if, for instance, you start with uniform independent priors for ##p_1## and ##p_2##, the answer is the integral of the likelihood over the region ##p_1 \geq p_2##, divided by the integral of the likelihood over all ##p_1## and ##p_2##.

This results in an exercise concerning two independent beta distributed random variables. See for instance http://stats.stackexchange.com/ques...lity-px-y-given-x-bea1-b1-and-y-bea2-b2-and-x
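To make the last two paragraphs concrete: with independent uniform (##\text{Beta}(1,1)##) priors, the posterior of ##p_i## is ##\text{Beta}(x_i + 1,\, n_i - x_i + 1)##, and ##\mathbb{P}(p_1 \geq p_2 \mid x_1, x_2)## can be estimated by sampling from the two posteriors. Below is a minimal Python/SciPy sketch under those assumptions; the function name and the counts are made up for illustration and are not from this thread.

```python
# Minimal sketch of the Bayesian calculation above, assuming independent
# uniform Beta(1,1) priors, so the posterior of p_i is Beta(x_i + 1, n_i - x_i + 1).
# P(p1 >= p2 | data) is then estimated by Monte Carlo over the two posteriors.
import numpy as np
from scipy.stats import beta

def prob_p1_ge_p2(x1, n1, x2, n2, n_samples=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    p1 = beta.rvs(x1 + 1, n1 - x1 + 1, size=n_samples, random_state=rng)
    p2 = beta.rvs(x2 + 1, n2 - x2 + 1, size=n_samples, random_state=rng)
    return float(np.mean(p1 >= p2))       # posterior probability that p1 >= p2

# Illustrative counts only, not from the thread (should come out around 0.95):
print(prob_p1_ge_p2(30, 100, 20, 100))
```

Sampling sidesteps the double integral; for an exact answer one can integrate the two Beta densities directly, as discussed in the Stack Exchange question linked above.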
 