Write probability in terms of the shape parameters of a beta distribution

hjam24
TL;DR Summary
Initially we have the probability of a player winning a game with certain scoring rules. The probability of winning the game depends on the probability p of winning a point, which is assumed to be constant. The goal is to draw p from a beta distribution and change the formula accordingly.
Assume that players A and B play a match where the probability that A wins each point is p (for B it is 1 - p), and a player wins when he reaches 11 points with a margin of at least 2. The outcome of the match is specified by $$P(y \mid p, A_{wins})$$
If we know that A wins, his score is determined by B's score y: he has necessarily scored max(11, y + 2) points.

In the case y ≥ 10 we have

$$ P(A_{wins} \cap y \mid p) = \binom{10 + 10}{10}p^{10}(1-p)^{10}
\cdot[2p(1-p)]^{y-10}\cdot p^2$$

The factors represent, respectively:
- probability of reaching (10, 10)
- probability of reaching y after (10, 10)
- probability of A winning two times in a row
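
The three-factor formula can be sanity-checked by simulation. The sketch below (my own addition, not part of the thread; the example values p = 0.55 and y = 12 are arbitrary) simulates full games to 11 points, win by 2, and counts those that A wins with B scoring exactly y:

```python
# Monte Carlo check of P(A_wins ∩ y | p) for the deuce case y >= 10.
import random
from math import comb

def simulate_game(p, rng):
    """Play one game to 11 points, win by 2; return (A's score, B's score)."""
    a = b = 0
    while not ((a >= 11 or b >= 11) and abs(a - b) >= 2):
        if rng.random() < p:
            a += 1
        else:
            b += 1
    return a, b

p, y = 0.55, 12          # y = B's final score in a deuce game
rng = random.Random(0)
n = 200_000
hits = sum(simulate_game(p, rng) == (y + 2, y) for _ in range(n))

# C(20,10) p^10 (1-p)^10 * [2p(1-p)]^(y-10) * p^2
formula = comb(20, 10) * p**10 * (1 - p)**10 * (2*p*(1 - p))**(y - 10) * p**2
print(hits / n, formula)  # the two should agree to Monte Carlo accuracy
```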

I would like to drop the constant-p assumption and instead draw p from a beta distribution.
The first part can be rewritten as a [beta-binomial](https://en.wikipedia.org/wiki/Beta-binomial_distribution) probability:

$$ P(A_{wins} \cap y \mid \alpha, \beta) = \binom{10+10}{10}\frac{B(10+\alpha,\, 10+\beta)}{B(\alpha, \beta)} \cdot (\ldots) \cdot (\ldots)$$

The original formula can be simplified to

$$P(A_{wins} \cap y \mid p) = \binom{10 + 10}{10}p^{12}(1-p)^{10}
\cdot[2p(1-p)]^{y-10}$$
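
This algebraic step (folding the trailing p² into p¹⁰) can be checked symbolically; the sympy snippet is my own addition, not from the thread:

```python
# Symbolic check that the three-factor product equals the simplified form.
import sympy as sp

p, y = sp.symbols("p y", positive=True)
original = sp.binomial(20, 10) * p**10 * (1 - p)**10 * (2*p*(1 - p))**(y - 10) * p**2
simplified = sp.binomial(20, 10) * p**12 * (1 - p)**10 * (2*p*(1 - p))**(y - 10)
print(sp.simplify(original - simplified))  # 0
```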

Is it correct to combine the first and third factors as follows:

$$ P(A_{wins} \cap y \mid \alpha, \beta) = \binom{10+10}{10}\frac{B(12+\alpha,\, 10+\beta)}{B(\alpha, \beta)} \cdot (\ldots) $$
 
It's not quite a beta-binomial distribution. However, you can do a similar integral $$\int_0^1 P(A_{wins}\cap y \mid p)\,\mathrm{Beta}(p \mid \alpha,\beta)\,dp.$$ Note also that the expression for P simplifies further before you do the integral.
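
A numerical sketch of this suggestion (my own, not from the thread; the values of alpha, beta, y are arbitrary examples, and the closed form is worked out from the simplified expression): the integrand reduces to C(20,10)·2^(y-10)·p^(y+2)·(1-p)^y, so integrating against the Beta(α, β) density gives C(20,10)·2^(y-10)·B(α + y + 2, β + y)/B(α, β).

```python
# Integrate P(A_wins ∩ y | p) against a Beta(alpha, beta) prior on p,
# and compare with the closed form obtained by simplifying first.
from math import comb
from scipy.integrate import quad
from scipy.special import beta as B
from scipy.stats import beta as beta_dist

alpha, beta_, y = 2.0, 3.0, 13   # example values; y >= 10 (deuce case)

def p_win_and_y(p, y):
    # C(20,10) p^10 (1-p)^10 * [2p(1-p)]^(y-10) * p^2
    return comb(20, 10) * p**10 * (1 - p)**10 * (2*p*(1 - p))**(y - 10) * p**2

numeric, _ = quad(lambda p: p_win_and_y(p, y) * beta_dist.pdf(p, alpha, beta_), 0, 1)
closed = comb(20, 10) * 2**(y - 10) * B(alpha + y + 2, beta_ + y) / B(alpha, beta_)
print(numeric, closed)  # the two agree to quadrature accuracy
```

Because the integral is taken once over the whole product, the p-dependent factors combine into a single Beta-function ratio, rather than one ratio per factor.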
 