Winning and losing teams probability distribution

libragirl79
Suppose two teams play a series of games, each producing a winner and a loser, until one team has won two more games than the other. Let Y be the total number of games played. Assume each team has a chance of 0.5 to win each game, independent of results of the previous games. Find the probability distribution of Y.



The Attempt at a Solution



At first I thought I'd say that if team A has won X games and team B has won Z games, then Y = X + Z, and since the winner must finish two games ahead, Z = X + 2, so Y = 2X + 2. The probability function would then be P(Y <= y) = P(Y <= 2X + 2), but I am not sure where to go from here... I was also thinking that it may be a binomial, with (Y choose 2X+2)(0.5)^(2X+2)(0.5)^Y, where X = 1, 2, 3, ..., Y.

Any input is appreciated!
 
libragirl79 said:
Suppose two teams play a series of games, each producing a winner and a loser, until one team has won two more games than the other. Let Y be the total number of games played. Assume each team has a chance of 0.5 to win each game, independent of results of the previous games. Find the probability distribution of Y.

Have you studied Markov chains yet? If so, it is a simple exercise to get the generating function of Y, hence to find the distribution P{Y = n}, n = 2,3,4,... .

Here is a hint: let X = #won by A - #won by B. We start at X = 0. X increases by 1 if A wins and decreases by 1 if B wins. The game stops when X reaches ±2. So, we have a simple 4-state Markov chain with states X in {-1, 0, 1, 'stop'}, and Y = first-passage time from state 0 to state 'stop'.
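The stopping rule in this hint is easy to check numerically. Here is a minimal Monte-Carlo sketch in Python (the function name `series_length` is just illustrative) that simulates the walk X until it hits ±2 and tallies how often each series length occurs:

```python
import random

def series_length():
    # One series: diff = (#won by A) - (#won by B); stop at +/-2.
    diff, games = 0, 0
    while abs(diff) < 2:
        diff += 1 if random.random() < 0.5 else -1
        games += 1
    return games

random.seed(0)
trials = 100_000
counts = {}
for _ in range(trials):
    n = series_length()
    counts[n] = counts.get(n, 0) + 1

for n in sorted(counts)[:4]:
    print(n, counts[n] / trials)
```

Two things should jump out of the output: only even lengths occur (diff changes parity each game, so |diff| = 2 forces an even game count), and the probabilities of successive even lengths follow a clean geometric pattern.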

RGV
 
No, we haven't done Markov Chains yet...
What do you mean by "first passage time"?
 
libragirl79 said:
No, we haven't done Markov Chains yet...
What do you mean by "first passage time"?

The first passage time from any state i to state 'stop' is the first time (t = 1,2,3,...) at which state 'stop' is reached. It is exactly the Y that you seek. (Calling it a first-passage time allows you to do a meaningful Google search of that term to find out more about it.)
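You don't need the Markov-chain machinery to compute this first-passage distribution exactly. A sketch of the idea, using the 4-state setup from the earlier hint (state order and names here are my own choice): track the probability of sitting in each transient state {-1, 0, 1} at each time, and record the probability mass absorbed into 'stop' at each step.

```python
def first_passage_dist(n_max):
    # p[s] = probability of being in transient state s at the current time,
    # never yet absorbed. The walk starts at X = 0.
    p = {-1: 0.0, 0: 1.0, 1: 0.0}
    dist = {}
    for t in range(1, n_max + 1):
        # From +/-1, half the mass steps to +/-2 and is absorbed now.
        dist[t] = 0.5 * (p[-1] + p[1])
        # Remaining mass redistributes: 0 -> +/-1, +/-1 -> 0.
        p = {-1: 0.5 * p[0], 0: 0.5 * (p[-1] + p[1]), 1: 0.5 * p[0]}
    return dist

dist = first_passage_dist(10)
```

Printing `dist` shows P(Y = n) = 0 for odd n, and the even-n values halving at each step, which is exactly the distribution being asked for.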

RGV
 