Many pool games, like 8-ball and 9-ball, are played with the "winner breaks" rule. That is, winning a game gives you the right to break (make the opening shot) in the next game. Players usually flip a coin or lag to decide who breaks in the first game. For decent players, getting the break can be a real advantage, because they can sometimes "break and run", winning the game in one turn. And if not, they still may be able to gain control of the game by leaving their opponent a difficult shot. This amounts to saying that the probability r of winning a game when you break is greater than the probability s of winning a game when your opponent breaks.

The winner-breaks rule means that a series of games is not well modeled as a series of independent Bernoulli trials. The outcomes of games i and j are not independent, although their dependence shrinks as |j - i| increases. It would seem that in a very long series of games, the fraction of games you win would very likely approach some value f. This sounds like a law of large numbers claim, but it isn't immediately obvious to me how you would prove it.

The fraction of games that you win is also the fraction of games where you are breaking, so

f = f*r + (1-f)*s

Collecting the f terms gives f*(1 - r + s) = s, so

f = s/(s + 1 - r)

I think this is the probability of interest when deciding what the odds are that player A wins a long match against player B, say a race to 9 or 11 games. Of course, that assumes you have some way of knowing r and s.

Is there a name for this kind of process, where there are two different success probabilities, r and s, one applying when the previous trial was a success and the other when it was a failure?
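As a sanity check on the formula f = s/(s + 1 - r), here is a quick Monte Carlo sketch of the winner-breaks process. The values r = 0.6 and s = 0.4 are made up for illustration; with them the formula predicts f = 0.4/0.8 = 0.5.

```python
import random

def simulate(r, s, num_games=1_000_000, seed=0):
    """Play a long winner-breaks series; return the fraction of games you win.

    r: probability you win a game when you break
    s: probability you win a game when your opponent breaks
    """
    rng = random.Random(seed)
    wins = 0
    you_break = True  # who breaks first shouldn't matter in a long series
    for _ in range(num_games):
        p = r if you_break else s
        won = rng.random() < p
        wins += won
        you_break = won  # winner of this game breaks the next one
    return wins / num_games

r, s = 0.6, 0.4
empirical = simulate(r, s)
predicted = s / (s + 1 - r)
print(f"simulated f = {empirical:.4f}, predicted f = {predicted:.4f}")
```

The simulated fraction lands close to the predicted value, consistent with the idea that the long-run winning fraction converges to f regardless of who breaks first.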