Probability of a player winning a best of 7

  • Context: Graduate
  • Thread starter: disregardthat
  • Tags: Probability
SUMMARY

This discussion focuses on calculating the probability of a player winning a best of 7 match, given a probability p of winning individual games. The probability of player A winning the first game and the entire match is expressed as the summation ##\sum^3_{k=0} p^4(1-p)^k {2+k \choose k}##. The derived polynomial equation ##2p^6 - 6p^5 + 5p^4 - p + 1 = 0.73## indicates that p is approximately 0.3 or 0.7. The conversation raises questions about interpreting these probabilities, particularly regarding the variability of p and its implications for match outcomes.

PREREQUISITES
  • Understanding of probability theory and stochastic variables
  • Familiarity with combinatorial mathematics, specifically binomial coefficients
  • Knowledge of polynomial equations and their solutions
  • Basic concepts of statistical distributions, such as normal distribution
NEXT STEPS
  • Research the implications of stochastic variables in probability calculations
  • Learn about binomial distributions and their applications in game theory
  • Explore methods for estimating probabilities in sports analytics
  • Study the integration of probability distributions to derive expected values
USEFUL FOR

This discussion is beneficial for mathematicians, statisticians, sports analysts, and anyone interested in probability theory and its applications in competitive scenarios.

disregardthat
Science Advisor
This discussion popped up on a different forum and I'd like to hear some opinions on this.

Suppose players A and B are playing each other in a best of 7 match. Player A has probability p of winning each individual game against B, independently of the other games. We want to calculate the probability of a player winning the first game and the entire match.

The probability may be calculated as follows. Consider first A winning the first game and the entire match: A wins the first and the last game played, and B wins some k of the 2+k games in between, for k = 0, 1, 2, 3. The probability is then

##\sum^3_{k=0} p^4(1-p)^k {2+k \choose k}##
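
As a quick sanity check (my own addition, not part of the original post), here is a Monte Carlo sketch; the value p = 0.7 is only an assumed illustration:

```python
# Monte Carlo check of the sum above (sketch; p = 0.7 is an assumed value).
import numpy as np
from math import comb

p = 0.7
rng = np.random.default_rng(0)
trials = 200_000
hits = 0

for _ in range(trials):
    a_wins = b_wins = 0
    a_won_first = None
    while a_wins < 4 and b_wins < 4:
        a_takes_game = rng.random() < p       # independent games, prob p for A
        if a_won_first is None:
            a_won_first = a_takes_game        # record who took game 1
        if a_takes_game:
            a_wins += 1
        else:
            b_wins += 1
    if a_won_first and a_wins == 4:           # A won game 1 and the series
        hits += 1

closed_form = sum(p**4 * (1 - p)**k * comb(2 + k, k) for k in range(4))
print(hits / trials, closed_form)             # both close to ~0.65 for p = 0.7
```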

Similarly, the probability that B wins the first game and the entire match is:

##\sum^3_{k=0} p^k(1-p)^4 {2+k \choose k}##

Summing these yields

##2p^6 - 6p^5 + 5p^4 - p + 1##.
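
For anyone who wants to check the algebra, a short symbolic sketch (my own addition, not from the thread) expands the two sums and reproduces this polynomial:

```python
# Symbolic check: expand both sums and confirm the polynomial above.
import sympy as sp

p = sp.symbols('p')
q = 1 - p
total = sum(sp.binomial(2 + k, k) * (p**4 * q**k + p**k * q**4) for k in range(4))
print(sp.expand(total))   # -> 2*p**6 - 6*p**5 + 5*p**4 - p + 1
```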

The observed probability of this happening is 0.73.

Solving ##2p^6 - 6p^5 + 5p^4 - p + 1 = 0.73## yields ##p \approx 0.3## or ##0.7##.
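
Note that the polynomial is symmetric under p ↔ 1 - p (swapping the roles of A and B), so the solutions come in pairs summing to 1. A minimal numerical sketch of the root-finding (my own addition; the bracketing intervals are chosen by hand on either side of 0.5):

```python
# Solve 2p^6 - 6p^5 + 5p^4 - p + 1 = 0.73 on [0, 1] by bracketed root-finding.
from scipy.optimize import brentq

f = lambda x: 2*x**6 - 6*x**5 + 5*x**4 - x + 1 - 0.73
roots = [brentq(f, lo, hi) for lo, hi in [(0.0, 0.5), (0.5, 1.0)]]
print(roots)   # roughly 0.30 and 0.70, matching the values quoted above
```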

On to the questions:

1) Can one interpret 0.7 as the expected probability of the stronger player winning the first game and the entire match? I'm not so sure.

2) Given p as a random variable, how can one calculate the probability of the stronger player winning the entire match and the first game?

3) Given p as a random variable, how can one calculate the probability of a player winning the entire match GIVEN that he wins the first game?

4) What is the difference between 2) and 3), i.e. how can they be interpreted?

5) What can the entire data (list of observed results) say about the distribution of p, or any of the above?

I am sure there is a lot of confusion here. I'd appreciate someone setting this straight.
 
disregardthat said:
The observed probability of this happening is 0.73.

This rings a warning bell. From your setup, you are looking at the playoff games in one or more sports. Your underlying assumption is that p is fixed, but this is certainly not true and will depend on the actual teams. You are trying to infer a parameter which will change with the matchup from a large number of matchups.

disregardthat said:
Can one interpret 0.7 as the expected probability of the stronger player winning the first game and the entire match? I'm not so sure.
No, by your own definition, p is the probability of the stronger team (in a fixed matchup) winning any particular game.

disregardthat said:
Given p as a random variable, how can one calculate the probability of the stronger player winning the entire match and the first match?
A priori you could do this if you knew the distribution of p, which is going to depend on the possible matchups. Naturally, this probability is going to be larger for larger p (for p = 1 the probability is 1).
 
Thanks. But the expression, as written, does make sense for a stochastic variable p, right?

So we have an estimate of the new variable, ##X = 2p^6 - 6p^5 + 5p^4 - p + 1##.

Let's say we know something about p (e.g. normally distributed with mean 0.5 and standard deviation s). What can we infer about X, and in particular about P(X >= 0.73)?
 
If you see p as a stochastic variable, then the probability of having one team win the first game and the series is a stochastic variable and has a certain distribution. In order to find out the probability given this distribution you will have to integrate over the distribution of p. Inserting the expectation value of p into the formula will generally not give you the expectation value of X.
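
As an illustration of that integration step (my own sketch, not part of the thread): the clipped normal distribution and the spread s = 0.15 below are assumptions, since the distribution of p is left unspecified.

```python
# Monte Carlo over an assumed distribution of p: normal(0.5, s) clipped to [0, 1].
import numpy as np

rng = np.random.default_rng(0)
s = 0.15                                            # assumed spread of p
p = np.clip(rng.normal(0.5, s, 1_000_000), 0.0, 1.0)
X = 2*p**6 - 6*p**5 + 5*p**4 - p + 1                # prob. that game-1 winner takes the series

print("E[X] by Monte Carlo :", X.mean())
print("formula at E[p]=0.5 :", 2*0.5**6 - 6*0.5**5 + 5*0.5**4 - 0.5 + 1)  # 0.65625
print("P(X >= 0.73)        :", (X >= 0.73).mean())
```

Since ##X''(p) = 60p^2(1-p)^2 \geq 0##, the polynomial is convex, so by Jensen's inequality E[X] lies above the formula evaluated at E[p] = 0.5; this is exactly the point about not just inserting the expectation value of p.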
 
