Tennis Probabilities: Player A vs. Player B

In summary: Player A wins a point against a typical rival 50% of the time and Player B wins 60% of the time; the thread asks what this implies about the probability that A beats B (and that B beats A) when they play each other.
  • #1
musicgold
Hi,

There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival. That is P(A) = 0.50 and P(B) = 0.60.

If Player A plays with Player B, what is the probability that Player A beats Player B, i.e. P(A, B), and the probability that Player B beats Player A, P(B, A)?

I am not sure how I should simulate this problem.


Thanks,

MG.
 
  • #2
Intuitively, since A is an "average" player and B beats the "average" player 60% of the time, perhaps B should win 60% of the time against A.

But it doesn't necessarily work like that. For example, perhaps A and B are equally skilled when playing against people at a given experience level, except that B is better at taking advantage of the weaknesses of poor players than A is. That would give B a higher win percentage, without giving him any advantage against A.

Conclusion: you just don't have enough information.
 
  • #3
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.

The probability of winning a MATCH against A will then be significantly higher (close to 100%), depending on whether they play best of three or best of five sets.
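
To get a feel for how quickly a per-point edge compounds over a match, here is a minimal Monte Carlo sketch in Python. It assumes B wins every point against A independently with probability 0.6 (the figure claimed above) and ignores serve alternation and tiebreaks, so it only illustrates the compounding effect, not real tennis scoring.

[code]
import random

def play_game(p):
    # One game: first to 4 points, win by 2 (deuce handled implicitly).
    a = b = 0
    while True:
        if random.random() < p:
            a += 1
        else:
            b += 1
        if a >= 4 and a - b >= 2:
            return True
        if b >= 4 and b - a >= 2:
            return False

def play_set(p):
    # One set: first to 6 games, win by 2 (no tiebreak, for simplicity).
    a = b = 0
    while True:
        a, b = (a + 1, b) if play_game(p) else (a, b + 1)
        if a >= 6 and a - b >= 2:
            return True
        if b >= 6 and b - a >= 2:
            return False

def play_match(p, best_of=3):
    # Best-of-N-sets match.
    need = best_of // 2 + 1
    a = b = 0
    while a < need and b < need:
        a, b = (a + 1, b) if play_set(p) else (a, b + 1)
    return a == need

random.seed(0)
n = 20000
p = 0.6  # assumed per-point probability that B beats A
print("game :", sum(play_game(p) for _ in range(n)) / n)
print("set  :", sum(play_set(p) for _ in range(n)) / n)
print("match:", sum(play_match(p, best_of=3) for _ in range(n)) / n)
[/code]

With p = 0.6 per point this comes out around 0.74 for a game, 0.97 for a set and over 0.99 for a best-of-three match, which is what "close to 100%" refers to.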
 
  • #4
winterfors said:
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.
Justify your claim. What assumptions are you making?
 
  • #5
winterfors said:
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.

Can you please explain your answer?

I am reading the book Calculated Bets by Steven Skiena, which explains how to use mathematical modeling to beat the jai alai betting system.
The author uses the following function to get the answer to the above problem.

[tex] P(A, B) = \frac{1 + [P(A) - P(B)]^{\alpha}}{2} [/tex] if [itex]P(A) \ge P(B)[/itex]

[tex] P(A, B) = \frac{1 - [P(B) - P(A)]^{\alpha}}{2} [/tex] if [itex]P(A) \le P(B)[/itex]

The constant [itex]\alpha \ge 0[/itex] is used as a fudge factor to tune the results of this function to the observed statistics.
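
For what it's worth, the formula transcribes directly into a few lines of Python. This is only a sketch; the function name and the example value [itex]\alpha = 1[/itex] are mine (the book fits [itex]\alpha[/itex] to observed data).

[code]
def p_beats(p_a, p_b, alpha=1.0):
    # Skiena-style head-to-head estimate from the two per-point win rates.
    # alpha is the "fudge factor" exponent tuned to observed statistics.
    diff = abs(p_a - p_b) ** alpha
    return (1 + diff) / 2 if p_a >= p_b else (1 - diff) / 2

print(p_beats(0.5, 0.6))  # P(A, B) = 0.45 with alpha = 1
print(p_beats(0.6, 0.5))  # P(B, A) = 0.55 with alpha = 1
[/code]

The two branches guarantee that P(A, B) + P(B, A) = 1; smaller values of [itex]\alpha[/itex] push the estimate toward the favourite winning for certain, larger values toward a coin flip.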
 
  • #6
I was using a Bayesian approach.

Let's denote the probability that player [tex]X_1[/tex] wins against player [tex]X_2[/tex] :
[tex]P(W | X_1, X_2) [/tex]
We know the following:

[tex]P(W | A) = \sum\limits_{X}P(W | A, X)P(X) = 0.5 [/tex] (called P(A) in posts above)
[tex]P(W | B) = \sum\limits_{X}P(W | B, X)P(X) = 0.6 [/tex] (called P(B) in posts above)

Furthermore, the chance of a random player beating another random player is 50%,
[tex]P(W) = \sum\limits_{X_1}\sum\limits_{X_2}P(W, X_1, X_2) = 0.5 [/tex]
due to symmetry.

We want to calculate [tex]P(W | A, B) [/tex]. This can be done by assuming that
[tex]P(W, X_1, X_2) = P(X_1 | W)P(X_2 | W)P(W)[/tex].

Then,
[tex]P(W| A, B) = \frac{P(W, A, B)}{P(A, B)}[/tex]
[tex] = \frac{P(A | W)P(B | W)P(W)}{P(A, B)}[/tex]
[tex] = \frac{P(W|A)P(A) P(W|B)P(B) /P(W)}{(P(W|A)P(A) P(W|B)P(B) /P(W)) + P(L|A)P(A) P(L|B)P(B) /P(L)} [/tex]
[tex] = \frac{P(W|A) P(W|B) } {(P(W|A) P(W|B)) + P(L|A)P(L|B) } [/tex]
[tex] = \frac{0.5 * 0.6 } {0.5 * 0.6 + 0.5 * 0.4 } [/tex]
[tex] = 0.6 [/tex]

where [tex]P(L|A)=1-P(W|A)[/tex] and [tex]P(L|B)=1-P(W|B)[/tex] are the probabilities of losing.

It all got a bit technical, sorry about that, but I can't find a simpler derivation...
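
For anyone who wants to check the arithmetic of the last few lines, here is a one-function transcription in Python (the function name is mine):

[code]
def combine(p_w_a, p_w_b):
    # P(W|A) P(W|B) / ( P(W|A) P(W|B) + P(L|A) P(L|B) ),
    # with P(L|A) = 1 - P(W|A) and P(L|B) = 1 - P(W|B).
    return p_w_a * p_w_b / (p_w_a * p_w_b + (1 - p_w_a) * (1 - p_w_b))

print(combine(0.5, 0.6))  # 0.6
[/code]

Note that the expression is symmetric in its two arguments, so by itself it does not say which of the two players the 0.6 refers to.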
 
  • #7
winterfors,

Thanks for the detailed explanation. However, I am stuck at the following equation.
I would appreciate it if you could explain this to me.

winterfors said:
[tex]P(W | A) = \sum\limits_{X}P(W | A, X) = 0.5 [/tex] (called P(A) in posts above)
I don't understand how we can sum individual probabilities to get a collective probability of A winning against anybody, as the sum may be bigger than 1.

Let us say, there are 4 players, A, B, X1 and X2, and P(W | A, X1) = 0.4, P(W | A, X2) = 0.7.

Then P(W | A, X1) + P(W | A, X2) > 1

IMHO, we should take an average (or weighted average) of individual winning probabilities to get a collective probability.
 
  • #8
You're perfectly right, it doesn't make sense. I meant the expectation over [tex]X[/tex] rather than the sum.
[tex]P(W | A) = \sum\limits_{X}P(W | A, X)P(X) = 0.5 [/tex]

(I have corrected my original post)
 
  • #9
winterfors,

Thanks. I am still digesting your explanation. I am a bit confused by your notation. Could you explain what the following expressions stand for?

[tex] P(X) [/tex]

[tex] P(W, X_1, X_2) [/tex] versus [tex] P(W | X_1, X_2) [/tex]

[tex] P(X_1 | W) [/tex]

[tex] P(W) [/tex]
 
  • #10
[tex] P(X) [/tex] Probability that a random player has the value [itex]X[/itex]. If, for instance, [itex]X[/itex] is a number describing a player's strength, there might be 10% of players with [itex]X=1[/itex], 40% with [itex]X=2[/itex] and 50% with [itex]X=3[/itex].

Note that this probability is not actually used to calculate [itex] P(W | A, B) [/itex]; all the [itex] P(X) [/itex] terms cancel out along the way.

[tex] P(W | X_1, X_2) [/tex] Probability of player [itex] X_1 [/itex] winning over player [itex] X_2 [/itex] (the vertical bar indicates the conditional probability of W (= winning), given players [itex] X_1 [/itex] and [itex] X_2 [/itex]).

[tex] P(W, X_1, X_2) [/tex] The joint probability: the probability of player [itex] X_1 [/itex] winning over player [itex] X_2 [/itex], times the relative abundances of players with [itex]X=X_1[/itex] and [itex]X=X_2[/itex] (see the numeric sketch below).

[tex] P(W) [/tex] Probability that a random player wins over another random player.

[tex] P(X_1 | W) [/tex] The probability that a random player [itex] X [/itex] has the value [itex] X = X_1 [/itex], given that he has just won a match.
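
As a concrete numeric sketch of how these pieces fit together, take the hypothetical 10% / 40% / 50% population above and a made-up pairwise win table (the numbers are mine, chosen only so that [itex]P(W | X_1, X_2) + P(W | X_2, X_1) = 1[/itex]):

[code]
# Toy population: three strength levels with abundances P(X).
p_x = {1: 0.1, 2: 0.4, 3: 0.5}

# Made-up pairwise probabilities P(W | X1, X2) that a level-X1 player
# beats a level-X2 player; chosen so p[(i, j)] + p[(j, i)] = 1.
p_w_given = {
    (1, 1): 0.5, (1, 2): 0.3, (1, 3): 0.2,
    (2, 1): 0.7, (2, 2): 0.5, (2, 3): 0.4,
    (3, 1): 0.8, (3, 2): 0.6, (3, 3): 0.5,
}

# Joint distribution P(W, X1, X2) = P(W | X1, X2) P(X1) P(X2)
p_joint = {(i, j): p_w_given[(i, j)] * p_x[i] * p_x[j]
           for i in p_x for j in p_x}

# Marginals: P(W, X1) and the overall P(W)
p_w_and_x1 = {i: sum(p_joint[(i, j)] for j in p_x) for i in p_x}
p_w = sum(p_joint.values())

# Average win rates P(W | X1) = P(W, X1) / P(X1)
p_w_given_x1 = {i: p_w_and_x1[i] / p_x[i] for i in p_x}

print(p_w)            # 0.5, by the symmetry of the win table
print(p_w_given_x1)   # each level's average win rate against a random rival
[/code]

The joint distribution sums to [itex]P(W) = 0.5[/itex] exactly, as the symmetry argument says it must, and the marginals give each strength level's average win rate.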
 
  • #11
winterfors,

Thanks for bearing with me. I get all the notations except [tex] P(W, X_1, X_2) [/tex].
Could you please explain that concept with an example?
 
  • #12
winterfors said:
I was using a Bayesian approach.

Let's denote the probability that player [tex]X_1[/tex] wins against player [tex]X_2[/tex] :
[tex]P(W | X_1, X_2) [/tex]
...
You have a notational problem. The events separated by commas after the conditional bar should commute, but in your notation P(W|A,B) is apparently not the same as P(W|B,A). You need to keep track of which player the W refers to. For example, you might say P(W(A) | A, B), where W(A) means "A wins," A means "A plays," and B means "B plays." It looks like the fact that you don't do this leads to some confusion later on.
 
  • #13
As mXSCNT points out, I have messed up the notation in my previous post (definitions of [itex]P(W |X_1)[/itex] and [itex]P(W |X_2)[/itex] are ambiguous). I have tried to make a more formal derivation below.


PRELIMINARIES AND NOTATION

Let's denote the probability that player [itex]A[/itex] wins against player [itex]B[/itex] :
(1) [tex]P(W | A_1, B_2) [/tex]
where the indices 1 and 2 only mark the positions: W means that the player listed first (here A) beats the player listed second (here B), and not the other way around

This implies that [itex]P(W | A_1, B_2)[/itex] is equal to the probability that player [itex]B[/itex] loses to player [itex]A[/itex]
(2) [tex]P(W | A_1, B_2) = P(L | B_1, A_2) [/tex]

One can define joint probability distributions over the space of all (pairwise) combinations of players and all outcomes (Win / Loss) by multiplying with [itex]P(A)P(B)[/itex]
(3) [tex]P(W, A_1, B_2) = P(W | A_1, B_2) P(A) P(B)[/tex]

One can also define marginal probability distributions by summing over all [itex]X_1[/itex] or [itex]X_2[/itex]
(4) [tex]P(W , A_1) = \sum\limits_{X_2}P(W, A_1, X_2)[/tex]

(5) [tex]P(W , B_2) = \sum\limits_{X_1}P(W, X_1, B_2)[/tex]

(6) [tex]P(W) = \sum\limits_{X_1}\sum\limits_{Y_2}P(W, X_1, Y_2)[/tex]

From equation (2) we can deduce also that [itex]P(W, A_1) =P(L, A_2)[/itex] and [itex]P(W , B_1) =P(L , B_2)[/itex].

Finally, we can define the conditional probabilities:
(7) [tex]P(W | A_1) P(A) = P(A_1|W) P(W) = P(W, A_1)[/tex]

(8) [tex]P(W | B_2) P(B) = P(B_2|W) P(W) = P(W, B_2)[/tex]


WHAT WE KNOW

(9) [tex]P(W | B_1) = 0.6 [/tex]

(10) [tex]P(W | A_2) = 0.5 [/tex]
(since A wins 50% of the time on average, a random opponent facing A also wins 50% of the time)



ASSUMPTIONS MADE

(11) [tex]P(W) = P(L)= 0.5 [/tex]
The probability of a random player winning (or losing) against another random player is exactly 50%

(12) [tex]P(W, B_1, A_2) = P(B_1 | W) P(A_2 | W) P(W) [/tex]

This assumption can be justified by the maximum entropy principle: it maximizes the Shannon entropy of [itex]P(O, B_1, A_2)[/itex] (O = W or L), given that we know [itex] P(B_1 | W) [/itex], [itex] P(A_2 | W) [/itex] and [itex] P(W) [/itex].



DEDUCTIONS FROM ABOVE

We want to calculate
[tex]P(W| B_1, A_2) = \frac{P(W, B_1, A_2)}{P(B) P(A)} [/tex]

Inserting (12) gives
[tex]P(W| B_1, A_2) = \frac{P(B_1 | W) P(A_2 | W) P(W)}{P(B) P(A)} [/tex]

Using (7) and (8) (with the roles of A and B interchanged) we can finally write
[tex]P(W| B_1, A_2) = \frac{P(W| B_1) P(W| A_2 ) }{P(W)} [/tex]

Inserting (9) (10) and (11) gives
[tex]P(W| B_1, A_2) = \frac{0.6 * 0.5 }{0.5} = 0.6[/tex]

That is, under these assumptions the probability of player B beating player A is 60%.
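
A minimal numeric transcription of the final step in Python, using the values from (9), (10) and (11) (the function and argument names are mine):

[code]
def p_b1_beats_a2(p_w_b1, p_w_a2, p_w=0.5):
    # P(W | B_1, A_2) = P(W | B_1) * P(W | A_2) / P(W)
    # p_w_b1: B's average win rate when listed first        (eq. 9)
    # p_w_a2: probability that A's random opponent wins      (eq. 10)
    # p_w:    overall win rate of a random "first" player    (eq. 11)
    return p_w_b1 * p_w_a2 / p_w

print(p_b1_beats_a2(0.6, 0.5))  # 0.6
[/code]

This only reproduces the arithmetic, of course; the substance of the result lies in assumptions (11) and (12).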
 
  • #14
winterfors,

Thanks a lot. Though your solution is not fully clear to me, I appreciate your efforts.
I think I need to do more reading on Bayesian probability theory.

MG.
 
  • #15
musicgold said:
Hi,

There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival.
...
If Player A plays with Player B,
...

Then, considering a long period of observation (or "after a large number of points played"), for every 10 points played, Player A wins 5 points and Player B wins 6 points...

:yuck:
 
  • #16
musicgold said:
There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival.
...
If Player A plays with Player B,...

Kittel Knight said:
Then, ... for every 10 points played, Player A wins 5 points and Player B wins 6 points...

It seems the OP, as stated, leads to a strange conclusion!

Who is wrong: me, or the assumptions in the OP?
:confused:
 

Frequently asked questions

1. What are the chances of Player A winning against Player B?

The chances of Player A winning against Player B depend on multiple factors, such as their individual skills, physical condition, and past performance. It is difficult to determine an exact probability without considering these variables.

2. Is it possible to predict the outcome of a tennis match between Player A and Player B?

While it is not possible to predict the outcome of a tennis match with 100% accuracy, scientists and statisticians use various methods, such as statistical analysis and predictive modeling, to estimate the probability of each player winning.

3. How do you calculate the probability of a player winning a specific point or game?

The probability of a player winning a point or game is calculated using a combination of data analysis and mathematical formulas. Factors such as serving percentage, break point conversion rate, and unforced errors are considered when making these calculations.

4. Can the probability of a player winning change throughout a match?

Yes, the probability of a player winning can change throughout a match. As the match progresses, the players' performance and strategies may change, affecting their chances of winning. Additionally, external factors such as weather conditions can also impact the probability.

5. How accurate are the predicted probabilities of a tennis match?

The accuracy of predicted probabilities depends on the quality of data and the methods used to calculate them. While it is not possible to have a 100% accurate prediction, advanced statistical models and algorithms can provide fairly accurate probabilities based on historical data and current variables.
