Tennis Probabilities: Player A vs. Player B

musicgold
Hi,

There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival. That is P(A) = 0.50 and P(B) = 0.60.

If Player A plays against Player B, what is the probability that Player A beats Player B, i.e. P(A, B), and the probability that Player B beats Player A, P(B, A)?

I am not sure how I should simulate this problem.

Thanks,

MG.
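One way to set up such a simulation, once a head-to-head point probability for B against A has been assumed (choosing that number is what the replies below debate), is to play out matches point by point. Below is a minimal Python sketch, with simplified scoring (a single deciding game at 6-6 instead of a real tiebreak) and 0.6 used purely as a placeholder value.

Code:
import random

def play_game(p_b):
    """One game: first to 4 points, win by 2 (the margin rule covers deuce)."""
    a = b = 0
    while True:
        if random.random() < p_b:
            b += 1
        else:
            a += 1
        if max(a, b) >= 4 and abs(a - b) >= 2:
            return b > a                      # True if B takes the game

def play_set(p_b):
    """One set: first to 6 games, win by 2; one extra game decides 6-6
    (a simplification of the real tiebreak)."""
    a = b = 0
    while True:
        if play_game(p_b):
            b += 1
        else:
            a += 1
        if (max(a, b) >= 6 and abs(a - b) >= 2) or max(a, b) == 7:
            return b > a

def play_match(p_b, sets_to_win=2):
    """Best-of-three match by default (use sets_to_win=3 for best of five)."""
    a = b = 0
    while max(a, b) < sets_to_win:
        if play_set(p_b):
            b += 1
        else:
            a += 1
    return b > a

p_point = 0.6      # assumed per-point probability for B against A (the debated quantity)
trials = 20_000
wins = sum(play_match(p_point) for _ in range(trials))
print(f"B wins the match in roughly {wins / trials:.1%} of simulations")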
 
Intuitively, since A is an "average" player and B beats the "average" player 60% of the time, perhaps B should win 60% of the time against A.

But it doesn't necessarily work like that. For example, perhaps A and B are equally skilled when playing against people at a given experience level, except that B is better at taking advantage of the weaknesses of poor players than A is. That would give B a higher win percentage, without giving him any advantage against A.

Conclusion: you just don't have enough information.
 
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.

The probability of winning a MATCH against A will then be significantly higher (close to 100%), depending on whether they play best of three or best of five sets.
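To put a rough number on how the edge grows, here is a minimal sketch (my own illustration, not from the post) of the standard single-game calculation: if B wins each point with probability p, the chance of B winning one game is already well above p, and the advantage compounds again over games, sets and the match.

Code:
def game_win_prob(p):
    """Probability of winning one game when each point is won with probability p,
    under standard scoring (first to 4 points, win by 2; deuce at 3-3)."""
    q = 1 - p
    win_from_deuce = p**2 / (p**2 + q**2)         # win two in a row before the opponent does
    return (p**4                                  # win to love
            + 4 * p**4 * q                        # win to 15
            + 10 * p**4 * q**2                    # win to 30
            + 20 * p**3 * q**3 * win_from_deuce)  # reach deuce, then win

print(game_win_prob(0.6))   # ~0.74
print(game_win_prob(0.5))   # 0.5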
 
winterfors said:
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.
Justify your claim. What assumptions are you making?
 
Player B has a P(A)*P(B) / (P(A)*P(B) + (1-P(A))*(1-P(B))) = 60% chance of winning a point against player A.

Can you please explain your answer?

I am reading a book, Calculated Bets by Steven Skiena, which explains mathematical modeling used to beat the jai alai betting system.
The author uses the following function to get the answer to the above problem.

P(A, B) = \frac{1 + [P(A) - P(B)]^\alpha}{2} if P(A) \geq P(B)

P(A, B) = \frac{1 - [P(B) - P(A)]^\alpha}{2} if P(A) \leq P(B)

The constant \alpha \geq 0 is used as a fudge factor to tune the results of this function to the observed stats.
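Here is a minimal sketch of that function (the Python rendering and the function name are mine; alpha = 1 below is only for illustration).

Code:
def skiena_head_to_head(p_a, p_b, alpha=1.0):
    """P(A, B): probability that A beats B, given the players' average win
    rates p_a and p_b and the tuning exponent alpha >= 0."""
    if p_a >= p_b:
        return (1 + (p_a - p_b) ** alpha) / 2
    return (1 - (p_b - p_a) ** alpha) / 2

print(skiena_head_to_head(0.5, 0.6))        # 0.45 -> B wins 55% when alpha = 1
print(skiena_head_to_head(0.5, 0.6, 0.5))   # ~0.34 -> smaller alpha widens the gap

Note that with alpha = 1 this gives B only a 55% edge over A, which is not the same as the 60% figure derived later in the thread; the two rest on different modelling assumptions.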
 
I was using a Bayesian approach.

Let's denote the probability that player X_1 wins against player X_2:
P(W | X_1, X_2)
We know the following:

P(W | A) = \sum\limits_{X}P(W | A, X)P(X) = 0.5 (called P(A) in posts above)
P(W | B) = \sum\limits_{X}P(W | B, X)P(X) = 0.6 (called P(B) in posts above)

Furthermore, the chance of anyone beating anyone is 50%
P(W) = \sum\limits_{X_1}\sum\limits_{X_2}P(W, X_1, X_2)= 0.5
due to symmetry.

We want to calculate P(W | A, B). This can be done by assuming that
P(W, X_1, X_2) = P(X_1 | W)P(X_2 | W)P(W).

Then,
P(W| A, B) = \frac{P(W, A, B)}{P(A, B)}
= \frac{P(A | W)P(B | W)P(W)}{P(A, B)}
= \frac{P(W|A)P(A) P(W|B)P(B) /P(W)}{(P(W|A)P(A) P(W|B)P(B) /P(W)) + P(L|A)P(A) P(L|B)P(B) /P(L)}
= \frac{P(W|A) P(W|B) } {(P(W|A) P(W|B)) + P(L|A)P(L|B) }
= \frac{0.5 * 0.6 } {0.5 * 0.6 + 0.5 * 0.4 }
= 0.6

where P(L|A) = 1 - P(W|A) and P(L|B) = 1 - P(W|B) are the probabilities of losing.

It all got a bit technical, sorry about that, but I can't find a simpler derivation...
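As a quick numerical check of the final expression, here is a minimal sketch (the Python wrapper and the function name are mine).

Code:
def b_beats_a(p_a, p_b):
    """Expression above: P(W|A)P(W|B) / (P(W|A)P(W|B) + P(L|A)P(L|B)),
    read here as the chance that B beats A (the notation is tightened later in the thread)."""
    return (p_a * p_b) / (p_a * p_b + (1 - p_a) * (1 - p_b))

print(b_beats_a(0.5, 0.6))   # 0.6
print(b_beats_a(0.5, 0.5))   # 0.5: two average players are even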
 
winterfors,

Thanks for the detailed explanation. However, I am stuck on the following equation.
I would appreciate it if you could explain this to me.

winterfors said:
P(W | A) = \sum\limits_{X}P(W | A, X) = 0.5 (called P(A) in posts above)
I don't understand how we can sum individual probabilities to get a collective probability of A winning against anybody, as the sum may be bigger than 1.

Let us say there are 4 players, A, B, X1 and X2, with P(W | A, X1) = 0.4 and P(W | A, X2) = 0.7.

Then P(W | A, X1) + P(W | A, X2) > 1

IMHO, we should take an average (or weighted average) of individual winning probabilities to get a collective probability.
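For instance, if the two opponents are equally likely to be drawn, the weighted average would be 0.5 * 0.4 + 0.5 * 0.7 = 0.55, which is a valid probability.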
 
You're perfectly right, it doesn't make sense. I meant the expectation over X rather than the sum.
P(W | A) = \sum\limits_{X}P(W | A, X)P(X) = 0.5

(I have corrected my original post)
 
winterfors,

Thanks. I am still digesting your explanation. I am a bit confused with your notations. Could you explain what the following notations stand for?

P(X)

P(W, X_1, X_2) versus P(W | X_1, X_2)

P(X_1 | W)

P(W)
 
  • #10
P(X): the probability that a random player has the value X. If, for instance, X is a number describing a player's strength, there might be 10% of players with X=1, 40% with X=2 and 50% with X=3.

Note that this probability is not actually used to calculate P(W | A, B); all the P(X) terms cancel out along the way.

P(W | X_1, X_2): the probability of player X_1 winning over player X_2 (the vertical bar indicates the conditional probability of W (= winning), given players X_1 and X_2).

P(W, X_1, X_2): the probability of player X_1 winning over player X_2, times the relative abundances of players with X = X_1 and X = X_2, respectively.

P(W): the probability that a random player wins over another random player.

P(X_1 | W): the probability that a random player X has the value X = X_1, given that he has just won a match.
 
  • #11
winterfors,

Thanks for bearing with me. I get all the notations except P(W, X_1, X_2).
Could you please explain that concept with an example?
 
  • #12
winterfors said:
I was using a Bayesian approach.

Let's denote the probability that player X_1 wins against player X_2:
P(W | X_1, X_2)
...
We want to calculate P(W | A, B). This can be done by assuming that
P(W, X_1, X_2) = P(X_1 | W)P(X_2 | W)P(W).
...
You have a notational problem. The events separated by commas after the conditional bar should commute, but in your notation P(W|A,B) is apparently not the same as P(W|B,A). You need to keep track of which player the W refers to. For example, you might say P(W(A) | A, B), where W(A) means "A wins," A means "A plays," and B means "B plays." It looks like the fact that you don't do this leads to some confusion later on.
 
  • #13
As mXSCNT points out, I have messed up the notation in my previous post (the definitions of P(W | X_1) and P(W | X_2) are ambiguous). I have tried to make a more formal derivation below.


PRELIMINARIES AND NOTATION

Let's denote the probability that player A wins against player B:
(1) P(W | A_1, B_2)
where the indices 1 and 2 only mark that it is player 1 winning over player 2 (and not the other way around).

This implies that P(W | A_1, B_2) is equal to the probability that player B loses to player A
(2) P(W | A_1, B_2) = P(L | B_1, A_2)

One can define joint probability distributions over the space of all (pairwise) combinations of players and all outcomes (Win / Loss) by multiplying by P(A)P(B):
(3) P(W, A_1, B_2) = P(W | A_1, B_2) P(A) P(B)

One can also define marginal probability distributions by summing over X_1 or X_2:
(4) P(W , A_1) = \sum\limits_{X_2}P(W, A_1, X_2)

(5) P(W , B_2) = \sum\limits_{X_1}P(W, X_1, B_2)

(6) P(W) = \sum\limits_{X_1}\sum\limits_{X_2}P(W, X_1, X_2)

From equation (2) we can also deduce that P(W, A_1) = P(L, A_2) and P(W, B_1) = P(L, B_2).

Finally, we can define the conditional probabilities:
(7) P(W | A_1) P(A) = P(A_1|W) P(W) = P(W, A_1)

(8) P(W | B_2) P(B) = P(B_2|W) P(W) = P(W, B_2)


WHAT WE KNOW

(9) P(W | B_1) = 0.6

(10) P(W | A_2) = 0.5



ASSUMPTIONS MADE

(11) P(W) = P(L)= 0.5
The probability of a random player winning (or losing) against another random player is exactly 50%

(12) P(W, B_1, A_2) = P(B_1 | W) P(A_2 | W) P(W)

This assumption can be justified by the maximum entropy principle: among all joint distributions P(O, B_1, A_2) (O = W or L) consistent with the known P(B_1 | W), P(A_2 | W) and P(W), it is the one that assumes no additional information (maximum Shannon entropy).



DEDUCTIONS FROM ABOVE

We want to calculate
P(W| B_1, A_2) = \frac{P(W, B_1, A_2)}{P(B) P(A)}

Inserting (12) gives
P(W| B_1, A_2) = \frac{P(B_1 | W) P(A_2 | W) P(W)}{P(B) P(A)}

Using (7) and (8) we can finally write
P(W| B_1, A_2) = \frac{P(W| B_1) P(W| A_2 ) }{P(W)}

Inserting (9) (10) and (11) gives
P(W| B_1, A_2) = \frac{0.6 * 0.5 }{0.5} = 0.6

That is, the probability of player B beating player A is 60%
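The 60% figure therefore rests on assumption (12). As an illustration of how model-dependent it is (echoing the earlier point that the problem is underdetermined), here is a minimal sketch comparing the formula's prediction with the exact head-to-head probabilities in a hypothetical Bradley-Terry population; the strengths and weights below are invented purely for illustration. In this toy example the predictions land near, but not exactly on, the true values.

Code:
import itertools

# Hypothetical Bradley-Terry population: P(player 1 beats player 2) = s1 / (s1 + s2).
strengths = {"weak": 1.0, "medium": 2.0, "strong": 4.0}
weights = {"weak": 0.3, "medium": 0.4, "strong": 0.3}   # P(X): relative abundance

def head_to_head(s1, s2):
    return s1 / (s1 + s2)

def avg_win_rate(x):
    # P(W | X_1 = x): average win rate against a randomly drawn opponent.
    return sum(weights[y] * head_to_head(strengths[x], strengths[y]) for y in strengths)

def formula_prediction(x1, x2):
    # Final equation above: P(W | X1_1, X2_2) = P(W | X1_1) P(W | X2_2) / P(W),
    # where P(W | X2_2) = 1 - avg_win_rate(x2) and P(W) = 0.5 by symmetry.
    return avg_win_rate(x1) * (1 - avg_win_rate(x2)) / 0.5

for x1, x2 in itertools.permutations(strengths, 2):
    print(f"{x1:>6} vs {x2:>6}: "
          f"exact {head_to_head(strengths[x1], strengths[x2]):.3f}, "
          f"formula {formula_prediction(x1, x2):.3f}")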
 
  • #14
winterfors,

Thanks a lot. Though your solution is not fully clear to me, I appreciate your efforts.
I think I need to do more reading on the Bayesian probability theory.

MG.
 
  • #15
musicgold said:
Hi,

There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival.
...
If Player A plays with Player B,
...

Then, over a long period of observation (or "after a large number of points played"), for every 10 points played, Player A wins 5 points and Player B wins 6 points...

 
  • #16
musicgold said:
There are two tennis players. Player A wins a point 50% of the time (against any rival), while Player B wins a point 60% of the time against any rival.
...
If Player A plays with Player B,...

Kittel Knight said:
Then, ...for each 10 points played, Player A wins 5 points, and Player B wins 6 points...

It seems the OP, as stated, leads to a strange conclusion!

Who is wrong: me, or the assumptions in the OP?
 
