PeroK said:
The assumptions are certainly inadequate to bear any relation to on-court experience.
Isn't that just a tad harsh? Surely they bear some relation.
Your assumptions are more in line with, say, a game where each player takes turns to serve and the first to serve a fault loses. That would fit your model. Or one where each player tosses a coin and the first to toss a tail loses. In both cases, going first is clearly a disadvantage.
But each shot in a rally may be different from the previous one, so there is no obvious way to analyze the problem along these lines. Each player must choose between a low-risk shot and a high-risk shot each time, and so on.
Yes, of course, in a real game no two shots are exactly the same, and players will take more chances in some situations than in others. But averages are still of some use in analyzing what makes a difference in the play. No two games in any sport are exactly the same, yet the oddsmakers still publish and use averages. A .250 hitter will, on average, get a hit every fourth time at bat, even though the pitchers, the pitches, and the situations vary dramatically. But a .300 hitter will always get paid more than a .250 hitter, all else being equal, because, on average, they will get more hits.
My goal is to get a general sense of the impact on a player's overall success (winning points, games, sets, and matches) of improving their success rate at returning each ball. If a player improves their return rate by 5%, what does that do to their odds of winning that point, that game, that set, and that match? For that analysis, I think working with averages has merit. It's not perfect, but it has validity.
I posted here mainly to check my preliminary equations. So far, neither you nor anyone else has commented on them.
Once I confirm that these equations are correct, I plan to expand them to include parameters for first and second serve and the returns, with the ability to assign different average probabilities for each. That will allow me to calculate expected odds of winning a point, game, set, and match.
There is a quick way to do this for the simple situation where each player takes turns and each is equally likely to fail on each attempt:
Let ##p## be the probability of success on any one shot and ##P## be the probability that the player going first wins.
The first shot has two possible outcomes: success (with probability ##p##) and failure (with probability ##1-p##). In order for the first player to win the game, they must be successful on their first shot. If they are, the second player is left in an identical position, so the second player's chance of winning from that point must be ##P##, and the first player wins only if the second player goes on to lose, which has probability ##1-P##. This gives:
##P = p(1-P)##
Which can be rearranged to give:
##P = \frac{p}{1+p}##
For example, if ##p = 1/2## then ##P = 1/3##.
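As a quick numerical sanity check of that closed form, here is a minimal Python sketch (the function names are just illustrative) that sums the series for this turn-taking model directly: the player going first wins when the opponent is the first to miss, which for round ##k## has probability ##p^{2k-1}(1-p)##.

```python
# Check P = p / (1 + p) for the turn-taking model:
# players alternate shots, each succeeds with probability p,
# and the first player to miss loses.
#
# The player going first wins when the opponent is the first to miss,
# i.e. shots 1..(2k-1) all succeed and shot 2k fails, for some k >= 1.

def win_prob_series(p, terms=200):
    """Partial sum of p^(2k-1) * (1 - p) over k = 1..terms."""
    return sum(p ** (2 * k - 1) * (1 - p) for k in range(1, terms + 1))

def win_prob_closed(p):
    """Closed form from the recursion P = p * (1 - P)."""
    return p / (1 + p)

for p in (0.3, 0.5, 0.7, 0.9):
    print(p, win_prob_series(p), win_prob_closed(p))
```

For ##p = 1/2## both columns come out at ##1/3##, matching the example above.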
That agrees with my results for the special case where both players have the same return rates. But I don't follow the logic that got you to
##P = p(1-P)##
How did you avoid having to sum the infinite series? Maybe my math skills are not up to this task.
In any case, I am looking for the more general case where the players are not at the same skill level. That's why I used the infinite series, as explained in the PDF I attached.
Is there a similarly simple approach to the case where ##p_1## is the probability of success on any one shot for player 1 and ##p_2## is the probability for player 2?
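For what it's worth, here is a minimal numerical sketch (Python again, with illustrative names) of that asymmetric case, assuming the same turn-taking model: player 1 goes first and succeeds with probability ##p_1##, player 2 succeeds with probability ##p_2##, and the first to miss loses. Player 1 wins when player 2 is the first to miss, which for round ##k## has probability ##p_1^k p_2^{k-1}(1-p_2)##; summing that geometric series suggests the closed form ##\frac{p_1(1-p_2)}{1-p_1 p_2}##, which reduces to ##\frac{p}{1+p}## when ##p_1 = p_2 = p##. The code compares the partial sum with that candidate closed form:

```python
# Sketch, assuming the turn-taking model above: player 1 goes first and
# succeeds with probability p1, player 2 succeeds with probability p2,
# and the first to miss loses.  Player 1 wins when player 2 is the first
# to miss: round k needs k successes by player 1, k-1 successes by
# player 2, and then a miss by player 2.

def p1_wins_series(p1, p2, terms=200):
    """Partial sum of p1^k * p2^(k-1) * (1 - p2) over k = 1..terms."""
    return sum(p1 ** k * p2 ** (k - 1) * (1 - p2) for k in range(1, terms + 1))

def p1_wins_closed(p1, p2):
    """Candidate closed form from summing the geometric series (ratio p1*p2)."""
    return p1 * (1 - p2) / (1 - p1 * p2)

for p1, p2 in ((0.5, 0.5), (0.6, 0.5), (0.8, 0.7)):
    print(p1, p2, p1_wins_series(p1, p2), p1_wins_closed(p1, p2))
```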