How Can Bayesian Analysis Estimate Player A's Winning Probability in a Match?

taylrl3
Hi,

I am trying to learn something about Bayesian Analysis by doing an example.

I have a series of 10 matches played between A and B, where each match is first to 3 points. The example data set looks like this (each letter is the winner of one point):

ABBAA
BAAA
AABBA
BBB
BABB
AAA
AABA
BAAA
AABBB
AAA

I would like to calculate the probability that A wins any given point, based on this data set and using a Bayesian approach.

I have read some information on Bayesian Analysis and I believe that in order to start I need something called a prior probability distribution (please correct me at any point if I am wrong). I was thinking I would construct this distribution from the mean number of points won by A across all 40 points played, along with the standard deviation.

Next I need to calculate the likelihood function, and there is something called a posterior distribution too. To be honest I am starting to get a little lost by this point, and I am slightly confused because I already have my data and this seems to be trying to predict what I already have.

Help?
 
taylrl3 said:
I would like to calculate the probability that A wins any given point

Begin by phrasing your questions precisely. Are you asking how to answer questions like "What is the probability that in a randomly selected match, player A wins the 4th point?". Or are you asking questions like "What is the probability that in a randomly selected match, player A wins the match by winning the 4th point?".
 
Ok, apologies for being vague. I am only just finding my way in this.

What I had done initially was simply to take the total number of points that A won and divide by the total number of points played (p = 24/40 = 0.6). I realize that this isn't really that great, and I would like to take things much deeper. If, as you said, I could calculate the probability that in a randomly selected match the player wins the 4th point, then that would be great, as it would give me a better overall picture of what is happening. I am actually not concerned with who wins a match, just the different possible ways of estimating the probability of winning any particular point. Here is my attempt at a more precise question:

How can I estimate the probability of player A winning any particular point in a match based on the previous points in the data set?

(I am not concerned with winning the match) :-)
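For concreteness, the 24/40 frequency estimate mentioned above can be computed directly from the data set. A minimal sketch in Python, with the match strings copied from the first post:

```python
# The 10 matches from the data set; each letter is the winner of one point.
matches = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
           "AAA", "AABA", "BAAA", "AABBB", "AAA"]

points = "".join(matches)                # all 40 points in order
p_hat = points.count("A") / len(points)  # frequency estimate of P(A wins a point)
print(p_hat)                             # 24/40 = 0.6
```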
 
I have calculated the mean for each point played based on all the previous points. I think this might be the place to start. I am reading that the posterior = (likelihood * prior)/marginal likelihood.

I want to use a Bayesian approach to estimating the likelihood of player A winning a match. I don't even know if that question is phrased correctly, but someone out there must be able to help me.

:-S
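For the simplest model (every point won by A with the same probability p, independently), the formula posterior = (likelihood × prior) / marginal likelihood has a closed form when the prior on p is a Beta distribution. A minimal sketch, assuming a uniform Beta(1, 1) prior and the 24-of-40 count from the data set:

```python
# Conjugate Beta-Binomial update: with a Beta(a, b) prior on p and
# k successes in n trials, the posterior is Beta(a + k, b + n - k).
a, b = 1.0, 1.0        # Beta(1, 1) = uniform prior on p
k, n = 24, 40          # A won 24 of the 40 points in the data set

a_post, b_post = a + k, b + (n - k)
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 25/42 ≈ 0.595, pulled slightly toward the prior mean 0.5
```

The point of the exercise: the data are not being "predicted"; they are used to update the prior into a posterior distribution over p, whose mean here lands just below the raw frequency 0.6.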
 
You already have data for the general case of A playing against B. A won 7 out of 10 matches. This gives the prior distribution. But if B wins the first point, what is the probability that he will win the match? B won the first point 4 times and, of those 4 times, he won the match twice. So given that B won the first point, the odds of B winning the match are 50/50. So you have data that lets you know the prior probabilities for the general case, and you also have Bayes' rule, which lets you look at special cases where you know something like "B won the first point".

Given that B won the second point, what is the probability that B will win the third point? B won the second point twice (in ABBAA and BBB), and in both of those matches he also won the third point. So, based on this limited data for the prior distribution, B always wins the third point if he won the second point.
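The conditional counts described in this post can be tallied straight from the data set. A sketch, with the match strings copied from the first post ("B wins the match" is encoded as B reaching 3 points, and every match has at least 3 points, so indexing the second and third points is safe):

```python
matches = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
           "AAA", "AABA", "BAAA", "AABBB", "AAA"]

# P(B wins the match | B won the first point)
first_b = [m for m in matches if m[0] == "B"]
match_wins = sum(1 for m in first_b if m.count("B") == 3)  # first to 3 points
print(len(first_b), match_wins)   # 4 such matches; B won 2 of them (50/50)

# P(B wins the third point | B won the second point)
second_b = [m for m in matches if m[1] == "B"]
third_b = sum(1 for m in second_b if m[2] == "B")
print(len(second_b), third_b)     # 2 such matches; B won the third point in both
```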
 
taylrl3 said:
I want to use a Bayesian approach to estimating the likelihood of player A winning a match. I don't even know if that question is phrased correctly, but someone out there must be able to help me.

To use Bayesian probability theory or any other type of mathematics requires knowing or assuming sufficient "given" information. In most real life situations, the bare facts are not sufficient "given" information to do mathematics. Hence, you must make assumptions. Rather than speaking of making assumptions, some people prefer to say that they are "making approximations" or "using a model", but it amounts to the same thing.

A typical approach in applying probability is to assume a model. For your problem you could assume that there is an algorithm that simulates a match, so think of the model as a computer simulation. The algorithm has parameters in it that represent probabilities. You don't know the values of these parameters. To fit the model, you use the data to estimate the parameters.

Roughly speaking, a non-Bayesian approach to your problem would be to estimate each parameter as a single value, run the simulation many times, and see what fraction of the time player A wins. (If the algorithm for the simulation is simple, you might be able to compute the probability that A wins by some formula instead of using the Monte-Carlo method.)

A Bayesian approach would be to assume that Nature (or the AB Players Association or whatever) works by selecting the parameters of the model from some "prior distribution(s)". So instead of estimating each parameter as a single value from the data, your task is to estimate the parameters of the prior distribution for the parameter. After that is done, thinking in terms of Monte-Carlo'ing the answer, you would run many batches of simulation. Each batch would consist of picking specific values of the parameters of the model from the prior distribution and then, holding those parameters constant, running many simulations of matches. The data from all the batches is used to estimate the probability that A wins a match.
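The batched Monte Carlo procedure described above can be sketched for the simple one-parameter model. The Beta(25, 17) used here as the distribution over p is an assumption for illustration (a Beta(1, 1) prior updated with A's 24 wins in 40 points); it stands in for whatever prior distribution you actually fit:

```python
import random

def simulate_match(p, target=3):
    """Simulate one first-to-`target` match; return True if A wins."""
    a = b = 0
    while a < target and b < target:
        if random.random() < p:
            a += 1
        else:
            b += 1
    return a == target

random.seed(0)
n_batches, per_batch = 200, 50
wins = 0
for _ in range(n_batches):
    p = random.betavariate(25, 17)   # one draw of the model parameter per batch
    # hold p constant and run many simulated matches in this batch
    wins += sum(simulate_match(p) for _ in range(per_batch))

estimate = wins / (n_batches * per_batch)
print(estimate)  # estimate of P(A wins a match), averaged over the uncertainty in p
```

Note the structure matches the post: the outer loop draws the parameter from its distribution; the inner loop simulates matches with that parameter held fixed.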

Your previous posts show that you understand that Bayesian methods have something to do with conditional probability. However, non-Bayesian methods may also use conditional probability. It isn't the use of conditional probability that makes an approach Bayesian.

A simple model for matches is that on each turn, player A has the same probability p of winning the point.

A non-Bayesian approach would be to estimate p from the data.

A Bayesian approach would be to estimate a prior distribution for p. For example, you could assume p is selected as (1/2) + X, where X is uniformly distributed on the interval [-k, k]. Your task would be to estimate k from the data.
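One way to carry out that estimate is maximum marginal likelihood: for each candidate k, average the binomial likelihood of the data over p uniform on [1/2 - k, 1/2 + k], and keep the k that makes the observed data most probable. A rough grid sketch, assuming the simple identical-points model and the 24-of-40 count (the grid sizes are arbitrary choices):

```python
from math import comb

K_WINS, N_POINTS = 24, 40   # A won 24 of the 40 points in the data set

def marginal_likelihood(k, grid=400):
    """Average the binomial likelihood over p uniform on [1/2 - k, 1/2 + k]."""
    ps = [0.5 - k + 2 * k * (i + 0.5) / grid for i in range(grid)]
    c = comb(N_POINTS, K_WINS)
    return sum(c * p**K_WINS * (1 - p)**(N_POINTS - K_WINS) for p in ps) / grid

candidates = [i / 100 for i in range(1, 50)]   # k in (0, 0.5)
k_hat = max(candidates, key=marginal_likelihood)
print(k_hat)
```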

Both Bayesians and non-Bayesians can use more complicated models that involve conditional probabilities. For example, we can consider a model with three parameters. p0 = the probability that A wins the very first point of the match, p1 = the probability that A wins the point given that he has lost the previous point, p2 = the probability that A wins the point given that he has won the previous point. If you use the data to estimate a single value for each of these parameters, you are taking a non-Bayesian approach.
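The non-Bayesian (single-value) estimates of p0, p1 and p2 in the three-parameter model can be tallied directly from the data set. A sketch, with the match strings copied from the first post:

```python
matches = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
           "AAA", "AABA", "BAAA", "AABBB", "AAA"]

# counts[ctx] = [points A won, points played] for each context:
# "first" = first point of a match; "after_A"/"after_B" = previous point won by A/B.
counts = {"first": [0, 0], "after_A": [0, 0], "after_B": [0, 0]}
for m in matches:
    for i, pt in enumerate(m):
        ctx = "first" if i == 0 else ("after_A" if m[i - 1] == "A" else "after_B")
        counts[ctx][0] += (pt == "A")
        counts[ctx][1] += 1

p0 = counts["first"][0] / counts["first"][1]      # 6/10  = 0.6
p1 = counts["after_B"][0] / counts["after_B"][1]  # 6/13  ≈ 0.462
p2 = counts["after_A"][0] / counts["after_A"][1]  # 12/17 ≈ 0.706
print(p0, p1, p2)
```

On this particular data set p2 > p1, which hints at a positive correlation between consecutive points, though 40 points is far too little data to say anything with confidence.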
 