A simple Bayesian example

  1. Jan 23, 2014 #1
    Hi,

    I am trying to learn something about Bayesian Analysis by doing an example.

    I have a series of 10 matches played between A and B, where each match is the first to 3 points. With an example data set that looks like this:

    ABBAA
    BAAA
    AABBA
    BBB
    BABB
    AAA
    AABA
    BAAA
    AABBB
    AAA

    I would like to calculate the probability that A wins any given point, based on this data set and using a Bayesian approach.

    I have read some information on Bayesian analysis and I believe that in order to start I need something called a prior probability distribution (please correct me at any point if I am wrong). I was thinking I would construct this distribution from the mean number of points won by A across all 40 points played, along with the standard deviation.

    Next I need to calculate the likelihood function, and there is also something called a posterior distribution. To be honest I am starting to get a little lost at this point, and I am slightly confused: I already have my data, and this seems to be trying to predict what I already have.

    Help?
     
  3. Jan 23, 2014 #2

    Stephen Tashi

    Science Advisor

    Begin by phrasing your questions precisely. Are you asking how to answer questions like "What is the probability that in a randomly selected match, player A wins the 4th point?". Or are you asking questions like "What is the probability that in a randomly selected match, player A wins the match by winning the 4th point?".
     
  4. Jan 23, 2014 #3
    Ok, apologies for being vague. I am only just finding my way in this.

    What I had done initially was simply to take the total number of points that A won and divide by the total number of points played (p = 24/40 = 0.6). I realised that this isn't really that great, and I would like to take things much deeper. If, as you said, I could calculate the probability that in a randomly selected match the player wins the 4th point, that would be great, as it would give me a better overall picture of what is happening. I am actually not concerned with who wins a match, just the different possible ways of estimating the probability of winning any particular point. Here is my attempt at a more precise question:

    How can I estimate the probability of player A winning any particular point in a match based on the previous points in the data set?

    (I am not concerned with winning the match) :-)
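
    For reference, a quick Python check of that 24/40 figure, with the data typed in from my first post (assuming I have copied it correctly):

    Code:
    data = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
            "AAA", "AABA", "BAAA", "AABBB", "AAA"]

    a_points = sum(m.count("A") for m in data)   # 24 points won by A
    total_points = sum(len(m) for m in data)     # 40 points played in all
    print(a_points / total_points)               # 0.6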
     
  5. Jan 23, 2014 #4
    I have calculated the mean for each point played based on all the previous points. I think this might be the place to start. I am reading that the posterior = (likelihood * prior)/marginal likelihood.

    I want to use a Bayesian approach to estimating the likelihood of player A winning a match. I don't even know if that question is phrased correctly, but someone out there must be able to help me

    :-S
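
    In case it helps, here is my rough understanding of how that formula would work if every point were an independent coin flip with unknown bias p and I put a uniform prior on p (I may well be getting this wrong):

    Code:
    # posterior = (likelihood * prior) / marginal likelihood
    # With a uniform Beta(1, 1) prior on p and a binomial likelihood for
    # 24 A-points out of 40, the posterior is Beta(1 + 24, 1 + 16) = Beta(25, 17).
    a, b = 1 + 24, 1 + 16
    print(a / (a + b))   # posterior mean of p = 25/42, about 0.595
    # scipy.stats.beta(a, b).interval(0.95) would give a 95% credible interval.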
     
  6. Jan 26, 2014 #5

    FactChecker

    Science Advisor
    Gold Member

    You already have data for the general case of A playing against B. A won 7 of the 10 matches; this gives the prior distribution. But if B wins the first point, what is the probability that he will win the match? B won the first point in 4 matches and, of those 4, he won the match twice. So given that B won the first point, the chances of B winning the match are 50/50. So you have data that gives you the prior probabilities for the general case, and you also have Bayes' rule, which lets you look at special cases where you know something extra, like "B won the first point".

    Given that B won the second point, what is the probability that B will win the third point? B won the second point twice, and both times he also won the third point. So, based on this very limited data, B will always win the third point if he won the second point.
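
    Those counts come straight from the data in your first post; if you want to verify them, something like this short Python snippet will reproduce them (the match winner is whoever takes the final point, since play stops at 3):

    Code:
    data = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
            "AAA", "AABA", "BAAA", "AABBB", "AAA"]

    # Matches where B won the first point, and how often B went on to win the match.
    b_first = [m for m in data if m[0] == "B"]
    b_first_won = [m for m in b_first if m[-1] == "B"]
    print(len(b_first), len(b_first_won))        # 4 and 2  ->  50/50

    # Matches where B won the second point, and how often B also won the third.
    b_second = [m for m in data if m[1] == "B"]
    b_second_third = [m for m in b_second if m[2] == "B"]
    print(len(b_second), len(b_second_third))    # 2 and 2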
     
    Last edited: Jan 26, 2014
  7. Jan 26, 2014 #6

    Stephen Tashi

    Science Advisor

    To use Bayesian probability theory or any other type of mathematics requires knowing or assuming sufficient "given" information. In most real life situations, the bare facts are not sufficient "given" information to do mathematics. Hence, you must make assumptions. Rather than speaking of making assumptions, some people prefer to say that they are "making approximations" or "using a model", but it amounts to the same thing.

    A typical approach in applying probability is to assume a model. For your problem you could assume that there is an algorithm that simulates a match, so think of the model as a computer simulation. The algorithm has parameters in it that represent probabilities. You don't know the values of these parameters. To fit the model, you use the data to estimate the parameters.

    Roughly speaking, a non-Bayesian approach to your problem would be to estimate each parameter as a single value, run the simulation many times, and see what fraction of the time player A wins. (If the algorithm for the simulation is simple, you might be able to compute the probability that A wins by some formula instead of using the Monte-Carlo method.)
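
    To make that concrete, here is a rough sketch of the non-Bayesian Monte-Carlo route, assuming the simplest model (described further down) in which A wins every point independently with the same probability p:

    Code:
    import random

    def a_wins_match(p, target=3):
        # Simulate one first-to-3 match in which A wins each point with probability p.
        a = b = 0
        while a < target and b < target:
            if random.random() < p:
                a += 1
            else:
                b += 1
        return a == target

    p_hat = 24 / 40              # single point estimate of p from the data
    n = 100_000
    wins = sum(a_wins_match(p_hat) for _ in range(n))
    print(wins / n)              # Monte-Carlo estimate of P(A wins a match), about 0.68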

    A Bayesian approach would be to assume that Nature (or the AB Players Association or whatever) works by selecting the parameters of the model from some "prior distribution(s)". So instead of estimating each parameter as a single value from the data, your task is to estimate the parameters of the prior distribution for the parameter. After that is done, thinking in terms of Monte-Carlo'ing the answer, you would run many batches of simulation. Each batch would consist of picking specific values of the parameters of the model from the prior distribution and then, holding those parameters constant, running many simulations of matches. The data from all the batches is used to estimate the probability that A wins a match.

    Your previous posts show that you understand that Bayesian methods have something to do with conditional probability. However, non-Bayesian methods may also use conditional probability. It isn't the use of conditional probability that makes an approach Bayesian.

    A simple model for matches is that on each turn, player A has the same probability p of winning the point.

    A non-Bayesian approach would be to estimate p from the data.

    A Bayesian approach would be to estimate a prior distribution for p. For example, you could assume p is selected as (1/2) + X, where X is uniformly distributed on the interval [-k, k]. Your task would then be to estimate k from the data.
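
    A rough Monte-Carlo sketch of the batch scheme described above, under that prior; the value k = 0.2 is only an illustrative placeholder, since estimating k from the data would be your actual task:

    Code:
    import random

    def a_wins_match(p, target=3):
        # One first-to-3 match in which A wins each point with probability p.
        a = b = 0
        while a < target and b < target:
            if random.random() < p:
                a += 1
            else:
                b += 1
        return a == target

    k = 0.2                                   # illustrative only, not estimated from data
    batches, matches_per_batch = 2_000, 100
    wins = 0
    for _ in range(batches):
        p = 0.5 + random.uniform(-k, k)       # draw the model parameter from the prior
        wins += sum(a_wins_match(p) for _ in range(matches_per_batch))
    print(wins / (batches * matches_per_batch))   # P(A wins a match) under this prior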

    Both Bayesians and non-Bayesians can use more complicated models that involve conditional probabilities. For example, we can consider a model with three parameters. p0 = the probability that A wins the very first point of the match, p1 = the probability that A wins the point given that he has lost the previous point, p2 = the probability that A wins the point given that he has won the previous point. If you use the data to estimate a single value for each of these parameters, you are taking a non-Bayesian approach.
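
    For that three-parameter model, the single-value (non-Bayesian) estimates of p0, p1 and p2 can be read straight off your data, for example:

    Code:
    data = ["ABBAA", "BAAA", "AABBA", "BBB", "BABB",
            "AAA", "AABA", "BAAA", "AABBB", "AAA"]

    # p0: A wins the very first point of a match.
    p0 = sum(m[0] == "A" for m in data) / len(data)

    # p1: A wins a point after losing the previous one.
    # p2: A wins a point after winning the previous one.
    after_loss = [cur for m in data for prev, cur in zip(m, m[1:]) if prev == "B"]
    after_win  = [cur for m in data for prev, cur in zip(m, m[1:]) if prev == "A"]
    p1 = after_loss.count("A") / len(after_loss)
    p2 = after_win.count("A") / len(after_win)

    print(p0, p1, p2)   # 0.6, 6/13 (about 0.46), 12/17 (about 0.71)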
     