
Calculating Mutual Information

  1. Jun 27, 2011 #1
    Hi all,

    I had a question about calculating mutual information, a topic to which I am very new. Consider the following hypothetical situation, in which I describe the entire process from what I understand:

    Suppose I had a normal die (sides numbered 1-6) that I roll on a leaning table. I draw a line on the table so that the line divides the higher and lower halves of the table. Now, suppose I wanted to calculate the probability that I would roll a 5 on the upper half of the table. So, I begin by drawing a table like this:

             1     2     3     4     5     6
    High
    Low

    Then I perform 100 trials and record how many times each number was rolled on the higher or lower half of the table:

             1     2     3     4     5     6
    High     2     5     7    10     1     8
    Low     11     6     8    12    20    10

    To calculate the joint probability distribution P(roll, position), I would simply divide by the number of times the die was rolled (100 times) to produce the following table:

              1      2      3      4      5      6
    High   0.02   0.05   0.07   0.10   0.01   0.08
    Low    0.11   0.06   0.08   0.12   0.20   0.10
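
    In code, this step might look like the following (a minimal Python/NumPy sketch of what I did; the variable names are just mine, for illustration):

    [code]
    import numpy as np

    # Observed counts from the 100 rolls (rows: High/Low, columns: faces 1-6).
    counts = np.array([[ 2,  5,  7, 10,  1,  8],
                       [11,  6,  8, 12, 20, 10]])

    # Plug-in joint distribution: divide each cell by the total number of rolls.
    joint = counts / counts.sum()
    print(joint)
    [/code]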

    To calculate the marginal probabilities, I sum the rows and columns:
    P(roll) = (sum(col 1), sum(col 2), ..., sum(col 6)) = (0.13, 0.11, 0.15, 0.22, 0.21, 0.18)
    P(position) = (sum(row 1), sum(row 2)) = (0.33, 0.67)
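
    As a sketch of the same step in code (again assuming NumPy; restating the joint table so this runs on its own), the marginals are just row and column sums:

    [code]
    import numpy as np

    joint = np.array([[0.02, 0.05, 0.07, 0.10, 0.01, 0.08],
                      [0.11, 0.06, 0.08, 0.12, 0.20, 0.10]])

    p_roll = joint.sum(axis=0)      # column sums -> (0.13, 0.11, 0.15, 0.22, 0.21, 0.18)
    p_position = joint.sum(axis=1)  # row sums    -> (0.33, 0.67)
    [/code]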

    Lastly, I calculate the mutual information of the data set (for an article on mutual information, see http://en.wikipedia.org/wiki/Mutual_information) using the following formula, using a base-2 logarithm to obtain an answer in units of bits:

    [itex]MI(\text{roll}; \text{position}) = \sum_{\text{roll}} \sum_{\text{position}} P(\text{roll}, \text{position}) \log_2 \frac{P(\text{roll}, \text{position})}{P(\text{roll}) \, P(\text{position})}[/itex]

    The number I get is 0.1206 bits. I suspect I did something wrong somewhere along the way, since this number seems suspiciously small, but I cannot find my mistake. Any suggestions/corrections would be very much appreciated. Thanks in advance!
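
    For reference, here is the full calculation as I did it, as a Python/NumPy sketch (the zero-cell mask is just a safety guard against log(0); no cell is actually zero here):

    [code]
    import numpy as np

    # Plug-in joint distribution estimated from the 100 rolls.
    joint = np.array([[0.02, 0.05, 0.07, 0.10, 0.01, 0.08],
                      [0.11, 0.06, 0.08, 0.12, 0.20, 0.10]])

    # Marginals: P(roll) from column sums, P(position) from row sums.
    p_roll = joint.sum(axis=0)
    p_position = joint.sum(axis=1)

    # P(roll)P(position) for every cell, as a 2x6 table.
    indep = np.outer(p_position, p_roll)

    # MI = sum over cells of p * log2(p / (P(roll)P(position))),
    # skipping zero-probability cells to avoid log(0).
    mask = joint > 0
    mi = np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))
    print(mi)  # prints roughly 0.12 bits
    [/code]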
     
  3. Jun 27, 2011 #2

    Stephen Tashi

    Science Advisor

    The most fundamental mistake you are making is to confuse the idea of "data from a sample" with the idea of "parameters of a probability distribution" from which a sample was selected.

    The formula in the wikipedia article on mutual information is a formula that would be applied to a probability distribution. What you did is the natural way one would estimate the mutual information from a sample: you used the observed frequencies of the dice results as their probabilities. When you contemplate your results, you have to think this way:

    I'm using a sample to estimate the true probabilities. The frequencies in the sample are unlikely to exactly match the true probabilities. What sort of distribution of errors between the estimated mutual information and the true mutual information should I expect?

    Sometimes the best way to estimate things from a sample is to use a formula that does not look like the corresponding formula involving the parameters of the probability distribution. I don't know the best estimation formula for mutual information.

    One way you can try to answer this is to take samples from various probability distributions using a computer simulation and see how the estimated mutual information varies from the truth when various ways of estimating it are used.
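
    For example, a rough simulation sketch in Python with NumPy (the function and variable names are mine, purely for illustration). Here the face and the High/Low outcome are simulated independently, so the true mutual information is exactly 0 bits, and yet the plug-in estimates come out positive on average:

    [code]
    import numpy as np

    rng = np.random.default_rng(0)

    def plugin_mi(counts):
        # Plug-in MI estimate (in bits) from a 2-D contingency table:
        # use observed frequencies in place of the true probabilities.
        joint = counts / counts.sum()
        indep = np.outer(joint.sum(axis=1), joint.sum(axis=0))
        mask = joint > 0
        return np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))

    # 1000 experiments of 100 rolls each, with the face (1-6) and the
    # High/Low side drawn independently and uniformly.
    estimates = []
    for _ in range(1000):
        faces = rng.integers(0, 6, size=100)   # 0..5 stand in for faces 1..6
        sides = rng.integers(0, 2, size=100)   # 0 = High, 1 = Low
        counts = np.zeros((2, 6))
        np.add.at(counts, (sides, faces), 1)
        estimates.append(plugin_mi(counts))

    # The average estimate is well above the true value of 0 bits.
    print(np.mean(estimates))
    [/code]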

    You say that your calculation produced a number that is "suspiciously small", but if this were a problem from a textbook, one would expect the mutual information to be zero, because in a typical textbook setting the number the die shows would be independent of where it landed on the table. Why do you think the number is too small? Is the thrower attempting to manipulate how the die lands, and do you expect him to be more successful on one half of the table than the other?
     
    Last edited: Jun 27, 2011
  4. Jun 27, 2011 #3
    Thanks for the reply, Stephen. I did suspect that the die number and the position on the table were unrelated, but wasn't 100% sure. So, in a perfect world/after an infinite number of trials, the mutual information would have actually approached 0 bits, correct?

    I am actually working with large amounts of data in a project in which I believe two of the random variables involved are highly related, yet I was getting even smaller numbers than in this little example. So I suspected that perhaps my calculations were incorrect; but, if I understand you correctly, my calculations were indeed correct, but my interpretation of the results was wrong.

    Interesting side note: could we tell whether the die is loaded, based on how much mutual information we obtain?
     
  5. Jun 27, 2011 #4

    Stephen Tashi

    Science Advisor

    That would depend on how you define a perfect world!

    You didn't understand me. What would it mean for your calculations to be "correct"? You seem to think there is only one "correct" way to estimate something from data. That isn't true. The definition of mutual information (which is given in terms of probabilities) certainly suggests that we can try estimating mutual information by plugging in observed frequencies in place of probabilities. But it isn't clear this is the "correct" way.

    This link says that for small samples, the calculation you are using will, on the average, overestimate mutual information:

    http://robotics.stanford.edu/~gal/Research/Redundancy-Reduction/Neuron_suppl/node2.html

    My guess is that for large samples, your method is OK, but I haven't tried to prove that.
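
    If I remember the standard result correctly, the first-order bias for an R-by-C table and N samples is roughly (R-1)(C-1)/(2N) nats, a Miller-Madow type correction; I'd check that against the link above before relying on it. As a back-of-the-envelope sketch in Python, converted to bits:

    [code]
    import numpy as np

    # First-order bias of the plug-in MI estimate, assuming the
    # (R-1)(C-1)/(2N) nats rule of thumb; divide by ln 2 for bits.
    R, C, N = 2, 6, 100
    bias_bits = (R - 1) * (C - 1) / (2 * N * np.log(2))
    print(bias_bits)  # about 0.036 bits for the 2x6 table with 100 rolls
    [/code]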
     