Probability distribution comparison

  1. Sep 14, 2005 #1
    I am trying to compare two probability distributions. I tried the chi-square test, but that requires binned data, and all I have are probability values. It seems to work if I ignore the rules about degrees of freedom and just use df = 1, but I doubt this is statistically valid. I tried to "unnormalize" my probabilities to approximate bins, but that is not working either. Are there any tests meant to compare normalized probability functions?
     
  2. Sep 14, 2005 #2

    EnumaElish

    Science Advisor
    Homework Helper

    What is the goal that you are trying to accomplish?
     
  3. Sep 15, 2005 #3
    I am doing biased molecular dynamics simulations. Since the method for unbiasing the data gives a probability curve, that is what I have to compare rather than binned data. I am trying to show statistically that the probability curve from a new method looks like the probability curve from the older, more computationally expensive method. Visual comparison supports this hypothesis, but I would like a mathematical way of showing it, so that I can make a plot of simulation time versus some measure of similarity to the expected probability. I can do this using relative (but wrong) chi-square values, but I would rather have something I can justify mathematically.
     
  4. Sep 15, 2005 #4

    EnumaElish

    Science Advisor
    Homework Helper

    You could divide each distribution's domain into "bins" (ranges of the underlying variable) and use the test that way. A number of non-parametric tests can also be used, e.g. the runs test. See this thread.

    P.S. From that earlier thread:
    Originally posted by EnumaElish
    There are several non-parametric tests for assessing whether 2 samples are from the same distribution. For example, the "runs" test. Suppose the two samples are [itex]u_1<...<u_n[/itex] and [itex]v_1<...<v_n[/itex]. Suppose you "mix" the samples. If the resulting mix looks something like [itex]u_1< v_1 < u_2 < u_3 < u_4 < v_2 < v_3 <[/itex] ... [itex] < u_{n-1} < v_{n-1} < v_n < u_n[/itex] then the chance that they are from the same distribution is greater than if they looked like [itex]u_1<...<u_n<v_1<...<v_n[/itex]. The latter example has a smaller number of runs (only two: first all u's, then all v's) than the former (at least seven runs: one u, one v, u's, v's, ..., u's, v's, one u). This and similar tests are usually described in standard probability textbooks like Mood, Graybill and Boes.
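    A minimal sketch of this runs test in Python, using the usual normal approximation for the run count (the two samples below are made-up illustrations):

```python
# Two-sample (Wald-Wolfowitz) runs test: pool and sort both samples,
# count runs of consecutive labels, and compare the count with its
# distribution under the hypothesis of a common parent distribution.
import numpy as np
from scipy.stats import norm

def runs_test(u, v):
    """Return (number of runs, z-statistic, two-sided p-value)."""
    # Tag each observation with its source, then sort the pooled sample.
    pooled = sorted([(x, 'u') for x in u] + [(x, 'v') for x in v])
    labels = [lab for _, lab in pooled]
    # A new run starts wherever the source label changes.
    runs = 1 + sum(a != b for a, b in zip(labels, labels[1:]))
    n1, n2 = len(u), len(v)
    # Mean and variance of the run count under the null hypothesis.
    mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1.0)))
    z = (runs - mu) / np.sqrt(var)
    # Too few runs (z well below 0) argues against a common distribution.
    return runs, z, 2.0 * norm.sf(abs(z))

rng = np.random.default_rng(0)
print(runs_test(rng.normal(size=200), rng.normal(size=200)))
```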
     
    Last edited: Sep 15, 2005
  5. Sep 15, 2005 #5
    Perhaps I should clarify: I have a series of 160 points along my reaction coordinate and a probability value for each. I tried, in essence, to reverse the normalization to create an approximation of binned data, but the probability curve was produced from over 500,000 data points. When I run the chi-square test on the unnormalized bins, the result is a value in the tens of thousands, which with 159 degrees of freedom gives a probability of about 0 that the bins match the expected ones, even though the distribution actually looks quite like the expected one. I know chi-square is sensitive to the binning of data, but because of the way the unbiasing method works, I cannot get a very large number of bins. Is there any test that is specific to comparing probabilities, or is there some way to use chi-square on probabilities without unnormalizing?
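    In code, that procedure looks roughly like the sketch below; the curves and the 500,000-point sample size are placeholders, chosen to reproduce the behavior described above (a huge statistic and p ≈ 0 even for two visually similar curves):

```python
# "Un-normalize" two probability curves into approximate bin counts and
# run a chi-square test on them. All data here are placeholders.
import numpy as np
from scipy.stats import chisquare

n_samples = 500_000                          # data points behind the curve
coord = np.linspace(0.0, 1.0, 160)           # 160-point reaction coordinate
p_new = np.exp(-(coord - 0.5) ** 2 / 0.020)  # curve from the new method
p_old = np.exp(-(coord - 0.5) ** 2 / 0.021)  # curve from the expensive method
p_new /= p_new.sum()                         # normalize on the grid
p_old /= p_old.sum()

observed = p_new * n_samples                 # approximate counts per bin
expected = p_old * n_samples
# Default ddof gives 160 - 1 = 159 degrees of freedom, as in the post.
stat, p_value = chisquare(observed, f_exp=expected)
# With n_samples this large, even a slight mismatch between the curves
# yields an enormous statistic and p ~ 0.
print(stat, p_value)
```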
     
  6. Sep 15, 2005 #6

    EnumaElish

    Science Advisor
    Homework Helper

    Am I right to think that you have two "continuous" distributions that you have simulated, and you'd like to prove that they are identical?
     
  7. Sep 16, 2005 #7
    Yes, or more specifically, I would like to show how close to identical they are.
     
  8. Sep 16, 2005 #8

    EnumaElish

    Science Advisor
    Homework Helper

    Ideas:

    1. You could make two variables X(t) = value of the "true" distribution (expensive simulation) at point t and Y(t) = value of the alternative dist. (practical simulation) at point t. Then run the regression Y(t) = a + b X(t) for as many t's as you can (or like), and show that the joint hypothesis "(a = 0) AND (b = 1)" cannot be rejected, i.e. that the data are consistent with a = 0 and b = 1. (A sketch of both ideas follows at the end of this post.)

    2. Plot X(t) and Y(t) on the same graph. Select a lower bound T0 and an upper bound T1. Let's assume X(T0) = Y(T0) and X(T1) = Y(T1), i.e. both T0 and T1 are crossing points. Divide the interval [T0,T1] into arbitrary subintervals {s(1),...,s(N)}. Define a string variable z(i) = "x" if the integral of X(t) - Y(t) over subinterval s(i) is > 0, and z(i) = "y" otherwise. You'll end up with a string like xxxyyyxyxyx... of length N. Now apply the RUNS TEST that I described above.

    I may post again if I can think of anything else.
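    Here is a rough Python sketch of both ideas; the curves, grid, and subinterval count are placeholders, and statsmodels supplies the joint F-test:

```python
# Idea 1: regress Y on X and test the joint hypothesis a = 0, b = 1.
# Idea 2: label subintervals by the sign of the integral of X - Y and
# apply the runs test to the resulting label string.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 160)                # common grid for both curves
X = np.exp(-(t - 0.5) ** 2 / 0.02)            # "true" curve (expensive method)
Y = X + rng.normal(0.0, 0.01, t.size)         # "practical" curve, small error

# Idea 1: failing to reject (a = 0, b = 1) supports Y tracking X.
ols = sm.OLS(Y, sm.add_constant(X)).fit()
print(ols.f_test("const = 0, x1 = 1"))

# Idea 2: build the x/y string over N subintervals of [T0, T1].
N = 20
labels = []
for t_i, d_i in zip(np.array_split(t, N), np.array_split(X - Y, N)):
    labels.append('x' if np.trapz(d_i, t_i) > 0 else 'y')
runs = 1 + sum(a != b for a, b in zip(labels, labels[1:]))
n1, n2 = labels.count('x'), labels.count('y')
mu = 2.0 * n1 * n2 / (n1 + n2) + 1.0
var = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
       / ((n1 + n2) ** 2 * (n1 + n2 - 1.0)))
z = (runs - mu) / np.sqrt(var)
print(runs, z, 2.0 * norm.sf(abs(z)))   # too few runs => systematic bias
```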
     
  9. Sep 16, 2005 #9
    thank you :smile:
     
  10. Sep 17, 2005 #10

    EnumaElish

    Science Advisor
    Homework Helper

    N.B. The "runs test" addresses the directionality of the error e(t) = X(t) - Y(t); the regression addresses the magnitude of the errors. Technically, the regression minimizes the sum of [itex]e(t)^2 = [X(t) - Y(t)]^2[/itex] over all t in the sample. Ideally one should apply both techniques to cover the directionality as well as the magnitude of the errors.
     
    Last edited: Sep 17, 2005