# Probability distribution comparison

pisaster
I am trying to compare two probability distributions. I tried the chi-square test, but that requires binned data, and all I have are probabilities. It seems to work if I ignore the rules about degrees of freedom and just use df = 1, but I doubt that is statistically valid. I also tried to 'unnormalize' my probabilities to approximate bins, but that is not working either. Are there any tests meant for comparing normalized probability functions?

## Answers and Replies

Science Advisor, Homework Helper
What is the goal that you are trying to accomplish?

pisaster
I am doing biased molecular dynamics simulations. Since the method for unbiasing the data gives a probability curve, that is what I have to compare rather than binned data. I am trying to show statistically that the probability curve from a new method looks like the probability curve from the older, more computationally expensive method. Visual comparison supports this hypothesis, but I would like to have some mathematical way of showing this so that I can make a plot of simulation time versus some measure of similarity to the expected probability. I can do this using relative (but wrong) chi square values, but I would rather have something which I can justify mathematically.

Science Advisor, Homework Helper
You could divide each distribution's domain into "bins" (frequency ranges) and use the test that way. A number of non-parametric tests can also be used, e.g. the runs test. See this thread.

P.S. From that earlier thread:
Originally posted by EnumaElish
There are several non-parametric tests for assessing whether 2 samples are from the same distribution. For example, the "runs" test. Suppose the two samples are $u_1<...<u_n$ and $v_1<...<v_n$. Suppose you "mix" the samples. If the resulting mix looks something like $u_1< v_1 < u_2 < u_3 < u_4 < v_2 < v_3 <$ ... $< u_{n-1} < v_{n-1} < v_n < u_n$ then the chances that they are from the same distribution is greater than if they looked like $u_1<...<u_n<v_1<...<v_n$. The latter example has a smaller number of runs (only two: first all u's then all v's) than the former (at least seven runs: one u, one v, u's, v's, ..., u's, v's, one u). This and similar tests are usually described in standard probability textbooks like Mood, Graybill and Boes.

pisaster
Perhaps I should clarify: I have a series of 160 points along my reaction coordinate and a probability value for each. I tried, in essence, to reverse the normalization to create an approximation of binned data, but the probability curve was produced from over 500,000 data points. When I run the chi-square test on the unnormalized bins, the result is a value in the tens of thousands, which with 159 degrees of freedom gives a probability of about 0 that the bins match the expected ones, for a distribution that actually looks quite like the expected one. I know chi-square is sensitive to the binning of the data, but because of the way the unbiasing method works, I cannot get a very large number of bins. Is there any test that is specific to comparing probabilities, or is there some way to use chi-square on probabilities without unnormalizing?
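For concreteness, the scaling problem described above can be reproduced with made-up numbers (the curves, noise level, and counts below are illustrative, not the actual simulation data):

```python
# With ~500,000 underlying points, even small relative differences
# between two normalized curves blow the chi-square statistic far
# past the df = 159 scale, driving the p-value to ~0.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
n_bins, n_points = 160, 500_000

expected_p = rng.dirichlet(np.ones(n_bins))               # "old method" curve
observed_p = expected_p * (1 + 0.05 * rng.normal(size=n_bins))
observed_p /= observed_p.sum()                            # renormalize

# "Unnormalize": convert probabilities back to approximate counts.
obs_counts = observed_p * n_points
exp_counts = expected_p * n_points

stat, pval = chisquare(obs_counts, exp_counts)            # df = 159
print(f"chi-square = {stat:.0f}, p = {pval:.3g}")
```

Here the two curves differ by only a few percent per bin, yet the statistic is far above 159 and the p-value is essentially 0, which is exactly the behavior described above: the statistic scales with the total count, not with how similar the curves look.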

Science Advisor, Homework Helper
Am I right to think that you have two "continuous" distributions that you have simulated, and you'd like to prove that they are identical?

pisaster
Yes; or, more specifically, I would like to show how close to identical they are.

Science Advisor, Homework Helper
Ideas:

1. You could define two variables X(t) = value of the "true" distribution (expensive simulation) at point t and Y(t) = value of the alternative distribution (practical simulation) at point t. Then run the regression Y(t) = a + b X(t) for as many t's as you can (or like), and show that the joint hypothesis "(a = 0) AND (b = 1)" cannot be rejected.
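A sketch of idea 1 on made-up curves (the data, noise level, and sample size here are hypothetical), using an F-test of the joint restriction a = 0, b = 1 against the unrestricted least-squares fit:

```python
# Regress Y(t) on X(t) and F-test the joint restriction a = 0, b = 1.
# Failing to reject supports the claim that the cheap curve tracks
# the expensive one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.sort(rng.random(160))            # "true" curve values X(t)
y = x + 0.01 * rng.normal(size=160)     # cheap-method curve Y(t)

# Unrestricted fit y = a + b*x by ordinary least squares.
X = np.column_stack([np.ones_like(x), x])
coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
rss_u = float(rss[0])                   # unrestricted residual sum of squares

# Restricted model imposes a = 0 and b = 1, i.e. y = x.
rss_r = float(np.sum((y - x) ** 2))

q, dof = 2, len(x) - 2                  # 2 restrictions, n - 2 residual df
F = ((rss_r - rss_u) / q) / (rss_u / dof)
p = stats.f.sf(F, q, dof)
print(f"a = {coef[0]:.4f}, b = {coef[1]:.4f}, F = {F:.3f}, p = {p:.3f}")
```

A large p-value means the data give no reason to reject (a = 0, b = 1); note that "failing to reject" is the supporting evidence here, not a significant test statistic.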

2. Plot X(t) and Y(t) on the same graph. Select a lower bound T0 and an upper bound T1. Let's assume X(T0) = Y(T0) and X(T1) = Y(T1), i.e. both T0 and T1 are crossing points. Divide the interval [T0,T1] into arbitrary subintervals {s(1),...,s(N)}. Define a string variable z(i) = "x" if the integral of X(t) - Y(t) over subinterval s(i) is > 0, and z(i) = "y" otherwise. You'll end up with a string like xxxyyyxyxyx... whose length is N. Now apply the RUNS TEST that I described above.
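A sketch of idea 2 with hypothetical curves (the Gaussian shape, noise level, and N = 40 subintervals are all made up for illustration), using the standard normal approximation for the runs-test null distribution:

```python
# Build the x/y sign string from subinterval integrals of X - Y,
# then apply the Wald-Wolfowitz runs test (normal approximation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 1601)
dt = t[1] - t[0]
X = np.exp(-((t - 0.5) ** 2) / 0.02)     # expensive-method curve
Y = X + 0.02 * rng.normal(size=t.size)   # cheap-method curve

# Integrate X - Y over N subintervals (simple Riemann sum) and
# record the sign of each integral as "x" or "y".
N = 40
z = []
for s in np.array_split(np.arange(t.size), N):
    area = ((X - Y)[s]).sum() * dt
    z.append("x" if area > 0 else "y")

runs = 1 + sum(a != b for a, b in zip(z, z[1:]))
n1, n2 = z.count("x"), z.count("y")

# Null mean and variance of the run count for n1 "x"s and n2 "y"s.
mu = 2 * n1 * n2 / (n1 + n2) + 1
var = 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
zstat = (runs - mu) / np.sqrt(var)
pval = 2 * stats.norm.sf(abs(zstat))
print(f"runs = {runs}, z = {zstat:.2f}, p = {pval:.3f}")
```

If Y only wiggles randomly around X, the signs alternate often and the run count stays near its null mean; a systematic bias over part of the domain produces long same-sign stretches and a small p-value.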

I may post again if I can think of anything else.

pisaster
thank you

Science Advisor, Homework Helper
N.B. The "runs test" addresses the directionality of the error e(t) = X(t) - Y(t); the regression addresses the magnitude of the errors. Technically, the regression minimizes the sum of $e(t)^2 = [X(t) - Y(t)]^2$ over all t in the sample. Ideally one should apply both techniques, to cover the directionality as well as the magnitude of the errors.
