Probability distribution comparison

  • Thread starter pisaster
  • #1

Main Question or Discussion Point

I am trying to compare two probability distributions. I tried the chi-square test, but that requires binned data, and all I have are probability values. It seems to work if I ignore the rules about degrees of freedom and just use df = 1, but I doubt this is statistically valid. I also tried to 'unnormalize' my probabilities to approximate bin counts, but that is not working either. Are there any tests meant to compare normalized probability functions?
 

Answers and Replies

  • #2
EnumaElish
Science Advisor
Homework Helper
What is the goal that you are trying to accomplish?
 
  • #3
pisaster
I am doing biased molecular dynamics simulations. Since the method for unbiasing the data yields a probability curve, that is what I have to compare, rather than binned data. I am trying to show statistically that the probability curve from a new method matches the probability curve from the older, more computationally expensive method. Visual comparison supports this hypothesis, but I would like a mathematical way of showing it, so that I can plot simulation time against some measure of similarity to the expected probability. I can do this using relative (but invalid) chi-square values, but I would rather have something I can justify mathematically.
 
  • #4
EnumaElish
You could divide each distribution's domain into "bins" (ranges of the variable) and use the test that way. A number of non-parametric tests can also be used, e.g. the runs test. See this thread.

P.S. From that earlier thread:
Originally posted by EnumaElish
There are several non-parametric tests for assessing whether 2 samples are from the same distribution. For example, the "runs" test. Suppose the two samples are [itex]u_1<...<u_n[/itex] and [itex]v_1<...<v_n[/itex]. Suppose you "mix" the samples. If the resulting mix looks something like [itex]u_1< v_1 < u_2 < u_3 < u_4 < v_2 < v_3 <[/itex] ... [itex] < u_{n-1} < v_{n-1} < v_n < u_n[/itex] then the chances that they are from the same distribution is greater than if they looked like [itex]u_1<...<u_n<v_1<...<v_n[/itex]. The latter example has a smaller number of runs (only two: first all u's then all v's) than the former (at least seven runs: one u, one v, u's, v's, ..., u's, v's, one u). This and similar tests are usually described in standard probability textbooks like Mood, Graybill and Boes.
 
  • #5
pisaster
Perhaps I should clarify: I have a series of 160 points along my reaction coordinate and a probability value for each. I tried, in essence, to reverse the normalization to create an approximation of binned data, but the probability curve was produced from over 500,000 data points. When I run the chi-square test on the unnormalized bins, the result is a statistic in the tens of thousands, which with 159 degrees of freedom gives a probability of about 0 that the bins match the expected values, for a distribution which actually looks quite like the expected one. I know chi-square is sensitive to the binning of the data, but because of the way the unbiasing method works, I cannot get a very large number of bins. Is there any test that is specific to comparing probabilities, or is there some way to use chi-square on probabilities without unnormalizing?
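A quick numerical sketch of why the statistic blows up: the chi-square statistic scales linearly with the assumed number of counts, so scaling fixed probabilities up to ~500,000 pseudo-counts drives the p-value to zero even for small relative deviations. The 160-bin distributions below are synthetic stand-ins, not the actual simulation data:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_bins = 160
p_expected = np.full(n_bins, 1.0 / n_bins)
# A curve that deviates from the expected one by roughly 5% per bin
p_observed = p_expected * (1 + rng.normal(0, 0.05, n_bins))
p_observed /= p_observed.sum()

results = {}
for n in (1000, 500_000):
    obs = p_observed * n   # "un-normalized" pseudo-counts
    exp = p_expected * n
    results[n] = ((obs - exp) ** 2 / exp).sum()
    print(f"n={n}: chi2={results[n]:.1f}, p={chi2.sf(results[n], df=n_bins - 1):.3g}")
```

The same pair of curves passes comfortably at n = 1,000 and fails catastrophically at n = 500,000: the statistic is exactly 500 times larger, even though the shapes never changed.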
 
  • #6
EnumaElish
Am I right to think that you have two "continuous" distributions that you have simulated, and you'd like to prove that they are identical?
 
  • #7
pisaster
Yes; or more specifically, I would like to show how close to identical they are.
 
  • #8
EnumaElish
Ideas:

1. You could define two variables, X(t) = value of the "true" distribution (expensive simulation) at point t, and Y(t) = value of the alternative distribution (practical simulation) at point t. Then run the regression Y(t) = a + b X(t) for as many t's as you can (or like), and show that the joint hypothesis "(a = 0) AND (b = 1)" cannot be rejected (e.g., with an F-test).

2. Plot X(t) and Y(t) on the same graph. Select a lower bound T0 and an upper bound T1. Let's assume X(T0) = Y(T0) and X(T1) = Y(T1), i.e. both T0 and T1 are crossing points. Divide the interval [T0,T1] into arbitrary subintervals {s(1),...,s(N)}. Define string variable z(i) = "x" if the integral of X(t) - Y(t) > 0 over subinterval s(i); z(i) = "y" otherwise. You'll end up with a string like xxxyyyxyxyx... whose length = N. Now apply the RUNS TEST that I described above.
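Idea 1 can be sketched without any specialized package: fit the regression by ordinary least squares, then test the joint restriction (a = 0, b = 1) with a standard F-test comparing restricted and unrestricted residual sums of squares. The two curves below are synthetic stand-ins for the expensive and cheap simulations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = np.linspace(-2, 2, 160)
X = np.exp(-t ** 2)                   # hypothetical "true" curve (expensive method)
Y = X + rng.normal(0, 0.01, len(t))   # hypothetical curve from the cheap method

# Unrestricted OLS fit of Y = a + b*X
A = np.column_stack([np.ones_like(X), X])
(a, b), *_ = np.linalg.lstsq(A, Y, rcond=None)
rss_u = np.sum((Y - A @ np.array([a, b])) ** 2)

# Restricted model imposes a = 0 and b = 1, i.e. Y = X
rss_r = np.sum((Y - X) ** 2)

# F-test of the joint restriction (q = 2 restrictions, n - 2 residual dof)
n, q = len(Y), 2
F = ((rss_r - rss_u) / q) / (rss_u / (n - 2))
p = stats.f.sf(F, q, n - 2)
print(f"a={a:.4f}, b={b:.4f}, F={F:.3f}, p={p:.3f}")
```

A large p-value here means the data give no reason to reject a = 0 and b = 1, i.e. the cheap curve is statistically indistinguishable from the expensive one at the fitted noise level.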

I may post again if I can think of anything else.
 
  • #9
pisaster
thank you :smile:
 
  • #10
EnumaElish
N.B. The "runs test" addresses the directionality of the error e(t) = X(t) - Y(t); the regression addresses the magnitude of the errors. Technically, the regression minimizes the sum of e(t)^2 = sum of [X(t) - Y(t)]^2 over all t in the sample. Ideally one should apply both techniques to cover the directionality as well as the magnitude of the errors.
 
