How to Measure the Distance Between Two Distributions?

  • Thread starter: danik_ejik
  • Tags: Distributions
Summary
To measure the distance between two distributions, it's important to clarify the definition of "distance" being used. The difference between mean values is not a valid metric, as different distributions can share the same mean. A more appropriate measure is the Kolmogorov-Smirnov distance, which calculates the maximum difference between the cumulative distribution functions (CDFs) of the two distributions. Additionally, understanding the nature of the distributions and their dependencies is crucial for accurate measurement. Selecting the right distance metric is essential for meaningful comparison.
danik_ejik
Hello,
I have two distributions; how can I find the distance between them?

Would the difference between the mean values be the distance?
 
danik_ejik said:
Would the difference between the mean values be the distance?

Hey there.

I'm not exactly sure what you mean by distance.

If you are talking about expectation or variance, for example, you need to specify things like what distributions your RVs follow, whether they have any dependence on each other, and so on.

Like I said, try to be clearer about what you are trying to find out.
 
danik_ejik said:
Would the difference between the mean values be the distance?

Usually a "distance" measure is defined to satisfy the axioms of a metric, which the difference of means does not: distinct distributions can share the same mean, so their "distance" would be zero even though they are not the same distribution.
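To illustrate why the difference of means fails as a metric, here is a small sketch (the specific distributions are chosen just for illustration): a standard normal and a uniform on [-3, 3] both have mean 0, so their "distance" by means would be zero, even though they are clearly different distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two clearly different distributions that share the same mean (0):
# a standard normal and a uniform distribution on [-3, 3].
normal_sample = rng.normal(loc=0.0, scale=1.0, size=100_000)
uniform_sample = rng.uniform(low=-3.0, high=3.0, size=100_000)

# The difference of sample means is ~0, yet the distributions differ:
# their variances are 1 vs 3. So |mean1 - mean2| = 0 does not imply the
# distributions are identical, violating a basic axiom of a metric.
print(abs(normal_sample.mean() - uniform_sample.mean()))  # close to 0
print(normal_sample.var(), uniform_sample.var())          # ~1.0 vs ~3.0
```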

One popular distance measure is the Kolmogorov-Smirnov distance, which is the maximum absolute difference between the cumulative distribution functions (CDFs) of the two distributions.
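For two samples, the Kolmogorov-Smirnov statistic can be computed from the empirical CDFs. A minimal sketch (the function name `ks_statistic` and the test distributions are my own choices; `scipy.stats.ks_2samp` does the same job, with a p-value as well):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic:
    the supremum over t of |F_x(t) - F_y(t)|, where F_x and F_y
    are the empirical CDFs of the two samples."""
    x = np.sort(np.asarray(x))
    y = np.sort(np.asarray(y))
    # The sup of a difference of step functions is attained at a jump,
    # so it suffices to evaluate both empirical CDFs at every data point.
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / x.size
    cdf_y = np.searchsorted(y, grid, side="right") / y.size
    return np.max(np.abs(cdf_x - cdf_y))

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=5000)
b = rng.normal(0.5, 1.0, size=5000)
print(ks_statistic(a, a))  # 0.0 for identical samples
print(ks_statistic(a, b))  # roughly 0.2 for N(0,1) vs N(0.5,1)
```

For N(0, 1) versus N(0.5, 1) the true KS distance is max over t of |Φ(t) − Φ(t − 0.5)| = 2Φ(0.25) − 1 ≈ 0.197, which the sample statistic approximates.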
 