KL divergence on different domains

In summary, the conversation is about measuring the distance between two distributions with the Kullback-Leibler divergence when the distributions appear to be defined on different domains. The original poster asks whether the KL divergence can still be used or whether another method is needed. The reply points out that it is the supports of the distributions that differ and suggests defining the domains to be the same.
  • #1
flatlinez
Hello,


I'm trying to compare the distance between two distributions that I got from a kernel smoothing density estimate (ksdensity in MATLAB). I was thinking of using the Kullback-Leibler divergence, but I realized that the domains of my distributions are different (see attached).
Can I find a way to use the KL divergence, or do I need to find another method?

Thank you
 

Attachments

  • 2ep4yab.jpg (13.2 KB)
  • #2
flatlinez said:
Hello,
I'm trying to compare the distance between two distributions

What is the goal of doing this comparison? If you measure a "distance" between the distributions, what are you comparing that distance to?

the domains of my distributions are different

It is the "supports" of the distributions that are different. You can define the "domain" to be the same: the domain of a probability distribution can include values where the density is zero.
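Here is a minimal MATLAB sketch of that suggestion, assuming the two samples are stored in vectors x1 and x2 (the variable names, grid size, and epsilon are illustrative choices, not from the thread): evaluate both ksdensity estimates on one shared grid so the densities are defined on the same domain, then compute the divergence from the discretized values.

Matlab:
% Evaluate both density estimates on one common grid, then compute
% KL(P||Q) there. x1 and x2 are assumed to hold the two samples.
gridPts = linspace(min([x1(:); x2(:)]), max([x1(:); x2(:)]), 1000);

p = ksdensity(x1, gridPts);   % density of sample 1 on the common grid
q = ksdensity(x2, gridPts);   % density of sample 2 on the common grid

p = p / sum(p);               % renormalize the discretized densities
q = q / sum(q);               % (uniform grid, so plain sums suffice)

% Terms with p == 0 contribute nothing; the small epsilon in q avoids
% log(0) where q vanishes but p does not (strictly, KL is infinite there).
mask = p > 0;
kl = sum(p(mask) .* log(p(mask) ./ (q(mask) + 1e-12)));

Note that if one density is genuinely zero over a region where the other is positive, the true divergence is infinite; the epsilon only papers over that, and a symmetric alternative such as the Jensen-Shannon divergence may be a better fit in that case.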
 

1. What is KL divergence?

KL divergence, also known as Kullback-Leibler divergence, is a measure of the difference between two probability distributions. It is commonly used in statistics and information theory to quantify how one distribution differs from another.

2. How is KL divergence calculated?

For discrete distributions, KL divergence is calculated by summing, over every event, the probability of the event under the first distribution multiplied by the logarithm of the ratio of the two distributions' probabilities for that event. The result is non-negative, and a lower value indicates a smaller difference between the distributions.
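In symbols, for discrete distributions P and Q over the same domain (with the convention 0·log 0 = 0),

$$D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},$$

and for continuous densities the sum becomes an integral, $$D_{\mathrm{KL}}(p \,\|\, q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx.$$ The divergence is infinite if Q assigns zero probability to any event where P does not.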

3. What is the significance of KL divergence on different domains?

Strictly speaking, KL divergence is only defined for two distributions over the same domain. When two estimated densities appear to have different domains, as in this thread, the usual fix is to extend both to a common domain, treating each density as zero outside its own support. The divergence then quantifies how much information is lost when one distribution is used to approximate the other, and it is widely used this way in machine learning and natural language processing applications.

4. How is KL divergence used in machine learning?

In machine learning, KL divergence can be used as a loss function to measure the difference between a predicted distribution and the true distribution. It is also commonly used in generative models, where the goal is to minimize the KL divergence between the distribution of the generated samples and that of the real data.
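As a small illustration (all numbers here are made up), with a one-hot target the KL loss reduces to the familiar cross-entropy term:

Matlab:
% KL divergence as a loss between a softmax prediction and a one-hot
% target; the values are illustrative only.
logits = [2.0, 0.5, -1.0];
pred   = exp(logits) / sum(exp(logits));   % softmax -> predicted distribution
truth  = [1, 0, 0];                        % one-hot "true" distribution

mask = truth > 0;                          % 0*log(0) terms contribute nothing
loss = sum(truth(mask) .* log(truth(mask) ./ pred(mask)));
% Here loss == -log(pred(1)), the cross-entropy of the true class.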

5. Can KL divergence be negative?

No, KL divergence is always non-negative. Individual terms of the sum can be negative (the log-ratio is negative wherever Q(x) > P(x)), but by Jensen's inequality the sum as a whole cannot drop below zero; this result is known as Gibbs' inequality. The divergence is exactly zero if and only if the two distributions are identical.
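The standard one-line proof applies Jensen's inequality to the concave logarithm:

$$D_{\mathrm{KL}}(P \,\|\, Q) = -\sum_x P(x) \log \frac{Q(x)}{P(x)} \;\geq\; -\log \sum_x P(x) \frac{Q(x)}{P(x)} = -\log \sum_x Q(x) \;\geq\; 0,$$

where the sums run over the support of P, so the final sum is at most 1.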
