Comparing Kullback-Leibler divergence values

AI Thread Summary
The discussion focuses on evaluating the realism of two survival models in R by comparing their Kullback-Leibler divergence values with an observed dataset. Initial KLD results indicate that `dat.s2` has a lower divergence value, suggesting a better fit. However, visual density plots reveal that `dat.s1` aligns more closely with the observed data. This discrepancy may stem from a misunderstanding of KLD's non-commutative nature, where the order of datasets affects the outcome. Exploring the KLD in the reverse direction could provide insights more aligned with the research objectives.
nigels
I’m currently evaluating the "realism" of two survival models in R by comparing the Kullback-Leibler divergence between each model's simulated survival-time dataset (`dat.s1` and `dat.s2`) and a “true”, observed survival-time dataset (`dat.obs`). An initial directed KLD computation suggests that `dat.s2` is the better match to the observations:

> library(LaplacesDemon)
> KLD(dat.s1, dat.obs)$sum.KLD.py.px
[1] 1.17196
> KLD(dat.s2, dat.obs)$sum.KLD.py.px
[1] 0.8827712

However, when I visualize the densities of all three datasets, it seems quite clear that `dat.s1` (green) aligns more closely with the observed data:

> plot(density(dat.obs), lwd=3, ylim=c(0,0.9))
> lines(density(dat.s1), col='green')
> lines(density(dat.s2), col='purple')

What is the cause behind this discrepancy? Am I applying KLD incorrectly due to some conceptual misunderstanding?
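
For concreteness, here is a minimal stand-in for the three vectors (the real datasets aren't included here); the distributions below are invented and won't reproduce the numbers above, but they let the snippets run end to end:

> # Invented stand-in data: three positive survival-time vectors of equal
> # length, since KLD() appears to require px and py of the same length.
> set.seed(1)
> n <- 200
> dat.obs <- rweibull(n, shape = 1.5, scale = 2.0)  # "observed" times
> dat.s1  <- rweibull(n, shape = 1.4, scale = 2.2)  # simulated model 1
> dat.s2  <- rexp(n, rate = 0.5)                    # simulated model 2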

Attachments

  • KL_nonsense_sof.png (density plot of dat.obs, dat.s1, and dat.s2)
Keep in mind that the KL divergence is non-commutative, and the two "orders" correspond to different objective functions (and different research questions). The direction you're computing (that is, KL(Q||P), where Q is being fit to P) rewards matching the regions of high density, and it does appear that the highest probability mass in your "worse fitting" model coincides with the highest probability mass in your target better than the "better fitting" model's does. There's a fairly good discussion related to the topic here (the two definitions are also written out just after the links):

https://stats.stackexchange.com/questions/188903/intuition-on-the-kullback-leibler-kl-divergence
and
http://timvieira.github.io/blog/post/2014/10/06/kl-divergence-as-an-objective-function/
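
Written out for densities ##p## (target) and ##q## (model), the asymmetry is explicit:
$$D_{\mathrm{KL}}(P\,\|\,Q)=\int p(x)\,\log\frac{p(x)}{q(x)}\,dx \qquad\text{vs.}\qquad D_{\mathrm{KL}}(Q\,\|\,P)=\int q(x)\,\log\frac{q(x)}{p(x)}\,dx$$
The first expectation is taken under ##p##, so it blows up when ##q## puts little mass where ##p## puts a lot; the second is taken under ##q##, so it blows up when ##q## puts mass where ##p## has little.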

The other direction may actually be closer to what you're interested in:
KLD(dat.obs, dat.s1)$sum.KLD.py.px
KLD(dat.obs, dat.s2)$sum.KLD.py.px
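
For what it's worth, a single KLD() call should give you both directed sums at once, assuming the return list also carries the opposite direction under the name sum.KLD.px.py (mirroring the sum.KLD.py.px used above; worth confirming against ?KLD):

library(LaplacesDemon)

# One call per simulated dataset; both directed sums come back in the list.
# The component name sum.KLD.px.py is assumed by analogy with sum.KLD.py.px.
kl1 <- KLD(dat.obs, dat.s1)
kl2 <- KLD(dat.obs, dat.s2)

c(s1 = kl1$sum.KLD.px.py, s2 = kl2$sum.KLD.px.py)  # one direction
c(s1 = kl1$sum.KLD.py.px, s2 = kl2$sum.KLD.py.px)  # the other direction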