Comparing Kullback-Leibler divergence values

AI Thread Summary
The discussion focuses on evaluating the realism of two survival models in R by computing the Kullback-Leibler divergence between each model's simulated survival times and an observed dataset. The initial KLD results give `dat.s2` the lower divergence, suggesting a better fit, yet density plots show that `dat.s1` aligns more closely with the observed data. The discrepancy stems from the asymmetry of the KL divergence, where the order of the arguments changes both the value and the question being asked. Computing the KLD in the reverse direction could give results more aligned with the research objective.
nigels
I’m currently evaluating the "realism" of two survival models in R by comparing the Kullback-Leibler divergence between each model's simulated survival-time dataset (`dat.s1` and `dat.s2`) and a "true", observed survival-time dataset (`dat.obs`). Initially, the directed KLD values suggest that `dat.s2` is the better match to the observation:

> library(LaplacesDemon)
> KLD(dat.s1, dat.obs)$sum.KLD.py.px
[1] 1.17196
> KLD(dat.s2, dat.obs)$sum.KLD.py.px
[1] 0.8827712

However, when I visualize the densities of all three datasets, it seems quite clear that `dat.s1` (green) aligns more closely with the observation:

> plot(density(dat.obs), lwd=3, ylim=c(0,0.9))
> lines(density(dat.s1), col='green')
> lines(density(dat.s2), col='purple')

What is the cause behind this discrepancy? Am I applying KLD incorrectly due to some conceptual misunderstanding?
 

Attachments

  • KL_nonsense_sof.png (density plot of dat.obs, dat.s1 and dat.s2)
Keep in mind that the KL divergence is not symmetric: the two "orders" correspond to different objective functions (and to different research questions). The direction you're computing (that is, KL(Q||P), where Q is being fit to P) rewards matching the regions of high density, and the highest probability mass in your "worse fitting" model does in fact coincide with the highest probability mass in your target better than the "better fitting" model's does. There's a fairly good discussion related to the topic here:

https://stats.stackexchange.com/questions/188903/intuition-on-the-kullback-leibler-kl-divergence
and
http://timvieira.github.io/blog/post/2014/10/06/kl-divergence-as-an-objective-function/
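
To make the asymmetry concrete, here's a toy sketch in base R (made-up densities, not your data): p is a bimodal target and q is a unimodal approximation sitting on one of p's modes, so the two directed divergences come out very differently.

# Toy illustration of KL asymmetry (not the original data).
# p is a bimodal "target", q a unimodal approximation covering one mode of p.
# Both are evaluated on a common grid and normalised to sum to 1,
# so the discrete KL formula applies directly.
x <- seq(-6, 6, length.out = 1000)
p <- 0.5 * dnorm(x, -2, 0.5) + 0.5 * dnorm(x, 2, 0.5)  # bimodal target
q <- dnorm(x, 2, 0.7)                                   # covers only one mode
p <- p / sum(p)
q <- q / sum(q)

kl <- function(a, b) sum(a * log(a / b))  # discrete KL(a || b)

kl(q, p)  # KL(q || p): modest, q sits on one high-density region of p
kl(p, q)  # KL(p || q): large, q gives almost no mass to p's other mode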

The other direction may actually be closer to what you're interested in:
KLD(dat.obs, dat.s1)$sum.KLD.py.px
KLD(dat.obs, dat.s2)$sum.KLD.py.px
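
If it helps, here's a rough sketch of how I'd set that comparison up, assuming (as in your post) that dat.obs, dat.s1 and dat.s2 are plain numeric vectors of survival times: estimate the three densities on a common grid first, so both arguments of KLD() describe the same support, and then look at both orderings. The grid size of 512 and the shared range are just illustrative choices.

library(LaplacesDemon)

# Assumes dat.obs, dat.s1, dat.s2 are the numeric vectors from the post above.
# Evaluate all three kernel density estimates on one common grid.
rng   <- range(dat.obs, dat.s1, dat.s2)
d.obs <- density(dat.obs, from = rng[1], to = rng[2], n = 512)$y
d.s1  <- density(dat.s1,  from = rng[1], to = rng[2], n = 512)$y
d.s2  <- density(dat.s2,  from = rng[1], to = rng[2], n = 512)$y

# Directed divergence in both orders, for each simulated model
KLD(d.s1, d.obs)$sum.KLD.py.px
KLD(d.s2, d.obs)$sum.KLD.py.px
KLD(d.obs, d.s1)$sum.KLD.py.px  # reversed order, as above
KLD(d.obs, d.s2)$sum.KLD.py.px

If the two orderings rank the models differently, that in itself tells you the answer depends on which question you're asking, which is really the point above.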
 