I’m currently evaluating the "realism" of two survival models in R by computing the Kullback-Leibler divergence between each model's simulated survival-time dataset (`dat.s1` and `dat.s2`) and a "true", observed survival-time dataset (`dat.obs`). The directed KLD values suggest that `dat.s2` is the better match to the observed data:

> library(LaplacesDemon)

> KLD(dat.s1, dat.obs)$sum.KLD.py.px

[1] 1.17196

> KLD(dat.s2, dat.obs)$sum.KLD.py.px

[1] 0.8827712

However, when I plot the densities of all three datasets, it seems quite clear that `dat.s1` (green) aligns better with the observed data:

> plot(density(dat.obs), lwd=3, ylim=c(0,0.9))

> lines(density(dat.s1), col='green')

> lines(density(dat.s2), col='purple')

What is the cause behind this discrepancy? Am I applying KLD incorrectly due to some conceptual misunderstanding?
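In case my raw call is the problem: would something like the following be the correct approach instead, where I first estimate both kernel densities on a shared grid and normalise them into probability vectors before taking the KL sum? (This is a self-contained sketch using placeholder exponential data, not my actual datasets.)

```r
# Sketch with placeholder data (dat.obs / dat.s1 here are simulated
# stand-ins for my real datasets): estimate both kernel densities on
# the SAME grid, normalise them into probability vectors, and compute
# KL(obs || s1) directly -- the quantity KLD()$sum.KLD.py.px reports.
set.seed(1)
dat.obs <- rexp(500, rate = 1.0)
dat.s1  <- rexp(500, rate = 1.2)

lo <- min(dat.obs, dat.s1)
hi <- max(dat.obs, dat.s1)
d.obs <- density(dat.obs, from = lo, to = hi, n = 512)$y
d.s1  <- density(dat.s1,  from = lo, to = hi, n = 512)$y

p <- d.obs / sum(d.obs)    # discretised observed density
q <- d.s1  / sum(d.s1)     # discretised simulated density

kl <- sum(p * log(p / q))  # KL(obs || s1) on the shared grid
kl
```

My understanding is that `KLD()` treats its two arguments as (unnormalised) probability vectors, element by element, so passing the raw survival times compares sorted-by-index values rather than distributions, which is why I suspect the discrepancy.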

**Physics Forums | Science Articles, Homework Help, Discussion**


# A Comparing Kullback-Leibler divergence values

