I am wondering whether Kernel Density Estimation (KDE) is appropriate for some data analysis I'm working on. I have a simulated process that produces a large number N of pieces of debris, and I want to know how these objects are distributed spatially; in other words, I'd like to estimate a density function for the number of debris pieces per unit volume.

I can treat the debris pieces as non-interacting, so I think it's safe to treat each piece's location at any time as independent of all the others. If it were also true that all the pieces were identically distributed, then I believe it would be valid to treat their positions at a time t as independent trials of one random variable and to apply the techniques of KDE to estimate a density function at each time t.

My concern is that the debris pieces are possibly (probably?) not identically distributed: they have varying physical characteristics that will most likely affect how they are distributed, e.g. heavy fragments might follow different trajectories than lighter ones. If that is the case, is it still valid to apply KDE to a single debris sample? Would I, in principle, have to use a different kernel function for different fragments?

In fact, I will have multiple samples (the output of a Monte Carlo simulation), so I believe I can combine the results from different MC realizations using techniques suitable for i.i.d. data, but I'm still not sure how to treat the results from each individual run. Different MC runs typically produce different numbers of debris pieces, so there is no way to identify the same fragment from run to run or to treat its positions as multiple trials of the same variable.

Thanks for any suggestions.
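To make the question concrete, here is a minimal sketch of what I have in mind, using `scipy.stats.gaussian_kde` on one hypothetical "run": two fragment classes (stand-ins for heavy and light pieces) drawn from different spatial distributions and then pooled into a single sample. The class sizes, locations, and scales are made up for illustration; the real data would come from the simulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical stand-in for one MC run: two fragment classes with
# different spatial distributions (e.g. heavy vs. light debris).
heavy = rng.normal(loc=[0.0, 0.0, 0.0], scale=0.5, size=(300, 3))
light = rng.normal(loc=[5.0, 0.0, 0.0], scale=2.0, size=(700, 3))
positions = np.vstack([heavy, light])  # shape (N, 3), N = 1000

# gaussian_kde expects data with shape (d, N)
kde = gaussian_kde(positions.T)

# Evaluate the estimated density at a point near the "heavy" cluster
# and at a point far from all debris.
dense_point = kde([[0.0], [0.0], [0.0]])[0]
far_point = kde([[50.0], [0.0], [0.0]])[0]
```

My (possibly naive) understanding is that pooling like this estimates the *mixture* density, here roughly 0.3·f_heavy(x) + 0.7·f_light(x), with a single bandwidth shared by both classes, and that is exactly where my doubt lies about whether one kernel is enough when the classes have very different spreads.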