Kernel density estimate in polar coordinates.

davcrai
Hi,
I have a data set containing values for power and direction, and I would like to produce a probability density estimate. The data can have multiple sources, so I want to use a nonparametric method. I work in Python, which has a method for kernel density estimation (KDE) that I think should be suitable. However, the method currently does not allow the data to be weighted, so I can only use the directions. It also does not allow polar coordinates, so bins near the ends of the distribution do not include all relevant values (i.e. bins centered close to zero degrees should include points close to 360 degrees). The result is a curve that is discontinuous across zero degrees. Does anyone know where I might find an implementation of KDE (in any language) that allows polar coordinates? I might write one in Python, but I would like to try it out somewhere first to make sure it suits what I need. Alternatively, if there are any better suggestions on how to estimate the distribution, I would be very interested.
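As a minimal sketch of one way to handle both issues at once (the weighting and the wrap-around), here is a weighted KDE built directly from a von Mises (circular) kernel using only NumPy. The function name vonmises_kde, the kappa smoothing parameter, and the example data are illustrative rather than taken from any particular library:

```python
import numpy as np

def vonmises_kde(theta_data, weights, theta_grid, kappa=20.0):
    """Weighted kernel density estimate on the circle using a von Mises kernel.

    theta_data : sample directions in radians
    weights    : non-negative weights (e.g. the power values)
    theta_grid : angles (radians) at which to evaluate the density
    kappa      : concentration of the kernel (larger = narrower kernel)
    """
    theta_data = np.asarray(theta_data, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # pairwise angular differences, shape (n_grid, n_data)
    diff = theta_grid[:, None] - theta_data[None, :]
    # von Mises kernel, normalised so each kernel integrates to 1 over the circle
    kernel = np.exp(kappa * np.cos(diff)) / (2.0 * np.pi * np.i0(kappa))
    # weighted average of the kernels; the result integrates to 1 over [0, 2*pi)
    return kernel @ weights / weights.sum()

# illustrative data: directions in degrees, power as weights
directions_deg = np.array([355.0, 2.0, 10.0, 170.0, 185.0])
power = np.array([3.0, 5.0, 2.0, 1.0, 4.0])

grid = np.linspace(0.0, 2.0 * np.pi, 361)
density = vonmises_kde(np.deg2rad(directions_deg), power, grid)
```

Because the cosine in the kernel is periodic, the estimate near 0 degrees automatically includes contributions from points near 360 degrees, so there is no discontinuity at zero.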
 
davcrai said:
Does anyone know where I might find an implementation of KDE (in any language) that allows polar coordinates?

You might get better advice on particular software if you ask in the computer technology sections of physicsforums.

Alternatively, if there are any better suggestions on how to estimate the distribution, I would be very interested.

You need to describe the problem. You've hinted that you want to represent "power" and "direction". Do you want a kernel density method that gives the joint distribution for "power" and "direction"?

I don't know how determined you are to implement something that is precisely kernel density estimation. If you are using "bins", it sounds like you are doing a numerical approximation of some kind. The intuitive way to think of kernel density estimation is that each sample of observed data is "smeared out" into a density function that represents other samples that "might well have also happened". One way to implement this idea on a circle is to use a kernel that has "finite support" - i.e. it is only non-zero on an interval of finite length - and to wrap that interval around the circle. For example, suppose you use a kernel whose support is 180 deg wide, centered at the observed direction. If the sample value is 10 degrees, the kernel extends to -80 deg = 280 deg on the left and to 100 degrees on the right.
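To make that concrete, here is a small sketch assuming a 180-degree-wide Epanechnikov kernel (the name circular_epanechnikov_kde and the half_width parameter are illustrative). The angular differences are wrapped into [-180, 180) degrees before the kernel is applied, so a kernel centered near 10 degrees does reach samples near 280 degrees:

```python
import numpy as np

def circular_epanechnikov_kde(theta_data, theta_grid, half_width=np.pi / 2):
    """KDE on the circle with a finite-support (Epanechnikov) kernel.

    half_width is half the kernel support in radians; pi/2 (90 deg) gives
    the 180-degree-wide support described above.
    """
    theta_data = np.asarray(theta_data, dtype=float)
    # wrap pairwise angular differences onto [-pi, pi)
    diff = theta_grid[:, None] - theta_data[None, :]
    diff = (diff + np.pi) % (2.0 * np.pi) - np.pi
    u = diff / half_width
    # Epanechnikov kernel: 0.75 * (1 - u^2) for |u| <= 1, zero elsewhere
    kernel = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0) / half_width
    # unweighted average of the kernels; integrates to 1 over the circle
    return kernel.mean(axis=1)

# illustrative usage: samples near 0/360 degrees contribute to both sides
grid = np.linspace(0.0, 2.0 * np.pi, 361)
samples = np.deg2rad([350.0, 5.0, 10.0, 180.0])
density = circular_epanechnikov_kde(samples, grid, half_width=np.deg2rad(90.0))
```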

I notice there are papers written about kernel density estimation with "finite support" and also about "multivariate kernel density estimation". I don't know if any of that theory has made it into commonly available software.
 