How to practically use a kernel density for a smoothing application

In summary, John is seeking help with using the Epanechnikov density kernel to smooth speed data he recorded every second during a car journey. He has a document with instructions and is asking what the indices i and j refer to, and how the weight for each time interval is calculated. The weight is the Epanechnikov kernel value for the current time interval divided by the sum of the kernel values over the window from 3 seconds behind to 3 seconds ahead.
  • #1
bradyj7
Hi there,

I'm trying to use the Epanechnikov density kernel to smooth out data that I have collected, but I'm not sure exactly how to use it.

I've recorded speed data from a car every second for a journey and I'm trying to smooth the data as it contains some noise.

I have a document that explains exactly how to do it; it is available at http://dl.dropbox.com/u/54057365/All/smoothing%20velocity%20data%20paper.pdf

Bear in mind the data is recorded every second.

Does this mean, at the bottom of the second page, that i = 1?

What does j equal? Is it the length of the entire journey in seconds? Or something else?

On the third page:

When computing the weight associated with the speed at a particular time interval, is it calculated by dividing the Epanechnikov kernel K(z_ij) for the current time interval by the sum of the kernel values for the 3 seconds behind and the 3 seconds ahead (7 samples in total)?

Thank you for your help.

John
 
  • #2
Hi John,

Thank you for reaching out for help with using the Epanechnikov density kernel to smooth out your speed data. I understand that you have collected speed data from a car every second for a journey and that you are looking to reduce some noise in the data.

Based on the document you provided, i = 1 refers to the first time interval in your data, which corresponds to the first second of the journey. The index j then runs over the intervals, up to the length of the journey in seconds, as you suspected.

On the third page of the document, the weight associated with the speed at a particular time interval is indeed calculated by dividing the Epanechnikov kernel K(z_ij) for the current time interval by the sum of the kernel values for the 3 seconds behind and the 3 seconds ahead (7 samples in total). Dividing by that sum normalises the weights so they add up to 1, while the shape of the kernel gives the most weight to the current interval and progressively less to the neighbouring ones.

I hope this helps clarify how to use the Epanechnikov density kernel for smoothing your speed data. If you have any further questions or need additional assistance, please don't hesitate to ask. Best of luck with your research!
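In case it is useful, here is a minimal Python sketch of that computation. It is only an illustration of the scheme described above: the function names, the test data, and the bandwidth choice (4 s, so the kernel falls to zero just outside the ±3 s window) are my own assumptions, not taken from the paper.

import numpy as np

def epanechnikov(z):
    # K(z) = 0.75 * (1 - z^2) for |z| <= 1, and 0 otherwise
    z = np.asarray(z, dtype=float)
    return np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z**2), 0.0)

def smooth_speed(speed, half_window=3, bandwidth=4.0):
    # Kernel-weighted moving average of a 1 Hz speed series.
    # For each second i, the weight on neighbour j is K((i - j) / bandwidth)
    # divided by the sum of K over the +/-3 s window, so the weights
    # add up to 1 and nearer samples count more.
    speed = np.asarray(speed, dtype=float)
    n = speed.size
    smoothed = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        offsets = np.arange(lo, hi) - i
        weights = epanechnikov(offsets / bandwidth)
        weights /= weights.sum()
        smoothed[i] = np.dot(weights, speed[lo:hi])
    return smoothed

# Example: a noisy 60-second speed trace
rng = np.random.default_rng(0)
t = np.arange(60)
noisy = 15.0 + 5.0 * np.sin(t / 10.0) + rng.normal(0.0, 1.5, t.size)
print(np.round(smooth_speed(noisy)[:5], 2))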
 

FAQ: How to practically use a kernel density for a smoothing application

What is a Kernel Density?

A Kernel Density is a statistical method used to estimate the probability density function of a random variable. It is non-parametric: it represents the distribution of the data without assuming a specific parametric form.
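As a concrete (and hedged) illustration, here is a minimal Python sketch of a kernel density estimate using the Epanechnikov kernel; the sample data, grid, and bandwidth are made up for the example:

import numpy as np

def epanechnikov_kde(samples, grid, bandwidth):
    # f_hat(x) = (1 / (n * h)) * sum_i K((x - x_i) / h),
    # with K(z) = 0.75 * (1 - z^2) for |z| <= 1, and 0 otherwise
    samples = np.asarray(samples, dtype=float)
    grid = np.asarray(grid, dtype=float)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z**2), 0.0)
    return k.sum(axis=1) / (samples.size * bandwidth)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)    # 500 samples from a standard normal
grid = np.linspace(-3.0, 3.0, 7)    # points at which to estimate the density
print(np.round(epanechnikov_kde(data, grid, bandwidth=0.5), 3))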

How is Kernel Density used for smoothing applications?

Kernel Density is used for smoothing applications by creating a smoothed representation of the data, which can then be used for analysis or visualization. It is commonly used in image processing, spatial analysis, and machine learning.

How do you choose the appropriate kernel for a smoothing application?

The appropriate kernel for a smoothing application depends on the type of data and the desired level of smoothing. Commonly used kernels include Gaussian, Epanechnikov, and Uniform kernels. It is important to experiment with different kernels to find the best fit for the data.
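To make the comparison concrete, here is a small Python sketch (evaluation points chosen arbitrarily) that evaluates the three kernels side by side; note that the Gaussian never quite reaches zero, while the Epanechnikov and Uniform kernels vanish outside |z| <= 1:

import numpy as np

z = np.linspace(-1.5, 1.5, 7)   # scaled distances z = (x - x_i) / h

gaussian = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)    # infinite support
epanechnikov = np.where(np.abs(z) <= 1.0, 0.75 * (1.0 - z**2), 0.0)
uniform = np.where(np.abs(z) <= 1.0, 0.5, 0.0)           # "boxcar" weighting

for row in zip(z, gaussian, epanechnikov, uniform):
    print("z=%+.2f  gaussian=%.3f  epanechnikov=%.3f  uniform=%.3f" % row)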

Can Kernel Density be used for all types of data?

Kernel Density can be used for most types of data, including continuous and discrete variables. However, it is important to ensure that the data is appropriate for kernel density estimation and that the chosen kernel is suitable for the data.

What are the limitations of using Kernel Density for smoothing applications?

One limitation of using Kernel Density for smoothing applications is that it can be computationally expensive for large datasets. Additionally, the results can be sensitive to the choice of kernel and the bandwidth parameter, which may require some trial and error to find the optimal values.
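As a rough, self-contained illustration of that sensitivity (synthetic data, arbitrary bandwidth values), smoothing the same noisy series with three bandwidths shows the trade-off: a small bandwidth leaves the noise in, while a large one blurs genuine features:

import numpy as np

def epan_smooth(y, bandwidth):
    # Epanechnikov-weighted moving average; the kernel vanishes at
    # offsets of `bandwidth` samples or more, so the effective window
    # widens with the bandwidth.
    y = np.asarray(y, dtype=float)
    n = y.size
    half = int(np.ceil(bandwidth)) - 1  # widest offset with nonzero weight
    out = np.empty(n)
    for i in range(n):
        j = np.arange(max(0, i - half), min(n, i + half + 1))
        w = 0.75 * (1.0 - ((j - i) / bandwidth) ** 2)
        out[i] = np.dot(w / w.sum(), y[j])
    return out

rng = np.random.default_rng(1)
t = np.arange(200)
y = np.sin(t / 15.0) + rng.normal(0.0, 0.3, t.size)

# Residual noise shrinks as the bandwidth grows, at the cost of detail.
for h in (2.0, 5.0, 20.0):
    print("bandwidth=%5.1f  residual std=%.3f" % (h, np.std(y - epan_smooth(y, h))))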
