
f95toli


- TL;DR Summary
- What is the "best" real-time algorithm for frequency estimation if the goal is to minimise the number of samples?

I have a practical question about frequency estimation of a noisy sine.

In some of my experiments I need to estimate the frequency of a noisy damped sine.

Currently I just use uniform sampling (sampling at times t=n*T, making sure to exceed the Nyquist criterion) followed by an FFT.
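For context, the current approach can be sketched as below. All parameter values (a 2 MHz tone, 30 us decay, 50 MS/s rate, noise level) are made up purely for illustration; the point is that the FFT peak location is quantised to the bin spacing fs/N, which is why long records or many averages are needed.

```python
import numpy as np

# Hypothetical parameters: a 2 MHz damped sine sampled at 50 MS/s,
# decaying with a 30 us time constant, plus additive white noise.
rng = np.random.default_rng(0)
fs = 50e6                       # sample rate (well above Nyquist for 2 MHz)
t = np.arange(0, 100e-6, 1/fs)  # 100 us record
f0, tau = 2e6, 30e-6            # "true" frequency and decay time (assumed)
x = np.exp(-t/tau) * np.sin(2*np.pi*f0*t) + 0.5*rng.standard_normal(t.size)

# FFT and pick the peak bin; resolution is limited to fs/N = 10 kHz here.
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, 1/fs)
f_est = freqs[np.argmax(X[1:]) + 1]  # skip the DC bin
```

With these numbers the estimate lands within one 10 kHz bin of the true frequency; a finer estimate requires either zero-padding/interpolation around the peak or a model-based fit.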

In the experiment I apply an excitation at time t0 and then wait for a time t before sampling for a short time (<<t). By repeating this thousands of times and gradually sweeping t, I can reconstruct the waveform (which typically lasts for about 100 us before it decays into the noise floor).

Now, since the signal is noisy, I need to average each sample for a very long time, making this very time consuming.

My question is whether there is a better way of doing this. There are three things that, naively, suggest the measurement could be sped up:

- There is no reason why I need to sample uniformly. Would non-uniform sampling help? I believe the answer is yes, but how do I choose the sample times?

- I am doing this in "real" time; if needed, I can decide where to sample next based on existing data.

- I can be quite confident that the signal is dominated by a single frequency, i.e. I just need to extract a single value.

Are there any methods which utilise one or more of these "advantages"?
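On the third point, one established way to exploit the known single-frequency model is to fit a damped sinusoid directly by nonlinear least squares, seeded from a coarse FFT estimate. A minimal sketch, again with made-up parameter values (2 MHz, 30 us decay), using `scipy.optimize.curve_fit`:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, a, f, tau, phi):
    """Single damped sinusoid: a * exp(-t/tau) * sin(2*pi*f*t + phi)."""
    return a * np.exp(-t / tau) * np.sin(2 * np.pi * f * t + phi)

# Synthetic record at illustrative (hypothetical) values.
rng = np.random.default_rng(1)
fs = 50e6
t = np.arange(0, 100e-6, 1 / fs)
y = damped_sine(t, 1.0, 2e6, 30e-6, 0.3) + 0.3 * rng.standard_normal(t.size)

# Coarse seed from the FFT peak, then refine all four parameters at once.
spec = np.abs(np.fft.rfft(y))
f_seed = np.fft.rfftfreq(t.size, 1 / fs)[np.argmax(spec[1:]) + 1]
popt, pcov = curve_fit(damped_sine, t, y, p0=[1.0, f_seed, 50e-6, 0.0])
f_est = popt[1]
```

The fit resolves the frequency well below the FFT bin spacing, and nothing in the model requires the sample times `t` to be uniform, so the same fit would work on non-uniformly sampled data.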

I assumed this would be a common problem, but I haven't had much luck when searching the literature. The best lead so far is:

https://www.sciencedirect.com/science/article/abs/pii/S1051200408001577

However, this is a new(ish) algorithm which I would need to implement myself. Are there no established methods?

Ideally, I would like to find something that someone else has already implemented and tested.
