I am trying to estimate the amplitude of a real signal at a particular frequency with unknown phase. The signal is sampled at a rate much higher than the Nyquist rate for the signal. For simplicity, I take an FFT period that is an integer multiple of the signal period (which, conveniently, is itself a multiple of the sample period).

In the absence of noise, estimating the amplitude is straightforward: I take the absolute value of the FFT and look for a peak in the spectrum, then add the positive- and negative-frequency amplitudes, which are equal for a real signal.
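For concreteness, the noiseless procedure looks like this in numpy (the tone parameters are made up for illustration; coherent sampling means the tone lands exactly on one FFT bin):

```python
import numpy as np

# Illustrative parameters (not from the post): N/fs is an exact number of
# signal periods, so the tone falls exactly on one FFT bin (no leakage).
fs, f0, N, A = 1000.0, 50.0, 1000, 1.7
t = np.arange(N) / fs
x = A * np.cos(2.0 * np.pi * f0 * t + 0.3)    # arbitrary phase

X = np.fft.fft(x)
k = int(round(f0 * N / fs))                   # bin holding the tone
# The positive- and negative-frequency bins each hold magnitude A*N/2;
# summing them and normalizing by N recovers the amplitude exactly.
a_est = (np.abs(X[k]) + np.abs(X[-k])) / N
print(a_est)
```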

But if there is noise, taking the absolute value biases the measurement: the bin magnitude follows a Rice distribution. Noise in the real signal perturbs both the real and imaginary parts of each FFT bin, but the absolute value always returns a positive amplitude. When the noise is larger than the signal, the expected measurement is considerably larger than the true signal amplitude.

My question is: how do I remove the bias from the amplitude estimate if I know the noise distribution (Poissonian, but very close to Gaussian)? When the signal amplitude is much larger than the noise amplitude, the mean of the Rice distribution is very close to

##E[X] = \sqrt{x^2 + \sigma^2}##

where ##x## is the amplitude of the true signal and ##\sigma^2## is the noise variance. The measured amplitude is scattered around the true amplitude with a slight positive bias. In this case, I find that I can estimate ##x## from the measured amplitude using

##x_{corrected}^2 = x_{measured}^2 - \sigma^2##
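To see the bias and the correction in action, here is a small numpy simulation (all parameters, including the noise level, are made up for illustration; the per-bin variance ##2\sigma_t^2/N## follows from the DFT of white Gaussian noise with time-domain variance ##\sigma_t^2##):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the post): a coherently sampled tone
# in white Gaussian noise, moderate SNR at the signal bin.
fs, f0, N, A = 1000.0, 50.0, 1000, 1.0
sigma_t = 7.0                        # time-domain noise standard deviation
t = np.arange(N) / fs
k = int(round(f0 * N / fs))          # FFT bin of the tone

# Per-component variance of the normalized estimate 2|X[k]|/N:
# Re and Im of X[k] each have variance N*sigma_t**2/2, scaled by (2/N)**2.
sigma2 = 2.0 * sigma_t**2 / N

raw, corrected = [], []
for _ in range(4000):
    phase = rng.uniform(0.0, 2.0 * np.pi)
    x = A * np.cos(2.0 * np.pi * f0 * t + phase) + rng.normal(0.0, sigma_t, N)
    a = 2.0 * np.abs(np.fft.rfft(x)[k]) / N       # |pos| + |neg| amplitude
    raw.append(a)
    # the correction described above, clipped so the square root stays real
    corrected.append(np.sqrt(max(a**2 - sigma2, 0.0)))

print(f"mean raw:       {np.mean(raw):.3f}")       # biased above A
print(f"mean corrected: {np.mean(corrected):.3f}") # much closer to A
```

At this moderate SNR the raw magnitude sits visibly above the true amplitude, while the corrected estimate removes most of that bias.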

But this no longer works when the noise is comparable in magnitude to the signal. What can be done in that regime?
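The breakdown can be quantified by comparing the exact Rice mean against the ##\sqrt{x^2 + \sigma^2}## approximation. A numpy-only sketch, integrating the Rice pdf numerically (the grid choices are arbitrary):

```python
import numpy as np

def rice_mean(nu, sigma):
    """Mean of a Rice(nu, sigma) variable by numerically integrating its pdf."""
    x = np.linspace(0.0, nu + 12.0 * sigma, 40001)
    dx = x[1] - x[0]
    # Rice pdf: (x/sigma^2) * exp(-(x^2 + nu^2)/(2 sigma^2)) * I0(x*nu/sigma^2)
    pdf = (x / sigma**2) * np.exp(-(x**2 + nu**2) / (2.0 * sigma**2)) \
          * np.i0(x * nu / sigma**2)
    return np.sum(x * pdf) * dx

for snr in (5.0, 3.0, 1.0, 0.5):
    exact = rice_mean(snr, 1.0)
    approx = np.sqrt(snr**2 + 1.0)   # the high-SNR approximation
    print(f"nu/sigma = {snr}: exact mean = {exact:.3f}, approx = {approx:.3f}")
```

At high SNR the two agree closely, but below roughly ##\nu/\sigma \approx 1## the exact mean exceeds ##\sqrt{x^2+\sigma^2}## noticeably, so subtracting ##\sigma^2## from the squared measurement leaves a residual positive bias.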

Edit: It occurred to me that the answer will depend on the distribution of the true ##x##, as well as the distribution of the noise on top of it.
