# Unbiased estimate of a parameter in the Rice distribution

I am trying to estimate the amplitude of a real signal with a particular frequency and unknown phase. The signal is sampled at a frequency much higher than the Nyquist frequency for the signal. For simplicity, I take an FFT period which is a multiple of the signal period (which conveniently is a multiple of the sample period).

It is very straightforward to estimate the amplitude in the absence of noise. I can just take the absolute value of the FFT and look for a peak in the spectrum. I can add the positive and negative frequency amplitudes which are equal for a real signal.

But if there is noise, then taking the absolute value introduces a positive bias in the measurement, described by the Rice distribution. Noise in the real signal will create deviations in both the real and imaginary parts of each FFT bin, but the absolute value always gives me a positive amplitude. When the noise is larger than the signal, the expected measurement is considerably larger than the true signal.
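To make the bias concrete, here is a small NumPy sketch of the measurement described above (the amplitude, noise level, and window length are arbitrary stand-ins, and white Gaussian noise is used in place of the near-Gaussian Poisson noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary example parameters: one FFT window containing an integer number
# of signal cycles, sampled well above the Nyquist rate.
N = 64            # samples per FFT window
k = 4             # signal bin (4 cycles per window)
A = 1.0           # true amplitude
noise_rms = 8.0   # time-domain noise RMS (noise-dominated case)

t = np.arange(N)
trials = 20000
est = np.empty(trials)
for i in range(trials):
    phase = rng.uniform(0, 2 * np.pi)
    x = A * np.cos(2 * np.pi * k * t / N + phase) + rng.normal(0, noise_rms, N)
    X = np.fft.rfft(x) / N
    est[i] = 2 * np.abs(X[k])   # add the equal positive/negative frequency amplitudes

sigma_bin = noise_rms * np.sqrt(2.0 / N)   # per-quadrature noise sigma of 2*X[k]
print("true amplitude:", A)
print("Rice sigma for this bin:", sigma_bin)
print("mean of |FFT| estimates:", est.mean())   # well above A when noise dominates
```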

My question is, how do I remove the bias from the estimate of the amplitude, if I know the noise distribution (Poissonian, but very close to Gaussian)? When the signal amplitude is much larger than the noise amplitude, the mean of the Rice distribution is very close to
##E[X] = \sqrt{x^2 + \sigma^2}##
where ##x## is the amplitude of the true signal and ##\sigma^2## is the noise variance. The measured amplitude is scattered around the true amplitude, but with a slight positive bias. In this case,
I find that I can estimate x from the measured signal using
##x_{corrected}^2 = x_{measured}^2 - \sigma^2##
But, this doesn't work when the noise is similar in magnitude to the signal. What can be done in this scenario?
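Here is a quick NumPy sketch of that breakdown (the Rice samples are generated directly as the magnitude of ##\nu## plus circular complex Gaussian noise; the particular amplitudes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def rice_samples(nu, sigma, n):
    """Magnitude of a complex bin: true amplitude nu plus circular Gaussian noise."""
    re = nu + rng.normal(0, sigma, n)
    im = rng.normal(0, sigma, n)
    return np.hypot(re, im)

sigma = 1.0
for nu in (10.0, 1.0):   # high SNR, then noise comparable to the signal
    x_meas = rice_samples(nu, sigma, 200_000)
    # the simple correction, clipped at zero to avoid imaginary results
    x_corr = np.sqrt(np.maximum(x_meas**2 - sigma**2, 0.0))
    print(f"nu={nu}: mean raw={x_meas.mean():.3f}, mean corrected={x_corr.mean():.3f}")
```

At high SNR the corrected mean lands close to ##\nu##, while at ##\nu \approx \sigma## it stays noticeably biased high.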

Edit: It occurred to me that this will depend on the distribution of true x, as well as the distribution of the noise on top of x.


I think I figured it out.
The Rice distribution PDF is [wikipedia]
##f(x;\nu,\sigma) = \frac{x}{\sigma^2} \exp(-\frac{x^2+\nu^2}{2\sigma^2}) I_0(\frac{x \nu}{\sigma^2})##
Let ##D(\nu)## be the prior distribution for ##\nu## values. Then (I think) the correction is
##x_{corrected}=\frac{\int \nu D(\nu) f(x_{meas};\nu,\sigma) d\nu}{\int D(\nu) f(x_{meas}; \nu,\sigma) d\nu}##
For a flat (improper) prior on positive ##\nu##, ##D(\nu)=1## and the integrals run over ##\nu \ge 0##, and I get (using Mathematica to help)
##x_{corrected} = \sigma \sqrt{\frac{2}{\pi}} \frac{1}{\exp({-\frac{x_{meas}^2}{4\sigma^2}}) I_0(\frac{x_{meas}^2}{4\sigma^2})}##
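As a sanity check, here is a small Python/SciPy sketch comparing this closed form with direct numerical integration of the posterior mean for a flat prior on ##\nu \ge 0## (the integration cutoff at ##x_{meas} + 10\sigma## is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

def rice_pdf(x, nu, sigma):
    # i0e(z) = exp(-z) * I0(z); combining it with the Gaussian factor avoids overflow
    z = x * nu / sigma**2
    return (x / sigma**2) * np.exp(-(x - nu)**2 / (2 * sigma**2)) * i0e(z)

def posterior_mean_numeric(x_meas, sigma):
    """Posterior mean of nu for a flat prior on nu >= 0, by direct quadrature."""
    upper = x_meas + 10 * sigma
    num, _ = quad(lambda nu: nu * rice_pdf(x_meas, nu, sigma), 0, upper)
    den, _ = quad(lambda nu: rice_pdf(x_meas, nu, sigma), 0, upper)
    return num / den

def posterior_mean_closed(x_meas, sigma):
    """Closed form above: sigma*sqrt(2/pi) * exp(u)/I0(u) with u = x^2/(4 sigma^2)."""
    u = x_meas**2 / (4 * sigma**2)
    return sigma * np.sqrt(2 / np.pi) / i0e(u)   # 1/(exp(-u) I0(u)) = exp(u)/I0(u)

sigma = 1.0
for x in (0.1, 1.0, 3.0):
    print(x, posterior_mean_numeric(x, sigma), posterior_mean_closed(x, sigma))
```

Both agree; note that as ##x_{meas}\to 0## the value tends to ##\sigma\sqrt{2/\pi}##, not 0.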

Edit: Nope. That's still wrong. My "corrected" value never predicts amplitudes near 0.

Well, somehow this got moved to the electrical engineering forum when it is a pure math question that has absolutely nothing to do with electrical engineering.

Mentor

Well, somehow this got moved to the electrical engineering forum when it is a pure math question that has absolutely nothing to do with electrical engineering.

If it is a pure math question then you need to state it as one. You state facts about a distribution ##f(x;\nu,\sigma)## where ##x## is apparently the variable and ##\nu,\sigma## are parameters. You ask about finding an unbiased estimator of something. In the usual terminology, estimators estimate the parameters of a distribution, so we would be seeking unbiased estimators for ##\nu,\sigma## as functions of the sample data. However, you speak of finding an "##x_{corrected}##". So I don't understand your question as a question in pure mathematics.

Mentor
2021 Award
I have done some work with the Rice distribution. Let me look into it a bit and see if I can remember.

I am not sure that it is possible for an estimator that is constrained to be positive to be unbiased. If that is the case, which property is more important for you?

I want to find an ##x_{corrected}## such that, if I repeated the measurement many times, the mean of the ##x_{corrected}## values would converge to ##\nu##, where ##\nu## is the amplitude of the signal with no noise.

Specifically, I obtain a single sample ##x## taken from the distribution ##f(x;\nu,\sigma) = \frac{x}{\sigma^2} \exp(-\frac{x^2+\nu^2}{2\sigma^2}) I_0(\frac{x \nu}{\sigma^2})##, and I know the value of ##\sigma##. How do I estimate ##\nu##? ##x_{corrected}## is my name for an estimate of ##\nu##.
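In case it helps anyone experiment numerically, SciPy's built-in Rice distribution appears to match this density with shape parameter ##b=\nu/\sigma## and scale ##\sigma## (a minimal sketch, with arbitrary example values):

```python
import numpy as np
from scipy.stats import rice
from scipy.special import i0

nu, sigma = 1.5, 1.0   # arbitrary example values

# scipy.stats.rice uses shape b = nu/sigma and scale = sigma
x = np.array([0.5, 1.5, 3.0])
pdf_as_written = (x / sigma**2) * np.exp(-(x**2 + nu**2) / (2 * sigma**2)) * i0(x * nu / sigma**2)
print(pdf_as_written)
print(rice.pdf(x, nu / sigma, scale=sigma))   # should agree with the line above

# draw single measurements x from f(x; nu, sigma)
samples = rice.rvs(nu / sigma, scale=sigma, size=5, random_state=0)
print(samples)
```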

Mentor
2021 Award
Suppose ##\nu=0## then the only way for the mean of a bunch of ##x_{corrected}## values to converge to 0 is for some of the ##x_{corrected}## values to be less than 0. Is that ok?

Khashishi
Suppose ##\nu=0## then the only way for the mean of a bunch of ##x_{corrected}## values to converge to 0 is for some of the ##x_{corrected}## values to be less than 0. Is that ok?
Ah, that's a key insight I wasn't getting. I suppose that's ok.

It looks like a solution to a similar problem is given in Koay and Basser, Journal of Magnetic Resonance 179 (2006) 317-322. Converting their solution to my notation,
##\nu^2 = \left<x_{measured}\right>^2 + (\xi - 2)\sigma^2##
##\xi = 2 + \left(\frac{\nu}{\sigma}\right)^2 - \frac{\pi}{8} \exp(-\frac{\nu^2}{2\sigma^2}) ((2+\frac{\nu^2}{\sigma^2}) I_0\left(\frac{\nu^2}{4\sigma^2}\right)+ \frac{\nu^2}{\sigma^2} I_1 (\frac{\nu^2}{4\sigma^2}))^2##
It's not quite the same though, since we don't know ##\left<x_{measured}\right>^2## unless we make a lot of measurements. And we can't simply replace ##\left<x_{measured}\right>^2## with ##x_{measured}^2##, since we would get imaginary answers whenever ##x_{measured}^2 + (\xi-2)\sigma^2 < 0##. And we don't know the value of ##\xi## exactly, since it depends on ##\nu##. Maybe what I'm asking is impossible?
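For what it's worth, here is a sketch of what a fixed-point iteration on those two equations could look like when the single ##x_{measured}## is used in place of ##\left<x_{measured}\right>## and the right-hand side is clipped at zero where it would go negative (so this is not the actual Koay–Basser scheme, just one possible adaptation):

```python
import numpy as np
from scipy.special import i0e, i1e

def xi(theta):
    """Correction factor xi(theta) with theta = nu/sigma (Koay & Basser 2006)."""
    u = theta**2 / 4.0
    # exp(-theta^2/2) * [(2+theta^2) I0(u) + theta^2 I1(u)]^2
    #   = [(2+theta^2) i0e(u) + theta^2 i1e(u)]^2, since exp(-theta^2/2) = exp(-2u)
    bracket = (2 + theta**2) * i0e(u) + theta**2 * i1e(u)
    return 2 + theta**2 - (np.pi / 8) * bracket**2

def estimate_nu(x_meas, sigma, n_iter=50):
    """Iterate nu^2 = x_meas^2 + (xi(nu/sigma) - 2) sigma^2, clipping negatives to 0."""
    nu = x_meas   # crude starting guess
    for _ in range(n_iter):
        rhs = x_meas**2 + (xi(nu / sigma) - 2.0) * sigma**2
        nu = np.sqrt(max(rhs, 0.0))
    return nu

sigma = 1.0
for x in (0.5, 1.0, 2.0, 5.0):
    print(x, estimate_nu(x, sigma))
```

Because of the clipping, small measurements all map to an estimate of exactly zero, so this still isn't unbiased.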

Is notation like "##x_{corrected}##" common in electrical engineering? In mathematical statistics, the customary notation for a function that estimates a parameter "##\nu##" is "##\hat{\nu}##" which denotes a function of the observed data (rather than a "correction" of the observed data).

Maybe what I'm asking is impossible?

The non-existence of unbiased estimators is an exotic topic relative to the usual material in a statistics course. Doing a web search, I find https://arxiv.org/abs/1609.07415. Relating the article to your question presents the familiar problem of seeing how results stated for an n-dimensional vector apply in the case when n = 1. Should we attempt that?

Among the paper's plain-language statements we find:

In many real-world estimation problems we often encounter constraints on the parameter space in the form of side-information. For example in many communication systems we encounter positivity constraints, limited power constraints, bandwidth or delay constraints, circularity constraints, subspace constraints, and so on

and

Our results imply that almost in every constrained problem that one can think of, there exists no unbiased estimator. This result is surprising in light of the scarcity of examples which appear in the literature for the nonexistence of unbiased constrained estimators
