Unbiased estimate of a parameter in the Rice distribution

In summary, the thread asks for an unbiased estimator of ##\nu##: a statistic whose mean, over many repeated measurements, would converge to ##\nu## itself. A solution to a closely related problem is given in Koay and Basser.
  • #1
Khashishi
I am trying to estimate the amplitude of a real signal with a particular frequency and unknown phase. The signal is sampled at a frequency much higher than the Nyquist frequency for the signal. For simplicity, I take an FFT period which is a multiple of the signal period (which conveniently is a multiple of the sample period).

It is very straightforward to estimate the amplitude in the absence of noise. I can just take the absolute value of the FFT and look for a peak in the spectrum. I can add the positive and negative frequency amplitudes which are equal for a real signal.

But if there is noise, then there will be a bias in the measurement since I took the absolute value, given by the Rice distribution. Noise in the real signal will create deviations in both the real and imaginary part of each FFT bin, but the absolute value will always give me a positive amplitude. When the noise is larger than the signal, the expected measurement is considerably larger than the signal.

My question is, how do I remove the bias from the estimate of the amplitude, if I know the noise distribution (Poissonian, but very close to Gaussian)? When the signal amplitude is much larger than the noise amplitude, the mean of the Rice distribution is very close to
##E[X] \approx \sqrt{x^2 + \sigma^2}##
where ##x## is the amplitude of the true signal and ##\sigma^2## is the noise variance. The measured amplitude is scattered around the true amplitude, but with a slight positive bias. In this case,
I find that I can estimate x from the measured signal using
##x_{corrected}^2 = x_{measured}^2 - \sigma^2##
But, this doesn't work when the noise is similar in magnitude to the signal. What can be done in this scenario?

Edit: It occurred to me that this will depend on the distribution of true x, as well as the distribution of the noise on top of x.
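For concreteness, the setup and the large-SNR correction above can be sketched in Python. All of the numbers here (sample rate, signal frequency, amplitude, noise level) are hypothetical stand-ins, not values from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: the FFT window holds an integer number of signal
# periods, and the sample rate is far above the Nyquist rate of the signal.
fs = 1000.0          # sample rate, Hz
f0 = 50.0            # signal frequency, Hz
n = 1000             # FFT length -> exactly 50 periods in the window
amp_true = 0.2
sigma_noise = 1.0    # per-sample Gaussian noise std dev

t = np.arange(n) / fs
x = amp_true * np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, sigma_noise, n)

spec = np.fft.rfft(x)
k = int(round(f0 * n / fs))              # bin containing the signal
amp_meas = 2.0 * np.abs(spec[k]) / n     # folds in the negative-frequency half

# Per-component noise std dev of this amplitude estimate (Gaussian input noise)
sigma_bin = sigma_noise * np.sqrt(2.0 / n)

# Large-SNR correction from the post: x_corr^2 = x_meas^2 - sigma^2
amp_corr = np.sqrt(max(amp_meas**2 - sigma_bin**2, 0.0))
```

With these numbers the bin SNR is about 4.5, so the quadratic correction is reasonable; the regime where it breaks down (SNR near 1) is what the rest of the thread is about.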
 
  • #2
I think I figured it out.
The Rice distribution PDF is [wikipedia]
##f(x;\nu,\sigma) = \frac{x}{\sigma^2} \exp(-\frac{x^2+\nu^2}{2\sigma^2}) I_0(\frac{x \nu}{\sigma^2})##
Let ##D(\nu)## be the prior distribution for ##\nu## values. Then (I think) the correction is
##x_{corrected}=\frac{\int \nu D(\nu) f(x_{meas};\nu,\sigma) d\nu}{\int D(\nu) f(x_{meas}; \nu,\sigma) d\nu}##
For a positive uniform distribution for ##\nu##, ##D(\nu)=1## and we take the integral over positive numbers, and we get (using Mathematica to help)
##x_{corrected} = \sigma \sqrt{\frac{2}{\pi}} \frac{1}{\exp({-\frac{x_{meas}^2}{4\sigma^2}}) I_0(\frac{x_{meas}^2}{4\sigma^2})}##

Edit: Nope. That's still wrong. My "corrected" value never predicts amplitudes near 0.
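To sanity-check the flat-prior result, the two integrals can be evaluated numerically (a sketch; `rice_pdf` and `posterior_mean` are names I've made up). It also shows why this estimator never predicts amplitudes near 0: its output is bounded below by ##\sigma \sqrt{2/\pi} \approx 0.8\sigma##.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0, i0e

def rice_pdf(x, nu, sigma):
    # i0e(z) = exp(-z) * I0(z); combining the exponents avoids overflow
    z = x * nu / sigma**2
    return (x / sigma**2) * np.exp(-(x - nu)**2 / (2 * sigma**2)) * i0e(z)

def posterior_mean(x_meas, sigma):
    # Flat prior D(nu) = 1 on nu >= 0, as assumed above
    num, _ = quad(lambda nu: nu * rice_pdf(x_meas, nu, sigma), 0.0, np.inf)
    den, _ = quad(lambda nu: rice_pdf(x_meas, nu, sigma), 0.0, np.inf)
    return num / den

# Closed form above: sigma * sqrt(2/pi) * exp(u) / I0(u), u = x^2 / (4 sigma^2)
u = 1.0 / 4.0
closed = np.sqrt(2.0 / np.pi) * np.exp(u) / i0(u)
print(posterior_mean(1.0, 1.0), closed)   # the two should agree
```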
 
  • #3
Well, somehow this got moved to the electrical engineering forum when it is a pure math question that has absolutely nothing to do with electrical engineering.
 
  • #4
Thread moved to the statistics forum.
 
  • #5
Khashishi said:
Well, somehow this got moved to the electrical engineering forum when it is a pure math question that has absolutely nothing to do with electrical engineering.

If it is a pure math question then you need to state it as one. You state facts about a distribution ##f(x;\nu,\sigma)## where ##x## is apparently the variable and ##\nu,\sigma## are parameters. You ask about finding an unbiased estimator of something. In the usual terminology, estimators estimate the parameters of a distribution, so we would be seeking unbiased estimators for ##\nu,\sigma## as functions of the sample data. However, you speak of finding an "##x_{corrected}##", so I don't understand your question as a question in pure mathematics.
 
  • #6
I have done some work with the Rice distribution. Let me look into it a bit and see if I can remember.

I am not sure that it is possible for a positive definite estimator to be unbiased. If that is the case, which property is more important for you?
 
  • #7
I want to find an ##x_{corrected}## such that, if I repeated the measurement many times, the mean of the ##x_{corrected}## values would converge to ##\nu##, where ##\nu## is the amplitude of the signal with no noise.

Specifically, I obtain a single sample ##x## taken from the distribution ##f(x;\nu,\sigma) = \frac{x}{\sigma^2} \exp(-\frac{x^2+\nu^2}{2\sigma^2}) I_0(\frac{x \nu}{\sigma^2})##, and I know the value of ##\sigma##. How do I estimate ##\nu##? ##x_{corrected}## is my name for an estimate of ##\nu##.
 
  • #8
Suppose ##\nu=0## then the only way for the mean of a bunch of ##x_{corrected}## values to converge to 0 is for some of the ##x_{corrected}## values to be less than 0. Is that ok?
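A quick simulation makes this concrete: with ##\nu = 0## the measured magnitude is Rayleigh-distributed with mean ##\sigma\sqrt{\pi/2}##, so any estimator constrained to be non-negative must be biased there (a sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0
n = 200_000

# nu = 0: the signal bin contains pure noise, so the measured magnitude
# sqrt(re^2 + im^2) follows a Rayleigh distribution (Rice with nu = 0).
re = rng.normal(0.0, sigma, n)
im = rng.normal(0.0, sigma, n)
x = np.hypot(re, im)

# Every sample is >= 0, so the sample mean cannot approach 0; it converges
# to the Rayleigh mean sigma * sqrt(pi / 2), about 1.25 * sigma, instead.
print(x.mean())
```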
 
  • #9
Dale said:
Suppose ##\nu=0## then the only way for the mean of a bunch of ##x_{corrected}## values to converge to 0 is for some of the ##x_{corrected}## values to be less than 0. Is that ok?
Ah, that's a key insight I wasn't getting. I suppose that's ok.

It looks like a solution to a similar problem is given in Koay and Basser. Journal of Magnetic Resonance 179 (2006) 317-322. Converting their solution to my notation,
##\nu^2 = \left<x_{measured}\right>^2 + (\xi - 2)\sigma^2##
##\xi = 2 + \left(\frac{\nu}{\sigma}\right)^2 - \frac{\pi}{8} \exp\left(-\frac{\nu^2}{2\sigma^2}\right) \left(\left(2+\frac{\nu^2}{\sigma^2}\right) I_0\left(\frac{\nu^2}{4\sigma^2}\right) + \frac{\nu^2}{\sigma^2} I_1\left(\frac{\nu^2}{4\sigma^2}\right)\right)^2##
It's not quite the same problem, though: we don't know ##\left<x_{measured}\right>^2## unless we make many measurements, and we can't simply replace ##\left<x_{measured}\right>^2## with ##x_{measured}^2##, since we would get imaginary answers whenever ##\left<x_{measured}\right>^2 + (\xi-2)\sigma^2 < 0##. And we don't know the exact value of ##\xi##, since it depends on ##\nu##. Maybe what I'm asking is impossible?
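Taking the two equations at face value, the circular dependence of ##\xi## on ##\nu## can be handled numerically by fixed-point iteration. This is a sketch of that idea only, not the paper's exact scheme (`xi` and `estimate_nu` are my names):

```python
import numpy as np
from scipy.special import i0e, i1e

def xi(theta):
    """Correction factor xi as a function of theta = nu / sigma.

    Uses exponentially scaled Bessels (i0e, i1e): the exp(-theta^2/2)
    prefactor is absorbed into them, which keeps large theta stable.
    """
    t2 = theta**2
    b = (2.0 + t2) * i0e(t2 / 4.0) + t2 * i1e(t2 / 4.0)
    return 2.0 + t2 - (np.pi / 8.0) * b**2

def estimate_nu(mean_x, sigma, iters=200):
    # Solve nu^2 = <x>^2 + (xi(nu/sigma) - 2) * sigma^2 self-consistently
    # by fixed-point iteration, clamping negative nu^2 to zero.
    nu = max(mean_x, 0.0)
    for _ in range(iters):
        nu2 = mean_x**2 + (xi(nu / sigma) - 2.0) * sigma**2
        nu = np.sqrt(max(nu2, 0.0))
    return nu
```

At ##\nu = 0## this correctly returns 0, since ##\left<x\right> = \sigma\sqrt{\pi/2}## there and the clamp keeps the iterate real.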
 
  • #10
Is notation like "##x_{corrected}##" common in electrical engineering? In mathematical statistics, the customary notation for a function that estimates a parameter "##\nu##" is "##\hat{\nu}##" which denotes a function of the observed data (rather than a "correction" of the observed data).

Khashishi said:
Maybe what I'm asking is impossible?

The non-existence of unbiased estimators is an exotic topic relative to the usual material in a statistics course. A web search turns up https://arxiv.org/abs/1609.07415. Relating the article to your question presents the familiar problem of seeing how results stated for an n-dimensional vector apply in the case n = 1. Should we attempt that?

In the paper's common language statements we find:

In many real-world estimation problems we often encounter constraints on the parameter space in the form of side-information. For example in many communication systems we encounter positivity constraints, limited power constraints, bandwidth or delay constraints, circularity constraints, subspace constraints, and so on

and

Our results imply that almost in every constrained problem that one can think of, there exists no unbiased estimator. This result is surprising in light of the scarcity of examples which appear in the literature for the nonexistence of unbiased constrained estimators
 
  • #11
Thanks for the reference. It does seem to apply to my case. I do have an extremal point ##\nu_\epsilon=0## and a distribution ##f(x, \nu, \sigma)## which is continuous with respect to ##\nu##. So no unbiased estimator exists.

Stephen Tashi said:
Is notation like "##x_{corrected}##" common in electrical engineering?
I don't know what notation is used in electrical engineering; I just used my own. I suppose I should have called ##x_{measured}## a random variable; ##f(x; \nu, \sigma)## its distribution, with ##\nu## and ##\sigma## as parameters; and ##x_{corrected} = g(x_{measured})## the estimator, which is itself a random variable.
 

1. What is the Rice distribution?

The Rice distribution, also known as the Rician distribution, is the probability distribution of the magnitude of a circularly symmetric bivariate normal random variable with nonzero mean; it models non-negative quantities such as signal amplitudes measured in Gaussian noise. It is commonly used in fields such as telecommunications, signal processing, and magnetic resonance imaging.

2. What is an unbiased estimate of a parameter in the Rice distribution?

An unbiased estimate of a parameter in the Rice distribution is a value that, on average, accurately reflects the true value of the parameter. In other words, it is not influenced by any systematic errors or biases and provides an accurate and fair representation of the population.

3. How is an unbiased estimate of a parameter in the Rice distribution calculated?

The unbiased estimate of a parameter in the Rice distribution is typically calculated using statistical methods, such as maximum likelihood estimation or the method of moments. These methods use a sample of data from the population to estimate the parameter, taking into account the sample size and variability of the data.
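For the maximum-likelihood route mentioned here, SciPy ships a Rice implementation; a brief sketch (the true parameter values below are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
nu, sigma = 3.0, 1.0   # hypothetical true parameters

# SciPy parametrizes Rice by shape b = nu / sigma and scale = sigma
sample = stats.rice.rvs(nu / sigma, scale=sigma, size=5000, random_state=rng)

# Maximum likelihood fit; location is pinned at 0 for a pure Rice variable
b_hat, loc_hat, scale_hat = stats.rice.fit(sample, floc=0)
nu_hat = b_hat * scale_hat
print(nu_hat, scale_hat)   # should land near the true values 3.0 and 1.0
```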

4. Why is it important to use an unbiased estimate in the Rice distribution?

Using an unbiased estimate in the Rice distribution is important because it ensures that the estimated parameter value is as close to the true value as possible. This is crucial in accurately representing and analyzing data in fields such as telecommunications, signal processing, and radiology.

5. Can an unbiased estimate of a parameter in the Rice distribution be guaranteed?

No, it cannot be guaranteed. Moreover, as the discussion above shows, when the parameter space is constrained (here ##\nu \ge 0##), an unbiased estimator may not exist at all: any estimator restricted to non-negative values is necessarily biased at ##\nu = 0##. In practice one accepts a small bias, or relies on estimators, such as maximum likelihood, that are asymptotically unbiased as the sample size grows.
