Unbiased estimator of modulation depth?


Discussion Overview

The discussion revolves around the problem of estimating the modulation depth of a sinusoidally varying signal from measurements taken at various phase points. Participants explore the bias in the mean of the modulation depth estimator and seek methods to obtain an unbiased estimator, particularly in the context of low signal-to-noise ratios.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Johannes describes a method for estimating modulation depth using a sine curve fit and expresses concern about the bias in the mean of this estimator.
  • Some participants question whether the bias arises from the nonlinearity of the modulation depth as a function of the fitting parameters.
  • One suggestion involves using "brute force" algebra to reformulate the expression for modulation depth to include it directly as a parameter.
  • Another participant proposes running simulations to determine the bias's magnitude and direction, potentially leading to a correction formula.
  • There is a discussion about the possibility of linearizing the modulation depth expression using Taylor series to facilitate estimation.
  • Johannes mentions averaging the fitting parameters to reduce bias, but others challenge this approach, suggesting that regression could handle the observations collectively without averaging.
  • Some participants highlight the distinction between estimating the mean of the modulation depth and obtaining an unbiased estimator of the modulation depth itself.

Areas of Agreement / Disagreement

Participants express differing views on the best approach to address the bias in the modulation depth estimator. There is no consensus on a single method, and the discussion remains unresolved regarding the most effective solution.

Contextual Notes

The discussion highlights the complexities of estimating nonlinear functions from noisy data and the challenges posed by varying phase measurements. Participants acknowledge the limitations of their proposed methods without reaching definitive conclusions.

f91jsw
I have the following problem:
I measure a sinusoidally varying signal at a number of phase points. I then fit a sine curve to the data points using least squares. The fitting function looks like:
f = a1 + a2 cos(phi) + a3 sin(phi)
I want to evaluate the modulation depth from the measurements given by
m = sqrt(a2*a2 + a3*a3)/a1
Now the problem is that mean(m) is a biased estimator of m, and this becomes very significant when the signal-to-noise ratio is low. This would seem like a pretty common type of problem but I'm not a stat guy. Can someone please point me in the right direction, literature, web site, anything that could help me out here?
Johannes
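The bias Johannes describes is easy to reproduce with a short Monte Carlo sketch (NumPy; the true parameter values, 20 phase points, and noise level are illustrative choices for a low-SNR regime, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameters; M = sqrt(A2^2 + A3^2) / A1 is the true modulation depth.
A1, A2, A3 = 1.0, 0.05, 0.05
M = np.hypot(A2, A3) / A1

# 20 equally spaced phase points and a noise level giving a low SNR.
phi = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
X = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
sigma = 0.1

m_hat = []
for _ in range(20000):
    y = X @ np.array([A1, A2, A3]) + rng.normal(0.0, sigma, phi.size)
    a1, a2, a3 = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares fit
    m_hat.append(np.hypot(a2, a3) / a1)

# mean(m_hat) comes out noticeably above M: the estimator is biased high.
print(f"true M = {M:.4f}, mean of m estimates = {np.mean(m_hat):.4f}")
```

With these numbers the average of the fitted m values exceeds the true M by a few tenths of the value itself, which matches the low-SNR behaviour described in the question.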
 
Why is mean(m) biased? Does it have to do with m being a nonlinear function of the a's?

I guess that would be right: E[m] = E[sqrt(a2^2 + a3^2)/a1] is not equal to M = sqrt(A2^2 + A3^2)/A1, where uppercase denotes the true parameter value.
 
You could try "brute force" algebra to mould your expression into a shape that includes m as one of the parameters. That way you'd have a direct estimator.

E.g., if f = a1 + a2x + a3z and the statistic you were looking to estimate was (a1 + a2)^2, then your left-hand-side variable would be (1 + 1/x + 1/x^2)f^2, because the intercept term in that expression is a1^2 + 2a1a2 + a2^2.
 
Although admittedly your problem is harder because it involves a square root and division.
 
I can't point at literature, but maybe I can help figure this out. I'm not an electrical engineering expert, so you'll have to bear with me. :smile:


I think the scenario you're describing is the following:

The signal S(t) is being transmitted, where S(t) is given by:

[tex]S(t) = A_1 + A_2 \cos (\phi t) + A_3 \sin (\phi t)[/tex]

where [itex]\phi[/itex] is a known quantity. (?)


The signal is being transmitted through a noisy channel, and you sample the signal at times [itex]t_1, t_2, t_3, \cdots[/itex], yielding the samples:

[tex]y_k = S(t_k) + e_k[/tex]

where [itex]e_k[/itex], the error on the k-th sample, is Gaussian noise.


Your goal is to estimate the quantity [itex]m = \sqrt{A_2^2 + A_3^2} / A_1[/itex] from the samples [itex]y_k[/itex].


Is that correct?
 
Hurkyl said:
Your goal is to estimate the quantity [itex]m = \sqrt{A_2^2 + A_3^2} / A_1[/itex] from the samples [itex]y_k[/itex].
Is that correct?

Yes, correct, only that I don't measure S directly as a function of time, just phase. The measurement is based on a homodyne technique so I measure a DC signal at each discrete phase point (which I choose myself). It makes no difference for the problem though.

The way I do it now is, for an ensemble of measurements, to average each of the parameters a_1, a_2 and a_3 first. It works because it effectively increases the signal-to-noise ratio so any bias becomes negligible. It's a bit awkward though because for technical reasons the phase varies between the measurements so I need to make an accurate reference measurement to determine the phase variation in order to be able to average the a_i's.

Thanks for the input so far everyone!

Johannes
 
I'm going to move this over to the Electrical Engineering section; maybe we can drum up some more knowledgeable people that way!
 
f91jsw said:
The way I do it now is, for an ensemble of measurements, to average each of the parameters a_1, a_2 and a_3 first. It works because it effectively increases the signal-to-noise ratio so any bias becomes negligible.
Why do you need to average the a's? I am not an engineer, but why can't you just put all the observations you have collected into one big vector and let the regression do the averaging automatically (i.e. implicitly)?
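The pooled-regression idea can be sketched like this: stack every observation from every measurement into one long vector, build the matching design matrix from the (known) phase of each sample, and do a single least-squares fit. The phase offsets and noise level below are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A1, A2, A3, sigma = 1.0, 0.05, 0.05, 0.1  # illustrative truth and noise

# Three "measurements", each with its own (known) phase grid.
phase_grids = [np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False) + d
               for d in (0.0, 0.3, 0.7)]

# Stack all observations into one vector and one design matrix.
phi = np.concatenate(phase_grids)
X = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
y = X @ np.array([A1, A2, A3]) + rng.normal(0.0, sigma, phi.size)

# One regression over the pooled data does the "averaging" implicitly.
a1, a2, a3 = np.linalg.lstsq(X, y, rcond=None)[0]
m = np.hypot(a2, a3) / a1
```

Because cos and sin of the shifted phases are still linear in the same three parameters, the shifted grids pose no problem for the fit itself, though, as Johannes notes, pooling does not remove the bias of m.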
 
EnumaElish said:
Why do you need to average the a's? I am not an engineer, but why can't you just put all the observations you have collected into one big vector and let the regression do the averaging automatically (i.e. implicitly)?

That would accomplish the same thing, except that the varying phase complicates things. But still, it doesn't solve the fundamental problem of bias.

Johannes
 
  • #10
You may be able to run a simulation to determine the magnitude and the direction of the bias. Then come up with a correction formula.
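The simulate-then-correct recipe can be sketched as a plug-in correction: refit simulated data generated at the fitted parameter values, measure the estimator's mean bias there, and subtract it. This assumes the noise level sigma is known; the helper names are made up for illustration:

```python
import numpy as np

def fit_m(y, X):
    # Least-squares fit of f = a1 + a2*cos(phi) + a3*sin(phi), then m.
    a1, a2, a3 = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.hypot(a2, a3) / a1

def bias_corrected_m(y, X, sigma, n_sim=5000, rng=None):
    """Estimate the bias of m at the fitted parameters by simulation,
    then subtract it from the raw estimate (plug-in correction)."""
    rng = np.random.default_rng() if rng is None else rng
    a = np.linalg.lstsq(X, y, rcond=None)[0]
    m_raw = np.hypot(a[1], a[2]) / a[0]
    sims = [fit_m(X @ a + rng.normal(0.0, sigma, X.shape[0]), X)
            for _ in range(n_sim)]
    bias = np.mean(sims) - m_raw
    return m_raw - bias
```

This is only first-order: the bias is evaluated at the fitted, not the true, parameters, so at very low SNR the correction is itself noisy.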
 
  • #11
Another approach would be to linearize the m expression (e.g. using Fourier series); then manipulate the f function to estimate the linearized expression directly.
 
  • #12
I meant Taylor series, not Fourier.
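The Taylor-series idea leads to the standard delta-method bias estimate: expanding m(a) to second order about the true parameters gives E[m_hat] - M ≈ 0.5 tr(H Sigma), where H is the Hessian of m and Sigma the covariance of the least-squares estimates. A numerical sketch (parameter values and noise level are illustrative; at very low SNR the expansion itself becomes inaccurate):

```python
import numpy as np

def m_of(a):
    # Modulation depth as a function of the fitted parameters.
    return np.hypot(a[1], a[2]) / a[0]

def hessian(f, a, h=1e-4):
    """Numerical Hessian of f at a, by central finite differences."""
    n = len(a)
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(a + h*I[i] + h*I[j]) - f(a + h*I[i] - h*I[j])
                       - f(a - h*I[i] + h*I[j]) + f(a - h*I[i] - h*I[j])) / (4*h*h)
    return H

# Least-squares covariance of (a1, a2, a3) for 20 equally spaced phases.
phi = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
X = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
sigma = 0.1
cov = sigma**2 * np.linalg.inv(X.T @ X)

# Second-order (delta-method) bias: E[m_hat] - M ~ 0.5 * tr(H @ Cov).
A = np.array([1.0, 0.05, 0.05])
bias_approx = 0.5 * np.trace(hessian(m_of, A) @ cov)
print(bias_approx)
```

For these numbers the second-order term predicts a positive bias of roughly one tenth of M, which is in line with what a direct simulation of the estimator shows.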
 
  • #13
EnumaElish said:
You may be able to run a simulation to determine the magnitude and the direction of the bias. Then come up with a correction formula.

I've actually already done that with a short Matlab script. That's the engineering solution but I'm fundamentally interested in a proper statistical treatment of the problem. It seems like something that must pop up often: you have an unbiased maximum likelihood estimate of some parameters, and then use those in a non-linear formula which introduces bias. I was hoping to find a general approach to that.

Johannes
 
  • #14
Given m = G(a), you can always obtain an unbiased estimator of m's mean if you can derive (or simulate) the distribution of m from those of a.

Example: a and b are two normal variables, distributed with N(A,sigma_1) and N(B,sigma_2) respectively. You'd like to calculate the expected value of x = ab. Let X be the true mean of x. You can take 1,000 random pairs (a,b) from the two normal distributions and calculate the product of each pair. This will yield 1,000 observations of x, whose average is an unbiased estimator of X. (I am guessing that you have already tried this.)

Or you may be able to work out the analytical solution.
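That sampling recipe takes only a few lines (with illustrative values A = 2, B = 3 and unit sigmas, so the true mean of x is A*B = 6 for independent a and b):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, s1, s2 = 2.0, 3.0, 1.0, 1.0   # illustrative values

a = rng.normal(A, s1, 100_000)
b = rng.normal(B, s2, 100_000)
x = a * b

# For independent a and b, E[x] = A*B, so the sample mean is unbiased for it.
print(np.mean(x))   # close to 6.0
```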
 
  • #15
EnumaElish said:
Given m = G(a), you can always obtain an unbiased estimator of m's mean if you can derive (or simulate) the distribution of m from those of a.

Maybe I misunderstand you but I think my problem is that I want an unbiased estimator of m, not the mean of m.

Example: assume I have a variable a distributed N(A, sigma), and m = sqrt(a). The true m is then sqrt(A). If I have 1000 observations of a, take sqrt(a) for each of them, and then take the mean, mean(m) will not equal sqrt(A).

Johannes
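Johannes's sqrt example can be checked directly (a sketch with illustrative A = 1, sigma = 0.3): since sqrt is concave, Jensen's inequality pulls mean(m) below sqrt(A).

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma = 1.0, 0.3        # illustrative values; keeps a > 0 almost surely

a = rng.normal(A, sigma, 100_000)
m = np.sqrt(a[a > 0])      # drop the rare negative draws

# Concavity of sqrt (Jensen): E[sqrt(a)] < sqrt(E[a]), so mean(m) < sqrt(A).
print(np.mean(m), np.sqrt(A))
```

Note the direction of the bias depends on the curvature: here it is negative, while for m = sqrt(a2^2 + a3^2)/a1 near zero signal the bias is positive.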
 
  • #16
My post was poorly written.

m = sqrt(a) is a random variable distributed with Fm(u) = Prob(sqrt(a) < u) = Prob(a < u^2) = Fa(u^2). The first moment of Fm is the expected value of the estimator m.

Expected Value entry in Wikipedia said:
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. This estimates the true expected value in an unbiased manner ...
 
