Unbiased estimator of modulation depth?

In summary: the original poster has an unbiased least-squares estimate of the sine-fit parameters, but plugging them into the nonlinear modulation-depth formula introduces bias. He has already mapped the bias with a short Matlab simulation; that is the engineering solution, but he is looking for a proper statistical treatment of what seems like a common situation (an unbiased estimate of some parameters fed into a nonlinear formula). One further suggestion was to try a bootstrap approach to get an idea of the magnitude and direction of the bias.
  • #1
f91jsw
I have the following problem:
I measure a sinusoidally varying signal at a number of phase points. I then fit a sine curve to the data points using least squares. The fitting function looks like:
f = a1 + a2 cos(phi) + a3 sin(phi)
I want to evaluate the modulation depth from the measurements given by
m = sqrt(a2*a2 + a3*a3)/a1
Now the problem is that mean(m) is a biased estimator of m, and this becomes very significant when the signal-to-noise ratio is low. This would seem like a pretty common type of problem but I'm not a stat guy. Can someone please point me in the right direction, literature, web site, anything that could help me out here?
Johannes
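To make the bias concrete, here is a minimal Monte Carlo sketch of the setup in post #1 (Python/NumPy; the parameter values, noise level, and names are illustrative, not taken from the thread):

[code]
import numpy as np

rng = np.random.default_rng(0)

# "True" parameters (illustrative values only)
A1, A2, A3 = 1.0, 0.10, 0.05            # DC offset and quadrature amplitudes
m_true = np.hypot(A2, A3) / A1          # true modulation depth

phi = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)   # chosen phase points
X = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
sigma = 0.2                             # noise level -> low signal-to-noise ratio

def estimate_m(y):
    """Least-squares fit of f = a1 + a2*cos(phi) + a3*sin(phi), then m."""
    a1, a2, a3 = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.hypot(a2, a3) / a1

# Ensemble of simulated noisy measurements
m_hat = [estimate_m(X @ np.array([A1, A2, A3]) + rng.normal(0.0, sigma, phi.size))
         for _ in range(20000)]

print(f"true m      : {m_true:.4f}")
print(f"mean(m_hat) : {np.mean(m_hat):.4f}")   # noticeably biased high at this SNR
[/code]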
 
  • #2
Why is mean(m) biased? Does it have to do with m being a nonlinear function of the a's?

I guess that would be right, E[m] = E[sqrt(a2^2 + a3^2)/a1] is not equal to M = sqrt(A2^2 + A3^2)/A1, where uppercase denotes the true parameter value.
 
  • #3
You could try "brute force" algebra to mould your expression into a shape that includes m as one of the parameters. That way you'd have a direct estimator.

E.g., if f = a1 + a2*x + a3*z and the statistic you were looking to estimate was (a1 + a2)^2, then your left-hand-side variable would be (1 + 1/x + 1/x^2)*f^2, because the intercept term in that expression is a1^2 + 2*a1*a2 + a2^2.
 
  • #4
Although admittedly your problem is harder because it involves a square root and division.
 
  • #5
I can't point at literature, but maybe I can help figure this out. I'm not an electrical engineering expert, so you'll have to bear with me. :smile:


I think the scenario you're describing is the following:

The signal S(t) is being transmitted, where S(t) is given by:

[tex]S(t) = A_1 + A_2 \cos (\phi t) + A_3 \sin (\phi t)[/tex]

where [itex]\phi[/itex] is a known quantity. (?)


The signal is being transmitted through a noisy channel, and you sample the signal at times [itex]t_1, t_2, t_3, \cdots[/itex], yielding the samples:

[tex]y_k = S(t_k) + e_k[/tex]

where [itex]e_k[/itex], the error on the k-th sample, is Gaussian noise.


Your goal is to estimate the quantity [itex]m = \sqrt{A_2^2 + A_3^2} / A_1[/itex] from the samples [itex]y_k[/itex].


Is that correct?
 
  • #6
Hurkyl said:
Your goal is to estimate the quantity [itex]m = \sqrt{A_2^2 + A_3^2} / A_1[/itex] from the samples [itex]y_k[/itex].
Is that correct?

Yes, correct, only that I don't measure S directly as a function of time, just phase. The measurement is based on a homodyne technique so I measure a DC signal at each discrete phase point (which I choose myself). It makes no difference for the problem though.

The way I do it now is, for an ensemble of measurements, to average each of the parameters a_1, a_2 and a_3 first. It works because it effectively increases the signal-to-noise ratio so any bias becomes negligible. It's a bit awkward though because for technical reasons the phase varies between the measurements so I need to make an accurate reference measurement to determine the phase variation in order to be able to average the a_i's.

Thanks for the input so far everyone!

Johannes
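A small sketch of the parameter-averaging idea described above, assuming for simplicity that the phase points are identical in every measurement (in the real setup they vary between measurements, as noted):

[code]
import numpy as np

rng = np.random.default_rng(1)

A = np.array([1.0, 0.10, 0.05])          # illustrative true a1, a2, a3
phi = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
X = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
sigma, n_meas = 0.2, 100                 # low SNR, 100-measurement ensemble

# Least-squares coefficients for each simulated measurement (n_meas x 3)
a_hat = np.array([np.linalg.lstsq(X, X @ A + rng.normal(0.0, sigma, phi.size),
                                  rcond=None)[0] for _ in range(n_meas)])

m_each = np.hypot(a_hat[:, 1], a_hat[:, 2]) / a_hat[:, 0]   # m per measurement
a_bar = a_hat.mean(axis=0)                                   # average the a's first
m_from_avg = np.hypot(a_bar[1], a_bar[2]) / a_bar[0]

print("true m              :", np.hypot(A[1], A[2]) / A[0])
print("mean of per-shot m  :", m_each.mean())     # noticeably biased high
print("m from averaged a's :", m_from_avg)        # bias is much smaller
[/code]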
 
  • #7
I'm going to move this over to the Electrical Engineering section; maybe we can drum up some more knowledgeable people that way!
 
  • #8
f91jsw said:
The way I do it now is, for an ensemble of measurements, to average each of the parameters a_1, a_2 and a_3 first. It works because it effectively increases the signal-to-noise ratio so any bias becomes negligible.
Why do you need to average the a's? I am not an engineer, but why can't you just put all the observations you have collected into one big vector and let the regression do the averaging automatically (i.e. implicitly)?
 
  • #9
EnumaElish said:
Why do you need to average the a's? I am not an engineer, but why can't you just put all the observations you have collected into one big vector and let the regression do the averaging automatically (i.e. implicitly)?

That would accomplish the same thing, except that the varying phase complicates things. But still, it doesn't solve the fundamental problem of bias.

Johannes
 
  • #10
You may be able to run a simulation to determine the magnitude and the direction of the bias. Then come up with a correction formula.
 
  • #11
Another approach would be to linearize the m expression (e.g. using Fourier series); then manipulate the f function to estimate the linearized expression directly.
 
  • #12
I meant Taylor series, not Fourier.
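For reference, the standard way to make that Taylor-series idea quantitative is the textbook second-order ("delta method") expansion, which is not worked out in the thread but fits it directly: writing [itex]m = g(a_1, a_2, a_3) = \sqrt{a_2^2 + a_3^2}/a_1[/itex] and expanding about the true values,

[tex]E[g(\hat{a})] \approx g(A) + \frac{1}{2}\sum_{i,j} \left.\frac{\partial^2 g}{\partial a_i \, \partial a_j}\right|_{a=A} \mathrm{Cov}(\hat{a}_i, \hat{a}_j).[/tex]

Subtracting the second-order term, evaluated at the fitted coefficients and the covariance matrix returned by the least-squares fit, gives an approximately bias-corrected estimate. This only removes the leading-order bias, so it still degrades at very low signal-to-noise ratio.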
 
  • #13
EnumaElish said:
You may be able to run a simulation to determine the magnitude and the direction of the bias. Then come up with a correction formula.

I've actually already done that with a short Matlab script. That's the engineering solution but I'm fundamentally interested in a proper statistical treatment of the problem. It seems like something that must pop up often: you have an unbiased maximum likelihood estimate of some parameters, and then use those in a non-linear formula which introduces bias. I was hoping to find a general approach to that.

Johannes
 
  • #14
Given m = G(a), you can always obtain an unbiased estimator of m's mean if you can derive (or simulate) the distribution of m from those of a.

Example: a and b are two normal variables, distributed with N(A,sigma_1) and N(B,sigma_2) respectively. You'd like to calculate the expected value of x = ab. Let X be the true mean of x. You can take 1,000 random pairs (a,b) from the two normal distributions and calculate the product of each pair. This will yield 1,000 observations of x, whose average is an unbiased estimator of X. (I am guessing that you have already tried this.)

Or you may be able to work out the analytical solution.
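A literal transcription of that example (Python/NumPy; the means and standard deviations are illustrative):

[code]
import numpy as np

rng = np.random.default_rng(2)

A, B = 2.0, 3.0                     # true means (illustrative)
sigma1, sigma2 = 0.5, 0.8           # standard deviations (illustrative)

a = rng.normal(A, sigma1, 1000)     # 1,000 draws of a ~ N(A, sigma_1)
b = rng.normal(B, sigma2, 1000)     # 1,000 draws of b ~ N(B, sigma_2)
x = a * b                           # 1,000 observations of x = ab

# For independent a and b, E[ab] = A*B, so this sample mean is an unbiased
# estimate of X = E[x].  The OP's difficulty is different: he wants an
# estimate of a function of the true parameters, not the mean of m itself.
print(np.mean(x), "vs", A * B)
[/code]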
 
  • #15
EnumaElish said:
Given m = G(a), you can always obtain an unbiased estimator of m's mean if you can derive (or simulate) the distribution of m from those of a.

Maybe I misunderstand you but I think my problem is that I want an unbiased estimator of m, not the mean of m.

Example: assume I have a variable a distributed N(A, sigma), and m = sqrt(a). The true m is then sqrt(A). If I have 1000 observations of a, take sqrt(a) for each of them, and then take the mean, mean(m) will not equal sqrt(A).

Johannes
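The sqrt example is easy to check numerically; because sqrt is concave, Jensen's inequality guarantees mean(sqrt(a)) falls below sqrt(A). A short check (values illustrative; A is taken large enough relative to sigma that negative draws are negligible):

[code]
import numpy as np

rng = np.random.default_rng(3)

A, sigma = 1.0, 0.3
a = rng.normal(A, sigma, 100_000)
a = a[a > 0]                        # discard the rare negative draws

print("sqrt(A)       :", np.sqrt(A))           # 1.0
print("mean(sqrt(a)) :", np.sqrt(a).mean())    # about 0.99, i.e. biased low
[/code]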
 
  • #16
My post was poorly written.

m = sqrt(a) is a random variable distributed with F_m(u) = Prob(sqrt(a) < u) = Prob(a < u^2) = F_a(u^2). The first moment of F_m is the expected value of the estimator m.

Expected Value entry in Wikipedia said:
To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. This estimates the true expected value in an unbiased manner ...
 

What is an unbiased estimator of modulation depth?

An unbiased estimator of modulation depth is a statistic whose expected value equals the true modulation depth, i.e. it captures the depth without any systematic offset. Modulation depth describes how strongly a signal varies relative to its mean level; in the fit discussed in this thread it is the ratio m = sqrt(a2^2 + a3^2)/a1 of the AC amplitude to the DC offset.

Why is an unbiased estimator of modulation depth important?

An unbiased estimator of modulation depth is important because it allows scientists to accurately measure and compare the amount of modulation present in different signals. This can be useful in various fields, such as telecommunications, neuroscience, and signal processing.

How is an unbiased estimator of modulation depth calculated?

The calculation depends on the signal model. In the setting discussed in this thread, the sine-fit coefficients are estimated by least squares and the ratio sqrt(a2^2 + a3^2)/a1 is formed; because this ratio is a nonlinear function of the coefficients, the naive estimate is biased at low signal-to-noise ratio and may need a correction, obtained for example from a Taylor-series expansion or a Monte Carlo simulation.

What are the limitations of using an unbiased estimator of modulation depth?

Like any statistical method, an unbiased estimator of modulation depth has its limitations. These may include the assumptions made about the signal being analyzed, the accuracy of the measurement equipment, and the potential for error or bias in the calculation process. It is important for scientists to carefully consider these limitations when interpreting the results.

Can an unbiased estimator of modulation depth be used for any type of signal?

No, an unbiased estimator of modulation depth may not be suitable for all types of signals. It is typically most effective for signals that exhibit a certain amount of regularity or periodicity. Other types of signals, such as random noise or highly complex patterns, may require different methods for measuring modulation depth.
