How to show that the Method of Moments Estimators for the Normal Distribution are consistent


Homework Help Overview

The discussion revolves around demonstrating the consistency of moment estimators for the normal distribution, specifically focusing on the parameters \(\mu\) and \(\sigma^2\). Participants are exploring the conditions required to show that these estimators are consistent by examining their expected values and variances as the sample size approaches infinity.

Discussion Character

  • Exploratory, Assumption checking, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss the need to show that the expected value of each estimator equals the corresponding parameter and that its variance approaches zero as the sample size increases. There are attempts to calculate the expected values and variances for both \(\mu\) and \(\sigma^2\), with some confusion about the variance calculations.

Discussion Status

Some participants have provided insights into the bias of the \(\sigma^2\) estimator and have pointed out errors in the calculations presented. There is an ongoing exploration of the properties of the estimators, particularly regarding the relationship between the sample mean and the variance.

Contextual Notes

Participants are working under the constraints of homework rules, which may limit the information they can share or the methods they can use. There is a focus on ensuring that the estimators are unbiased and consistent, with specific attention to the calculations involved.

trap101
So I'm trying to show that the estimators for the normal distribution by the method of moments are consistent. To show consistency, I have to:

1) Show ##E(\hat{\theta}) = \theta##

2) Show ##\lim_{n \to \infty} \mathrm{Var}(\hat{\theta}) = 0##

Since there are two estimators for the normal distribution, ##\hat{\mu}## and ##\hat{\sigma}^2##, I have to prove that each is consistent.

To prove it for ##\hat{\mu}##:

$$E(\hat{\mu}) = E\left(\frac{1}{n}\sum X_i\right)$$

Working along, I get that ##\hat{\mu}## is unbiased, as is ##\hat{\sigma}^2##, but when I try to show the second condition I get tripped up for both estimators.

For ##\hat{\mu}##:

$$\mathrm{Var}(\hat{\mu}) = \mathrm{Var}\left(\frac{1}{n}\sum X_i\right) = \frac{1}{n^2}\sum \mathrm{Var}(X_i)$$

Similarly for ##\hat{\sigma}^2##:

$$\mathrm{Var}(\hat{\sigma}^2) = \frac{1}{n^2}\sum \mathrm{Var}\left[(X_i - \bar{X})^2\right]$$

How do I proceed for these two estimators from here?
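Not part of the original post, but the second condition for ##\hat{\mu}## can be checked empirically. A small NumPy sketch (the helper name `mom_estimates` and the parameter values are my own illustrative choices): the spread of ##\hat{\mu}## across repeated samples shrinks like ##\sigma^2/n##.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0  # illustrative true parameters (hypothetical values)

def mom_estimates(n, reps=20000):
    """Draw `reps` samples of size n and return the MoM estimates for each sample."""
    x = rng.normal(mu, sigma, size=(reps, n))
    mu_hat = x.mean(axis=1)           # (1/n) * sum X_i
    sig2_hat = x.var(axis=1, ddof=0)  # (1/n) * sum (X_i - Xbar)^2
    return mu_hat, sig2_hat

for n in (10, 100, 1000):
    mu_hat, _ = mom_estimates(n)
    # Empirical Var(mu_hat) should track sigma^2 / n = 9 / n
    print(n, round(mu_hat.var(), 4))
```

The printed variances drop by roughly a factor of ten each time ##n## grows tenfold, which is the ##\sigma^2/n \to 0## behavior the second condition asks for.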
 
trap101 said:
So I'm trying to show that the estimators for the normal distribution by the method of moments are consistent. ... How do I proceed for these two estimators from here?

If you are estimating ##\sigma^2## using
$$\text{est}(\sigma^2) = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2$$
then your estimate is biased.
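A quick numerical illustration of this point (the sample size and parameter values here are my own choices, not from the thread): averaging the ##\frac{1}{n}\sum (X_i - \bar{X})^2## estimator over many samples lands near ##\frac{n-1}{n}\sigma^2##, not ##\sigma^2##.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 5, 200000  # illustrative values (hypothetical)

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
sig2_hat = x.var(axis=1, ddof=0)  # the MoM estimator (1/n) * sum (X_i - Xbar)^2

# Mean of the estimator comes out near (n-1)/n * sigma2 = 3.2, not 4.0
print(round(sig2_hat.mean(), 3))
```

This is why dividing by ##n-1## instead of ##n## (the usual sample variance) removes the bias.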
 
Ray Vickson said:
If you are estimating ##\sigma^2## using
$$\text{est}(\sigma^2) = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X})^2$$
then your estimate is biased.


Then I established that result wrong. I did this:

For the ##\hat{\sigma}^2## estimate:

$$E(\hat{\sigma}^2) = \frac{1}{n}\sum E\left[(X_i - \bar{X})^2\right] = \frac{1}{n}\sum \sigma^2 = \frac{1}{n}(n\sigma^2) = \sigma^2$$
 
Your error is stating that
$$E\left(X_i - \overline{X}\right)^2 = \sigma^2$$
 
trap101 said:
Then I established that result wrong. I did this: ...

As 'statdad' has explained to you, ##E(X_i - \bar{X})^2 \neq \sigma^2##.

Take ##i = 1##, for example. We have
$$X_1 - \bar{X} = \left(1-\frac{1}{n}\right) X_1 - \frac{1}{n} X_2 - \cdots - \frac{1}{n} X_n$$
You need to square this, expand it out, then take the expectation, and yes, I am perfectly serious! Try it yourself; it is not as bad as you might think at first. The basic properties you need are ##E X_j^2 = \sigma^2 + \mu^2## and the fact that the different ##X_j## are independent, so that ##E X_i X_j## is easy to get for ##i \neq j##.
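The suggested expansion can also be checked mechanically. A small sketch in exact rational arithmetic (the helper name `exact_E_square` is mine): it squares ##X_1 - \bar{X}## term by term using only ##E X_j^2 = \sigma^2 + \mu^2## and ##E X_i X_j = \mu^2## for ##i \neq j##.

```python
from fractions import Fraction

def exact_E_square(n, sigma2, mu):
    """Exact E[(X_1 - Xbar)^2] using only E[X_j^2] = sigma2 + mu^2
    and E[X_i X_j] = mu^2 for i != j (independence)."""
    sigma2, mu = Fraction(sigma2), Fraction(mu)
    # Coefficients of X_1 - Xbar = (1 - 1/n) X_1 - (1/n) X_2 - ... - (1/n) X_n
    c = [1 - Fraction(1, n)] + [-Fraction(1, n)] * (n - 1)
    total = Fraction(0)
    for i in range(n):
        for j in range(n):
            exx = sigma2 + mu**2 if i == j else mu**2
            total += c[i] * c[j] * exx
    return total

# Comes out to (1 - 1/n) * sigma2; all the mu terms cancel because sum(c) = 0
print(exact_E_square(5, 4, 7))   # -> 16/5, i.e. (4/5) * 4
```

So ##E(X_1 - \bar{X})^2 = \frac{n-1}{n}\sigma^2##, independent of ##\mu##, which is exactly the bias factor noted earlier in the thread.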
 
