Optimizing Linear Estimators for Minimum Variance: How to Find the Best Weights?

In summary, the thread asks how to choose weights W(i) so that the linear estimator \hat\mu = \sum W(i) Y(i) is unbiased, E(\hat\mu) = u, and minimizes E[(u-\hat\mu)^2]. Unbiasedness forces \sum w(i) = 1, and since the errors are independent the quantity to minimize is the variance \sum w(i)^2 \sigma(i)^2. The method of undetermined (Lagrange) multipliers then yields the optimal w(i) satisfying the constraint.
  • #1
purplebird
Given
Y(i) = u + e(i), i = 1, 2, ..., N
where the e(i) are statistically independent, u is a parameter,
the mean of each e(i) is 0,
and the variance of e(i) is [tex]\sigma(i)^2[/tex].

Find W(i) such that the linear estimator

[tex]\hat\mu = \sum W(i)Y(i)[/tex] for i = 1 to N

has

mean value E([tex]\hat\mu[/tex]) = u

and E[[tex](u-\hat\mu)^2[/tex]] is a minimum.
 
  • #2
There may be other ways to do this, but one of the most direct is outlined below. Write the estimator as
[tex]
\hat\mu = \sum w(i) Y(i)
[/tex]
Then, since each Y(i) has expectation u, in order for [tex] E(\hat \mu) = u [/tex] to hold you must have

[tex]
\sum w(i) = 1
[/tex]

Next, note that [tex] E(\hat \mu - u)^2 [/tex] is simply the [tex] \textbf{variance} [/tex] of your estimate (since your estimate has expectation [tex] u [/tex]).

Since the [tex] Y(i) [/tex] are independent, the variance of [tex] \hat \mu [/tex] is

[tex]
\mathbf{Var}(\hat \mu) = \sum w(i)^2 \mathbf{Var}(Y(i)) = \sum w(i)^2 \sigma(i)^2
[/tex]

(Note that the variances need not be equal, so the [tex] \sigma(i)^2 [/tex] stay inside the sum.) You want to choose the [tex] w(i) [/tex] so that this expression is minimized, subject to the constraint that [tex] \sum w(i) = 1 [/tex].

From here on, use the method of undetermined (Lagrange) multipliers.
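Carrying the Lagrange-multiplier step through gives weights proportional to the inverse variances, w(i) = (1/\sigma(i)^2) / \sum_j (1/\sigma(j)^2). A minimal numerical sketch of that solution (the standard deviations below are made-up values, purely for illustration):

```python
import numpy as np

# Hypothetical per-observation standard deviations (assumed for illustration).
sigma = np.array([1.0, 2.0, 0.5, 1.5])

# The Lagrange-multiplier solution: w(i) proportional to 1/sigma(i)^2,
# normalized so that the weights sum to 1 (the unbiasedness constraint).
w = (1.0 / sigma**2) / np.sum(1.0 / sigma**2)
assert np.isclose(w.sum(), 1.0)

# Variance of the weighted estimator: sum of w(i)^2 * sigma(i)^2.
var_opt = np.sum(w**2 * sigma**2)

# Compare with naive equal weights w(i) = 1/N.
var_equal = np.sum((1.0 / len(sigma))**2 * sigma**2)

print(var_opt, var_equal)  # the inverse-variance weights never do worse
```

As a sanity check, the minimized variance equals 1 / \sum_i (1/\sigma(i)^2), so precise observations (small \sigma(i)) dominate the estimate.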
 

1. What is a linear estimator?

A linear estimator is a statistical model used to estimate the relationship between a dependent variable and one or more independent variables. It assumes that the dependent variable can be expressed as a weighted sum of the independent variables, i.e. the model is linear in its parameters.

2. How are the weights of a linear estimator determined?

The weights of a linear estimator are determined by minimizing the sum of squared errors between the predicted values and the actual values. This is done through a process called ordinary least squares.
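As a minimal sketch of ordinary least squares (the data points below are made up for illustration), fitting y ≈ a + b·x by minimizing the sum of squared errors:

```python
import numpy as np

# Made-up data that lies exactly on the line y = 1 + 2x.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

# Design matrix with an intercept column; lstsq solves min ||X w - y||^2,
# which is exactly the ordinary-least-squares criterion.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b = coef

print(a, b)  # ≈ 1.0, 2.0
```

Because the data are exactly linear here, the recovered intercept and slope match the generating line; with noisy data, OLS returns the weights that minimize the squared-error sum.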

3. What is the significance of the weights in a linear estimator?

The weights in a linear estimator represent the slope or coefficient of the independent variables in the linear equation. They indicate the strength and direction of the relationship between the variables.

4. Can the weights of a linear estimator be negative?

Yes, the weights of a linear estimator can be negative. A negative weight indicates an inverse relationship between the independent variable and the dependent variable, meaning that as the independent variable increases, the dependent variable decreases.

5. How do multicollinearity and outliers affect the weights of a linear estimator?

Multicollinearity, or high correlation between the independent variables, inflates the variance of the estimated weights in a linear estimator, making them unstable and unreliable. Outliers, or extreme data points, can also pull the fitted weights substantially, potentially skewing the results of the model.
