Statistics Unbiased Estimators

  • #1
1. Let [itex]X_1, X_2, \dots, X_n[/itex] be independent, identically distributed random variables with expected value [itex]\mu[/itex] and variance [itex]\sigma^2[/itex]. Consider the class of linear estimators of the form
[tex]\widehat{\mu} = a_1 X_1 + a_2 X_2 + \dots + a_n X_n \qquad (1)[/tex]
for the parameter [itex]\mu[/itex], where [itex]a_1, a_2, \dots, a_n[/itex] are arbitrary constants.
a) Find the expected value of the estimator [itex]\widehat{\mu}[/itex].
b) Find the variance of this estimator.
c) When is [itex]\widehat{\mu}[/itex] an unbiased estimator of [itex]\mu[/itex]?
d) Among all linear unbiased estimators of the form (1), find the minimum variance estimator.
Hint: use the Cauchy-Schwarz inequality.
e) What is the variance of the best unbiased estimator of the form (1)?

I am really lost and confused.
For (a) I got that the expected value is (X1 + X2 + ... + Xn)/n. Is this correct, or do I need to include the arbitrary constants? I thought about it again and got something totally different: I summed a1EX1 + a2EX2 + ... + anEXn, and wrote it in general form as [itex]\sum a_i EX_i[/itex].
For (b), is the variance just [itex]Var(\widehat{\mu}) = \sigma^2 \sum_{i=1}^n a_i^2[/itex]?
For (c): when the Cramer-Rao inequality is satisfied?
I have not attempted (d) or (e) yet, as I want to confirm that I am on the right track for (a), (b), and (c), which I suspect I'm not.
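Editor's note: not part of the original post, but a quick Monte Carlo sketch (assuming normal [itex]X_i[/itex] for concreteness; any distribution with the given mean and variance would do) can sanity-check the closed forms [itex]E[\widehat{\mu}] = \mu \sum a_i[/itex] and [itex]Var(\widehat{\mu}) = \sigma^2 \sum a_i^2[/itex] being discussed:

```python
import random

# Sketch (editor's own check, not from the thread): estimate the mean and
# variance of mu_hat = sum(a_i * X_i) by simulation and compare with
# E[mu_hat] = mu * sum(a_i) and Var[mu_hat] = sigma^2 * sum(a_i^2)
# for i.i.d. X_i with mean mu and variance sigma^2.

random.seed(0)
mu, sigma = 2.0, 3.0
a = [0.5, 0.3, 0.2]          # weights sum to 1, so the estimator is unbiased
trials = 200_000

samples = []
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in a]
    samples.append(sum(ai * xi for ai, xi in zip(a, xs)))

mean_hat = sum(samples) / trials
var_hat = sum((s - mean_hat) ** 2 for s in samples) / trials

print(mean_hat)   # should be close to mu * sum(a) = 2.0
print(var_hat)    # should be close to sigma^2 * sum(a_i^2) = 9 * 0.38 = 3.42
```

With 200,000 trials the simulated values land within a few hundredths of the formulas, which matches the answers worked out in the replies below only when the [itex]a_i^2[/itex] (not [itex]a_i[/itex]) appear in the variance.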
 

Answers and Replies

  • #2
EnumaElish
Science Advisor
Homework Helper
(a) Is [itex]\widehat{\mu}[/itex] defined with arbitrary constants, or with [itex]a_i = 1/n[/itex]? That answers your first question.

[itex]\sum a_i EX_i[/itex] is right; now substitute in [itex]EX_i[/itex] = ___ (fill in the blank).

(b) The formula is [itex]Var[\sum a_i X_i] = \sum a_i^2 Var[X_i][/itex]; check your answer against this.

(c) No; what is the definition of an unbiased estimator?
 
  • #3
So these are my revised answers for a-c:
a: [itex]\mu (a_1 + a_2 + \dots + a_n)[/itex]
b: [itex]\sigma^2 (a_1 + a_2 + \dots + a_n)[/itex]
c: the expectation of the estimator has to equal [itex]\mu[/itex], so in (a) the [itex](a_1 + a_2 + \dots + a_n)[/itex] part has to equal 1.
As for (d), I am unsure how to start. But I have noticed that the LHS of the Cauchy-Schwarz inequality can be replaced by 1^2, since it is identical to (a1 + a2 + ...) but in summation form. I don't know how to go from there, though.
As for (e), would it just equal the answer in (d), since they are supposed to be equal? I found in my notes that best unbiased estimator = minimum variance estimator.
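Editor's sketch (not part of the thread): a tiny numerical check of the claim in part (d) that, subject to the unbiasedness constraint [itex]\sum a_i = 1[/itex], the variance factor [itex]\sum a_i^2[/itex] is smallest for equal weights [itex]a_i = 1/n[/itex]:

```python
# Sketch (editor's own check): among weight vectors with sum(a) = 1, the
# variance factor sum(a_i^2) is minimized by equal weights a_i = 1/n,
# where it equals 1/n -- consistent with Cauchy-Schwarz:
# 1 = (sum a_i)^2 <= n * sum(a_i^2).

n = 4
candidates = [
    [0.25, 0.25, 0.25, 0.25],   # equal weights
    [0.4, 0.3, 0.2, 0.1],
    [0.7, 0.1, 0.1, 0.1],
    [1.0, 0.0, 0.0, 0.0],       # a single observation
]

for a in candidates:
    assert abs(sum(a) - 1.0) < 1e-12          # unbiasedness constraint
    print(a, sum(ai ** 2 for ai in a))        # equal weights give the smallest factor
```

Equal weights attain the Cauchy-Schwarz lower bound 1/n = 0.25; every other unbiased weight vector gives a strictly larger factor.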
 
  • #4
EnumaElish
Science Advisor
Homework Helper
b isn't right.

The LHS of C-S is a.X, where a and X are vectors and . is the dot product.

For this problem C-S reads

[tex](a \cdot X)^2 \le \|a\|^2 \, \|X\|^2[/tex]

where [itex]\|a\| = \sqrt{a \cdot a}[/itex] and [itex]\|X\| = \sqrt{X \cdot X}[/itex]. How did you get to 1^2?

I don't understand your answer to e. It is asking for a variance.
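Editor's note: for reference, the Cauchy-Schwarz step the hint points to can be written out as follows (my own filling-in of the argument, using the all-ones vector rather than X itself):

```latex
% Apply Cauchy-Schwarz to a = (a_1, \dots, a_n) and the all-ones vector:
% \big(\sum_i a_i \cdot 1\big)^2 \le \big(\sum_i a_i^2\big)\big(\sum_i 1^2\big).
% Under the unbiasedness constraint \sum_i a_i = 1 this gives
1 = \Big(\sum_{i=1}^{n} a_i\Big)^{2} \le n \sum_{i=1}^{n} a_i^{2}
\quad\Longrightarrow\quad
\operatorname{Var}(\widehat{\mu}) = \sigma^{2}\sum_{i=1}^{n} a_i^{2} \ge \frac{\sigma^{2}}{n},
% with equality iff a_i = 1/n, i.e. \widehat{\mu} = \bar{X},
% which also gives the variance asked for in (e).
```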
 
