Statistics Unbiased Estimators

braindead101
1. Let ##X_1, X_2, \ldots, X_n## be independent, identically distributed random variables with expected value ##\mu## and variance ##\sigma^2##. Consider the class of linear estimators of the form
##\hat{\mu} = a_1 X_1 + a_2 X_2 + \cdots + a_n X_n \qquad (1)##
for the parameter ##\mu##, where ##a_1, a_2, \ldots, a_n## are arbitrary constants.
a) Find the expected value of the estimator ##\hat{\mu}##.
b) Find the variance of this estimator.
c) When is ##\hat{\mu}## an unbiased estimator of ##\mu##?
d) Among all linear unbiased estimators of the form (1), find the minimum variance estimator.
Hint: Use the Cauchy-Schwarz inequality.
e) What is the variance of the best unbiased estimator of the form (1)?

I am really lost and confused.
For (a) I first got ##(X_1 + X_2 + \cdots + X_n)/n## as the expected value. Is that correct, or do I need to include the arbitrary constants? I thought about it again and got something totally different: I summed ##a_1 E[X_1] + a_2 E[X_2] + \cdots + a_n E[X_n]## and wrote it in general as ##\sum_i a_i E[X_i]##.
For (b), is the variance just ##\operatorname{Var}(\hat{\mu}) = \sigma^2 \sum_{i=1}^n a_i^2##?
For (c), is it when the Cramer-Rao inequality is satisfied?
I have not attempted (d) or (e) yet, as I first want to confirm that I am on the right track for (a), (b), and (c), which I suspect I am not.
 
(a) Is ##\hat{\mu}## defined with arbitrary constants ##a_i##, or with ##a_i = 1/n##? That determines your answer.

##\sum_i a_i E[X_i]## is right; now substitute ##E[X_i] = \_\_\_## (fill in the blank).

(b) The formula is ##\operatorname{Var}\!\left[\sum_i a_i X_i\right] = \sum_i a_i^2 \operatorname{Var}[X_i]## for independent ##X_i##; check your answer against this.

(c) No; what is the definition of an unbiased estimator?
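To make those hints concrete, here is a minimal worked sketch for (a)-(c), using nothing beyond the problem's assumptions that the ##X_i## are i.i.d. with mean ##\mu## and variance ##\sigma^2##:

$$
\begin{aligned}
E[\hat{\mu}] &= E\!\left[\sum_{i=1}^n a_i X_i\right] = \sum_{i=1}^n a_i E[X_i] = \mu \sum_{i=1}^n a_i, \\
\operatorname{Var}[\hat{\mu}] &= \operatorname{Var}\!\left[\sum_{i=1}^n a_i X_i\right] = \sum_{i=1}^n a_i^2 \operatorname{Var}[X_i] = \sigma^2 \sum_{i=1}^n a_i^2 \quad \text{(by independence)}, \\
E[\hat{\mu}] &= \mu \text{ for every } \mu \iff \sum_{i=1}^n a_i = 1 \quad \text{(unbiasedness, part c)}.
\end{aligned}
$$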
 
So these are my revised answers for (a)-(c):
a: ##\mu (a_1 + a_2 + \cdots + a_n)##
b: ##\sigma^2 (a_1 + a_2 + \cdots + a_n)##
c: The expectation of the estimator has to equal ##\mu##, so in (a) the ##(a_1 + a_2 + \cdots + a_n)## factor has to equal 1.
As for (d), I am unsure how to start. I have noticed that the LHS of the Cauchy-Schwarz inequality can be replaced by ##1^2##, since it is identical to ##(a_1 + a_2 + \cdots)## but in summation form, but I don't know how to go from there.
As for (e), would it just equal the answer in (d)? They are supposed to be equal; I found in my notes that the best unbiased estimator is the minimum variance estimator.
 
b isn't right.

The LHS of Cauchy-Schwarz is ##a \cdot X##, where ##a## and ##X## are vectors and ##\cdot## is the dot product.

For this problem, Cauchy-Schwarz is

##(a \cdot X)^2 \le \|a\|^2 \, \|X\|^2,##

where ##\|a\| = \sqrt{a \cdot a}## and ##\|X\| = \sqrt{X \cdot X}##. How did you get to ##1^2##?

I don't understand your answer to (e). It is asking for a variance.
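A sketch of how the Cauchy-Schwarz hint can finish (d) and (e): pair ##a## with the all-ones vector ##\mathbf{1} = (1, \ldots, 1)## (that particular choice of second vector is an assumption here, not stated in the problem), and use the unbiasedness constraint ##\sum_i a_i = 1## from (c):

$$
\begin{aligned}
\left(\sum_{i=1}^n a_i\right)^2 = (a \cdot \mathbf{1})^2 &\le \|a\|^2 \, \|\mathbf{1}\|^2 = n \sum_{i=1}^n a_i^2, \\
\text{so with } \sum_{i=1}^n a_i = 1: \qquad \sum_{i=1}^n a_i^2 &\ge \frac{1}{n}, \quad \text{with equality iff } a_i = \frac{1}{n} \text{ for all } i, \\
\text{hence} \qquad \operatorname{Var}[\hat{\mu}] = \sigma^2 \sum_{i=1}^n a_i^2 &\ge \frac{\sigma^2}{n}.
\end{aligned}
$$

Equality forces ##a_i = 1/n##, i.e. the sample mean ##\bar{X}##, which would be the minimum variance estimator asked for in (d); its variance ##\sigma^2/n## is then the answer to (e).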
 