Bias of an estimator: Can you confirm that I am doing this right?

LoadedAnvils
Let X_{1}, \ldots, X_{n} \sim \textrm{Poisson}(\lambda) and let \hat{\lambda} = n^{-1} \sum_{i = 1}^{n} X_{i}.

The bias of \hat{\lambda} is \mathbb{E}_{\lambda}(\hat{\lambda}) - \lambda. Since X_{i} \sim \textrm{Poisson}(\lambda) and all X_{i} are IID, \sum_{i = 1}^{n} X_{i} \sim \textrm{Poisson}(n \lambda).

Thus, \mathbb{E}(\hat{\lambda}) = \sum_{nx = 0}^{\infty} x \, e^{-n \lambda} \frac{(n \lambda)^{nx}}{(nx)!} = \lambda, and the estimator is unbiased (bias = 0).
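To see that the last sum really gives \lambda, I can relabel with s = nx (so x = s/n, a relabeling I use just for this check) and shift the index:

\mathbb{E}(\hat{\lambda}) = \sum_{s = 0}^{\infty} \frac{s}{n} \, e^{-n\lambda} \frac{(n\lambda)^{s}}{s!} = \frac{n\lambda}{n} \sum_{s = 1}^{\infty} e^{-n\lambda} \frac{(n\lambda)^{s-1}}{(s-1)!} = \lambda \cdot 1 = \lambda.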

However, I'm using \mathbb{E}_{\lambda} as \mathbb{E}, and I don't know if I'm doing it right. I haven't seen any similar examples and this is the first time I'm calculating the bias, so I would really love some insight.
 
For any distribution where the mean exists (including the Poisson), the average of IID trials is always an unbiased estimate of the mean. All you need is linearity of expectation.
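As a quick numerical sanity check, here is a minimal simulation sketch (numpy assumed; the values \lambda = 3, n = 50, and the replication count are arbitrary choices, not anything from the problem):

Code:
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 3.0, 50, 200_000   # arbitrary illustration values

# Each row is one sample X_1, ..., X_n; each row mean is one draw of lambda-hat.
lam_hat = rng.poisson(lam, size=(reps, n)).mean(axis=1)

# With zero bias, the average of many lambda-hat draws should sit on lambda.
print(lam_hat.mean())          # ~3.0
print(lam_hat.mean() - lam)    # estimated bias, ~0

The Monte Carlo error shrinks like 1/\sqrt{\text{reps}}, so the estimated bias hovers near zero rather than hitting it exactly.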
 
Thanks. However, I still want to know whether I calculated this correctly, as I will be doing the same when calculating the standard error and the MSE.
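If the approach is right, the same kind of calculation should give \operatorname{Var}_{\lambda}(\hat{\lambda}) = n^{-2} \cdot n\lambda = \lambda / n, hence \operatorname{se}(\hat{\lambda}) = \sqrt{\lambda / n}, and, since the bias is 0, \operatorname{MSE} = \operatorname{bias}^{2} + \operatorname{Var}_{\lambda}(\hat{\lambda}) = \lambda / n.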
 
LoadedAnvils said:
However, I'm using \mathbb{E}_{\lambda} as \mathbb{E}, and I don't know if I'm doing it right.

What does "using \mathbb{E}_{\lambda} as \mathbb{E}" mean? For the expectation operator to have a definite meaning, you must say what variable \mathbb{E} is being applied to.
 
What does "using \mathbb{E}_{λ} as \mathbb{E}" mean? For the expectation operator to have a definite meaning, you must say what variable \mathbb{E} is being applied to.

The textbook defines E_{\theta} \left( r(X) \right) = \int r(x) f(x; \theta) dx.

What I did was just evaluate the expectation of \hat{\lambda}.
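Concretely, as far as I can tell, X here is the whole sample (X_{1}, \ldots, X_{n}), r is the sample mean, and f(x; \lambda) is the joint mass function, so the definition reads (with the integral taken as a sum over \mathbb{N}^{n}):

E_{\lambda}(\hat{\lambda}) = \sum_{x \in \mathbb{N}^{n}} \left( n^{-1} \sum_{i=1}^{n} x_{i} \right) \prod_{i=1}^{n} e^{-\lambda} \frac{\lambda^{x_{i}}}{x_{i}!}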
 
LoadedAnvils said:
The textbook defines E_{\theta} \left( r(X) \right) = \int r(x) f(x; \theta) dx.
One would also need to know how the textbook defines the various things involved in that expression. To me that looks like some sort of conditional expectation where the condition is given by the value of the parameter \theta used in the probability density f.


In contrast to that, the notation E_X Y often means "the expected value of the function Y with respect to the random variable X". If the probability density of X is f(x), then this notation means E_X Y = \int Y(x) f(x) dx.

To relate the above notation to your work:

  • X = Y = \hat{\lambda}.
  • The possible values of X are x = nx/n, where nx (the value of \sum_{i=1}^{n} X_{i}) ranges over 0, 1, 2, \ldots
  • The probability of the value x is f(x) = e^{-n\lambda} \frac{(n \lambda)^{nx}}{(nx)!}.

Taking the usual view that a sum is a type of integral, you should compute
E_X Y = \int x\, f(x)\, dx = \sum_{nx=0}^{\infty} \frac{nx}{n}\, e^{-n\lambda} \frac{(n \lambda)^{nx}}{(nx)!}

If you did not know the probability density function for \hat{\lambda}, then you could have used the theorem that the expected value of a sum of random variables is the sum of their expected values, and gotten the result in a less direct way.
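If you want to check that series numerically rather than in closed form, here is a small plain-Python sketch (the values \lambda = 2.5, n = 10, and the truncation point are arbitrary choices; the pmf is evaluated in log space to avoid overflow):

Code:
import math

lam, n = 2.5, 10     # arbitrary illustration values
mu = n * lam         # the total X_1 + ... + X_n is Poisson(mu)
K = 500              # truncation point; Poisson(25) mass beyond this is negligible

def poisson_pmf(k, mu):
    # Evaluate e^{-mu} mu^k / k! in log space to avoid overflow for large k.
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

# E(lambda-hat) = sum over nx of (nx / n) * P(total = nx)
expected = sum((k / n) * poisson_pmf(k, mu) for k in range(K + 1))
print(expected)      # ~2.5, i.e. lambda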
 
Stephen Tashi said:
One would also need to know how the textbook defines the various things involved in that expression. To me that looks like some sort of conditional expectation where the condition is given by the value of the parameter \theta used in the probability density f.

This notation (the E_{\theta}[\text{ something }]) is often used when the assumption is that the family of distributions is indexed by a (real- or vector-valued) parameter \theta. In that context there is no possibility of interpreting it as a conditional expectation.
 