Bias of an estimator: Can you confirm that I am doing this right?

  • Context: Graduate
  • Thread starter: LoadedAnvils
  • Tags: Bias

Discussion Overview

The discussion revolves around the calculation of the bias of an estimator for a Poisson distribution, specifically examining the estimator \(\hat{\lambda} = n^{-1} \sum_{i=1}^{n} X_{i}\). Participants explore the implications of using different expectation notations and seek clarification on the correctness of their calculations regarding bias, standard error, and mean squared error (MSE).

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant asserts that the bias of \(\hat{\lambda}\) is zero, claiming that the average of trials is an unbiased estimate of the mean for distributions where the mean exists, citing the law of large numbers.
  • Another participant questions the use of the notation \(\mathbb{E}_{\lambda}\) as equivalent to \(\mathbb{E}\), emphasizing the need for clarity on what variable the expectation operator is applied to.
  • Some participants discuss the definitions provided in their textbooks regarding expectation, noting that the notation \(\mathbb{E}_{\theta}(r(X))\) may imply a conditional expectation based on the parameter \(\theta\).
  • There is a suggestion that the expected value of a sum of random variables can be computed directly or through the probability density function, indicating different approaches to the problem.
  • One participant expresses uncertainty about their calculations and seeks confirmation on their methodology for calculating bias and other statistical measures.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the interpretation of expectation notation or the correctness of the calculations presented. Multiple competing views remain regarding the definitions and implications of the expectation operator in this context.

Contextual Notes

Limitations include potential misunderstandings of notation and definitions from different textbooks, as well as unresolved steps in the mathematical derivations presented by participants.

LoadedAnvils
Let X_{1}, \ldots, X_{n} \sim \textrm{Poisson}(\lambda) and let \hat{\lambda} = n^{-1} \sum_{i = 1}^{n} X_{i}.

The bias of \hat{\lambda} is \mathbb{E}_{\lambda} (\hat{\lambda}) - \lambda. Since X_{i} \sim \textrm{Poisson}(\lambda), and all X_{i} are IID, \sum_{i = 1}^{n} X_{i} \sim \textrm{Poisson}(n \lambda).

Thus, writing S = \sum_{i = 1}^{n} X_{i}, so that \hat{\lambda} = S / n, we get \mathbb{E}(\hat{\lambda}) = \sum_{s = 0}^{\infty} \frac{s}{n} \exp{(-n \lambda)} \frac{(n \lambda)^{s}}{s!} = \frac{n \lambda}{n} = \lambda, and the estimator is unbiased (bias = 0).

However, I'm using \mathbb{E}_{\lambda} as \mathbb{E}, and I don't know if I'm doing it right. I haven't seen any similar examples and this is the first time I'm calculating the bias, so I would really love some insight.
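The expectation computed above can be checked numerically. Below is a hedged sketch (the parameter values lam = 2, n = 10 and the truncation point are assumptions for illustration): it sums (s/n) times the Poisson(n·lam) pmf over counts s, evaluating the pmf in log space to avoid overflow in (n·lam)^s and s!.

```python
import math

# Numerical check of the bias calculation: E(lambda_hat) for lambda_hat = S/n,
# where S = X_1 + ... + X_n ~ Poisson(n*lam).
# The infinite sum is truncated at `cutoff` (assumption: the tail beyond it is
# negligible for the parameters used here).
def expected_lambda_hat(lam, n, cutoff=500):
    mu = n * lam  # S is Poisson(mu)
    total = 0.0
    for s in range(cutoff):
        # log pmf of Poisson(mu) at s, then exponentiate
        log_pmf = -mu + s * math.log(mu) - math.lgamma(s + 1)
        total += (s / n) * math.exp(log_pmf)
    return total

est = expected_lambda_hat(lam=2.0, n=10)
print(est, est - 2.0)  # expectation ~ lam, so the bias ~ 0
```

The result matching lam up to numerical precision is consistent with the bias being exactly zero.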
 
For any distribution where the mean exists (including Poisson), an average of trials is always an unbiased estimate of the mean. All you need is the law of large numbers.
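The unbiasedness claim can also be illustrated by simulation. A minimal Monte Carlo sketch, assuming parameters lam = 3 and n = 25 purely for illustration: draw many independent samples, compute lambda_hat for each, and check that the average of the estimates is close to lam.

```python
import numpy as np

# Monte Carlo sketch of unbiasedness of the sample mean for Poisson data.
rng = np.random.default_rng(0)
lam, n, trials = 3.0, 25, 200_000

samples = rng.poisson(lam, size=(trials, n))  # each row is one experiment
lam_hats = samples.mean(axis=1)               # one estimate lambda_hat per row

print(lam_hats.mean())  # close to 3.0, consistent with zero bias
```

Note that simulation can only show the bias is small, not exactly zero; the exact statement comes from linearity of expectation.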
 
Thanks. However, I still want to know if I calculated this correctly (as I will be doing the same for calculating the standard error and MSE).
 
LoadedAnvils said:
However, I'm using \mathbb{E}_{\lambda} as \mathbb{E}, and I don't know if I'm doing it right.

What does "using \mathbb{E}_{\lambda} as \mathbb{E}" mean? For the expectation operator to have a definite meaning, you must say what variable \mathbb{E} is being applied to.
 
Stephen Tashi said:
What does "using \mathbb{E}_{\lambda} as \mathbb{E}" mean? For the expectation operator to have a definite meaning, you must say what variable \mathbb{E} is being applied to.

The textbook defines E_{\theta} \left( r(X) \right) = \int r(x) f(x; \theta) dx.

What I did was just evaluate the expectation of \hat{\lambda}.
 
LoadedAnvils said:
The textbook defines E_{\theta} \left( r(X) \right) = \int r(x) f(x; \theta) dx.
One would also need to know how the textbook defines the various things involved in that expression. To me that looks like some sort of conditional expectation where the condition is given by the value of the parameter \theta used in the probability density f.


In contrast to that, the notation E_X Y often means "the expected value of the function Y with respect to the random variable X." If the probability density of X is f(x), then this notation means E_X Y = \int Y(x) f(x) dx.

To relate the above notation to your work:

X = Y = \hat{\lambda}
The possible values of X are x = 0, \frac{1}{n}, \frac{2}{n}, \ldots, so that nx ranges over the nonnegative integers.
The probability of the value x is f(x) = e^{-n\lambda} \frac{ (n \lambda)^{nx}}{(nx)!}

Taking the usual view that a sum is a type of integral, you should compute
E_X Y = \int x\ f(x) dx = \sum_{nx=0}^\infty x\ e^{-n\lambda} \frac{ (n \lambda)^{nx}}{(nx)!}

If you did not know the probability density function for \hat{\lambda}, then you could have used the theorem that the expected value of a sum of random variables is the sum of their expected values, and obtained the result in a less direct way.
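The two routes just described can be compared side by side. A sketch, with lam = 1.5 and n = 8 assumed only for illustration: the direct sum against the pmf of S = sum X_i ~ Poisson(n·lam), versus linearity of expectation, truncating the sum where the Poisson tail is negligible.

```python
import math

lam, n = 1.5, 8
mu = n * lam  # S = X_1 + ... + X_n is Poisson(mu)

# Route 1: direct sum, E(lambda_hat) = sum over counts s of (s/n) * P(S = s),
# with the pmf evaluated in log space to avoid overflow.
direct = sum(
    (s / n) * math.exp(-mu + s * math.log(mu) - math.lgamma(s + 1))
    for s in range(400)
)

# Route 2: linearity, E(lambda_hat) = (1/n) * sum_i E(X_i), with E(X_i) = lam.
via_linearity = n * lam / n

print(direct, via_linearity)  # both ~ 1.5
```

Both routes agree, which is the point: linearity gives the answer without needing the distribution of the sum at all.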
 
Stephen Tashi said:
One would also need to know how the textbook defines the various things involved in that expression. To me that looks like some sort of conditional expectation where the condition is given by the value of the parameter \theta used in the probability density f.

This notation (the E_{\theta}[\text{ something }]) is often used when the assumption is that the family of distributions is indexed by a (real- or vector-valued) parameter \theta. In that context there is no possibility of interpreting it as a conditional expectation.
 