Proving Unbiased Estimator: E(2/n*sum from 1 to n of Y(i)) = theta

  • Thread starter semidevil
In summary, the proposed estimator, for Y(i) uniform on 0 < y < theta, is (theta hat) = 2/n * sum from 1 to n of Y(i). To prove it is unbiased, we must show that the expected value of (theta hat) equals theta, which follows from E(Y(i)) = theta/2, true for a uniform distribution on (0, theta). Note that the opening post describes it as an estimator "for 1/theta", which appears to be a slip: it estimates theta itself.
  • #1
semidevil
OK, so we know that an estimator for 1/theta, for 0 < y < theta, is (theta hat) = 2/n * sum from 1 to n of Y(i).


to prove that the estimator is unbiased, I need to show that the expected value of (theta hat) = theta.

so E(2/n*sum from 1 to n of Y(i)) =

2/n * sum from 1 to n of E(Y(i)).

then the book says we can cancel stuff because E(Y(i)) = theta/2.

so why is it equal to theta/2? I'm doing other problems similar to this, so do I just put E(Y(i)) = theta/2 for everything?

confused...
 
  • #2
I think that E(Y) is equal to theta/2 ONLY if the distribution of Y is uniform on (0, theta).

Think about a flat probability density in y from 0 to 1. The expected value of y is then 1/2, the midpoint of the interval.
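That intuition is easy to check numerically. The sketch below (my own illustration, not from the thread) draws many samples from a flat distribution on [0, 1] and averages them:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
n = 100_000

# Draw many samples from a flat (uniform) distribution on [0, 1]
samples = [random.random() for _ in range(n)]

mean = sum(samples) / n
print(mean)  # close to 0.5, the expected value of Uniform(0, 1)
```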

You also mentioned that the estimator is for 1/theta, not theta, which doesn't seem consistent, since you then try to show that E(theta hat) = theta.
 
  • #3


To prove that an estimator is unbiased, we need to show that its expected value is equal to the true population parameter. In this case, we are trying to show that the expected value of (theta hat) is equal to theta.

First, we can rewrite (theta hat) as 2/n * sum from 1 to n of Y(i) as given in the problem. Then, using linearity of expectation, we can move the constant 2/n outside of the sum and write it as 2/n * sum from 1 to n of E(Y(i)).

Next, we need to determine the expected value of Y(i). The problem states that 0 < Y(i) < theta, which means that the distribution of Y(i) is bounded between 0 and theta. This suggests that Y(i) follows a uniform distribution with parameters 0 and theta. The expected value of a uniform distribution on the interval [a, b] is (a + b) / 2. In this case, a = 0 and b = theta, so E(Y(i)) = (0 + theta) / 2 = theta / 2.
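As a sanity check on E(Y(i)) = theta/2, one can approximate the defining integral E(Y) = integral from 0 to theta of y * (1/theta) dy with a midpoint Riemann sum. This is a sketch of my own; theta = 3 is an arbitrary example value:

```python
theta = 3.0        # arbitrary example value of the parameter
N = 1_000_000      # number of subintervals for the Riemann sum
dx = theta / N

# Midpoint Riemann sum for E[Y] = integral_0^theta y * (1/theta) dy;
# the density of Uniform(0, theta) is 1/theta on (0, theta)
ey = sum((i + 0.5) * dx * (1.0 / theta) * dx for i in range(N))

print(ey)  # close to theta / 2 = 1.5
```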

Thus, we can rewrite our expression as 2/n * sum from 1 to n of (theta / 2). Since the sum is just adding n copies of theta / 2, we can simplify this to (2/n) * (n * theta / 2) = theta, which is the true population parameter. Therefore, we have shown that the expected value of (theta hat) is equal to theta, which proves that the estimator is unbiased.

In summary, to prove that an estimator is unbiased, we need to show that its expected value is equal to the true population parameter. In this problem, we showed that the expected value of (theta hat) is equal to theta by using the fact that Y(i) follows a uniform distribution and the properties of linearity of expectation.
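The whole argument can also be verified by simulation. The sketch below (my own code, with arbitrary choices theta = 5 and sample size n = 50) repeats the experiment many times and averages the resulting estimates; if the estimator is unbiased, that average should settle near theta:

```python
import random

random.seed(1)
theta = 5.0      # true parameter (arbitrary choice for the demo)
n = 50           # sample size per experiment
trials = 20_000  # number of repeated experiments

def theta_hat(sample):
    # The estimator from the thread: (2/n) * sum of the Y(i)
    return 2.0 / len(sample) * sum(sample)

estimates = [theta_hat([random.uniform(0, theta) for _ in range(n)])
             for _ in range(trials)]
avg = sum(estimates) / trials
print(avg)  # close to theta = 5.0
```

Each individual estimate fluctuates around theta, but the average across trials converges to it, which is exactly what unbiasedness promises.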
 

1. What is an unbiased estimator?

An unbiased estimator is a statistical method used to estimate a population parameter, such as the mean or variance, that does not systematically over or underestimate the true value.

2. How do you prove that an estimator is unbiased?

To prove that an estimator is unbiased, you need to show that the expected value of the estimator is equal to the true population parameter. In the case of E(2/n*sum from 1 to n of Y(i)) = theta, we need to show that the expected value of this estimator is equal to the true value of the population parameter, theta.

3. What does E(2/n*sum from 1 to n of Y(i)) = theta mean?

This notation means that the expected value of the estimator, 2/n times the sum of all the observations, is equal to the true population parameter, theta. In other words, this estimator is unbiased and on average, will provide an accurate estimate of the true value.

4. How is the sum of Y(i) related to the population parameter?

The sum of Y(i) is related to the population parameter through the estimator (theta hat) = 2/n * sum from 1 to n of Y(i), whose expected value is theta. In other words, twice the sample mean of the observations gives an unbiased estimate of the population parameter.

5. Why is an unbiased estimator important?

An unbiased estimator is important because it has no systematic error: on average it neither over- nor underestimates the parameter, so we can use it to make accurate inferences about the population from sample data. Note, however, that unbiasedness alone does not guarantee efficiency; a biased estimator can sometimes have a smaller variance or mean squared error than an unbiased one.
