What is an unbiased estimator?

  • Context: High School 
  • Thread starter: Voilstone

Discussion Overview

The discussion revolves around the concept of an unbiased estimator in statistics. Participants explore its definition, properties, and implications in statistical estimation, particularly in relation to sample data and parameters.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Homework-related

Main Points Raised

  • Some participants explain that an unbiased estimator is a random variable whose expected value equals the true value of the parameter being estimated.
  • One participant provides an example using regression, noting that while estimates vary due to noise, the mean of these estimates converges on the true values, suggesting unbiasedness.
  • Another participant emphasizes that unbiased estimators should have expectations that equal the parameter, and discusses the importance of consistency as sample size increases.
  • There is a question raised about the meaning of "noise" in data and its implications for estimators.
  • Participants discuss the relationship between unbiasedness and the usefulness of confidence intervals, suggesting that an unbiased estimator is crucial for accurate statistical inference.
  • One participant questions whether the definition of an unbiased estimator as mean/parameter is accurate and seeks clarification on its purpose.
  • Another participant highlights that estimators are functions of random variables and that each observation in a sample is independent, which simplifies variance calculations.
  • There is a reiteration of the need for the mean of the estimator to align with the parameter of interest, regardless of sample size.

Areas of Agreement / Disagreement

Participants express varying levels of understanding regarding unbiased estimators, with some providing definitions and examples while others seek clarification. The discussion includes both agreement on certain properties of unbiased estimators and unresolved questions about specific terms and implications.

Contextual Notes

Some participants express uncertainty about foundational concepts, such as the nature of estimators as random variables and the implications of noise in data. There are also discussions about the relationship between unbiasedness and statistical intervals that remain unresolved.

Who May Find This Useful

This discussion may be useful for students or individuals new to statistics who are seeking to understand the concept of unbiased estimators and their significance in statistical analysis.

Voilstone
What is an unbiased estimator?

I do not really understand what an unbiased estimator is in my statistics studies. Thanks!
 


This should be in the statistics forum, but since I can answer it...

Remember that estimators are random variables; an estimator is "unbiased" if its expected value is equal to the true value of the parameter being estimated. To use regression as an example...

Suppose you measured two variables x and y where the true (linear) relationship is given by

y = 5x + 2

Of course, any sample that you draw will have noise in it, so you have to estimate the true values of the slope and intercept. Suppose that you draw a thousand samples of x, y and calculate the least squares estimators for each sample (assuming that the noise is normally distributed). As you do that, you'll notice two things...

1) All of your estimates are different (because the data is noisy)
2) The mean of all of those estimates starts to converge on the true values (5 and 2)

The second occurs because the estimator is unbiased.
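A minimal sketch of this experiment in Python (the noise level, sample size per draw, and seed are my own choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_intercept = 5.0, 2.0
n_samples, n_points = 1000, 50  # a thousand samples, 50 (x, y) points each

slopes, intercepts = [], []
for _ in range(n_samples):
    x = rng.uniform(0, 10, n_points)
    # observed y = true line plus normally distributed noise
    y = true_slope * x + true_intercept + rng.normal(0, 3, n_points)
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line fit
    slopes.append(slope)
    intercepts.append(intercept)

# 1) individual estimates all differ; 2) their means land near (5, 2)
print(np.mean(slopes), np.mean(intercepts))
```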
 


Thread moved.
 


"any sample that you draw will have noise in it" — what does the noise mean here?
Do you mean something like a one-to-one function?
 


Voilstone said:
I do not really understand what an unbiased estimator is in my statistics studies. Thanks!

Hey Voilstone and welcome to the forums.

Let's say you have a parameter; for simplicity, let's say it's the mean.

Now the sample mean has a distribution that depends on the sample. Using your sample, you are trying to estimate a parameter: this is why we call these things estimators, because they are estimating something.

Unbiased estimators have the property that the expectation of the sampling distribution algebraically equals the parameter: in other words, the expectation of our estimator random variable gives us the parameter. If it doesn't, then the estimator is called biased.

Of course, we want estimators that are unbiased because, statistically, they will give us estimates that are centred on what they should be.

Also, the key thing is that this expectation equals the parameter even as the sample grows. You will learn that an estimator should also be consistent, which (for an unbiased estimator) basically means that the variance of the estimator goes to zero as the sample size goes to infinity.
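A small numerical sketch of both properties for the sample mean (the exponential population with mean 4 and the sample sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 4.0  # the parameter we are estimating

for n in (5, 50, 500):
    # 10,000 samples of size n; the sample mean of each is one estimate
    estimates = rng.exponential(true_mean, size=(10_000, n)).mean(axis=1)
    # unbiased: the average estimate sits near 4 for every n
    # consistent: the spread of the estimates shrinks as n grows
    print(n, estimates.mean().round(3), estimates.std().round(3))
```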
 


chiro said:
Hey Voilstone and welcome to the forums.

Of course, we want estimators that are unbiased because, statistically, they will give us estimates that are centred on what they should be.

Thanks, I am still new here.

Since unbiased estimator = mean/parameter, we want it to be close to the mean for what purposes?
 


Voilstone said:
Thanks, I am still new here.

Since unbiased estimator = mean/parameter, we want it to be close to the mean for what purposes?

The estimator is a function of a sample. Since each observation in the sample comes from the same distribution, we consider each observation to be the realization of a random variable that follows the true distribution. We also consider each observation to be independent: this simplifies many things, like variance, because independent samples have no covariance terms, which means we can add variances very easily (you will see results like this later; the sketch below checks it numerically).
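As a quick numerical check of the point about adding variances (the normal population and sample size are my own choices): with independent observations the covariance terms vanish, so the variance of a sum is the sum of the variances, and the variance of the sample mean is Var(X)/n.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
# 100,000 independent samples of size n from a population with variance 9
samples = rng.normal(loc=0.0, scale=3.0, size=(100_000, n))

# Var(X1 + ... + Xn) = n * Var(X) because there are no covariance terms
print(samples.sum(axis=1).var())   # close to 10 * 9 = 90
print(samples.mean(axis=1).var())  # close to 9 / 10 = 0.9
```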

So our estimator is a random variable that is a function of other random variables. Our estimator random variable has its own distribution (the sampling distribution), and it is this distribution that we use to draw conclusions about the parameter we are estimating.

When you start off, we look at estimating things like means and variances, but we can create estimators that are really complicated if we want to: it uses the same idea as the mean and the variance but it measures something else of interest.

But with regard to unbiasedness: if our estimator were biased, then the intervals we construct from statistical theory may not be the right intervals for the actual parameter.

Just think about it: if you had an estimator that was really biased (let's say its mean sits five standard deviations away from the parameter) and you built a 95% confidence interval from it, that interval wouldn't be useful, would it? You might as well not use an estimator at all if that were all you had.

So yeah, in response to being close to the parameter: yes, that is what we want. We want the mean of the estimator random variable to equal the parameter of interest, no matter what the sample is and no matter how big the sample is.
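A sketch of why bias ruins intervals (the five-standard-error shift mirrors the example above; the known-sigma z-interval and all the numbers are assumptions for illustration): the interval centred on the unbiased estimator covers the true mean about 95% of the time, while the biased one almost never does.

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean, sigma, n, trials = 0.0, 1.0, 25, 10_000
half = 1.96 * sigma / np.sqrt(n)  # 95% half-width with known sigma

hits_unbiased = hits_biased = 0
for _ in range(trials):
    x = rng.normal(true_mean, sigma, n)
    unbiased = x.mean()                          # expectation = true mean
    biased = x.mean() + 5 * sigma / np.sqrt(n)   # shifted five standard errors
    hits_unbiased += abs(unbiased - true_mean) <= half
    hits_biased += abs(biased - true_mean) <= half

print(hits_unbiased / trials)  # about 0.95
print(hits_biased / trials)    # near 0: the biased intervals miss the parameter
```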
 


Voilstone said:
I do not really understand what an unbiased estimator is in my statistics studies. Thanks!

First, do you understand what an estimator is? In particular, do you understand that an estimator is a random variable? (It is not generally a single number like 2.38.)
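For instance (a toy illustration with an arbitrary normal population; the value 2.38 just echoes the number in the post): the same estimator, the sample mean, gives a different number for each sample, which is exactly what it means for the estimator to be a random variable.

```python
import numpy as np

rng = np.random.default_rng(4)

# applying one estimator (the sample mean) to three different samples
for _ in range(3):
    sample = rng.normal(2.38, 1.0, 20)
    print(sample.mean())  # three different realizations of the estimator
```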
 
