What is an unbiased estimator?

  • Thread starter Voilstone
In summary: An estimator is a random variable that is a function of other random variables. When you start out, you estimate things like means and variances, but you can construct estimators that are as complicated as you want: they use the same idea as the mean and the variance but measure something else of interest. For the purposes of this discussion, we will focus on unbiased estimators. If our estimator is biased, then the intervals that statistical theory gives us may not be right for the actual parameter. However, as long as the estimator is consistent, this won't be a problem: consistency means that the variance of the estimator goes to zero as the sample size goes to infinity.
  • #1
Voilstone

I do not really understand what an unbiased estimator is in my statistics studies. Thanks!
 
  • #2


This should be in the statistics forum, but since I can answer it...

Remember that estimators are random variables; an estimator is "unbiased" if its expected value is equal to the true value of the parameter being estimated. To use regression as an example...

Suppose you measure two variables x and y where the true (linear) relationship is [tex]y = 5x + 2[/tex]. Of course, any sample that you draw will have noise in it, so you have to estimate the true values of the slope and intercept. Suppose that you draw a thousand samples of (x, y) pairs and calculate the least squares estimators for each sample (assuming that the noise is normally distributed). As you do that, you'll notice two things:

1) All of your estimates are different (because the data is noisy)
2) The mean of all of those estimates starts to converge on the true values (5 and 2)

The second occurs because the estimator is unbiased.
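Here is a minimal sketch of that experiment in Python (my own illustration, not from the original post; the noise level, sample sizes, and seed are arbitrary choices):

[code]
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_intercept = 5.0, 2.0

slopes, intercepts = [], []
for _ in range(1000):                  # a thousand samples
    x = rng.uniform(0, 10, size=50)
    noise = rng.normal(0, 3, size=50)  # normally distributed noise
    y = true_slope * x + true_intercept + noise
    b, a = np.polyfit(x, y, 1)         # least squares fit: slope, intercept
    slopes.append(b)
    intercepts.append(a)

# Individual estimates all differ, but their means converge on 5 and 2.
print(np.mean(slopes), np.mean(intercepts))
[/code]

Each individual fit gives different numbers, but the printed averages land very close to 5 and 2: that is the unbiasedness at work.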
 
  • #3


Thread moved.
 
  • #4


"any sample that you draw will have noise in it" <---- what does the "noise" mean here?
Do you mean something like a one-to-one function?
 
  • #5


Voilstone said:
I do not really understand what an unbiased estimator is in my statistics studies. Thanks!

Hey Voilstone and welcome to the forums.

Let's say you have a parameter; for simplicity, let's say it's the mean.

Now the sample mean has a distribution that depends on the sample. Using your sample, you are trying to estimate a parameter: this is why we call these things estimators, because they are estimating something.

Unbiased estimators have the property that the expectation of the sampling distribution equals the parameter: in other words, the expectation of our estimator random variable gives us the parameter. If it doesn't, the estimator is called biased.

Of course, we want estimators that are unbiased because, statistically, they give us estimates that are centered on what they should be.

Also, the key thing is that this expectation stays equal to the parameter even as the sample grows. You will also learn that an estimator should be consistent, which basically means that it converges to the parameter as the sample size goes to infinity; for an unbiased estimator, it is enough that its variance goes to zero.
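A small sketch of both properties for the sample mean (my own illustration; the population parameters and sample sizes are arbitrary choices): averaged over many samples it sits at the true mean for every sample size, while its variance shrinks as n grows.

[code]
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 4.0                 # true population parameters

for n in (10, 100, 1000):
    # 5000 independent samples of size n; one sample mean per sample
    means = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    # unbiased: the average of the sample means stays near mu for every n
    # consistent: their variance shrinks like sigma**2 / n
    print(n, means.mean(), means.var())
[/code]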
 
  • #6


chiro said:
Hey Voilstone and welcome to the forums.

Of course, we want estimators that are unbiased because, statistically, they give us estimates that are centered on what they should be.

Thanks, I am still new here.

Since an unbiased estimator's expectation equals the mean/parameter, for what purposes do we want it to be close to the mean?
 
  • #7


Voilstone said:
Thanks, I am still new here.

Since an unbiased estimator's expectation equals the mean/parameter, for what purposes do we want it to be close to the mean?

The estimator is a function of a sample. Since each observation in the sample comes from the same distribution, we treat each observation as the realization of a random variable with that true distribution. We also assume that the observations are independent: this simplifies many things like variance, because independent samples have no covariance terms, which means we can add variances very easily (you will see results like this later).

So our estimator is a random variable that is a function of other random variables. This estimator random variable has its own distribution, the sampling distribution, and that is what we use to say things about the parameter we are estimating.

When you start out, you look at estimating things like means and variances, but we can create estimators that are as complicated as we want: they use the same idea as the mean and the variance but measure something else of interest.

But with regard to unbiasedness: if our estimator is biased, then the intervals that statistical theory gives us may not be right for the actual parameter.

Just think about it: if you had an estimator that was really biased (let's say its mean sits five standard deviations away from the parameter) and you built a 95% confidence interval from it, that interval wouldn't be useful, would it? You might as well not use an estimator at all if that is all you had.

So yes, in response to being close to the parameter: that is exactly what we want. We want the mean of the estimator random variable to equal the parameter of interest, no matter what the sample is and no matter how big the sample is.
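To make "biased" concrete, here is a sketch (my own example, not from the thread) comparing the variance estimator that divides by n with the one that divides by n - 1: the first is systematically too low, the second is unbiased.

[code]
import numpy as np

rng = np.random.default_rng(2)
sigma2, n = 9.0, 5                        # true variance, small sample size

samples = rng.normal(0, np.sqrt(sigma2), size=(100000, n))
biased = samples.var(axis=1, ddof=0)      # divide by n
unbiased = samples.var(axis=1, ddof=1)    # divide by n - 1 (Bessel)

print(biased.mean())    # about (n - 1)/n * 9 = 7.2: systematically low
print(unbiased.mean())  # about 9: unbiased
[/code]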
 
  • #8


Voilstone said:
I do not really understand what an unbiased estimator is in my statistics studies. Thanks!

First, do you understand what an estimator is? In particular, do you understand that an estimator is a random variable? (It is not, in general, a single number like 2.38.)
 

1. What is the definition of an unbiased estimator?

An unbiased estimator is an estimator whose expected value equals the true value of the population parameter being estimated, so it does not systematically overestimate or underestimate that parameter.

2. How is an unbiased estimator different from a biased estimator?

An unbiased estimator provides estimates that are, on average, equal to the true value of the population parameter. A biased estimator, on the other hand, systematically overestimates or underestimates the true value.

3. What are some common examples of unbiased estimators?

Some common examples of unbiased estimators include the sample mean, the sample variance (with the n - 1 denominator, i.e. Bessel's correction), and the sample proportion. These are unbiased estimators of the population mean, variance, and proportion, respectively.

4. How is the bias of an estimator calculated?

The bias of an estimator is calculated by taking the expected value of the estimator and subtracting the true value of the population parameter. A bias of zero indicates an unbiased estimator, while a non-zero bias indicates a biased estimator.
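As a short worked example (added for illustration): the variance estimator [tex]\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2[/tex] has expected value [tex]E[\hat{\sigma}^2] = \frac{n-1}{n}\sigma^2[/tex], so its bias is [tex]E[\hat{\sigma}^2] - \sigma^2 = -\frac{\sigma^2}{n}[/tex], which is negative for every finite n but vanishes as n grows. Replacing n by n - 1 in the denominator makes the bias exactly zero.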

5. Why is it important to use unbiased estimators in scientific research?

Using unbiased estimators is important in scientific research because it allows for accurate and reliable conclusions to be drawn from the data. Biased estimators can lead to incorrect conclusions and affect the validity of research findings.
