MSE and Estimators for Random Samples

In summary: Erm... so if I have a population mean and I have a sample mean, the expected value of the sample mean is the population mean? Yes.
  • #1
SavvyAA3
I would very much appreciate if someone could explain the following:

- What is the use of the MSE (Mean Square Error), i.e., why do we use it?
I understand that MSE(t) = Var(t) + {E(t - θ)}^2, where θ is the true parameter being estimated, but what does this tell us?

- Why/how does E{a*Sx^2 + b*Sy^2} = a*Var(X) + b*Var(Y)?

(I am using ^ to denote 'powers', i.e. '^2' means squared)

- What is the E(·) operator, what is its use, and what are its properties?

I would really, really appreciate help on this as I am having difficulties grasping these concepts of statistical theory.

Thanks
 
  • #3
Thanks for the above, I didn't realize the E(·) operator simply gives the expected values of linear functions of random variables.

For the last part, I am still a little confused. Are you wishing to infer that the expected value of an unbiased estimator is simply its variance?
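
Here is a minimal simulation sketch of that linearity property, E[aX + bY] = a*E[X] + b*E[Y]; numpy is assumed, and the distributions, coefficients, and sample size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent random variables with known means (arbitrary choices).
X = rng.normal(loc=3.0, scale=2.0, size=n)   # E[X] = 3
Y = rng.exponential(scale=5.0, size=n)       # E[Y] = 5
a, b = 2.0, -4.0

# Linearity of expectation: E[aX + bY] = a*E[X] + b*E[Y].
print(np.mean(a * X + b * Y))   # simulated: approx. 2*3 + (-4)*5 = -14
print(a * 3.0 + b * 5.0)        # exact: -14
```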
 
  • #4
SavvyAA3 said:
Are you wishing to infer that the expected value of an unbiased estimator is simply its variance?
No, not its variance.

The term variance can refer to either the true variance or an estimated variance, which can be confusing.

The true variance is a population parameter. Just like the population mean, true variance is "hard-wired" into the behavior of the random variable, so to speak. Typically it is an unknown, and it can be estimated from a sample (just like the true mean can be estimated).

Suppose I "invent" a statistic called "EnumaElish variance estimator" (EEVE), as a function of the sample. You give me a sample, EEVE gives you an estimate (of the true variance in the population, based on the sample). If EEVE is an unbiased estimator of the true variance, then its expected value is equal to the population parameter: E[EEVE] = True Variance.

This is different from saying "let Z be any statistic of the sample (e.g., Z might be the sample average); then Z has its own variance due to sampling variability," which is what you were thinking of when you posted the above quotation, I guess. The variance of Z is the sampling variance of the statistic itself, not the true variance of the population.

To sum up, I wasn't proposing E[Z] = Var[Z] for any statistic Z; I was proposing this: for a random variable X, if S^2[X] is an unbiased estimator of X's true variance, then E[S^2[X]] = Var[X], the true variance of the random variable X.
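
Here is a quick simulation sketch of that last statement, assuming numpy; the usual n-1 sample variance (numpy's ddof=1) plays the role of S^2, and the normal population and its parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
true_var = 4.0                 # Var[X] for a N(0, 2^2) population
n, trials = 10, 200_000

# Draw many independent samples and compute S^2 (ddof=1) for each.
samples = rng.normal(loc=0.0, scale=2.0, size=(trials, n))
s2 = samples.var(axis=1, ddof=1)

# If S^2 is unbiased, its average over many samples approaches Var[X].
print(s2.mean())   # approx. 4.0
```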
 
  • #5
Erm, OK, so you are suggesting that, for random variables, an unbiased estimator (computed from a sample to estimate a true population parameter, such as the unknown mean or variance) has an expected value equal to that true population parameter?

So, given what you have said above, does this mean that if I have only a sample and its sample mean, and I wish to find the mean of the true population, I can take the sample mean's expected value and the result will be the true population mean?

Sorry for asking this but I have still not yet quite understood this.

Thanks
 
  • #6
As a student, I had difficulty understanding the whole expectation concept, so you're not alone.

If I have only one sample (which is often all anybody can hope to have), and I compute its average, then that number is my single "best" unbiased estimator of the true mean. The true mean is unknown, and remains unknown, unless either:

1. someone measures the entire population and computes its mean, or:

2. you are working with a theoretical distribution which comes with an assumed true mean (e.g., assume the sample is from a standard normal distribution; then the true mean is given as 0).

Although the sample average may not be exactly equal to the true mean, I know that it is unbiased, so I am not making an error that is decidedly "one way or the other."
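
To make "unbiased" concrete, here is a small simulation sketch, assuming numpy, in which many hypothetical samples are drawn from the same population (the true mean and all other parameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 7.0
n, trials = 25, 100_000

# Each row is one hypothetical sample of size n from the same population.
samples = rng.normal(loc=true_mean, scale=3.0, size=(trials, n))
xbar = samples.mean(axis=1)

print(xbar[0])       # any single sample mean: close to, not equal to, 7.0
print(xbar.mean())   # average over many repeated samples: approx. 7.0
```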
 

1. What is MSE and why is it important in statistics?

MSE stands for mean squared error; it is a measure of the average squared difference between the estimated values and the true values. It is important in statistics because it provides a way to evaluate the accuracy of an estimator and to compare different estimators.
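
The decomposition from the original post, MSE(t) = Var(t) + {E(t - θ)}^2, makes this concrete: MSE splits into the estimator's variance plus its squared bias. Below is a minimal simulation sketch of that identity, assuming numpy and using a deliberately biased estimator (the sample mean plus a constant); all parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.0                      # true mean of the population
n, trials = 20, 200_000

samples = rng.normal(loc=theta, scale=1.0, size=(trials, n))
t = samples.mean(axis=1) + 0.5   # deliberately biased estimator of theta

mse = np.mean((t - theta) ** 2)                   # direct MSE
var_plus_bias2 = t.var() + (t.mean() - theta) ** 2
print(mse, var_plus_bias2)       # both approx. 1/20 + 0.25 = 0.30
```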

2. How is MSE calculated?

MSE is calculated by taking the average of the squared differences between the estimated values and the true values. This means taking the sum of the squared errors and dividing it by the number of observations.
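
As a sketch of that calculation (numpy assumed; both the estimates and the true value below are invented for illustration):

```python
import numpy as np

true_value = 10.0
estimates = np.array([9.2, 10.5, 10.1, 8.8, 11.0])   # hypothetical estimates

# MSE: sum of squared errors divided by the number of observations.
mse = np.sum((estimates - true_value) ** 2) / len(estimates)
print(mse)   # same as np.mean((estimates - true_value) ** 2)
```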

3. What is an estimator and how does it relate to random samples?

An estimator is a statistical tool used to estimate a population parameter based on a sample of data. It relates to random samples because the quality of an estimator depends on the characteristics of the random sample used to estimate the parameter.

4. What are some common estimators for random samples?

Some common estimators for random samples include the sample mean, sample variance, and sample proportion. Other estimators may be used depending on the specific parameter being estimated and the characteristics of the data.
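
A short sketch computing those three estimators from one hypothetical sample (numpy assumed; the population and threshold are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=50.0, scale=10.0, size=200)   # hypothetical sample

sample_mean = data.mean()           # estimates the population mean
sample_var = data.var(ddof=1)       # unbiased estimate of the variance
sample_prop = np.mean(data > 50.0)  # proportion above a threshold

print(sample_mean, sample_var, sample_prop)
```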

5. How can MSE be minimized when using estimators for random samples?

MSE can be minimized by using a larger sample size, which reduces the variability in the estimator. Additionally, using a more efficient estimator or selecting a more representative sample can also help to minimize MSE.
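
A minimal simulation sketch of the sample-size effect, assuming numpy: the MSE of the (unbiased) sample mean falls roughly as 1/n as the sample size grows. All parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
true_mean, trials = 0.0, 10_000

# The MSE of the sample mean shrinks roughly as 1/n.
for n in (10, 100, 1000):
    xbar = rng.normal(loc=true_mean, scale=1.0, size=(trials, n)).mean(axis=1)
    print(n, np.mean((xbar - true_mean) ** 2))   # approx. 1/n
```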
