MSE and Estimators for Random Samples

AI Thread Summary
The discussion centers on understanding the Mean Square Error (MSE) and its components, specifically the relationship between variance and expected values. MSE is utilized to measure the accuracy of an estimator, combining both variance and bias. The expected value operator (E) is clarified as a tool for calculating the average outcome of random variables, particularly in the context of unbiased estimators. It is emphasized that while an unbiased estimator's expected value equals the true population parameter, this does not imply that the variance of any statistic equals its expected value. The conversation highlights the complexities of estimating population parameters from samples and the importance of understanding these statistical concepts.
SavvyAA3
I would very much appreciate if someone could explain the following:

- What is the use of the MSE (Mean Square Error), i.e. why do we use it?
I understand that MSE(t) = Var(t) + {E(t) - θ}^2, where θ is the true parameter being estimated, but what does this tell us? (See the simulation sketch after this post.)

- Why/how does E{a*Sx^2 + b*Sy^2} = a*Var(X) + b*Var(Y)?

(I am using ^ to denote 'powers', i.e. '^2' means squared)

- What is the E(·) operator, what is its use, and what are its properties?

I would really, really appreciate help on this as I am having difficulties grasping these concepts of statistical theory.

Thanks
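
Not from the original thread, but a minimal Python sketch (the normal population and the deliberately biased estimator are made up for illustration) that checks the decomposition MSE(t) = Var(t) + {E(t) - θ}^2 by Monte Carlo:

```python
# Minimal sketch, assuming a normal population and a deliberately biased
# estimator of the mean (sum / (n + 1)); both choices are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_mean, n, reps = 5.0, 20, 200_000

samples = rng.normal(loc=true_mean, scale=2.0, size=(reps, n))
t = samples.sum(axis=1) / (n + 1)  # biased: E[t] = n * true_mean / (n + 1)

mse = np.mean((t - true_mean) ** 2)      # direct Monte Carlo MSE
var = np.var(t)                          # variance of the estimator
bias_sq = (np.mean(t) - true_mean) ** 2  # squared bias

print(mse, var + bias_sq)  # the two numbers coincide: MSE = Var + bias^2
```

The decomposition says that an estimator's average squared distance from the truth splits into a spread term (variance) and a systematic-offset term (squared bias), so MSE penalizes both imprecision and bias at once.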
 
Thanks for the above, I didn't realize the E(·) operator was simply the expectation operator, the one used for expected values of linear functions of random variables.

For the last part, I am still a little confused. Are you suggesting that the expected value of an unbiased estimator is simply its variance?
 
SavvyAA3 said:
Are you suggesting that the expected value of an unbiased estimator is simply its variance?
No, not its variance.

The term variance can refer to either the true variance or an estimated variance, which can be confusing.

The true variance is a population parameter. Just like the population mean, true variance is "hard-wired" into the behavior of the random variable, so to speak. Typically it is an unknown, and it can be estimated from a sample (just like the true mean can be estimated).

Suppose I "invent" a statistic called "EnumaElish variance estimator" (EEVE), as a function of the sample. You give me a sample, EEVE gives you an estimate (of the true variance in the population, based on the sample). If EEVE is an unbiased estimator of the true variance, then its expected value is equal to the population parameter: E[EEVE] = True Variance.

This is different from saying "let Z be any statistic (of the sample)." (E.g., Z might be the sample average.) Z is a function of the random sample, so Z has its own variance, namely its sampling variance, which I suspect is what you were thinking of when you posted the above quotation. The variance of Z describes how Z fluctuates from sample to sample; it is not the true variance of the population.

To sum up, I wasn't proposing E[Z] = Var[Z] for an arbitrary statistic Z. I was proposing this: for a random variable X, if S^2[X] is an unbiased estimator of X's true variance, then E[S^2[X]] = Var[X], the true variance of the random variable X.
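
A small Python sketch (mine, not EnumaElish's; the normal population and sample size are arbitrary) may make the two "variances" concrete: the unbiased sample variance S^2 has expected value equal to the true Var[X], while a statistic Z (here the sample mean) has its own, different, sampling variance.

```python
# Minimal sketch, assuming X ~ Normal(0, 4); the parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_var, n, reps = 4.0, 10, 200_000

samples = rng.normal(loc=0.0, scale=np.sqrt(true_var), size=(reps, n))

s2 = samples.var(axis=1, ddof=1)  # unbiased sample variance of each sample
z = samples.mean(axis=1)          # the statistic Z = sample mean

print(s2.mean())  # ~4.0: E[S^2] = Var[X], i.e. S^2 is unbiased
print(z.var())    # ~0.4: Var[Z] = Var[X] / n, not Var[X]
```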
 
Erm, OK, so you are suggesting that, for random variables, an unbiased estimator (used on a sample to estimate a true population parameter, such as an unknown mean or variance) has an expected value equal to that true population parameter?

So, given what you have written above, does this mean that if I have only a sample and its sample mean, and I wish to find the true population mean, I can take the expected value of the sample mean and the result will be the true population mean?

Sorry for asking this but I have still not yet quite understood this.

Thanks
 
As a student, I had difficulty understanding the whole expectation concept so you're not alone.

If I have only one sample (which is often all anybody can hope to have), and I compute its average, then that number is my single "best" unbiased estimate of the true mean. The true mean is unknown, and remains unknown, unless either:

1. someone measures the entire population and computes its mean, or

2. you are working with a theoretical distribution that comes with an assumed true mean (e.g., assume the sample is from a standard normal distribution; then the true mean is given as 0).

Although my sample average may not be exactly equal to the true mean, I know the estimator is unbiased, so I am not making a systematic error that is decidedly "one way or the other."
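
A quick Python sketch (again illustrative, with a made-up population) of what unbiasedness buys you: a single sample mean is almost never exactly the true mean, but across many hypothetical repeated samples the estimates center on the truth rather than drifting one way or the other.

```python
# Minimal sketch, assuming a Normal(3, 1) population; values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_mean, n = 3.0, 25

one_sample = rng.normal(true_mean, 1.0, size=n)
print(one_sample.mean())  # close to 3.0, but not exactly 3.0

many_means = rng.normal(true_mean, 1.0, size=(100_000, n)).mean(axis=1)
print(many_means.mean())  # ~3.0: the estimates average out to the true mean
```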
 