Having trouble understanding variance of OLS estimator


by chevrox
Tags: linear regression
chevrox
#1
Feb7-12, 12:42 AM
P: 2
So in computing the variance-covariance matrix for $\hat\beta$ in an OLS model, we arrive at

$$\operatorname{Var}(\hat\beta) = \sigma_\varepsilon^2 \, E\left[(X'X)^{-1}\right]$$

However, I don't see how $X$ can be considered non-stochastic, which is what lets us drop the expectation sign and write

$$\operatorname{Var}(\hat\beta) = \sigma_\varepsilon^2 (X'X)^{-1}$$

I'm accepting this to be true (since it's what the text says), but I'm taking a leap of faith here: if it is true, then the elements of the variance-covariance matrix are expressed in terms of sample statistics and are therefore themselves stochastic. I thought the variance of a consistent estimator of a parameter should be a deterministic quantity that does not depend on the sample observations (apart from the sample size $n$), like the variances we see when using the Cramér-Rao lower bound to establish efficiency. I'm likely misunderstanding something here; any pointers would be greatly appreciated!
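Here is a minimal numpy sketch of the formula in question (my own toy example; the design matrix, coefficients, and error scale are all made up): with one realized $X$ held fixed and only the errors redrawn, the Monte Carlo covariance of $\hat\beta$ matches $\sigma_\varepsilon^2 (X'X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
sigma = 2.0                          # assumed true error standard deviation
beta = np.array([1.0, -0.5, 0.3])    # assumed true coefficients

# One realized design matrix X; everything below conditions on it.
X = rng.normal(size=(n, k))
analytic_cov = sigma**2 * np.linalg.inv(X.T @ X)   # sigma^2 (X'X)^{-1}

# Monte Carlo: keep X fixed, redraw only the errors, re-estimate beta.
reps = 20_000
betas = np.empty((reps, k))
for r in range(reps):
    y = X @ beta + rng.normal(scale=sigma, size=n)
    betas[r] = np.linalg.solve(X.T @ X, X.T @ y)   # OLS estimate

empirical_cov = np.cov(betas, rowvar=False)
print(np.abs(empirical_cov - analytic_cov).max())  # should be small
```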
Stephen Tashi
#2
Feb7-12, 03:31 AM
Sci Advisor
P: 3,177
You haven't clearly stated a mathematical question. Is X supposed to be the matrix of independent variables? If so, they aren't considered stochastic when you compute the regression by minimizing the squared error in predicting the dependent variables, which by tradition are called Y. If you have data of the form (X, Y) and there are "errors" in both X and Y, you should use a "total least squares" model instead.
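For contrast, here is a minimal numpy sketch of a total-least-squares line fit via the SVD (my own illustration, not from the thread; the true line $y = 2 + 0.7x$ and the equal noise scales in $x$ and $y$ are assumptions). With measurement error in the regressor, the OLS slope is attenuated while the TLS slope stays near the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x_true = rng.uniform(0.0, 10.0, n)
y_true = 2.0 + 0.7 * x_true
x = x_true + rng.normal(scale=1.5, size=n)   # noise in the regressor too
y = y_true + rng.normal(scale=1.5, size=n)   # noise in the response

# OLS treats x as fixed; measurement error in x attenuates the slope.
b_ols = np.polyfit(x, y, 1)[0]

# TLS minimizes orthogonal distance: the fitted line's normal vector is
# the last right singular vector of the centered data matrix [x-xbar, y-ybar].
Z = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
nx, ny = Vt[-1]                    # normal vector of the fitted line
b_tls = -nx / ny
a_tls = y.mean() - b_tls * x.mean()

print(b_ols, b_tls)   # the TLS slope sits closer to the true 0.7
```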
chiro
#3
Feb7-12, 03:37 AM
P: 4,570
Hey chevrox and welcome to the forums.

Like Stephen Tashi, I am going to wait for clarification of what your variables are, but I did want to comment on one thing you said:

Quote by chevrox:
I thought the variance of a consistent estimator of a parameter should be a deterministic quantity that does not depend on the sample observations (apart from the sample size $n$), like the variances we see when using the Cramér-Rao lower bound to establish efficiency. I'm likely misunderstanding something here; any pointers would be greatly appreciated!
That should definitely be the case for a consistent estimator, and the variance should 'shrink' as the sample size grows. If the variance does not do this, then your estimate doesn't get 'better' with a larger sample, and it becomes rather pointless to do statistics with any kind of sample using that estimator.
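A quick illustration of that shrinkage (my own sketch, assuming homoskedastic errors with $\sigma = 1$ and a simulated design of an intercept plus one standard-normal regressor): the diagonal of $\sigma^2(X'X)^{-1}$ falls roughly like $1/n$ as rows are added.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0   # assumed error standard deviation

for n in (50, 500, 5000, 50000):
    # intercept plus one standard-normal regressor
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    cov = sigma**2 * np.linalg.inv(X.T @ X)   # Var(beta_hat | X)
    print(n, np.diag(cov))   # both variances fall roughly like 1/n
```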

chevrox
#4
Feb8-12, 12:45 AM
P: 2



Thanks for the replies! Yes, $X$ is the $n \times k$ matrix of explanatory variables, so the model is $y = X\beta + \varepsilon$. I think I understand it now. The variables in $X$ do not necessarily follow a stochastic process, and even if they do, the model attributes all of the variability in $y$ to $\varepsilon$: the independent variables affect the dependent variable only through their observed values, not through the distribution those values were drawn from, so $X$ is treated as non-stochastic. Meanwhile, $\hat\beta$ does not lose its consistency, since $E(\hat\beta) = \beta$ (which holds here because $X$ is treated as non-stochastic) and $\operatorname{Var}(\hat\beta) \to 0$ as $n \to \infty$, even though $\operatorname{Var}(\hat\beta)$ varies from sample to sample.
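To make that conditioning concrete, here is a small numpy sketch (my own illustration; the design, coefficients, and error scale are assumptions): holding one realized $X$ fixed across replications recovers $\sigma^2(X'X)^{-1}$, while redrawing $X$ each time targets the unconditional $\sigma^2 E[(X'X)^{-1}]$ from the first formula in this thread.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 100, 1.0
beta = np.array([1.0, 2.0])   # assumed true coefficients
reps = 10_000

def beta_hat(X):
    """One OLS estimate with freshly drawn errors."""
    y = X @ beta + rng.normal(scale=sigma, size=n)
    return np.linalg.solve(X.T @ X, X.T @ y)

# Conditional story: X fixed across replications -> sigma^2 (X'X)^{-1}.
X_fixed = np.column_stack([np.ones(n), rng.normal(size=n)])
cond = np.cov(np.array([beta_hat(X_fixed) for _ in range(reps)]),
              rowvar=False)

# Unconditional story: X redrawn each time -> sigma^2 E[(X'X)^{-1}].
uncond = np.cov(
    np.array([beta_hat(np.column_stack([np.ones(n), rng.normal(size=n)]))
              for _ in range(reps)]),
    rowvar=False,
)

print(np.diag(cond), np.diag(uncond))   # similar here, and both shrink with n
```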


