Forums
Mathematics
Set Theory, Logic, Probability, Statistics
What are the commonly used estimators in regression models?
[QUOTE="FactChecker, post: 6868646, member: 500115"]
Note 1: In this post it may seem that I am being picky about some of your terminology. That is not meant as criticism; I am just trying to be specific about my meaning. The more precise you can be, the better.

Note 2: Throughout this post I assume that the ##\epsilon_i##s are an independent, identically distributed sample from a ##N(0, \sigma)## distribution.

I think you mean the smallest sum of squared errors for the given sample, not the lowest variance of the estimator function.

"Better" in what sense? For some uses, the median is better.

OLS, based on the sample, is the definition of the regression coefficients. IMO, to say that it gives the "best" approximation of the regression coefficients is misleading. If the linear model is correct, OLS gives the "best" approximation of the true linear coefficients from the given sample (best in the sense of maximum likelihood).

Under the usual assumptions, OLS gives the ML estimator: minimizing the SSE maximizes the likelihood function, assuming the linear model is correct.

Correct.

No. The real question should be whether the linear-in-coefficients model ##Y = a_0 + a_1 X + a_2 X^2 + \epsilon## is the correct model. If it is, then minimizing the SSE is the same as minimizing the sum of the squared ##\epsilon_i##s calculated in that model, so OLS would give you the ML estimator.

I disagree. I assume that you mean "best" in the sense of ML. Remember that, given a correct model ##Y = f(X) + \epsilon##, where ##f(X)## is deterministic, maximizing the likelihood is the same as minimizing the sum of the squared ##\epsilon_i##s. So regardless of the form of ##f(X)##, OLS would give the ML estimator.

OLS minimizes the sum of squares of the ##\epsilon_i##s in the model ##Y = f(X) + \epsilon##, where ##\epsilon## is the only random component of the true system. In that case, OLS is the ML estimator, regardless of the form of ##f(X)##.

It all depends on the particular application. There are a lot of diverse situations with considerations other than ML or OLS. Suppose small errors are tolerable, but an error larger than a certain limit is a disaster. Then you might want a model that allows errors smaller than the limit, as long as they do not exceed it.
[/QUOTE]
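The post's central claim, that for a correct model ##Y = f(X) + \epsilon## with Gaussian noise, minimizing the SSE is the same as maximizing the likelihood, can be checked numerically. Below is a minimal numpy sketch using the quadratic-in-##X## (but linear-in-coefficients) model from the discussion; the specific coefficient values and noise level are made up for illustration. Since the Gaussian log-likelihood is, up to constants, ##-\mathrm{SSE}/(2\sigma^2)##, the OLS fit cannot have a larger SSE than any other coefficient vector, including the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-in-coefficients model: Y = a0 + a1*X + a2*X^2 + eps,
# with eps_i i.i.d. N(0, sigma). Coefficients chosen only for illustration.
a_true = np.array([1.0, -2.0, 0.5])
sigma = 0.3
x = np.linspace(-2.0, 2.0, 200)
X = np.column_stack([np.ones_like(x), x, x**2])  # design matrix
y = X @ a_true + rng.normal(0.0, sigma, size=x.size)

# OLS: minimize the sum of squared residuals over the coefficients.
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

def sse(a):
    """Sum of squared errors; the Gaussian log-likelihood is
    -sse(a)/(2*sigma**2) plus terms that do not depend on a."""
    r = y - X @ a
    return r @ r

# The OLS solution minimizes SSE on this sample, so it also maximizes
# the Gaussian likelihood -- it even beats the true coefficients here.
assert sse(a_hat) <= sse(a_true)
```

Note that OLS being the ML estimator this way depends entirely on the noise being the only random component and on ##f## being specified correctly, exactly as the post argues.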
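The remark that "for some uses, the median is better" is easiest to see in the simplest regression of all, estimating a location. The sketch below uses made-up numbers: the sample mean (the OLS estimate of a constant model) minimizes the sum of squared deviations and is dragged far off by a few gross outliers, while the median (which minimizes the sum of absolute deviations) is barely affected.

```python
import numpy as np

rng = np.random.default_rng(1)

# Most observations cluster near 10, plus a few gross outliers
# (hypothetical data, purely to illustrate robustness).
clean = rng.normal(10.0, 1.0, size=97)
data = np.concatenate([clean, [1000.0, 1200.0, -900.0]])

mean_est = data.mean()        # minimizes sum of squared deviations (OLS)
median_est = np.median(data)  # minimizes sum of absolute deviations

# The three outliers pull the mean far from 10; the median stays put.
assert abs(median_est - 10.0) < abs(mean_est - 10.0)
```

This is the trade-off behind the post's closing point: which estimator is "best" depends on how the application penalizes errors, not on a single universal criterion.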