Does the Error Distribution Affect Least Squares Estimation in Simple Models?

SUMMARY

The discussion centers on whether the error distribution affects least squares estimation in simple models. The least squares estimator of the unknown constant c is derived from the error cost function E(c) = 0.5(y - c)^2; minimizing this function gives c* = y. The poster asks whether the distribution of the error, which has probability density function exp(-(e + 1)) for e > -1, should influence the estimation, and notes confusion about the scalar case. The reply emphasizes that one should minimize the expected value of E(c) rather than treating the error as fixed.

PREREQUISITES
  • Understanding of least squares estimation
  • Familiarity with probability density functions
  • Knowledge of matrix calculus
  • Basic concepts of error analysis in statistical models
NEXT STEPS
  • Explore the implications of error distribution on least squares estimation
  • Study the expected value calculations for error cost functions
  • Learn about the role of probability density functions in statistical modeling
  • Investigate advanced matrix calculus techniques for optimization
USEFUL FOR

Statisticians, data analysts, and researchers involved in regression analysis and model optimization will benefit from this discussion.

cutesteph
Suppose we have an observation y = c + e, where c is an unknown constant and e is error with pdf f(e) = exp(-(e + 1)) for e > -1. We want to determine the least squares estimator of c, namely the c* that minimizes the error cost function E(c) = 0.5(y - c)^2.

Minimizing the error cost by taking the derivative with respect to c gives c* = y. Shouldn't the estimator take into account the distribution of the error?
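As a quick sanity check on the error model (a sketch, not part of the thread): the stated density f(e) = exp(-(e + 1)) for e > -1 is just a unit exponential shifted left by 1, so it has mean 0 and variance 1, which can be verified by sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# e = X - 1 with X ~ Exponential(1), so f(e) = exp(-(e + 1)) for e > -1
e = rng.exponential(scale=1.0, size=1_000_000) - 1.0

print(e.min() > -1)        # support check: every sample exceeds -1
print(round(e.mean(), 2))  # sample mean, close to E[e] = 0
print(round(e.var(), 2))   # sample variance, close to Var(e) = 1
```

Because E[e] = 0, the estimator c* = y is at least unbiased: E[y] = c.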

I understand the matrix case, where y = Hc + e and E(c) = T(e)e, with T( ) denoting the transpose:

E(c) = T(y - Hc)(y - Hc) = T(y)y - T(c)T(H)y - T(y)Hc + T(c)T(H)Hc.

Setting the derivative with respect to c to zero gives -2T(H)y + 2T(H)Hc = 0, so c* = inverse(T(H)H) T(H)y.
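The normal-equations solution can be checked numerically. A minimal sketch (H, c_true, and the noise are made-up example data, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up example: y = H c + e, with shifted-exponential (mean-zero) noise
H = rng.normal(size=(200, 3))
c_true = np.array([2.0, -1.0, 0.5])
e = rng.exponential(scale=1.0, size=200) - 1.0
y = H @ c_true + e

# Normal equations: c* = inverse(T(H) H) T(H) y, solved without an explicit inverse
c_star = np.linalg.solve(H.T @ H, H.T @ y)

# Should agree with NumPy's built-in least squares solver
c_lstsq, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.allclose(c_star, c_lstsq))  # True
```

Using np.linalg.solve on the normal equations avoids forming the matrix inverse explicitly, which is both cheaper and numerically safer.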

I guess I am just confused about what to do in the scalar case.
 
cutesteph said:
Minimizing the error cost by taking the derivative with respect to c gives c* = y. Shouldn't the estimator take into account the distribution of the error?
Setting c* = y would fix e to 0. You cannot fix your random variable to minimize the error.
What is the expected value of E(c)? Once you have that, you can try to minimize it.
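Following that hint (a sketch with an assumed true value c = 3.0, not from the thread): since E[e] = 0 and Var(e) = 1 for this density, the expected cost is E[0.5(y - c*)^2] = 0.5((c - c*)^2 + 1), which is minimized at c* = c with minimum value 0.5. A Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)

c = 3.0                                   # assumed true constant for the demo
e = rng.exponential(scale=1.0, size=200_000) - 1.0
y = c + e

# Empirical expected cost as a function of the candidate estimate c_star
def expected_cost(c_star):
    return np.mean(0.5 * (y - c_star) ** 2)

grid = np.linspace(1.0, 5.0, 201)
costs = [expected_cost(cs) for cs in grid]
best = grid[int(np.argmin(costs))]

print(round(best, 1))                     # minimizer lands near c = 3.0
print(round(min(costs), 1))               # minimum near 0.5 * Var(e) = 0.5
```

So the distribution enters through the expectation: the location of the minimizer depends only on E[e] = 0, while the achievable minimum cost is set by Var(e).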
 
