MSE estimation with random variables

SUMMARY

The discussion focuses on Mean Squared Error (MSE) estimation for linear prediction of a zero-mean random variable. The initial approach was deemed incorrect because it improperly assumed that the estimate ##\hat S## can be substituted for the actual value ##S## inside expectations. The correct method instead substitutes ##\sum c_i X_i## for ##\hat S## in the expression for the MSE and expands; numerical minimization then yields coefficients approximately equal to 1 for two of the variables and near 0 for the other. The possibility of an analytical solution was acknowledged but not pursued.

PREREQUISITES
  • Understanding of Mean Squared Error (MSE) estimation
  • Familiarity with linear prediction models
  • Knowledge of random variables and their properties
  • Experience with numerical optimization techniques
NEXT STEPS
  • Study the derivation of MSE in linear regression contexts
  • Explore numerical optimization methods for coefficient estimation
  • Learn about the properties of zero-mean random variables
  • Investigate analytical solutions for MSE minimization problems
USEFUL FOR

Statisticians, data scientists, and machine learning practitioners involved in predictive modeling and MSE optimization will benefit from this discussion.

ashah99
Homework Statement
Please see below: finding an MSE estimate for random variables
Relevant Equations
Expectation formula; ##\text{MSE} = E[(\hat S - S)^2]##
Hello all, I am wondering whether my approach is correct for the following problem on MSE estimation/linear prediction for a zero-mean random variable. My final answer is ##c_1 = 1##, ##c_2 = 0##, and ##c_3 = 1##. If my approach is incorrect, I would certainly appreciate some guidance on the problem. Thank you.

Problem
[Image: problem statement]

Approach:
[Image: attempted solution]
 
Yes, the approach is incorrect. When you take expected values, you assume that
$$E\left[ \hat S X_i\right] = E\left[ S X_i\right]$$
But we have no reason to suppose that is correct. ##\hat S## is only an estimate of ##S##, not identical to it, and cannot be substituted for it, except in very limited circumstances.

Instead, substitute ##\sum c_i X_i## for ##\hat S## in ##E[(\hat S - S)^2]##, then expand to get an expression in expected values of first and second order terms in ##X_1, X_2, X_3, S##, with unknowns ##c_1, c_2, c_3##. We have been given values for all of those terms except ##E[S^2]##, which we can ignore, since it is not multiplied by any of the unknown coefficients and so only shifts the objective by a constant. Numerically minimising that expression, I get a solution where two of the coefficients are near 1 and one is near 0. Possibly the optimisation can be solved analytically, but I didn't try.
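
For reference, here is a sketch of the expansion the post above describes. With ##\hat S = \sum_i c_i X_i##,
$$E\left[(\hat S - S)^2\right] = \sum_{i}\sum_{j} c_i c_j E[X_i X_j] \;-\; 2\sum_i c_i E[S X_i] \;+\; E[S^2].$$
Setting the partial derivative with respect to each ##c_k## to zero yields the normal equations
$$\sum_j E[X_k X_j]\, c_j = E[S X_k], \qquad k = 1, 2, 3,$$
a ##3 \times 3## linear system in the unknown coefficients, so the optimisation can in fact be solved exactly once the given second-order moments are plugged in.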
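
As a minimal numerical sketch of that minimisation: the moment values below are placeholders (the actual values are in the problem-statement image, so only the structure carries over, not the numbers).

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder second-order moments -- the real values come from the
# problem statement, so these are illustrative assumptions only.
R = np.array([[1.0, 0.5, 0.2],    # R[k, j] = E[X_k X_j]
              [0.5, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
r = np.array([0.9, 0.7, 0.4])     # r[k] = E[S X_k]

def objective(c):
    # E[(S_hat - S)^2] up to the additive constant E[S^2],
    # which does not involve c and so does not affect the minimiser.
    return c @ R @ c - 2.0 * (r @ c)

res = minimize(objective, x0=np.zeros(3))
print("numerical minimiser:", res.x)

# The same answer from the normal equations R c = r:
print("analytic solution:  ", np.linalg.solve(R, r))
```

With the actual moments substituted, the numerical and analytic answers should agree, which makes this an easy sanity check on the coefficients quoted above.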
 
