How Do You Calculate Predicted Values in Least Squares Regression?

In summary, the predicted value at ##\bar{x}## is equal to the average of the observed values ##\bar{Y}##.
  • #1
stats_student

Homework Statement


Consider a linear regression model of a response variable Y = (Y1, ..., Yn) on a predictor variable X = (X1, ..., Xn). The least squares estimates of the intercept and slope, a(hat) and b(hat), are the values that minimize the function (see attached image).

The problem reads on further:

The predicted values equal y(hat)(x) = a(hat) + b(hat)x (note: y(hat) is meant to be read as a function of x).

I have been asked to find y(hat)(X(bar)), where X(bar) is the average of the Xi's.

I'm not sure where to start with this question. Advice as to whether I'm on the right track is all I need for now.

I was thinking that I could use the facts that

a(hat) = Y(bar) - b(hat)X(bar)   and   b(hat) = (sum) (Xi - X(bar))(Yi - Y(bar)) / (sum) (Xi - X(bar))^2

but I'm not exactly sure how to evaluate y(hat)(X(bar)).

Should I be trying to get an equation with only a(hat), b(hat), and X(bar)? Thanks for the help - apologies for poor notation.
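For concreteness, the quoted formulas can be checked numerically. The sketch below uses made-up data (not the data from the problem) and computes a(hat) and b(hat) exactly as in the formulas above:

```python
# Numerical sketch of the least-squares formulas quoted above,
# on small made-up data (not the problem's data).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n

# b_hat = sum (Xi - X(bar))(Yi - Y(bar)) / sum (Xi - X(bar))^2
b_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)

# a_hat = Y(bar) - b_hat * X(bar)
a_hat = y_bar - b_hat * x_bar

def y_hat(x0):
    """Predicted value at x0: y_hat(x0) = a_hat + b_hat * x0."""
    return a_hat + b_hat * x0

# The prediction at x_bar coincides with y_bar (up to floating-point rounding).
print(y_hat(x_bar), y_bar)
```

Running this shows the two printed numbers agree, which is exactly the identity the exercise asks about.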
 

Attachments

  • Snapshot.jpg
  • #2
stats_student said:

Should I be trying to get an equation with only a(hat), b(hat), and X(bar)? Thanks for the help - apologies for poor notation.

If you have set up and solved the least-squares equations (so that you know the parameters ##\hat{a}, \hat{b}## in terms of the ##\{x_i, y_i\}##), you can just substitute ##x = \bar{x}## into the equation ##\hat{y} = \hat{a} + \hat{b} x## and carry out the algebraic simplification.

BTW: it is easy to employ good notation---just use LaTeX. To see how I did it, just right-click on an expression and go to the menu item to 'display math as tex commands'. This site has a brief tutorial on the use of LaTeX, but I cannot say exactly where/how to find it; others may know.
 
  • #3
so if i do this should i get
\hat{y} =Y(bar)?
 
  • #4
y(hat)(Xbar) = Y(bar)? still hopeless at notation :(
 
  • #5
or should i get,
Y(bar) = a(hat) +b(hat)X(bar)
 
  • #6
stats_student said:
or should i get,
Y(bar) = a(hat) +b(hat)X(bar)

You need to tell the system "LateX starts here" ... and "Latex ends here", with your mathematical expressions in between.

For displayed equations use

[t ex] ...your expressions ... [/t ex]

with no spaces between the 't' and the 'ex', and not in a red-colored font (which I used just for emphasis). Doing that on your expression above gives
[tex] \bar{Y} = \hat{a} + \hat{b} \bar{x} [/tex]
Note that we write \bar{Y}, not Y(bar), and we write \hat{a}, not a(hat). Some people prefer the look of \overline{...} instead of \bar{...}, and using that instead gives
[tex] \overline{Y} = \hat{a} + \hat{b} \overline{x} [/tex]

For in-line equations or expressions, use

# # ... expression... # #

with no space between the two #s at the start and at the end, and not in a red font. Doing that with your expression above gives ##\bar{Y} = \hat{a} + \hat{b} \bar{x}##, as wanted.

Anyway, you cannot just write that ##\bar{Y} = \hat{a} + \hat{b} \bar{x}##, because the right-hand side is ##Y_{\text{fitted}}(\bar{x})##; but how do you know that ##Y_{\text{fitted}}(\bar{x}) = \bar{Y}##? Can you even be sure it is true?
 
  • #7
should i get y(hat)(X(bar)) = a(hat) + b(hat)X(bar)?
 
  • #8
stats_student said:
should i get y(hat)(X(bar)) = a(hat) + b(hat)X(bar)?

Yes, but that is not the end of the story. You ought to be able to simplify it a lot, using either the explicit expressions for ##\hat{a}## and ##\hat{b}##, or by exploiting the fact that ##\hat{a}, \hat{b}## satisfy some particular equations obtained by minimization of the total squared error.
 
  • #9
ahhh... so after doing some algebra i get yhat(Xbar) = Y(bar)
 
  • #10
let me try in notation [tex] \hat{y}(\bar{x})=\bar{Y} [/tex]
 
  • #11
stats_student said:
let me try in notation [tex] \hat{y}(\bar{x})=\bar{Y} [/tex]

Yes, exactly.
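The algebra behind this answer is short: substituting the first of the quoted identities, ##\hat{a} = \bar{Y} - \hat{b}\bar{x}##, into the prediction equation gives
[tex] \hat{y}(\bar{x}) = \hat{a} + \hat{b}\,\bar{x} = \left(\bar{Y} - \hat{b}\,\bar{x}\right) + \hat{b}\,\bar{x} = \bar{Y}. [/tex]
In other words, the least-squares line always passes through the point ##(\bar{x}, \bar{Y})##.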
 

Related to How Do You Calculate Predicted Values in Least Squares Regression?

1. What is least squares regression?

Least squares regression is a statistical method that finds the line of best fit for a set of data points by minimizing the sum of the squared vertical distances (residuals) between the observed data points and the line.

2. How is least squares regression used in scientific research?

Least squares regression is commonly used in scientific research to analyze the relationship between two variables and make predictions based on the data. It can also be used to identify outliers and determine the significance of the relationship between the variables.

3. What assumptions are made in least squares regression?

The main assumptions made in least squares regression are that the relationship between the variables is linear, the errors are normally distributed with constant variance, and the observations are independent of each other.

4. How is the quality of the regression model evaluated?

The quality of the regression model is evaluated by looking at the coefficient of determination (R²), which measures the proportion of the variability in the dependent variable that can be explained by the independent variable. A higher R² value indicates a better fit for the regression model.
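As an illustration of this definition (a sketch with made-up data, not data from the thread), R² can be computed directly from the residual and total sums of squares:

```python
# Sketch: computing R^2 = 1 - SS_res / SS_tot for a least-squares fit
# on made-up, nearly linear data.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.9]
n = len(x)
x_bar, y_bar = sum(x) / n, sum(y) / n

b_hat = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)
a_hat = y_bar - b_hat * x_bar

fitted = [a_hat + b_hat * xi for xi in x]
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))  # residual sum of squares
ss_tot = sum((yi - y_bar) ** 2 for yi in y)                # total sum of squares
r_squared = 1.0 - ss_res / ss_tot

print(round(r_squared, 4))  # close to 1 for this nearly linear data
```

Because the data lie almost exactly on a line, R² here comes out close to 1; scattering the points further from the line would drive it toward 0.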

5. What are some common limitations of least squares regression?

Some common limitations of least squares regression include its reliance on the assumption of linearity, its sensitivity to outliers, and the potential for overfitting if too many variables are included in the model. It also cannot establish causation between the variables, only association.
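The sensitivity to outliers is easy to see directly. In this sketch (made-up data, for illustration only), corrupting a single point drags the fitted slope far from the true value:

```python
# Sketch: one outlier moves a least-squares slope substantially.
def fit_slope(x, y):
    """Least-squares slope b_hat = sum (xi-x_bar)(yi-y_bar) / sum (xi-x_bar)^2."""
    n = len(x)
    x_bar, y_bar = sum(x) / n, sum(y) / n
    return sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
           / sum((xi - x_bar) ** 2 for xi in x)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y_clean = [2.0, 4.0, 6.0, 8.0, 10.0]    # exactly y = 2x
y_outlier = [2.0, 4.0, 6.0, 8.0, 30.0]  # last point corrupted

print(fit_slope(x, y_clean))    # 2.0
print(fit_slope(x, y_outlier))  # 6.0 - pulled far from the true slope of 2
```

Because the criterion squares each residual, a single large deviation dominates the fit, which is exactly the sensitivity described above.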
