Understanding Overlay Plots in Linear Regression

Summary:
An overlay plot in linear regression displays the fitted model on top of the actual observed data to show how well the model predicts the outcomes. To create it, fit the linear regression model using the regressor variables, then plot both the fitted and the actual values against the observed cases. A residual plot is generated by plotting, against each fitted value, the difference between the actual value and its corresponding fitted value, which helps assess the model's fit. Together, these plots give a clear comparison between observed data and model predictions and are essential for evaluating a linear regression model.
maverick280857
Hi,

I was wondering if someone could tell me what an overlay plot exactly is, in the context of linear regression.

Specifically, I have data to fit a model Y in terms of regressor variables x1 through x8, and the question asks me to

"Obtain the overlay plot of the fitted model on the actual values against the observed cases. Obtain the plot of the residuals against the fitted values."

What do I have to do here?

Thanks in advance!
 
zli034:
An overlay plot is simply one plot drawn on top of another. In this case, you put the fitted model on top of the observed data: fit your linear regression model to the actual data, then plot the fitted values over the observed values so the two can be compared case by case. Each residual is calculated by subtracting the corresponding fitted value on the regression line from the actual value.
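The steps above can be sketched in Python with NumPy and Matplotlib. Note this uses synthetic data in place of the actual x1 through x8 dataset from the question, and fits the model by ordinary least squares with `np.linalg.lstsq`; swap in your own data and preferred fitting routine.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; remove to show plots on screen
import matplotlib.pyplot as plt

# Synthetic stand-in for the real dataset: n cases, 8 regressors.
rng = np.random.default_rng(0)
n, p = 40, 8
X = rng.normal(size=(n, p))                       # regressors x1..x8
beta_true = rng.normal(size=p)
y = X @ beta_true + rng.normal(scale=0.5, size=n) # observed Y

# Fit the linear regression by least squares (with an intercept column).
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta            # fitted values
residuals = y - y_hat       # actual minus fitted

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Overlay plot: observed and fitted values against the case number.
cases = np.arange(1, n + 1)
ax1.plot(cases, y, "o", label="observed")
ax1.plot(cases, y_hat, "x-", label="fitted")
ax1.set_xlabel("case")
ax1.set_ylabel("Y")
ax1.legend()

# Residual plot: residuals against the fitted values.
ax2.scatter(y_hat, residuals)
ax2.axhline(0.0, linestyle="--")
ax2.set_xlabel("fitted value")
ax2.set_ylabel("residual")

fig.savefig("overlay_and_residuals.png")
```

A roughly patternless residual plot, scattered evenly around zero, is what you hope to see; visible curvature or a funnel shape suggests the linear model is missing something.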
 
Thanks...got it!
 