Does centering variables for regression always result in unchanged coefficients?

  • Thread starter: monsmatglad
  • Tags: interaction, ols
Summary:
Centering variables in multiple linear regression can lead to changes in the coefficients of non-interaction terms when interactions are present. While the coefficients of non-interaction variables may change, those of interaction terms and variables without interactions remain unchanged. This phenomenon is attributed to the properties of linearity, where translation does not affect certain coefficients. The discussion clarifies that the statement about unchanged coefficients primarily applies to non-interaction variables. Understanding these dynamics is crucial for accurate interpretation of regression results involving interactions.
monsmatglad
I am studying mean-centering for multiple linear regression (ols).
Specifically I'm talking about the situation when there is interaction.
My literature tells me that when centering variables for a regression analysis, the coefficients do not change. But when there is some sort of interaction between the variables, the coefficients of the non-interaction terms (the variables that take part in the interaction but are also represented individually) do in fact change.

When it is said that when centering the variables, "the coefficients do not change", does that only apply to the non-integrated variables?
 
monsmatglad said:
When it is said that when centering the variables, "the coefficients do not change", does that only apply to the non-integrated variables?
What do you mean by 'non-integrated variables'?
 
Oops... that was supposed to be "non-interaction".
 
In that case, yes. Consider the model
$$y_j = a_0 + a_1x_1 + a_2x_2 + a_{12}x_1x_2 + a_3 x_3+\epsilon_j$$
in which there is an interaction of ##x_1,x_2## but no interactions for ##x_3##.
Now centring each variable we get
$$y_j = a'_0 + a'_1(x_1-\bar x_1) + a'_2(x_2-\bar x_2) +a'_{12}(x_1-\bar x_1)(x_2-\bar x_2) + a'_3 (x_3-\bar x_3)+\epsilon_j$$
Rearranging this and matching coefficients to the first equation, we get:
  • ##a_0=a'_0-a'_1\bar x_1-a'_2\bar x_2-a'_3\bar x_3 +a'_{12}\bar x_1\bar x_2##
  • ##a_1=a'_1 - a'_{12}\bar x_2##
  • ##a_2=a'_2 - a'_{12}\bar x_1##
  • ##a_3=a'_3## [no change]
  • ##a_{12}=a'_{12}## [no change]
So the only coefficients that remain unchanged are those of any variables with no interactions, plus those of any interaction terms.
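This coefficient mapping can be checked numerically. Here is a minimal sketch, assuming NumPy; the true coefficients and the data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Give the predictors nonzero means so that centering actually does something.
x1 = rng.normal(loc=2.0, size=n)
x2 = rng.normal(loc=-3.0, size=n)
x3 = rng.normal(loc=1.5, size=n)
y = 1.0 + 2.0*x1 + 3.0*x2 + 0.5*x1*x2 + 4.0*x3 + rng.normal(scale=0.1, size=n)

def coeffs(u1, u2, u3):
    """OLS fit of y on [1, u1, u2, u1*u2, u3]; returns (a0, a1, a2, a12, a3)."""
    X = np.column_stack([np.ones(n), u1, u2, u1*u2, u3])
    return np.linalg.lstsq(X, y, rcond=None)[0]

raw = coeffs(x1, x2, x3)
cen = coeffs(x1 - x1.mean(), x2 - x2.mean(), x3 - x3.mean())

print(np.isclose(raw[3], cen[3]))  # a_12 (interaction): unchanged -> True
print(np.isclose(raw[4], cen[4]))  # a_3 (no interaction): unchanged -> True
print(np.isclose(raw[1], cen[1]))  # a_1: shifted by a'_12 * xbar_2 -> False
```

Because centering is an exact reparameterisation (the two design matrices span the same column space), the fitted coefficients obey the mapping above exactly, not just approximately.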
 
I think this is just a property of linearity, which I believe is equivalent to a lack of interaction between variables: a linear model "preserves translation" (a shift of the predictors is absorbed by the intercept), whereas non-linear interaction terms do not.
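That remark can also be checked directly (a sketch, again assuming NumPy and made-up data): in a purely linear model with no interaction term, translating every predictor is absorbed entirely by the intercept, so every slope survives unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x1 = rng.normal(loc=2.0, size=n)
x2 = rng.normal(loc=-3.0, size=n)
y = 1.0 + 2.0*x1 - 0.5*x2 + rng.normal(scale=0.1, size=n)

def slopes(u1, u2):
    """OLS slopes of y on [1, u1, u2], with the intercept dropped."""
    X = np.column_stack([np.ones(n), u1, u2])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# With no interaction term, centering only shifts the intercept:
print(np.allclose(slopes(x1, x2), slopes(x1 - x1.mean(), x2 - x2.mean())))
```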
 