Multicollinearity and Interactions

fog37 said:
Hello,

I understand the concept of multicollinearity: in a multiple regression model with two or more independent variables, some of the independent variables may be pairwise correlated. This does not hurt the model's predictions, but it makes the estimated regression coefficients unstable and complicates how we interpret the individual independent variables (IVs).
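To check my own understanding, here is a small NumPy sketch I put together (the data and coefficients are simulated, so everything is illustrative): two nearly collinear IVs leave the fitted values essentially unchanged, but their individual coefficients become unstable, while their sum stays stable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)          # x2 is almost a copy of x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 0.1 * rng.normal(size=n)

# Ordinary least squares fit (np.linalg.lstsq uses an SVD under the hood)
X = np.column_stack([np.ones(n), x1, x2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ coef

# Perturb the response slightly and refit: the predictions barely move,
# but the individual coefficients on x1 and x2 can swing a lot,
# while their sum (the combined effect, about 2 + 3 = 5) stays stable.
y2 = y + 0.1 * rng.normal(size=n)
coef2 = np.linalg.lstsq(X, y2, rcond=None)[0]
fitted2 = X @ coef2

print("corr(x1, x2) =", np.corrcoef(x1, x2)[0, 1])
print("coef sums:", coef[1] + coef[2], coef2[1] + coef2[2])
```

The design matrix here also has a huge condition number, which is the numerical fingerprint of multicollinearity.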

Multiplicative interaction terms can also be included in a linear regression model. Multicollinearity and interactions are logically independent: a model with interaction terms need not have multicollinearity, and vice versa (interesting things probably happen when the interaction terms themselves are collinear with other predictors).

That said, in the case of multicollinearity, one independent variable ##X_1## affects the dependent variable ##Y## while another independent variable ##X_2## is correlated with ##X_1##. Isn't that similar to what an interaction does? My picture of an interaction is that one IV changes the dependent variable, but another IV modifies how the first IV does so...
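To make my question concrete, here is a sketch of the interaction case with two uncorrelated IVs (again simulated data with made-up coefficients): the interaction does not mean ##X_2## changes ##X_1##; it means the slope of ##Y## with respect to ##X_1## depends on the level of ##X_2##.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                      # independent of x1: no multicollinearity
y = 1.0 + 2.0 * x1 + 3.0 * x2 + 1.5 * x1 * x2 + 0.1 * rng.normal(size=n)

# The interaction term is just the elementwise product of the two IVs.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# With an interaction, the effect of x1 on y is not a single number:
# it depends on where x2 sits.
def slope_of_x1(at_x2):
    return b1 + b3 * at_x2

print("slope of x1 at x2=0:", slope_of_x1(0.0))   # about 2
print("slope of x1 at x2=2:", slope_of_x1(2.0))   # about 2 + 1.5*2 = 5
```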

Thank you!
 
Terminology confusing: Independence implies no correlation.
 
Statistical analysis would usually treat the two situations the same way.
 
mathman said:
Terminology confusing: Independence implies no correlation.
In a multiple regression model, ##Y = a_0 + a_1 X_1 + a_2 X_2 + \dots + a_n X_n + \epsilon##, the ##X_i## are called the "independent variables" regardless of whether they are correlated with each other; ##Y## is the dependent variable.
 
Independence (in probability theory) means no connection between the variables and implies no correlation. Your question seems to be about terminology; I am not familiar with the definitions of these terms as you are using them.
 
According to sources on the web (including https://aarongullickson.github.io/stat_book/interaction-terms.html):
An interaction term is a variable that is constructed from two other variables by multiplying those two variables together.

With that definition, an interaction term results from a decision about the form of the function being fitted to the data. That decision need not be based on a correlation between variables. By contrast, the correlation (or lack of it) between variables in a model is usually inferred from the data, but does not by itself cause us to introduce interaction terms (which make the fitted function nonlinear in the original variables, although the model remains linear in the coefficients).

An interesting (and presumably well studied) question: How should correlations between variables influence our decision about whether to introduce interaction terms in the function we are fitting to the data?
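For what it's worth, one small, well-known piece of the practical answer: a raw variable with a nonzero mean tends to be correlated with its own interaction term, and mean-centering the variables before forming the product removes much of that correlation. A simulated sketch (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(loc=5.0, scale=1.0, size=n)   # nonzero mean
z = rng.normal(loc=3.0, scale=1.0, size=n)   # independent of x

# Raw product: correlated with x simply because x has a nonzero mean.
raw_corr = np.corrcoef(x, x * z)[0, 1]

# Center first, then form the product: the spurious correlation largely disappears.
xc = x - x.mean()
zc = z - z.mean()
centered_corr = np.corrcoef(xc, xc * zc)[0, 1]

print("corr(x, x*z)    =", raw_corr)        # sizeable
print("corr(xc, xc*zc) =", centered_corr)   # near zero
```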
 