How to Optimize Predictive Models: Include Interactions or Not?

FallenApple
So say I want to predict the next data point (or a point outside the observed data, since it's prediction).

So first, I would include all the variables in the dataset in the initial model. Then I would use stepwise AIC to iteratively reduce it down to the final model with minimum AIC. (I can do this because I do not care about causal explanation, just about prediction.)

Now, should the large model consist of just the sums of the variables, or should it also include all possible interaction terms?

Finally, what do I do with the final model? Do I just interpret the estimates and confidence intervals as usual?
 
If you want to do prediction, why use AIC? Why not evaluate prediction error directly? Cross-validation is what you want. Just compare all the models you're interested in and choose the one with the best predictive performance. The AIC is actually asymptotically equivalent to leave-one-out cross validation, but LOO-CV has weird properties, so go with something like k-fold cross-validation.
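For concreteness, here is a minimal sketch of that k-fold idea in R (R comes up later in the thread). The data frame dat, the response name y, the predictor names, and the candidate formulas are placeholders I am assuming for illustration, not anything taken from the posts:

```r
# Minimal k-fold cross-validation sketch: score a candidate linear model by
# its average held-out mean squared error. `dat` is a placeholder data frame
# containing the response y and the predictors x1, x2, x3.
cv_mse <- function(fmla, dat, k = 10) {
  folds <- sample(rep(1:k, length.out = nrow(dat)))  # random fold assignment
  mean(sapply(1:k, function(i) {
    fit  <- lm(fmla, data = dat[folds != i, ])        # fit on k-1 folds
    pred <- predict(fit, newdata = dat[folds == i, ]) # predict the held-out fold
    mean((dat$y[folds == i] - pred)^2)
  }))
}

# Compare a few candidate models and keep the one with the lowest CV error.
candidates <- list(
  main_only = y ~ x1 + x2 + x3,
  two_way   = y ~ (x1 + x2 + x3)^2,
  full      = y ~ x1 * x2 * x3
)
sapply(candidates, cv_mse, dat = dat)
```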
 
Number Nine said:
If you want to do prediction, why use AIC? Why not evaluate prediction error directly? Cross-validation is what you want. Just compare all the models you're interested in and choose the one with the best predictive performance. The AIC is actually asymptotically equivalent to leave-one-out cross validation, but LOO-CV has weird properties, so go with something like k-fold cross-validation.

Got it.

But what if I don't have any models in mind?

Then should I just take every possible combination? For example, say I want to predict y and my data set has x1, x2, x3. Then the complete model with everything in it is y = x1 + x2 + x3 + x1*x2 + x1*x3 + x2*x3 + x1*x2*x3. Then whatever valid algorithm should be able to spit out the subset that best predicts y.

What if I do have some models in mind from knowledge of the science? Would that influence my choice of model for prediction?
 
This is actually a bit of a tricky question. The simplest approach would be to just compare all possible models and select the best one; in your case that is 2^7 = 128 models, which might take a while if you have to code them manually.
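As an illustration of what "compare all possible models" might look like for the toy x1, x2, x3 example above, here is a sketch that reuses the cv_mse helper from the earlier reply; again, dat and the variable names are assumed placeholders:

```r
# Enumerate every non-empty subset of the 7 candidate terms (127 models;
# adding the intercept-only model would make it 128) and score each by
# cross-validated prediction error.
terms_all <- c("x1", "x2", "x3", "x1:x2", "x1:x3", "x2:x3", "x1:x2:x3")

subsets <- unlist(
  lapply(seq_along(terms_all), function(m) combn(terms_all, m, simplify = FALSE)),
  recursive = FALSE
)

scores <- sapply(subsets, function(tv) {
  cv_mse(as.formula(paste("y ~", paste(tv, collapse = " + "))), dat)
})

subsets[[which.min(scores)]]  # terms of the best-predicting model
```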

If you want to try something a little more sophisticated, my general approach to these kinds of problems is to fit the full model (with all variables and all interactions) and put a penalty on it, which helps prevent overfitting and automatically selects which variables should be included. For example, elastic-net regression is a modified version of ordinary least-squares regression that shrinks the coefficients towards zero (which tends to help avoid overfitting and increase predictive power) and can set some coefficients to exactly zero, in some sense "automatically" removing variables which do not contribute to the model. If you're comfortable with R, there are several packages which will do this, including glmnet:

https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html

The package, handily, can also do cross-validation automatically. The above link includes a tutorial.
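A rough sketch of that workflow with glmnet, using only its standard entry points (cv.glmnet, coef, predict); the data frame dat, the formula, and the choice alpha = 0.5 are assumptions for illustration, not something prescribed by the thread:

```r
# Elastic-net sketch with glmnet: build the full main-effects-plus-interactions
# design matrix, then let cv.glmnet pick the penalty strength by cross-validation.
library(glmnet)

# model.matrix expands y ~ x1*x2*x3 into all main effects and interactions;
# drop the intercept column because glmnet adds its own.
X <- model.matrix(y ~ x1 * x2 * x3, data = dat)[, -1]
y <- dat$y

# alpha mixes the ridge and lasso penalties (alpha = 1 is pure lasso);
# cv.glmnet chooses the penalty lambda by k-fold cross-validation.
cvfit <- cv.glmnet(X, y, alpha = 0.5)

coef(cvfit, s = "lambda.min")               # coefficients; some shrunk to exactly zero
predict(cvfit, newx = X, s = "lambda.min")  # predictions at the chosen penalty
```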
 