How to Optimize Predictive Models: Include Interactions or Not?

  • Thread starter: FallenApple
  • Tags: Prediction
AI Thread Summary
To optimize predictive models, start by including all variables and use stepwise AIC for model reduction, focusing solely on prediction rather than causal explanation. The discussion highlights the importance of incorporating interactions in the model, suggesting that a complete model should include all variable combinations. For model evaluation, cross-validation is recommended over AIC, with k-fold cross-validation being preferred due to its reliability in assessing predictive performance. If no specific models are predetermined, creating a comprehensive model with all interactions is advised, while knowledge of the science can guide model selection. Advanced methods like elastic-net regression can help manage overfitting and automatically select relevant variables, enhancing predictive accuracy.
FallenApple
So say I want to predict the next data point (or a point outside the data, since this is prediction).

So first, I would include all the variables in the dataset in the initial model. Then I would use stepwise AIC to iteratively reduce it down to the final model with minimum AIC. (I can do this because I do not care about causal explanation, just about prediction.)

Now, should the large model consist of just the sums of the variables, or should it also include all possible interaction terms?

Finally, what do I do with the final model? Do I just interpret the estimates and confidence intervals as usual?
 
If you want to do prediction, why use AIC? Why not evaluate prediction error directly? Cross-validation is what you want. Just compare all the models you're interested in and choose the one with the best predictive performance. The AIC is actually asymptotically equivalent to leave-one-out cross validation, but LOO-CV has weird properties, so go with something like k-fold cross-validation.
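A minimal sketch of this idea in Python using scikit-learn (the data and the two candidate models here are made up purely for illustration):

```python
# Comparing candidate models by k-fold cross-validation instead of AIC.
# The data-generating process and both candidate models are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.5 * x1 - 0.5 * x2 + 0.8 * x1 * x2 + rng.normal(scale=0.5, size=n)

# Two candidate design matrices: main effects only vs. with the interaction
candidates = {
    "main effects": np.column_stack([x1, x2]),
    "with interaction": np.column_stack([x1, x2, x1 * x2]),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for name, X in candidates.items():
    # Mean squared prediction error, averaged over the 5 held-out folds
    mse = -cross_val_score(LinearRegression(), X, y,
                           scoring="neg_mean_squared_error", cv=cv).mean()
    scores[name] = mse

best = min(scores, key=scores.get)  # model with the lowest CV error
```

Since the simulated data actually contains an interaction, the model that includes it should come out with the lower cross-validated error.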
 
Number Nine said:
If you want to do prediction, why use AIC? Why not evaluate prediction error directly? Cross-validation is what you want. Just compare all the models you're interested in and choose the one with the best predictive performance. The AIC is actually asymptotically equivalent to leave-one-out cross validation, but LOO-CV has weird properties, so go with something like k-fold cross-validation.

Got it.

But what if I don't have any models in mind?

Then should I just take every possible combination? For example, say I want to predict y and my data set has x1, x2, x3. Then the complete model with everything in it is y = x1 + x2 + x3 + x1*x2 + x1*x3 + x2*x3 + x1*x2*x3. Then any valid algorithm should be able to spit out the subset that best predicts y.

What if I have some models in mind from knowledge of the science? Then would that influence my choice for the prediction?
 
This is actually a bit of a tricky question. The simplest approach would be to compare all possible models and select the best one, which would be 2^7 = 128 models in your case; that might take a while if you have to code them manually.
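A rough sketch of that brute-force search in Python with scikit-learn; the data, coefficients, and term names are invented for illustration. Every non-empty subset of the 7 candidate terms is scored by 5-fold cross-validation:

```python
# Brute-force model search: enumerate every non-empty subset of the 7
# candidate terms (x1, x2, x3 plus all interactions) and score each by
# 5-fold cross-validation. That gives 2**7 - 1 = 127 candidate models.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
x1, x2, x3 = rng.normal(size=(3, n))
y = x1 + 2 * x2 + 0.5 * x1 * x3 + rng.normal(scale=0.5, size=n)

terms = {
    "x1": x1, "x2": x2, "x3": x3,
    "x1:x2": x1 * x2, "x1:x3": x1 * x3, "x2:x3": x2 * x3,
    "x1:x2:x3": x1 * x2 * x3,
}

results = []
names = list(terms)
for k in range(1, len(names) + 1):
    for subset in combinations(names, k):
        X = np.column_stack([terms[t] for t in subset])
        mse = -cross_val_score(LinearRegression(), X, y,
                               scoring="neg_mean_squared_error", cv=5).mean()
        results.append((mse, subset))

best_mse, best_subset = min(results)  # subset with the lowest CV error
```

With 127 small OLS fits this runs in seconds, but the count doubles with every added term, which is why the penalized approach below scales better.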

If you want to try something a little more sophisticated, my general approach to these kinds of problems is to fit the full model (with all variables and all interactions) and put a penalty on it, which helps prevent overfitting and automatically selects which variables should be included. For example, elastic-net regression is a modified version of ordinary least-squares regression that shrinks the coefficients towards zero (which tends to reduce overfitting and improve predictive performance) and can set some coefficients to exactly zero, in some sense "automatically" removing variables that do not contribute to the model. If you're comfortable with R, there are several packages that will do this, including glmnet:

https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html

The package, handily, can also do cross-validation automatically. The above link includes a tutorial.
 