
A How to do a correct prediction

  1. Apr 18, 2017 #1
So say I want to predict the next point in the data (or a point outside it, since it's prediction).

So first, I would include all the variables in the dataset in the initial model. Then I would use stepwise AIC to iteratively reduce it down to a final model with minimum AIC. (I can do this because I do not care about causal explanation, only about prediction.)

Now, should the large model consist of just the sum of the variables, or should it also include all possible interaction terms?

Finally, what do I do with the final model? Do I just interpret the estimates and confidence intervals as usual?
  3. Apr 18, 2017 #2
If you want to do prediction, why use AIC? Why not evaluate prediction error directly? Cross-validation is what you want. Just compare all the models you're interested in and choose the one with the best predictive performance. AIC is actually asymptotically equivalent to leave-one-out cross-validation, but LOO-CV has some undesirable properties (e.g., high variance of the error estimate), so go with something like k-fold cross-validation.
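The comparison described above can be sketched as follows. This is a minimal illustration in Python with scikit-learn (the thread later mentions R; this is just an analogue), where the data and the candidate feature subsets are made up for demonstration:

```python
# Compare candidate linear models by 5-fold cross-validated
# prediction error and keep the best one. Simulated data: y depends
# on x1 and x2, while x3 is pure noise.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))          # columns: x1, x2, x3
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n)

# Candidate models = subsets of the columns (hypothetical choices)
candidates = {
    "x1":       [0],
    "x1+x2":    [0, 1],
    "x1+x2+x3": [0, 1, 2],
}

scores = {}
for name, cols in candidates.items():
    # Mean squared prediction error estimated by 5-fold CV
    mse = -cross_val_score(LinearRegression(), X[:, cols], y,
                           cv=5, scoring="neg_mean_squared_error").mean()
    scores[name] = mse

best = min(scores, key=scores.get)
print(best, scores[best])
```

The point is that the selection criterion is an estimate of out-of-sample error itself, rather than a proxy like AIC.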
    Last edited: Apr 18, 2017
  4. Apr 18, 2017 #3
    Got it.

    But what if I don't have any models in mind?

Then should I just take a combination of each possibility? For example, say I want to predict y and my data set has x1, x2, x3. Then the complete model with everything in it is y = x1 + x2 + x3 + x1*x2 + x1*x3 + x2*x3 + x1*x2*x3. Then whatever valid algorithm should be able to spit out the subset that best predicts y.

    What if I have some models in mind from knowledge of the science? Then would that influence my choice for the prediction?
  5. Apr 18, 2017 #4
This is actually a bit of a tricky question. The simplest approach would be to just compare all possible models and select the best one, which in your case means 2^7 = 128 models (one for each subset of the 7 terms), which might take a while if you have to code them manually.
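With only 7 candidate terms the brute-force enumeration is still cheap to automate rather than code by hand. A sketch in Python with scikit-learn, on simulated data (the exhaustive loop becomes infeasible quickly as the number of terms grows):

```python
# Score every non-empty subset of the 7 terms (main effects plus
# interactions of x1, x2, x3) by 5-fold cross-validated prediction
# error, and keep the subset with the lowest error.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))
y = X[:, 0] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.3, size=150)

# All 7 candidate terms: x1, x2, x3 and their interactions
X_full = PolynomialFeatures(degree=3, interaction_only=True,
                            include_bias=False).fit_transform(X)
n_terms = X_full.shape[1]  # 7

best_mse, best_subset = np.inf, None
for k in range(1, n_terms + 1):
    for subset in itertools.combinations(range(n_terms), k):
        mse = -cross_val_score(LinearRegression(), X_full[:, subset], y,
                               cv=5,
                               scoring="neg_mean_squared_error").mean()
        if mse < best_mse:
            best_mse, best_subset = mse, subset

print(best_subset, best_mse)
```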

If you want to try something a little more sophisticated, my general approach to these kinds of problems is to fit the full model (with all variables and all interactions) and put a penalty on the model, which helps prevent overfitting and automatically selects which variables should be included. For example, elastic-net regression is a modified version of ordinary least-squares regression which shrinks the coefficients towards zero (which tends to help avoid overfitting and increase predictive power) and can set some coefficients to exactly zero, in some sense "automatically" removing variables which do not contribute to the model. If you're comfortable with R, there are several packages which will do this, including glmnet:
    The package, handily, can also do cross-validation automatically, and its documentation includes a tutorial.
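For readers more comfortable in Python, scikit-learn's ElasticNetCV is a close analogue of glmnet's cross-validated fit: it fits the penalized regression over a grid of penalty strengths and picks the best one by cross-validation. A minimal sketch on simulated data (the data, l1_ratio, and fold count here are illustrative assumptions):

```python
# Elastic-net regression on the full design matrix (all main effects
# and interactions of x1, x2, x3), with the penalty strength chosen
# by 5-fold cross-validation.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
y = 1.5 * X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=200)

# Full design: 7 terms, standardized so the penalty treats them equally
X_full = PolynomialFeatures(degree=3, interaction_only=True,
                            include_bias=False).fit_transform(X)
X_full = StandardScaler().fit_transform(X_full)

# l1_ratio mixes the lasso (1.0) and ridge (0.0) penalties; the lasso
# component is what can drive coefficients exactly to zero.
model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_full, y)

print(model.alpha_)   # penalty strength chosen by CV
print(model.coef_)    # shrunken coefficients for the 7 terms
```

Standardizing before fitting matters here, since the penalty is applied uniformly to all coefficients.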