wvguy8258
Hi,
I'm interested in this question primarily in how it relates to what is usually called classical or frequentist statistics (p-values, etc.). I'm fully aware of how stepwise and other automated techniques can negatively impact analyses and render inference based on parameter estimates suspect. I also realize that variable elimination may be desirable if one wants to make predictions outside the calibration data, since extraneous variables can lead to overfitting. My primary interest, though, is hypothesis testing using multiple linear regression.

If stepwise and other automated procedures distort the interpretation of significance tests, won't the same thing happen when the analyst removes variables or builds the regression somewhat interactively (e.g., adding an important variable based on theory, then looking at partial residual plots to determine the next likely variable to include, or deciding a log transform is needed on the dependent variable after inspecting initial results, etc.)? If so, it seems best to calibrate the regression model using all the variables you wish to test hypotheses on, plus a few others whose effects you feel it may be necessary to control for, and then base inference on this initial full model with a correction for multiple testing. This is a bit risky, because you are playing with the bias-variance trade-off for the parameter estimates (do I add this variable to control for an effect and thereby inflate variance?).

It just seems that, to be really pure about it, you aren't supposed to look at the data before conducting a hypothesis test. Once you break that boundary and start trying variable transformations, removing and adding things, you move closer and closer to data dredging. What is the remedy? Splitting the data, doing what you will to one half to tease apart relationships, and then applying one final model to the withheld half, hands off?
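To make concrete what I mean by splitting, here is a minimal sketch in Python/numpy, using made-up simulated data (the data-generating process, variable names, and the "pick the most correlated predictor" rule are all my own stand-ins for interactive model building, not anyone's recommended procedure): explore freely on one half, then fit the now-fixed model once on the held-out half.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (assumed for illustration): y depends on x0 only;
# x1 and x2 are pure-noise predictors.
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + rng.normal(size=n)

# Split: an exploration half for model building, a confirmation half
# held out for the final, pre-specified fit.
half = n // 2
X_explore, y_explore = X[:half], y[:half]
X_confirm, y_confirm = X[half:], y[half:]

def ols_fit(X, y):
    """OLS via least squares; returns coefficients, intercept first."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

# Exploration half: do whatever data dredging you like. Here, a toy
# stand-in: keep the single predictor most correlated with y.
cors = [abs(np.corrcoef(X_explore[:, j], y_explore)[0, 1])
        for j in range(X.shape[1])]
best = int(np.argmax(cors))

# Confirmation half: fit the now-fixed model exactly once, hands off.
# Any p-value computed here is not contaminated by the exploration.
beta = ols_fit(X_confirm[:, [best]], y_confirm)
print("selected predictor:", best)
print("confirmation-half coefficients:", beta)
```

The point of the sketch is only that the selection step never sees the confirmation half, so a single pre-specified test on that half avoids the distortion the exploration would otherwise introduce (at the cost of half the sample size for inference).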
I've also been frustrated by my inability to find a few succinct articles on model-building strategies under different modeling objectives.
Thanks,
Seth