Regression strategy for hypothesis testing vs prediction

AI Thread Summary
The discussion centers on regression strategy for hypothesis testing versus prediction within frequentist statistics. Concerns are raised about how automated techniques like stepwise regression undermine the validity of significance tests, while variable elimination can still be desirable for prediction because extraneous variables lead to overfitting beyond the calibration data. The importance of basing inference on a full, pre-specified model is emphasized, along with the bias-variance trade-off incurred when the model is adjusted after inspecting the data. The conversation highlights the tension between data exploration and preserving the integrity of hypothesis tests, suggests that splitting the data for model validation may be one remedy, and closes with a request for clear guidance on model-building strategies tailored to different objectives.
wvguy8258
Hi,

I'm interested in this question primarily as it relates to what is usually called classical or frequentist statistics (p-values, etc.). I'm fully aware of how stepwise and other automated techniques can negatively impact analyses and render inference based on parameter estimates suspect. I also realize that variable elimination may be desirable if one wants to make predictions outside the calibration data, since extraneous variables can lead to overfitting. My main interest, though, is hypothesis testing with multiple linear regression.

If stepwise and other automated procedures distort the interpretation of significance tests, won't the same thing happen when the analyst removes variables or builds the regression somewhat interactively (e.g. adding an important variable based on theory, then looking at partial residual plots to decide which variable to include next, or deciding a log transform of the dependent variable is needed after inspecting initial results)? If so, it seems best to calibrate the regression model using all the variables you wish to test hypotheses on, plus a few others whose effects you feel you need to control for, and then base inference on this initial full model with a correction for multiple testing. This is a bit risky because you are playing with the bias-variance trade-off in the parameter estimates (do I add this variable to control for an effect and thereby inflate the variance?).

It also seems that, to be really pure about it, you aren't supposed to look at the data before conducting a hypothesis test. Once you cross that boundary and start trying variable transformations, removing and adding things, you move closer and closer to data dredging. What is the remedy? Splitting the data, doing whatever you like to one part to tease apart relationships, and then applying one final model to the withheld data, hands off? I've been quite frustrated by the lack of succinct articles on model-building strategies under different modeling objectives.

Thanks,
Seth
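
A minimal sketch of the pre-specified full-model strategy described above, assuming Python with statsmodels and pandas; the simulated data, the variable names (x1, x2, control), and the choice of a Holm correction are illustrative assumptions, not anything prescribed in the thread:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

# Simulated stand-in data: two hypothesis variables and one control.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "x1": rng.normal(size=n),        # hypothesis variable with a real effect
    "x2": rng.normal(size=n),        # hypothesis variable with no effect
    "control": rng.normal(size=n),   # nuisance variable retained to control its effect
})
df["y"] = 1.5 * df["x1"] + 0.2 * df["control"] + rng.normal(size=n)

# Fit the full, pre-specified model once; no data-driven selection.
X = sm.add_constant(df[["x1", "x2", "control"]])
fit = sm.OLS(df["y"], X).fit()

# Adjust only the p-values of the hypothesis variables (Holm step-down).
hyp_pvals = fit.pvalues[["x1", "x2"]]
reject, p_adj, _, _ = multipletests(hyp_pvals, alpha=0.05, method="holm")
print(pd.DataFrame({"p_raw": hyp_pvals, "p_holm": p_adj, "reject": reject}))
```

Reading the tests off this single pre-committed fit, rather than off a model arrived at by selection, is what keeps the nominal p-values interpretable.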
 
wvguy8258 said:
... I've been quite frustrated by the lack of succinct articles on model-building strategies under different modeling objectives.

I understand your frustration - authors who can summarize the key points in a few readable lines are few and far between. Let me know how the search goes.
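
One concrete version of the split-sample remedy raised at the end of the original post, again only a sketch under assumed data and an assumed final formula: explore freely on one half (transformations, adding or dropping variables), then fit a single pre-committed model on the untouched half and take the inference only from there.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data with a skewed response, so a log transform is plausible.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = np.exp(0.8 * df["x1"] + rng.normal(scale=0.5, size=n))

# Random split: an exploration set and a held-out confirmation set.
explore = df.sample(frac=0.5, random_state=42)
confirm = df.drop(explore.index)

# Exploration half: try things freely; none of these fits are used for inference.
_ = smf.ols("y ~ x1 + x2", data=explore).fit()          # raw response
_ = smf.ols("np.log(y) ~ x1 + x2", data=explore).fit()  # log transform, check residuals, etc.

# Confirmation half: one pre-committed final model, hands off.
final = smf.ols("np.log(y) ~ x1", data=confirm).fit()
print(final.summary().tables[1])
```

The price is statistical power (only half the data supports the confirmatory fit), which is the usual trade against the freedom to explore.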
 