# Removing dummy variables from a model: singly or only as a group?

## Main Question or Discussion Point

Hi,

I'm running a few generalized linear models. One of the predictors of interest is a categorical variable with 4 levels, which I have coded as 3 dummy variables, with one level serving as the baseline absorbed into the intercept (to avoid perfect multicollinearity, of course). I have not read a good treatment of the following question: should you consider dropping an individual dummy variable from the model, or only drop the whole group at once (meaning all in or all out)?

The categorical variable here is land use/cover, and the classes are forest, agriculture, grass, and wetland. Forest is the category not represented by a dummy variable. If agriculture and grass are statistically significant but wetland is not, then the effect of removing the wetland dummy is to make forest/wetland a single, merged baseline category. This has some intuitive appeal, because the nonsignificant result indicates the possibility of no difference between forest and wetland as predictors. So, in a sense, you are allowing the model results to inform how to modify the categorical variable from which the dummy variables are produced; in this case, aggregating forest and wetland would be indicated.

Am I missing something important here? Any related literature recommendations? Thanks, Seth
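To make the merging effect concrete, here is a minimal numpy sketch (the labels and the small sample are made up for illustration). It builds the design matrix with forest as the baseline, then drops the wetland dummy and shows that a wetland observation's row becomes indistinguishable from a forest observation's row, which is exactly the forest/wetland aggregation described above:

```python
import numpy as np

# Hypothetical land-cover labels for a handful of observations.
levels = ["forest", "agriculture", "grass", "wetland"]
cover = ["forest", "agriculture", "grass", "wetland", "forest", "wetland"]

def dummy_matrix(labels, baseline, include):
    """Design matrix: intercept plus one 0/1 column per level in
    `include` (the baseline level gets no column of its own)."""
    cols = [lvl for lvl in include if lvl != baseline]
    X = np.column_stack(
        [np.ones(len(labels))] +
        [np.array([1.0 if lab == lvl else 0.0 for lab in labels])
         for lvl in cols]
    )
    return X, ["intercept"] + cols

# Full coding: forest is the baseline, three dummies.
X_full, names_full = dummy_matrix(cover, "forest", levels)

# Drop the wetland dummy: a wetland row now looks identical to a forest
# row, i.e. the two classes are silently merged into one baseline.
X_drop, names_drop = dummy_matrix(cover, "forest", ["agriculture", "grass"])
forest_row = X_drop[0]    # a forest observation
wetland_row = X_drop[3]   # a wetland observation
print(np.array_equal(forest_row, wetland_row))   # rows are indistinguishable
print(np.array_equal(X_full[0], X_full[3]))      # distinct under full coding
```

So "dropping one dummy" is never a neutral simplification: it is a re-coding of the categorical variable itself.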

EnumaElish
You must be thinking that the differences {a, g, w} minus forest are more important than, say, the difference a − g. Any particular reason why?

Before excluding anything, I'd create a 4x4 matrix of all pairwise differences and try to see what's significant. Then you might consider joint F tests (e.g., are x and y jointly significant when the baseline is z?).
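The two suggestions above can be sketched with plain numpy (the group means and sample size here are simulated assumptions, chosen so that forest and wetland coincide by design). The pairwise-difference matrix comes from the fitted dummy coefficients, and the joint F test compares the residual sums of squares of a restricted and a full model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated response: hypothetical group means, forest == wetland by design.
means = {"forest": 2.0, "agriculture": 3.5, "grass": 2.8, "wetland": 2.0}
levels = list(means)
labels = rng.choice(levels, size=200)
y = np.array([means[g] for g in labels]) + rng.normal(0, 0.5, size=200)

def design(labels, baseline):
    cols = [g for g in levels if g != baseline]
    X = np.column_stack([np.ones(len(labels))] +
                        [(labels == g).astype(float) for g in cols])
    return X, cols

X, cols = design(labels, "forest")
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Effect of each level relative to the baseline (baseline effect is 0).
effect = {"forest": 0.0}
effect.update(dict(zip(cols, beta[1:])))

# 4x4 matrix of all pairwise differences between level effects.
diff = np.array([[effect[a] - effect[b] for b in levels] for a in levels])

def rss(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ b) ** 2))

# Joint F test that agriculture and grass are jointly zero
# (restricted model keeps only the intercept and the wetland dummy).
X_r = np.column_stack([np.ones(len(labels)),
                       (labels == "wetland").astype(float)])
n, p_full, q = len(y), X.shape[1], 2
F = ((rss(X_r, y) - rss(X, y)) / q) / (rss(X, y) / (n - p_full))
print(round(F, 1))  # large F => agriculture & grass jointly significant
```

The F statistic would be compared against an F(q, n − p_full) reference distribution; the point is that "drop both together?" is a joint hypothesis, not two separate t tests.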

Hmm, I've never seen this suggested for dummy variables. Usually, the choice of a baseline is considered arbitrary, or is made for reasons like mine: forest is the most "natural" and common condition in this area, so it seems a natural baseline for comparison. I can perhaps see how the choice of a baseline becomes more important when you are considering dropping individual dummy variables, since the baseline then dictates the possible class aggregations that result from dropping variables.

> You must be thinking that the differences {a, g, w} minus forest are more important than, say, the difference a − g. Any particular reason why?
>
> Before excluding anything, I'd create a 4x4 matrix of all pairwise differences and try to see what's significant. Then you might consider joint F tests (e.g., are x and y jointly significant when the baseline is z?).
So you are suggesting running the model four times, once per possible baseline category, and then seeing how significance and parameter estimates vary?
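It may help to note that the four refits are re-parametrizations of the same model: the fitted values are identical no matter which level is the baseline, and only the reported contrasts (and their apparent significance) change. A minimal numpy sketch with simulated data (labels and effects are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
levels = ["forest", "agriculture", "grass", "wetland"]
labels = rng.choice(levels, size=120)
true_effect = {"forest": 0.0, "agriculture": 1.0, "grass": 2.0, "wetland": 0.0}
y = np.array([true_effect[g] for g in labels]) + rng.normal(0, 1, size=120)

def fitted(baseline):
    """Fit the dummy-coded model with the given baseline; return fitted values."""
    cols = [g for g in levels if g != baseline]
    X = np.column_stack([np.ones(len(labels))] +
                        [(labels == g).astype(float) for g in cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta

# Same column space in all four parametrizations => identical fitted values.
fits = [fitted(b) for b in levels]
print(all(np.allclose(fits[0], f) for f in fits[1:]))  # True
```

So rotating the baseline changes nothing about the model fit; it only changes which pairwise differences the coefficient table happens to display, which is why the full pairwise matrix is a more complete summary.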
