## Multiple regression: why use categories?

Hello,

I have a question regarding multiple regression.

I am reading a paper in which the author performed a multiple regression to predict the energy consumption of an electric car based on 27 variables measured during journeys, such as speed, acceleration, etc.

The author categorised the variables into 4 groups, as shown in this table. If two variables within a group were correlated, he dropped one of them. He ended up with 16 nominated variables for the regression.

My questions are:

1. What is the advantage of using the categories?
2. What if two variables in separate groups are correlated?
3. Could he have put all the variables in one group and done a stepwise or best subsets regression?

The reason I am asking these questions is that multicollinearity does not matter much if your regression is only for prediction. He removes correlated variables within the categories, but not between the categories.

I would have thought that leaving them all in one group, dropping one of each pair of highly correlated variables, and then doing a best subsets regression would be a better approach.
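To make the alternative concrete, here is a minimal sketch of that two-step idea on synthetic data (all variable names and thresholds are made up for illustration; the paper's actual variables and cutoff are not shown here): first filter out one of each pair of highly correlated predictors across *all* variables, then run an exhaustive best subsets regression on the survivors, scored by adjusted R².

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical journey data: 6 made-up predictors, two of which
# (speed_mean, speed_max) are deliberately highly correlated.
n = 200
speed_mean = rng.normal(50, 10, n)
speed_max = 1.2 * speed_mean + rng.normal(0, 2, n)  # ~0.99 correlation
accel = rng.normal(0, 1, n)
temp = rng.normal(15, 5, n)
load = rng.normal(300, 50, n)
wind = rng.normal(0, 3, n)

X = np.column_stack([speed_mean, speed_max, accel, temp, load, wind])
names = ["speed_mean", "speed_max", "accel", "temp", "load", "wind"]
y = 0.3 * speed_mean + 0.5 * accel + 0.1 * load + rng.normal(0, 5, n)

# Step 1: drop one of each pair of highly correlated predictors
# (|r| > 0.9, an arbitrary cutoff), across ALL variables at once
# rather than only within predefined categories.
corr = np.corrcoef(X, rowvar=False)
keep = []
for j in range(X.shape[1]):
    if all(abs(corr[j, k]) <= 0.9 for k in keep):
        keep.append(j)

# Step 2: exhaustive best subsets regression over the survivors,
# comparing models by adjusted R^2.
def adj_r2(cols):
    A = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    p = len(cols)
    return 1 - (ss_res / (n - p - 1)) / (ss_tot / (n - 1))

best = max(
    (s for r in range(1, len(keep) + 1)
       for s in itertools.combinations(keep, r)),
    key=adj_r2,
)
print("kept after correlation filter:", [names[j] for j in keep])
print("best subset:", [names[j] for j in best])
```

With only a handful of survivors the exhaustive search is cheap (2^5 − 1 = 31 candidate models here); the point of the question is whether this global filter-then-search beats filtering within each category separately.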

My main question is: what, if any, is the advantage of using the 4 categories?

Thank you

John


Mentor
 1. What is the advantage of using the categories?
I would guess that it lets you look at correlations only where you expect them to arise from general properties of the car/tour/driver, and not from the specific routes chosen for the calibration.

Categories reduce the complexity of the analysis a bit - maybe it is just a question of computational power. The "best" approach (in an ideal world with test data of arbitrary size and infinite computation power) would be to use all variables, but that might be impractical.
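The computational-power point is easy to quantify for an exhaustive best subsets search: every non-empty subset of predictors is a separate model fit, so the number of candidate models doubles with each added variable.

```python
# Exhaustive best subsets cost grows exponentially with the number
# of candidate predictors: 2^p - 1 non-empty subsets to fit.
for p in (16, 27):
    print(f"{p} predictors -> {2**p - 1:,} candidate models")
# 16 predictors -> 65,535 candidate models
# 27 predictors -> 134,217,727 candidate models
```

Going from the author's 16 nominated variables to the full 27 multiplies the search by roughly 2,000x, which is one plausible reason to prune within categories first.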