Feb11-11, 12:35 AM
I have run a few linear regression models predicting water quality for watersheds, using explanatory variables such as mean impervious surface within a watershed and others suggested by theory and by other researchers' work. I would like to compare explanatory variables measured on different scales to determine their relative strengths of effect. The standard approach I see is to multiply each slope coefficient by the standard deviation of the corresponding explanatory variable. I've also seen multiplying by a percentile-based range (such as the difference between the 75th and 25th percentiles of the variable) to deal with outliers and other issues that call the use of the standard deviation into question. I understand what partial r-squared and partial correlation are from text descriptions.
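To make the two scalings concrete, here's a rough numpy sketch of what I mean; the variable names and data are made up, not my actual watershed data:

```python
import numpy as np

# Made-up data standing in for the real watershed predictors.
rng = np.random.default_rng(0)
n = 100
impervious = rng.normal(30, 10, n)   # % impervious surface (hypothetical)
forest = rng.normal(50, 20, n)       # % forest cover (hypothetical)
water_quality = 2.0 - 0.05 * impervious + 0.01 * forest + rng.normal(0, 0.5, n)

# Ordinary least squares: y = b0 + b1*impervious + b2*forest
X = np.column_stack([np.ones(n), impervious, forest])
beta, *_ = np.linalg.lstsq(X, water_quality, rcond=None)

# Scale each slope by the SD of its predictor so slopes are comparable:
sd_scaled = beta[1:] * np.array([impervious.std(ddof=1), forest.std(ddof=1)])

# The IQR-based alternative, less sensitive to outliers:
def iqr(v):
    return np.percentile(v, 75) - np.percentile(v, 25)

iqr_scaled = beta[1:] * np.array([iqr(impervious), iqr(forest)])
```

Each scaled coefficient is then the change in the response per one-SD (or one-IQR) change in that predictor, which is what makes them comparable across units.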
First question: are partial r2 and partial correlation equivalent to each other? They are described in separate places and with different language, so I haven't worked out the mathematical relationship yet. Is one just the square of the other?
Second, I know that partial r2 is related to the increase in the amount of variance in the response explained by adding a variable of interest, given that all the other variables are already in the model. Is this a measure of effect size similar to a standardized coefficient? Will the two necessarily rank variables by "importance" the same way? I am trying to decide how to compare variables, and partial r2 has intuitive appeal to people used to looking at multiple r2.
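Here is the nested-model calculation I have in mind for partial r2, as a numpy sketch on made-up data (names hypothetical); it just compares the residual sum of squares with and without the variable of interest:

```python
import numpy as np

# Made-up stand-ins for the real watershed data.
rng = np.random.default_rng(1)
n = 100
impervious = rng.normal(30, 10, n)
forest = rng.normal(50, 20, n)
y = 2.0 - 0.05 * impervious + 0.01 * forest + rng.normal(0, 0.5, n)

def sse(X, y):
    """Residual sum of squares from an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
full = np.column_stack([ones, impervious, forest])
reduced = np.column_stack([ones, forest])  # drop the variable of interest

# Partial r2 for impervious surface, given forest is in the model:
# the fraction of the reduced model's leftover variance that the
# added variable explains.
partial_r2 = (sse(reduced, y) - sse(full, y)) / sse(reduced, y)
```

Since the full model nests the reduced one, the numerator is nonnegative, so this always lands between 0 and 1.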