# A Follow-Up on F-Test in Multi-Linear Regression

1. May 11, 2017

### WWGD

Hi All,
Say we want to linearly regress Y (dependent) against $X_1, X_2, \dots, X_n$ (independent), all numerical variables, to get a model $Y = a_1 X_1 + \dots + a_n X_n$.

Then we test the hypotheses:

$H_0: a_1 = a_2 = \dots = a_n = 0$

$H_1: a_i \neq 0$ for some $i = 1, 2, \dots, n$

(This generalizes the test for equality of two means: instead of comparing two means, we test whether all the coefficients are simultaneously zero.)
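For reference, a sketch of the usual statistic behind this overall test, assuming the model also includes an intercept, writing $N$ for the number of observations and $\mathrm{TSS}$, $\mathrm{RSS}$ for the total and residual sums of squares:

$$F = \frac{(\mathrm{TSS} - \mathrm{RSS})/n}{\mathrm{RSS}/(N - n - 1)}$$

Under $H_0$ (and the usual OLS normality assumptions) this follows an $F_{n,\,N-n-1}$ distribution, so a large value is evidence against all the $a_i$ being zero at once.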

Could someone remind me what one does, after rejecting $H_0$, to figure out which of the $a_i$'s are non-zero, other than considering the t-interval for each $a_i$ and checking whether the interval $(a, b)$ contains 0, i.e., whether $a < 0 < b$?

EDIT: IIRC, we then do a pairwise comparison of means and then consider the intervals?
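The t-interval check described above can be sketched in plain Python. This is a rough illustration on made-up data, not anyone's exact workflow: fit by ordinary least squares, estimate each coefficient's standard error from the diagonal of $\sigma^2 (X'X)^{-1}$, and see whether the 95% interval covers 0.

```python
from math import sqrt

# Made-up data: roughly Y = X1 + X2 with small perturbations.
x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x2 = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
y = [3.1, 2.9, 7.2, 6.8, 11.1, 10.9, 15.2, 14.8, 19.1, 19.3]

# Design matrix with an intercept column.
X = [[1.0, a, b] for a, b in zip(x1, x2)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inverse(M):
    """Invert a small matrix by Gauss-Jordan elimination (no pivoting;
    fine for this well-conditioned toy example)."""
    n = len(M)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for i in range(n):
        aug[i] = [v / aug[i][i] for v in aug[i]]
        for k in range(n):
            if k != i:
                f = aug[k][i]
                aug[k] = [v - f * p for v, p in zip(aug[k], aug[i])]
    return [row[n:] for row in aug]

Xt = [list(col) for col in zip(*X)]
XtX = matmul(Xt, X)
XtX_inv = inverse(XtX)
Xty = [sum(a * b for a, b in zip(col, y)) for col in Xt]
beta = [sum(a * b for a, b in zip(row, Xty)) for row in XtX_inv]  # (X'X)^-1 X'y

# Residual variance estimate: RSS / (N - p), with p = 3 fitted coefficients.
resid = [yi - sum(b * xij for b, xij in zip(beta, row))
         for yi, row in zip(y, X)]
N, p = len(y), len(beta)
sigma2 = sum(r * r for r in resid) / (N - p)

T_CRIT = 2.365  # t_{0.975} with N - p = 7 degrees of freedom (from a t table)
for j, b in enumerate(beta):
    se = sqrt(sigma2 * XtX_inv[j][j])
    lo, hi = b - T_CRIT * se, b + T_CRIT * se
    print(f"a_{j}: estimate {b:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), "
          f"covers 0: {lo < 0 < hi}")
```

Note that doing this for each coefficient at the 95% level, without any correction, is exactly the multiple-testing issue raised in the next post.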

Last edited: May 11, 2017
2. May 11, 2017

### MarneMath

The F-test doesn't really tell you which coefficients are zero; it's a test of the overall regression. To look at the main effects and see which are interesting and which are not, you'll have to do a multiple-comparison test of some sort, depending on your goal. At the very least, you should probably apply a Bonferroni correction (although that's a debatable choice).
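A minimal sketch of the Bonferroni step, assuming we already have per-coefficient t-test p-values from the regression output (the names and numbers below are made up for illustration):

```python
# Hypothetical p-values from the per-coefficient t-tests of a fitted model.
p_values = {"a_1": 0.003, "a_2": 0.04, "a_3": 0.20, "a_4": 0.012}

alpha = 0.05
m = len(p_values)

# Bonferroni: test each coefficient at level alpha/m, or equivalently
# inflate each p-value by a factor of m and compare it to alpha.
rejected = {name: p <= alpha / m for name, p in p_values.items()}
adjusted = {name: min(1.0, m * p) for name, p in p_values.items()}
```

This controls the familywise error rate but is conservative; Holm's step-down procedure controls the same error rate and is uniformly more powerful, which is part of why Bonferroni is a debatable default.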

3. May 11, 2017

### WWGD

I (think I) understand: the F-test only tells you (if you reject $H_0$) that there is at least one non-zero coefficient in the regression $Y = a_1 X_1 + \dots + a_n X_n$, but not which one. I understand that afterwards one does pairwise comparisons.