Conceptual question regarding hypothesis testing in regression

Summary
In hypothesis testing for regression coefficients, rejecting the null hypothesis H0: b1 = 0 indicates that age (x1) has a predictive ability for satisfaction (y) after accounting for the effects of other variables like severity (x2) and anxiety (x3). This means that the interpretation is that age can predict satisfaction levels while controlling for the other covariates. However, it is important to consider the potential for multicollinearity, where high correlations among predictors can lead to misleading significance results. A variable may appear significant due to its relationship with another variable rather than its independent effect. Thus, careful analysis is necessary to ensure accurate interpretations of regression results.
Rifscape

Homework Statement



Hi,

I have a question about hypothesis tests on a regression model's coefficients.

Say there is a regression model that has the form:

y = b0 + b1x1 + b2x2 + b3x3 + e

For the sake of simplicity, let e be the random error, x1 be age, x2 be severity, and x3 be anxiety; y is satisfaction.

Say I do a hypothesis test on the coefficient b1:

H0: b1 = 0

Ha: b1 is not equal to 0.

Say I get strong enough evidence to reject the null, and state that b1 is not equal to 0.

Does this mean that

  1. age has some ability to predict satisfaction level even after the effects of severity and anxiety on satisfaction level have been taken into account?

or that

  2. age has some ability to predict satisfaction level regardless of whether the effects of severity and anxiety on satisfaction level have been taken into account?
I thought it was the second one, since the test concerns the effect of one predictor, and since the null hypothesis was rejected, it means age has some predictive power regardless of the other covariates. However, someone told me it was the opposite.

I am not sure now. Could someone please let me know which interpretation is correct, and why?

Any help is appreciated.

Thank you for reading.
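For concreteness, the test being described can be sketched with synthetic data (everything below is invented for illustration: the sample size, the "true" coefficients, and the noise level are all assumptions, not values from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: age (x1), severity (x2), anxiety (x3), satisfaction (y).
x1 = rng.uniform(20, 80, n)
x2 = rng.uniform(1, 10, n)
x3 = rng.uniform(1, 10, n)
# Assumed "true" model, in which age really does contribute (b1 = -0.5).
y = 100 - 0.5 * x1 - 2.0 * x2 - 1.0 * x3 + rng.normal(0, 5, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), x1, x2, x3])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance and standard errors of the coefficient estimates.
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))

# t statistic for H0: b1 = 0 (index 1 is the age coefficient).
# Because x2 and x3 are in the design matrix, this tests age's
# contribution *after* severity and anxiety are accounted for.
t_b1 = beta[1] / se[1]
print(f"b1 = {beta[1]:.3f}, t = {t_b1:.2f}")
```

Note that the t statistic here is computed from the full design matrix, which is why rejecting H0: b1 = 0 is read as interpretation 1: the other predictors are held fixed in the fit.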
 
Anyone?
 
Rifscape said:
Anyone?

Depending on the nature of the data, it is quite possible that a statistically significant effect is actually not significant at all. Suppose, for example, that in your sample you have five variables whose values are all highly correlated. It is possible that a 5-term linear regression has ##x_1## declared very significant, but with the "true" significance really being due to ##x_3##, say. The variable ##x_1## can look significant simply because it is correlated closely to ##x_3##.
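The scenario in the reply above is easy to reproduce with synthetic data (a sketch; the correlation level, sample size, and coefficients are invented for illustration). Here ##x_3## is the variable that truly drives ##y##, while ##x_1## merely tracks ##x_3## closely:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# x3 truly drives y; x1 is just highly correlated with x3.
x3 = rng.normal(0, 1, n)
x1 = x3 + rng.normal(0, 0.1, n)      # corr(x1, x3) is close to 1
y = 2.0 * x3 + rng.normal(0, 1, n)   # y depends on x3 only

def ols_t(X, y):
    """Fit OLS with an intercept; return coefficients and t statistics."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, beta / se

# Regressing y on x1 alone: x1 looks highly "significant" even though
# it has no direct effect, because it is a proxy for x3.
_, t_alone = ols_t(x1.reshape(-1, 1), y)
print(f"t for x1 alone: {t_alone[1]:.1f}")

# Once x3 enters the model, x1's t statistic shrinks dramatically.
_, t_both = ols_t(np.column_stack([x1, x3]), y)
print(f"t for x1 with x3 included: {t_both[1]:.1f}")
```

This is the distinction at the heart of the original question: the t test on a coefficient in a multiple regression assesses that predictor's contribution given the other variables in the model, not its standalone predictive ability.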
 
First, I tried to show that ##f_n## converges uniformly on ##[0,2\pi]##, which is true since ##f_n \rightarrow 0## for ##n \rightarrow \infty## and ##\sigma_n=\mathrm{sup}\left| \frac{\sin\left(\frac{n^2}{n+\frac 15}x\right)}{n^{x^2-3x+3}} \right| \leq \frac{1}{|n^{x^2-3x+3}|} \leq \frac{1}{n^{\frac 34}}\rightarrow 0##. I can't use neither Leibnitz's test nor Abel's test. For Dirichlet's test I would need to show, that ##\sin\left(\frac{n^2}{n+\frac 15}x \right)## has partialy bounded sums...