In one example I saw someone make an argument I hadn't seen before. They had confidence intervals for two different statistics, and they argued that since the two confidence intervals did not overlap, we could conclude that the expected values were different. This reminded me of hypothesis testing, and I am wondering about the justification for the argument that if two confidence intervals do not overlap, we can conclude that the expected values are different.

We have two independent normally distributed random variables, [itex]\hat{X}[/itex] and [itex]\hat{Y}[/itex], with known variances [itex]\sigma^{2}_{\hat{X}}[/itex] and [itex]\sigma^{2}_{\hat{Y}}[/itex]. We also have [itex]E(\hat{X})=\mu_{X}[/itex] and [itex]E(\hat{Y})=\mu_{Y}[/itex]; however, we do not know [itex]\mu_{X}[/itex] and [itex]\mu_{Y}[/itex].

We now want to test the hypothesis that [itex]\mu_{X}=\mu_{Y}[/itex].

The standard way of doing this is, of course, to reject the hypothesis if

[itex]\left|{\frac{\hat{X}-\hat{Y}}{\sqrt{\sigma^{2}_{\hat{X}}+\sigma^{2}_{\hat{Y}}} } }\right|>z_{\alpha/2}[/itex]
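As a quick sketch of this rejection rule (the function name and the critical value 1.96, which is [itex]z_{\alpha/2}[/itex] for [itex]\alpha = 0.05[/itex], are my own choices for illustration):

```python
import math

Z_CRIT = 1.96  # z_{alpha/2} for alpha = 0.05

def reject_equal_means(x_hat, y_hat, var_x, var_y, z_crit=Z_CRIT):
    """Reject H0: mu_X = mu_Y when |X - Y| / sqrt(var_X + var_Y) > z_{alpha/2}.
    Assumes X and Y are independent normal estimators with known variances."""
    z = abs(x_hat - y_hat) / math.sqrt(var_x + var_y)
    return z > z_crit

# |1.0 - 3.5| / sqrt(2) ~ 1.77 < 1.96, so H0 is not rejected here:
print(reject_equal_means(1.0, 3.5, 1.0, 1.0))  # False
```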

The alternative way of testing this is to construct two confidence intervals and reject the hypothesis if the two intervals do not overlap.

The two intervals are:

[itex](\hat{X}-z_{\alpha/2}*\sigma_{\hat{X}}, \hat{X}+z_{\alpha/2}*\sigma_{\hat{X}})[/itex]

and [itex](\hat{Y}-z_{\alpha/2}*\sigma_{\hat{Y}}, \hat{Y}+z_{\alpha/2}*\sigma_{\hat{Y}})[/itex]
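For concreteness, these symmetric intervals can be computed with a small helper (my own names; again using [itex]z_{\alpha/2} \approx 1.96[/itex] for [itex]\alpha = 0.05[/itex]):

```python
Z = 1.96  # z_{alpha/2} for alpha = 0.05

def conf_interval(estimate, sigma):
    """Symmetric interval (estimate - Z*sigma, estimate + Z*sigma)."""
    return (estimate - Z * sigma, estimate + Z * sigma)

x_ci = conf_interval(1.0, 0.5)  # interval around X-hat
y_ci = conf_interval(3.5, 0.5)  # interval around Y-hat
print(x_ci, y_ci)
```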

However, if the two intervals do not overlap, we have either that:

[itex]\hat{Y}+z_{\alpha/2}*\sigma_{\hat{Y}} < \hat{X}-z_{\alpha/2}*\sigma_{\hat{X}} [/itex]

or we have that:

[itex]\hat{Y}-z_{\alpha/2}*\sigma_{\hat{Y}} > \hat{X}+z_{\alpha/2}*\sigma_{\hat{X}}[/itex]

Together these two inequalities give that we reject the hypothesis if:

[itex]\left|\frac{\hat{X}-\hat{Y}}{\sigma_{\hat{X}}+\sigma_{\hat{Y}}} \right|> z_{\alpha/2}[/itex]
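A small numerical check of this algebra (a pure-Python sketch; the function names are mine) confirms that the non-overlap condition and the combined inequality agree, away from the exact boundary where floating-point rounding could differ:

```python
import random

Z = 1.96  # z_{alpha/2} for alpha = 0.05

def intervals_disjoint(x, y, sx, sy):
    # CIs (x - Z*sx, x + Z*sx) and (y - Z*sy, y + Z*sy) fail to overlap
    return (y + Z * sy < x - Z * sx) or (y - Z * sy > x + Z * sx)

def combined_test(x, y, sx, sy):
    # the algebraically equivalent criterion derived above
    return abs(x - y) / (sx + sy) > Z

random.seed(0)
for _ in range(10000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    sx, sy = random.uniform(0.1, 2), random.uniform(0.1, 2)
    stat = abs(x - y) / (sx + sy)
    if abs(stat - Z) > 1e-9:  # skip draws at the rounding boundary
        assert intervals_disjoint(x, y, sx, sy) == combined_test(x, y, sx, sy)
print("non-overlap agrees with |X-Y|/(sx+sy) > z on all draws")
```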

Now my question is: what, then, is the justification for the argument "we can assume they are different if the two confidence intervals do not overlap"? In the original test we have [itex]\sqrt{\sigma^{2}_{\hat{X}}+\sigma^{2}_{\hat{Y}}}[/itex] in the denominator, and we have control over the significance level [itex]\alpha[/itex]. When we developed this new test, we only required that the two confidence intervals not overlap. But with algebra we have shown that this new test is the same as the old test with a new denominator: [itex]\sigma_{\hat{X}}+\sigma_{\hat{Y}}[/itex]. Is there anything else we can say about this new test? Is it better or worse than the old one? Could it have been derived by starting with the statistic [itex]\frac{\hat{X}-\hat{Y}}{\sigma_{\hat{X}}+\sigma_{\hat{Y}}}[/itex]? That statistic is probably(?) not standard normally distributed, so what do we know about this test?
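One way to probe this question empirically (a Monte Carlo sketch of my own, under [itex]H_0[/itex], with the assumed choices [itex]\alpha = 0.05[/itex] and equal unit variances) is to estimate the actual rejection rate of the non-overlap test. Since [itex]\sigma_{\hat{X}}+\sigma_{\hat{Y}} \geq \sqrt{\sigma^{2}_{\hat{X}}+\sigma^{2}_{\hat{Y}}}[/itex], one would expect the rate to come out below the nominal [itex]\alpha[/itex]:

```python
import random

Z = 1.96                     # nominal z_{alpha/2} for alpha = 0.05
SIGMA_X, SIGMA_Y = 1.0, 1.0  # assumed known standard deviations
N = 100_000

random.seed(1)
rejections = 0
for _ in range(N):
    # H0 is true: both estimators are centered at the same mean (0 here)
    x = random.gauss(0.0, SIGMA_X)
    y = random.gauss(0.0, SIGMA_Y)
    if abs(x - y) / (SIGMA_X + SIGMA_Y) > Z:
        rejections += 1

# With equal sigmas the effective critical value is
# Z * (sx + sy) / sqrt(sx**2 + sy**2) = 1.96 * sqrt(2) ~ 2.77,
# giving a true level of roughly 0.006 rather than the nominal 0.05.
print(rejections / N)
```

This suggests the non-overlap test is conservative: it rejects far less often than the nominal level would indicate.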

**Physics Forums - The Fusion of Science and Community**

# Have you heard about this kind of test of hypothesis
