Have you heard about this kind of test of hypothesis

In summary, the conversation discusses two different tests of the hypothesis that the expected values of two independent normally distributed random variables are equal. The first test compares the absolute value of the standardized difference between the two variables to a standard normal distribution, while the second constructs two confidence intervals and checks whether they overlap. Algebraically, the second test has the same form as the first but with a different denominator. However, the two tests are not interchangeable: rejecting with one does not guarantee the same decision with the other.
  • #1
riemann86
In one example I saw someone make an argument I hadn't seen before. They had two confidence intervals for two different statistics, and they said that since these two confidence intervals did not overlap, we could assume that the expected values were different. This made me think of hypothesis testing, and I am wondering about the justification behind the argument that if two confidence intervals do not overlap, we can assume the expected value of each is different.

We have two independent normally distributed random variables, [itex]\hat{X}[/itex] and [itex]\hat{Y}[/itex], with known variances [itex]\sigma^{2}_{\hat{X}}[/itex] and [itex]\sigma^{2}_{\hat{Y}}[/itex]. We also have [itex]E(\hat{X})=\mu_{X}[/itex] and [itex]E(\hat{Y})=\mu_{Y}[/itex]; however, we do not know [itex]\mu_{X}[/itex] and [itex]\mu_{Y}[/itex].

We now want to test the hypothesis that [itex]\mu_{X}[/itex] = [itex]\mu_{Y}[/itex].

The standard way of doing this is of course to reject the hypothesis if

[itex]\left|{\frac{\hat{X}-\hat{Y}}{\sqrt{\sigma^{2}_{\hat{X}}+\sigma^{2}_{\hat{Y}}} } }\right|>z_{\alpha/2}[/itex]

The alternative way of testing this was to construct two confidence intervals, and reject the hypothesis if the two intervals do not overlap.
The 2 intervals are:
[itex](\hat{X}-z_{\alpha/2}*\sigma_{\hat{X}}, \hat{X}+z_{\alpha/2}*\sigma_{\hat{X}})[/itex]
and [itex](\hat{Y}-z_{\alpha/2}*\sigma_{\hat{Y}}, \hat{Y}+z_{\alpha/2}*\sigma_{\hat{Y}})[/itex]

If the two intervals do not overlap, we have either that:
[itex]\hat{Y}+z_{\alpha/2}*\sigma_{\hat{Y}} < \hat{X}-z_{\alpha/2}*\sigma_{\hat{X}} [/itex]
or that:
[itex]\hat{Y}-z_{\alpha/2}*\sigma_{\hat{Y}} > \hat{X}+z_{\alpha/2}*\sigma_{\hat{X}}[/itex]

Together these two inequalities give that we reject the hypothesis if:
[itex]\left|\frac{\hat{X}-\hat{Y}}{\sigma_{\hat{X}}+\sigma_{\hat{Y}}} \right|> z_{\alpha/2}[/itex]

Now my question is: what, then, is the justification for the argument "we can assume that they are different if the two confidence intervals do not overlap"? In the original test we have [itex]\sqrt{\sigma^{2}_{\hat{X}}+\sigma^{2}_{\hat{Y}}}[/itex] in the denominator, and we have control of the significance level [itex]\alpha[/itex]. When we developed this new test, we just required that the two confidence intervals not overlap. But with algebra we have shown that this new test is the same as the old test, only with the new denominator [itex]\sigma_{\hat{X}}+\sigma_{\hat{Y}}[/itex]. Is there anything else we can say about this new test? Is it better or worse than the old one? Could it have been derived by starting with the statistic [itex]\frac{\hat{X}-\hat{Y}}{\sigma_{\hat{X}}+\sigma_{\hat{Y}}} [/itex]? That statistic is presumably not standard normally distributed, so what do we know about this test?
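One way to see what the non-overlap test actually does is to simulate both tests under a true null hypothesis. The sketch below uses made-up values for the known standard deviations (sx = 1, sy = 2) and only the Python standard library; it estimates how often each test rejects when [itex]\mu_{X}=\mu_{Y}[/itex]:

```python
import random
from statistics import NormalDist

# Monte Carlo comparison of the two tests under a true null hypothesis.
# sx, sy are illustrative made-up values for the known standard deviations.
random.seed(0)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
sx, sy = 1.0, 2.0
trials = 100_000

reject_z = reject_overlap = 0
for _ in range(trials):
    x = random.gauss(0, sx)   # under H0: mu_X = mu_Y = 0
    y = random.gauss(0, sy)
    d = abs(x - y)
    # usual z-test: denominator sqrt(sx^2 + sy^2)
    if d / (sx**2 + sy**2) ** 0.5 > z_crit:
        reject_z += 1
    # non-overlap test: denominator sx + sy
    if d / (sx + sy) > z_crit:
        reject_overlap += 1

print("z-test rejection rate:      ", reject_z / trials)        # close to alpha
print("overlap-test rejection rate:", reject_overlap / trials)  # well below alpha
```

The z-test rejects a true null at rate roughly [itex]\alpha[/itex], while the non-overlap test rejects far less often: its actual significance level is smaller than the nominal [itex]\alpha[/itex], i.e. it is a more conservative test.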
 
  • #2
riemann86 said:
What, then, is the justification for the argument "we can assume that they are different if the two confidence intervals do not overlap"?

The two tests are different tests.

[itex] S_2 = \left| \frac { \hat{X} - \hat{Y} } { \sqrt{ \sigma_x^2 + \sigma_y^2}}\right| \ge S_1= \left| \frac{ \hat{X} - \hat{Y} }{ \sigma_x + \sigma_y}\right| [/itex]

So it would be correct to say that if a particular observed value of [itex] S_1\ge z_{\alpha/2} [/itex] then there is no need to compute the value of [itex] S_2 [/itex] to see if it is [itex] \ge z_{\alpha/2} [/itex] also.

It would not be correct to say that an observed value of [itex] S_1 \le z_{\alpha/2} [/itex] lets us conclude that the null hypothesis would be accepted by the test that uses [itex] S_2 [/itex].
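The ordering [itex] S_2 \ge S_1 [/itex] holds for any data, because [itex] \sqrt{\sigma_x^2+\sigma_y^2} \le \sigma_x+\sigma_y [/itex]. A quick numerical check with hypothetical observed values:

```python
from math import hypot

# Hypothetical observed values and made-up known standard deviations.
x_hat, y_hat = 3.1, 0.4
sx, sy = 1.0, 2.0

s2 = abs(x_hat - y_hat) / hypot(sx, sy)   # usual z statistic, sqrt(sx^2 + sy^2)
s1 = abs(x_hat - y_hat) / (sx + sy)       # non-overlap statistic

# sqrt(sx^2 + sy^2) <= sx + sy always, hence S2 >= S1 for any data.
assert s2 >= s1
print(s1, s2)
```

So [itex] S_1 \ge z_{\alpha/2} [/itex] forces [itex] S_2 \ge z_{\alpha/2} [/itex], but not conversely.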
 
  • #3
Stephen Tashi said:
The two tests are different tests.

[itex] S_2 = \left| \frac { \hat{X} - \hat{Y} } { \sqrt{ \sigma_x^2 + \sigma_y^2}}\right| \ge S_1= \left| \frac{ \hat{X} - \hat{Y} }{ \sigma_x + \sigma_y}\right| [/itex]

So it would be correct to say that if a particular observed value of [itex] S_1\ge z_{\alpha/2} [/itex] then there is no need to compute the value of [itex] S_2 [/itex] to see if it is [itex] \ge z_{\alpha/2} [/itex] also.

It would not be correct to say that an observed value of [itex] S_1 \le z_{\alpha/2} [/itex] lets us conclude that the null hypothesis would be accepted by the test that uses [itex] S_2 [/itex].

Thank you very much, that is a very good point!
 

What is a test of hypothesis?

A test of hypothesis is a statistical method used to determine whether a proposed explanation for a phenomenon is supported by evidence or not. It involves formulating a null hypothesis and an alternative hypothesis, collecting and analyzing data, and drawing conclusions based on the results.

Why is a test of hypothesis important in science?

A test of hypothesis is important in science because it allows researchers to make evidence-based decisions and draw conclusions about their hypotheses. It also helps to control for chance and bias in data analysis, and provides a way to assess the validity of scientific claims.

What are the steps involved in a test of hypothesis?

The steps involved in a test of hypothesis include:

  1. Formulating a null hypothesis and alternative hypothesis
  2. Choosing an appropriate test statistic and level of significance
  3. Collecting and organizing data
  4. Calculating the test statistic and determining the p-value
  5. Comparing the p-value to the level of significance and making a decision
  6. Interpreting the results and drawing conclusions
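The steps above can be sketched for the two-mean z-test discussed in this thread, with made-up numbers (all estimates and standard deviations below are hypothetical):

```python
from statistics import NormalDist

# Steps 1-2: H0: mu_X = mu_Y vs. H1: mu_X != mu_Y, at significance level alpha.
alpha = 0.05
# Step 3: made-up observed estimates and their known standard deviations.
x_hat, y_hat = 5.3, 4.1
sx, sy = 0.5, 0.4
# Step 4: test statistic and two-sided p-value.
z = (x_hat - y_hat) / (sx**2 + sy**2) ** 0.5
p = 2 * (1 - NormalDist().cdf(abs(z)))
# Step 5: decision.
reject = p < alpha
# Step 6: interpret -- here p is just above 0.05, so H0 is not rejected.
print(z, p, reject)
```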

What is the difference between a one-tailed and two-tailed test of hypothesis?

In a one-tailed test of hypothesis, the alternative hypothesis specifies the direction of the effect (e.g. "the mean is greater than 50"). In a two-tailed test, the alternative hypothesis does not specify a direction of the effect (e.g. "the mean is not equal to 50"). The choice between a one-tailed and two-tailed test depends on the specific research question and the type of data being analyzed.
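The difference matters in practice: the same observed statistic can be significant one-tailed but not two-tailed. A small sketch with a hypothetical observed value z = 1.8:

```python
from statistics import NormalDist

z = 1.8   # hypothetical observed test statistic
p_one = 1 - NormalDist().cdf(z)             # H1: mean > 50 (one-tailed)
p_two = 2 * (1 - NormalDist().cdf(abs(z)))  # H1: mean != 50 (two-tailed)
print(p_one, p_two)
# At alpha = 0.05 the one-tailed test rejects (p_one < 0.05),
# but the two-tailed test does not (p_two > 0.05).
```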

What are some common misconceptions about tests of hypothesis?

Some common misconceptions about tests of hypothesis include:

  • Assuming that a p-value less than 0.05 automatically means the results are significant
  • Believing that a non-significant result means the null hypothesis is true
  • Thinking that a significant result means the alternative hypothesis is true with absolute certainty
  • Assuming that a larger sample size will always lead to a significant result
  • Confusing correlation with causation
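The first misconception is easy to demonstrate by simulation: when the null hypothesis is true, about 5% of studies will produce p < 0.05 by chance alone. A minimal sketch (all numbers hypothetical):

```python
import random
from statistics import NormalDist

# 20,000 simulated studies, each sampling n observations from N(50, 10).
# The null hypothesis H0: mu = 50 is TRUE in every one of them.
random.seed(1)
trials, n = 20_000, 100
significant = 0
for _ in range(trials):
    xbar = sum(random.gauss(50, 10) for _ in range(n)) / n
    z = (xbar - 50) / (10 / n**0.5)          # known-variance z statistic
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    if p < 0.05:
        significant += 1
print(significant / trials)   # about 0.05: "significant" findings by chance alone
```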
