Showing Rejection Region Equality with Fisher Distribution

In summary: the problem concerns testing the equality of two variances from normal populations. The rejection region for this test can be written in two equivalent forms, and that equivalence shows that, under the null hypothesis, the probability that the ratio of the larger sample variance to the smaller one exceeds the appropriate critical value equals the significance level ##\alpha##. This gives an alternative method for testing the equality of two variances.
  • #1
transmini
Homework Statement

For reference:
Book: Mathematical Statistics with Applications, 7th Ed., by Wackerly, Mendenhall, and Scheaffer.
Problem: 10.81

From two normal populations with respective variances ##\sigma_1^2## and ##\sigma_2^2##, we observe independent sample variances ##S_1^2## and ##S_2^2##, with corresponding degrees of freedom ##\nu_1=n_1-1## and ##\nu_2=n_2-1##. We wish to test ##H_0: \sigma_1^2=\sigma_2^2## versus ##H_a: \sigma_1^2 \neq \sigma_2^2##.

(a) Show that the rejection region given by
$$\{F > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space F < (F_{\nu_1, \space \alpha/2}^{\nu_2})^{-1}\}$$
where ##F=S_1^2/S_2^2##, is the same as the rejection region given by
$$\{S_1^2/S_2^2 > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space S_2^2/S_1^2 > F_{\nu_1, \space \alpha/2}^{\nu_2}\}.$$

(b) Let ##S_L^2## denote the larger of ##S_1^2## and ##S_2^2## and let ##S_S^2## denote the smaller of ##S_1^2## and ##S_2^2##. Let ##\nu_L## and ##\nu_S## denote the degrees of freedom associated with ##S_L^2## and ##S_S^2##, respectively. Use part (a) to show that, under ##H_0##,
$$P(S_L^2/S_S^2 > F_{\nu_S, \space \alpha/2}^{\nu_L})=\alpha.$$
Note that this gives an equivalent method for testing the equality of two variances.

Homework Equations


N/A

The Attempt at a Solution

(a) $$\{F > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space F < (F_{\nu_1, \space \alpha/2}^{\nu_2})^{-1}\}$$
$$ = \{S_1^2/S_2^2 > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space S_1^2/S_2^2 < (F_{\nu_1, \space \alpha/2}^{\nu_2})^{-1}\}$$
$$ = \{S_1^2/S_2^2 > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space (S_1^2/S_2^2)^{-1} > F_{\nu_1, \space \alpha/2}^{\nu_2}\}$$
$$ = \{S_1^2/S_2^2 > F_{\nu_2, \space \alpha/2}^{\nu_1} \space \text{or} \space S_2^2/S_1^2 > F_{\nu_1, \space \alpha/2}^{\nu_2}\}$$
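The reciprocal step above can be sanity-checked numerically. This is just my own sketch (scipy assumed available; the degrees of freedom ##\nu_1 = 9##, ##\nu_2 = 14## are made up for illustration):

```python
# Numerical check of the reciprocal property behind part (a).
# nu1 = 9, nu2 = 14 are assumed values, not from the problem.
from scipy.stats import f

alpha = 0.05
nu1, nu2 = 9, 14

# (F_{nu1, alpha/2}^{nu2})^{-1}: reciprocal of the upper alpha/2 point of F(nu2, nu1)
recip = 1 / f.ppf(1 - alpha / 2, nu2, nu1)

# If X ~ F(nu1, nu2), then 1/X ~ F(nu2, nu1), so this reciprocal is exactly
# the lower alpha/2 quantile of F(nu1, nu2): "F below the reciprocal value"
# and "1/F above the swapped-df critical value" are the same event.
lower = f.ppf(alpha / 2, nu1, nu2)
print(recip, lower)
```

The two printed values agree to numerical precision, which is why the two forms of the rejection region describe the same set.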

(b) I have no idea on this one, as I'm not entirely certain how the statement could be true in the first place. Suppose ##S_1^2 = S_L^2## and ##S_2^2 = S_S^2##. Then the problem is asking to show ##P(S_1^2/S_2^2 > F_{\nu_2, \space \alpha/2}^{\nu_1}) = \alpha##. But this is a tail probability of the Fisher distribution, and ##F_{\alpha/2}## is defined as the value of ##F## whose upper-tail probability is ##\frac{\alpha}{2}##, so how can ##P(F > F_{\nu_2, \space \alpha/2}^{\nu_1}) = \alpha## when by definition it equals ##\frac{\alpha}{2}##?
 
  • #2
Wait, I THINK I may have figured it out. Assuming ##S_1^2 = S_L^2## and ##S_2^2 = S_S^2##, we have:
$$P(F \in \text{Rejection Region}) = \alpha$$
$$P(S_L^2/S_S^2 > F_{\nu_S, \space \alpha/2}^{\nu_L} \space \text{or} \space S_S^2/S_L^2 > F_{\nu_L, \space \alpha/2}^{\nu_S}) = \alpha$$
$$P(S_L^2/S_S^2 > F_{\nu_S, \space \alpha/2}^{\nu_L})+P(S_S^2/S_L^2 > F_{\nu_L, \space \alpha/2}^{\nu_S}) = \alpha$$ since the two events are mutually exclusive
$$P(S_L^2/S_S^2 > F_{\nu_S, \space \alpha/2}^{\nu_L})+0 = \alpha$$ since ##\frac{S_S^2}{S_L^2} \leq 1## while the upper ##\alpha/2## critical values are greater than 1, so the second event is impossible
$$P(S_L^2/S_S^2 > F_{\nu_S, \space \alpha/2}^{\nu_L})= \alpha$$
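One way to convince yourself this conclusion is right is a quick Monte Carlo simulation under ##H_0##. This is my own sketch (numpy/scipy assumed available; the sample sizes ##n_1 = 10##, ##n_2 = 15## are made up):

```python
# Monte Carlo check of part (b): under H0, the larger-over-smaller variance
# ratio exceeds the upper alpha/2 F critical value (with the matching
# degrees of freedom) about alpha of the time.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(0)
alpha = 0.05
n1, n2 = 10, 15                # assumed sample sizes
nu1, nu2 = n1 - 1, n2 - 1
trials = 100_000

# Sample variances from standard normal populations (H0: equal variances)
s1 = rng.standard_normal((trials, n1)).var(axis=1, ddof=1)
s2 = rng.standard_normal((trials, n2)).var(axis=1, ddof=1)

big_is_1 = s1 > s2
ratio = np.where(big_is_1, s1 / s2, s2 / s1)   # larger over smaller
# Critical value uses the df of the larger variance as the numerator df
crit = np.where(big_is_1,
                f.ppf(1 - alpha / 2, nu1, nu2),
                f.ppf(1 - alpha / 2, nu2, nu1))

reject_rate = (ratio > crit).mean()
print(reject_rate)  # close to alpha = 0.05
```

The rejection rate comes out near 0.05, not 0.025, matching the claim that the larger-over-smaller ratio collects both tails of the original statistic.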

Is this right? And if so, could someone give a more intuitive explanation? It still feels odd that each tail area is ##\alpha/2## but the probability of landing in the rejection region is ##\alpha##, even though the larger-over-smaller ratio can only land in one of the two tails.
 

Related to Showing Rejection Region Equality with Fisher Distribution

1. What is a rejection region?

A rejection region, also known as a critical region, is a range of values in a statistical test that is used to determine whether or not to reject the null hypothesis. If the test statistic falls within this region, it is considered significant and the null hypothesis is rejected.

2. How is a rejection region determined?

A rejection region is determined by setting a significance level, usually denoted as α, which represents the probability of making a Type I error (rejecting the null hypothesis when it is actually true). The boundaries of the rejection region are then calculated using the chosen significance level and the distribution of the test statistic.
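As a concrete illustration of the answer above, the boundaries for a two-sided variance-ratio test are just tail quantiles of the F-distribution. A minimal sketch (scipy assumed; the degrees of freedom are made up):

```python
# Rejection region boundaries for a two-sided F-test at level alpha.
# nu1 = 9, nu2 = 14 are assumed degrees of freedom for illustration.
from scipy.stats import f

alpha = 0.05
nu1, nu2 = 9, 14

lower = f.ppf(alpha / 2, nu1, nu2)       # lower alpha/2 quantile
upper = f.ppf(1 - alpha / 2, nu1, nu2)   # upper alpha/2 quantile
# Reject H0 if F < lower or F > upper; each tail carries probability alpha/2
print(lower, upper)
```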

3. What is the Fisher distribution?

The Fisher distribution, also known as the F-distribution, is a probability distribution that is commonly used in statistical tests to analyze the ratio of two sample variances. It is named after its creator, Sir Ronald Fisher, and is a continuous distribution with two parameters: the degrees of freedom for the numerator and denominator of the ratio.

4. How is the Fisher distribution used to show rejection region equality?

The equality of the two rejection regions follows from the reciprocal property of the F-distribution: if ##F## has an F-distribution with ##\nu_1## numerator and ##\nu_2## denominator degrees of freedom, then ##1/F## has an F-distribution with the degrees of freedom swapped. Consequently, the condition ##F < (F_{\nu_1, \space \alpha/2}^{\nu_2})^{-1}## is equivalent to ##1/F > F_{\nu_1, \space \alpha/2}^{\nu_2}##, so the two forms of the rejection region describe exactly the same set of outcomes.

5. What are the advantages of using the Fisher distribution in statistical testing?

The Fisher distribution is useful because many common test statistics, ratios of independent scaled chi-square variables, follow it exactly when the underlying populations are normal. It underlies ANOVA, which compares means across multiple groups, and it allows exact critical values and p-values to be computed for variance-ratio tests. Note, however, that the F-test for equality of variances assumes normality and is known to be sensitive to departures from it.
