Chi-squared test stat choosing a critical value

In summary, when solving problems that require a chi-squared table, the key is to choose the critical value from the correct row (degrees of freedom) and column (tail probability). Whether you read the 0.05 column or the 0.95 column depends on whether the table reports right-tail or left-tail probabilities, and on whether the rejection region for the specific problem lies in the left or right tail. The guiding principle is to make it hard on yourself: place the rejection region where only extreme values, the ones that would convince a skeptical person, support your conjecture.
  • #1
Vital
Hello.

I am doing problem sets on very basic stats topics. When solving problems that require the use of a chi-squared table, I stumbled upon an unexpected issue. I seem to be missing something important about how to choose the right critical value from the chi-squared table.

Below are two examples, and both have totally confused me. My questions are below these examples.

(1) During a 10-year period, the standard deviation of annual returns on a portfolio you are analyzing was 7 percent a year. You want to see whether this record is sufficient evidence to support the conclusion that the portfolio’s underlying variance of return was less than 395, the return variance of the portfolio’s benchmark.

Question: Identify the rejection point or points at the 0.05 significance level for the hypothesis H0: σ² ≥ 395 versus Ha: σ² < 395, where 395 is the hypothesized value of the variance, σ².

Solution
from the book:
The book provides a table named "Values of chi-squared (degrees of freedom, level of significance), Probability in Right Tail." I am attaching a picture of the table.

The rejection point is found in the row for 9 degrees of freedom, under the 0.95 column (95 percent of the probability lies above the value). It is 3.325. We will reject the null hypothesis if we find that χ² < 3.325.
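As a sanity check on the table lookup, SciPy's inverse CDF reproduces this entry (a quick sketch, assuming SciPy is installed; `ppf` takes the cumulative, i.e. left-tail, probability):

```python
from scipy.stats import chi2

# df = n - 1 = 9.  The "0.95 column" of a right-tail table means
# 95% of the probability lies ABOVE the value, so the cumulative
# (left-tail) probability of that value is 0.05.
critical = chi2.ppf(0.05, df=9)  # inverse CDF at 0.05
print(round(critical, 3))  # 3.325
```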

(2)

For example, consider a goodness-of-fit test for a twelve-sided die. Our null hypothesis is that all sides are equally likely to be rolled, so each side has a probability of 1/12 of being rolled. Since there are 12 outcomes, there are 12 − 1 = 11 degrees of freedom. This means that we will use the row marked 11 for our calculations.
A goodness-of-fit test is a one-tailed test, and the tail that we use is the right tail. Suppose that the level of significance is 0.05 = 5%. This is the probability in the right tail of the distribution. Our table is set up for probability in the left tail, so the area to the left of our critical value should be 1 − 0.05 = 0.95. This means that we use the column corresponding to 0.95 and row 11 to give a critical value of 19.675.
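This lookup can also be checked numerically (assuming SciPy); since `chi2.ppf` takes the cumulative (left-tail) probability, the right-tail 5% point is queried at 0.95:

```python
from scipy.stats import chi2

# df = 12 - 1 = 11.  A right-tail area of 0.05 corresponds to a
# cumulative (left-tail) probability of 1 - 0.05 = 0.95.
critical = chi2.ppf(0.95, df=11)
print(round(critical, 3))  # 19.675
```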

My questions:
Doing a few problems, I came to the understanding that if, for example, we take a 5% significance level, it means that we have a critical value X, and the 5% of the distribution above this critical value X is where we would reject our null hypothesis, because it would mean that our actual observations do not fit into the 95% of the distribution below that critical value.
So the first example "looks" at a table set up for the right tail, and the second "looks" at a table set up for the left tail, but both use the 5% significance level. This is so confusing.
If we take a 5% significance level for the first problem, then why do we take the critical value from the column for 95%? I thought, as I explained above, that I have to look at the value in the 0.05 column (given certain degrees of freedom), meaning that if my computed test statistic is lower than that critical value, then I reject the null; but if it is not, then I do not reject the null, and it would mean that the data are actually in that right 5% segment.
I am completely confused by these two problems and by tables that show probabilities in the right or left tail of the chi-squared distribution.

EDIT:
I have just found one more example, which uses the same 5% level of significance, but chooses the critical value in the column with 0.05, not in the column with 0.95.
Here is the link to this third example.

And here is another link, to a YouTube video, where the presenter explains that when you are looking for a critical value at, say, 5% with, say, 5 degrees of freedom, you have to look in the column with 0.05 in the table to find the critical value of 11.070, and not in the column with 0.95.
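For what it's worth, the two conventions describe the same number and can be reconciled numerically (a quick sketch, assuming SciPy). A table "set up for probability in the right tail" labels this value with 0.05, while a cumulative (left-tail) table labels the very same value with 0.95:

```python
from scipy.stats import chi2

df, alpha = 5, 0.05
right_tail = chi2.isf(alpha, df)      # inverse survival function: right-tail area 0.05
left_tail = chi2.ppf(1 - alpha, df)   # inverse CDF: left-tail area 0.95
print(round(right_tail, 3), round(left_tail, 3))  # 11.07 11.07
```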
Please, help) Thank you very much.
 

Attachments

  • Screen Shot 2019-01-01 at 12.12.56.png (54.5 KB)
  • #2
You are looking for a test that will support some conjecture that you have and you make it hard on yourself to get that support. You always make it hard on yourself so that you can convince skeptical people.

In the first example, your conjecture is that the variance is small. You make it hard on yourself by insisting that the experimental variance be very small. That is a small Chi-squared value. Small values are down in the 95% column or lower.

In the second example, your conjecture is that the die is not fair. You make it hard on yourself by insisting that the experimental values be far from the expected fair-die values. That is a large Chi-squared value. Large values are up in the 5% column or higher.
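The two rejection regions described above can be put side by side (a minimal sketch, assuming SciPy; the cutoffs match the table entries quoted earlier in the thread):

```python
from scipy.stats import chi2

alpha = 0.05

# Example 1: conjecture "variance is small" -> reject H0 only for
# SMALL chi-squared values, so the rejection region is the left tail.
left_cut = chi2.ppf(alpha, df=9)        # reject if statistic < 3.325

# Example 2: conjecture "die is unfair" -> reject H0 only for
# LARGE chi-squared values, so the rejection region is the right tail.
right_cut = chi2.ppf(1 - alpha, df=11)  # reject if statistic > 19.675

print(round(left_cut, 3), round(right_cut, 3))  # 3.325 19.675
```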
 
  • #3
I am sorry, but I still don't understand how it works and how to choose the correct value from the table, given all these different approaches, which seem to contradict each other across the 4 examples I showed.

I now understand a bit more about the meaning of the chi-squared test used in the first and second examples, but it still doesn't truly help me see why and how I have to choose the 0.95 column in the first problem and the 0.05 column in the second, while both questions give the same 5% level of significance.
- I understand that in the first example we compare the variance of the set of observations to our hypothesized variance to see whether the ratio is small or big. If it is small, then the difference is not significant, and we cannot reject the null. But when the ratio is big (meaning the observed variance is many times the hypothesized one), we reject the null. One way to check is, upon computing the test statistic ((n−1)s²/σ²), to find that value in the table and see which column corresponds to it.

But my book tells me that before computing the test statistic I have to find the critical value in the table, and later compare my computed test statistic to that critical value. And here is where my confusion comes in: which approach is the correct one out of all 4 examples, and how do I choose? I am truly missing not only the intuition behind it, but also the method itself. If the problem tells me "find the critical value at the 5% level of significance", it means to me "go to the 0.05 column and grab that value (of course, with the right degrees of freedom)". To me a 5% level of significance means that 95% of values (the level of confidence) to the left of that threshold fall within the area of our null hypothesis, and 5% (which is 1 minus the level of confidence) are to the right of the critical value, i.e. the area where we would reject the null.

What a conundrum)
 
  • #4
Decide what your conjecture is. Determine which direction (small or large) will be needed for the Chi-square value of a data sample to support that conjecture. Set a level in that direction with a small enough probability that might impress a skeptical person.
 
  • #5
Vital said:
Hello.

I am doing problem sets on very basic stats topics. When solving problems that require the use of a chi-squared table, I stumbled upon an unexpected issue. I seem to be missing something important about how to choose the right critical value from the chi-squared table.

Please, help) Thank you very much.

I am going to try to explain the general concepts involved, but without having a specific example in mind. So, I may not be talking specifically about ##\chi^2## or the "0.05" level. The explanation is a bit lengthy and may give stuff you already know, but in that case you can just ignore some material.

In general, we may have some non-random quantity ##\theta## whose value we are trying to assess or make judgements about. (Is it plausibly equal to a certain value? Is it too large? Is it too small? etc.) Of course, if we know exactly what is the true value of ##\theta## we are done, but that is something we do not, in fact, know. All we have is an estimate of ##\theta## based on some measurements and statistical calculations. If we take a sample of ##n## measurements ##x_1, x_2, \ldots, x_n## of some quantity ##X## we produce an estimate of ##\theta## based on that sample, say ##\theta_{\text{est.}} = T(x_1, x_2, \ldots, x_n).## The trouble is that because of various random errors, the estimated value of ##\theta_{\text{est.}} = T## will very likely differ from the exact, unknown, value (say ##\theta_0##), and the underlying issue is whether the observed difference is really due just to "randomness", or whether it is systematic---that is, due to our actually having the wrong value. So, we try to test for that.

Typically, statistical tests either consist of "confidence-interval" assessments or "hypothesis tests". These are closely related, but let's look at hypothesis testing, since that is the basis of your question. Typically, we want to test whether ##\theta = \theta_0## or whether ##\theta## is not equal to ##\theta_0##, expressed either as
(i) ##\theta = \theta_1## for some given ##\theta_1## that is not equal to ##\theta_0##; or
(ii) ##\theta > \theta_0##; or
(iii) ##\theta < \theta_0##.
No matter what we do we will make a mistake every once in a while, just because of random errors throwing off the test. However, we would like to either guard against making certain errors too often, or at least assess the chance that we have made an error (even if not guarded against as much as we would like).

There are two kinds of errors:
(1) Rejecting a true hypothesis
(2) Accepting a false hypothesis.

In the context of a court case, an error of the first type corresponds to convicting an innocent man, while the second type corresponds to letting a guilty man go free. Typically, we worry a lot more about making the first type of error, and not so much about the second type. That is quite typical in statistical testing as well. The first type of error is called a "Type I error", and we typically try to keep the probability of making it low (perhaps less than 10%, 5% or 2%, sometimes even smaller). The second type of error is called a "Type II error", and we often do not worry about it too much. However, if we can afford to do enough tests we can also keep the probability of a Type II error small by manipulating the sample size ##n##. However, never mind that for now---just know that it can be done.

So, a typical hypothesis-testing situation might consist of the "null" hypothesis "##H_0: \theta = \theta_0##" (or maybe ##\theta \geq \theta_0##), vs. the "alternative" hypothesis "##H_1: \theta < \theta_0##". We are estimating ##\theta## using some estimator ##T## that depends on our sample, and whose observed value is ##t_O = T_{\text{obs.}}## Note that ##t_O## is not random; it is a number, one of the many possible observed or computed values of the random variable ##T##.

So, based on the observed ##t_O##, what can we infer about ##\theta##? Assuming that ##\theta = \theta_0## is actually true, we would nevertheless not be surprised to see a value of ##t_O## a bit less than ##\theta_0##, just due to randomness. The issue is: just how much less than ##\theta_0## can it be before we would start to suspect that ##\theta## really and truly is ##< \theta_0##? So, if we set a critical value ##t_\alpha## and reject ##H_0## whenever ##t_O < t_\alpha##, the type-I error probability would be ##P_I = P(T < t_\alpha \,|\, H_0 \; \text{is true} ) ##, and we want to keep that below a chosen level ##\alpha##. For example, if we "test at the 5% level" we are typically asking to have ##P_I < 0.05## (or maybe ##P_I \leq 0.05##). In a typical statistical table for ##T## we may be given information about the cumulative distribution of ##T## in the form that ##t_\alpha = \text{table entry}(\alpha)## is the value giving ## P(T \leq t_\alpha)= \alpha ##. In that case we would check to see if ##t_O < t_\alpha##, because if that happens, our sampled value of ##T## is too small to be plausible when ##\theta = \theta_0##: it is much more believable to us that the true value of ##\theta## is ## < \theta_0##, making a smaller value of ##T## more likely.

So, basically, the way to think about the right kind of test is to ask yourself: "is ##t_O## too small for ##H_0## to be readily believable, making ##H_1: \theta < \theta_0## more likely?" (Of course, I am here assuming that ##T## is an increasing function of ##\theta##, so larger values of ##\theta## lead to statistically larger values of ##T##. In the opposite case, you would just swap "large" and "small" in the statement of the test.)

If (unlike the above) your hypotheses are ##H_0: \theta = \theta_0## vs. ##H_1: \theta > \theta_0## (and if ##T## increases as ##\theta## increases), you would reject ##H_0## when ##t_O## is too large, say ##t_O > s_\alpha## for some critical value ##s_\alpha##; the type-I error probability is then ##P_I = P( T > s_\alpha \,|\, H_0 \; \text{is true}),## and we need to look at right-tail probabilities of ##T##. However, if our table just looks at left-tail probabilities of the form ##P(T \leq v)##, then we need to use the fact that
$$P (T > t_O) = 1 - P(T \leq t_O),$$
so a critical value ##s_\alpha ## giving ##P(T > s_\alpha) = \alpha## would be given by ##s_\alpha = \text{table entry}(1-\alpha)##, because
$$P(T > s_\alpha) = 1 - P(T < s_\alpha) = 1 - (1 - \alpha) = \alpha.$$ We would reject ##H_0## if ##t_O > s_\alpha##, because our observed value of ##T## is too large to be believable if ##H_0## were true.
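The identity ##s_\alpha = \text{table entry}(1-\alpha)## is easy to verify numerically; a minimal check, assuming SciPy:

```python
from scipy.stats import chi2

df, alpha = 11, 0.05
s_alpha = chi2.ppf(1 - alpha, df)  # "table entry" at 1 - alpha
# The right-tail probability of s_alpha should come back as alpha:
print(round(chi2.sf(s_alpha, df), 6))  # 0.05
```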

So, to avoid confusion, try to avoid using "canned formulas and prescriptions". Instead, think about whether you want to guard against values that are too small or too large, and then think about what that would mean if you had a graph of the cumulative distribution in front of you to stare at. Do you want a left-tail area or a right-tail area? Then think about how you would get that information from the limited table available to you. It should also be noted that there are numerous free online tools: some take inputs such as degrees of freedom and a value of ##\chi^2## and return a left-tail probability; others take degrees of freedom and a probability ##\alpha## and return the ##\chi^2## value having that left-tail probability.
 
  • #6
Vital said:
I am doing problem sets on very basic stat topics.
Your examples involve hypothesis testing. Begin by understanding the general framework of hypothesis testing. This is explained in detail by @Ray Vickson.

A simplified version goes like this.

A hypothesis test involves

1) A statistic
2) A null hypothesis,
3) The distribution of the statistic implied by the null hypothesis
4) A rejection region for the null hypothesis.

When solving problems that require the use of chi-squared table, I stumbled upon an unexpected issue. I seem to miss something important about how to correctly choose the right critical value using the chi-squared table.

You are expecting to find a pattern based on 3) the distribution involved in the test. However, different statistics can have the same distribution. To understand what procedure to use, you must consider 1) what statistic is involved. In your first example, you are to consider a statistic that involves the ratio of sample variance to hypothesized population variance. In other examples, you are to consider a statistic that measures "goodness of fit" between sample instances and expected values of those instances according to the null hypothesis.

When working a hypothesis testing problem, first consider what statistic will be used. Don't think of all problems involving the chi-square distribution as being the same sort of problem.
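To make the first kind of statistic concrete, example (1)'s variance-ratio test can be worked through end to end (a sketch using the thread's numbers, n = 10, s = 7, hypothesized variance 395; SciPy is assumed for the table lookup):

```python
from scipy.stats import chi2

n, s, sigma0_sq = 10, 7.0, 395.0
stat = (n - 1) * s**2 / sigma0_sq    # (n-1)s^2 / sigma_0^2, about 1.116
critical = chi2.ppf(0.05, df=n - 1)  # 3.325, left-tail rejection point

# A small statistic supports "variance is small": reject H0 here.
print(stat < critical)  # True
```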
 

1. What is the purpose of a Chi-squared test?

The Chi-squared test is a statistical method used to determine whether there is a significant association between two categorical variables. It is commonly used in research to analyze data and determine if there is a relationship between two variables.

2. How is the Chi-squared test statistic calculated?

The Chi-squared test statistic is calculated by summing, over all categories, the squared difference between the observed and expected frequencies divided by the expected frequency: χ² = Σ (O − E)²/E. This calculation results in a single number that is compared to a critical value to determine the significance of the relationship between the variables.
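As a minimal illustration with made-up counts (a hypothetical fair six-sided die rolled 60 times; assuming SciPy):

```python
from scipy.stats import chisquare

# Hypothetical observed rolls; under a fair die, 10 per face is expected.
observed = [8, 12, 9, 11, 10, 10]
stat, p = chisquare(observed)  # expected frequencies default to uniform
# stat = (4 + 4 + 1 + 1 + 0 + 0) / 10 = 1.0
print(round(stat, 2))  # 1.0
```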

3. What is the significance level for choosing a critical value in a Chi-squared test?

The significance level, also known as alpha, is the probability of rejecting the null hypothesis when it is actually true. This value is typically set at 0.05 or 5%, meaning that if the calculated Chi-squared test statistic is greater than the critical value at 0.05, the null hypothesis can be rejected and it can be concluded that there is a significant relationship between the variables.

4. How is the critical value chosen in a Chi-squared test?

The critical value is chosen based on the significance level and the degrees of freedom in the Chi-squared test. For a goodness-of-fit test the degrees of freedom are the number of categories minus 1; for a test of independence on an r × c contingency table they are (r − 1)(c − 1). The critical value can then be found in a Chi-squared table using the degrees of freedom and the desired significance level.

5. Can the Chi-squared test be used for more than two variables?

The standard Chi-squared test for independence compares two categorical variables via a contingency table of their joint counts, and each variable may have any number of categories. Analyses involving more than two categorical variables are usually handled with extensions such as log-linear models, but the basic mechanics of comparing observed and expected counts are the same.
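A minimal contingency-table sketch with made-up counts (assuming SciPy's `chi2_contingency`):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table (rows: group, columns: response category).
table = [[20, 15, 25],
         [30, 20, 10]]
stat, p, dof, expected = chi2_contingency(table)
print(dof)  # (2 - 1) * (3 - 1) = 2
```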
