
Some questions about hypothesis testing...

by Artusartos
Tags: hypothesis, testing
Artusartos
#1
Dec1-12, 08:36 AM
P: 247
I think I'm a bit confused about what [tex]\alpha[/tex] is. For the definition that is given, I don't understand what theta really means when they say [tex]\alpha = \max_{\theta \in \omega_0} P_{\theta} [(X_1, \dots , X_n) \in C][/tex]. It just isn't really clear in my head, so is it ok if anybody explains or gives an example of this?

(my question is about the second attachment)

Thanks in advance
Attached Thumbnails: 20121201_075858[1].jpg, 20121201_075910[1].jpg
Stephen Tashi
#2
Dec1-12, 12:46 PM
Sci Advisor
P: 3,256
Perhaps the example of a "one tailed" test is the best intuitive explanation for that. Suppose we have a coin and our "null hypothesis" is "the coin is not biased toward heads". This hypothesis is different from the hypothesis "the coin is fair", because "not biased toward heads" would include the cases where the probability of the coin landing heads is 0.0 or 0.01 or 0.499, etc.

Hypothesis testing is a procedure where you define some feature of experimental data as "the test statistic" and define some "acceptance region" for that feature. If the feature of the particular data that you observe is outside the "acceptance region", you "reject" the null hypothesis. (Hypothesis testing isn't a mathematical proof that the null hypothesis is true or that it is false, and it doesn't compute the probability that the null hypothesis is true or false. It is simply a procedure.)

To apply hypothesis testing to the above example, you must pick a feature of the data. Let's say the data is 100 independent flips of the coin. Let the feature be the total number of heads that occurred. Let's say the "acceptance region" is "80 or fewer flips of the coin produced heads".

The intuitive idea of "alpha" is to answer the question "What is the probability that I will reject the null hypothesis when it is actually true?". In this example, the question is "What is the probability that the observed number of heads will be more than 80 when the coin is actually not biased toward heads?". However, we can't compute a probability for this event, because the statement that the coin is "not biased toward heads" doesn't give us a specific probability to compute with. We don't know whether to assume that the probability of a head is 0.0 or 0.01 or 0.499, etc. Intuitively, if we assume the probability of heads is 0.5 then we are letting the coin be "as prone as possible to produce a head, without actually being biased toward producing a head". Assuming the probability of a head is 0.5 gives us a specific probability to use, and it captures the idea of "the situation, consistent with the null hypothesis, in which the test is most likely to reject falsely".
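
Here is a quick numerical illustration of that computation (just a sketch in Python using scipy.stats; the n = 100 and the 80-heads cutoff are the numbers from the example above):

[code]
from scipy.stats import binom

n, k = 100, 80   # 100 flips; acceptance region is "80 or fewer heads"
p = 0.5          # the boundary case of "not biased toward heads"

# Probability of rejecting the null hypothesis (seeing more than 80 heads)
# when p = 0.5. binom.sf(k, n, p) is the survival function P(S > k).
alpha = binom.sf(k, n, p)
print(alpha)     # about 1.3e-10, so this cutoff makes an extremely strict test
[/code]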

Your book's definition is more sophisticated than the definition found in many introductory texts. Many texts define alpha as "the probability of rejecting the null hypothesis when it is true". Your text defines it as "the maximum of the probabilities of rejecting the null hypothesis, taken over all the possible ways the null hypothesis can be true". That is the more general definition of alpha.
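
To see that maximization in action for the coin example, here is a small sketch (the grid of p values is just for illustration): since P(S > 80) increases with p, the maximum over the null set p ≤ 0.5 sits at the boundary p = 0.5.

[code]
import numpy as np
from scipy.stats import binom

n, k = 100, 80
# "Not biased toward heads" is the whole set of values p <= 0.5, so alpha
# is the maximum rejection probability taken over that set.
p_grid = np.linspace(0.0, 0.5, 501)
rejection_probs = binom.sf(k, n, p_grid)   # P(S > k) for each p in the null set

print(p_grid[rejection_probs.argmax()])    # 0.5 -- the boundary of the null set
print(rejection_probs.max())               # the same alpha as computed above
[/code]
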
Artusartos
#3
Dec1-12, 03:29 PM
P: 247
Quote by Stephen Tashi (post #2, quoted in full)
Thank you so much for your help, but I think I'm a bit confused again...

There is an example in my textbook (which I attached)...

It says that [tex]\alpha = P_{H_0} [S \leq k][/tex] and [tex]\alpha = P_{p_0} [S \leq k][/tex].

I have two questions:

1) In my previous attachments, the textbook defined alpha as the maximum of [tex]P_{\theta} [(X_1, ... , X_n) \in C][/tex] over [tex]\theta \in \omega_0[/tex]. So in this example, are they saying that [tex]\theta = H_0[/tex] when they write [tex]\alpha = P_{H_0} [S \leq k][/tex]?

2) How did [tex]H_0[/tex] turn into [tex]p_0[/tex] when they wrote [tex]P_{p_0} [S \leq k][/tex] instead of [tex]P_{H_0} [S \leq k][/tex]?

Thanks in advance
Attached Thumbnails: 20121201_160139.jpg

Stephen Tashi
#4
Dec1-12, 07:32 PM
Sci Advisor
P: 3,256

Quote by Artusartos:
1) So in this example, are they saying that [tex]\theta = H_0[/tex] when they are saying [tex]\alpha = P_{H_0} [S \leq k][/tex]?
The null hypothesis [itex] H_0 [/itex] is a statement, not a number. So it wouldn't make sense to say [itex] \theta = H_0 [/itex]. As I understand their notation, [itex]P_{H_0}[S \leq k ][/itex] means "the probability that S is less than or equal to k under the assumption that [itex] H_0 [/itex] is true", and [itex] P_{p_0}[ S \leq k ] [/itex] means "the probability that S is less than or equal to k under the assumption that the probability of success is [itex] p_0 [/itex]". So both expressions refer to the same probability. (Your text didn't do a good job of defining those notations.)

In that example it is not necessary to speak of the maximum probability computed over the set of all [itex] \theta [/itex]. The null hypothesis in the example is stated as [itex] \theta = p = p_0 [/itex], so the null hypothesis only deals with a single value of [itex] \theta [/itex].

If they had chosen to state the null hypothesis as [itex] \theta = p \ge p_0 [/itex] then we would have done the same computation for [itex] \alpha [/itex] as the example did, since [itex] p = p_0 [/itex] is the value of [itex] p [/itex] that maximizes the probability that [itex] S \leq k [/itex] among all the possible values of [itex] p [/itex] that are allowed when the null hypothesis is true.
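
A quick numerical check of that claim, with illustrative numbers standing in for the example's actual n, k, and [itex] p_0 [/itex]: since [itex] P(S \leq k) [/itex] shrinks as p grows, the maximum over [itex] p \ge p_0 [/itex] is attained at [itex] p = p_0 [/itex].

[code]
from scipy.stats import binom

# Illustrative numbers standing in for the example's actual n, k, and p0.
n, k, p0 = 20, 2, 0.3

# Under the composite null hypothesis p >= p0, alpha would be the maximum
# of P(S <= k) over p >= p0. The cdf is decreasing in p, so that maximum
# is attained at the boundary p = p0.
for p in [p0, 0.4, 0.5, 0.7]:
    print(p, binom.cdf(k, n, p))   # the printed probabilities decrease in p
[/code]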

It's an interesting question whether the null hypothesis in the example should be "The new treatment has the same effectiveness as the old treatment" or whether it makes more sense to make it say "The new treatment is no more effective than the old treatment". Since the example proposes a "one tailed" acceptance region, I think it makes more sense to phrase the null hypothesis the second way.

Trying to prove whether a particular type of acceptance region (one tailed, two tailed, or even a bunch of isolated intervals) is "best" involves defining what "best" means. The only way I know to approach that topic in frequentist statistics is to compare the "power" of tests that use different acceptance regions. The power of a test is defined by a function, not by a single number, so comparing the power of two tests is not straightforward either.
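
To illustrate, here is a sketch of a power function for the coin test (the cutoff k here is just illustrative): the power of the test "reject when S > k" is the whole curve [itex] p \mapsto P_p(S > k) [/itex], so comparing two tests means comparing two such curves, not two numbers.

[code]
from scipy.stats import binom

n, k = 100, 60   # an illustrative one tailed test: reject when S > k

# The power function maps each possible value of p to the probability of
# rejecting: power(p) = P_p(S > k). It is a curve, not a single number.
for p in (0.5, 0.6, 0.7):
    print(p, binom.sf(k, n, p))   # power rises as p moves away from the null
[/code]
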
Artusartos
#5
Dec2-12, 08:46 AM
P: 247
Quote by Stephen Tashi (post #4, quoted in full)
Thank you so much for your time.


