Error Rate and Its Role in Significance 
#1
Feb 27, 2013, 03:16 PM

P: 484

I am trying to understand the relationship between p-values and the Type II error rate. A Type II error occurs when we fail to reject the null hypothesis when it is in fact false. As an example, suppose I find that my results are not significant — that is, the p-value is higher than the alpha level. If the p-value is large, we are more likely to retain the null, so doesn't that increase the chance of a Type II error? And if that error becomes more likely, doesn't the Type II error rate increase as well?
Thanks. 


#2
Feb 27, 2013, 07:22 PM

Sci Advisor
P: 3,300

I think in most practical situations decreasing alpha does increase the probability of a type II error, but I don't know of a mathematical proof that this must always be true.

Unless you have a specific alternative hypothesis, you cannot compute the type II error. For example, in flipping a coin ten times, the null hypothesis "the coin is fair" lets you compute the p-value of a given number of heads. But the alternative "the coin is not fair" is not specific enough to let you compute any probability. So unless you are very specific about the way in which the null hypothesis is false, you can't compute the type II error.

In frequentist statistics, people get a feeling for type II error by looking at graphs that represent, in a manner of speaking, all possible alternatives to the null hypothesis. These are called "power curves" for statistical tests. For example, for a specific probability Q of heads, you can compute the probability of a given number of heads in 10 tosses, and (for a given alpha) you can compute the probability that a hypothesis test of "the coin is fair" will accept the null hypothesis when the true probability of heads is Q. The power curve gives you the probability of a type II error for each possible value of Q.
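The coin-flip power curve described above can be sketched numerically. This is only an illustration: the rejection region {0, 1, 9, 10} heads and the Q values below are assumptions chosen for the example, not taken from the post.

```python
# Sketch of a power curve for the test "the coin is fair" with n = 10 tosses.
# The rejection region {0, 1, 9, 10} is an illustrative choice giving
# alpha = 22/1024, about 2.1%, under the null hypothesis p = 0.5.
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k heads in n independent tosses with P(heads) = p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def accept_prob(q, n=10, reject=(0, 1, 9, 10)):
    """P(the test accepts 'the coin is fair' | true P(heads) = q).
    For q != 0.5 this is exactly the probability of a type II error."""
    return sum(binom_pmf(k, n, q) for k in range(n + 1) if k not in reject)

# The type II error probability depends entirely on the specific alternative Q:
for q in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(f"Q = {q}: P(accept 'fair') = {accept_prob(q):.4f}")
```

The printed column is one minus the power curve: it is highest near Q = 0.5 and falls off as the alternative moves away from the null, which is exactly the shape a power-curve plot conveys.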





#6
Mar 12, 2013, 03:28 AM

P: 240

Well, let us have a little discussion in the spirit of the game. Hope, no one minds.
Consider a test for sample mean, population normal with known σ. Notations usual. H is null and K is alt hypothesis. To test H:μ=μ_{0}, ag, K: μ>μ_{0}. Test statistic Z (follows normal) , size of the test =α, critical region w:Z> Z_{α}. Since K does not specify a value of μ, we need to compare the power curves corresponding to different α in context of the present problem. Suppose we plot the power curves for a number of different α values on the same graph paper, with μ (where, μ> μ_{0}, to see the type II error) values in the horizontal axis. It will be seen that for higher α, the curve is higher. This means, probabilities of type II error is lower for for higher α corresponding to any μ belonging to K. α increased implies higher probability of rejection of H, other factors remaining unchanged. Therefore higher α implies higher probability of rejection of H, even if μ belongs to K. Looking forward for discussions and counter statements, which are most welcome for refinement of understanding of the matter. 


#7
Mar 12, 2013, 10:30 AM

Sci Advisor
P: 3,300

I agree that your example shows a situation where a power-curve argument proves that enlarging the acceptance region increases the type II error. It increases it by an unknown amount, since we don't know where on the curve we are.

There are many natural and good "beginner's" questions about math that we see posted over and over on the forum, but there is one in statistics that I've yet to see. An example of it is this: suppose we know the variance of a normal distribution is [itex] \sigma^2 = 1 [/itex] and we are testing the hypothesis that its mean [itex] \mu = 0 [/itex] at a 5% significance level. Why must we set the acceptance region for the sample mean to be a symmetric interval containing zero? After all, we could define the acceptance region to be any sort of interval that has a 95% probability of containing the sample mean and get the same probability of type I error. We could even define the acceptance region to be two disjoint intervals that don't contain the value 0 at all!

Perhaps the only answer to the above question is to resort to a power-curve argument and show that a test with a symmetric region about 0 is "uniformly most powerful" among all possible tests using the value of the sample mean. I've never read such a proof. (If it exists, I think it would have to be very technical, since "all possible" tests is a big class of tests!)
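The point that many different acceptance regions give the same type I error but different type II errors can be illustrated concretely. A hedged sketch, assuming a single standard-normal observation for simplicity; the two regions and the alternatives μ = ±0.5 are choices made for the example, not from the post.

```python
# Two acceptance regions with identical 5% type I error under H: mu = 0,
# but very different type II error depending on the alternative.
from statistics import NormalDist

N = NormalDist()  # standard normal: sigma = 1, one observation for simplicity

def p_accept(lo, hi, mu):
    """P(the observation lands in the acceptance interval (lo, hi] | true mean mu)."""
    return N.cdf(hi - mu) - N.cdf(lo - mu)

z975 = N.inv_cdf(0.975)   # Region A: symmetric interval (-1.96, 1.96)
z95 = N.inv_cdf(0.95)     # Region B: one-sided interval (-inf, 1.64]

# Both regions have 95% probability under the null, so alpha = 5% for each:
type1_A = 1 - p_accept(-z975, z975, 0.0)
type1_B = 1 - p_accept(float("-inf"), z95, 0.0)

# But the probability of accepting H (the type II error) differs by alternative:
for mu in (0.5, -0.5):
    t2_A = p_accept(-z975, z975, mu)
    t2_B = p_accept(float("-inf"), z95, mu)
    print(f"mu = {mu:+.1f}: type II  A = {t2_A:.3f}  B = {t2_B:.3f}")
```

Region B beats Region A against μ > 0 but is far worse against μ < 0, so neither region is uniformly best over the two-sided alternative — which is why the question of justifying the symmetric region needs a power argument rather than the type I error alone.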


#8
Mar 12, 2013, 12:40 PM

P: 240

I just gave the simplest (?) example. If σ is unknown, we simply have to think of a t-test instead.
The facts about power curves and varying α remain very similar.



#10
Mar 13, 2013, 10:25 AM

Sci Advisor
P: 3,300

Thinking more about the original question:
Suppose we define an interval such as [itex] I_A [/itex] = [−1.5, 1.5], and suppose we sign some contracts such as the following: What can we say about analogous contracts that are based on [itex] I_B [/itex] instead of [itex] I_A [/itex]? If we restrict ourselves to situations not mentioned in the contract, such as particular weather, the same conclusions apply. The contract about dancing a jig didn't mention anything about the weather. So we can say:

