Hypothesis testing with normal distribution


I've been learning about hypothesis testing with the normal distribution, but I don't understand the need for the significance level. I understand that, by the Central Limit Theorem, the distribution of sample means is (for sufficiently large n) approximately normal, with mean equal to the mean of the parent population and standard deviation equal to the population standard deviation divided by the square root of n. However, to decide whether a sample no longer fits the original population (which is the aim of hypothesis testing), I would have guessed that you check whether the mean of the tested sample is an outlier in that distribution of means, i.e. more than 2 standard deviations out. Apparently that is not what is done: instead we choose a significance level, and it is this that sets the boundary for whether the tested sample is still consistent with the parent population, even if that boundary lies above or below 2 standard deviations from the mean. It is the significance level that matters, not whether the sample mean is an outlier. Why is this the case?
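
To make the sampling-distribution picture above concrete, here is a small simulation sketch (plain Python standard library; the population and sample sizes are made up for illustration): repeated samples from a non-normal parent population give sample means that cluster around the population mean with spread roughly sigma / sqrt(n).

```python
# Sketch: sampling distribution of the mean (Central Limit Theorem).
# Parent population is exponential (clearly non-normal) with mu = 1, sigma = 1.
import math
import random
import statistics

random.seed(0)

mu, sigma = 1.0, 1.0   # mean and sd of the exponential(1) parent population
n = 50                 # size of each sample (assumed value for illustration)
num_samples = 10_000   # number of repeated samples

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(num_samples)
]

print("mean of sample means:", statistics.fmean(sample_means))  # close to mu = 1
print("sd of sample means:  ", statistics.stdev(sample_means))  # close to sigma/sqrt(n)
print("sigma / sqrt(n):     ", sigma / math.sqrt(n))            # about 0.141
```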

Thanks in advance.
 
With hypothesis testing you can state exactly what the probability of making an error is: the significance level is the probability of a Type I error, i.e. of rejecting the null hypothesis when it is actually true, and you choose it according to how costly a false rejection would be. Saying "reject the hypothesis if the sample statistic is 2 st. dev. away from it" is just an arbitrary rule of thumb; it amounts to implicitly choosing a significance level of about 0.0455 for a two-tailed test, while the conventional 5% level corresponds to a cutoff of about 1.96 standard deviations.
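
To make that concrete, here is a minimal one-sample z-test sketch (plain Python standard library; the null mean, sigma, and sample numbers are hypothetical): the chosen significance level alpha fixes the critical value, and alpha = 0.05 happens to land near the familiar "2 standard deviations".

```python
# Sketch: two-tailed one-sample z-test with known population sigma.
import math
from statistics import NormalDist

def z_test(sample_mean, mu0, sigma, n, alpha=0.05):
    """Test H0: population mean == mu0 at significance level alpha."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))      # standardised sample mean
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)          # e.g. about 1.96 for alpha = 0.05
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-tailed p-value
    return z, z_crit, p_value, abs(z) > z_crit

# Hypothetical numbers: H0 says mu = 100 with sigma = 15; a sample of n = 36 has mean 104.
z, z_crit, p, reject = z_test(104, 100, 15, 36, alpha=0.05)
print(f"z = {z:.2f}, critical value = {z_crit:.2f}, p = {p:.4f}, reject H0: {reject}")

# The "2 standard deviations" rule is just the special case alpha ~ 0.0455:
print("alpha implied by the 2-sd rule:", 2 * (1 - NormalDist().cdf(2)))
```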
 
