Hypothesis tests

  • Thread starter muktl
Hi, I'm just really confused about the hypothesis testing part of my course.
All I know is that I take the mean under the hypothesis and compute the test statistic, but I don't understand when I need to reject or not reject the hypothesis.
Also, what really is the p-value (is that the significance level?), and how do I use it to decide whether to reject or not?
And what about the alpha level (the probability of making a Type I error)?

Thanks.
 

Stephen Tashi

Science Advisor
One way to look at the typical statistics course is that it doesn't make logical sense!

The situation is this:

The question that a practical person wants answered is: "Given the data, what is the probability that statement H is true?"

The problems presented in a typical statistics course don't give enough information to answer that question. So, instead, a different question is answered: "Given that statement H is true, what is the probability of the data?"

If the probability of the data is "small" then you are supposed to "reject" the statement H, but this is a completely subjective decision. There is no objective way to determine how small "small" should be. The process of "accepting or rejecting the null hypothesis" is simply an arbitrary procedure. There is no proof offered that it is the only correct one to follow.
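To make the distinction concrete, here is a minimal sketch of the quantity a standard test actually computes, P(data | H), using a made-up coin example (the coin, the counts, and the 0.05 cutoff are all invented for illustration, not from the thread):

```python
# Made-up coin example: statement H is "the coin is fair".
# We compute P(data at least this extreme | H), i.e. a p-value,
# not P(H | data).
from scipy.stats import binom

n, heads = 100, 60   # hypothetical data: 60 heads in 100 flips
p_fair = 0.5         # statement H: the coin is fair

# binom.sf(k, n, p) is P(X > k), so this is P(X >= 60), doubled
# for a (rough) two-sided p-value.
p_value = 2 * binom.sf(heads - 1, n, p_fair)
print(f"p-value = {p_value:.4f}")

# The "reject" step is the subjective part: 0.05 is a convention,
# not a derived quantity.
alpha = 0.05
print("reject H" if p_value < alpha else "fail to reject H")
```

Note that nothing in the computation tells you where to put alpha; that choice is exactly the subjective step described above.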

You will find that the terms used in statistics ("level of significance", "rejection region", "confidence level", "confidence limits") have been cleverly chosen. They make the methods of statistics sound very objective. They confuse most people into thinking that they are getting the answer to "Given the data, what is the probability that my idea is true?". However, under the hood, the methods are subjective and the numbers you compute don't answer this question.

If you want to study the type of statistics that does compute "The probability that statement H is true given the data", you'll have to study Bayesian statistics. If you want to study how to objectively set p-values, you should study a book like "Optimal Statistical Decisions" by Morris DeGroot.

In some situations, it may be possible to judge empirically how p-value cutoffs are working. For example, if you publish a medical journal and you require that studies show a p-value of .05 or less, then you'll have a smaller pile of papers to consider than if you set your cutoff at .10. Or suppose you run a lab that screens thousands of substances to pick out ones that hold promise as drugs. Suppose you set a p-value cutoff of .10 and find hardly any substances to send on for further testing. Other companies discover drugs based on substances that you have rejected. Your boss complains. One obvious course of action is to increase your cutoff to .15.
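To put rough numbers on that screening story, here is a hypothetical simulation (the 5% rate of real effects and the p-value distributions are invented modeling assumptions, not data from anywhere):

```python
# Invented screening simulation: under the null, p-values are
# (approximately) uniform, so loosening the cutoff passes more real
# effects but also more false positives.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                       # substances screened (made up)
real = rng.random(n) < 0.05      # assume 5% have a real effect

# Assumed p-value distributions: uniform under the null, pushed
# toward zero for the real effects.
p = np.where(real, rng.beta(0.3, 3.0, n), rng.random(n))

for cutoff in (0.10, 0.15):
    passed = p < cutoff
    false_pos = (passed & ~real).sum()
    print(f"cutoff {cutoff:.2f}: {passed.sum()} pass, "
          f"{false_pos} false positives")
```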
 
In a typical experimental situation we compare a treatment group with a control group in terms of some experimental parameter (such as the mean). Under the null hypothesis we assume the treatment has no effect, so there should be no significant difference in the parameter value between the two groups. The standard (frequentist) approach is to assume some population-based distribution of the parameter, usually a Gaussian (normal) distribution. This has been shown to work pretty well. Before the experiment, we set a high significance level (a low probability that the null hypothesis is true), say 0.025, because we want to have confidence that a real effect exists. If the level is met or exceeded (a probability of less than 0.025), we reject the null hypothesis. This is called the alpha error because that is the probability that we wrongly rejected the null hypothesis in favor of the alternative hypothesis, which is that the treatment had an effect and the findings were not just random variation.
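As a minimal sketch of the comparison just described (the group sizes, means, and the 0.025 level are invented for illustration; scipy's standard two-sample t-test stands in for the population-based comparison):

```python
# Simulated treatment-vs-control comparison; all numbers invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
control = rng.normal(10.0, 2.0, 30)    # null: no effect
treatment = rng.normal(11.5, 2.0, 30)  # simulated real effect

alpha = 0.025                          # chosen before seeing the data
t_stat, p_value = ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject null" if p_value < alpha else "fail to reject null")
```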

The Bayesian approach is different in that it states a prior probability of an outcome before the experiment and then revises that probability based on the new data. It has a lot of applications, but I think one should master frequentist methods before studying Bayesian methods. The two are not necessarily incompatible, but the method I described above is what's generally required if you are trying to publish experimental results or submit data to a government agency.
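For contrast, here is a minimal sketch of the Bayesian update just mentioned, using a conjugate Beta prior on a success rate (the prior and the counts are invented for illustration):

```python
# Invented Beta-Binomial example: start with a prior on a success
# rate, revise it with new data, and read off P(H | data)-style
# statements directly from the posterior.
from scipy.stats import beta

prior_a, prior_b = 1, 1        # flat Beta(1, 1) prior (assumption)
successes, failures = 60, 40   # hypothetical new data

# Conjugate update: posterior is Beta(a + successes, b + failures).
posterior = beta(prior_a + successes, prior_b + failures)
print(f"posterior mean = {posterior.mean():.3f}")
print(f"P(rate > 0.5 | data) = {posterior.sf(0.5):.3f}")
```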
 
To clarify something I said in the previous post: when I said that we want to set a high significance level, I was referring to the alpha error, which is the probability of rejecting the null hypothesis when it is true. This probability should be low.
 
