Formulating null and alternative hypotheses for a chi-squared test
#1
Mar 28, 2012, 05:00 AM

P: 4

I am attempting to investigate to what quantitative degree a physical theory agrees with observations of the phenomena it predicts (specifically, Fraunhofer theory).
I want to use the chi-squared test to produce some confidence levels for the measurements made in different sections of the experiment. The chi-squared test, as far as I'm aware, is like any other statistical test in that it requires both a null and an alternative hypothesis, and I believe these need to be quite specific in order to support valid conclusions. What I would like is some advice on how to word these hypotheses. Currently I have:

H_{0}: No difference exists between the results expected from Fraunhofer theory and observations made of diffraction phenomena in the Fraunhofer regime.

H_{1}: The results expected from Fraunhofer theory and observations made of diffraction phenomena in the Fraunhofer regime disagree at a particular level of precision.

I'm a little unsure about the alternative hypothesis in particular; I'm not quite sure how to word it. Essentially, we expect that to some quantitative degree, such as 1 in 50 or 1 in 100, the measured results will not line up with the expected results. Any and all help will be much appreciated.


#2
Mar 28, 2012, 05:08 AM

P: 4,572

In terms of wording hypotheses, the best way to do this is to give an interval for the statistic to fall in. As an example, suppose we are testing whether the population mean is less than 50 (null) against greater than or equal to 50 (alternative). We would write this as:

H0: mu < 50
HA (or H1): mu >= 50

For your chi-square, you need to figure out the interval for the statistic. It might be two-sided or one-sided. An example of a two-sided test would look like this (for chi-square):

H0: X^2 < 10 OR X^2 > 20
H1: 10 <= X^2 <= 20

where X^2 is the calculated statistic (for chi-square, X^2 is always >= 0). So essentially you need to find out the interval and state that.

I have a feeling you haven't done much statistics before, and I don't know much about Fraunhofer theory, so maybe you could tell us what you are trying to calculate from your book or source, or point us to some relevant description of the problem if it's on a website.
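To make the mechanics concrete, here is a minimal sketch (not from the thread) of a chi-square goodness-of-fit test in Python using scipy; the counts are invented purely for illustration:

```python
# Hedged sketch: chi-square goodness-of-fit test on made-up counts.
# Requires scipy; the observed/expected numbers are illustrative only.
from scipy.stats import chisquare

observed = [18, 22, 20, 20, 20, 20]   # hypothetical measured counts
expected = [20] * 6                   # counts predicted by a hypothetical model

stat, p = chisquare(observed, f_exp=expected)
# H0 (model fits) is rejected at the 5% level only if p < 0.05.
print(stat, p)
```

Here the statistic is sum((O - E)^2 / E) over the bins, and the p-value comes from a chi-square distribution with (number of bins - 1) degrees of freedom.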


#3
Mar 28, 2012, 05:22 AM

P: 4

Hi Chiro and thank you for the prompt response!
You're right in assuming that I haven't done much statistics before. This isn't a question from a book or other source; I'm performing an independent investigation at university. The goal of the experiment is as described: to quantify to what degree observations agree with Fraunhofer theory. I don't think an understanding of Fraunhofer theory specifically is necessary for the task at hand, but I expect some experience in using statistics in an experimental setting is required (which I'm lacking, unfortunately!).

I see what you're saying with regard to the wording of the hypotheses, and the examples you've given certainly are specific. The null hypothesis I've provided is essentially equivalent to stating that χ^{2} = 0, right? The difficulty I'm having is that the aim of the experiment isn't to prove that the theory agrees with observation to a particular quantitative degree, but that it agrees to some quantitative degree, meaning that χ^{2} > 0, I suppose...?

I'm a little confused about using the test statistic in the hypothesis itself, though. I thought part of the point of the hypotheses was as a sort of justification for using a particular test, i.e., the χ^{2} test requires a null and alternative hypothesis, and its value allows us to reject or fail to reject the null. Including the test statistic in the hypothesis seems a little... circular. I may be wrong; that's why I'm here asking!


#4
Mar 28, 2012, 05:29 AM

P: 4,572

An example of a hypothesis test for a population variance:

H0: σ^{2} < 4
H1: σ^{2} >= 4

where σ^{2} is the population variance. This can be represented with the chi-square, if you are testing variance, by noting that (n-1)S^2/σ^{2} ~ χ^{2}_{n-1}, where S^2 is the sample variance. You could just mention the final statistic and then rearrange to get σ, or you could write it first in terms of σ, which is usually what is done because people automatically know that σ refers to the standard deviation parameter. Does this help?
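As a hedged sketch of what that relation buys you in practice: a 95% confidence interval for σ^{2} from a sample variance, using (n-1)S^2/σ^{2} ~ χ^{2}_{n-1}. The sample size and sample variance below are invented for illustration:

```python
# Hedged sketch: 95% confidence interval for the population variance
# sigma^2, given a sample variance S^2, via the chi-square relation
# (n-1)*S^2/sigma^2 ~ chi-square with n-1 degrees of freedom.
# Requires scipy; n and s2 are hypothetical numbers.
from scipy.stats import chi2

n = 10          # hypothetical sample size
s2 = 4.0        # hypothetical sample variance S^2
df = n - 1

lower = df * s2 / chi2.ppf(0.975, df)   # small sigma^2 <-> large chi-square quantile
upper = df * s2 / chi2.ppf(0.025, df)
print(lower, upper)   # interval for sigma^2
```

Note the inversion: the upper chi-square quantile gives the lower endpoint for σ^{2}, and vice versa, because σ^{2} appears in the denominator.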


#5
Mar 28, 2012, 05:45 AM

P: 4

I'm afraid I'm having a bit of trouble understanding your last post, chiro.
I can't say that I've seen a lot of null/alternative hypothesis statements before, but the ones I have seen haven't been quite as terse as simply stating an expected value for a statistic. Stating a statistic doesn't seem as meaningful as making a statement about what is physically expected, though using it to back up those physical claims seems intuitive.

I'm also unsure how you intend one to use σ^{2}. The issue in the experiment isn't the value of σ^{2}, as every datum carries Gaussian random error drawn from a distribution of known standard deviation. The point of the experiment isn't to determine a value with an associated uncertainty, but to determine to what degree theory and reality are out of line. I'm using chi-squared to assign a confidence level to said degree (e.g., some measurements agree to within 1 part in 50 with theory, and chi-squared gives a probability of said agreement of 90%). Sorry if I'm being unclear, but this is all quite confusing for me!


#6
Mar 28, 2012, 11:35 AM

Sci Advisor
P: 3,252

What do you mean by "measurements agree to within 1 part in 50 with theory"? You haven't clearly described the problem.

One guess is that you have some deterministic theoretical formula y = f(x). You have taken several different values of the control variable x, say x0, x1, ..., and for each value you have measured several different values of y. So, for example, for x = x1 you have a set of measurements y11, y12, y13, .... If that is the case, there are two things involved in a "disagreement" with theory: a given y value can differ from the predicted y value, and, for a given size of difference, a certain number of the measurements can produce that difference or smaller.

What are you asking about? Is this a question about analyzing the set of y's for an individual value of x as a separate problem? Or are you asking how to test some hypothesis about the entire set of data?

When discussing situations involving probability, it is advisable to simply use the word "probability" instead of a variety of other words (such as "chance", "uncertainty", "confidence"). In particular, "confidence level" has a technical meaning in statistics; it's a term used in the theory of estimation. What meaning do you wish to convey by the term "confidence level"?
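If the guessed setup is right, the standard statistic for comparing measurements against a deterministic prediction with known Gaussian error is χ^{2} = Σ (y_i - f(x_i))^2 / σ^2. A minimal sketch, where f, the data, and σ are all invented for illustration:

```python
# Hedged sketch of the guessed setup: measurements y_i of a deterministic
# theory y = f(x) with known Gaussian error sigma. The chi-square statistic
# is sum((y_i - f(x_i))^2 / sigma^2). Everything here is hypothetical.

def f(x):               # hypothetical theoretical prediction
    return 2.0 * x

xs    = [1.0, 2.0, 3.0]
ys    = [2.1, 3.8, 6.2]   # hypothetical measurements
sigma = 0.1               # known measurement standard deviation

chi_sq = sum((y - f(x)) ** 2 / sigma ** 2 for x, y in zip(xs, ys))
# Compare chi_sq against a chi-square distribution with len(xs) degrees
# of freedom (no parameters were fitted here) to obtain a p-value.
print(chi_sq)
```

With per-point errors σ_i, replace `sigma ** 2` by the corresponding σ_i^2 inside the sum.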


#7
Mar 29, 2012, 01:38 AM

P: 4,572

Since variance is greater than zero, we don't consider the whole real line; in a typical hypothesis test for population variance using chi-square, a two-sided test has one hypothesis relating to an interval and the other relating to everything to the left and right of that interval. Like Stephen Tashi pointed out above, you need to figure out what the interval is and how it corresponds to the parameter you are trying to estimate.

An example of estimating population variance from a sample might look something like this:

H0: σ^{2} = 4
H1: σ^{2} != 4

given that σ^{2} is always >= 0. Now, if you have a certain confidence level, a value for your sample variance, and your degrees of freedom, you would find the interval that corresponds to σ^{2} by using the definition of what chi-square with n degrees of freedom actually represents.

The chi-squared distribution is used for many things, and only one of them has to do with the classical calculation of the estimator of the population variance, assuming the initial assumptions are met. If the chi-square model is a good estimator then we have (n-1)S^2/σ^{2} ~ χ^{2}_{n-1}, which can be seen here: http://en.wikipedia.org/wiki/Varianc...ample_variance

What you would do is take the values corresponding to the interval relating to 'confidence', as many scientists and engineers call it, and then find the corresponding interval for your parameter σ^{2} (or even σ). You then get an interval, put it in your H0 and H1 definitions, do the appropriate test, get a test statistic, and evaluate it.

Now, I want to say that statistics is not this easy and you can't treat these tests like 'magic boxes' that work all the time, but if you are asked to perform a test for whatever reason, and there is some level of confidence that it gives a useful answer or some other useful metric, then this is what you do.
So basically, for a 90% interval, the lower endpoint x_L satisfies P(χ^{2}_n < x_L) = 0.05 and the upper endpoint x_U satisfies P(χ^{2}_n < x_U) = 0.95. This leaves 10% in the tails, which gives you the 90% interval (that's why it's called a 90% one). This is not the only possible interval, but it is the conventional way it's done. You find these numbers in statistical tables for a given n, or you use a computer to calculate them directly for any probability.
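The quantile lookup described above can be sketched in a few lines with scipy; df = 9 here is an arbitrary example value:

```python
# Hedged sketch: endpoints of a 90% chi-square interval are the 5th and
# 95th percentiles of the distribution. df = 9 is an arbitrary example.
# Requires scipy.
from scipy.stats import chi2

df = 9
lower = chi2.ppf(0.05, df)   # x_L such that P(X < x_L) = 0.05
upper = chi2.ppf(0.95, df)   # x_U such that P(X < x_U) = 0.95
print(lower, upper)
```

These are the same numbers you would read out of a printed chi-square table for the given degrees of freedom.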

