# Showing that a model is not a good fit

OK, so I have some data (d) of star counts (N = 181), and a model (m = b - Fo, where b = 5 and Fo is a constant flux).

I have found the chi-squared value to be 216.
I know that the number of degrees of freedom here is N - (number of fitted parameters) = 181 - 1 = 180.

My question is:
"show that the model is not a good fit to the data, and use an appropriate statistical table to estimate the confidence at which you can reject the hypothesis of a constant source flux"

All I can come up with so far is that for a good model we usually expect the chi-squared to be approximately equal to the number of degrees of freedom, which is not the case here. From that, one could infer that the model is not a good fit to the data.
I also know that because the number of degrees of freedom is so large, the chi-squared distribution approaches a Gaussian, so we would use a one-tailed Gaussian table.
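A minimal sketch of the Gaussian approximation described above, assuming the thread's numbers (chi-squared = 216, 180 degrees of freedom); for large dof, the chi-squared distribution is approximately Normal with mean dof and variance 2·dof:

```python
import math

chi2_obs = 216.0  # observed chi-squared value from the fit
dof = 180.0       # degrees of freedom, N - 1 = 181 - 1

# Gaussian approximation: chi-squared ~ Normal(mean = dof, var = 2 * dof)
z = (chi2_obs - dof) / math.sqrt(2 * dof)

# One-tailed p-value P(Z > z), using the standard normal CDF via erf
p = 0.5 * (1 - math.erf(z / math.sqrt(2)))

print(f"z = {z:.3f}")  # about 1.90
print(f"p = {p:.4f}")  # about 0.029
```

This is only the large-dof approximation; an exact chi-squared table or calculator gives a slightly larger tail probability (the 0.0345 quoted later in the thread), but the conclusion is the same.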

However, notes I have read talk about comparing the chi-squared value to some significance level, but I do not know how to calculate this.

Any help with getting started and with understanding this, please?

HallsofIvy
Homework Helper
To determine whether a model is a "good fit", one has to decide what is meant by "good", and that means choosing a "level of significance" - typically a probability of 0.10 or 0.05. Here is a fairly easy-to-use chi-square "calculator": http://www.stat.tamu.edu/~west/applets/chisqdemo.html [Broken]

Put in your degrees of freedom, then put in the level of significance you want (0.10 or 0.05), and see whether your value lies too far to the right.


So this is what I have got from your response:
If the calculated area - the probability of getting a chi-squared value greater than 216 - is 0.0345, then since 0.0345 < 0.05, a result this large is unlikely at the 5% significance level.

But I'm not quite sure how this shows that it is a bad model, or how I go about finding the confidence at which I can reject the hypothesis of a constant source flux.
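One common reading of "confidence at which you can reject" is simply one minus the tail probability. A minimal sketch, assuming the 0.0345 tail probability quoted above:

```python
# Tail probability P(chi-squared > 216) read off the applet
p = 0.0345

# Confidence level at which the constant-flux hypothesis can be rejected,
# on the common reading "confidence = 1 - p"
confidence = 1 - p
print(f"reject at about {100 * confidence:.1f}% confidence")  # about 96.5%
```

So the observed chi-squared would let you reject the constant-flux hypothesis at roughly the 96.5% confidence level (i.e. at the 5% significance level, but not at the 1% level).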

Stephen Tashi
> how I go about finding the confidence at which I can reject the hypothesis of a constant source

As far as I can tell, "confidence at which I can reject" is terminology that you have invented. If your course materials use that terminology, perhaps you can explain it to me in the language of probability.

In the ordinary scenario for hypothesis testing, once you establish a range of statistical values for which you will "accept" the null hypothesis, you can compute probabilities only if you assume the null hypothesis is true. The probabilities that you can compute are the probability of accepting the null hypothesis and the probability of (incorrectly) rejecting the null hypothesis.

Subjectively, if the observed statistic is outside the acceptance region and the probability of this happening by chance is "small" then the null hypothesis is "bad". However, you can't compute the probability that the null hypothesis is incorrect unless you use Bayesian statistics.
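The acceptance-region idea above can be sketched numerically. Assuming the thread's numbers and a 5% significance level, and using the same large-dof Gaussian approximation for the critical value (an exact chi-squared table would give a value close to this):

```python
import math

dof = 180.0
alpha = 0.05
z_crit = 1.645  # one-tailed standard-normal critical value for alpha = 0.05

# Approximate chi-squared critical value via the normal approximation:
# reject the null hypothesis if the observed chi-squared exceeds this
chi2_crit = dof + z_crit * math.sqrt(2 * dof)

chi2_obs = 216.0
print(f"critical value ~ {chi2_crit:.1f}")  # about 211.2
print("reject" if chi2_obs > chi2_crit else "accept")  # prints "reject"
```

Since 216 lies outside the acceptance region, the constant-flux null hypothesis is rejected at the 5% level; this is exactly the "too far to the right" check described earlier in the thread.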

The term "confidence" is usually applied to the scenario of parameter estimation.