Showing that a model is not a good fit

AI Thread Summary
The discussion centers on evaluating the fit of a model to star count data using chi-squared statistics. A chi-squared value of 216 was calculated with 180 degrees of freedom, indicating a poor model fit since a good model typically has a chi-squared value close to the degrees of freedom. The probability of obtaining a chi-squared value greater than 216 is 0.0345, below a 5% significance level, suggesting that such a result is unlikely under the null hypothesis of constant source flux. The conversation highlights confusion over the terminology "confidence at which to reject" the null hypothesis, clarifying that traditional hypothesis testing deals with acceptance and rejection probabilities rather than confidence levels. Ultimately, the discussion emphasizes the need for clear definitions and understanding of statistical concepts in model evaluation.
indie452
OK, so I have some data (d) of star counts (N = 181), and a model (m = b - F0, where b = 5 and F0 is a constant flux).

I have found the chi-squared value = 216.
I know that the number of degrees of freedom here is N - (number of fitted parameters) = 181 - 1 = 180.

my question is:
"show that the model is not a good fit to the data, and use an appropriate statistical table to estimate the confidence at which you can reject the hypothesis of a constant source flux"

All I can come up with so far is that for a good model we usually expect the chi-squared value to be approximately equal to the number of degrees of freedom, which is not the case here. From that one could argue that the model is not a good fit to the data.
Also, I know that because the number of degrees of freedom is so large, the chi-squared distribution approaches a Gaussian, so we would use the one-tailed Gaussian table.

However, the notes I have read talk about comparing the chi-squared value to some significance level, and I do not know how to calculate this.

Any help with getting started and with understanding this, please?
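(A minimal Python sketch of this calculation, assuming SciPy is available; the variable names are only illustrative. It computes the tail probability for chi-squared = 216 with 180 degrees of freedom exactly, and via the large-dof Gaussian approximation mentioned above.)

```python
# Minimal sketch: tail probability of chi-squared = 216 with 180 dof,
# computed exactly (SciPy) and via a Gaussian approximation for large dof.
from math import sqrt
from scipy.stats import chi2, norm

chi2_value = 216.0   # observed chi-squared from the fit
dof = 180            # N - number of fitted parameters = 181 - 1

# Exact tail (survival) probability: P(chi^2 > 216 | 180 dof)
p_exact = chi2.sf(chi2_value, dof)

# Large-dof Gaussian approximation: chi^2 ~ Normal(mean = dof, var = 2*dof)
z = (chi2_value - dof) / sqrt(2.0 * dof)
p_gauss = norm.sf(z)  # one-tailed probability from the Gaussian table

print(f"exact p-value   = {p_exact:.4f}")   # roughly 0.034
print(f"Gaussian approx = {p_gauss:.4f}")   # roughly 0.029
```

The plain Gaussian approximation comes out slightly lower than the exact value because the chi-squared distribution is still somewhat right-skewed even at 180 degrees of freedom.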
 
To determine whether a model is a "good fit" one has to decide what is meant by "good". And that means determining a "level of significance", typically a probability of 0.10 or 0.05. Here is a pretty easy to use chi-square "calculator": http://www.stat.tamu.edu/~west/applets/chisqdemo.html

Put in your degrees of freedom, then put in the level of significance you want (0.10 or 0.05), and see if your value is too far to the right.
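(As a rough sketch of what that calculator does, again assuming SciPy: compute the critical chi-squared value for the chosen significance level and 180 degrees of freedom, then check whether the observed 216 lies to the right of it.)

```python
# Sketch of the comparison the chi-square calculator performs:
# find the critical value for a chosen significance level and 180 dof.
from scipy.stats import chi2

dof = 180
observed = 216.0

for alpha in (0.10, 0.05):
    critical = chi2.ppf(1.0 - alpha, dof)   # value whose upper-tail probability is alpha
    verdict = "reject" if observed > critical else "do not reject"
    print(f"alpha = {alpha:.2f}: critical value ~ {critical:.1f} -> {verdict}")
```

On these numbers 216 lies beyond the critical value at both the 0.10 and 0.05 levels, so the statistic falls in the rejection region either way.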
 
Thanks for replying.

So this is what I have got from your response:
the area I calculated, the probability of getting a chi-squared value greater than 216, is 0.0345, which means that at a 5% significance level such a large result is unlikely if the model is correct.

But I am not quite sure how this shows it is a bad model, or how I go about finding the confidence at which I can reject the hypothesis of a constant source flux.
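(If the question's phrase "confidence at which you can reject" is read informally as 1 minus the tail probability, a common shorthand in astronomy but one that is questioned in the reply below, the arithmetic is just the following sketch, again assuming SciPy.)

```python
# Sketch: turn the tail probability into a rejection statement.
from scipy.stats import chi2

# probability of chi-squared > 216 with 180 dof if the constant-flux model is true (~ 0.0345)
p_value = chi2.sf(216.0, 180)
print(f"p-value = {p_value:.4f}")
print(f"the constant-flux hypothesis can be rejected at any significance level above {p_value:.3f},")
print(f"i.e. informally 'at about the {(1.0 - p_value) * 100:.1f}% level'")
```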
 
indie452 said:
how I go about finding the confidence at which I can reject the hypothesis of a constant source flux

As far as I can tell, "confidence at which I can reject" is terminology that you have invented. If your course materials use that terminology, perhaps you can explain it to me using the language of probability.

In the ordinary scenario for hypothesis testing, once you establish a range of statistical values for which you will "accept" the null hypothesis, you can compute probabilities only if you assume the null hypothesis is true. The probabilities that you can compute are the probability of accepting the null hypothesis and the probability of (incorrectly) rejecting the null hypothesis.

Subjectively, if the observed statistic is outside the acceptance region and the probability of this happening by chance is "small" then the null hypothesis is "bad". However, you can't compute the probability that the null hypothesis is incorrect unless you use Bayesian statistics.

The term "confidence" is usually applied to the scenario of parameter estimation.
 