
Amount of data vs. degrees of freedom in fit

  1. Sep 22, 2012 #1
    Hi

    I am struggling with a problem. I have 6 data points, and I have found the solution of a model which I believe should describe the behavior of the data. Now I am trying to fit the parameters of the solution to those 6 data points.

    The model contains 5 degrees of freedom, all of which are known, but not very precisely. When I fit the expression to the data, I get very large standard deviations on my parameters; in addition, they are not at all close to their expected values.

    Naturally it is entirely possible that my model is simply wrong. However, I also doubt how much value I can assign to the fit. Visually it looks good, but the reduced χ² >> 1.

    My question is: is it possible to be in a situation where the number of data points is so small that the statistics are simply too poor to determine this many degrees of freedom?
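
    For concreteness, here is a toy sketch (Python/SciPy, with a made-up two-parameter model standing in for my actual five-parameter one, and made-up numbers) of the kind of fit I am doing and where the standard deviations and the reduced χ² come from:

    ```python
    # Toy least-squares fit: made-up 2-parameter model, 6 data points.
    # (Stand-in for the real 5-parameter model; all numbers are illustrative.)
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):
        return a * np.exp(-b * x)

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 1.4, 0.9, 0.55, 0.4, 0.3])
    sigma = np.full_like(y, 0.1)          # assumed measurement uncertainties

    popt, pcov = curve_fit(model, x, y, sigma=sigma, absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))         # parameter standard deviations

    ndf = len(x) - len(popt)              # degrees of freedom of the fit
    chi2_red = np.sum(((y - model(x, *popt)) / sigma) ** 2) / ndf

    print(popt, perr, chi2_red)           # reduced chi^2 >> 1 signals trouble
    ```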


    Thanks for any feedback in advance.

    Best,
    Niles.
     
  3. Sep 22, 2012 #2

    Hurkyl (Staff Emeritus, Science Advisor, Gold Member)

    Of course. More data means more precision, so less data means less precision. Honestly, I don't think you have enough data to confidently fit one degree of freedom, let alone 5. (but then, I don't know the particulars of your situation)
     
  4. Sep 22, 2012 #3

    Stephen Tashi (Science Advisor)

    Do you mean that the model contains 5 unknown and independent parameters?

    I'm not sure whether "expected values" is meant in the sense of values that you'd expect to get based on expert scientific knowledge (like the known mass of a molecule), or whether you mean this in a statistical sense, as an average of a set of numbers.

    How are you computing "standard deviations on my parameters"? After all, to do this in a straightforward way, you would need various samples of your parameters, and you only have samples of data, not samples of parameters.

    There are curve-fitting software packages that purport to both find the parameters and assign a standard deviation to them, but as I understand what such software is doing, it must make many assumptions in order to perform this computation. It would be best to understand whether those assumptions apply to your problem.
     
  5. Sep 22, 2012 #4
    Hi

    Thanks for the replies.

    Yes, the model contains 5 unknown and independent parameters; these I use as degrees of freedom when doing a least-squares fit.


    By "expected values" I meant the first case: values expected from prior scientific knowledge.


    The standard deviations I referred to are the ones reported by the least-squares fit. But I think you are both right; my problem is simply that I have far too little data.
     
  6. Sep 22, 2012 #5

    chiro (Science Advisor)

    One way people deal with having very little data is to use priors and Bayesian analysis to do the statistical inference (the medical field runs into this problem frequently; see the toy sketch after the questions below).

    That said, small data sets require you to have a good deal of expert knowledge about the data, to make sure that what little data you have is as useful as it can be.

    But you have a ridiculously small amount of data (even compared to situations that are notoriously data-poor, like a data set of surgical outcomes for some exotic niche specialty), so my question for you is two-fold:

    1) Why do you have this amount of data? and
    2) Can you collect more data, if at all possible?
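
    To illustrate the Bayesian idea (a toy example, not your actual problem): with a conjugate normal prior on a single parameter, even 6 points give a sensible posterior, because the prior carries the information the data lack. A minimal Python sketch, with made-up numbers:

    ```python
    # Toy Bayesian update: one parameter (a mean) estimated from 6 points,
    # with a normal prior encoding expert knowledge. Conjugate normal-normal
    # case with known measurement noise; all numbers are made up.
    import numpy as np

    data = np.array([4.8, 5.3, 5.1, 4.6, 5.4, 5.0])  # 6 measurements
    sigma = 0.5                                      # known measurement noise
    mu0, tau0 = 4.0, 1.0                             # prior: N(mu0, tau0^2)

    n = data.size
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)  # standard conjugate update
    post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

    print(post_mean, np.sqrt(post_var))  # prior pulls the estimate, shrinks the error
    ```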
     
  7. Sep 23, 2012 #6

    mfb (Staff: Mentor)

    Does your fit program give you a p-value for the fit quality? If that is too small even with just 6 data points, forget the fit.
    If not, you can use a table to check your (χ², ndf) value.

    In general, with less data it is easier to get high p-values, as the fit is not so sensitive to the details of the real distribution.

    Large parameter uncertainties, parameters far from their expected values, and reduced χ² >> 1: those are 3 signs of a bad model or some other error.
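
    Instead of a printed table, something like this (Python/SciPy; the numbers are placeholders) gives the p-value directly from (χ², ndf):

    ```python
    # p-value for a given chi-squared and number of degrees of freedom:
    # the probability of a chi2 at least this large if the model is correct.
    from scipy.stats import chi2

    chisq, ndf = 15.2, 1         # placeholders; 6 points - 5 parameters = 1 ndf
    print(chi2.sf(chisq, ndf))   # small p-value => poor fit / wrong model
    ```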
     
  8. Sep 23, 2012 #7
    Thanks. However, I don't understand the quoted part. My fit routine does give me the p-value. In my case I am interested in a small p-value, such that I know the model is correct - that must be true regardless of the number of data points, I would say. Maybe I misunderstood something?

    Best,
    Niles.
     
  9. Sep 23, 2012 #8

    mfb (Staff: Mentor)

    ?
    The p-value is the probability that random data generated from your fitted model would look as good as the actual data, or worse.
    If you have a small p-value (for example 0.0001), it means that either (a) your model is wrong, or (b) you were extremely unlucky and got random deviations which occur only once in 10000 repetitions of the experiment. While (b) is possible, (a) looks more likely.

    If your model is correct, you would expect a p-value somewhere around 0.5. It might be 0.05, it might be 0.95, but everything below 0.01 is very suspicious.
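
    To make that definition concrete, here is a small Monte Carlo sketch (Python, with a hypothetical straight-line model and Gaussian errors) that estimates the p-value as the fraction of simulated datasets that look as bad as the real one or worse:

    ```python
    # Monte Carlo illustration of the fit p-value: the fraction of datasets
    # simulated from the fitted model whose chi-squared is at least as large
    # as that of the actual data. (A real analysis would re-fit each
    # simulated dataset; this sketch keeps the model fixed for simplicity.)
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 5.0, 6)
    sigma = 0.3
    y_model = 1.0 + 0.8 * x                          # hypothetical fitted model
    y_data = y_model + rng.normal(0.0, sigma, x.size)

    chi2_data = np.sum(((y_data - y_model) / sigma) ** 2)

    sims = y_model + rng.normal(0.0, sigma, (100_000, x.size))
    chi2_sim = np.sum(((sims - y_model) / sigma) ** 2, axis=1)

    print(np.mean(chi2_sim >= chi2_data))            # estimated p-value
    ```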
     
  10. Sep 23, 2012 #9
    Then I may have misunderstood the concept of a p-value. I thought small p-values (p < α) were evidence against the null hypothesis, so that the data is statistically significant.
     
  11. Sep 24, 2012 #10
    I'd say you're right. I don't know what mfb is talking about.
     
  12. Sep 24, 2012 #11

    mfb (Staff: Mentor)

    Small p-values of a fit are evidence that your fit function ("null hypothesis") is wrong, and the difference between data and fit function is significant.
    That is what I said.
     
  13. Sep 24, 2012 #12
    Alright, I see what you mean, but it does depend on exactly what you are testing for. There are other tests where a low p-value indicates that you can reject the null hypothesis in favor of your model.
     
  14. Sep 24, 2012 #13

    D H (Staff Emeritus, Science Advisor)

    That is exactly backwards. The null hypothesis is that the data are just random numbers drawn from a hat. A small p-value is evidence that this null hypothesis should be rejected.


    There's a big problem with using the p-value for your analysis. Suppose you dropped one of those data points, leaving five data points for five unknowns. You will get a perfect fit (zero residual) if the matrix isn't singular. Zero residual means a p-value of zero. Does this mean your model is good? Absolutely not. It might be a perfect fit, but it is perfectly useless. You are going to get similar problems when you only have a few more data points than unknowns. The p-value statistic is meaningless in these cases.

    There are a number of other tricks beyond the p-value if you know the errors/uncertainties in the individual data points. Note well: Any regression will be of dubious value if you don't know those uncertainties. One thing you can do is perform a principal components analysis. This will give you a fairly good idea of how many statistically meaningful degrees of freedom you have.

    Another thing you can do is build the model up from scratch (a rough sketch of this approach follows below). Build a set of five one-variable models and pick the one that best explains the data. If none of the variables does a good job, you don't have a model (here the p-value might be of help). Let's assume you do have something meaningful. Next, regress the remaining four variables against the one that gave the best fit, yielding four new variables, and regress the residual from the first regression against each of these four new variables. Pick the new variable that does the best job of explaining the residual from the first regression. Repeat until either you have run out of variables or the decrease in the residual is statistically insignificant. You can also use the opposite approach: start with the kitchen-sink model (all variables tossed into the mix) and repeatedly delete variables from the mix until you get to the point where a further deletion would be statistically significant.
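
    A rough Python sketch of the build-up approach (a simplified forward selection that refits all chosen variables at each step instead of orthogonalizing them, and uses a crude improvement threshold where a real analysis would use a significance test):

    ```python
    # Forward stepwise regression sketch: repeatedly add the candidate
    # variable that most reduces the residual sum of squares, stopping when
    # the improvement becomes negligible.
    import numpy as np

    def forward_select(X, y, min_improvement=0.01):
        """X: (n_points, n_vars) candidate variables; y: (n_points,) data."""
        n, k = X.shape
        chosen = []
        rss = float(y @ y)              # residual sum of squares, empty model
        while len(chosen) < k:
            best_var, best_rss = None, rss
            for j in range(k):
                if j in chosen:
                    continue
                cols = X[:, chosen + [j]]
                coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
                r = y - cols @ coef
                if float(r @ r) < best_rss:
                    best_var, best_rss = j, float(r @ r)
            # Stop when the best addition no longer reduces the RSS much
            # (a real analysis would test significance, e.g. with an F test).
            if best_var is None or rss - best_rss < min_improvement * rss:
                break
            chosen.append(best_var)
            rss = best_rss
        return chosen

    # Made-up example: 6 points, 5 candidate variables, 2 of them real.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 5))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0.0, 0.1, 6)
    print(forward_select(X, y))
    ```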
     
  15. Sep 24, 2012 #14

    mfb (Staff: Mentor)

    You cannot reject the hypothesis "the data are drawn from some unknown distribution". It fits ALL datasets.
    You can reject specific distributions - if the p-value (with this distribution as the model) is small.

    Don't do that.

    It is not meaningless - while a high p-value does not mean that your model is right, a small p-value means that your model is probably wrong.
     
  16. Sep 24, 2012 #15

    D H (Staff Emeritus, Science Advisor)

    This is wrong, very wrong. You are either using a very non-standard definition of the p-value or you completely misunderstand what it means. A small p-value, typically < 0.05, is a prerequisite for statistical significance. You are doing something very wrong if you are using the standard definition of p-value and are rejecting models with a small p-value / accepting models with a high p-value.
     
  17. Sep 24, 2012 #16

    mfb (Staff: Mentor)

    Statistical significance of a deviation from the model you used to calculate the p-value.
    If you have such a significant deviation, the model you used to calculate the p-value might be wrong (or you had bad luck).

    0.05... :D. I'm a particle physicist; everything up to 3 standard deviations (i.e. p-values down to about 0.003) is just considered a fluctuation (or an error in the analysis) there.
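
    For reference, the one-sided conversion between "n standard deviations" and a p-value is just the normal tail probability:

    ```python
    # One-sided conversion from "n sigma" to a p-value (normal tail probability).
    from scipy.stats import norm

    for n_sigma in (1, 2, 3, 5):
        print(n_sigma, norm.sf(n_sigma))   # 3 sigma ~ 1.3e-3, 5 sigma ~ 2.9e-7
    ```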
     
  18. Sep 24, 2012 #17
    I believe mfb is referring to the F test for lack of fit, where the null hypothesis actually is that your model is correct, and a low p-value means you reject the model.

    http://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares

    This is a valid use of p-values, despite being somewhat reversed from how they are usually used.
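
    A minimal sketch of that test (Python/SciPy; made-up straight-line data with replicate measurements at each x, which the test requires):

    ```python
    # Lack-of-fit F test sketch: compare the lack-of-fit mean square (group
    # means vs. model predictions) to the pure-error mean square (replicate
    # scatter). A small p-value rejects the model. All data are made up.
    import numpy as np
    from scipy import stats

    x = np.repeat([1.0, 2.0, 3.0, 4.0], 3)        # 4 x-values, 3 replicates each
    rng = np.random.default_rng(2)
    y = 2.0 + 1.5 * x + rng.normal(0.0, 0.3, x.size)

    slope, intercept = np.polyfit(x, y, 1)        # straight-line model
    xu = np.unique(x)
    groups = [y[x == xv] for xv in xu]

    ss_pe = sum(((g - g.mean()) ** 2).sum() for g in groups)       # pure error
    df_pe = sum(len(g) - 1 for g in groups)

    ss_lof = sum(len(g) * (g.mean() - (intercept + slope * xv)) ** 2
                 for g, xv in zip(groups, xu))                     # lack of fit
    df_lof = len(groups) - 2                      # n groups - n parameters

    F = (ss_lof / df_lof) / (ss_pe / df_pe)
    print(F, stats.f.sf(F, df_lof, df_pe))        # small p => reject the model
    ```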
     
  19. Sep 25, 2012 #18

    mfb (Staff: Mentor)

    Maybe it is clearer with an example. Consider the Higgs boson search (the ATLAS experiment as an example; CMS has similar plots), and especially the decay channel Higgs → 2 photons (figure 4 in the pdf). The y-axis is the number of events (a) / the number of weighted events (b); the x-axis is a measured parameter (the mass).

    We have two hypotheses: "no Higgs, background only" (dashed line) and "Higgs + background" (solid line).
    How probable is the data under the background-only hypothesis? Well, we have large deviations close to 126 GeV; the p-value is very small (especially if you care only about the region around 126 GeV).
    How probable is the data under the Higgs+background hypothesis? It fits well; the p-value is something reasonable.

    Figures 7 and 8 give the local p-values for the background-only hypothesis; the dips in the logarithmic plots correspond to very small p-values.
    The background-only hypothesis is rejected, as its p-value is too small. As a result, the discovery of a new boson was announced.

    Your turn.
     