Malamala
Hello! I just discovered (maybe a bit late) that most fitting programs (Python's lmfit or scipy, for example) have a parameter, on by default, that scales the covariance matrix when calculating the errors (usually called scale_covar or something similar). After some reading I figured out (hopefully correctly) that turning it on (scale_covar=True) essentially means rescaling the errors on the data until the reduced chi-square equals 1, and reporting the parameter errors using these adjusted values. I have noticed that with it on, scaling all the y errors by the same factor leaves the errors on the fitted parameters unchanged. With it off (scale_covar=False), scaling the y errors changes the parameter errors as well.
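To make this concrete, here is a minimal sketch of the behavior I mean, using scipy's curve_fit, where (as far as I understand) absolute_sigma=False plays the same role as lmfit's scale_covar=True; the data here are made up just for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

# Synthetic straight-line data with uniform y errors
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)
y = 25 * x + 3 + rng.normal(0, 1.0, x.size)
yerr = np.full_like(x, 1.0)

def slope_err(sigma, absolute):
    _, cov = curve_fit(line, x, y, sigma=sigma, absolute_sigma=absolute)
    return np.sqrt(cov[0, 0])

# absolute_sigma=False (analogue of scale_covar=True): doubling all y
# errors leaves the slope error unchanged, because the covariance is
# rescaled by the reduced chi-square.
e_plain = slope_err(yerr, absolute=False)
e_scaled = slope_err(2 * yerr, absolute=False)

# absolute_sigma=True (analogue of scale_covar=False): the slope error
# scales directly with the stated y errors.
e_abs_plain = slope_err(yerr, absolute=True)
e_abs_scaled = slope_err(2 * yerr, absolute=True)
```

With the first pair, e_plain and e_scaled come out equal; with the second, e_abs_scaled is exactly twice e_abs_plain.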
In my case I need to do a linear fit to some data. With scale_covar=True (the default) I get something around (ignoring decimals) ##25 \pm 1##; with scale_covar=False I get half the error, ##25 \pm 0.5##. I am quite confident about the errors on my points, and the fit looks good. Which value should I report? And in general, for any fit, when should this parameter be set to True and when to False?
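If it helps, I believe the factor between the two reported errors is just the square root of the reduced chi-square of the fit (so a factor of 2 in my case would imply a reduced chi-square of about 4). A small sketch checking this relation on made-up data, again using curve_fit's absolute_sigma as a stand-in for scale_covar:

```python
import numpy as np
from scipy.optimize import curve_fit

def line(x, a, b):
    return a * x + b

# Synthetic data with deliberately overestimated y errors,
# so the reduced chi-square comes out well below 1.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
y = 25 * x + 3 + rng.normal(0, 1.0, x.size)
yerr = np.full_like(x, 2.0)

popt, cov_abs = curve_fit(line, x, y, sigma=yerr, absolute_sigma=True)
_, cov_scaled = curve_fit(line, x, y, sigma=yerr, absolute_sigma=False)

# Reduced chi-square with N - 2 degrees of freedom (two fit parameters)
resid = (y - line(x, *popt)) / yerr
redchi = np.sum(resid**2) / (x.size - 2)

# Ratio of the scaled to the unscaled slope error
ratio = np.sqrt(cov_scaled[0, 0] / cov_abs[0, 0])
```

Here ratio matches np.sqrt(redchi), i.e. the scaled error equals the unscaled error times the square root of the reduced chi-square.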
Lastly, I don't really remember reading any experimental physics paper that states which of these two methods was used to get the errors from a fit; they just quote the errors on the parameters. Is there a generally agreed-upon way of doing this, such that everyone sets that parameter (in their fitting program) to True or to False? And if so, what is the convention?
In other words: if I were to publish my data in a journal (say PRL) without discussing this covariance-scaling business (as no one seems to), should I report 0.5 or 1 as the error in my linear fit?
Thank you!