Uncertainty on best fit gradient

AI Thread Summary
The discussion centers on the limitations of a software tool that calculates the best fit gradient without considering the error bars on data points. Participants express concern that this oversight could lead to misleading standard error estimates, especially if the data points are inaccurately measured. The conversation highlights the importance of incorporating error bars into gradient uncertainty calculations for more accurate statistical results. References to additional resources, including lecture notes and articles on regression, are shared for further exploration of the topic. Overall, the need for improved statistical methods that account for measurement uncertainty is emphasized.
kop442000
Hi everyone,

I have a plot of some data points that have error bars on the y axis.

A bit of software I am using gives me the best fit gradient and a "Standard Error", but it doesn't take the size of error bars into consideration. I'm assuming that it just looks at how well the gradient fits the data points to give the standard error on the gradient.

But doesn't one need to take the size of the error bars on the data points into consideration?

Thank you.
 
It does help to know how your software arrives at its conclusions.

Perhaps the program just uses the statistics of the data points themselves to estimate the uncertainty. You can imagine that if you had inaccurate measurements which just happened to fall close to a line, the program would report a very small error. You can test this by giving it made-up data.
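To illustrate the made-up-data test with a sketch (not the OP's actual code; assumes SciPy is available): an unweighted least-squares fit reports a standard error based only on the scatter of the points about the line, so even enormous error bars change nothing.

```python
# Points that lie almost exactly on y = 2x + 1, but with huge
# stated error bars. linregress never sees sigma_y, so its
# reported stderr is tiny regardless.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.01, -0.02, 0.015, -0.01, 0.005])
sigma_y = np.full_like(y, 5.0)  # error bars of +/- 5, completely ignored

fit = stats.linregress(x, y)
print(fit.slope, fit.stderr)  # stderr comes out far smaller than sigma_y suggests
```

This is exactly the failure mode described above: the fit looks spuriously precise because the scatter happens to be small.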

http://www.learningmeasure.com/articles/mathematics/LeastSquaresLineFits.shtml

Example including error-bars, using mathematica:
http://mathematica.stackexchange.co...egression-given-data-with-associated-uncertai
 
Thanks Simon.

Yes, it seems that the software (within Python) just does a least-squares fit, compares the data points to the best-fit line, and lets you know how well they fit. I wanted something that took the error bars on the data into account too - for the reason that you mentioned above.
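One common way to do this in Python (a sketch, assuming SciPy is available and the errors are on y only) is to pass the error bars as `sigma` to `scipy.optimize.curve_fit`, with `absolute_sigma=True` so the returned covariance reflects the stated measurement uncertainties rather than being rescaled by the residuals:

```python
# Weighted straight-line fit: the error bars enter the fit as sigma,
# and pcov then gives the gradient uncertainty in absolute terms.
import numpy as np
from scipy.optimize import curve_fit

def line(x, m, c):
    return m * x + c

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
sigma_y = np.array([0.1, 0.1, 0.2, 0.1, 0.3])  # y error bars (made up)

popt, pcov = curve_fit(line, x, y, sigma=sigma_y, absolute_sigma=True)
m, c = popt
m_err = np.sqrt(pcov[0, 0])  # 1-sigma uncertainty on the gradient
print(f"gradient = {m:.3f} +/- {m_err:.3f}")
```

With `absolute_sigma=False` (the default) the error bars only set relative weights, and the covariance is rescaled so the fit has reduced chi-square of one.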

Statistically speaking, a gradient uncertainty that took into account the error bars on the data points would be better right?

I'll check out both the links - thank you!
 
Statistically speaking, a gradient uncertainty that took into account the error bars on the data points would be better right?
I'm not actually sure ... the assumption behind least squares is that the data come from a normal distribution, so the uncertainty estimate will be based on a distribution of means.

Sze Meng Tan's Inverse Problems lecture notes give an in-depth treatment of regression in one of the chapters... the notes are available here:
http://home.comcast.net/~szemengtan/
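For the specific case of a straight line with known Gaussian y-errors, the standard chi-square minimization has a closed form, and notably the gradient uncertainty depends only on the x-values and the error bars, not on how well the points happen to fit. A sketch of the textbook formulas:

```python
# Closed-form weighted least-squares line fit (chi-square minimization
# with known Gaussian y-errors). Writing w_i = 1/sigma_i^2:
#   Delta = S*Sxx - Sx^2,  m = (S*Sxy - Sx*Sy)/Delta,  var(m) = S/Delta
import numpy as np

def weighted_line_fit(x, y, sig):
    w = 1.0 / sig**2
    S, Sx, Sy = w.sum(), (w * x).sum(), (w * y).sum()
    Sxx, Sxy = (w * x * x).sum(), (w * x * y).sum()
    delta = S * Sxx - Sx**2
    m = (S * Sxy - Sx * Sy) / delta          # gradient
    c = (Sxx * Sy - Sx * Sxy) / delta        # intercept
    m_err = np.sqrt(S / delta)               # 1-sigma gradient uncertainty
    return m, c, m_err

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])
sig = np.full(5, 0.1)                        # made-up error bars
m, c, m_err = weighted_line_fit(x, y, sig)
```

Because `m_err` here comes purely from the error bars, it answers the original question directly: doubling every error bar doubles the quoted gradient uncertainty, whatever the scatter.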
 
Thank you for your help!
 