- **Set Theory, Logic, Probability, Statistics**
(*http://www.physicsforums.com/forumdisplay.php?f=78*)

- - **Measurement error analyses, fitting min/max slopes to data with error bars.**
(*http://www.physicsforums.com/showthread.php?t=221087*)

I have a measurement dataset of [tex](x_i, y_i)[/tex] pairs with an error [tex]\Delta x_i[/tex] and [tex]\Delta y_i[/tex] for each value, so that I can plot each datapoint with a vertical as well as a horizontal error bar. I want to fit a linear regression line [tex]y = a_1 x + a_0[/tex] to the data, along with error lines.
But how do I take into account, while fitting the regression line, that each datapoint has its own error in the x- and y-directions? And how do I compute the error lines, so that I get minimum and maximum values of [tex]a_1[/tex] and [tex]a_0[/tex]? I could use the standard deviation, but that does not take the errors [tex]\Delta x_i[/tex] and [tex]\Delta y_i[/tex] into account. An attached picture illustrates the problem (not reproduced here). I'm interested only in mathematical ways to do this; I already know how to do it by hand. Any Matlab example in particular would be greatly appreciated. |
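As a baseline for comparison, the "standard deviation" approach mentioned in the question, which ignores the error bars entirely, can be sketched as follows. This is Python/NumPy rather than Matlab, the data are made up, and reading "min/max slope" as the fitted slope plus or minus one standard error is one common convention, not the only one:

```python
import numpy as np

# Hypothetical example data; the (x_i, y_i) values are made up.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Ordinary least-squares fit y = a1*x + a0, ignoring the error bars.
A = np.column_stack([np.ones_like(x), x])        # design matrix [1, x_i]
coef = np.linalg.lstsq(A, y, rcond=None)[0]
a0, a1 = coef

# Parameter errors from the residual scatter alone:
# cov = s^2 * (A^T A)^{-1}, with s^2 the residual variance.
n, p = A.shape
s2 = np.sum((y - A @ coef) ** 2) / (n - p)
cov = s2 * np.linalg.inv(A.T @ A)
se_a0, se_a1 = np.sqrt(np.diag(cov))

# One common reading of "min/max slope": a1 minus/plus one standard error.
a1_min, a1_max = a1 - se_a1, a1 + se_a1
```

As the question notes, this tells you only how well the points lie on a line; the stated [tex]\Delta x_i[/tex], [tex]\Delta y_i[/tex] never enter.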

So you want to use the individual error of each measurement, as well as their "standard deviation" (a measure of how well they all lie on a straight line), to compute the error of your fitting parameters?
How did you do it by hand? Maybe it is just a matter of converting that into Matlab code (or whatever). |

I have tried to figure this out by myself, and I have managed to get the error for a regression line fitted to datapoints that each have their own error, but this is still not quite what I want.

So, again: I have a measurement dataset of [tex](x_i, y_i),\ i = 1 \ldots k[/tex] pairs with errors [tex]\Delta x_i[/tex] and [tex]\Delta y_i[/tex] for each value, so that I can plot each datapoint with a vertical as well as a horizontal error bar. I'm going to use a weighted fit to solve the regression line for the data. I can use the weights [tex]w_i = 1/(\Delta y_i)^2[/tex] to find the solution vector [tex]S[/tex] that minimizes [tex](B - AS)^T \mathrm{diag}(W)(B - AS)[/tex], where [tex]A[/tex] is a [tex]k \times 2[/tex] matrix whose columns are ones and the [tex]x_i[/tex] values, [tex]B[/tex] is the vector of [tex]y_i[/tex] values, and [tex]W[/tex] is the vector of [tex]w_i[/tex] values. This is basically what Matlab's function lscov does.

But before I can use the weights, I need to remember that I have also defined errors for the [tex]x_i[/tex] values, so I cannot directly use the weights [tex]w_i = 1/(\Delta y_i)^2[/tex]. I get around this by first solving the "regular", unweighted regression line with slope [tex]p[/tex]. Because we are fitting a straight line to the data, we can estimate how much a given error [tex]\Delta x_i[/tex] contributes in the y-direction by multiplying it by the slope, which gives the total error of the i:th point, [tex]\sigma_i = \sqrt{\Delta y_i^2 + (p \, \Delta x_i)^2}[/tex], and the weights [tex]w_i = 1/\sigma_i^2[/tex]. The obvious flaw here is that we have to presume that the slope of the weighted regression line is roughly the same as that of the unweighted one. Is there any way around that?

And now the error estimates. For the previous fit we can get an estimated standard error; Matlab's lscov documentation gives the weighted solution as Code:
` X = inv(A'*inv(V)*A)*A'*inv(V)*B`
Now comes the major problem. If I have a regression fit whose residuals are relatively small compared to the stated errors, suggesting a small random error and a large systematic error, the error estimates will actually come out smaller than they really are. How am I going to find the true error of my fit? Quote:
"How did you do it by hand, maybe it is just a matter of converting it into Matlab code (or whatever)?"
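The effective-variance scheme described above can be sketched as follows, in Python/NumPy rather than Matlab (a translation, since the thread asks for Matlab; data and names are made up). The iteration is a suggestion of mine, not from the post: refitting and recomputing the effective errors until the slope converges removes the assumption that the weighted slope matches the unweighted one.

```python
import numpy as np

# Illustrative data with hypothetical error bars dx, dy for every point.
x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y  = np.array([2.0, 4.1, 5.9, 8.2, 9.9])
dx = np.array([0.1, 0.2, 0.1, 0.3, 0.2])
dy = np.array([0.2, 0.2, 0.3, 0.2, 0.4])

A = np.column_stack([np.ones_like(x), x])    # k x 2 matrix of ones and x_i

# Step 1: unweighted fit gives the starting slope p.
a0, p = np.linalg.lstsq(A, y, rcond=None)[0]

# Step 2: effective variance sigma_i^2 = dy_i^2 + (p*dx_i)^2, then a
# weighted fit; iterate until the slope stops changing.
for _ in range(50):
    w = 1.0 / (dy**2 + (p * dx)**2)          # weights w_i = 1/sigma_i^2
    Aw = A * w[:, None]                      # diag(W) @ A
    # Weighted normal equations: (A^T W A) S = A^T W B
    a0_new, p_new = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    converged = abs(p_new - p) < 1e-12
    a0, p = a0_new, p_new
    if converged:
        break
```

For well-behaved data the loop typically settles after a few passes, since the weights depend only weakly on the slope.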
Well actually, by "by hand" I meant that I drew the lines with a ruler on real paper, so there is no way I could convert that into Matlab code. ;) |
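On the "major problem" raised above (fitted errors coming out smaller than the stated error bars imply): as I understand lscov's default behaviour, it scales the parameter covariance [tex]\mathrm{inv}(A^T W A)[/tex] by the reduced chi-square of the fit, which is exactly what shrinks the errors when the residuals are small compared to the error bars. A conservative option sometimes used is to compute both the scaled and the unscaled estimate and report the larger. A Python/NumPy sketch with made-up data:

```python
import numpy as np

# Hypothetical data with per-point standard errors sigma (y-errors only here);
# sigma is chosen deliberately larger than the actual scatter.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 5.9, 8.2, 9.9])
sigma = np.array([0.5, 0.5, 0.5, 0.5, 0.5])

A = np.column_stack([np.ones_like(x), x])
w = 1.0 / sigma**2
Aw = A * w[:, None]
N = A.T @ Aw                          # normal matrix A^T W A
S = np.linalg.solve(N, Aw.T @ y)      # fitted [a0, a1]

resid = y - A @ S
chi2_red = (w * resid**2).sum() / (len(x) - 2)   # reduced chi-square

cov_raw    = np.linalg.inv(N)         # errors implied by the stated sigmas alone
cov_scaled = cov_raw * chi2_red       # rescaled by the actual scatter
se_raw    = np.sqrt(np.diag(cov_raw))
se_scaled = np.sqrt(np.diag(cov_scaled))
```

Here the residual scatter is smaller than the stated sigmas, so `chi2_red < 1` and the scaled errors shrink below `se_raw`; keeping `se_raw` in that case preserves the systematic-error information carried by the error bars.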



© 2014 Physics Forums