Uncertainty of linear and non-linear least-squares fit

In summary, the conversation discussed the process of finding the linear fit between displacement and voltage reading, using the equation V = m*X + b. The standard errors of parameters m and b, as well as the R-squared value, can be found using regression analysis and the definition of R-squared. For the non-linear fit, the cubic equation X = a3 * (V ^3) + a2 * (V ^2) + a1 * (V) + a0 was used and the uncertainty of the parameters a3, a2, a1, and a0 was discussed. The same formula used for the linear case can be applied to find the R-squared value for the non-linear fit.
  • #1
Shaddyab
1) Linear fit

I am running an experiment to find the linear relation between displacement (X) and voltage reading (V) of the measurement, when fitting a linear line to my measured data, such that:

V= m*X + b;

Based on the formulas given at http://en.wikipedia.org/wiki/Regression_analysis (under "Linear regression"):

s^2 = SSE / (n - 2)

Sm = sqrt( s^2 / sum( (Xi - Xbar)^2 ) )

Sb = sqrt( s^2 * ( 1/n + Xbar^2 / sum( (Xi - Xbar)^2 ) ) )

and at http://en.wikipedia.org/wiki/R-squared (under "Definitions"):

R^2 = 1 - SSres / SStot



I can find the standard errors of the parameters 'm' and 'b' (Sm and Sb, respectively), as well as R-squared.

So the uncertainty of my fitted parameters is:

m +/- Sm

and

b +/- Sb
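A minimal NumPy sketch of these standard-error and R-squared computations (the data below are invented for illustration; in practice X and V come from the experiment):

```python
import numpy as np

# Hypothetical calibration data: displacement X and voltage V.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
V = np.array([0.11, 1.02, 2.05, 2.94, 4.01, 5.03])

n = len(X)
Xbar, Vbar = X.mean(), V.mean()
Sxx = np.sum((X - Xbar) ** 2)

# Least-squares slope and intercept for V = m*X + b.
m = np.sum((X - Xbar) * (V - Vbar)) / Sxx
b = Vbar - m * Xbar

# Residual variance s^2 = SSE / (n - 2).
resid = V - (m * X + b)
SSE = np.sum(resid ** 2)
s2 = SSE / (n - 2)

# Standard errors of the fitted parameters.
Sm = np.sqrt(s2 / Sxx)
Sb = np.sqrt(s2 * (1.0 / n + Xbar ** 2 / Sxx))

# Coefficient of determination.
SStot = np.sum((V - Vbar) ** 2)
R2 = 1.0 - SSE / SStot
```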

But I would like to use the fit in the following way :

x = (V - b)/m

How can I relate the fit error to this form of the equation?
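One common route is first-order propagation of uncertainty, sketched below with invented numbers. A caveat stated up front: this treats m and b as independent, but parameters from the same fit are correlated, so a full treatment would also include the Cov(m, b) term.

```python
import numpy as np

# First-order propagation of uncertainty for x = (V - b)/m.
# Simplification: m and b are treated as independent; the full
# answer also needs the covariance term Cov(m, b).
# All numbers below are invented illustration values.
m, Sm = 1.02, 0.01   # slope and its standard error
b, Sb = 0.05, 0.02   # intercept and its standard error
V = 3.00             # one measured voltage

x = (V - b) / m

# Partial derivatives of x with respect to b and m.
dx_db = -1.0 / m
dx_dm = -(V - b) / m ** 2

# Combine the contributions in quadrature.
Sx = np.sqrt((dx_db * Sb) ** 2 + (dx_dm * Sm) ** 2)
```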


2) Non-Linear fit

In the second experiment I have the following cubic fit:

X = a3*(V^3) + a2*(V^2) + a1*V + a0

How can I find the uncertainties of a3, a2, a1, and a0? I could not find any reference for doing so.

Can I use the same formula I used in the linear case to find the R-squared here for the non-linear fit?
 
  • #2
1) Why don't you estimate x = a0 + a1 V, where a0 = -b/m and a1 = 1/m?
2) Using matrix notation, X = UA, where U has rows [V^3 V^2 V 1] and A = [a3 a2 a1 a0]'. Define Z = ((U'U)^-1)U'. The least-squares estimator is A* = ZX, a 4x1 vector. Var(A*) = Z Var(X) Z' = (U'U)^-1 Var(X), which is a 4x4 matrix. The diagonal terms are variances; the off-diagonal terms are covariances.
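The estimator above can be sketched in a few lines of NumPy (the thread itself uses Matlab, but the algebra is identical; the data below are invented):

```python
import numpy as np

# Invented "measured" data: a cubic plus a small perturbation.
V = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
X = 0.2 * V**3 - 0.1 * V**2 + 1.5 * V + 0.3 + 0.01 * np.sin(7 * V)

# Design matrix U with rows [V^3 V^2 V 1].
U = np.column_stack([V**3, V**2, V, np.ones_like(V)])

# Z = (U'U)^-1 U'  and the least-squares estimator A* = ZX.
Z = np.linalg.inv(U.T @ U) @ U.T
A = Z @ X                         # A = [a3, a2, a1, a0]

# polyfit returns coefficients in the same highest-degree-first order.
A_polyfit = np.polyfit(V, X, 3)
```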
 
  • #3
1)

I disagree with that:
if I fit a line as

V = m1*X + b1

and

X = m2*V + b2

then

m2 is NOT equal to 1/m1
and b2 is NOT equal to -b1/m1.

You can run a simple Matlab script and see this.

2)
Can you please re-write the equation for Var(A*)? I was not able to understand it.
Is it
Var(A*) = Z Var(X) Z' ?
What does MSE mean?
Is (Z'Z)^-1 part of the equation?

Thank you
 
  • #4
You are correct; m2 is not equal to 1/m1 and b2 is not equal to -b1/m1. [However, the standardized coefficients m1/s.e.(m1) and m2/s.e.(m2) are equal, where "s.e." is the standard error of the coefficient. This topic came up before in this forum; see https://www.physicsforums.com/showthread.php?t=316155.] All of this is tangential to your main question, which is, "how do you incorporate measurement error of X into the overall error of the regression?" Sorry to send you in the wrong direction there.

Var(A*) = Var(ZX) = Z Var(X) Z' is the matrix equivalent of the formula Var(aX) = Var(X)a^2. When you make the substitution Z = ((U'U)^-1)U', Var(A*) simplifies to (U'U)^-1 Var(X). Note that as long as X is the dependent variable, it can be separated into a nonrandom part (UA) and a random part (the error, or the residual); the complete model is X = UA + error. Since UA is nonrandom, Var(X) = Var(error) = MSE.
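Putting this post together in code, assuming Var(X) = MSE (independent errors with a common variance, estimated as SSE divided by the degrees of freedom), a NumPy sketch with invented data:

```python
import numpy as np

# Invented data: a cubic plus a small non-cubic perturbation,
# so the residuals (and hence MSE) are nonzero.
V = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
X = 0.2 * V**3 - 0.1 * V**2 + 1.5 * V + 0.3 + 0.02 * np.cos(5 * V)

# Model X = UA + error, with p = 4 fitted parameters.
U = np.column_stack([V**3, V**2, V, np.ones_like(V)])
A = np.linalg.solve(U.T @ U, U.T @ X)   # least-squares coefficients

resid = X - U @ A
n, p = U.shape
MSE = resid @ resid / (n - p)           # Var(X) estimate

# Var(A*) = MSE * (U'U)^-1: a 4x4 covariance matrix.
cov_A = MSE * np.linalg.inv(U.T @ U)
se_A = np.sqrt(np.diag(cov_A))          # standard errors of a3..a0
```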
 
  • #5
Once again I would like to thank you for your valuable help.

I am using what you mentioned in your previous posts for a very simple linear case, X = a1*V + a0, where

X = UA,
U has rows [V 1], and A = [a1 a0]'.

Defining Z = ((U'U)^-1)U' and solving the least-squares estimator A* = ZX, I am comparing my results with what I get from 'polyfit' in Matlab, and as expected I get the same result.

The problem is that when I solve the following equation to find the uncertainties of a0 and a1,

Var(A*) = Z Var(X) Z' = (U'U)^-1 Var(X),

the numbers are NOT the same as what I get from the three equations I mentioned in my first post.

In addition, can I use the same formula I used in the linear case (X = a1*V + a0) to find the R-squared for the cubic fit (X = a3*(V^3) + a2*(V^2) + a1*V + a0)?


Thank you
 
  • #6
Have you tried defining Var(X) = SSE/(N-2)? Do you get a different result with the matrix formula then?

What you are referring to as the nonlinear fit is in fact linear, so yes, the same formula applies. The term "nonlinear" is reserved for models that are nonlinear in the parameters, for example X = ab + V^b.
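Since the cubic model is linear in its parameters, R^2 = 1 - SSres/SStot goes through unchanged; a quick NumPy illustration with invented data:

```python
import numpy as np

# Invented data: a cubic plus a small perturbation.
V = np.linspace(0.5, 4.0, 12)
X = 0.2 * V**3 - 0.1 * V**2 + 1.5 * V + 0.3 + 0.02 * np.sin(3 * V)

# Fit the cubic and evaluate it at the data points.
coeffs = np.polyfit(V, X, 3)      # [a3, a2, a1, a0]
X_hat = np.polyval(coeffs, V)

# Same R-squared formula as in the straight-line case.
SSres = np.sum((X - X_hat) ** 2)
SStot = np.sum((X - X.mean()) ** 2)
R2 = 1.0 - SSres / SStot
```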
 
  • #7
As I already mentioned, I am running a very simple case of a straight line (X = a1*V + a0) because I already know how to find the uncertainties of a0 and a1 for this case (based on the three equations in my first post, where Var(X) = SSE/(N-2)).

Meanwhile,

Var(A*) = Z Var(X) Z' = (U'U)^-1 Var(X)

is giving me a different result for the uncertainties of a0 and a1.

I can post the Matlab code if that would be of any help.
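For reference, a NumPy sketch (with invented data) suggesting the two routes do agree when Var(X) is taken as MSE = SSE/(n - 2) and the covariance is formed as MSE*(U'U)^-1, so a discrepancy usually points to how Var(X) was plugged in:

```python
import numpy as np

# Invented data: a near-straight line with a small perturbation.
V = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
X = 0.98 * V + 0.12 + 0.03 * np.sin(4 * V)

n = len(V)
U = np.column_stack([V, np.ones_like(V)])       # rows [V 1]
A = np.linalg.solve(U.T @ U, U.T @ X)           # [a1, a0]

# Matrix route: standard errors from sqrt(diag(MSE * (U'U)^-1)).
resid = X - U @ A
MSE = resid @ resid / (n - 2)
se_matrix = np.sqrt(np.diag(MSE * np.linalg.inv(U.T @ U)))

# Textbook route: the two standard-error formulas for slope/intercept.
Vbar = V.mean()
Svv = np.sum((V - Vbar) ** 2)
Sm = np.sqrt(MSE / Svv)
Sb = np.sqrt(MSE * (1.0 / n + Vbar ** 2 / Svv))
```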
 

1. What is the purpose of a linear and non-linear least-squares fit?

The purpose of a linear and non-linear least-squares fit is to find the best fitting line or curve for a set of data points. This is done by minimizing the sum of the squared differences between the data points and the line or curve.

2. How is the uncertainty of a linear and non-linear least-squares fit calculated?

The uncertainty of a linear and non-linear least-squares fit is calculated using the standard error of the regression, which takes into account the variability of the data points around the fitted line or curve.

3. What factors can affect the uncertainty of a linear and non-linear least-squares fit?

The uncertainty of a linear and non-linear least-squares fit can be affected by the number of data points, the variability of the data, and the quality of the fit. In general, the more data points and the better the fit, the lower the uncertainty will be.

4. How is the uncertainty of a linear and non-linear least-squares fit represented?

The uncertainty of a linear and non-linear least-squares fit is typically represented by the standard error or standard deviation of the fitted parameters, such as the slope and intercept for a linear fit or the coefficients for a non-linear fit.

5. Can the uncertainty of a linear and non-linear least-squares fit be reduced?

Yes, the uncertainty of a linear and non-linear least-squares fit can be reduced by increasing the number of data points, improving the fit through adjustments to the model or data, and using more advanced techniques such as weighted least-squares or nonlinear regression.
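For example, a weighted fit in NumPy, where points with smaller assumed uncertainty receive larger weight (the data and per-point uncertainties below are invented):

```python
import numpy as np

# Invented calibration data and assumed per-point uncertainties.
V = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
X = np.array([0.05, 1.10, 1.95, 3.02, 4.10, 4.90])
sigma = np.array([0.02, 0.02, 0.02, 0.10, 0.10, 0.10])

# np.polyfit's w applies to the unsquared residuals,
# so w = 1/sigma is the usual weighting choice.
coeffs_w = np.polyfit(V, X, 1, w=1.0 / sigma)   # weighted fit
coeffs_u = np.polyfit(V, X, 1)                  # unweighted, for comparison
```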
