Uncertainty in parameters -> Gauss-Newton

AI Thread Summary
The discussion focuses on using the Gauss-Newton method for nonlinear least squares regression to fit a Lorentzian model to a set of data points. Participants explore how to estimate errors in function parameters based on the errors in the data points, suggesting methods like bootstrapping and Monte Carlo simulations. They discuss generating fictitious residuals from an approximate distribution and the importance of mixing computed errors to create new datasets for parameter estimation. The conversation emphasizes that the Gauss-Newton method's linearization in each iteration can yield reliable results when combined with weighted regression and the covariance matrix. Overall, the thread highlights effective strategies for managing uncertainty in parameter estimation.
Or Entity? said:

Hi guys!

I have a set of data points, and I'm about to use Gauss-Newton to fit a model function (a Lorentzian) to these points. So we're talking about a nonlinear least-squares regression.

How do I estimate the errors in the function parameters, given the errors in the data points?

Thanks.
 
EnumaElish said:
Simulation using pseudo-random number generation?
 


EnumaElish said:
Simulation using pseudo-random number generation?


Nothing I'm familiar with. Could you elaborate?

Isn't there a method equivalent to using the covariance matrix in weighted linear least squares?
 


Alright, I have read it through. Thanks.
One thing though: where do the errors from my original data come in?
 


I will write vectors in bold, so for example y = {y(1), ..., y(i), ..., y(n)}.

In a proper bootstrap, you are just "mixing and matching" the errors you've already computed: y*(i) = y(i) + u(j), where j is "almost surely" different from i, u(.) are the computed errors (the residuals), and y* is a perturbed copy of y. This "remix" algorithm is repeated many times (say 100 times), so you have 100 different estimates of your model parameters coming from the resampled vectors y1*, ..., y100*.
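
Here is a minimal sketch of that "remix" idea in Python, assuming a three-parameter Lorentzian model, synthetic stand-in data, and scipy's curve_fit as the nonlinear least-squares solver (all names and numbers are illustrative). It follows the textbook variant of the residual bootstrap, which adds the resampled errors to the fitted values rather than to the raw data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, gamma):
    # Three-parameter Lorentzian peak: amplitude a, center x0, half-width gamma.
    return a * gamma**2 / ((x - x0)**2 + gamma**2)

rng = np.random.default_rng(0)

# Synthetic data standing in for the original measurements.
x = np.linspace(-5.0, 5.0, 200)
y = lorentzian(x, 1.0, 0.3, 0.8) + rng.normal(0.0, 0.02, x.size)

# Fit once to get the point estimate and the computed errors u(i).
p_hat, _ = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 1.0])
u = y - lorentzian(x, *p_hat)

# "Mix and match": resample the computed errors with replacement and refit.
n_boot = 100
boot = np.empty((n_boot, p_hat.size))
for b in range(n_boot):
    y_star = lorentzian(x, *p_hat) + rng.choice(u, size=x.size, replace=True)
    boot[b], _ = curve_fit(lorentzian, x, y_star, p0=p_hat)

# The spread of the 100 re-estimates is the parameter uncertainty.
print("bootstrap std. errors:", boot.std(axis=0, ddof=1))
```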

In the approximate bootstrap (Monte Carlo?), you use the computed errors to derive an approximate "population distribution." Suppose the errors "look like" they are normally distributed with mean 0 and standard deviation s, i.e. u*(i) ~ N(0, s²). Then you can use a pseudo-random generator to produce repeated draws from N(0, s²) and define y**(i) = y(i) + u*(i). Again, if you run this 100 times, you will have 100 parameter estimates from y1**, ..., y100**.
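
And a sketch of this approximate (Monte Carlo) version under the same illustrative assumptions; the only change from the previous sketch is that the fictitious errors u*(i) are drawn from a normal distribution fitted to the computed errors instead of being reshuffled:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, gamma):
    return a * gamma**2 / ((x - x0)**2 + gamma**2)

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 200)
y = lorentzian(x, 1.0, 0.3, 0.8) + rng.normal(0.0, 0.02, x.size)

p_hat, _ = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 1.0])
u = y - lorentzian(x, *p_hat)
s = u.std(ddof=p_hat.size)        # the errors "look like" N(0, s^2)

n_boot = 100
mc = np.empty((n_boot, p_hat.size))
for b in range(n_boot):
    u_star = rng.normal(0.0, s, x.size)              # pseudo-random draws from N(0, s^2)
    y_star2 = lorentzian(x, *p_hat) + u_star         # the y**(i) of the post
    mc[b], _ = curve_fit(lorentzian, x, y_star2, p0=p_hat)

print("Monte Carlo std. errors:", mc.std(axis=0, ddof=1))
```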
 


Amazing... I was experimenting with a method just like that one when I saw your reply.
It's an interesting approach that can be applied to any linear or nonlinear model.

By the way, since Gauss-Newton linearizes the problem in each iteration, I should get pretty decent results by doing a weighted regression and taking the covariance matrix from the last step, like in linear least squares.
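
A minimal sketch of that last point, under the same illustrative assumptions as the sketches above: at the converged solution, the weighted linearization gives cov ≈ (JᵀWJ)⁻¹ with W = diag(1/σᵢ²) built from the known errors in the data points, which is what scipy's curve_fit returns when those errors are passed in with absolute_sigma=True. The hand-rolled finite-difference Jacobian is only for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, a, x0, gamma):
    return a * gamma**2 / ((x - x0)**2 + gamma**2)

rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 200)
sigma = np.full(x.size, 0.02)                 # known errors in the data points
y = lorentzian(x, 1.0, 0.3, 0.8) + rng.normal(0.0, sigma)

# Weighted fit; absolute_sigma=True keeps the covariance in the units
# implied by sigma instead of rescaling it by the residuals.
p_hat, pcov = curve_fit(lorentzian, x, y, p0=[1.0, 0.0, 1.0],
                        sigma=sigma, absolute_sigma=True)

# The same covariance built by hand from the last linearization:
# cov = (J^T W J)^{-1}, W = diag(1/sigma_i^2).
def jacobian(p, eps=1e-7):
    # Forward-difference Jacobian of the model with respect to the parameters.
    J = np.empty((x.size, p.size))
    for k in range(p.size):
        dp = p.copy()
        dp[k] += eps
        J[:, k] = (lorentzian(x, *dp) - lorentzian(x, *p)) / eps
    return J

J = jacobian(p_hat)
# J.T @ (J / sigma^2) equals J^T W J without forming the n-by-n matrix W.
cov = np.linalg.inv(J.T @ (J / sigma[:, None]**2))

print("std. errors (curve_fit):   ", np.sqrt(np.diag(pcov)))
print("std. errors ((J^T W J)^-1):", np.sqrt(np.diag(cov)))
```

The two printed lines should agree closely; the square roots of the diagonal of the covariance matrix are the standard errors of the fitted Lorentzian parameters.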
 
