Uncertainty in parameters -> Gauss-Newton

  • Context: Graduate
  • Thread starter: Or Entity?
  • Tags: Parameters, Uncertainty

Discussion Overview

The discussion revolves around estimating errors in function parameters when using the Gauss-Newton method for nonlinear least squares regression, specifically for fitting a Lorentzian model to a set of datapoints. Participants explore various methods for incorporating uncertainties from the data points into the parameter estimation process.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant seeks guidance on estimating errors in function parameters given errors in data points while using the Gauss-Newton method.
  • Another participant suggests using simulation with pseudo-random number generation as a potential approach.
  • A follow-up question asks for clarification on the method and its equivalence to using the covariance matrix in weighted linear least squares.
  • Participants reference resampling techniques, including bootstrapping, to generate estimates of model parameters from computed errors.
  • A detailed explanation is provided on how to implement a bootstrap method by mixing and matching computed errors to create new datasets for parameter estimation.
  • One participant notes that since Gauss-Newton linearizes the problem in each iteration, a weighted regression approach combined with the covariance matrix could yield reasonable results.

Areas of Agreement / Disagreement

Participants express interest in various methods for estimating parameter errors, but there is no consensus on a single approach. Multiple competing views and techniques are discussed without resolution.

Contextual Notes

The discussion includes assumptions about the distribution of errors and the applicability of different statistical methods, which remain unresolved. Specific mathematical steps and dependencies on definitions are not fully explored.

Who May Find This Useful

Researchers and practitioners involved in nonlinear regression analysis, particularly those using the Gauss-Newton method and interested in error estimation techniques.

Or Entity?

Hi guys!

I have a set of datapoints, and I'm about to use Gauss-Newton to fit a model function (a Lorentzian) to these points. So we're talking about a nonlinear least squares regression.

How do I estimate the errors in the function parameters, given the errors in the data points?

Thanks.
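For context, here is a minimal sketch of what such a fit looks like, assuming a three-parameter Lorentzian A·γ²/((x−x₀)² + γ²) (the exact parameterization is not given in the thread):

```python
import numpy as np

def lorentzian(x, A, x0, gamma):
    """Assumed three-parameter Lorentzian model."""
    return A * gamma**2 / ((x - x0)**2 + gamma**2)

def gauss_newton(x, y, beta, n_iter=50):
    """Gauss-Newton: at each step solve (J^T J) dbeta = J^T r
    for the update, where r are the current residuals."""
    for _ in range(n_iter):
        A, x0, g = beta
        r = y - lorentzian(x, A, x0, g)           # residuals
        d = (x - x0)**2 + g**2
        # Analytic Jacobian of the model w.r.t. (A, x0, gamma)
        J = np.column_stack([
            g**2 / d,                              # df/dA
            2 * A * g**2 * (x - x0) / d**2,        # df/dx0
            2 * A * g * (x - x0)**2 / d**2,        # df/dgamma
        ])
        dbeta, *_ = np.linalg.lstsq(J, r, rcond=None)
        beta = beta + dbeta
    return beta
```

This takes full steps; in practice one would add damping (Levenberg-Marquardt) if the iteration diverges.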
 


Simulation using pseudo-random number generation?
 


EnumaElish said:
Simulation using pseudo-random number generation?


That's nothing I'm familiar with. Could you elaborate?

Isn't there a method equivalent to using the covariance matrix in weighted linear least squares?
 


Alright, I have read it through. Thanks.
One thing, though: where do the errors from my original data come in?
 


I will write vectors in bold, so for example y = {y(1), ..., y(i), ..., y(n)}.

In a proper bootstrap, you are just "mixing and matching" the errors you've already computed: y*(i) = y(i) + u(j), where j is "almost surely" different from i, u(.) are the computed errors, and y* is a "remixed" version of y. This remix algorithm is repeated many times (say 100 times), so you have 100 different estimates of your model parameters coming from the remixed vectors y1*, ..., y100*.
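The "remix" step can be sketched in Python; the names `model` and `fit` below are placeholders for a model function and a fitting routine (e.g. the Lorentzian and Gauss-Newton from earlier in the thread), not names from the thread itself:

```python
import numpy as np

def remix_bootstrap(x, y, model, fit, n_boot=100, seed=0):
    """Nonparametric "remix" bootstrap: permute the computed errors
    onto the data, y*(i) = y(i) + u(j) with j almost surely != i,
    and refit once per replicate.  model(x, beta) and
    fit(x, y) -> beta are placeholder routines."""
    rng = np.random.default_rng(seed)
    u = y - model(x, fit(x, y))              # the computed errors
    betas = np.array([fit(x, y + rng.permutation(u))
                      for _ in range(n_boot)])
    # Spread of the replicate estimates ~ parameter uncertainty
    return betas.mean(axis=0), betas.std(axis=0, ddof=1)
```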

In the approximate bootstrap (Monte Carlo?), you use the computed errors to derive the approximate "population distribution." Suppose the errors "look like" they are distributed normally with mean = 0 and standard deviation = s, i.e. u*(i) ~ N(0, s²). Then, you can use a pseudo-random generator to produce repeated draws from N(0, s²) and define y**(i) = y(i) + u*(i). Again, if you run this 100 times, you will have 100 parameter estimates from y1**, ..., y100**.
 


Amazing... I was experimenting with a method just like that one when I saw your reply.
It's an interesting method, in that it can be applied to any linear/nonlinear model.

By the way, since Gauss-Newton linearizes the problem in each iteration, I should get pretty decent results by doing a weighted regression and taking the covariance matrix at the last step (as in linear least squares).
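That suggestion amounts to the standard linearized error estimate: at the converged iterate, treat the problem as weighted linear least squares in the Jacobian, so Cov(β) ≈ (JᵀWJ)⁻¹ with W = diag(1/σᵢ²). A minimal sketch, assuming independent per-point errors σᵢ (the thread does not spell out the formula):

```python
import numpy as np

def param_covariance(J, sigma):
    """Linearized parameter covariance at the Gauss-Newton solution:
    Cov(beta) ~= (J^T W J)^{-1}, W = diag(1 / sigma_i^2), where J is
    the model Jacobian at the final iterate and sigma are the given
    per-point measurement errors (assumed independent)."""
    W = np.diag(1.0 / np.asarray(sigma)**2)
    return np.linalg.inv(J.T @ W @ J)
```

The square roots of the diagonal entries are the usual one-sigma parameter error estimates.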
 
