Fit with least squares, Levenberg–Marquardt algorithm

Summary
The discussion focuses on using the Levenberg–Marquardt algorithm to fit a specific function of frequency with parameters f0, phi0, a, and b. The user asks how to implement the algorithm in C++, particularly how the derivatives turn into matrices and how the parameter values are updated iteratively. They have written down the chi-square error function and the Jacobian but are unsure how to determine the delta values for the parameter updates. The user is frustrated by the lack of practical examples online for applying the algorithm to nonlinear least squares problems, and asks for help understanding and applying it effectively.
TheDestroyer
Hello guys,

I need your help understanding a fitting algorithm, so that I can implement it in a C++ program.

I have the following function:

g(f; f0, phi0, a, b) = phi0 + a ArcTan((f-f0)/b)

It is a function of the frequency f.

I would like to fit this function with the parameters f0, phi0, a and b.

I have read many references about the Levenberg–Marquardt algorithm:

http://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm

but I always get lost at the point where everything turns into matrices. Please read what I did and explain in simple language how I can continue, because every website explaining this gets complicated very quickly.

I understand the idea of minimising the squared differences between my function and the data points, as follows:

χ^2 = Σ_i (y_i - g(f_i; f0, phi0, a, b))^2   // chi-square, the error function to be minimised

where y_i is data point number i, corresponding to the frequency f_i.
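
As an aside for anyone implementing this in C++: here is a minimal sketch of evaluating χ^2 for this model. The names g and chiSquare are my own choices, not from any library.

#include <cmath>
#include <cstddef>
#include <vector>

// Model: g(f; f0, phi0, a, b) = phi0 + a * atan((f - f0) / b)
double g(double f, double f0, double phi0, double a, double b) {
    return phi0 + a * std::atan((f - f0) / b);
}

// chi^2 = sum over i of (y_i - g(f_i))^2 for the data points (f_i, y_i)
double chiSquare(const std::vector<double>& f, const std::vector<double>& y,
                 double f0, double phi0, double a, double b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        double r = y[i] - g(f[i], f0, phi0, a, b);
        sum += r * r;
    }
    return sum;
}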

I calculated the derivatives of the function with respect to the parameters f0, phi0, a, b; let's call these derivatives g_f0, g_phi0, g_a, g_b. We arrange these as a row vector (one such row per data point f_i) and call the result the Jacobian (as far as I understood it).

J = {g_f0, g_phi0, g_a, g_b}   // derivatives evaluated at the current parameter values
δ = {δf0, δphi0, δa, δb}       // the updates to the parameters (this is what we solve for)

After evaluating the derivatives at the initial parameter values, we multiply J with δ and substitute this into χ^2:

χ^2 = Σ_i (y_i - g(f_i; f0, phi0, a, b) - J·δ)^2.
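
For concreteness, with u = (f - f0)/b the four partials work out to g_phi0 = 1, g_a = ArcTan(u), g_f0 = -a·b/(b^2 + (f - f0)^2), and g_b = -a·(f - f0)/(b^2 + (f - f0)^2). Below is a minimal C++ sketch of one Jacobian row, i.e. these partials evaluated at a single f_i (the name jacobianRow is my own):

#include <array>
#include <cmath>

// One row of the Jacobian: the partials of g(f; f0, phi0, a, b) with respect
// to each parameter, evaluated at a single frequency f. Order: f0, phi0, a, b.
std::array<double, 4> jacobianRow(double f, double f0, double phi0,
                                  double a, double b) {
    (void)phi0;                              // g is linear in phi0
    double u = (f - f0) / b;
    double d = b * b + (f - f0) * (f - f0);  // equals b^2 * (1 + u^2)
    return { -a * b / d,                     // g_f0
             1.0,                            // g_phi0
             std::atan(u),                   // g_a
             -a * (f - f0) / d };            // g_b
}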

The way I understand it, there has to be a rule for what δ becomes at each step, so that the process is genuinely iterative.

The question is: how do we determine δ? And how do we choose the direction and magnitude of the update so that we converge to the right function?
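
For reference, the standard Levenberg–Marquardt answer is: δ is obtained by solving the damped normal equations (J^T J + λ diag(J^T J)) δ = J^T r, where r is the vector of residuals r_i = y_i - g(f_i) and λ > 0 is the damping factor. The direction and magnitude both come out of that linear solve; λ is decreased when a step reduces χ^2 (the method then behaves like Gauss–Newton) and increased when it does not (it then behaves like gradient descent). Below is a minimal, self-contained C++ sketch of one such step for this model, assuming the parameter order {f0, phi0, a, b}; the names lmStep and solve4x4 are my own:

#include <array>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

using Vec4 = std::array<double, 4>;   // parameter order: f0, phi0, a, b
using Mat4 = std::array<Vec4, 4>;

// Solve the 4x4 system M * x = v by Gaussian elimination with partial pivoting.
Vec4 solve4x4(Mat4 M, Vec4 v) {
    for (int col = 0; col < 4; ++col) {
        int piv = col;
        for (int row = col + 1; row < 4; ++row)
            if (std::fabs(M[row][col]) > std::fabs(M[piv][col])) piv = row;
        std::swap(M[col], M[piv]);
        std::swap(v[col], v[piv]);
        for (int row = col + 1; row < 4; ++row) {
            double k = M[row][col] / M[col][col];
            for (int c = col; c < 4; ++c) M[row][c] -= k * M[col][c];
            v[row] -= k * v[col];
        }
    }
    Vec4 x{};
    for (int row = 3; row >= 0; --row) {
        double s = v[row];
        for (int c = row + 1; c < 4; ++c) s -= M[row][c] * x[c];
        x[row] = s / M[row][row];
    }
    return x;
}

// One Levenberg-Marquardt step: accumulate J^T J and J^T r over the data,
// damp the diagonal with lambda, and solve for the parameter update delta.
Vec4 lmStep(const std::vector<double>& f, const std::vector<double>& y,
            const Vec4& p, double lambda) {
    Mat4 JtJ{};  // accumulates J^T J (4x4)
    Vec4 Jtr{};  // accumulates J^T r (4x1)
    for (std::size_t i = 0; i < f.size(); ++i) {
        double df = f[i] - p[0];                                  // f_i - f0
        double d  = p[3] * p[3] + df * df;                        // b^2 + (f_i - f0)^2
        double r  = y[i] - (p[1] + p[2] * std::atan(df / p[3]));  // residual
        Vec4 grad = { -p[2] * p[3] / d,                           // g_f0
                      1.0,                                        // g_phi0
                      std::atan(df / p[3]),                       // g_a
                      -p[2] * df / d };                           // g_b
        for (int j = 0; j < 4; ++j) {
            Jtr[j] += grad[j] * r;
            for (int k = 0; k < 4; ++k) JtJ[j][k] += grad[j] * grad[k];
        }
    }
    for (int j = 0; j < 4; ++j) JtJ[j][j] *= 1.0 + lambda;  // Marquardt damping
    return solve4x4(JtJ, Jtr);  // delta; the new parameters are p + delta
}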

Thank you for any efforts
 
Why hasn't anyone answered? Is the question badly formulated, or what exactly?
 
Hi guys, I need someone to please help me with some worked examples of using the Levenberg–Marquardt algorithm for nonlinear least squares optimization problems.

Most of what you find on the internet is just a description of the algorithm; no one seems to actually use it to solve nonlinear least squares problems.

This is what I need: can someone please help me use this algorithm to solve some nonlinear least squares problems?

Thanks
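
For a worked example of the kind being asked for, here is a small driver that generates synthetic data from the ArcTan model and runs the usual LM accept/reject loop: take a step, keep it and shrink λ if χ^2 went down, otherwise discard it and grow λ. It reuses g, chiSquare, Vec4, and lmStep from the sketches earlier in the thread; the true parameters, starting guess, and λ schedule are arbitrary illustrative choices, not part of any reference implementation.

#include <cstdio>
#include <vector>
// Assumes g, chiSquare, Vec4, and lmStep from the sketches above.

int main() {
    // Noise-free synthetic data from known parameters f0=5, phi0=0.3, a=1.2, b=0.8.
    std::vector<double> f, y;
    for (int i = 0; i < 200; ++i) {
        double fi = 2.0 + 6.0 * i / 199.0;
        f.push_back(fi);
        y.push_back(g(fi, 5.0, 0.3, 1.2, 0.8));
    }

    Vec4 p = {4.0, 0.0, 1.0, 1.0};  // rough initial guess: f0, phi0, a, b
    double lambda = 1e-3;
    double chi2 = chiSquare(f, y, p[0], p[1], p[2], p[3]);

    for (int iter = 0; iter < 100; ++iter) {
        Vec4 d = lmStep(f, y, p, lambda);
        Vec4 q = {p[0] + d[0], p[1] + d[1], p[2] + d[2], p[3] + d[3]};
        double chi2New = chiSquare(f, y, q[0], q[1], q[2], q[3]);
        if (chi2New < chi2) { p = q; chi2 = chi2New; lambda *= 0.5; }  // accept step
        else                { lambda *= 2.0; }                         // reject step
        if (chi2 < 1e-12) break;  // converged (exact data, so chi^2 -> 0)
    }
    std::printf("f0=%g  phi0=%g  a=%g  b=%g  chi2=%g\n",
                p[0], p[1], p[2], p[3], chi2);
    return 0;
}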
 