Fit with least squares, Levenberg–Marquardt algorithm

In summary, the original poster is trying to understand the Levenberg–Marquardt fitting algorithm well enough to implement it in a C++ program. They understand the idea of minimising the sum of squared differences between their function and the data points, and have calculated the derivatives of the function with respect to the parameters f0, phi0, a and b. They collect these derivatives into a row matrix, call it the Jacobian J, evaluate it at the initial parameter values, and multiply it by a vector δ, substituting into χ^2 to obtain χ^2 = Σ (y_i - g(f_i; f0, phi0, a, b) - J·δ)^2. The open question is how the update δ is determined at each iteration.
  • #1
TheDestroyer
Hello guys,

I need your help with understanding a fitting algorithm, so that I can implement it in a C++ program.

I have the following function:

g(f; f0, phi0, a, b) = phi0 + a ArcTan((f-f0)/b)

Which is a function of "f", frequency.

I would like to fit this function with the parameters f0, phi0, a and b.

I read many references about using Levenberg–Marquardt algorithm:

http://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm

but I always get lost where the things turn into matrices. Please read what I did and explain in simple language how I can continue, because all the websites explaining this get complicated very fast.

I understand the idea of minimising squares of difference between my function and data points as follows

χ^2 = Σ (y_i - g(f_i; f0, phi0, a, b))^2 // chi-square, the error function to be minimised.

where y_i is data point number i, corresponding to the frequency f_i.
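As a concrete sketch, this χ^2 can be computed directly in C++ (the function and variable names here are illustrative, not from the thread):

```cpp
#include <cmath>
#include <vector>

// Model: g(f; f0, phi0, a, b) = phi0 + a * atan((f - f0) / b)
double g(double f, double f0, double phi0, double a, double b) {
    return phi0 + a * std::atan((f - f0) / b);
}

// Chi-square: sum of squared residuals over the data points (f_i, y_i).
double chi2(const std::vector<double>& f, const std::vector<double>& y,
            double f0, double phi0, double a, double b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        double r = y[i] - g(f[i], f0, phi0, a, b);  // residual at point i
        sum += r * r;
    }
    return sum;
}
```

With noise-free data generated from g itself, chi2 returns zero at the true parameters; any mismatch or measurement noise makes it positive.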

I calculated the derivatives of the function with respect to the parameters f0, phi0, a and b; let's call these derivatives g_f0, g_phi0, g_a, g_b. We set all these as a row matrix and call it the Jacobian (as far as I understood it).

J = {g_f0, g_phi0, g_a, g_b} // derivatives evaluated at the initial points
δ = {f0, phi0, a, b}

and we multiply it by our parameter vector, after evaluating the derivatives at the initial points. Then we substitute this into χ^2:

χ^2 = Σ (y_i - g(f_i; f0, phi0, a, b) - J·δ)^2.
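For this particular g, the chain rule gives the four derivatives in closed form; with u = f - f0: dg/dphi0 = 1, dg/da = atan(u/b), dg/df0 = -a·b/(b^2 + u^2), dg/db = -a·u/(b^2 + u^2). A small C++ sketch (the struct and function names are my own, not from the thread):

```cpp
#include <cmath>

// One row of the Jacobian for g(f; f0, phi0, a, b) = phi0 + a*atan((f - f0)/b):
// the partial derivatives of g with respect to each parameter, at frequency f.
struct Row { double g_f0, g_phi0, g_a, g_b; };

Row jacobian_row(double f, double f0, double a, double b) {
    double u = f - f0;
    double denom = b * b + u * u;   // b^2 + (f - f0)^2
    Row J;
    J.g_phi0 = 1.0;                 // dg/dphi0: phi0 enters linearly
    J.g_a    = std::atan(u / b);    // dg/da: a multiplies the arctan
    J.g_f0   = -a * b / denom;      // dg/df0: chain rule through atan
    J.g_b    = -a * u / denom;      // dg/db:  chain rule through atan
    return J;
}
```

Note that g_phi0 and g_a do not depend on phi0 at all, so the function takes only the parameters it needs; a finite-difference check of these formulas is a cheap way to catch algebra mistakes.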

The way I understand it, there has to be a rule for what the value of δ should be at this step and the next, so that the process is really "iterative".

The question is: how do we determine δ? And how do we determine the direction and magnitude of its update so that we converge to the right function?
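In the standard algorithm, δ is the update to the parameters, not the parameter vector itself: writing r for the residual vector with entries y_i - g(f_i; f0, phi0, a, b) and J for the n×4 Jacobian whose row i holds the derivatives at f_i, each iteration solves the damped normal equations (JᵀJ + λ·diag(JᵀJ))·δ = Jᵀr and sets the parameters to p + δ. A self-contained C++ sketch of one such step for this model (the fixed λ and all names are illustrative):

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

using Params = std::array<double, 4>;  // {f0, phi0, a, b}

double g(double f, const Params& p) {
    return p[1] + p[2] * std::atan((f - p[0]) / p[3]);
}

// Row of the Jacobian: derivatives of g at f with respect to {f0, phi0, a, b}.
std::array<double, 4> grad(double f, const Params& p) {
    double u = f - p[0];
    double denom = p[3] * p[3] + u * u;
    return { -p[2] * p[3] / denom, 1.0, std::atan(u / p[3]), -p[2] * u / denom };
}

// Solve the 4x4 linear system A x = rhs by Gaussian elimination with pivoting.
std::array<double, 4> solve4(std::array<std::array<double, 4>, 4> A,
                             std::array<double, 4> rhs) {
    for (int c = 0; c < 4; ++c) {
        int piv = c;
        for (int r = c + 1; r < 4; ++r)
            if (std::fabs(A[r][c]) > std::fabs(A[piv][c])) piv = r;
        std::swap(A[c], A[piv]);
        std::swap(rhs[c], rhs[piv]);
        for (int r = c + 1; r < 4; ++r) {
            double m = A[r][c] / A[c][c];
            for (int k = c; k < 4; ++k) A[r][k] -= m * A[c][k];
            rhs[r] -= m * rhs[c];
        }
    }
    std::array<double, 4> x{};
    for (int c = 3; c >= 0; --c) {
        double s = rhs[c];
        for (int k = c + 1; k < 4; ++k) s -= A[c][k] * x[k];
        x[c] = s / A[c][c];
    }
    return x;
}

// One Levenberg-Marquardt step: solve (J^T J + lambda*diag(J^T J)) delta = J^T r
// and return the updated parameters p + delta.
Params lm_step(const std::vector<double>& f, const std::vector<double>& y,
               const Params& p, double lambda) {
    std::array<std::array<double, 4>, 4> JTJ{};
    std::array<double, 4> JTr{};
    for (std::size_t i = 0; i < f.size(); ++i) {
        std::array<double, 4> Ji = grad(f[i], p);
        double r = y[i] - g(f[i], p);  // residual at point i
        for (int a = 0; a < 4; ++a) {
            JTr[a] += Ji[a] * r;
            for (int b = 0; b < 4; ++b) JTJ[a][b] += Ji[a] * Ji[b];
        }
    }
    for (int a = 0; a < 4; ++a) JTJ[a][a] *= (1.0 + lambda);  // damping term
    std::array<double, 4> delta = solve4(JTJ, JTr);
    Params next = p;
    for (int a = 0; a < 4; ++a) next[a] += delta[a];
    return next;
}
```

In a full implementation λ is adjusted between iterations: decreased when the step lowers χ^2 and increased (with the step rejected) when it raises χ^2.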

Thank you for any efforts
 
  • #2
Why has no one answered? Is the question badly formulated, or what exactly?
 
  • #3
Hi guys, I need someone to please help me with some worked examples of how to use the Levenberg–Marquardt algorithm for non-linear least squares optimisation problems.

Most of the material you find on the internet just describes the algorithm; no one seems to actually use it to solve non-linear least squares problems.

This is what I need: can someone please help me use this algorithm to solve some non-linear least squares problems?

Thanks
 

1. What is the least squares method?

The least squares method is a mathematical technique used to find the best-fitting line or curve for a set of data points. It minimizes the sum of the squared distances between the data points and the fitted line or curve. This method is commonly used in regression analysis to determine the relationship between variables.

2. How does the Levenberg-Marquardt algorithm work?

The Levenberg-Marquardt algorithm is an optimization method used to solve non-linear least squares problems. It combines the steepest descent method with the Gauss-Newton algorithm to find the minimum of the objective function. It uses a damping parameter to control the step size, allowing it to converge to a solution even for difficult problems.
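That damping control can be sketched on a deliberately tiny one-parameter problem (fitting y = exp(k·x) for k; the model and the factor-of-10 update rule here are illustrative choices, not from the thread): λ shrinks after a step that lowers the objective, so the method behaves like Gauss–Newton near the solution, and grows after a step that raises it, so the method falls back to small, safe gradient-like steps.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum of squared residuals for the one-parameter model y = exp(k*x).
double chi2_exp(const std::vector<double>& x, const std::vector<double>& y,
                double k) {
    double s = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double r = y[i] - std::exp(k * x[i]);
        s += r * r;
    }
    return s;
}

// Levenberg-Marquardt with adaptive damping, scalar case.
double fit_k(const std::vector<double>& x, const std::vector<double>& y,
             double k) {
    double lambda = 1e-3;
    for (int it = 0; it < 100; ++it) {
        // Scalar normal equation: JtJ * (1 + lambda) * delta = Jtr,
        // where J_i = d/dk exp(k*x_i) = x_i * exp(k*x_i).
        double JtJ = 0.0, Jtr = 0.0;
        for (std::size_t i = 0; i < x.size(); ++i) {
            double Ji = x[i] * std::exp(k * x[i]);
            JtJ += Ji * Ji;
            Jtr += Ji * (y[i] - std::exp(k * x[i]));
        }
        double delta = Jtr / (JtJ * (1.0 + lambda));
        if (chi2_exp(x, y, k + delta) < chi2_exp(x, y, k)) {
            k += delta;       // step lowered chi^2: accept it, damp less
            lambda /= 10.0;
        } else {
            lambda *= 10.0;   // step raised chi^2: reject it, damp more
        }
    }
    return k;
}
```

The same accept/reject logic carries over unchanged to the multi-parameter case; only the linear solve for delta gets bigger.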

3. What are the advantages of using the Levenberg-Marquardt algorithm?

The Levenberg-Marquardt algorithm has several advantages over other optimization methods. It is efficient and can converge to a solution in fewer iterations compared to other methods. It is also robust and can handle non-linear problems with a large number of variables. Additionally, it provides estimates of the uncertainty in the solution, allowing for better understanding and interpretation of the results.

4. When is the Levenberg-Marquardt algorithm most commonly used?

The Levenberg-Marquardt algorithm is commonly used in applications that involve non-linear regression, such as curve fitting, parameter estimation, and time series analysis. It is also used in computer vision, robotics, and other fields that require optimization of non-linear functions.

5. Are there any drawbacks to using the Levenberg-Marquardt algorithm?

While the Levenberg-Marquardt algorithm has many advantages, it also has some limitations. It may fail to converge or produce incorrect results if the initial guess is far from the true solution. It may also get stuck in a local minimum instead of finding the global minimum, especially for highly non-linear problems. Therefore, it is important to carefully choose the initial guess and to check the results for consistency and accuracy.
