Golden section, Levenberg-Marquardt

a.mlw.walker
Hi, I am a third-year mechanical engineer and have been reading around the course. I have never heard of the two methods above, and they have never been mentioned on the course - so presumably they aren't essential to it - but I am reading an article that seems to use them. The following is what I have:

It is some form of least squares method for curves, used to estimate a few unknown parameters.

I am trying to find a and b, although c0 is included in the brackets of S.

The first image shows the first equation, with a, b and c0; the equation for c0 is also shown. The second equation is, I think, how to solve for a?

Anyway, I have five or six values for T0, which apparently means that, using one of the above methods, I can approximate the values of a and b from these equations?

Anyone know how to do this and what they are talking about?

Thanks

Alex
 
a.mlw.walker said:
It is some form of least squares method for curves, used to estimate a few unknown parameters.
It looks like you have been reading the Wikipedia articles. These are optimization methods. The Levenberg–Marquardt wiki article says "The primary application of the Levenberg–Marquardt algorithm is in the least squares curve fitting problem." That's true in the sense that least squares curve fitting is one of the most widespread uses of all optimization techniques. It is also true in the sense that Levenberg–Marquardt is fairly well-suited to non-linear least squares problems. It is not true in the sense that the Levenberg–Marquardt algorithm cares in the least what motivates the function to be minimized.

Back up to the problem of finding a zero of a scalar function of one variable. One easy way: find a pair of values that bracket the zero. For example, suppose you have magically found x1 and x2 such that f(x1)<0 and f(x2)>0. You can find the zero without taking any derivatives of f(x) by looking at the point halfway between x1 and x2. Call this point x3. Either f(x3) is zero, the zero lies between x1 and x3, or the zero lies between x3 and x2. All you need to do is evaluate f(x3). This is bisection (binary search). It's not very fast, but it is very robust, and it doesn't use any derivative information. If you know the derivative f'(x) you can take advantage of it and zoom to the zero using Newton-Raphson. If you don't know the derivative you can approximate it with a finite difference; this is what the secant method does. Newton-Raphson can send your search off to never-never land if the function isn't well behaved. The secant method is more robust but slower than Newton-Raphson, and less robust but faster than binary search.
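As a sketch of those two bracket-free-vs-bracketing ideas in pure Python (the function f(x) = x^2 - 2 and the tolerances are illustrative choices, not anything from the thread):

```python
# Sketch: bisection vs. secant method for finding a zero of f(x) = x**2 - 2,
# whose positive root is sqrt(2).  Illustrative example only.

def bisect(f, x1, x2, tol=1e-10):
    """Halve the bracket [x1, x2] until it is narrower than tol."""
    assert f(x1) * f(x2) < 0, "x1 and x2 must bracket the zero"
    while x2 - x1 > tol:
        x3 = 0.5 * (x1 + x2)
        if f(x1) * f(x3) <= 0:
            x2 = x3          # the zero lies between x1 and x3
        else:
            x1 = x3          # the zero lies between x3 and x2
    return 0.5 * (x1 + x2)

def secant(f, x1, x2, tol=1e-10, max_iter=50):
    """Approximate f'(x) with the slope through the last two points."""
    for _ in range(max_iter):
        f1, f2 = f(x1), f(x2)
        x3 = x2 - f2 * (x2 - x1) / (f2 - f1)   # secant step
        if abs(x3 - x2) < tol:
            return x3
        x1, x2 = x2, x3
    return x2

f = lambda x: x * x - 2.0
print(bisect(f, 1.0, 2.0))   # ~1.41421356
print(secant(f, 1.0, 2.0))   # ~1.41421356
```

Note how the secant method converges in far fewer function evaluations, but only bisection is guaranteed to succeed once you have a bracket.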

Now back to the problem at hand: optimization. Think of golden section search as the equivalent of binary search, Gauss-Newton as the equivalent of Newton-Raphson, and Levenberg–Marquardt as the equivalent of the secant method. Each technique has certain strengths and weaknesses compared to the others. Golden section search doesn't use derivative information at all, is quite robust, but converges quite slowly. Gauss-Newton requires that you know the gradient and the Hessian, is very fast to converge (if it works), but is not robust. Levenberg–Marquardt estimates the gradient and the Hessian, is fairly fast to converge (if it works), and is intermediate in terms of robustness.
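To make the analogy with bisection concrete, here is a minimal golden section search in Python (the function being minimized, g(x) = (x-2)^2 + 1, and the tolerance are illustrative choices):

```python
# Sketch of golden section search minimizing a 1-D function.  Like
# bisection, it needs only function values, no derivatives: each step
# shrinks the bracket [a, b] around the minimum by the golden ratio.

GR = (5 ** 0.5 - 1) / 2   # 1/phi ~ 0.618, the golden ratio factor

def golden_section_min(g, a, b, tol=1e-8):
    """Return an approximate minimizer of g on the bracket [a, b]."""
    c = b - GR * (b - a)   # interior probe points, placed so that
    d = a + GR * (b - a)   # one of them can be reused after each shrink
    while b - a > tol:
        if g(c) < g(d):
            b, d = d, c            # minimum lies in [a, d]
            c = b - GR * (b - a)
        else:
            a, c = c, d            # minimum lies in [c, b]
            d = a + GR * (b - a)
    return 0.5 * (a + b)

g = lambda x: (x - 2.0) ** 2 + 1.0
print(golden_section_min(g, 0.0, 5.0))   # ~2.0
```

The golden-ratio placement is what lets one of the two interior points be reused at every iteration, so each shrink costs only one new function evaluation.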
 
OK, thank you DH for the intro to Levenberg–Marquardt. So looking at simage.bmp can you see how it is possible to find an estimate of a and b, if I have a few values for Tk?
 
a.mlw.walker said:
OK, thank you DH for the intro to Levenberg–Marquardt. So looking at simage.bmp can you see how it is possible to find an estimate of a and b, if I have a few values for Tk?

We might be of more help if you post the image.

Also, it may not be necessary to use Levenberg-Marquardt if the problem is well behaved. A standard nonlinear least squares algorithm might work just fine.
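Since the equations weren't posted, here is what a fit for two parameters from a handful of Tk values might look like. Everything here is a stand-in assumption: the model T(k) = a*exp(b*k), the starting guesses, and the damping schedule are illustrative, not the article's. It is a bare-bones Levenberg-Marquardt loop, solving the 2x2 damped normal equations by hand:

```python
# Hypothetical illustration: fit a and b in the stand-in model
# T(k) = a*exp(b*k) to a few data points, using a minimal
# Levenberg-Marquardt loop (pure Python, no libraries).
import math

def fit_ab(ks, Ts, a=1.0, b=0.1, lam=1e-3, iters=100):
    def residuals(a, b):
        return [T - a * math.exp(b * k) for k, T in zip(ks, Ts)]
    def cost(a, b):
        return sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        # Jacobian of the residuals: dr/da = -exp(b*k), dr/db = -a*k*exp(b*k)
        J = [(-math.exp(b * k), -a * k * math.exp(b * k)) for k in ks]
        r = residuals(a, b)
        # Damped normal equations: (J^T J + lam * J^T J's diagonal) delta = -J^T r
        g11 = sum(j1 * j1 for j1, _ in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        rhs1 = -sum(j1 * ri for (j1, _), ri in zip(J, r))
        rhs2 = -sum(j2 * ri for (_, j2), ri in zip(J, r))
        A11, A22 = g11 * (1 + lam), g22 * (1 + lam)
        det = A11 * A22 - g12 * g12
        da = (rhs1 * A22 - rhs2 * g12) / det
        db = (rhs2 * A11 - rhs1 * g12) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam / 10   # accept step, trust the model more
        else:
            lam *= 10                              # reject step, damp more heavily
        if abs(da) + abs(db) < 1e-12:
            break
    return a, b

ks = [0, 1, 2, 3, 4, 5]                        # "five or six values" of Tk
Ts = [2.0 * math.exp(0.3 * k) for k in ks]     # synthetic data made with a=2, b=0.3
print(fit_ab(ks, Ts))
```

The damping parameter lam is the whole trick: when it is small the step is essentially Gauss-Newton (fast), and when a step makes things worse lam grows, pushing the step toward small, safe gradient-descent-like moves.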
 