Spectral Method for optimization

  1. Trying to find a global min/max while avoiding saddle points. If we Taylor-expand a function of several variables, we get the following:

    f(x + del x) = f(x) + (del x) * g + .5 (del x) H (del x)^t + O(||del x||^3), where g is the gradient and H is the Hessian.

    Then, since H is symmetric, we can take the eigenvectors of H as the columns of a matrix U s.t. UU^t = I, and put the eigenvalues on the diagonal of a matrix D, so that H = UDU^t.

    then define: z = (del x) * U
    and define: d = (U^t) * g

    so that our original equation reduces to
    f(x + del x) = f(x) + z*d + .5*z*D*z^t + O(||del x||^3); equivalently,
    f(x + del x) ~ f(x) + z*d + .5*z*D*z^t

    (got all of this from: http://linneus20.ethz.ch:8080/3_7.html)
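    To sanity-check the algebra above, here is a small numerical sketch (using NumPy; the matrix and vectors are just random examples, not from the linked page). It confirms that after the change of variables z = (del x)*U, d = U^t*g, the quadratic model is unchanged:

    ```python
    import numpy as np

    # Random example: verify that (del x)*g + .5*(del x) H (del x)^t
    # equals z*d + .5*z*D*z^t once H is eigendecomposed as H = U D U^t.
    rng = np.random.default_rng(0)

    A = rng.standard_normal((3, 3))
    H = A + A.T                  # a symmetric (possibly indefinite) Hessian
    g = rng.standard_normal(3)   # gradient at the expansion point
    dx = rng.standard_normal(3)  # a step del x

    w, U = np.linalg.eigh(H)     # H = U @ diag(w) @ U.T, with U orthogonal
    D = np.diag(w)

    z = dx @ U                   # z = (del x) * U
    d = U.T @ g                  # d = U^t * g

    model_original = dx @ g + 0.5 * dx @ H @ dx
    model_spectral = z @ d + 0.5 * z @ D @ z

    print(np.allclose(model_original, model_spectral))  # True
    ```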

    Just curious how we are supposed to update the iterates. I follow up to the step where, if the method finds a saddle point, it avoids it by reversing the step along each negative-curvature dimension and then scaling the whole step by a factor h, chosen so that f(x + del x) decreases as much as possible. In other words, after the N-dimensional problem is put through this method, we find all the "saddle-point dimensions" and reduce the problem to a 1-dimensional search for h. But I get a little confused by the website's examples, since the numbers don't work out in some of the problems, and I'm also not completely clear on how the update works in this method. Finally, what's the best method to solve for h? A sub-routine of 1-D Newton's method? Golden-section search? Bisection? etc.
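    On the last question: once the direction is fixed, the problem really is one-dimensional, so any 1-D minimizer (Newton, golden-section, bisection on the derivative) can be used for h. Below is one possible sketch of the whole update, based on my own reading of the idea rather than on the linked page's code: take the Newton step coordinate-by-coordinate in the eigenbasis, flip the sign along directions with negative eigenvalues (the saddle directions, i.e. divide by |w_i| instead of w_i), then pick h by a crude grid search along the resulting direction. The function `saddle_free_step`, the toy objective, and the grid for h are all my own illustrative choices:

    ```python
    import numpy as np

    def saddle_free_step(f, x, g, H, hs=np.linspace(0.0, 2.0, 201)):
        """One update: Newton step in the eigenbasis with saddle
        directions flipped, then a 1-D grid search for the scale h."""
        w, U = np.linalg.eigh(H)
        d = U.T @ g
        # Newton step per eigen-direction is -d_i / w_i; using |w_i|
        # reverses the step along negative-curvature (saddle) directions.
        # The tiny floor avoids dividing by a near-zero eigenvalue.
        p_eig = -d / np.maximum(np.abs(w), 1e-8)
        p = U @ p_eig
        # Crude 1-D minimization of f(x + h*p) over a grid; a 1-D
        # Newton or golden-section sub-routine would work here too.
        vals = [f(x + h * p) for h in hs]
        h = hs[int(np.argmin(vals))]
        return x + h * p

    # Toy objective with a saddle at the origin and minima at (0, ±sqrt(2)):
    # f(x, y) = x^2 - y^2 + 0.25*y^4
    f = lambda v: v[0]**2 - v[1]**2 + 0.25 * v[1]**4
    grad = lambda v: np.array([2*v[0], -2*v[1] + v[1]**3])
    hess = lambda v: np.array([[2.0, 0.0], [0.0, -2.0 + 3*v[1]**2]])

    x = np.array([0.5, 0.1])      # start near the saddle
    for _ in range(20):
        x = saddle_free_step(f, x, grad(x), hess(x))
    print(x)  # should end near the minimizer at (0, sqrt(2))
    ```

    Plain Newton from this start point would be pulled toward the saddle at the origin, since the Hessian there has a negative eigenvalue; flipping that direction is what lets the iterate escape along y.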
     