Spectral Method for Optimization

In summary, this method allows us to find a global minimum/maximum while avoiding saddle points by transforming the problem into a one-dimensional minimization problem.
  • #1
brydustin
I'm trying to find a global min/max while avoiding saddle points. If we Taylor-expand a function of several variables we get the following:

f(x + del x) = f(x) + (del x) g + .5 (del x) H (del x)^t + O(||del x||^3), where g is the gradient and H is the Hessian at x.

Then we can take the eigenvectors of H as the columns of a matrix U s.t. UU^t = I, and put the eigenvalues on the diagonal of a matrix D, so that H = UDU^t.

then define: z = (del x ) * U
and define d = (U^t)* g

so that our original equation reduces to
f(x + del x) = f(x) + z*d + .5*z*D*z^t + O(||del x||^3); equivalently,
f(x + del x) ~ f(x) + z*d + .5*z*D*z^t

(got all of this from: http://linneus20.ethz.ch:8080/3_7.html )
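As a numerical sanity check of the change of variables above, the sketch below builds g, H, and the transformed quantities z and d for a small test problem and confirms that the two sides of the expansion agree to third order. The test function and evaluation point are made up for illustration, not taken from the linked page:

```python
import numpy as np

# Hypothetical smooth test function: f(x) = x0^4 + x0*x1 + (1 + x1)^2
def f(x):
    return x[0]**4 + x[0]*x[1] + (1.0 + x[1])**2

def grad(x):
    return np.array([4*x[0]**3 + x[1], x[0] + 2*(1.0 + x[1])])

def hess(x):
    return np.array([[12*x[0]**2, 1.0],
                     [1.0,        2.0]])

x = np.array([1.0, 0.5])
dx = np.array([1e-3, -2e-3])     # a small step, del x

g, H = grad(x), hess(x)
evals, U = np.linalg.eigh(H)     # H = U D U^t with U orthogonal (UU^t = I)
D = np.diag(evals)

z = dx @ U                       # z = (del x) U
d = U.T @ g                      # d = (U^t) g

lhs = f(x + dx) - f(x)
rhs = z @ d + 0.5 * (z @ D @ z)  # z*d + .5*z*D*z^t
print(lhs, rhs)                  # the two agree up to O(||del x||^3)
```

Note that z @ d = (del x) U U^t g = (del x) g and z D z^t = (del x) H (del x)^t, which is exactly why the reduced form reproduces the original expansion.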

Just curious how we are supposed to update the iterates. I get to the step where, if the method finds a saddle point, it avoids it by stepping in the opposite direction along the offending dimension and then scaling the step by a factor h chosen so that f(x + del x) is reduced as much as possible. In other words, after the N-dimensional problem is put through this transformation, we identify all the "saddle directions" and the problem reduces to a one-dimensional search for h. But I get a little confused by the website's examples, since the numbers don't work out in some of them, and I'm also not completely clear on how the update works in this method. Finally, what's the best method to solve for h: a 1-D Newton's method subroutine, golden-section search, bisection, etc.?
 
  • #2
The idea behind this method is to find a minimum/maximum while avoiding saddle points. By Taylor-expanding the function, you can diagonalize its local quadratic model using the eigenvectors of the Hessian matrix, the matrix of second partial derivatives of the original function. The Hessian describes the curvature of the surface, i.e. how strongly it curves in each direction, and a negative eigenvalue marks a direction along which a stationary point is a saddle. Once the step direction is fixed, choosing the scale factor h is a one-dimensional problem, which you can solve with a 1-D Newton's method subroutine, golden-section search, or bisection. Which of these is best depends on the specific problem and the desired accuracy.
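One common way to realize the "flip the saddle directions, then scale by h" step is to divide the gradient components by the absolute values of the eigenvalues, which reverses the step along negative-curvature directions, and then pick h by a 1-D search. The sketch below uses golden-section search for h on a function with a saddle at the origin; the test function, starting point, and bracket [0, 2] are illustrative assumptions, not taken from the thread:

```python
import numpy as np

def f(x):
    # Hypothetical test function: saddle at (0, 0), minima at (0, +/- 1/sqrt(2))
    return x[0]**2 - x[1]**2 + x[1]**4

def grad(x):
    return np.array([2*x[0], -2*x[1] + 4*x[1]**3])

def hess(x):
    return np.array([[2.0, 0.0],
                     [0.0, -2.0 + 12*x[1]**2]])

def golden_section(phi, a, b, tol=1e-8):
    """Minimize phi on [a, b] by golden-section search."""
    invphi = (np.sqrt(5) - 1) / 2
    while b - a > tol:
        c, d = b - invphi*(b - a), a + invphi*(b - a)
        if phi(c) < phi(d):
            b = d
        else:
            a = c
    return (a + b) / 2

x = np.array([0.3, 0.1])                   # start near the saddle at the origin
for _ in range(50):
    g, H = grad(x), hess(x)
    evals, U = np.linalg.eigh(H)           # H = U D U^t
    safe = np.abs(evals).clip(min=1e-8)    # |D|: flips saddle directions; floor guards tiny curvature
    step_dir = -U @ (U.T @ g / safe)       # -U |D|^{-1} U^t g, a descent direction everywhere
    h = golden_section(lambda t: f(x + t*step_dir), 0.0, 2.0)
    x = x + h*step_dir

print(x)   # ends near the minimum (0, 1/sqrt(2)), not the saddle at (0, 0)
```

A plain Newton step, -H^{-1} g, would be attracted to the saddle here because the Hessian has a negative eigenvalue there; taking |D| instead makes every eigendirection a descent direction, and the line search over h handles the scaling the original post asks about.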
 

1. What is the Spectral Method for optimization?

The Spectral Method for optimization is a mathematical technique used to solve optimization problems, such as finding the minimum or maximum value of a function. It involves using the eigenvalues and eigenvectors of a matrix representation of the problem to find the optimal solution.

2. How does the Spectral Method differ from other optimization techniques?

The Spectral Method is different from other optimization techniques because it utilizes spectral properties of the problem to find the optimal solution. This means that it can handle non-convex and high-dimensional problems more efficiently, and often provides a more accurate solution compared to other methods.

3. What are some real-world applications of the Spectral Method for optimization?

The Spectral Method has been successfully applied in various fields, such as machine learning, signal processing, and computer vision. It has been used to solve problems in image and audio processing, data clustering, and recommendation systems, among others.

4. What are the advantages of using the Spectral Method for optimization?

One of the main advantages of the Spectral Method is its ability to handle non-convex and high-dimensional problems, which are common in real-world applications. It also provides accurate solutions and can be more computationally efficient compared to other optimization methods in certain cases.

5. Are there any limitations to using the Spectral Method for optimization?

While the Spectral Method has many advantages, it does have some limitations. It may not be suitable for problems with a large number of constraints, and it can be computationally expensive for certain types of problems. It also requires some knowledge of linear algebra and matrix operations to implement effectively.
