# Newton's Method for Optimization

Just curious whether Newton's method in high dimensions should always converge quickly to a min/max or saddle point. I can't seem to get the norm of my gradient below 12-16; so it's not "diverging", but it's not converging either. I want to avoid saddle points, so I'm using the Fletcher-Reeves method, but I figure that if I test with Newton-Raphson it should at least converge quickly to a saddle point, right? (Assuming my initial starting point is "good" in some sense.) A minimal sketch of what I mean is below.
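
For example, here is what I'd expect pure Newton to do on a toy objective (an assumed stand-in, not my actual function) with a saddle at the origin, f(x, y) = x^2 - y^2 + 0.1*x^4:

```python
import numpy as np

# Toy objective with a saddle at the origin (an assumed example, not the
# actual function from the question): f(x, y) = x^2 - y^2 + 0.1*x^4.
def grad(p):
    x, y = p
    return np.array([2*x + 0.4*x**3, -2*y])

def hess(p):
    x, y = p
    return np.array([[2 + 1.2*x**2, 0.0],
                     [0.0,         -2.0]])

p = np.array([1.5, 1.0])  # a "good" starting point near the saddle
for k in range(8):
    p = p - np.linalg.solve(hess(p), grad(p))  # full Newton step
    print(k, p, np.linalg.norm(grad(p)))
# The gradient norm drops to machine precision within a few iterations:
# pure Newton only solves grad f = 0, so it converges to the saddle just
# as happily as it would to a minimum.
```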

thanks all

## Answers and Replies

HallsofIvy
Science Advisor
Homework Helper
Could you please give an example? What is the function for which you cannot get the value of the gradient below 12-16?

Even in one dimension it is possible to find a function and initial value where you just alternate between two x-values, but a slight change in the initial value will correct that; see the sketch below.
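
For instance, here is a standard textbook example: minimizing f(x) = x^4/4 - x^2/2, so that f'(x) = x^3 - x, with Newton's method starting at x0 = 1/sqrt(5) alternates between +-1/sqrt(5), while a slightly different start converges at once.

```python
# Standard textbook example of a 2-cycle: minimize f(x) = x**4/4 - x**2/2,
# so f'(x) = x**3 - x and f''(x) = 3*x**2 - 1.
def newton_min(x, iters=8):
    for k in range(iters):
        x = x - (x**3 - x) / (3*x**2 - 1)  # Newton step on f'
        print(k, x)
    return x

newton_min(5**-0.5)  # x0 = 1/sqrt(5): alternates between +/- 1/sqrt(5)
                     # (forever in exact arithmetic; floating-point
                     # roundoff eventually breaks the repelling cycle)
newton_min(0.5)      # a slightly different start: lands on the minimum
                     # at x = -1 after a single step
```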

AlephZero
Science Advisor
Homework Helper
The basic problem for any multivariable optimisation algorithm is that, unlike in the 1-D case, the optimum usually sits at the bottom of a "curved valley" in N dimensions. Any method that only looks at the local gradient at a point tends to follow the steepest gradient, so the route to the optimum becomes many short steps that "bounce off the sides" of the valley rather than "following the curve" along the bottom. A classic example of this for 2 variables (where it is fairly easy to draw pictures to see what is happening) is "Rosenbrock's banana function" (see Google); a sketch comparing the two behaviours is below.
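
Here is a rough sketch of that behaviour (assuming the standard form of the function and the traditional starting point (-1.2, 1)): fixed-step steepest descent creeps along the valley, while pure Newton uses the curvature and gets there in a handful of steps.

```python
import numpy as np

# Rosenbrock's banana function: the minimum is at (1, 1), at the bottom
# of a long, curved valley.
def f(p):
    x, y = p
    return (1 - x)**2 + 100*(y - x**2)**2

def grad(p):
    x, y = p
    return np.array([-2*(1 - x) - 400*x*(y - x**2),
                     200*(y - x**2)])

def hess(p):
    x, y = p
    return np.array([[2 - 400*(y - x**2) + 800*x**2, -400*x],
                     [-400*x,                         200.0]])

p0 = np.array([-1.2, 1.0])  # the traditional starting point

# Fixed-step steepest descent: each step follows the local gradient, so
# the iterates bounce across the valley and only creep along its floor.
p = p0.copy()
for k in range(20000):
    p = p - 1e-3 * grad(p)
print("steepest descent:", p, f(p))  # close, but still not converged

# Pure Newton: uses the Hessian, so it can follow the curved valley and
# converges in a handful of steps from this start.
p = p0.copy()
for k in range(10):
    p = p - np.linalg.solve(hess(p), grad(p))
print("Newton:          ", p, f(p))  # lands on (1, 1)
```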

To do multi-variable optimisation efficiently, you usually need to know a lot about the function you are trying to optimise.