# Newton's Method for Optimization

Just curious: should Newton's method in high dimensions always converge quickly to a min/max or saddle point? I can't seem to get the norm of my gradient below 12-16; so it's not "diverging", but it's not converging either. I want to avoid saddle points, so I'm using the Fletcher-Reeves method, but I figure that if I test with Newton-Raphson it should at least converge to a saddle point quickly, right? (Assuming my initial starting point is "good" in some sense.)
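For concreteness, here's a stripped-down Python sketch of the kind of iteration I mean, on a toy function I made up rather than my actual problem: f(x, y) = x^4/4 - x^2/2 + y^2/2, which has a saddle at the origin and minima at (±1, 0).

```python
# Minimal sketch of pure Newton's method for optimization on a toy function
# (NOT my actual problem): f(x, y) = x^4/4 - x^2/2 + y^2/2.
# Stationary points: (0, 0) is a saddle, (+1, 0) and (-1, 0) are minima.
import numpy as np

def grad(v):
    x, y = v
    return np.array([x**3 - x, y])

def hess(v):
    x, y = v
    return np.array([[3 * x**2 - 1.0, 0.0],
                     [0.0, 1.0]])

def newton(v0, tol=1e-10, max_iter=50):
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        g = grad(v)
        if np.linalg.norm(g) < tol:
            break
        # Pure Newton step: solve H p = -g (no line search, no saddle check).
        p = np.linalg.solve(hess(v), -g)
        v = v + p
    return v, np.linalg.norm(grad(v))

# Started near x = 0, Newton converges to the SADDLE at (0, 0);
# started near x = 1, it converges to the minimum at (1, 0).
print(newton([0.1, 0.5]))   # -> close to (0, 0), tiny gradient norm
print(newton([0.9, 0.5]))   # -> close to (1, 0), tiny gradient norm
```

On this toy function the pure Newton iteration converges quickly to whichever stationary point is nearby, saddle included, but on my real function the gradient norm just stalls.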

Thanks, all.

HallsofIvy
Homework Helper
Could you please give an example? What is the function for which you cannot get the norm of the gradient below 12-16?

Even in one dimension it is possible to find a function and initial value where you just alternate between two x-values, but a slight change in the initial value will correct that.
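For instance, applying Newton's method to minimize the toy function f(x) = x^4/4 - x^2/2 (so f'(x) = x^3 - x and f''(x) = 3x^2 - 1) and starting at exactly x0 = 1/sqrt(5) gives an iteration that just bounces between +1/sqrt(5) and -1/sqrt(5). A quick sketch:

```python
# 1-D oscillation demo: Newton's method on the toy function
# f(x) = x^4/4 - x^2/2, with f'(x) = x^3 - x and f''(x) = 3x^2 - 1.
import math

def newton_step(x):
    # x_{k+1} = x_k - f'(x_k) / f''(x_k)
    return x - (x**3 - x) / (3 * x**2 - 1)

x = 1 / math.sqrt(5)   # alternates: +0.4472..., -0.4472..., +0.4472..., ...
for k in range(8):
    print(k, x)
    x = newton_step(x)

x = 0.447              # slightly perturbed start: escapes the cycle
for k in range(8):
    print(k, x)
    x = newton_step(x)
# The perturbed run settles on the stationary point x = 0 (the local max of f;
# Newton's method does not distinguish between kinds of stationary points).
```

Note that the perturbed run lands on the local maximum at x = 0, which again shows that Newton's method only seeks a zero of the derivative.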

AlephZero