
Gradient descent, anything better than golden section line search

  1. May 31, 2010 #1
    This is a long story; I'll keep it short:
    I am working on a project where I need to find a matrix defined by a third-degree polynomial. The solution can be found iteratively using a gradient descent technique, and I am using the golden-section line search already implemented in MATLAB (code below).
    The algorithm looks powerful (it finds a good step size automatically), but unfortunately the golden-section line search does not prevent getting stuck in local minima. How can I implement this more efficiently? The problem is that sometimes it converges and sometimes it doesn't (and "sometimes" is not science :-).

    %Initial guess for the matrix
    Cest = initial_guess;
    normdev = inf;
    stepsize = 1;   %initialized so the first loop test passes

    %Stopping condition
    stopping_condition = 1e-5 * norm(X*X'/no_samples,'fro');

    while abs(normdev*stepsize) > stopping_condition
        %Descent direction from the third-degree matrix polynomial
        dnew = Cest - 1/no_samples*(X*X' - 2/sigma^2 * (Cest*Cest'*Cest - Cest*B'*Cest));
        %Find the best stepsize with a bracketed line search (fminbnd)
        stepsize = fminbnd(@(s) step(s,Cest,dnew,X*X',B,sigma,no_samples), -.1, .1);

        Cest = Cest + stepsize*dnew;
        normdev = norm(dnew,'fro');
    end

    function err = step(stepsize, Cest, dnew, XX, B, sigma, no_samples)
    %Objective value after taking a step of the given size along dnew
    Cest = Cest + stepsize*dnew;
    err = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)),'fro');
    end
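    (Aside: MATLAB's fminbnd is actually Brent's method, which combines golden-section search with parabolic interpolation, rather than pure golden-section. For reference, a bare golden-section line search over a bracket [a,b] looks like the sketch below; the bracket and tolerance are placeholders, not values tuned for this problem. It re-evaluates f twice per pass for clarity; classic versions reuse one evaluation per iteration.)

    function xmin = golden_section(f, a, b, tol)
    %Minimize a unimodal scalar function f on the bracket [a,b]
    phi = (sqrt(5) - 1)/2;            %inverse golden ratio, about 0.618
    while (b - a) > tol
        c = b - phi*(b - a);          %interior points at the golden ratio
        d = a + phi*(b - a);
        if f(c) < f(d)
            b = d;                    %minimum lies in [a, d]
        else
            a = c;                    %minimum lies in [c, b]
        end
    end
    xmin = (a + b)/2;
    end

    %Usage with the step function above:
    %stepsize = golden_section(@(s) step(s,Cest,dnew,X*X',B,sigma,no_samples), -.1, .1, 1e-6);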

    I tried:
    stepsize = fminunc(@(stepsize) step(stepsize,Cest,dnew,X*X',B,sigma,no_samples), dnew);
    But MATLAB gets stuck (it runs out of heap memory), probably because this function is supposed to be used when there is no trust region.
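    (A note on fminunc: it optimizes over every element of its initial point, so passing the matrix dnew as the starting point turns the scalar line search into a problem with one variable per matrix entry, which would explain the memory blow-up. If the intent was an unconstrained search over the scalar stepsize alone, a minimal sketch, untested on this problem, is:)

    %Scalar initial point 0, so fminunc searches over the stepsize only
    stepsize = fminunc(@(s) step(s,Cest,dnew,X*X',B,sigma,no_samples), 0);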

    Any suggestions?
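    (Regarding the local-minima issue above: one standard, cheap remedy is multi-start, i.e. running the descent from several random initial guesses and keeping the best result. A sketch, assuming the loop above is wrapped in a hypothetical function run_descent that returns the final Cest and its final objective value:)

    best_err = inf;
    for k = 1:10
        C0 = randn(size(initial_guess));                           %random initial guess
        [C_k, err_k] = run_descent(C0, X, B, sigma, no_samples);   %hypothetical wrapper around the descent loop above
        if err_k < best_err                                        %keep the best run
            best_err = err_k;
            Cest = C_k;
        end
    end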
  3. Jun 2, 2010 #2
    So I moved to a stochastic approach (gradient descent finds only local minima, whereas stochastic approaches can find global minima), namely simulated annealing. Now the problem is that with my objective function the default convergence criterion is never met, so I have to force the annealing to stop. BUT WHEN?
    I tried stopping after a fixed number of iterations, but annealing has the quirk that at some point it re-anneals (randomizes its search region again), so when I stop it, it may be really far from the minimum (I am using too many iterations). I have the same problem with a time limit. So when should I stop the search?

    Code using simulated annealing:
    Cest = initial_point;

    lb = -20*ones(size(Cest));
    ub =  20*ones(size(Cest));

    options = saoptimset('MaxIter',300);

    %The objective is a function of the candidate point C being annealed
    Cest = simulannealbnd(@(C) myfuncobj(C,X*X',B,sigma,no_samples), Cest, lb, ub, options);

    function err = myfuncobj(Cest, XX, B, sigma, no_samples)
    err = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)),'fro');
    end
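    (One possible answer to the stopping question, sketched with saoptimset's documented stall options rather than a hard MaxIter cap: let the solver stop itself when the best objective value stops improving. The specific tolerances below are guesses to be tuned, not tested values.)

    %Stop when the average change in the best objective value over
    %StallIterLimit iterations drops below TolFun; keep TimeLimit as a safety net
    options = saoptimset('TolFun',1e-8, 'StallIterLimit',500, 'TimeLimit',600);
    [Cest, fval, exitflag] = simulannealbnd(@(C) myfuncobj(C,X*X',B,sigma,no_samples), ...
        initial_point, lb, ub, options);
    %exitflag == 1 indicates the stall-based criterion fired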
