
When to stop simulated annealing when it doesn't stop by itself

  1. Jun 2, 2010 #1
    Hi guys,
    I posted something similar some time ago:
    This is a long story, so I'll make it short:
    I am working on a project where I need to find a matrix defined by a third-degree polynomial equation.
    Originally I thought a good approach would be to solve for it iteratively with a gradient descent technique, so I implemented a golden section line search in MATLAB (code below).
    The algorithm looks powerful (it finds the best step size automatically), but unfortunately the golden section line search does not avoid getting stuck in local minima (a multi-start workaround is sketched after that code).
    So I moved to a stochastic approach, simulated annealing; the problem now is that with my objective function the default convergence criterion is never met.
    So I have to force the annealing to stop. BUT WHEN?
    I tried stopping after a fixed number of iterations, but the annealing has the drawback that at some point it randomizes its trust region, so when I stop it, it may be really far from the minimum (I am using too many iterations). I have the same problem with a time limit. So when should I stop the search? (A stagnation-based stopping sketch follows the annealing code below.)
    Code using the golden section:

    %Initial guess for the matrix
    Cest = initial_guess;
    normdev = inf;
    stepsize = inf;  %initialize so the loop condition is defined on the first pass

    %Stopping condition
    stopping_condition = 10^(-5) * norm(X*X'/no_samples, 'fro');

    while abs(normdev*stepsize) > stopping_condition
        %Descent direction from the third-degree polynomial residual
        dnew = Cest - 1/no_samples*(X*X' - 2/sigma^2 * (Cest*Cest'*Cest - Cest*B'*Cest));
        %Find the best step size along dnew with a golden section line search
        stepsize = fminbnd(@(stepsize) step(stepsize,Cest,dnew,X*X',B,sigma,no_samples), -.1, .1);

        Cest = Cest + stepsize*dnew;
        normdev = norm(dnew, 'fro');
    end

    %Objective value after a trial step of the given size along dnew
    function error = step(stepsize, Cest, dnew, XX, B, sigma, no_samples)
    Cest = Cest + stepsize*dnew;
    error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
    end
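
    A common way to mitigate the local-minimum problem without abandoning gradient descent is a multi-start strategy: run the descent from several random initial points and keep the best result. A minimal sketch, assuming the loop above is wrapped in a hypothetical helper run_descent that returns the final estimate and its objective value (the 20 restarts and the +/-20 starting range, chosen to match the annealing bounds below, are arbitrary assumptions):

    %Multi-start gradient descent: restart from random points, keep the best.
    best_error = inf;
    best_Cest = [];
    for trial = 1:20
        %Random starting matrix with entries in [-20, 20] (assumed range)
        C0 = 40*rand(size(initial_guess)) - 20;
        %run_descent is a hypothetical wrapper around the loop above
        [Cest_trial, err_trial] = run_descent(C0, X, B, sigma, no_samples);
        if err_trial < best_error
            best_error = err_trial;
            best_Cest = Cest_trial;
        end
    end
    Cest = best_Cest;  %best local minimum found across all restarts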

    Code using simulated annealing:

    Cest = initial_point;

    %Box bounds on every entry of the matrix
    lb = -20*ones(size(Cest));
    ub = 20*ones(size(Cest));

    options = saoptimset('MaxIter', 300);

    %The anonymous function must take the optimization variable as its
    %argument and pass it to the objective; binding Cest directly would make
    %the objective constant in the search variable.
    Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), Cest, lb, ub, options);

    function error = myfuncobj(Cest, XX, B, sigma, no_samples)
    error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
    end
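
    On the original question of when to stop: one option is the solver's built-in stagnation control, saoptimset('StallIterLimit', ...), which together with 'TolFun' halts once the best objective value has stopped improving over a window of iterations. For more explicit control, a custom output function can implement the same idea; simulannealbnd tracks the best point visited (optimvalues.bestx / bestfval), so stopping somewhat late mainly costs time. A sketch, in which the 500-iteration window and 1e-8 tolerance are arbitrary assumptions:

    %Stop once the best objective value seen has stagnated, instead of a
    %hard MaxIter cutoff.
    options = saoptimset('TolFun', 1e-6, ...
                         'OutputFcns', @stagnation_stop);

    Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), ...
                          initial_point, lb, ub, options);

    %Output function: optimvalues.bestfval is the best value found so far;
    %stop when it has not improved for a window of iterations.
    function [stop, options, optchanged] = stagnation_stop(options, optimvalues, flag)
    persistent best_seen last_improved
    optchanged = false;
    stop = false;
    if strcmp(flag, 'init')
        best_seen = inf;
        last_improved = 0;
        return
    end
    if optimvalues.bestfval < best_seen - 1e-8           %meaningful improvement
        best_seen = optimvalues.bestfval;
        last_improved = optimvalues.iteration;
    elseif optimvalues.iteration - last_improved > 500   %assumed stall window
        stop = true;
    end
    end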
  2. Jun 9, 2010 #2
    Have you considered using code from a book or published paper?
  3. Jun 9, 2010 #3
    Thank you for your reply.
    Unfortunately... I can't.
    Have you seen my target matrix? Does it look like anything you can find in a book, or anything that has already been published?

    The expression for the gradient descent works (but it gets stuck in a local minimum).
    I guess a stochastic approach could solve it, but all the tutorials I've read assume an explicit error function; I haven't seen anything like the function I need to minimize.
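
    One way to bridge the gap between tutorial-style scalar objectives and this matrix problem is to optimize over the vectorized matrix: hand the solver a flat vector and reshape it back into a matrix inside the objective. A minimal sketch, assuming a square n-by-n Cest and reusing myfuncobj from above:

    n = size(initial_point, 1);
    x0 = initial_point(:);              %flatten the matrix into a vector

    %Objective over the vectorized variable: reshape, then evaluate as before
    obj = @(x) myfuncobj(reshape(x, n, n), X*X', B, sigma, no_samples);

    lbv = -20*ones(n*n, 1);
    ubv =  20*ones(n*n, 1);

    xbest = simulannealbnd(obj, x0, lbv, ubv, options);
    Cest = reshape(xbest, n, n);        %fold the solution back into a matrix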
  4. Jun 9, 2010 #4
    Why can't you use code from a book?

    Also, if you want me to look at your objective function, you should write it down and clearly label all its variables.