When to stop simulated annealing when it doesn't stop by itself

  • Context: Graduate 
  • Thread starter: alecrimi

Discussion Overview

The discussion revolves around the challenge of deciding when to stop a simulated annealing algorithm in the context of finding a matrix defined by a third-degree polynomial. Participants explore the limitations of gradient descent and the stochastic nature of simulated annealing, focusing in particular on convergence issues and stopping criteria.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant describes their experience with gradient descent, noting that it becomes stuck in local minima and thus seeks an alternative approach using simulated annealing.
  • The same participant expresses uncertainty about when to stop the simulated annealing process, as they fear stopping too early may lead to a solution far from the minima.
  • Another participant suggests considering code from published sources, implying that established methods might provide insights or solutions.
  • The original poster responds by stating that their target matrix is unique and not found in existing literature, suggesting a lack of applicable resources.
  • A further inquiry is made about the objective function, indicating that clarity on the function and its variables is necessary for better assistance.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the stopping criteria for simulated annealing, and there are competing views regarding the applicability of existing code and resources. The discussion remains unresolved regarding the best approach to take.

Contextual Notes

The discussion highlights limitations related to the specificity of the objective function and the challenges of applying general methods to unique problems. There is also an indication of unresolved mathematical steps in the optimization process.

alecrimi
Hi guys,
I posted something similar some time ago.
This is a long story; I'll make it short:
I am working on a project where I need to find a matrix defined by a third-degree polynomial.
Originally I thought a good idea would be to find the solution iteratively using a gradient-descent technique, so I implemented a golden-section line search in MATLAB (code below).
The algorithm looks powerful (it finds the best step size automatically), but unfortunately the golden-section line search does not prevent getting stuck in local minima.
So I moved to a stochastic approach, simulated annealing; the problem now is that with my objective function the default convergence criterion is never met.
So I have to force the annealing to stop. BUT WHEN ?
I tried stopping after a fixed number of iterations, but annealing has the drawback that at some point it re-randomizes its trust region, so if I stop it then (i.e. if I use too many iterations), the current point may be far from the minimum. A time limit has the same problem. So when should I stop the search?
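One practical answer (a sketch, not verified against this exact problem): simulannealbnd returns the best point found over the whole search, not the last accepted point, so the re-randomization after reannealing does not by itself lose the solution. That makes a stall-based stop safer than a raw iteration cap: stop when the best objective value has not improved for a while. In the Global Optimization Toolbox this is controlled by the 'TolFun' and 'StallIterLimit' options; the values below are illustrative, not tuned.

```matlab
% Sketch: stop simulated annealing on stagnation of the best value found,
% instead of a fixed iteration count. Option names are from the
% Global Optimization Toolbox; adjust the values to your problem.
options = saoptimset( ...
    'TolFun',           1e-6, ... % stop when the best f barely improves ...
    'StallIterLimit',   500,  ... % ... over this many iterations
    'MaxIter',          Inf,  ... % let the stall test, not MaxIter, decide
    'ReannealInterval', 100);

% The second output is the best objective value found.
[Cbest, fbest] = simulannealbnd( ...
    @(C) myfuncobj(C, X*X', B, sigma, no_samples), ...
    Cest, lb, ub, options);
```

With this setup, "when to stop" becomes "when the best-so-far value has stalled", which is robust to the solver wandering away from the minimum late in the run.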
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Code using the golden section:

% Initial guess for the matrix
Cest = initial_guess;
normdev = inf;
stepsize = inf;

% Stopping condition
stopping_condition = 10^(-5) * norm(X*X'/no_samples, 'fro');

while abs(normdev*stepsize) > stopping_condition
    % Descent direction from the third-degree polynomial
    dnew = Cest - 1/no_samples*(X*X' - 2/sigma^2 * (Cest*Cest'*Cest - Cest*B'*Cest));

    % Find the best step size with a golden-section line search
    stepsize = fminbnd(@(stepsize) step(stepsize, Cest, dnew, X*X', B, sigma, no_samples), -.1, .1);

    % Update
    Cest = Cest + stepsize*dnew;
    normdev = norm(dnew, 'fro');
end

function error = step(stepsize, Cest, dnew, XX, B, sigma, no_samples)
    Cest = Cest + stepsize*dnew;
    error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
end
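Since the line search itself cannot escape local minima, one common remedy is random restarts: run the same descent from several random initial guesses and keep the best result. A minimal sketch, assuming the while-loop above is wrapped in a function `golden_descent` that returns the final estimate and its objective value (that wrapper is hypothetical, not part of the original code):

```matlab
% Sketch: random-restart wrapper around the gradient descent above.
% golden_descent is a hypothetical function wrapping the while-loop,
% returning the final estimate and its objective value.
best_err = inf;
for trial = 1:20
    C0 = randn(size(initial_guess));              % random initial guess
    [C, err] = golden_descent(C0, X, B, sigma, no_samples);
    if err < best_err                             % keep the best local minimum
        best_err = err;
        Cbest = C;
    end
end
```

This does not guarantee the global minimum either, but it is often enough when the basin of attraction of the global minimum is not too small.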

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Code using simulated annealing:
Cest = initial_point;

lb = -20*ones(size(Cest));
ub =  20*ones(size(Cest));

options = saoptimset('MaxIter', 300);

% Note: the anonymous function must take the point being evaluated as its
% argument; the original @(funcobj) myfuncobj(Cest, ...) ignored its input
% and always evaluated the objective at the initial Cest.
Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), Cest, lb, ub, options);

function error = myfuncobj(Cest, XX, B, sigma, no_samples)
    error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
end
 
Have you considered using code from a book or published paper?
 
Hi,
Thank you for your reply.
Unfortunately... I can't.
Have you seen my target matrix? Does it look like anything you can find in a book, or anything that was already published?

The expression for the gradient descent works (but it gets stuck in a local minimum).
I guess the problem will be solved with a stochastic approach, but all the tutorials I have read assume an explicit error function; I haven't seen anything like the one I need to minimize.
 
Why can't you use code from a book?

Also, if you want me to look at your objective function, you should write it down and clearly label all its variables.
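For what it's worth, reading the two objective functions in the posted code, the quantity being minimized appears to be (writing $N$ for no_samples):

$$f(C) \;=\; \left\| \, C \;-\; \frac{1}{N}\left( XX^{\mathsf T} \;-\; \frac{2}{\sigma^{2}}\big( C^{3} - C B C \big) \right) \right\|_{F},$$

i.e. the Frobenius norm of the residual of the fixed-point equation defined by the third-degree matrix polynomial. (The gradient-descent version uses $CC^{\mathsf T}C$ and $CB^{\mathsf T}C$ instead of $C^{3}$ and $CBC$; the two agree only if $C$ and $B$ are symmetric, which the poster should confirm.)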
 
