When to stop simulated annealing when it doesn't stop by itself

  • Thread starter alecrimi
In summary, you are working on a project where you need to find a matrix defined by a third degree polynomial. You started with gradient descent using a golden section line search, but it did not avoid getting stuck in local minima. You then moved to a stochastic approach, simulated annealing, but with your objective function it never converges on its own. You therefore have to force the annealing to stop, but you do not know when.
  • #1
alecrimi
Hi guys
I posted something similar some time ago:
This is a long story, so I'll keep it short:
I am working on a project where I need to find a matrix defined by a third degree polynomial.
Originally I thought a good idea would be to find the solution iteratively with a gradient descent technique, so I implemented a golden section line search in MATLAB (code below).
The algorithm looks powerful (it finds the optimal step size automatically), but unfortunately the golden section line search does not avoid getting stuck in local minima.
So I moved to a stochastic approach, simulated annealing. The problem now is that with my objective function the default convergence criterion is never met.
So I have to force the annealing to stop. BUT WHEN?
I tried stopping after a fixed number of iterations, but at some point the annealing re-randomizes its trust region, so when I stop it the current point may be far from the minimum (I am using too many iterations). I have the same problem with a time limit. So when should I stop the search?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Code using the golden section:

% Initial guess for the matrix defined by the third degree polynomial
Cest = initial_guess;
normdev  = inf;
stepsize = inf;

% Stopping condition, scaled to the size of the data covariance
stopping_condition = 10^(-5) * norm(X*X'/no_samples, 'fro');

while abs(normdev*stepsize) > stopping_condition
    % Descent direction from the third degree polynomial
    dnew = Cest - 1/no_samples*(X*X' - 2/sigma^2 * (Cest*Cest'*Cest - Cest*B'*Cest));

    % Find the best step size with a golden section line search
    stepsize = fminbnd(@(s) step(s, Cest, dnew, X*X', B, sigma, no_samples), -.1, .1);

    % Update the estimate and the residual norm
    Cest = Cest + stepsize*dnew;
    normdev = norm(dnew, 'fro');
end

function error = step(stepsize, Cest, dnew, XX, B, sigma, no_samples)
% Objective value after taking a step of the given size along dnew
Cest  = Cest + stepsize*dnew;
error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Code using simulated annealing:

Cest = initial_point;

% Box constraints on the entries of Cest
lb = -20*ones(size(Cest));
ub = 20*ones(size(Cest));

options = saoptimset('MaxIter', 300);

% Minimize the objective; the anonymous function passes the current
% candidate C through to myfuncobj (the data matrices are captured)
Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), Cest, lb, ub, options);

function error = myfuncobj(Cest, XX, B, sigma, no_samples)
% Frobenius norm of the residual of the third degree polynomial equation
error = norm(Cest - 1/no_samples*(XX - 2/sigma^2 * (Cest^3 - Cest*B*Cest)), 'fro');
end
 
  • #2
Have you considered using code from a book or published paper?
 
  • #3
Hi
Thank you for your reply.
Unfortunately... I can't.
Have you seen my target matrix? Does it look like anything you can find in a book, or anything that has already been published?

The expression for the gradient descent works (but it gets stuck in a local minimum).
I guess it has to be solved with a stochastic approach, but all the tutorials I have read assume an explicit error function; I haven't seen anything like the one I need to minimize.
 
  • #4
Why can't you use code from a book?

Also, if you want me to look at your objective function, you should write it down and clearly label all its variables.
 
  • #5


As a scientist, my response would be to carefully evaluate the convergence of the simulated annealing algorithm before considering stopping it manually. This could involve analyzing the behavior of the objective function over multiple runs, and determining if it is consistently reaching a certain value or if it is fluctuating. Additionally, it may be helpful to plot the trajectory of the algorithm to visually see if it is getting closer to the global minimum or if it is stuck in a local minimum.
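
For concreteness, here is a minimal sketch of such a visual check using the built-in plot functions of the Global Optimization Toolbox (@saplotbestf and @saplottemperature); myfuncobj, initial_point, lb and ub are the ones from the original post:

% Sketch: plot the best objective value and the temperature at every iteration
options = saoptimset('PlotFcns', {@saplotbestf, @saplottemperature});
[Cbest, fbest] = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), ...
                                initial_point, lb, ub, options);

If the best-value curve flattens out long before the run ends, the remaining iterations are not buying anything and stopping earlier is safe.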

If it is determined that the algorithm is not converging, there are a few potential options for stopping it manually. One option could be to set a maximum number of iterations or a time limit, as mentioned in the post. However, this may not always be reliable as the algorithm may still be far from the global minimum when it reaches these limits.
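
A rough sketch of such hard caps (the values are arbitrary; 'MaxIter' and 'TimeLimit' are standard saoptimset options):

% Stop after at most 2000 iterations or 60 seconds of wall time,
% whichever comes first; the run may still be far from the global minimum
options = saoptimset('MaxIter', 2000, 'TimeLimit', 60);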

Another option could be to monitor the temperature and energy of the system during the annealing process. If the temperature remains high and the energy does not decrease significantly, it may be an indication that the algorithm is stuck and may benefit from being stopped.
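
A minimal sketch of such a monitor, assuming the documented simulannealbnd output-function interface (options, optimvalues, flag) with optimvalues fields bestfval, iteration and temperature; the stall window of 500 iterations and the temperature threshold are arbitrary choices:

function [stop, options, optchanged] = sa_monitor(options, optimvalues, flag)
% Stop the annealing when the best objective value has not improved for a
% while and the system has already cooled down
persistent best_so_far last_improvement
optchanged = false;
stop = false;

if strcmp(flag, 'init')
    best_so_far = inf;
    last_improvement = 0;
    return
end

% Record the iteration of the last significant improvement
if optimvalues.bestfval < best_so_far - 1e-8
    best_so_far = optimvalues.bestfval;
    last_improvement = optimvalues.iteration;
end

stalled = (optimvalues.iteration - last_improvement) > 500;
cold    = all(optimvalues.temperature < 1e-3);
stop    = stalled && cold;
end

It would be hooked in with options = saoptimset('OutputFcns', @sa_monitor);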

Overall, it is important to carefully evaluate the behavior of the algorithm and consider the potential consequences of stopping it manually before making a decision. It may also be helpful to consult with other experts or colleagues for their insights and advice.
 

1. When should I stop a simulated annealing algorithm if it doesn't stop by itself?

The decision to stop a simulated annealing algorithm ultimately depends on the specific problem being solved and the convergence criteria being used. Generally, if the algorithm is not showing any significant improvement after a certain number of iterations, it may be a good idea to stop it. Another approach is to set a maximum number of iterations and stop the algorithm once that limit is reached.
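
A sketch of that kind of criterion with simulannealbnd (assuming the 'StallIterLimit' and 'TolFun' options of saoptimset, which stop the run once the average change in the objective over the stall window drops below the tolerance; the values are illustrative):

% Stop when the objective has effectively stopped improving
options = saoptimset('StallIterLimit', 1000, 'TolFun', 1e-6);
Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), Cest, lb, ub, options);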

2. How do I know if a simulated annealing algorithm is stuck in a local minimum?

A simulated annealing algorithm may get stuck in a local minimum if it is unable to escape from a sub-optimal solution. One way to identify this is by monitoring the objective function values at each iteration. If the values are not improving or fluctuating around a certain value, it may indicate that the algorithm is stuck. Additionally, comparing the results of multiple runs with different initial conditions can also help identify if the algorithm is stuck in a local minimum.
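
A small sketch of such a multi-start comparison (the number of restarts and the perturbation scale are arbitrary; myfuncobj, initial_point, lb, ub and options are from the original post):

% Run the annealing from several perturbed starting points and keep the best
n_restarts = 5;
best_f = inf;
for r = 1:n_restarts
    C0 = initial_point + 0.5*randn(size(initial_point));   % perturbed start
    [C_r, f_r] = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), ...
                                C0, lb, ub, options);
    fprintf('restart %d: best objective %g\n', r, f_r);
    if f_r < best_f
        best_f = f_r;
        Cest = C_r;
    end
end
% Widely different best values across restarts suggest the runs are being
% trapped in different local minima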

3. Can I use a fixed temperature schedule to stop a simulated annealing algorithm?

Using a fixed temperature schedule to stop a simulated annealing algorithm may not be the most effective approach. This is because the algorithm may still be in the process of exploring the solution space and may not have reached the optimal solution yet. It is better to use a convergence criterion based on the objective function values or a maximum number of iterations instead.
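
For reference, the cooling schedule in simulannealbnd is configured through the 'TemperatureFcn' option rather than used as a stopping rule; a sketch assuming the built-in schedules of the Global Optimization Toolbox:

% Boltzmann cooling keeps the temperature higher for longer, so the search
% keeps exploring instead of freezing early; stopping is still governed by
% the usual criteria (TolFun, StallIterLimit, MaxIter, ...)
options = saoptimset('TemperatureFcn', @temperatureboltz, ...
                     'InitialTemperature', 100);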

4. What are some common convergence criteria used to stop simulated annealing?

Some common convergence criteria used to stop simulated annealing include a fixed number of iterations, a maximum number of iterations without improvement, a threshold for the objective function values, or a combination of these. It is also important to consider the complexity of the problem being solved and adjust the convergence criteria accordingly.

5. Should I always stop a simulated annealing algorithm when it reaches the global minimum?

Not necessarily. While the goal of a simulated annealing algorithm is to find the global minimum, it is important to consider the computational resources and time constraints. For some problems, the global minimum may not be significantly better than a sub-optimal solution and it may not be worth the additional computational effort to find it. In such cases, it may be more practical to stop the algorithm once it reaches a satisfactory solution.
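
If a "good enough" objective value is known in advance, the run can be told to stop as soon as it is reached. A sketch using the 'ObjectiveLimit' option of saoptimset; the target below simply reuses the scale of the stopping condition from the original post with a looser factor:

% Stop as soon as the residual norm drops to an acceptable level
target  = 10^(-3) * norm(X*X'/no_samples, 'fro');
options = saoptimset('ObjectiveLimit', target, 'MaxIter', 10000);
Cest = simulannealbnd(@(C) myfuncobj(C, X*X', B, sigma, no_samples), Cest, lb, ub, options);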
