Particle Swarm Optimization vs. Newton's Method

Summary:
Particle Swarm Optimization (PSO) can be effective for certain optimization problems, especially when gradient-based methods such as Newton's method struggle with non-differentiable or noisy functions. Each technique has its strengths, and the choice depends on the problem's structure: for convex problems, convex optimization methods are preferable; if derivatives are easy to compute, gradient methods should be used; and Newton-type methods shine when second derivatives are available. PSO and similar black-box methods do not exploit problem structure, which makes them versatile but typically slower to converge. Ultimately, selecting the right approach hinges on understanding the problem's characteristics and constraints.
newphysist:
I have been reading Stephen Boyd's book Convex Optimization, and I have learned to formulate various problem classes such as LP, QP, QCQP, SOCP, and SDP. I also learned how to formulate the SVM classification problem as an optimization problem.

Now I am reading about gradient methods, Newton's method, and so on, which are used to solve the problems mentioned above.

However, I just came across Particle Swarm Optimization (PSO), and it makes me wonder: is PSO effective for the problem types above? Does it perform as well as Newton's method? Does anyone have experience with this?

Thanks
 
It depends on the specifics. Particle Swarm Optimization can be useful when gradient methods fail, for example when the function is not differentiable, not well behaved, noisy, and so on. Each method will be good at solving certain instances and not as good at others.
 
Q: So if you have an array of methods, each well suited to certain conditions, and you come upon a situation that is not an a priori fit for any of your existing methods, what do you do?

Iterate through methods? I hope not. Develop extended heuristics? I would really like to know.

Thanks.
 
Different optimization problems have different levels of structure. As a broad principle, you want to use a method that takes advantage of as much structure from your problem as possible.

For convex problems, it's good to use convex optimization techniques.
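
For instance, here is a minimal sketch of a convex problem solved with cvxpy, using the soft-margin SVM the original poster mentioned; the data and the penalty parameter C are made up for illustration:

```python
# Soft-margin SVM as a convex problem, solved with cvxpy.
# X, y, and C are synthetic/illustrative, not from the thread.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))                       # 40 points in 2-D
y = np.sign(X[:, 0] + X[:, 1] + 0.1 * rng.standard_normal(40))

w = cp.Variable(2)
b = cp.Variable()
C = 1.0
hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b)))  # hinge loss
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * hinge))
prob.solve()
print(w.value, b.value)
```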

If the unknowns are separated into groups, where the unknowns are very tightly coupled within each group but loosely coupled between groups, then you probably want to use some sort of decomposition method to split the problem up into smaller pieces.
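
As a sketch of the decomposition idea, here is block coordinate descent (alternating minimization) on a toy two-block least-squares problem; the matrices A, B and the vector c are made up:

```python
# Block coordinate descent for min_{x,y} ||A x + B y - c||^2:
# solve exactly for one block while the other is held fixed.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
B = rng.standard_normal((30, 5))
c = rng.standard_normal(30)

x = np.zeros(5)
y = np.zeros(5)
for _ in range(50):
    x, *_ = np.linalg.lstsq(A, c - B @ y, rcond=None)  # minimize over x
    y, *_ = np.linalg.lstsq(B, c - A @ x, rcond=None)  # minimize over y
print(np.linalg.norm(A @ x + B @ y - c))               # residual shrinks
```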

If your problem has an easily computed derivative, use a gradient-type method that takes advantage of it.
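
A minimal gradient-descent sketch on a toy quadratic, using a 1/L step size (Q and b are made up):

```python
# Gradient descent on f(x) = 0.5 x'Qx - b'x with step 1/L,
# where L is the largest eigenvalue of Q.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M.T @ M + np.eye(5)                   # symmetric positive definite
b = rng.standard_normal(5)

grad = lambda x: Q @ x - b                # gradient of f
step = 1.0 / np.linalg.eigvalsh(Q)[-1]    # 1/L step size

x = np.zeros(5)
for _ in range(500):
    x = x - step * grad(x)
print(np.linalg.norm(grad(x)))            # near zero at the minimizer
```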

If your problem has easily computed second derivatives, Newton-type methods will do well, since they use Hessian information.
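
Here is a minimal Newton's-method sketch on l2-regularized logistic regression, where the Hessian is cheap to form; the data and the regularization weight lam are synthetic:

```python
# Newton's method for min_w sum_i log(1 + exp(-y_i x_i'w)) + (lam/2)||w||^2.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sign(X @ np.array([1.0, -2.0, 0.5]) + 0.2 * rng.standard_normal(50))
lam = 0.1

w = np.zeros(3)
for _ in range(10):
    z = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(-z))                       # sigmoid(y_i x_i'w)
    grad = -X.T @ (y * (1.0 - p)) + lam * w
    H = X.T @ (X * (p * (1.0 - p))[:, None]) + lam * np.eye(3)
    w -= np.linalg.solve(H, grad)                      # Newton step
print(w)
```

With a well-conditioned Hessian like this, a handful of iterations is typically enough, which is exactly the payoff of using second-order information.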

And so on.

Things like particle swarm methods, evolutionary algorithms, simulated annealing, and so on are basically black-box: all they use is function evaluations, and they make no assumptions about the special structure of the problem you're solving. As a result, they will work on a wider class of problems where other methods fail, but they will probably converge much more slowly.
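
For comparison, here is a minimal particle-swarm sketch on the Rosenbrock function; only function evaluations are used, and the inertia/cognitive/social coefficients below are conventional textbook choices rather than tuned values:

```python
# Basic PSO on the Rosenbrock function (minimum at (1, 1)).
import numpy as np

def f(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

rng = np.random.default_rng(0)
n_particles, dim = 30, 2
pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                           # per-particle best position
pbest_val = np.array([f(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()     # swarm-wide best

w_in, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([f(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
print(gbest, f(gbest))
```

Note how the swarm needs hundreds of evaluations to do what a derivative-based method does in a few iterations when derivatives are available, which illustrates the convergence trade-off.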
 
