
Particle Swarm Optimization vs. Newton's Method

  1. Dec 17, 2012 #1
    I have been reading Stephen Boyd's book Convex Optimization, and I have learned to formulate various problem classes such as LPs, QPs, QCQPs, SOCPs, and SDPs. I also learned how to formulate the SVM classification problem as an optimization problem.

    Now I am reading about gradient methods, Newton's method, etc., which can be used to solve the problem classes mentioned above.

    However, I just came across Particle Swarm Optimization (PSO). So it makes me wonder: is PSO as effective at solving the above problem types? Does it perform as well as Newton's method? Does anyone have experience with this?

  3. Dec 17, 2012 #2


    Homework Helper

    It depends on the specifics. Particle Swarm Optimization can be useful when gradient methods fail: for example, when the function is not differentiable, badly behaved, noisy, and so on. Each method will be good at solving certain problem instances and not as good at others.
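    To make this concrete, here is a minimal global-best PSO sketch in Python (not from the thread). The objective, swarm size, and coefficients (inertia w = 0.7, cognitive/social weights c1 = c2 = 1.5) are common textbook defaults, chosen purely for illustration. Note that it only ever calls f, never a derivative, which is why it still works on the non-differentiable objective below.

    ```python
    import random

    def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
            w=0.7, c1=1.5, c2=1.5):
        # Initialize positions uniformly in the search box, velocities at zero.
        pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]              # each particle's best-seen position
        pbest_val = [f(p) for p in pos]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Velocity update: inertia + pull toward personal and global bests.
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                v = f(pos[i])
                if v < pbest_val[i]:             # update personal best
                    pbest[i], pbest_val[i] = pos[i][:], v
                    if v < gbest_val:            # update global best
                        gbest, gbest_val = pos[i][:], v
        return gbest, gbest_val

    random.seed(0)  # for a reproducible run
    # A non-differentiable objective (minimum at x = 1, y = -2):
    best, val = pso(lambda x: abs(x[0] - 1) + abs(x[1] + 2), dim=2)
    ```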
  4. Dec 17, 2012 #3

    jim mcnamara


    Staff: Mentor

    Q: So if you have an array of methods, each well suited to certain conditions, and you come upon a situation that is not an a priori fit for any of your existing methods, what do you do?

    Iterate through methods? I hope not. Develop extended heuristics? I would really like to know.

  5. Dec 18, 2012 #4
    Different optimization problems have different levels of structure. As a broad principle, you want to use a method that takes advantage of as much structure from your problem as possible.

    For convex problems, it's good to use convex optimization techniques.

    If the unknowns are separated into groups, where the unknowns are very tightly coupled within each group but loosely coupled between groups, then you probably want to use some sort of decomposition method to split the problem up into smaller pieces.
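    As a toy illustration of this idea (my own example, not from the post), block coordinate descent alternately minimizes over one group of unknowns with the others held fixed. Here the two "groups" are single variables x and y, coupled only through a small eps * x * y term, so each subproblem has a closed-form solution:

    ```python
    # Objective: f(x, y) = (x - 1)^2 + (y - 2)^2 + eps * x * y, eps small
    # (loose coupling). Setting each partial derivative to zero gives the
    # per-block closed-form updates used below.
    eps = 0.1
    x, y = 0.0, 0.0
    for _ in range(50):
        x = 1.0 - eps * y / 2.0   # minimize f over x with y fixed
        y = 2.0 - eps * x / 2.0   # minimize f over y with x fixed
    ```

    Because the coupling is weak, the alternation converges rapidly to the joint minimizer (x ≈ 0.9023, y ≈ 1.9549).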

    If your problem has an easily computed derivative, use a gradient-type method that takes advantage of it.
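    A minimal sketch of such a gradient-type method (again an illustration, with a made-up quadratic objective and a fixed step size):

    ```python
    # Gradient descent on f(x) = (x - 3)^2, whose gradient is 2(x - 3).
    x = 0.0
    step = 0.1
    for _ in range(100):
        grad = 2.0 * (x - 3.0)
        x -= step * grad        # move opposite the gradient
    # x converges to the minimizer at 3.0
    ```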

    If your problem has two easily computed derivatives, Newton-type methods will do well, since they use Hessian (second-derivative) information.
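    For instance, in one dimension the "Hessian" is just the second derivative, and a Newton step is x - f'(x)/f''(x). A sketch on an illustrative objective of my choosing:

    ```python
    import math

    # Newton's method on f(x) = exp(x) - 2x:
    # f'(x) = exp(x) - 2, f''(x) = exp(x); the minimizer is x = ln 2.
    x = 0.0
    for _ in range(10):
        x -= (math.exp(x) - 2.0) / math.exp(x)   # Newton step: x - f'/f''
    ```

    The second-derivative information gives quadratic convergence near the solution, which is why so few iterations suffice here compared with plain gradient descent.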

    And so on.

    Things like particle swarm methods, evolutionary algorithms, simulated annealing, etc., are basically black-box methods. All they use are function evaluations, and they make no assumptions about any special structure of the problem you're solving. As a result, they will work on a wider class of problems, including ones where the other methods fail, but they will usually converge much more slowly.
    Last edited: Dec 18, 2012