Conjugate Gradient Methods Aren't Working

In summary, the conversation is about implementing the solution to a control-theoretic problem in Matlab, part of which requires minimizing a function f(x). The predecessor's hand-written conjugate gradient method is not converging, and a canned replacement algorithm fails to converge as well, suggesting the problem may be ill-suited to gradient methods, possibly because the objective lacks sufficient differentiability. Suggested alternative minimization algorithms include plain gradient descent, the Nelder-Mead simplex, and random (Monte Carlo) search, with the caveat that more information about the function is needed to determine the best approach.
  • #1
Kreizhn
I'm working on a control theoretical problem and trying to implement the solution in Matlab. Part of the solution requires minimizing a function f(x), for which my predecessor has opted to use a conjugate gradient method. He wrote his own conjugate gradient method, but it's not converging. I've replaced his method with a canned algorithm, but it is still not converging. This suggests to me that the problem is ill-suited to gradient methods.

Can anybody suggest to me why this might be the case? Is it likely because the surface lacks a sufficient degree of differentiability? Also, can anybody suggest another minimization algorithm that I could attempt to use instead of gradient descent?
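
If the objective really is non-differentiable, a derivative-free method such as Nelder-Mead is a reasonable first sanity check. Below is a minimal sketch using MATLAB's built-in fminsearch; the objective handle myObjective and the starting point x0 are placeholders standing in for whatever the actual problem uses.

```matlab
% Minimal sketch: derivative-free minimization with Nelder-Mead (fminsearch).
% 'myObjective' and 'x0' are placeholders for the actual problem.
f  = @(x) myObjective(x);        % objective handle, f: R^n -> R
x0 = zeros(4, 1);                % example starting point

opts = optimset('Display', 'iter', ...   % print progress each iteration
                'TolFun', 1e-8, ...
                'TolX',   1e-8, ...
                'MaxFunEvals', 1e5);

[xmin, fmin, exitflag] = fminsearch(f, x0, opts);
% exitflag == 1: the simplex converged to the requested tolerances;
% exitflag == 0: it ran out of iterations/function evaluations.
```

If this converges while the conjugate gradient code does not, that is evidence the difficulty lies with the gradient information (or its implementation) rather than with the minimization problem itself.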
 

1. Why are my conjugate gradient methods not working?

There could be several reasons why your conjugate gradient method is not working. One possibility is that your initial guess is too far from the actual solution, making it hard for the algorithm to converge. Another is that your matrix (or, in a minimization problem, the Hessian of the objective) is ill-conditioned, so the algorithm makes very slow progress. If the objective is not differentiable, gradient-based methods, including conjugate gradients, may fail outright. Finally, there may simply be a bug in your implementation of the algorithm.
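
A common way to rule out an implementation bug is to compare the analytic gradient used by the solver against a finite-difference approximation. A small sketch follows; the handles f and gradf are assumptions about how the existing code is organized.

```matlab
% Sketch of a finite-difference gradient check.
% f is the objective handle, gradf the hand-coded gradient; both are
% assumptions about how the existing code is organized.
x   = randn(4, 1);           % any representative test point
g   = gradf(x);              % analytic gradient to be checked
h   = 1e-6;                  % central-difference step
gfd = zeros(size(x));
for i = 1:numel(x)
    e      = zeros(size(x));
    e(i)   = h;
    gfd(i) = (f(x + e) - f(x - e)) / (2*h);   % central difference
end
relErr = norm(g - gfd) / max(norm(gfd), eps);
fprintf('relative gradient error: %.2e\n', relErr);
% A relative error much larger than ~1e-6 usually points to a bug in gradf
% (or to non-differentiability of f near x).
```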

2. How can I improve the performance of my conjugate gradient methods?

To improve the performance of your conjugate gradient method, you can try a better initial guess, such as the zero vector or the solution of a previous, closely related solve. You can also precondition the system to improve its condition number. Lastly, double-check your code for errors that may be hurting convergence, for example by running a finite-difference gradient check like the one sketched above.
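
To make the preconditioning idea concrete: if the underlying subproblem is a symmetric positive-definite linear system Ax = b, MATLAB's pcg can be combined with an incomplete Cholesky preconditioner. The matrix below is a synthetic example, not the poster's actual system.

```matlab
% Sketch: plain vs. preconditioned conjugate gradients on a synthetic SPD system.
A = gallery('poisson', 32);        % sparse SPD test matrix (2-D Laplacian)
b = ones(size(A,1), 1);

tol = 1e-8;  maxit = 500;

% Plain conjugate gradients
[x1, flag1, relres1, iter1] = pcg(A, b, tol, maxit);

% Conjugate gradients with an incomplete Cholesky preconditioner M = L*L'
L = ichol(A);
[x2, flag2, relres2, iter2] = pcg(A, b, tol, maxit, L, L');

fprintf('CG: %d iterations, preconditioned CG: %d iterations\n', iter1, iter2);
```

The preconditioned run typically needs far fewer iterations because the preconditioned system has a much smaller effective condition number.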

3. Is it normal for conjugate gradient methods to take a long time to converge?

The convergence rate of conjugate gradient methods depends on various factors, such as the condition number of the matrix and the initial guess. In some cases, it may take longer for the algorithm to converge, especially if the matrix is ill-conditioned or the initial guess is far from the solution. However, if the algorithm is taking an unusually long time to converge, there may be an issue with the implementation or the problem itself.
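
For the linear case, the standard error bound gives a rough feel for how strongly the condition number drives the iteration count: the A-norm error shrinks at least by a factor 2*((sqrt(kappa)-1)/(sqrt(kappa)+1))^k after k steps. The sketch below turns an estimated condition number into a pessimistic iteration estimate; A stands for whatever system is being solved.

```matlab
% Sketch: worst-case CG iteration estimate from the condition number, using
% the classical bound  ||e_k||_A <= 2*((sqrt(kappa)-1)/(sqrt(kappa)+1))^k * ||e_0||_A.
kappa  = condest(A);                       % cheap condition-number estimate
tolRel = 1e-8;                             % desired relative error reduction
rho    = (sqrt(kappa) - 1) / (sqrt(kappa) + 1);
kEst   = ceil(log(tolRel/2) / log(rho));   % smallest k with 2*rho^k <= tolRel
fprintf('kappa ~ %.2e, worst-case CG iterations ~ %d\n', kappa, kEst);
```

In practice CG often does much better than this bound, especially when the eigenvalues are clustered, but a huge kappa is a clear warning sign.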

4. Can I use conjugate gradient methods for non-linear systems?

The classical (linear) conjugate gradient method applies to symmetric positive-definite linear systems, which is equivalent to minimizing a quadratic function. For general non-linear minimization there are non-linear conjugate gradient variants (such as Fletcher-Reeves and Polak-Ribière), as well as alternatives like Newton's method, quasi-Newton methods, and gradient descent. All of these are gradient-based and still require the objective to be differentiable.
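
For a concrete picture of the smooth non-linear case in MATLAB: the Optimization Toolbox function fminunc (here using its quasi-Newton algorithm, not conjugate gradients, but the same gradient-based family) accepts a hand-coded gradient. The Rosenbrock function below is only an illustrative stand-in for the real objective.

```matlab
% Sketch: smooth non-linear minimization with a hand-coded gradient
% (requires the Optimization Toolbox). Rosenbrock stands in for the
% real objective.
opts = optimoptions('fminunc', ...
                    'SpecifyObjectiveGradient', true, ...
                    'Display', 'iter');
[xmin, fmin] = fminunc(@rosen, [-1.2; 1], opts);

function [f, g] = rosen(x)           % local function at the end of a script
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    if nargout > 1                   % gradient requested by the solver
        g = [ -400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
               200*(x(2) - x(1)^2) ];
    end
end
```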

5. Are there any limitations to using conjugate gradient methods?

One limitation of the linear conjugate gradient method is that it applies only to symmetric positive-definite linear systems; the non-linear variants require a differentiable objective and a reliable line search. Performance also degrades when the matrix (or Hessian) is ill-conditioned, and the algorithm may struggle to converge if the initial guess is far from the solution. It is important to consider these limitations when deciding whether a conjugate gradient method is appropriate for a particular problem.
