Iterative methods: system of linear equations

In summary: the poster is looking for an efficient technique for solving a system of linear equations that always converges. Methods like Jacobi and Gauss-Seidel do not always converge, and Cramer's rule is not efficient for large systems. Replies suggest looking into LU-factorization schemes, while noting that these faster methods may only have conditional convergence, and recommend the conjugate-gradient method for symmetric positive-definite systems.
  • #1
defunc
Hi all,

I'm looking for an effective technique for solving a system of linear equations. It should always converge, unlike Jacobi or Gauss-Seidel, and it has to be more efficient than ordinary Gauss elimination or Cramer's rule for large matrices.

Thanks!
 
  • #2
First off, anything is more efficient than Cramer's Rule!

Secondly, why do you think Gauss elimination is focused on so much?
It is precisely because it IS the major technique that is guaranteed to produce a solution.

You may look into LU-factorization schemes and so on, but typically these faster (and often preferred) methods will only have conditional convergence.

Simply put, calculation speed is gained by dropping mathematical safe-guards that ensure absolute convergence.

Thus, what you are seeking is, really, a contradiction in terms.
 
  • #3
The more advanced methods usually deal with specific subclasses of matrices. For example, if you're trying to solve symmetric positive-definite systems you might want to look at the conjugate-gradient method:
http://en.wikipedia.org/wiki/Conjugate_gradient_method
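For concreteness, here is a minimal sketch of the conjugate-gradient iteration in Python/NumPy. The function name, tolerances, and the small example system are illustrative only, not taken from any particular library:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:        # residual small enough: done
            break
        p = r + (rs_new / rs_old) * p    # new A-conjugate search direction
        rs_old = rs_new
    return x

# Illustrative SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n steps for an n-by-n SPD matrix; in practice it is used as an iterative method and stopped once the residual is small.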
 

What are iterative methods for solving a system of linear equations?

Iterative methods are algorithms used to approximate the solution to a system of linear equations. Instead of solving the entire system at once, these methods use an iterative process to refine the solution until it reaches a desired level of accuracy.

Why are iterative methods used instead of traditional methods for solving systems of linear equations?

Iterative methods are often used because they can be more efficient for large systems of equations. Traditional methods, such as Gaussian elimination, require a significant amount of computation and memory. Iterative methods, on the other hand, only require the storage of a few vectors and matrices and can often converge to a solution faster.

How do you choose which iterative method to use for a specific system of linear equations?

The choice of iterative method depends on various factors, such as the size and structure of the system, the desired level of accuracy, and the availability of initial guesses for the solution. Some common iterative methods include Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR).
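As an illustration of the simplest of these, a minimal Jacobi iteration might look like the following sketch (NumPy; the example matrix is strictly diagonally dominant, a standard sufficient condition for Jacobi to converge):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_new = D^{-1} (b - R x), with A = D + R."""
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant example, so the iteration converges
A = np.array([[10.0, 2.0], [3.0, 9.0]])
b = np.array([12.0, 12.0])
x = jacobi(A, b)
```

Gauss-Seidel and SOR refine this idea by using already-updated components within each sweep, which usually converges faster when it converges at all.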

What is the convergence criterion for iterative methods?

The convergence criterion for iterative methods is a condition that defines when the iterative process should stop. It is typically based on the error between the current and previous iterations. One common criterion is to stop when the relative error falls below a certain threshold.
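A relative-error stopping test of this kind is short; a sketch (the function name and tolerance are illustrative):

```python
import numpy as np

def converged(x_new, x_old, rtol=1e-8):
    """Stop when the relative change between successive iterates is below rtol."""
    return np.linalg.norm(x_new - x_old) <= rtol * np.linalg.norm(x_new)
```

In practice this is often combined with a residual test, ||b - A x|| <= rtol * ||b||, since a small change between iterates does not by itself guarantee a small residual.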

Can iterative methods be used for non-linear systems of equations?

Yes, iterative methods can also be used for non-linear systems of equations. In these cases, the iterative process may involve finding successive approximations of the solution until it converges to a desired level of accuracy. However, the convergence of iterative methods for non-linear systems may not be guaranteed, and the choice of method and convergence criterion may differ from those used for linear systems.
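Newton's method, for instance, reduces a nonlinear system F(x) = 0 to a sequence of linear solves with the Jacobian. A minimal sketch (the example system is chosen purely for illustration):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: each step solves the linear system J(x) dx = -F(x)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), -F(x))   # linear solve per iteration
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: solve x^2 + y^2 = 1 and x = y; root at (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
x = newton_system(F, J, [1.0, 0.5])
```

Convergence here is only local: a poor initial guess can make the iteration diverge, which is exactly the caveat mentioned above.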
