Iterative methods: system of linear equations

AI Thread Summary
A technique for solving systems of linear equations is sought that is guaranteed to converge yet is more efficient than ordinary Gaussian elimination or Cramer's Rule for large matrices. Replies point out that Gaussian elimination receives so much attention precisely because it reliably produces a solution, while faster schemes such as LU-factorization typically offer only conditional convergence: speed is gained by dropping the safeguards that guarantee a result. For specific classes of matrices, such as symmetric positive-definite systems, the conjugate-gradient method is recommended. In general, combining efficiency with guaranteed convergence in iterative methods involves an inherent trade-off.
defunc
Hi all,

I'm looking for an effective technique for solving a system of linear equations. It should always converge, unlike Jacobi or Gauss-Seidel etc. It has to be more efficient than ordinary Gauss elimination or Cramer's Rule for large matrices.

Thanks!
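To make the convergence caveat concrete, here is a minimal Jacobi sketch in plain Python (my own illustration, not from the thread; the 2x2 system is made up). It converges here only because the matrix is strictly diagonally dominant; swap the two rows and the very same iteration diverges:

```python
# Jacobi iteration: x_new[i] = (b[i] - sum_{j != i} A[i][j] * x[j]) / A[i][i]
# Converges only under conditions such as strict diagonal dominance of A.
def jacobi(A, b, x0, iters):
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        x = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
    return x

# Strictly diagonally dominant system, so Jacobi is guaranteed to converge here.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = jacobi(A, b, [0.0, 0.0], 50)  # exact solution is (1/11, 7/11)
```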
 
First off, anything is more efficient than Cramer's Rule!

Secondly, why do you think Gauss elimination gets so much attention?
It is precisely because it IS the major technique that is guaranteed to produce a solution.

You may look into LU-factorization schemes and so on, but typically these faster (and often preferred) methods offer only conditional convergence.
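As a sketch of what such a scheme looks like, here is a Doolittle LU factorization without pivoting in plain Python (my own illustration, not from the thread). The division by the pivot U[k][k] is exactly where the "conditional" part bites: a zero or tiny pivot breaks the factorization unless pivoting safeguards are added back in:

```python
# Doolittle LU factorization without pivoting: A = L * U.
# The pivot division U[k][k] is the unguarded step: it fails (or loses
# accuracy) for zero/tiny pivots, which is why robust codes add pivoting.
def lu_decompose(A):
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # unguarded pivot division
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

# Solve A x = b via forward substitution (L y = b), then back substitution (U x = y).
def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x
```

Once A is factored, each new right-hand side b costs only the two cheap substitution sweeps, which is where the speed advantage over re-running full elimination comes from.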

Simply put, calculation speed is gained by dropping the mathematical safeguards that ensure absolute convergence.

Thus, what you are seeking is, really, a contradiction in terms.
 
The more advanced methods usually deal with specific subclasses of matrices. For example, if you're trying to solve symmetric positive-definite systems, you might want to look at the conjugate-gradient method:
http://en.wikipedia.org/wiki/Conjugate_gradient_method
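For the curious, here is a bare-bones conjugate-gradient sketch in plain Python (my own illustration, not from the thread; real code should use a library routine, and this version assumes A is symmetric positive-definite, which is what guarantees convergence):

```python
# Conjugate gradient for A x = b, with A symmetric positive-definite.
# In exact arithmetic it converges in at most n steps for an n x n system.
def cg(A, b, x0=None, tol=1e-10, maxiter=100):
    n = len(b)
    x = [0.0] * n if x0 is None else list(x0)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]  # residual b - A x
    p = list(r)                                         # initial search direction
    rs = dot(r, r)
    for _ in range(maxiter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                 # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:                  # residual small enough: done
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

On a 2x2 SPD system like A = [[4, 1], [1, 3]], b = [1, 2], this reaches the exact solution (1/11, 7/11) in two iterations, matching the n-step guarantee.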
 