Iterative methods for solving systems of linear equations

In summary: Iterative methods for solving systems of linear equations, such as the Jacobi and Gauss-Seidel methods, may seem counterintuitive at first: you repeatedly plug values into a formula and hope for convergence. But the theory behind them shows when and why the resulting sequence converges. The key is that each method can be written as a fixed-point iteration x_{n+1} = Px_n + q; whenever a matrix norm of the iteration matrix P is less than one, the error shrinks at every step and the iterates converge to the solution.
  • #1
ThorX89
Hi.
Anyway, learning this sort of thing makes me feel like I've chosen the wrong school for myself, but I'd like to try and see if I can understand it nevertheless.
See, I'm supposed to learn various numerical methods for solving systems of linear equations, like the Jacobi or the Gauss-Seidel method. I looked them both up on Wikipedia, and I understand them up to the point where we get an equation expressing the x vector as a simple function of that same x vector. What I don't get, however, is what would make anyone think that choosing an initial value for the x vector on the right-hand side, computing the left-hand side from it, plugging the result back into the right-hand side, and repeating this over and over will create a sequence x_(k+1) = f(x_k) that eventually converges.
I mean, I could easily derive the x = f(x) parts, that is,

[tex]x^{(k+1)} = D^{-1}\left(b - (L+U)x^{(k)}\right)[/tex]

for the Jacobi method and

[tex]x^{(k+1)} = (D+L)^{-1}\left(b - Ux^{(k)}\right)[/tex]

for the Gauss-Seidel method, where A = D + L + U is split into its diagonal, strictly lower triangular, and strictly upper triangular parts,
but never would I have thought of deriving an iterative method for getting the values for x from this.
I know it doesn't converge every time. What I'm asking is what would make anyone think it ever will, or in other words, that generating a sequence from the above equations will lead to the solution.
Thanks if anyone is willing to explain this to me.
 
  • #2
Isn't it natural to daydream that iterations will work?

Suppose y = f(x) is a function that you can "solve for x", so you know f^{-1}, and suppose someone asks you to solve the equation f(x) = g(x). Guess x = x_0. Plug this into g to get y_0 = g(x_0). Solve f(x) = y_0 for x to get x_1, and so on. This process can bounce around forever or converge. You seem to be asking why anyone would ever think it would converge, but one could equally ask why it should bounce around forever.
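
A minimal Python sketch of this, under the simplifying assumption f(x) = x (so f^{-1} is trivial and the iteration is just x_{k+1} = g(x_k)), with g = cos as an arbitrary example:

[code]
import math

def fixed_point_iteration(g, x0, tol=1e-10, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# Solve x = cos(x): the iterates bounce around at first but settle down.
print(fixed_point_iteration(math.cos, x0=1.0))  # ~0.7390851332
[/code]

Whether this settles down depends on g: near the fixed point, |g'| < 1 makes each step shrink the error, which is the scalar version of the matrix-norm condition discussed in post #5 below.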

The details will depend on the specific functions involved, but the general idea of iteration is (to me) a very natural thing to daydream about. For example, there are natural systems that can be perturbed so they wobble and return to a state of stability. Or think of how an organization tries to publish a document. Someone writes a draft and then it goes to another person and gets revised and that draft is sent to someone else etc. If the document ever goes out the door, there has to be a version that everyone "signs off" on.
 
  • #3
I'm not sure what you are really asking here, but Gauss was arguably THE greatest mathematician who has ever lived, period, and Jacobi would probably make it into the top 100. So the fact that you personally can't see how they discovered things is nothing to be ashamed about!

On the other hand if you are asking how to prove these iterative methods converge, and find the conditions for when they converge and diverge, the answers to those sort of questions should be in any good numerical analysis textbook.

There is nothing "magic" about iterative methods. They are used all the time for solving numerical problems in physics and engineering, because they are often the quickest methods that are known.
 
  • #4
Thanks a lot you guys for the answers. They provided some insight.
 
  • #5
Apologies if I go into too much detail, but I think that sketching the proofs of convergence of each method actually helps a lot to understand why they work:

If, for the system Ax = b, you express A as the sum of lower triangular, diagonal, and upper triangular matrices, then both the Gauss-Seidel and Jacobi methods can actually be put into the same generic form:

[tex]x_{n+1} = Px_n + q[/tex]

Though obviously, P and q are not the same in each case. If we then define the error as

[tex]e_n = x_n - s[/tex]

where s is the exact solution of the original system, then, since s is a fixed point of the iteration (s = Ps + q), subtracting gives the relationship between successive errors:

[tex]e_{n+1} = Pe_n[/tex]

So, intuitively, if multiplication by P tends to make vectors smaller in magnitude, then the iterative method will converge to the correct solution, since the error approaches 0 as n grows large.

To make this less subjective, we define norms for vectors and matrices and can guarantee convergence if a matrix norm of P is less than one.
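
To see this concretely, here is a small Python sketch (the matrix is my own example, not from the thread): for the Jacobi splitting A = D + (L + U), the iteration matrix is P = -D^{-1}(L + U) with q = D^{-1}b, so we can check a norm of P and watch the error e_n shrink:

[code]
import numpy as np

# A diagonally dominant system, for which Jacobi is known to converge.
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

D = np.diag(np.diag(A))        # diagonal part of A
R = A - D                      # off-diagonal part, L + U
P = -np.linalg.solve(D, R)     # Jacobi iteration matrix P = -D^{-1}(L+U)
q = np.linalg.solve(D, b)      # q = D^{-1} b

print(np.linalg.norm(P, np.inf))   # 0.6 < 1, so each step contracts

s = np.linalg.solve(A, b)          # exact solution, used to measure e_n
x = np.zeros(3)
for n in range(10):
    x = P @ x + q                      # x_{n+1} = P x_n + q
    print(n, np.linalg.norm(x - s))    # ||e_n|| shrinks geometrically
[/code]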
 
  • #6
Great explanation, JMF.
 

1. What are iterative methods for solving systems of linear equations?

Iterative methods for solving systems of linear equations are numerical techniques used to approximate the solution of such a system. These methods involve repeatedly applying a formula or algorithm, refining the approximation until a desired level of accuracy is reached.

2. How do iterative methods differ from other methods of solving systems of linear equations?

Unlike direct methods, which solve the entire system in one pass (for example, by Gaussian elimination), iterative methods start from an initial guess and repeatedly update the approximate solution, using each iterate as the starting point for the next. This allows for a more efficient and flexible approach on large systems.

3. What are the advantages of using iterative methods for solving systems of linear equations?

Iterative methods are often more computationally efficient and require less memory than direct methods, making them suitable for larger systems of equations. They also allow for the solution to be refined and improved by repeating the iteration process with a smaller tolerance level.
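
For instance, here is a sketch of the memory argument using SciPy's sparse tools (assuming SciPy is available; the 1-D Poisson matrix is an illustrative example, not from the thread). Only the nonzero entries are stored, and the conjugate-gradient solver, an iterative method, touches the matrix only through matrix-vector products:

[code]
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Tridiagonal 1-D Poisson matrix: only ~3n nonzeros are ever stored.
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Conjugate gradient: an iterative method needing only A @ v products.
x, info = spla.cg(A, b)
print(info)                       # 0 means the tolerance was reached
print(np.linalg.norm(A @ x - b))  # small residual
[/code]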

4. What are some common iterative methods used for solving systems of linear equations?

Some common iterative methods include the Jacobi method, Gauss-Seidel method, and successive over-relaxation (SOR) method. These methods differ in their approach to approximating the solution, but all involve repeatedly updating the values of the unknown variables until a desired level of accuracy is achieved.
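
As a sketch in Python (the jacobi helper and the example matrix are my own illustrations, not a standard library routine), here is a component-wise Jacobi update with a tolerance-based stopping rule. Gauss-Seidel differs only in using the freshly updated components immediately within each sweep, and SOR additionally blends the old and new values with a relaxation factor:

[code]
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=500):
    """Jacobi iteration: each sweep uses only the previous iterate."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # Solve row i for x_i using the old values of the other unknowns.
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_new[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(jacobi(A, b))   # matches np.linalg.solve(A, b) to within tol
[/code]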

5. How do I know when to use iterative methods for solving systems of linear equations?

Iterative methods are typically used when the system of equations is too large to solve efficiently using direct methods. They are also useful when an approximate solution is acceptable, or when the matrix is sparse and a direct factorization would be too costly. However, for smaller, well-conditioned systems, direct methods may be more efficient.
