Iterative methods for solving systems of linear equations

ThorX89
Hi.
Anyway, learning this sort of makes me feel like I've chosen the wrong school for myself, but I'd like to try and see if I can understand this nevertheless.
See, I'm supposed to learn various numerical methods for solving systems of linear equations, like the Jacobi or the Gauss-Seidel method. I looked them both up on Wikipedia, and I understand them up to the point where we get an equation expressing the x vector as a simple function of that same x vector. What I don't get, however, is what would make anyone think that choosing an initial value for the x vector on the right-hand side, computing the left-hand side from it, plugging that back into the right-hand side, and repeating this over and over will produce a sequence ##x_{k+1} = f(x_k)## that eventually converges.
I mean, I could easily derive the ##x = f(x)## form, that is,

##\mathbf{x}^{(k+1)} = D^{-1}\left(\mathbf{b} - (L+U)\,\mathbf{x}^{(k)}\right)## for the Jacobi method and

##\mathbf{x}^{(k+1)} = (D+L)^{-1}\left(\mathbf{b} - U\,\mathbf{x}^{(k)}\right)## for the Gauss-Seidel method

(where ##A = L + D + U## is split into its strictly lower triangular, diagonal, and strictly upper triangular parts), but never would I have thought of turning this into an iterative method for getting the values of x.
I know it doesn't converge every time. What I'm asking is what would make anyone think it ever will, or in other words, that building a sequence from the equations above will lead to the solution.
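Just to make concrete what I mean by plugging the vector back into the right-hand side over and over, here is a rough Python sketch of a Jacobi-style sweep; the little 2x2 system is one I made up purely for illustration:

```python
# Rough sketch of the loop in question (Jacobi-style sweep) on a
# made-up 2x2 system:
#   4*x0 + 1*x1 = 9
#   1*x0 + 3*x1 = 7
a = [[4.0, 1.0],
     [1.0, 3.0]]
b = [9.0, 7.0]

x = [0.0, 0.0]                       # initial guess for the x vector
for step in range(25):
    # evaluate the right-hand side x = f(x) using the old x everywhere
    x_new = [(b[i] - sum(a[i][j] * x[j] for j in range(2) if j != i)) / a[i][i]
             for i in range(2)]
    x = x_new

print(x)   # approaches the exact solution x0 = 20/11, x1 = 19/11
```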
Thanks if anyone is willing to explain this to me.
 
Isn't it natural to daydream that iterations will work?

Suppose y = f(x) is a function that you can "solve for x", so you know ##f^{-1}##, and suppose someone asks you to solve the equation f(x) = g(x). Guess ##x = x_0##. Plug this into g to get ##y_0 = g(x_0)##. Solve ##f(x) = y_0## for x to get ##x_1##, and so on. This process can bounce around forever or it can converge. You seem to be asking why anyone would ever think it would converge, but one could equally ask why it should bounce around forever.
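To put a number on that daydream, here is a tiny Python sketch of the iteration just described. The particular choice f(x) = x (so solving f(x) = y is trivial) and g(x) = cos(x) is only an example I picked, not anything from the original problem:

```python
# Minimal sketch of the iteration described above: solve f(x) = g(x)
# by repeatedly solving f(x) = g(x_prev). Here f(x) = x, so solving
# f(x) = y just gives x = y, and g(x) = cos(x) is a stand-in example.
import math

def g(x):
    return math.cos(x)          # example right-hand side

def f_inverse(y):
    return y                    # f(x) = x, so f^{-1}(y) = y

x = 1.0                          # initial guess x_0
for k in range(50):
    x_new = f_inverse(g(x))      # x_{k+1} = f^{-1}(g(x_k))
    if abs(x_new - x) < 1e-12:   # stop when successive iterates agree
        break
    x = x_new

print(x)                         # settles at the fixed point of cos(x)
```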

The details will depend on the specific functions involved, but the general idea of iteration is (to me) a very natural thing to daydream about. For example, there are natural systems that can be perturbed so they wobble and return to a state of stability. Or think of how an organization tries to publish a document. Someone writes a draft and then it goes to another person and gets revised and that draft is sent to someone else etc. If the document ever goes out the door, there has to be a version that everyone "signs off" on.
 
I'm not sure what you are really asking here, but Gauss was arguably THE greatest mathematician who ever lived, period, and Jacobi would probably make it into the top 100. So the fact that you personally can't see how they discovered things is nothing to be ashamed of!

On the other hand, if you are asking how to prove these iterative methods converge, and how to find the conditions under which they converge or diverge, the answers to that sort of question can be found in any good numerical analysis textbook.

There is nothing "magic" about iterative methods. They are used all the time for solving numerical problems in physics and engineering, because they are often the quickest methods known.
 
Thanks a lot you guys for the answers. They provided some insight.
 
Apologies if I go into too much detail, but I think that sketching the proofs of convergence of each method actually helps a lot to understand why they work:

If, for the system ##Ax = b##, you split ##A## into the sum of its strictly lower triangular, diagonal, and strictly upper triangular parts, then both the Gauss-Seidel and Jacobi methods can be put into the same generic form:

##x_{n+1} = P x_n + q##

Though obviously, P and q are not the same in each case. If we then define the error as

##e_n = x_n - s##

where s is the exact solution of the original system, we get the relationship between successive errors:

##e_{n+1} = P e_n##

So, intuitively, if multiplication by P tends to make the magnitude of vectors smaller, then the iterative method will converge to the correct solution, since the error approaches 0 as n grows large.

To make this precise, we define norms for vectors and matrices, and convergence is guaranteed whenever some induced matrix norm of P is less than one (equivalently, whenever the spectral radius of P is less than one).
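If it helps, here is a short numpy sketch of exactly this setup: it builds P and q for both methods from the splitting A = L + D + U, checks the spectral radius of P, and runs the iteration. The 3x3 matrix is just a diagonally dominant example I chose so that convergence is guaranteed:

```python
# Sketch of the generic iteration x_{n+1} = P x_n + q for Jacobi and
# Gauss-Seidel, using the splitting A = L + D + U (strictly lower,
# diagonal, strictly upper). The example matrix is diagonally dominant,
# chosen only so that both iterations are guaranteed to converge.
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
U = np.triu(A, k=1)

# Jacobi:       x_{n+1} = D^{-1}(b - (L+U) x_n)   ->  P = -D^{-1}(L+U)
# Gauss-Seidel: x_{n+1} = (D+L)^{-1}(b - U x_n)   ->  P = -(D+L)^{-1} U
methods = {
    "Jacobi":       (-np.linalg.solve(D, L + U),  np.linalg.solve(D, b)),
    "Gauss-Seidel": (-np.linalg.solve(D + L, U),  np.linalg.solve(D + L, b)),
}

for name, (P, q) in methods.items():
    rho = max(abs(np.linalg.eigvals(P)))   # spectral radius: < 1 means convergence
    x = np.zeros_like(b)
    for _ in range(100):
        x = P @ x + q                       # x_{n+1} = P x_n + q
    print(name, "spectral radius =", rho, "solution =", x)

print("direct solve:", np.linalg.solve(A, b))
```

Running it shows both spectral radii comfortably below one, and both iterations landing on the same vector as the direct solve.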
 
Great explanation, JMF.
 