Iseous
I haven’t really done much with systems of linear equations for a couple of years, so my memory is a bit faded. But I was wondering if anyone could tell me what iterative method this is. I made it up a while ago to see if it would work, and have recently been coding it to confirm that it does, and seemingly quite well. I do not recall ever learning about it in any classes, though I’m sure it already exists. The procedure is as follows:
A*x = b
A = n by n Matrix
x = solution to our system of linear equations
b = right-hand-side of the system of linear equations
1) Normalize A row by row: divide each row of A by its Euclidean norm, and divide the corresponding entry of b by that same norm.
2) Guess a solution to the system. Any row of the matrix, or the zero vector, works fine.
3) Choose the first row of A, Row1.
4) We wish to determine what multiple of this row to add to our guess in order to get a better solution. Since Row1 dotted with the exact solution gives the first value in b, we want to find the multiple (alpha) of Row1 to add to our guess so that equality holds. Thus,
Row1*(Guess + alpha*Row1) = b1.
5) Solving for alpha:
alpha = (b1 - Row1*Guess)/(Row1*Row1)
6) Since the dot product of a vector with itself is its magnitude squared, and each row was normalized to unit length, Row1*Row1 reduces to 1. The same holds for every row.
7) Thus, alpha = b1 - Row1*Guess
8) We can then add this multiple of the first row to our guess. NewGuess = Guess + alpha*Row1
9) Repeat steps 3-8 for all n rows of the matrix A.
10) Calculate the residual, R = norm(A*Guess - b)
11) Repeat steps 9-10 to cycle through the matrix as many times as needed to reach a desired residual.
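The steps above can be sketched in Python. This is my reading of the procedure, assuming NumPy is available; the function name, tolerance, and sweep limit are my own choices, not part of the original description:

```python
import numpy as np

def row_projection(A, b, tol=1e-10, max_sweeps=1000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    # Step 1: normalize each row of A and the matching entry of b
    norms = np.linalg.norm(A, axis=1)
    A = A / norms[:, None]
    b = b / norms
    # Step 2: initial guess of all zeros
    x = np.zeros(A.shape[1])
    for _ in range(max_sweeps):
        # Steps 3-9: sweep over every row
        for i in range(A.shape[0]):
            alpha = b[i] - A[i] @ x   # step 7 (rows have unit norm)
            x = x + alpha * A[i]      # step 8
        # Step 10: residual check
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

# An orthogonal matrix should converge in a single sweep:
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]])
x = row_projection(Q, np.array([2.0, 3.0]))
# x is now [3, 2], since Q @ [3, 2] = [2, 3]
```

Note that division by Row1*Row1 is skipped entirely, exactly as step 7 allows, because the normalization in step 1 makes every row a unit vector.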
This method works for most matrices, although the closer the rows are to being mutually orthogonal, the faster it converges. For an orthogonal matrix, it converges in one sweep. The method can also be modified in many ways. For instance, rather than adding only one row to our guess at a time, we can add several rows and solve for their coefficients simultaneously; for two or three rows at a time, the coefficients can be worked out algebraically without much trouble. That is, if we took the first three rows, we would obtain the following system of equations:
1) Row1*(Guess + alpha1*Row1 + alpha2*Row2 + alpha3*Row3) = b1
2) Row2*(Guess + alpha1*Row1 + alpha2*Row2 + alpha3*Row3) = b2
3) Row3*(Guess + alpha1*Row1 + alpha2*Row2 + alpha3*Row3) = b3
We could then solve algebraically for alpha1, alpha2, and alpha3. The method can even be applied to itself for much larger matrices. For instance, with a very large matrix, say n = 1,000,000, we could split it: take 10,000 rows at a time to solve for 10,000 different multiples, and then apply the method again to each of those 10,000-row blocks, three rows at a time, as shown above. For sparse matrices, it is easy to restrict each update to the nonzero parts of the rows, reducing the number of calculations needed.
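A sketch of the block variant, again assuming NumPy (the function and its defaults are my own, and I assume the rows within each block are linearly independent so the small system is solvable): taking several rows at a time means solving a small Gram system for the alphas instead of computing each one separately.

```python
import numpy as np

def block_row_projection(A, b, block=3, tol=1e-10, max_sweeps=1000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    # Normalize rows of A and the matching entries of b (step 1)
    norms = np.linalg.norm(A, axis=1)
    A = A / norms[:, None]
    b = b / norms
    x = np.zeros(A.shape[1])
    for _ in range(max_sweeps):
        for i in range(0, A.shape[0], block):
            R = A[i:i + block]           # the rows in this block
            r = b[i:i + block] - R @ x   # how far each equation is off
            # Solve the small system (R R^T) alpha = r for the multiples;
            # for block == 3 this is the 3x3 system written out above.
            # Assumes the rows in the block are linearly independent.
            alpha = np.linalg.solve(R @ R.T, r)
            x = x + R.T @ alpha
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

# Small demonstration on a 3x3 system with known solution [1, 2, 3]:
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
x = block_row_projection(A, A @ np.array([1.0, 2.0, 3.0]))
```

With block == 1 this reduces to the single-row procedure, since R @ R.T is then just Row_i*Row_i = 1.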