Understanding LU Factorization for Solving Linear Systems

symbolipoint
I'm still confused about LU matrix factorization. I'm trying to understand how to do it and why doing so is valuable. Wouldn't elementary row operations to solve Ax=b be easier?

I'm not in any class. I am looking at Chapter 2 of the Larson & Edwards Linear Algebra book. I have trouble following the different elementary matrices used in finding the U and L matrices. I am confused about how certain numbers are selected for each elementary matrix (the ones with mostly 1's along the main diagonal, though some diagonal entries are not 1's). Then there is some other matrix containing other elements that came from using the elementary matrices.
 
LU is useful for a couple of reasons. It can make solving some systems of equations easier, it can make inverting a matrix easier, and it can make calculating the determinant a little faster. I am sure there are more, but those are a couple off the top of my head.

For the U matrix, you should just get the (row) echelon form of A. For the L matrix, you should get the multipliers you used to row reduce A to U, together with 1's on the main diagonal.
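To make that concrete, here is a minimal sketch of one elimination step on a made-up 2x2 matrix, recording the multiplier in L as it is used (the matrix entries are my own example, not from the textbook):

```python
import numpy as np

# Made-up example matrix
A = np.array([[2.0, 1.0],
              [6.0, 4.0]])
U = A.copy()
L = np.eye(2)

m = U[1, 0] / U[0, 0]   # multiplier needed to zero out the (2,1) entry
U[1] -= m * U[0]        # row operation: R2 <- R2 - m*R1
L[1, 0] = m             # remember the multiplier in L

# U is now upper triangular, and L times U reproduces A
assert np.allclose(L @ U, A)
```

The point is that L is nothing mysterious: each below-diagonal entry of L is simply the multiplier from the corresponding row operation.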
 
The main advantage of LU decomposition over simpler methods arises when you have many b's. Suppose there are M b's that you need to solve for. Using simple methods requires solving Ax=b from scratch for each b, which is an ##O(MN^3)## problem. With LU decomposition, solving Ax=LUx=b becomes two ##O(N^2)## problems: first solve Ly=b, then solve Ux=y. With many b's, this becomes an ##O(MN^2)## problem. The ##O(N^3)## cost of decomposing A into A=LU is a one-time cost that vanishes against the problem of solving Ax=b for M b's.
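As a sketch of this factor-once, solve-many pattern, here is how it looks with SciPy's `lu_factor`/`lu_solve` (the matrix and right-hand sides are made-up examples):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Made-up example system
A = np.array([[4.0, 3.0, 0.0],
              [6.0, 3.0, 1.0],
              [0.0, 2.0, 5.0]])

lu, piv = lu_factor(A)              # one-time O(N^3) factorization

bs = [np.array([1.0, 0.0, 0.0]),    # many right-hand sides
      np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]
for b in bs:
    x = lu_solve((lu, piv), b)      # each solve is only O(N^2)
    assert np.allclose(A @ x, b)
```

Each call to `lu_solve` does only the two triangular solves, reusing the stored factors and pivot information.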
 
It might help to understand it a different way, when there is no pivoting. Write the matrix equation out in full for a small example:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix} =
\begin{bmatrix}1 & & \\ l_{21} & 1 & \\ l_{31} & l_{32} & 1\end{bmatrix}
\begin{bmatrix} u_{11} & u_{12} & u_{13} \\ & u_{22} & u_{23} \\ & & u_{33}\end{bmatrix}
$$
Now work through A term by term, and see what the LU product for that term is:

##a_{11} = u_{11}##
##a_{12} = u_{12}##
##a_{13} = u_{13}##
##a_{21} = l_{21}u_{11}##, and we already know ##u_{11}##, so we can find ##l_{21}##
##a_{22} = l_{21}u_{12} + u_{22}##, and the only unknown is ##u_{22}##
and so on.
This is the Crout-style term-by-term construction of the LU decomposition without pivoting (with 1's on the diagonal of L, as here, it is often called Doolittle's method).
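The term-by-term recurrence above translates almost directly into code. Here is a minimal sketch without pivoting (function and variable names are my own choices):

```python
def lu_no_pivot(A):
    """Doolittle-style LU of a square matrix A (list of lists), no pivoting."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        # Row i of U: a_ij = sum_k l_ik u_kj, and only u_ij is unknown
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # Column i of L below the diagonal: only l_ji is unknown
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```

For example, `lu_no_pivot([[4.0, 3.0], [6.0, 3.0]])` gives ##l_{21} = 6/4 = 1.5## and ##u_{22} = 3 - 1.5 \cdot 3 = -1.5##, exactly the pattern worked out above. Note that without pivoting this fails whenever a zero (or tiny) ##u_{ii}## appears, which is why practical implementations pivot.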

Textbooks do seem to enjoy finding complicated ways to explain something that is really quite simple. Even with pivoting, it is just a systematic way of doing row operations on A, and remembering what you did, so you can do the same thing to the right-hand-side vector later.
 
AlephZero,
Your response is still complicated, but interesting. I'll need to recheck it from time to time. As for HOW to perform LU decomposition, there are a couple of YouTube videos in which the presenter refers specifically to the identity matrix for each row operation to give each elementary matrix. Interesting and specific, but still not enough for me yet.
 