MHB LU decomposition: With or without pivoting?

mathmari
Hey! 😊

We consider the matrix $$A=\begin{pmatrix}1 & -2 & 5 & 0 & 5\\ 1 & 0 & 2 & 0 & 3\\ 1 & 2 & 5 & 4 & 6 \\ -2 & 2 & -4 & 1 & -6 \\ 3 & 4 & 9 & 5 & 11\end{pmatrix}$$ I want to find the LU decomposition.

How do we know whether we have to do the decomposition with pivoting or without? :unsure:
 
Hey mathmari!

Wiki explains that a decomposition without pivoting ($A=LU$) can fail even for a non-singular matrix $A$, and that this is a procedural problem.
It happens when, during the procedure, we get a submatrix with a $0$ in the top-left position.
Decomposition is still possible, but we need to reorder rows or columns to make it happen.

In other words, I believe we simply have to try the LU decomposition procedure without pivoting to find out if it is possible.
And if we find that decomposition without pivoting is not possible, we can reorder the rows and continue the algorithm. 🤔
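
Just to illustrate the "try it and see" approach, here is a minimal sketch in Python/NumPy (my own illustration, not code from any particular library): a Doolittle-style LU without pivoting that stops as soon as a (near-)zero pivot shows up. The function name and the tolerance are arbitrary choices.

Code:
import numpy as np

def lu_no_pivot(A, tol=1e-12):
    # Doolittle LU without pivoting: A = L U with L unit lower triangular.
    # Raises as soon as a (near-)zero pivot appears, which is exactly the
    # situation in which row reordering (pivoting) becomes necessary.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        pivot = A[k, k]
        if abs(pivot) < tol:
            raise ValueError(f"zero pivot in step {k}: pivoting needed")
        U[k, k:] = A[k, k:]                 # row k of U
        L[k+1:, k] = A[k+1:, k] / pivot     # column k of L (the multipliers)
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])  # eliminate below the pivot
    return L, U

# Trying it on the matrix from the first post: either this raises
# (pivoting is needed) or it returns L, U with A = L U.
A = [[ 1, -2,  5, 0,  5],
     [ 1,  0,  2, 0,  3],
     [ 1,  2,  5, 4,  6],
     [-2,  2, -4, 1, -6],
     [ 3,  4,  9, 5, 11]]
L, U = lu_no_pivot(A)
print(np.allclose(A, L @ U))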
 
I found the following code for the LU-decomposition with partial pivoting:

Code:
Algorithm LUP-Decomposition(A)
Input matrix A
Output matrices L,U,P: PA=LU
n <- A.rows 
L <- new n x n matrix
U <- new n x n matrix
P <- new n x n matrix
#initialization, as before
#for L, U, and P = I
for i <- 0 to n do
       for j <- 0 to n do
               if i > j then
                     U[i][j] <- 0
                     P[i][j] <- 0
               else if i == j then
                     L[i][j] <- 1
                     P[i][j] <- 1
               else
                     L[i][j] <- 0
                     P[i][j] <- 0
for k <- 0 to n do # for each equation
       p <- 0
       for i <- k to n do #find pivot row
                if abs(A[i][k]) > p then
                       p <- abs(A[i][k])
                       pivot_row <- i
       if p == 0 then
               error("singular matrix")
       for j <- 0 to n do #swap rows
               swap L[k][j] with L[pivot_row][j]
               swap U[k][j] with U[pivot_row][j]
               swap P[k][j] with P[pivot_row][j]
               swap A[k][j] with A[pivot_row][j]
       U[k][k] <- A[k][k]
       for i <- k + 1 to n do
               L[i][k] <- A[i][k] / U[k][k]  # L column
               U[k][i] <- A[k][i]  # U row
       for i <- k + 1 to n do #gauss
               for j <- k + 1 to n do #elimination
                         A[i][j] <- A[i][j] - L[i][k]*U[k][j]
return L, U, P
Are the lines:

swap L[k][j] with L[pivot_row][j]
swap U[k][j] with U[pivot_row][j]

correct? Why do we also have to swap these matrices? After swapping them, they are no longer lower and upper triangular, respectively, are they? :unsure:

I tried to apply that algorithm to a matrix, for example $A=\begin{pmatrix}1 & 2 & 4 \\ 3 & 8 & 14 \\ 2 & 6 & 13\end{pmatrix}$:

At the initializations we get $L=\begin{pmatrix}1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}$, $U=\begin{pmatrix}u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33}\end{pmatrix}$ and $P=I_{3\times 3}$.

The element with the largest absolute value in the first column is $3$, in the second row.

Now we swap the first and second rows in all the matrices $L,U,P,A$ and we get:
$$L=\begin{pmatrix} \ell_{21} & 1 & 0 \\ 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}, \ U=\begin{pmatrix} 0 & u_{22} & u_{23} \\ u_{11} & u_{12} & u_{13} \\ 0 & 0 & u_{33}\end{pmatrix}, \ P=\begin{pmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}, \ A=\begin{pmatrix} 3 & 8 & 14 \\ 1 & 2 & 4 \\ 2 & 6 & 13\end{pmatrix}$$

Then we get $U[1,1]=3$.

We also get $L[2,1]=\frac{a_{21}'}{3}=\frac{1}{3}$, $U[1,2]=a_{12}'=8$ and $L[3,1]=\frac{a_{31}'}{3}=\frac{2}{3}$, $U[1,3]=a_{13}'=14$.

Is the first loop correct? :unsure:
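
For reference, here is a rough Python/NumPy sketch of the same algorithm (my own translation, not the pseudocode above verbatim: it uses 0-based indexing, exclusive loop bounds, and swaps only the already-filled multiplier columns of $L$, which is the convention most implementations seem to follow), applied to the $3\times 3$ example:

Code:
import numpy as np

def lup(A):
    # Partial-pivoting LU: returns L, U, P with P A = L U.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    P = np.eye(n)
    for k in range(n):
        # pivot row: largest absolute value in column k, rows k..n-1
        pivot_row = k + np.argmax(np.abs(A[k:, k]))
        if A[pivot_row, k] == 0:
            raise ValueError("singular matrix")
        if pivot_row != k:
            A[[k, pivot_row]] = A[[pivot_row, k]]
            P[[k, pivot_row]] = P[[pivot_row, k]]
            # only the multiplier columns 0..k-1 of L are swapped
            L[[k, pivot_row], :k] = L[[pivot_row, k], :k]
        U[k, k:] = A[k, k:]                  # row k of U
        L[k+1:, k] = A[k+1:, k] / U[k, k]    # column k of L
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])  # Gaussian elimination
    return L, U, P

A = np.array([[1, 2, 4],
              [3, 8, 14],
              [2, 6, 13]], dtype=float)
L, U, P = lup(A)
print(L)   # first column below the diagonal: the multipliers 1/3 and 2/3
print(U)   # first row: 3, 8, 14
print(np.allclose(P @ A, L @ U))   # checks P A = L U

The first column of $L$ and the first row of $U$ can then be compared with the values computed by hand above.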
 