LU decomposition can be performed with or without pivoting; without pivoting, the process may fail when a zero pivot appears in the top-left position of the remaining submatrix during elimination. If decomposition without pivoting is not possible, row or column reordering is needed to proceed. The algorithm for LU decomposition with partial pivoting identifies the pivot row and then swaps rows in the matrices L, U, and P to maintain the correct structure. The discussion also raises the question of whether this swapping is correct, in particular whether L and U remain lower and upper triangular. Overall, understanding when pivoting is needed is crucial for a successful LU decomposition.
#1
mathmari
Hey!
We consider the matrix $$A=\begin{pmatrix}1 & -2 & 5 & 0 & 5\\ 1 & 0 & 2 & 0 & 3\\ 1 & 2 & 5 & 4 & 6 \\ -2 & 2 & -4 & 1 & -6 \\ 3 & 4 & 9 & 5 & 11\end{pmatrix}$$ I want to find the LU decomposition.
How do we know if we have to do the decomposition with pivoting or without? :unsure:
Wiki explains that a decomposition without pivoting ($A=LU$) can fail even for a non-singular matrix $A$, and that this is a procedural problem.
It happens when, during the procedure, we get a submatrix with a $0$ in its top-left position.
Decomposition is still possible, but we need to reorder rows or columns to make it happen.
In other words, I believe we simply have to try the LU decomposition procedure without pivoting to find out if it is possible.
And if we find that decomposition without pivoting is not possible, we can reorder the rows and continue the algorithm.
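A minimal illustration (a standard example, not taken from this thread): the non-singular matrix
$$B=\begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix}$$
has no factorization $B=LU$ with $L$ unit lower triangular, because the $(1,1)$ entry forces $u_{11}=\ell_{11}u_{11}=0$, while the $(2,1)$ entry requires $\ell_{21}u_{11}=1$. Swapping the two rows first (so that $PB=I$) makes the factorization trivial.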
#3
mathmari
I found the following code for the LU-decomposition with partial pivoting:
Code:
Algorithm LUP-Decomposition(A)
    Input:  matrix A
    Output: matrices L, U, P with PA = LU
    n <- A.rows
    L <- new n x n matrix
    U <- new n x n matrix
    P <- new n x n matrix
    # initialization, as before
    # for L, U, and P = I
    for i <- 0 to n do
        for j <- 0 to n do
            if i > j then
                U[i][j] <- 0
                P[i][j] <- 0
            else if i == j then
                L[i][j] <- 1
                P[i][j] <- 1
            else
                L[i][j] <- 0
                P[i][j] <- 0
    for k <- 0 to n do                    # for each equation
        p <- 0
        for i <- k to n do                # find pivot row
            if abs(A[i][k]) > p then
                p <- abs(A[i][k])
                pivot_row <- i
        if p == 0 then
            error("singular matrix")
        for j <- 0 to n do                # swap rows
            swap L[k][j] with L[pivot_row][j]
            swap U[k][j] with U[pivot_row][j]
            swap P[k][j] with P[pivot_row][j]
            swap A[k][j] with A[pivot_row][j]
        U[k][k] <- A[k][k]
        for i <- k + 1 to n do
            L[i][k] <- A[i][k] / U[k][k]  # L column
            U[k][i] <- A[k][i]            # U row
        for i <- k + 1 to n do            # Gaussian
            for j <- k + 1 to n do        # elimination
                A[i][j] <- A[i][j] - L[i][k]*U[k][j]
    return L, U, P
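For comparison, here is a minimal runnable Python sketch of LU decomposition with partial pivoting (my own translation, not the thread's code). It follows the same steps as the pseudocode above, except that at step $k$ only the already-computed columns $0,\dots,k-1$ of $L$ are swapped, which is one common way to keep $L$ unit lower triangular. The function name lup_decompose is just a placeholder.
Code:
import numpy as np

def lup_decompose(A):
    """LU decomposition with partial pivoting: returns L, U, P with P @ A = L @ U.

    A minimal sketch assuming a square, non-singular input. Unlike the
    pseudocode above, only the already-computed part of L (columns 0..k-1)
    is swapped at step k, so L stays unit lower triangular.
    """
    A = np.array(A, dtype=float)   # work on a copy
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    P = np.eye(n)

    for k in range(n):
        # find the pivot row: largest |A[i, k]| for i >= k
        pivot_row = k + np.argmax(np.abs(A[k:, k]))
        if A[pivot_row, k] == 0:
            raise ValueError("singular matrix")

        if pivot_row != k:
            A[[k, pivot_row], :] = A[[pivot_row, k], :]    # swap rows of A
            P[[k, pivot_row], :] = P[[pivot_row, k], :]    # record the swap in P
            L[[k, pivot_row], :k] = L[[pivot_row, k], :k]  # swap only the computed part of L

        U[k, k:] = A[k, k:]                  # row k of U
        L[k+1:, k] = A[k+1:, k] / U[k, k]    # column k of L (the multipliers)

        # Gaussian elimination on the trailing submatrix
        A[k+1:, k+1:] -= np.outer(L[k+1:, k], U[k, k+1:])

    return L, U, P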
Are the lines:
swap L[k][j] with L[pivot_row][j]
swap U[k][j] with U[pivot_row][j]
correct? Why do we have to swap these matrices too? After swapping them, they are no longer a lower and an upper triangular matrix, respectively, are they? :unsure:

I tried to apply that algorithm to a matrix, for example $A=\begin{pmatrix}1 & 2 & 4 \\ 3 & 8 & 14 \\ 2 & 6 & 13\end{pmatrix}$:
At the initializations we get $L=\begin{pmatrix}1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}$, $U=\begin{pmatrix}u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33}\end{pmatrix}$ and $P=I_{3\times 3}$.
The element of maximum absolute value in the first column is $3$, in the second row.
Now we swap the first and second rows in all the matrices $L,U,P,A$ and we get:
$$L=\begin{pmatrix} \ell_{21} & 1 & 0 \\ 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}, \ U=\begin{pmatrix} 0 & u_{22} & u_{23} \\ u_{11} & u_{12} & u_{13} \\ 0 & 0 & u_{33}\end{pmatrix}, \ P=\begin{pmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}, \ A=\begin{pmatrix} 3 & 8 & 14 \\ 1 & 2 & 4 \\ 2 & 6 & 13\end{pmatrix}$$
Then we get $U[1,1]=3$.
We also get $L[2,1]=\frac{a_{21}'}{3}=\frac{1}{3}$, $U[1,2]=a_{12}'=8$ and $L[3,1]=\frac{a_{31}'}{3}=\frac{2}{3}$, $U[1,3]=a_{13}'=14$.

Is the first iteration of the loop correct? :unsure:
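To double-check the hand computation, the sketch above can be run on this $3\times 3$ example (again assuming the hypothetical lup_decompose defined earlier):
Code:
import numpy as np

# the 3x3 example from this post; lup_decompose is the sketch defined above
A = np.array([[1, 2, 4],
              [3, 8, 14],
              [2, 6, 13]], dtype=float)

L, U, P = lup_decompose(A)
print(L)                          # unit lower triangular; L[1,0] = 1/3, L[2,0] = 2/3 (0-based indices)
print(U)                          # upper triangular; first row [3, 8, 14]
print(np.allclose(P @ A, L @ U))  # True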