LU decomposition can be performed with or without pivoting, but without pivoting the procedure can break down: it fails whenever a zero appears in the pivot position, i.e. at the top left of the remaining submatrix. In that case the decomposition is still possible, but the rows or columns must first be reordered. The algorithm for LU decomposition with partial pivoting swaps rows in the matrices L, U, and P after identifying the pivot row, in order to maintain the correct structure. The discussion also raises questions about the correctness of this swapping step, in particular whether L and U remain lower and upper triangular. Overall, understanding when pivoting is needed is crucial for a successful LU decomposition.
#1
mathmari
Hey!
We consider the matrix $$A=\begin{pmatrix}1 & -2 & 5 & 0 & 5\\ 1 & 0 & 2 & 0 & 3\\ 1 & 2 & 5 & 4 & 6 \\ -2 & 2 & -4 & 1 & -6 \\ 3 & 4 & 9 & 5 & 11\end{pmatrix}$$ I want to find the LU decomposition.
How do we know if we have to do the decomposition with pivoting or without? :unsure:
Wiki explains that a decomposition without pivoting ($A=LU$) can fail even for a non-singular matrix $A$, and that this is a procedural problem.
It happens when, during the procedure, we get a submatrix with a $0$ in the top-left (pivot) position.
Decomposition is still possible, but we need to reorder rows or columns to make it happen.
In other words, I believe we simply have to try the LU decomposition procedure without pivoting to find out if it is possible.
And if we find that decomposition without pivoting is not possible, we can reorder the rows and continue the algorithm.
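To see this concretely, here is a small Python sketch (an illustrative example of my own, not taken from the posts above) of Doolittle elimination without pivoting: the matrix used is non-singular, yet a zero pivot appears at the second step and the procedure stops, which is exactly the situation where rows must be reordered.

Code:
# Doolittle LU *without* pivoting (illustrative sketch).
# The example matrix is non-singular, but its 2x2 leading submatrix is
# singular, so the pivot at step k = 1 becomes 0 and elimination cannot continue.

def lu_no_pivot(A):
    n = len(A)
    A = [row[:] for row in A]                        # work on a copy
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        if A[k][k] == 0:
            raise ZeroDivisionError("zero pivot at step %d: rows must be reordered" % k)
        U[k][k:] = A[k][k:]                          # row k of U
        for i in range(k + 1, n):
            L[i][k] = A[i][k] / A[k][k]              # multiplier (column k of L)
            for j in range(k, n):
                A[i][j] -= L[i][k] * A[k][j]         # eliminate below the pivot
    return L, U

A = [[1, 1, 1],
     [1, 1, 2],
     [1, 2, 3]]
try:
    lu_no_pivot(A)
except ZeroDivisionError as e:
    print(e)        # zero pivot at step 1: rows must be reordered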
#3
mathmari
I found the following code for the LU-decomposition with partial pivoting:
Code:
Algorithm LUP-Decomposition(A)
  Input:  matrix A
  Output: matrices L, U, P with PA = LU

  n <- A.rows
  L <- new n x n matrix
  U <- new n x n matrix
  P <- new n x n matrix

  # initialization, as before for L and U, and P = I
  for i <- 0 to n do
    for j <- 0 to n do
      if i > j then
        U[i][j] <- 0
        P[i][j] <- 0
      else if i == j then
        L[i][j] <- 1
        P[i][j] <- 1
      else
        L[i][j] <- 0
        P[i][j] <- 0

  for k <- 0 to n do                  # for each equation
    p <- 0
    for i <- k to n do                # find pivot row
      if abs(A[i][k]) > p then
        p <- abs(A[i][k])
        pivot_row <- i
    if p == 0 then
      error("singular matrix")
    for j <- 0 to n do                # swap rows
      swap L[k][j] with L[pivot_row][j]
      swap U[k][j] with U[pivot_row][j]
      swap P[k][j] with P[pivot_row][j]
      swap A[k][j] with A[pivot_row][j]
    U[k][k] <- A[k][k]
    for i <- k + 1 to n do
      L[i][k] <- A[i][k] / U[k][k]    # column k of L
      U[k][i] <- A[k][i]              # row k of U
    for i <- k + 1 to n do            # Gaussian
      for j <- k + 1 to n do          # elimination
        A[i][j] <- A[i][j] - L[i][k]*U[k][j]
  return L, U, P
Are the lines

swap L[k][j] with L[pivot_row][j]
swap U[k][j] with U[pivot_row][j]

correct? Why do we also have to swap these matrices? After swapping them, they are no longer lower and upper triangular, are they? :unsure:

I tried to apply the algorithm to a matrix, for example $A=\begin{pmatrix}1 & 2 & 4 \\ 3 & 8 & 14 \\ 2 & 6 & 13\end{pmatrix}$:
At the initializations we get $L=\begin{pmatrix}1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}$, $U=\begin{pmatrix}u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33}\end{pmatrix}$ and $P=I_{3\times 3}$.
The element of largest absolute value in the first column is $3$, in the second row.
Now we swap the first and second rows of all the matrices $L,U,P,A$ and get:
$$L=\begin{pmatrix} \ell_{21} & 1 & 0 \\ 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1\end{pmatrix}, \ U=\begin{pmatrix} 0 & u_{22} & u_{23} \\ u_{11} & u_{12} & u_{13} \\ 0 & 0 & u_{33}\end{pmatrix}, \ P=\begin{pmatrix}0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}, \ A=\begin{pmatrix} 3 & 8 & 14 \\ 1 & 2 & 4 \\ 2 & 6 & 13\end{pmatrix}$$
Then we get $U[1,1]=3$.
We also get $L[2,1]=\frac{a_{21}'}{3}=\frac{1}{3}$, $U[1,2]=a_{12}'=8$ and $L[3,1]=\frac{a_{31}'}{3}=\frac{2}{3}$, $U[1,3]=a_{13}'=14$.

Is the first loop correct? :unsure:
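For comparison, here is a minimal NumPy sketch of LU with partial pivoting in one common formulation (my own illustration, not necessarily what the pseudocode above intends): at each step a row swap exchanges rows of $A$ and $P$, but only the already-computed columns $0,\dots,k-1$ of $L$, so $L$ stays unit lower triangular. It checks $PA=LU$ on the $3\times 3$ example above.

Code:
# LU decomposition with partial pivoting (one common formulation, sketched
# here as an illustration).  A row swap touches A and P fully, but only the
# multipliers already stored in columns 0..k-1 of L.

import numpy as np

def lup(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U, P = np.eye(n), np.zeros((n, n)), np.eye(n)
    for k in range(n):
        pivot_row = k + np.argmax(np.abs(A[k:, k]))    # largest |entry| in column k
        if A[pivot_row, k] == 0:
            raise ValueError("singular matrix")
        A[[k, pivot_row], :]  = A[[pivot_row, k], :]   # swap rows of A
        P[[k, pivot_row], :]  = P[[pivot_row, k], :]   # swap rows of P
        L[[k, pivot_row], :k] = L[[pivot_row, k], :k]  # swap only the computed part of L
        U[k, k:] = A[k, k:]                            # row k of U
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / U[k, k]                # multiplier (column k of L)
            A[i, k:] -= L[i, k] * A[k, k:]             # eliminate below the pivot
    return L, U, P

A = [[1, 2, 4],
     [3, 8, 14],
     [2, 6, 13]]
L, U, P = lup(A)
print(np.allclose(P @ np.array(A, float), L @ U))      # True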