LU decomposition: Total pivoting

In summary, the LU decomposition of $A$ using total pivoting is $PAQ=LU$ where $L=\begin{pmatrix}1 & 0 & 0 \\ \frac{1}{10} & 1 & 0 \\ \frac{1}{5} & -\frac{2}{9} & 1\end{pmatrix}$ and $U=\begin{pmatrix}10 & 1 & 1 \\ 0 & \frac{9}{10} & \frac{9}{10} \\ 0 & 0 & 1\end{pmatrix}$. The matrices $P_0$ and $P_1$ are the row/column swaps used during elimination; the row permutation is $P=P_1P_0$ and the column permutation is $Q=P_0$.
  • #1
mathmari
Hey! :eek:

I want to determine the LU decomposition of
$A=\begin{pmatrix}0 & 2 & 1\\1 & 10 & 1 \\1 & 1 & 1\end{pmatrix}$ with total pivoting. I have done the following:

The biggest element of the whole matrix is $10$, so we exchange the first two rows and the first two columns and then we get $\begin{pmatrix}10 & 1 & 1 \\2 & 0 & 1\\1 & 1 & 1\end{pmatrix}$.
Applying now the Gauss algorithm we get $\begin{pmatrix}10 & 1 & 1 \\0 & -\frac{1}{5} & \frac{4}{5}\\0 & \frac{9}{10} & \frac{9}{10}\end{pmatrix}$.
The biggest element of the submatrix is $\frac{9}{10}$ and so we exchange the last two rows and get: $\begin{pmatrix}10 & 1 & 1 \\ 0 & \frac{9}{10} & \frac{9}{10} \\ 0 & -\frac{1}{5} & \frac{4}{5} \end{pmatrix}$. Now we apply the Gauss algorithm and get: $\begin{pmatrix}10 & 1 & 1 \\ 0 & \frac{9}{10} & \frac{9}{10} \\ 0 & 0 & 1 \end{pmatrix}$.

The matrix $U$ is the resulting matrix, $U=\begin{pmatrix}10 & 1 & 1 \\ 0 & \frac{9}{10} & \frac{9}{10} \\ 0 & 0 & 1 \end{pmatrix}$.
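
To double-check these two steps numerically, here is a minimal NumPy sketch (the names are mine, just for illustration):

```python
import numpy as np

A = np.array([[0., 2., 1.],
              [1., 10., 1.],
              [1., 1., 1.]])
M = A.copy()

# Step 1: total pivot -- swap rows 1,2 and columns 1,2 to bring the 10 to the top-left position
M[[0, 1], :] = M[[1, 0], :]
M[:, [0, 1]] = M[:, [1, 0]]

# eliminate below the first pivot (multipliers 2/10 and 1/10)
M[1, :] -= M[1, 0] / M[0, 0] * M[0, :]
M[2, :] -= M[2, 0] / M[0, 0] * M[0, :]

# Step 2: pivot in the trailing 2x2 block -- swap the last two rows (no column swap needed)
M[[1, 2], :] = M[[2, 1], :]

# eliminate below the second pivot (multiplier -2/9)
M[2, :] -= M[2, 1] / M[1, 1] * M[1, :]

print(M)  # [[10, 1, 1], [0, 0.9, 0.9], [0, 0, 1]]
```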

The matrix $L$ is $L=P\cdot P_0\cdot G_1^{-1}\cdot P_1\cdot G_2^{-1}$, or not? (Wondering)

The matrices $G_i^{-1}$ are defined as:
$$G_1^{-1}=\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix} \ \text{ and } \ G_2^{-1}=\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & -\frac{2}{9} & 1\end{pmatrix}$$ or not? (Wondering)
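
As a quick check, $G_1^{-1}$ should map the result of the first elimination back to the pivoted matrix:
$$G_1^{-1}\begin{pmatrix}10 & 1 & 1 \\0 & -\frac{1}{5} & \frac{4}{5}\\0 & \frac{9}{10} & \frac{9}{10}\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix}\begin{pmatrix}10 & 1 & 1 \\0 & -\frac{1}{5} & \frac{4}{5}\\0 & \frac{9}{10} & \frac{9}{10}\end{pmatrix}=\begin{pmatrix}10 & 1 & 1 \\2 & 0 & 1\\1 & 1 & 1\end{pmatrix}$$
which is the pivoted matrix from the first step, so the multipliers look right.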


Are the matrices $P_i$ defined as follows?
$$P_0=\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$ since this describes the step at which we exchanged the first two rows and the first two columns. (Wondering)
$$P_1=\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}$$ since this describes the step at which we exchanged the two last rows. (Wondering) If these are correct, it doesn't hold that $LU=PA$, does it? (Wondering)
 
  • #2
Klaas van Aarsen
mathmari said:
Are the matrices $P_i$ defined as follows?
$$P_0=\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$ since this describes the step at which we exchanged the first two rows and the first two columns. (Wondering)
$$P_1=\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}$$ since this describes the step at which we exchanged the two last rows.

Hey mathmari!

It should be clear that
$$P_0=\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & 0 & 1\end{pmatrix}$$
cannot be correct. It is the identity matrix, isn't it? (Worried)

Let's define instead:
$$P_0=\begin{pmatrix}0 & 1 & 0 \\1 & 0 & 0\\0 & 0 & 1\end{pmatrix}$$
This is the matrix that swaps either 2 rows or 2 columns, isn't it?
And it's its own inverse. That is, $P_0^{-1}=P_0$.

Swapping 2 rows is a permutation on the left.
However, swapping 2 columns is a permutation on the right.
So after those 2 permutations we have $P_0 A P_0$. (Thinking)
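
Concretely:
$$P_0\,A\,P_0 = \begin{pmatrix}0 & 1 & 0 \\1 & 0 & 0\\0 & 0 & 1\end{pmatrix}\begin{pmatrix}0 & 2 & 1\\1 & 10 & 1 \\1 & 1 & 1\end{pmatrix}\begin{pmatrix}0 & 1 & 0 \\1 & 0 & 0\\0 & 0 & 1\end{pmatrix}=\begin{pmatrix}10 & 1 & 1 \\2 & 0 & 1\\1 & 1 & 1\end{pmatrix}$$
which is exactly the matrix after the first pivoting step.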

mathmari said:
If these are correct, it doesn't hold that $LU=PA$, does it?

Shouldn't it be $LU=PAQ$, where $P$ reorders the rows of $A$ and $Q$ reorders the columns of $A$? (Wondering)

mathmari said:
The matrix $L$ is $L=P\cdot P_0\cdot G_1^{-1}\cdot P_1\cdot G_2^{-1}$, or not?

I don't think so. We have:
$$R = G_2 \cdot P_1 \cdot G_1 \cdot P_0 \cdot A \cdot P_0$$
From this we have to deduce $L$ such that:
$$LR = PAQ$$
Don't we? (Wondering)
 
  • #3
mathmari
We have $R=\begin{pmatrix}10 & 1 & 1 \\ 0 & \frac{9}{10} & \frac{9}{10} \\ 0 & 0 & 1 \end{pmatrix}$.

Since $R = G_2 \cdot P_1 \cdot G_1 \cdot P_0 \cdot A \cdot P_0$ we get that $$L\cdot G_2 \cdot P_1 \cdot G_1 \cdot P_0 \cdot A \cdot P_0=P\cdot A\cdot P_0 \Rightarrow L =P\cdot P_0^{-1}\cdot G_1^{-1}\cdot P_1^{-1}\cdot G_2^{-1} \Rightarrow L=P_1\cdot G_1^{-1}\cdot P_1\cdot G_2^{-1}$$

We have the matrices $$G_1^{-1}=\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix} \ \text{ and } \ G_2^{-1}=\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & -\frac{2}{9} & 1\end{pmatrix}$$ We also have the matrices $$P_0=\begin{pmatrix}0 & 1 & 0 \\1 & 0 & 0\\0 & 0 & 1\end{pmatrix} \ \text{ and } \ P_1=\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}$$

So we get $$L=\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & -\frac{2}{9} & 1\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 \\ \frac{1}{10} & 1 & 0 \\ \frac{1}{5} & -\frac{2}{9} & 1\end{pmatrix}$$

Is this correct? (Wondering)
 
  • #4
Klaas van Aarsen
mathmari said:
Since $R = G_2 \cdot P_1 \cdot G_1 \cdot P_0 \cdot A \cdot P_0$ we get that $$L\cdot G_2 \cdot P_1 \cdot G_1 \cdot P_0 \cdot A \cdot P_0=P\cdot A\cdot P_0 \Rightarrow L =P\cdot P_0^{-1}\cdot G_1^{-1}\cdot P_1^{-1}\cdot G_2^{-1} \Rightarrow L=P_1\cdot G_1^{-1}\cdot P_1\cdot G_2^{-1}$$

How did you get $P\cdot P_0^{-1}=P_1$ ?
I think that the $P$ we will get will be incorrect. (Worried)

mathmari said:
So we get $$L=\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\0 & 1 & 0\\0 & -\frac{2}{9} & 1\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 \\ \frac{1}{10} & 1 & 0 \\ \frac{1}{5} & -\frac{2}{9} & 1\end{pmatrix}$$

Is this correct?

Yes. (Nod)
 
  • #5
mathmari
Klaas van Aarsen said:
How did you get $P\cdot P_0^{-1}=P_1$ ?
I think that the $P$ we will get will be incorrect. (Worried)

Do we not have $P=P_1P_0 \Rightarrow PP_0^{-1}=P_1$ ? (Wondering)
 
  • #6
Klaas van Aarsen
mathmari said:
Do we not have $P=P_1P_0 \Rightarrow PP_0^{-1}=P_1$ ?

Ah yes, $P$ has to be whatever row permutation it takes to ensure that $L$ is a lower triangular matrix.

We have:
$$L =P\cdot P_0^{-1}\cdot G_1^{-1}\cdot P_1^{-1}\cdot G_2^{-1}$$
And $G_1^{-1}$ and $G_2^{-1}$ are lower triangular.
Since $P_1\cdot G_1^{-1}\cdot P_1^{-1}$ is a conjugation, it is lower triangular as well.
That is, we swap 2 rows and we also swap the corresponding 2 columns, so that the triangular form is retained. (Nerd)
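
Concretely, the conjugated factor works out to:
$$P_1\, G_1^{-1}\, P_1^{-1} = \begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\\frac{2}{10} & 1 & 0\\\frac{1}{10} & 0 & 1\end{pmatrix}\begin{pmatrix}1 & 0 & 0 \\0 & 0 & 1\\0 & 1 & 0\end{pmatrix}=\begin{pmatrix}1 & 0 & 0 \\\frac{1}{10} & 1 & 0\\\frac{2}{10} & 0 & 1\end{pmatrix}$$
so the two multipliers in the first column are merely swapped and the factor stays lower triangular.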

So we can indeed pick $\smash{P\cdot P_0^{-1}=P_1}$.
I must have made a calculation mistake earlier. (Blush)
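
For completeness, the whole decomposition can be checked numerically. A minimal NumPy sketch using the matrices from this thread, with $P=P_1P_0$ and $Q=P_0$:

```python
import numpy as np

A  = np.array([[0., 2., 1.], [1., 10., 1.], [1., 1., 1.]])
P0 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
P1 = np.array([[1., 0., 0.], [0., 0., 1.], [0., 1., 0.]])

L = np.array([[1.,   0.,   0.],
              [1/10, 1.,   0.],
              [1/5, -2/9,  1.]])
U = np.array([[10., 1.,    1.],
              [0.,  9/10,  9/10],
              [0.,  0.,    1.]])

P = P1 @ P0  # row permutation
Q = P0       # column permutation

print(np.allclose(L @ U, P @ A @ Q))  # True
```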
 

1. What is LU decomposition?

LU decomposition is a method used in linear algebra to factorize a square matrix into a lower triangular matrix (L) and an upper triangular matrix (U). This decomposition can be used to solve systems of linear equations and to calculate determinants and inverses of matrices.
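
For example, once the factors are available, a system $Ax=b$ is solved with one forward and one backward substitution, and the factors can be reused for many right-hand sides. A minimal SciPy illustration (note that SciPy's built-in routine uses row pivoting, not total pivoting):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[0., 2., 1.],
              [1., 10., 1.],
              [1., 1., 1.]])
b = np.array([1., 2., 3.])

lu, piv = lu_factor(A)      # factor A once (row-pivoted LU)
x = lu_solve((lu, piv), b)  # reuse the factors for any right-hand side

print(np.allclose(A @ x, b))  # True
```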

2. What is total pivoting in LU decomposition?

Total pivoting (also called complete pivoting) is a pivoting strategy used in LU decomposition where the largest element, in absolute value, of the entire remaining submatrix is chosen as the pivot element. Bringing it to the pivot position requires both row and column interchanges, and it guarantees that the pivot is at least as large as every entry it is used to eliminate, resulting in a more stable and accurate decomposition.

3. How is total pivoting performed in LU decomposition?

In total pivoting, the largest element, in absolute value, of the entire remaining submatrix is identified and used as the pivot element. Rows and columns are swapped to bring this element to the diagonal position, the entries below the pivot are eliminated, and the process is repeated on the trailing submatrix.
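
A minimal sketch of how this could look in code, written from scratch for illustration (the function name and structure are my own), returning $P$, $Q$, $L$, $U$ with $PAQ=LU$:

```python
import numpy as np

def lu_total_pivoting(A):
    """Illustrative helper: return P, Q, L, U with P @ A @ Q = L @ U."""
    n = A.shape[0]
    U = A.astype(float).copy()
    L = np.eye(n)
    P = np.eye(n)
    Q = np.eye(n)
    for k in range(n - 1):
        # largest entry (in absolute value) of the trailing submatrix
        i, j = divmod(int(np.argmax(np.abs(U[k:, k:]))), n - k)
        i += k
        j += k
        # swap rows k,i and columns k,j, keeping the bookkeeping consistent
        U[[k, i], :] = U[[i, k], :]
        P[[k, i], :] = P[[i, k], :]
        L[[k, i], :k] = L[[i, k], :k]  # only already-computed multipliers move
        U[:, [k, j]] = U[:, [j, k]]
        Q[:, [k, j]] = Q[:, [j, k]]
        # eliminate below the pivot
        for r in range(k + 1, n):
            L[r, k] = U[r, k] / U[k, k]
            U[r, :] -= L[r, k] * U[k, :]
    return P, Q, L, U

# the matrix from the thread above
A = np.array([[0., 2., 1.], [1., 10., 1.], [1., 1., 1.]])
P, Q, L, U = lu_total_pivoting(A)
print(np.allclose(P @ A @ Q, L @ U))  # True
```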

4. What are the advantages of using total pivoting in LU decomposition?

Total pivoting ensures that the pivot element is always the largest, in absolute value, in the remaining submatrix, which limits element growth during elimination and results in a more stable and accurate decomposition. This helps reduce the effect of round-off errors, at the cost of searching the whole submatrix for a pivot at every step instead of a single column.

5. In what situations is total pivoting recommended for LU decomposition?

Total pivoting is recommended in cases where the matrix being decomposed has large differences in magnitude between its elements, or where partial pivoting alone is not reliable enough, for example for nearly singular or otherwise ill-conditioned matrices. In such cases it improves the accuracy of the computed factors, at the price of the extra pivot search and of having to keep track of a column permutation in addition to the row permutation.
