Number of multiplications and divisions for LU decomposition

In summary, the thread discusses the number of multiplications and divisions needed to obtain an LU decomposition of an $(n+1)\times (n+1)$ matrix, given a known LU decomposition of an $n\times n$ matrix. It is shown that the complexity is $O(n^2)$: each of the roughly $2n$ new entries in the last row of the new $L$ and the last column of the new $U$ requires at most $n$ multiplications and one division. The thread also works out how the new matrices relate to the given $L$ and $U$.
  • #1
mathmari
Hey! 😊

Let $A$ be an $n\times n$ matrix with known LU decomposition, and let $u\in \mathbb{R}^n$, $v\in \mathbb{R}^{n+1}$.

Show that the number of multiplications and divisions needed to get an LU decomposition of the $(n+1)\times (n+1)$ matrix $$\begin{pmatrix}A & u \\ v^T\end{pmatrix}$$ is at most $O(n^2)$.
To get $U$, we have to eliminate $n$ terms below the main diagonal (the entries of $v^T$ except the last entry of that row). Each elimination requires computing the row multiplier, which involves a division by the pivot element.

So we have $n$ divisions, or not? :unsure:
 
  • #2
mathmari said:
So we have $n$ divisions, or not?
Hey mathmari!

Yep.
That leaves counting the number of multiplications that we need for each elimination, doesn't it? 🤔
 
  • #3
Klaas van Aarsen said:
That leaves counting the number of multiplications that we need for each elimination, doesn't it? 🤔

So for each element except the $(n+1)$th one of the last row we have to multiply the row multiplier with the respective row and subtract it from the last row, right?
Therefore we have to do $n$ multiplications, or not? :unsure:
 
  • #4
mathmari said:
So for each element except the $(n+1)$th one of the last row we have to multiply the row multiplier with the respective row and subtract it from the last row, right?
Therefore we have to do $n$ multiplications, or not?
That sounds about right to find a single unknown entry, although a number of those multiplications are actually with zero.
Either way, for the worst case entry in row $n+1$ and column $n$ of the new $L$ matrix, we have indeed $n$ multiplications and $1$ division. 🤔

We still need to repeat for every entry in the extra row below the given L matrix, don't we?
And also for the extra column to the right of the given U matrix? 🤔
 
  • #5
Klaas van Aarsen said:
That sounds about right to find a single unknown entry, although a number of those multiplications are actually with zero.
Either way, for the worst case entry in row $n+1$ and column $n$ of the new $L$ matrix, we have indeed $n$ multiplications and $1$ division. 🤔

We still need to repeat for every entry in the extra row below the given L matrix, don't we?
And also for the extra column to the right of the given U matrix? 🤔

I got stuck right now. Could you explain that further to me? I haven't really understood it. :unsure:
 
  • #6
mathmari said:
I got stuck right now. Could you explain that further to me? I haven't really understood it.
We want to find that the order of complexity is $O(n^2)$ don't we?
To find that, we need an algorithm to find the LU-decomposition of the matrix, don't we? 🤔

Let $A=L\cdot U$ be the given LU-decomposition of $A$.
Let $\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$ be the matrix to decompose.
And let $\tilde L,\,\tilde U$ be its LU-decomposition.

Can we write $\tilde L,\,\tilde U$ in terms of $L$, $U$, and $u$? (Wondering)
 
  • #7
Klaas van Aarsen said:
We want to find that the order of complexity is $O(n^2)$ don't we?
To find that, we need an algorithm to find the LU-decomposition of the matrix, don't we? 🤔

Let $A=L\cdot U$ be the given LU-decomposition of $A$.
Let $\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$ be the matrix to decompose.
And let $\tilde L,\,\tilde U$ be its LU-decomposition.

Can we write $\tilde L,\,\tilde U$ in terms of $L$, $U$, and $u$? (Wondering)

We have that $$\tilde{L}=\begin{pmatrix}L\\ 0 & 0 & \ldots & 0& \ell\end{pmatrix}$$ with $\ell>0$, or not? :unsure:
 
  • #8
mathmari said:
We have that $$\tilde{L}=\begin{pmatrix}L\\ 0 & 0 & \ldots & 0& \ell\end{pmatrix}$$ with $\ell>0$, or not?
I don't think so. (Shake)
If it were, wouldn't we get that $u_1$ is always zero? (Worried)
 
  • #9
Klaas van Aarsen said:
I don't think so. (Shake)
If it were, wouldn't we get that $u_1$ is always zero? (Worried)

I got stuck right now. Why is $u_1$ always zero then? :unsure:
 
  • #10
mathmari said:
I got stuck right now. Why is $u_1$ always zero then?

What do we get if we calculate:
$$\tilde{L} \cdot \tilde U =\begin{pmatrix}
& \\ & \\ L\\ 0 & \ldots & 0& \ell
\end{pmatrix}\cdot \begin{pmatrix}
&&U& * \\ &&& \vdots \\ &&& *
\\ 0 & \ldots & 0& *
\end{pmatrix}$$
?
In particular for the entry in the bottom left corner? (Wondering)

Can it be equal to
$$\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$$
with $u_1\ne 0$? 🤔
 
  • #11
Klaas van Aarsen said:
What do we get if we calculate:
$$\tilde{L} \cdot \tilde U =\begin{pmatrix}
& \\ & \\ L\\ 0 & \ldots & 0& \ell
\end{pmatrix}\cdot \begin{pmatrix}
&&U& * \\ &&& \vdots \\ &&& *
\\ 0 & \ldots & 0& *
\end{pmatrix}$$
?
In particular for the entry in the bottom left corner? (Wondering)

Can it be equal to
$$\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$$
with $u_1\ne 0$? 🤔

We have the following, or not?

$$\tilde{L} \cdot \tilde U =\begin{pmatrix}
& & & 0\\ & & & 0\\ L & & & \vdots\\ \ell_1 & \ldots & \ell_n& \ell_{n+1}
\end{pmatrix}\cdot \begin{pmatrix}
&&U& u_1 \\ &&& \vdots \\ &&& u_n
\\ 0 & \ldots & 0& u_{n+1}
\end{pmatrix}=\begin{pmatrix}
&&A& \ell_{11}u_1 \\ &&& \vdots \\ &&& \displaystyle{\sum_{i=1}^n\ell_{ni}u_i }
\\ \ell_1u_{11} & \ldots & \displaystyle{\sum_{i=1}^n\ell_iu_{in}} & \displaystyle{\sum_{i=1}^{n+1}\ell_iu_i}
\end{pmatrix}$$
where $L=(\ell_{ij})_{i,j\in \{1, 2, \ldots , n\}}$ and $U=(u_{ij})_{i,j\in \{1, 2, \ldots , n\}}$.

:unsure:
 
  • #12
mathmari said:
We have the following, or not?

$$\tilde{L} \cdot \tilde U =\begin{pmatrix}
& & & 0\\ & & & 0\\ L & & & \vdots\\ \ell_1 & \ldots & \ell_n& \ell_{n+1}
\end{pmatrix}\cdot \begin{pmatrix}
&&U& u_1 \\ &&& \vdots \\ &&& u_n
\\ 0 & \ldots & 0& u_{n+1}
\end{pmatrix}=\begin{pmatrix}
&&A& \ell_{11}u_1 \\ &&& \vdots \\ &&& \displaystyle{\sum_{i=1}^n\ell_{ni}u_i }
\\ \ell_1u_{11} & \ldots & \displaystyle{\sum_{i=1}^n\ell_iu_{in}} & \displaystyle{\sum_{i=1}^{n+1}\ell_iu_i}
\end{pmatrix}$$
where $L=(\ell_{ij})_{i,j\in \{1, 2, \ldots , n\}}$ and $U=(u_{ij})_{i,j\in \{1, 2, \ldots , n\}}$.
Yep. (Nod)

It's a bit problematic to introduce $u_1,\ldots,u_{n+1}$ here though.
Doesn't that clash with the given vector $u$? (Worried)

Anyway, we would get that for instance $\ell_1 u_{11}$ must be equal to the first entry of the vector $u$, mustn't it?
If that first entry of $u$ is non-zero, then $\ell_1$ cannot be zero can it? 🤔
 
  • #13
Klaas van Aarsen said:
It's a bit problematic to introduce $u_1,\ldots,u_{n+1}$ here though.
Doesn't that clash with the given vector $u$? (Worried)

Ok, it is better to use other letters.
Klaas van Aarsen said:
Anyway, we would get that for instance $\ell_1 u_{11}$ must be equal to the first entry of the vector $u$, mustn't it?
If that first entry of $u$ is non-zero, then $\ell_1$ cannot be zero can it? 🤔

So we have:
$$\tilde{L} \cdot \tilde U =\begin{pmatrix}
& & & 0\\ & & & 0\\ L & & & \vdots\\ \alpha_1 & \ldots & \alpha_n& \alpha_{n+1}
\end{pmatrix}\cdot \begin{pmatrix}
&&U& \beta_1 \\ &&& \vdots \\ &&& \beta_n
\\ 0 & \ldots & 0& \beta_{n+1}
\end{pmatrix}=\begin{pmatrix}
&&A& \ell_{11}\beta_1 \\ &&& \vdots \\ &&& \displaystyle{\sum_{i=1}^n\ell_{ni}\beta_i }
\\ \alpha_1u_{11} & \ldots & \displaystyle{\sum_{i=1}^n\alpha_iu_{in}} & \displaystyle{\sum_{i=1}^{n+1}\alpha_i\beta_i}
\end{pmatrix}$$
where $L=(\ell_{ij})_{i,j\in \{1, 2, \ldots , n\}}$ and $U=(u_{ij})_{i,j\in \{1, 2, \ldots , n\}}$.

The result must be of the form $\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$, so it must hold that \begin{align*}&\ell_{11}\beta_1=\alpha_1u_{11} \\ &\ldots \\ & \displaystyle{\sum_{i=1}^n\ell_{ni}\beta_i }=\displaystyle{\sum_{i=1}^n\alpha_iu_{in}} \end{align*}

What do we get from that? :unsure:
 
  • #14
mathmari said:
The result must be of the form $\tilde A=\begin{pmatrix}A & u \\ u^T\end{pmatrix}$, so it must hold \begin{align*}&\ell_{11}\beta_1=\alpha_1u_{11} \\ &\ldots\end{align*}

What do we get from that?

From the first equation we get $$\ell_{11}\beta_1=\alpha_1u_{11} = u_1 \implies \begin{cases}\alpha_1 = \frac {u_1}{u_{11}} \\ \beta_1=\frac{u_1}{\ell_{11}}\end{cases}$$ don't we? 🤔
 
  • #15
I have just noticed a vector $v$ in the problem statement:
mathmari said:
Let $A$ a $n\times n$ matrix with known LU decomposition, let $u\in \mathbb{R}^n, v\in \mathbb{R}^{n+1}$.

Show that the number of multiplications and divisions that are needed to get a LU decomposition of the $(n+1)\times (n+1)$ matrix $$\begin{pmatrix}A & u \\ u^T\end{pmatrix}$$ is at most $O(n^2)$.

Is there a typo?
Shouldn't that $v$ be used somewhere? (Worried)
 
  • #16
Klaas van Aarsen said:
I have just noticed a vector $v$ in the problem statement: Is there a typo?
Shouldn't that $v$ be used somewhere? (Worried)

Ahh yes, it is a typo. :oops: It should be:
$$\begin{pmatrix}A & u \\ v^T\end{pmatrix}$$

So from $$
\tilde{L} \cdot \tilde U =\begin{pmatrix}
& & & 0\\ & & & 0\\ L & & & \vdots\\ \alpha_1 & \ldots & \alpha_n& \alpha_{n+1}
\end{pmatrix}\cdot \begin{pmatrix}
&&U& \beta_1 \\ &&& \vdots \\ &&& \beta_n
\\ 0 & \ldots & 0& \beta_{n+1}
\end{pmatrix}=\begin{pmatrix}
&&A& \ell_{11}\beta_1 \\ &&& \vdots \\ &&& \displaystyle{\sum_{i=1}^n\ell_{ni}\beta_i }
\\ \alpha_1u_{11} & \ldots & \displaystyle{\sum_{i=1}^n\alpha_iu_{in}} & \displaystyle{\sum_{i=1}^{n+1}\alpha_i\beta_i}
\end{pmatrix}$$ what do we get? :unsure:
 
  • #17
We can deduce the values of the $\alpha_i$ and $\beta_i$ can't we?
Once we have those, we have the LU-decomposition of $\tilde A$. 🤔
 
  • #18
Klaas van Aarsen said:
We can deduce the values of the $\alpha_i$ and $\beta_i$ can't we?
Once we have those, we have the LU-decomposition of $\tilde A$. 🤔

Let $v=(v_1, \ldots , v_{n+1})$ and $u=(x_1, \ldots, x_n)$.

We have that \begin{align*}&\alpha_1u_{11}=v_1 \Rightarrow \alpha_1=\frac{v_1}{u_{11}} \\ &\ldots \\ &\sum_{i=1}^n\alpha_iu_{in}=v_n \Rightarrow \alpha_n=\frac{v_n-\sum_{i=1}^{n-1}\alpha_iu_{in}}{u_{nn}}\\ &\ell_{11}\beta_1=x_1 \Rightarrow \beta_1=\frac{x_1}{\ell_{11}} \\ &\ldots \\ &\sum_{i=1}^n\ell_{ni}\beta_i=x_n \Rightarrow \beta_n=\frac{x_n-\sum_{i=1}^{n-1}\ell_{ni}\beta_i}{\ell_{nn}}\\ &\sum_{i=1}^{n+1}\alpha_i\beta_i=v_{n+1} \Rightarrow \alpha_{n+1}\beta_{n+1}=v_{n+1}-\sum_{i=1}^{n}\alpha_i\beta_i\end{align*}

Is everything correct? :unsure:
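The recurrences above are exactly two triangular solves plus one inner product for the corner entry. A minimal sketch in NumPy (the helper name `border_lu` is ours, not from a library; it fixes the free corner choice $\alpha_{n+1}=1$ and assumes the given triangular factors have nonzero diagonals):

```python
import numpy as np

def border_lu(L, U, u, v):
    """Extend a known decomposition A = L U to the bordered matrix
    with u appended as a column and v^T as the last row -- a sketch
    of the update derived in this thread (alpha_{n+1} fixed to 1)."""
    n = L.shape[0]
    alpha = np.empty(n)              # new bottom row of L-tilde
    for k in range(n):               # solve U^T alpha = v[:n], forward subst.
        alpha[k] = (v[k] - U[:k, k] @ alpha[:k]) / U[k, k]
    beta = np.empty(n)               # new right column of U-tilde
    for k in range(n):               # solve L beta = u, forward subst.
        beta[k] = (u[k] - L[k, :k] @ beta[:k]) / L[k, k]
    corner = v[n] - alpha @ beta     # beta_{n+1}, since alpha_{n+1} = 1
    Lt = np.block([[L, np.zeros((n, 1))],
                   [alpha[None, :], np.ones((1, 1))]])
    Ut = np.block([[U, beta[:, None]],
                   [np.zeros((1, n)), np.full((1, 1), corner)]])
    return Lt, Ut
```

Each of the two loops does $k-1$ multiplications and one division at step $k$, so the whole update stays within the $O(n^2)$ budget.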
 
  • #19
I see a $u_n$ that I think should be a $v_n$.
And I also see a $u_{n+1}$ that should be a $v_{n+1}$. (Worried)

Other than that it looks good to me! (Nod)
 
  • #20
Klaas van Aarsen said:
I see a $u_n$ that I think should be a $v_n$.
And I also see a $u_{n+1}$ that should be a $v_{n+1}$. (Worried)

Other than that it looks good to me! (Nod)

I changed it!

So in total we do $\displaystyle{2\cdot \sum_{i=1}^ni=2\cdot \frac{n(n+1)}{2}=n(n+1)}$ multiplications and $\displaystyle{2n}$ divisions, or not? :unsure:
 
  • #21
mathmari said:
I changed it!

So in total we do $\displaystyle{2\cdot \sum_{i=1}^ni=2\cdot \frac{n(n+1)}{2}=n(n+1)}$ multiplications and $\displaystyle{2n}$ divisions, or not?
Yep. (Nod)
 
  • #22
Klaas van Aarsen said:
Yep. (Nod)

Great! Does writing down the matrices and the formulas that we derived count as a formal proof? Or do we also have to do something else/more? :unsure:
 
  • #23
mathmari said:
Great! Does writing down the matrices and the formulas that we derived count as a formal proof? Or do we also have to do something else/more?
We have identified an algorithm with order $O(n^2)$ that solves the problem.
So that qualifies as a proof that the order of the desired LU-decomposition is at most $O(n^2)$, doesn't it? 🤔
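The tally from post #20 can be checked mechanically. A small bookkeeping sketch (illustrative counting, not a benchmark: $k-1$ products and one division for the $k$-th entry of each triangular solve, plus $n$ products for the corner entry):

```python
def border_lu_op_count(n):
    """Multiplications/divisions in the bordered-LU update: two
    triangular solves of size n plus one inner product for the corner."""
    mults = divs = 0
    for k in range(1, n + 1):        # k-th entry of alpha (solve for last row)
        mults += k - 1
        divs += 1
    for k in range(1, n + 1):        # k-th entry of beta (solve for last column)
        mults += k - 1
        divs += 1
    mults += n                       # corner: v_{n+1} - sum alpha_i beta_i
    return mults, divs
```

This count comes to exactly $n^2$ multiplications and $2n$ divisions, slightly tighter than the $n(n+1)$ bound found above, and in either case $O(n^2)$.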
 

1. How many multiplications and divisions are required for LU decomposition?

The number of multiplications and divisions required for LU decomposition depends on the size of the matrix being decomposed. In general, for an $n \times n$ matrix, the number of multiplications is approximately $n^3/3$ and the number of divisions is approximately $n^2/2$. This means that the number of operations grows significantly as the size of the matrix increases.
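Those leading-order terms come from summing the work of each elimination step. A quick bookkeeping sketch (illustrative, no pivoting; the function name is ours):

```python
def lu_op_count(n):
    """Operation count for plain Gaussian-elimination LU without pivoting:
    step k computes n-k-1 multipliers (one division each) and updates the
    (n-k-1) x (n-k-1) trailing block (one multiplication per entry)."""
    mults = divs = 0
    for k in range(n - 1):
        divs += n - k - 1
        mults += (n - k - 1) ** 2
    return mults, divs
```

Summing gives $\sum_{m=1}^{n-1} m^2 \approx n^3/3$ multiplications and $\sum_{m=1}^{n-1} m \approx n^2/2$ divisions, matching the estimates above.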

2. Is there a way to reduce the number of multiplications and divisions for LU decomposition?

Yes, there are several methods for reducing the number of operations required for LU decomposition. One approach is to use a pivoting strategy, which rearranges the rows and columns of the matrix to minimize the number of operations. Another approach is to use a more efficient algorithm, such as the Crout or Doolittle method, which reduces the number of operations by eliminating unnecessary calculations.
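As a sketch of the Doolittle variant mentioned above (unit diagonal on $L$, no pivoting; it assumes all leading principal minors are nonzero, and the function name is ours):

```python
import numpy as np

def doolittle_lu(A):
    """Doolittle LU: L has a unit diagonal, U carries the pivots.
    Each entry is obtained directly from one inner product."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):        # i-th row of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):    # i-th column of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U
```

For example, $\begin{pmatrix}4&3\\6&3\end{pmatrix}$ factors with $\ell_{21}=1.5$ and $u_{22}=-1.5$.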

3. How does the number of multiplications and divisions for LU decomposition compare to other matrix decomposition methods?

Compared to other matrix decomposition methods, such as Cholesky decomposition or QR decomposition, LU decomposition typically requires a larger number of operations. However, it is still a commonly used method due to its simplicity and ability to handle a wide range of matrices.

4. Can the number of multiplications and divisions for LU decomposition be calculated beforehand?

Yes, the number of multiplications and divisions for LU decomposition can be calculated beforehand by using the formula mentioned in the first question. However, this is only an approximation and the actual number of operations may vary slightly depending on the specific matrix being decomposed.

5. How does the number of multiplications and divisions for LU decomposition affect the efficiency of solving linear systems?

The number of multiplications and divisions for LU decomposition is a significant factor in the efficiency of solving linear systems. Generally, the more operations required for the decomposition, the longer it will take to solve the system. Therefore, reducing the number of operations can greatly improve the efficiency of solving linear systems using LU decomposition.
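The reuse pattern can be sketched as follows: factor once at $O(n^3)$ cost, then each right-hand side costs only two $O(n^2)$ triangular passes (a minimal sketch; `lu_solve` is our name, and pivoting is omitted):

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve (L U) x = b via forward substitution (L y = b) followed
    by back substitution (U x = y); each pass is O(n^2)."""
    n = L.shape[0]
    y = np.empty(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```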
