Express the given matrix in the form ##L_1DU_1##

  • #1
chwala
Homework Statement
This is my own question, refreshing on matrices after a long time. I do not think there is more insight on this; I checked my working by multiplying the three matrices and confirmed that the product is correct.

If there is anything new, I would appreciate hearing from you. Cheers!

##\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}##
Relevant Equations
understanding of matrix decomposition.

$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}=
\begin{bmatrix}
1& 0 \\
a& 1& \\
\end{bmatrix}⋅
\begin{bmatrix}
b& c \\
0& d& \\
\end{bmatrix}$$

##b=4,\; c=3,\; ab=6⇒a=1.5,\; ac+d=-2⇒d=-6.5##
$$\begin{bmatrix}
4& 3 \\
6 & -2 \\
\end{bmatrix}
=
\begin{bmatrix}
1& 0\\
\dfrac{3}{2} & 1 \\
\end{bmatrix}⋅
\begin{bmatrix}
4& 3\\
0 & \dfrac{-13}{2} \\
\end{bmatrix}$$

$$\begin{bmatrix}
4& 3 \\
0 & \dfrac{-13}{2} \\
\end{bmatrix}
=
\begin{bmatrix}
e& 0\\
0& f \\
\end{bmatrix}⋅
\begin{bmatrix}
1& g\\
0 & 1 \\
\end{bmatrix}
$$

It follows that,
##e=4,\; f=\dfrac{-13}{2},\; g=\dfrac{3}{4}## therefore,

$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}
=
\begin{bmatrix}
1& 0\\
1.5 & 1 & \\
\end{bmatrix}⋅
\begin{bmatrix}
4& 0\\
0 & -6.5 & \\
\end{bmatrix}⋅
\begin{bmatrix}
1& 0.75\\
0 & 1 & \\
\end{bmatrix}
$$
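For reference, the check mentioned in the homework statement can be reproduced numerically. Here is a minimal sketch (my own addition, using numpy) that multiplies the three factors back together:

```python
import numpy as np

# Factors obtained above: unit lower triangular L, diagonal D, unit upper triangular U
L = np.array([[1.0, 0.0],
              [1.5, 1.0]])
D = np.array([[4.0, 0.0],
              [0.0, -6.5]])
U = np.array([[1.0, 0.75],
              [0.0, 1.0]])

A = np.array([[4.0, 3.0],
              [6.0, -2.0]])

print(L @ D @ U)                    # [[ 4.  3.] [ 6. -2.]]
print(np.allclose(L @ D @ U, A))    # True: the product recovers the original matrix
```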
 
  • #2
Not new, and probably not much to say about the matrix case. One can consider this decomposition in terms of linear transformations of a vector space with certain properties, and how this corresponds to a decomposition of that vector space. Of course, the two-dimensional case is not very enlightening.

However, there is a deep-rooted relation between this decomposition and quantum physics.

\begin{matrix}
L D U &=&\text{ { lower unipotent } }&\cdot&\text{ { diagonal } }&\cdot&\text{ { upper unipotent } }&\in &\operatorname{GL(V)}\\
L +D+ U &=&\text{ { lower nilpotent } }&+&\text{ { diagonal } }&+&\text{ { upper nilpotent } }&\in &\operatorname{\mathfrak{gl}(V)}\\
X&=&\underbrace{\displaystyle{\sum_{\alpha \in \Delta^\mathbf{-}}y_\alpha E_\alpha }}_{\in \mathfrak{N^\mathbf{-}}}&+&\underbrace{\displaystyle{\sum_{\alpha \in \Delta}h_\alpha H_\alpha }}_{\in \mathfrak{h}}&+&\underbrace{\displaystyle{\sum_{\alpha \in \Delta^\mathbf{+}}x_\alpha E_\alpha }}_{\in \mathfrak{N^\mathbf{+}}}&\in&\mathfrak{gl(g)}\\
&&\text{ negative weights }&&\text{ semisimple }&&\text{ positive weights }&&\\
\text{operator}&:&\text{ lowering ladder }&&\text{ stationary }&&\text{ raising ladder }&&
\end{matrix}
Yes, there is a long way to go from your example above to quantum mechanics, and I left out several hundred pages of theory or roughly four textbooks, but in the end, it comes down to that list.
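For concreteness (my own addition, not part of the original post), the smallest instance of that last line is ##\mathfrak{gl}(2)##: any ##2\times 2## matrix splits into a strictly lower triangular part, a diagonal part, and a strictly upper triangular part, spanned by the familiar ##\mathfrak{sl}(2)## ladder operators,
$$
\begin{bmatrix} a & b \\ c & d \end{bmatrix}
= c\underbrace{\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}}_{F\ \text{(lowering)}}
+ \underbrace{\begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix}}_{\in\,\mathfrak{h}\ \text{(diagonal)}}
+ b\underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}_{E\ \text{(raising)}},
$$
with ##H=\operatorname{diag}(1,-1)## satisfying ##[H,E]=2E,\ [H,F]=-2F,\ [E,F]=H##.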
 
  • #3
chwala said:
If there is anything new then i would appreciate to hear from you. Cheers!
Not sure if this answers what you are asking but there is an equivalent (but IMO quicker) procedure.

This starts by reducing the original matrix to row echelon form. You can easily find examples/explanations, e.g.
 
  • #4
Steve4Physics said:
Not sure if this answers what you are asking but there is an equivalent (but IMO quicker) procedure.

This starts by reducing the original matrix to row echelon form. You can easily find examples/explanations, e.g.

I am following your video example, but there is a problem. Starting from

$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}$$

I used ##-3R_1+2R_2## to get;

$$\begin{bmatrix}
4& 3 \\
0 & -13 & \\
\end{bmatrix}$$

The next step here, as suggested, is to factor out the diagonal entries to get the matrix ##D##. When you pull them out (factor them out as suggested) you get

$$\begin{bmatrix}
4& 0 \\
0 & -13 & \\
\end{bmatrix}$$

instead of

$$\begin{bmatrix}
4& 0 \\
0 & -6.5 & \\
\end{bmatrix}$$

I can tell that we have to divide row 2 by 2...but the problem is, how would you tell that using the indicated approach? I went ahead with the other steps, i.e. getting matrix ##U##; this was OK. The steps I followed were

$$\begin{bmatrix}
4& 3 \\
0 & -13 & \\
\end{bmatrix}$$

##R_1\div4##

##R_2\div(-13)##

to realize;

$$\begin{bmatrix}
1& 0.75 \\
0 & 1 & \\
\end{bmatrix}$$

I also found matrix ##L##, i.e. by using the identity matrix as suggested; applying the reverse operation ##3R_1+2R_2## and then ##R_2\div 2##, we have:

$$\begin{bmatrix}
1& 0 \\
0 & 1 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1& 0 \\
3 & 2 \\
\end{bmatrix}
\rightarrow
\begin{bmatrix}
1& 0 \\
1.5 & 1 \\
\end{bmatrix}
$$

The only hitch with this approach is how to get the correct matrix for ##D##. You may cross-check this...thanks.
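To see the hitch numerically, here is a quick check (a minimal sketch with numpy, my own addition): with ##D## taken as ##\operatorname{diag}(4,-13)##, the unit-diagonal ##L## from post #1 no longer reproduces the original matrix.

```python
import numpy as np

A  = np.array([[4.0, 3.0],
               [6.0, -2.0]])
U1 = np.array([[1.0, 0.75],
               [0.0, 1.0]])
L1 = np.array([[1.0, 0.0],
               [1.5, 1.0]])        # unit-diagonal L from post #1

D_a = np.diag([4.0, -6.5])         # diagonal of [[4, 3], [0, -6.5]]
D_b = np.diag([4.0, -13.0])        # diagonal of [[4, 3], [0, -13]]

print(np.allclose(L1 @ D_a @ U1, A))   # True  -- reproduces the original matrix
print(np.allclose(L1 @ D_b @ U1, A))   # False -- row 2 is off by a factor of 2
```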
 
  • #5
chwala said:
$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}$$

I used ##-3R_1+2R_2## ...
Edit. Apologies. You hadn't made any mistakes, so my reply (below) is wrong. I've struck it through.

I think you have performed the wrong row operation and also got the arithmetic wrong!$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}$$Replace row-2 by ##R_2 +(-1.5R_1)##$$\begin{bmatrix}
4& 3 \\
{6+(-6)} & {-2+(-4.5)}& \\
\end{bmatrix}$$giving$$\begin{bmatrix}
4& 3 \\
0&-6.5& \\
\end{bmatrix}$$which has the required diagonal values.
 
  • #6
Steve4Physics said:
I think you have performed the wrong row operation and also got the arithmetic wrong!$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}$$Replace row-2 by ##R_2 +(-1.5R_1)##$$\begin{bmatrix}
4& 3 \\
{6+(-6)} & {-2+(-4.5)}& \\
\end{bmatrix}$$giving$$\begin{bmatrix}
4& 3 \\
0&-6.5& \\
\end{bmatrix}$$which has the required diagonal values.
Ok thanks...I will look at it again.
 
  • #7
Steve4Physics said:
I think you have performed the wrong row operation and also got the arithmetic wrong!$$\begin{bmatrix}
4& 3 \\
6 & -2 & \\
\end{bmatrix}$$Replace row-2 by ##R_2 +(-1.5R_1)##$$\begin{bmatrix}
4& 3 \\
{6+(-6)} & {-2+(-4.5)}& \\
\end{bmatrix}$$giving$$\begin{bmatrix}
4& 3 \\
0&-6.5& \\
\end{bmatrix}$$which has the required diagonal values.
The row operation is correct...the only difference is that it is twice the one you've posted. What I would like to know is why that would be wrong...my understanding of echelon matrices is that we can reduce rows and columns in whichever way we like, as long as we are consistent in the arithmetic...

Unless the approach requires ##R_2## itself not to be scaled (i.e. its coefficient must be 1)...then that would be clear.
 
  • #8
chwala said:
The row operation is correct...
Yes. My apologies. Your row reduction was correct. My bad. I’ve updated Post #5 to reflect this.

When I get a chance I'll go through your working and get back to you.
 
  • #9
chwala said:
Not unless the approach requires ##R_2## not to have a coefficient...then that would be clear.
Yes - that seems correct. Though we should really say that the coefficient of ##R_2## must be 1.

This is more formally stated (though in the context of LU decomposition) here: https://en.wikipedia.org/wiki/LU_decomposition#Using_Gaussian_elimination

If that rule is not followed, I believe a modified procedure would be needed which would result in a valid alternative LDU decomposition - but such that L would not have 1s in its diagonal.

However, I must admit no claim to real expertise - just a dabbler. So I’ll leave it to the experts to provide any further thoughts on this.

Also, note that the video referenced in Post #3 was picked more or less at random. I’ve watched it through now and it’s not that good – the second example (factorisation of a 3x3 matrix) is rushed and has errors.
 
  • #10
Steve4Physics said:
Yes - that seems correct. Though we should really say that the coefficient of ##R_2## must be 1.

This is more formally stated (though in the context of LU decomposition) here: https://en.wikipedia.org/wiki/LU_decomposition#Using_Gaussian_elimination

If that rule is not followed, I believe a modified procedure would be needed which would result in a valid alternative LDU decomposition - but such that L would not have 1s in its diagonal.
This may tidy up a loose end...

As suspected, using ##-3R_1+2R_2## leads to a different LDU factorisation:$$LDU = \begin{bmatrix}
1& 0 \\
1.5 & 0.5 & \\
\end{bmatrix}
\begin{bmatrix}
4& 0 \\
0 & -13 & \\
\end{bmatrix}
\begin{bmatrix}
1& 0.75 \\
0 & 1 & \\
\end{bmatrix}$$which gives the correct original matrix.

If you want to try this for yourself, note that ##-3R_1+2R_2## is equivalent to applying the matrix$$E = \begin{bmatrix}
1& 0 \\
-3 & 2 & \\
\end{bmatrix}$$If the original matrix is ##A## then ##EA = U## and it is not hard to show that ##L = E^{-1}##.

(If this is unfamiliar, read about elimination matrices.)
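Here is a numerical version of that remark (a sketch with numpy; the variable names are my own): build ##E##, invert it, and read off ##L##, ##D## and the unit upper triangular factor.

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, -2.0]])

E = np.array([[ 1.0, 0.0],
              [-3.0, 2.0]])        # encodes the row operation -3*R1 + 2*R2

U  = E @ A                         # [[4, 3], [0, -13]]  (row echelon form)
L  = np.linalg.inv(E)              # [[1, 0], [1.5, 0.5]]
D  = np.diag(np.diag(U))           # [[4, 0], [0, -13]]
U1 = np.linalg.inv(D) @ U          # [[1, 0.75], [0, 1]]

print(L)
print(np.allclose(L @ D @ U1, A))  # True: A = E^{-1}(EA) = L D U1
```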
 

1. What is the purpose of expressing a matrix in the form ##L_1DU_1##?

The purpose of expressing a matrix in the form ##L_1DU_1## is to decompose the original matrix into three factors: a lower triangular matrix ##L_1## with 1s on its diagonal, a diagonal matrix ##D##, and an upper triangular matrix ##U_1## with 1s on its diagonal. This form allows for easier computation and analysis of the original matrix.

2. How do you determine the values of ##L_1##, ##D##, and ##U_1## in the expression ##L_1DU_1##?

The values of ##L_1##, ##D##, and ##U_1## can be determined by Gaussian elimination, essentially an LU decomposition followed by factoring out the pivots: row operations reduce the original matrix to an upper triangular matrix, the multipliers used in those row operations (equivalently, the inverse of the product of the elimination matrices) make up ##L_1##, the pivots on the diagonal of the upper triangular matrix form ##D##, and dividing each of its rows by its pivot gives ##U_1##.
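As a rough illustration of that procedure, here is a minimal sketch (my own, in numpy) of Gaussian elimination without pivoting; it assumes the matrix is square with nonzero pivots.

```python
import numpy as np

def ldu(A):
    """Factor A = L @ D @ U1 by Gaussian elimination without pivoting.
    Assumes A is square and every pivot is nonzero."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]        # multiplier for the row operation
            L[i, k] = m                  # the multiplier becomes an entry of L
            U[i, :] -= m * U[k, :]       # eliminate the entry below the pivot
    D = np.diag(np.diag(U))              # pull the pivots out into D
    U1 = np.linalg.inv(D) @ U            # divide each row by its pivot
    return L, D, U1

L, D, U1 = ldu([[4, 3], [6, -2]])
print(L)    # [[1.  0.] [1.5 1.]]
print(D)    # [[ 4.  0.] [ 0. -6.5]]
print(U1)   # [[1.  0.75] [0.  1.]]
```

For the matrix in this thread, this reproduces the factorisation found in post #1.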

3. Can any matrix be expressed in the form ##L_1DU_1##?

No, not every matrix can be expressed in the form ##L_1DU_1##. The form applies to square matrices, and the elimination procedure above additionally requires every pivot to be nonzero (equivalently, the leading principal minors must be nonzero). If a zero pivot is encountered, row interchanges (pivoting) are needed, which yields a factorisation of a permuted version of the matrix instead.

4. How is expressing a matrix in the form ##L_1DU_1## useful in solving systems of linear equations?

Expressing a matrix in the form ##L_1DU_1## is useful in solving systems of linear equations because the factored system is easy to work with: solving ##Ax=b## reduces to a forward substitution with ##L_1##, a division by the diagonal entries of ##D##, and a back substitution with ##U_1##, and the factorisation can be reused for many different right-hand sides.
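Concretely, here is a minimal sketch (my own, using numpy and scipy; the right-hand side ##b## is chosen arbitrarily) of solving ##Ax=b## with the factors from this thread:

```python
import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[1.0, 0.0], [1.5, 1.0]])
D = np.diag([4.0, -6.5])
U = np.array([[1.0, 0.75], [0.0, 1.0]])
b = np.array([1.0, 2.0])                  # example right-hand side

y = solve_triangular(L, b, lower=True)    # forward substitution:  L y = b
z = y / np.diag(D)                        # diagonal scaling:      D z = y
x = solve_triangular(U, z, lower=False)   # back substitution:     U x = z

print(x)
print(np.allclose(L @ D @ U @ x, b))      # True
```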

5. Are there any other forms besides ##L_1DU_1## that can be used to express a matrix?

Yes, there are other factorisations that can be used, such as the QR decomposition, the Cholesky decomposition, and the Schur decomposition. Each has its own purpose and advantages, but the ##L_1DU_1## (LDU) form is commonly used in linear algebra and matrix computations.
