Gaussian elimination for homogeneous linear systems

  • #1

Summary:

Remaining equations after Gaussian elimination of a first set of unknowns from a homogeneous linear system

Main Question or Discussion Point

Hi,

I ask for a clarification about the following: consider, for instance, a 10 x 12 homogeneous linear system and perform Gaussian elimination on the first 8 unknowns. Suppose you end up with 5 equations in the remaining 12 - 8 = 4 unknowns (because in the process of eliminating the first 8 unknowns you end up with a row echelon form that has a non-pivot column, i.e. a free unknown).

The remaining 5 equations are certainly linearly dependent (being 5 homogeneous equations in 4 unknowns), but my doubt is: does one of the 5 equations have to be identically null (a zero row) for some reason?

thanks
 

Answers and Replies

  • #2
Summary: Remaining equations after Gaussian elimination of a first set of unknowns from a homogeneous linear system

does one of the 5 equations have to be identically null (a zero row) for some reason?
If you apply Gauß elimination until the very end, then yes. If you stop earlier because there is no reason to continue, then no.
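A minimal sketch of what "until the very end" means (not from the thread; the 5 x 4 coefficient matrix below is made up for illustration): carrying the elimination all the way to reduced row echelon form on 5 homogeneous equations in 4 unknowns must leave at least one zero row, because the rank is at most 4.

```python
# Made-up 5 x 4 homogeneous system: 5 equations, 4 unknowns, rank <= 4,
# so full Gaussian elimination (RREF) necessarily produces zero rows.
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 3],
               [2, 5, 1, 5],
               [1, 3, 1, 4],
               [0, 0, 1, 1]])

R, pivots = A.rref()                      # run the elimination to the very end
print(R)                                  # the trailing rows are identically zero
print(len(pivots), "pivots in", A.rows, "equations")
```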
 
  • #3
If you apply Gauss elimination until the very end, then yes. If you stop earlier because there is no reason to continue, then no.
Sorry for the confusion... in the given example, eliminating the first 8 unknowns you get 3 remaining equations: 10 - (8-1) = 3 (because of the non-pivot column in the row echelon form attained). Furthermore, as you said, in the general case you get 3 equations in 4 (= 12 - 8) unknowns, but...

take this time a homogeneous square linear system... what if the square matrix ##A## associated with it is symmetric (##A = A^T##)?
 
  • #4
what if the square matrix ##A## associated with it is symmetric (##A = A^T##)?
To what extent does this matter? If we have a singular matrix, we will always end up with rows of all zeroes and an identity submatrix if divisions are allowed, or a diagonal matrix otherwise.
 
  • #5
If we have a singular matrix, we will always end up with rows of all zeroes and an identity submatrix if divisions are allowed, or a diagonal matrix otherwise.
Just to be clear: the symmetric matrix ##A## I was talking about is the original homogeneous system matrix... the elementary row operations of Gaussian elimination should not preserve the symmetry property, I believe.
 
  • #6
Just to be clear: the symmetric matrix ##A## I was talking about is the original homogeneous system matrix... the elementary row operations of Gaussian elimination should not preserve the symmetry property, I believe.
Depends on whether you allow row and column operations or only one kind. If both are allowed and divisions are allowed, then we end up with ##\begin{bmatrix}I&0\\0&0\end{bmatrix}##, otherwise not.
$$
\begin{bmatrix}1&2\\2&4\end{bmatrix} \longrightarrow \begin{bmatrix}1&2\\0&0\end{bmatrix}\longrightarrow \begin{bmatrix}1&0\\0&0\end{bmatrix}
$$
 
  • #7
Depends on whether you allow row and column operations or only one kind. If both are allowed and divisions are allowed, then we end up with ##\begin{bmatrix}I&0\\0&0\end{bmatrix}##, otherwise not.
$$
\begin{bmatrix}1&2\\2&4\end{bmatrix} \longrightarrow \begin{bmatrix}1&2\\0&0\end{bmatrix}\longrightarrow \begin{bmatrix}1&0\\0&0\end{bmatrix}
$$
As far as I understand, this type of "reduction" (using both row and column operations in order to end up with the matrix ##R##) can be applied to any square matrix ##A##, not just to symmetric ones, right?

By the way, the solution set of ##Rx=0## is different from that of the original ##Ax=0##, I believe...
 
  • #8
As far as I understand, this type of "reduction" (using both row and column operations in order to end up with the matrix ##R##) can be applied to any square matrix ##A##, not just to symmetric ones, right?

By the way, the solution set of ##Rx=0## is different from that of the original ##Ax=0##, I believe...
Yes, to both. Row operations are operations among the equations. Column operations are a change of variables, so one needs to keep track of them. E.g. adding the second column to the first means changing ##x \longmapsto x'=x+y##. To get a solution in the old coordinates, these steps have to be reversed at the end. It's basically only helpful if you want to know the dimensions, the rank and the defect, the dimension of the nullspace, or the eigenvalues if you don't use divisions.

Using only row operations leads to something like ##\begin{bmatrix}a & b & \ldots \\ 0 & c & \ldots \\ 0&0& \ldots \\ \ldots&\ldots&\ldots\\\ldots&\ldots&\ldots\\0&0&0\\0&0&0 \end{bmatrix}##, an upper triangular matrix with possibly some rows of zeros at the end. In any case, symmetry doesn't play a role in this process, but you can still read out all information you would get if you allowed column operations, so column operations are not really needed; I mentioned them only to answer your question, since you didn't say whether they are allowed or not.
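A small sketch of that bookkeeping (not from the thread; the matrices are made up for illustration): a column operation is a right multiplication ##A \mapsto AC##, i.e. a change of variables, and solutions of the transformed system have to be mapped back through ##C##.

```python
# Column operations as a change of variables: if B = A*C and B*y = 0,
# then x = C*y solves the original homogeneous system A*x = 0.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6]])          # original homogeneous system A x = 0

C = sp.Matrix([[1, 0, 0],
               [1, 1, 0],
               [0, 0, 1]])          # elementary column operation: add column 2 to column 1

B = A * C                           # the system in the new variables y
for y in B.nullspace():             # solve B y = 0 ...
    x = C * y                       # ... and reverse the change of variables
    print(x.T, (A * x).T)           # A x = 0, as expected
```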
 
  • #9
Using only row operations leads to something like ##\begin{bmatrix}a & b & \ldots \\ 0 & c & \ldots \\ 0&0& \ldots \\ \ldots&\ldots&\ldots\\\ldots&\ldots&\ldots\\0&0&0\\0&0&0 \end{bmatrix}##, an upper triangular matrix with possibly some rows of zeros at the end. In any case, symmetry doesn't play a role
Just to check I got it correctly: take just the upper triangular matrix you are talking about (forgetting for a moment the possible rows of zeros following it): it will actually be in row echelon form, and since every square matrix in row echelon form is also upper triangular, we can conclude it is upper triangular.
 
  • #10
Just to check I got it correctly: take just the upper triangular matrix you are talking about (forgetting for a moment the possible rows of zeros following it): it will actually be in row echelon form, and since every square matrix in row echelon form is also upper triangular, we can conclude it is upper triangular.
It has been too long for me to be firm on the names of the different stages of the procedure. Gauß elimination is mainly done to get this upper triangular matrix; it tells you rank, defect, and eigenvalues. And with backward substitution you also get all the variables. Of course you can proceed further and try to make a diagonal matrix out of it, but this isn't always necessary. Here your symmetry comes into play: if the matrix is symmetric, then you can achieve such a diagonal matrix at the end of the process.
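For instance, a small made-up symmetric example (not from the thread) of that last step, where the finishing column operation mirrors the row operation:
$$
\begin{bmatrix}2&1\\1&2\end{bmatrix}
\;\xrightarrow{\;R_2\,\to\, R_2-\tfrac12 R_1\;}\;
\begin{bmatrix}2&1\\0&\tfrac32\end{bmatrix}
\;\xrightarrow{\;C_2\,\to\, C_2-\tfrac12 C_1\;}\;
\begin{bmatrix}2&0\\0&\tfrac32\end{bmatrix}
$$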

If you want, there is a matrix calculator on this list:
https://www.physicsforums.com/threa...h-physics-earth-and-other-curiosities.970262/ which allows you to play around with a few examples.
 
  • #11
Here your symmetry comes into play: if the matrix is symmetric, then you can achieve such a diagonal matrix at the end of the process.
If the symmetric matrix ##A## admits an ##LDL^T## decomposition then your point is clear: starting from the upper triangular matrix (attained by row-based Gaussian elimination) you can achieve the diagonal matrix ##D## via elementary column operations, namely right multiplication by ##(L^T)^{-1}##. What if the symmetric matrix ##A## does not admit an ##LDL^T## decomposition (or, equivalently, an ##LU## decomposition without involving any permutation matrix ##P##)?
 
  • #12
The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.
 
  • #13
The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.
I'm not sure I understand: take a generic symmetric matrix ##A## having a ##PA=LU## factorization. Starting from the upper triangular matrix ##U##, are we always able to transform it into a diagonal one using just elementary column operations?
Thanks for your help
 
  • #14
StoneTemplePython
What if the symmetric matrix ##A## does not admit an ##LDL^T## decomposition
Over the reals (or even the rational numbers) it always admits an ##LDL^T## factorization. The algorithmic reasoning is close to why real symmetric positive definite matrices admit a Cholesky decomposition. This seems a bit more advanced than vanilla Gaussian elimination.

The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.
Think Sylvester's Law of Inertia here, not diagonalization per se. OP is talking about congruence transforms, sort of.
 
  • #15
Over the reals (or even the rational numbers) it always admits an ##LDL^T## factorization. The algorithmic reasoning is close to why real symmetric positive definite matrices admit a Cholesky decomposition. This seems a bit more advanced than vanilla
Is the matrix ##L## involved in this factorization a lower triangular one, or just an orthogonal matrix as required by the spectral theorem for real symmetric matrices?
 
  • #16
StoneTemplePython
Is the matrix ##L## involved in this factorization a lower triangular one, or just an orthogonal matrix as required by the spectral theorem for real symmetric matrices?
##L## stands for Lower Triangular. I think you are confusing yourself with many different topics.

As a hint: you should pick up on how I said this factorization exists over the rationals... orthogonal matrices (outside simple ones like permutation matrices) generally don't exist over the rationals because you can't take square roots in general -- i.e. you can't normalize and get the 2-norm to be 1 for each vector in a set of orthogonal vectors.
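A minimal sketch of that point (not from the thread; the matrix is made up for illustration): running the standard ##LDL^T## recurrences with exact rational arithmetic never leaves the rationals, whereas the Cholesky factor of the same matrix already needs ##\sqrt 2##.

```python
# LDL^T of a rational symmetric positive definite matrix stays rational:
# no square roots are needed, unlike the Cholesky factor of the same matrix.
from fractions import Fraction as F

A = [[F(4), F(2)],
     [F(2), F(3)]]                       # made-up symmetric positive definite matrix

n = len(A)
L = [[F(int(i == j)) for j in range(n)] for i in range(n)]   # unit lower triangular
D = [F(0)] * n
for j in range(n):                       # standard LDL^T recurrences
    D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
    for i in range(j + 1, n):
        L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]

print(L)   # entries 1, 0, 1/2, 1 -- all rational
print(D)   # entries 4 and 2     -- all rational
# The Cholesky factor of this A is [[2, 0], [1, sqrt(2)]], which leaves the rationals.
```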
 
  • #17
##L## stands for Lower Triangular. I think you are confusing yourself with many different topics
Sorry for that....

Please, could you help me understand why such a factorization exists for a generic symmetric matrix (having no special properties aside from being symmetric)? Thanks
 
  • #18
StoneTemplePython
Sorry for that....

Please, could you help me understand why such a factorization exists for a generic symmetric matrix (having no special properties aside from being symmetric)? Thanks
The typical derivation is algorithmic -- i.e. follow the algorithm exactly, and presto, you get the result. Consider this

https://math.stackexchange.com/questions/2512046/give-an-algorithm-to-compute-the-ldlt-decomposition

using google

- - - - -
I generally advise starting with special cases and building up -- so you may be better off first getting comfortable with the special case of a real symmetric positive definite matrix and deriving Cholesky, which is ##LDL^T## in the special case ##D = I## (after rescaling so ##L## has only positive diagonal entries). I think this was a challenge problem last month...
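A quick numerical check of that relation (a sketch, not from the thread; the positive definite matrix is made up, the same one used in the rational sketch above): the Cholesky factor is the ##LDL^T## factor ##L## with its columns rescaled by the square roots of the entries of ##D##.

```python
# For a symmetric positive definite matrix, Cholesky = L * sqrt(D) from the LDL^T factorization.
import numpy as np
from scipy.linalg import ldl, cholesky

A = np.array([[4., 2.],
              [2., 3.]])                 # made-up symmetric positive definite matrix

lu, d, perm = ldl(A, lower=True)         # A = lu @ d @ lu.T (no pivoting needed for this matrix)
L_chol = lu @ np.sqrt(d)                 # rescale columns by the square roots of the diagonal of d
print(np.allclose(L_chol, cholesky(A, lower=True)))   # True
```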
 
  • #19
The typical derivation is algorithmic -- i.e. follow the algorithm exactly, and presto, you get the result.
Take the symmetric matrix ##\begin{bmatrix}0 & 1\\1 & 2\end{bmatrix}##. It seems to me it does not admit any ##LDL^T## factorization. What do you think about it?
 
  • #20
I told you to try some examples with the matrix calculator:
https://matrixcalc.org/en/#diagonalize({{0,1},{1,2}})
There is no way around practice. You should compute such examples on your own instead of asking others to do it for you; that won't help you at all. And even the matrix calculator is problematic if you use it not as a helping hand but as a solution machine.
 
  • #21
StoneTemplePython
Take the symmetric matrix ##\begin{bmatrix}0 & 1\\1 & 2\end{bmatrix}##. It seems to me it does not admit any ##LDL^T## factorization. What do you think about it?
I agree with Fresh.

It also may be that the result for general real symmetric (but not positive semidefinite) matrices only holds up to graph isomorphism -- i.e. permutation similarity. Equivalently,

when ##A = A^T##
##PAP^{-1} = PAP^T = LDL^T##

for some permutation matrix ##P##. That is how it is set up in scipy; I haven't looked at the general real symmetric case in a while... and graph isomorphism is just about an equality, as it preserves all the usual invariants and really just means 're-labelling' your problem.
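For what it's worth, a sketch (not from the thread) of how this looks with SciPy's symmetric-indefinite routine on the matrix from post #19: scipy.linalg.ldl returns a factor that is triangular only after a row permutation, and the (block-)diagonal factor may contain 2x2 blocks, which is exactly the "only up to a permutation ##P##" caveat.

```python
# scipy.linalg.ldl on the matrix from post #19: the zero leading entry forces
# either a row/column swap (perm != [0, 1]) or a 2x2 block in d.
import numpy as np
from scipy.linalg import ldl

A = np.array([[0., 1.],
              [1., 2.]])

lu, d, perm = ldl(A, lower=True)
print(perm)                              # the row permutation that makes lu lower triangular
print(d)                                 # (block-)diagonal factor
print(np.allclose(lu @ d @ lu.T, A))     # True: A = lu @ d @ lu.T still holds
```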
 
  • #22
It also may be that the result for general real symmetric (but not positive semidefinite) matrices only holds up to graph isomorphism -- i.e. permutation similarity. Equivalently,

when ##A = A^T##
##PAP^{-1} = PAP^T = LDL^T##

for some permutation matrix ##P##.
Can you help me understand why such a permutation matrix ##P## exists? Thanks
 
  • #23
StoneTemplePython
Can you help me understand why such a permutation matrix ##P## exists? Thanks
If you put in a serious amount of work and show us what you've done, then yes. If not, no I won't. Echoing Fresh here, your approach seems very passive and this isn't conducive to you learning anything.

I would, again, suggest you start with the simpler case of Cholesky.
 
  • #24
If you put in a serious amount of work and show us what you've done, then yes. If not, no I won't. Echoing Fresh here, your approach seems very passive and this isn't conducive to you learning anything.

I would, again, suggest you start with the simpler case of Cholesky.
When ##A## is symmetric positive definite (the Cholesky case) then ##P## is ##I## and ##D## in ##LDL^T## has all positive entries, thus we can write ##A=L^*L^{*T}##.

For the general case ##A=A^T## I worked out the following:

##A=QEQ^T## with ##Q## orthogonal and ##E## eigenvalues diagonal

##PAP^{-1}=PAP^T=P(QEQ^T)P^T=Q(PEP^T)Q^T##

Thus via the permutation matrix ##P## we are able to reorder the eigenvalues of ##A##, putting its ##k## zero eigenvalues (if any) in the last ##k## diagonal entries.

Now, if we could show that the first ##n-k## leading principal minors of ##PAP^T## satisfied ##\det(A_i)\neq 0##, then it would admit an ##LU## factorization (with ##k## zero rows at the end of ##U##), hence the conclusion ##PAP^T=LDL^T##.
 
  • #25
StoneTemplePython
When ##A## is symmetric positive definite (the Cholesky case) then ##P## is ##I## and ##D## in ##LDL^T## has all positive entries, thus we can write ##A=L^*L^{*T}##.

For the general case ##A=A^T## I worked out the following:

##A=QEQ^T## with ##Q## orthogonal and ##E## eigenvalues diagonal

##PAP^{-1}=PAP^T=P(QEQ^T)P^T=Q(PEP^T)Q^T##....
It is quite rare that ##Q## and ##P## commute.
 
