Gaussian elimination for homogeneous linear systems

In summary: if Gaussian elimination is carried out to the very end, a singular homogeneous system always produces rows of zeros; for a symmetric matrix, allowing both row and column operations (a congruence transformation) leads to a diagonal matrix, which amounts to an ##LDL^T## factorization, in general only after a suitable permutation.
  • #1
cianfa72
TL;DR Summary
Remaining equations after Gaussian elimination of a first set of unknowns from a homogeneous linear system
Hi,

I'd like a clarification about the following: consider for instance a 10 x 12 homogeneous linear system and perform Gaussian elimination on the first 8 unknowns. Suppose you end up with 5 equations in the remaining 12 - 8 = 4 unknowns (because, while eliminating the first 8 unknowns, you reach a row echelon form with a non-pivot column, i.e. a free unknown).

The remaining 5 equations are certainly linearly dependent (5 homogeneous equations in 4 unknowns), but my doubt is: does one of the 5 equations have to be identically null (a zero row)?

thanks
 
  • #2
cianfa72 said:
Summary: Remaining equations after Gaussian elimination of a first set of unknowns from a homogeneous linear system

does one of the 5 equations have to be identically null (a zero row)?
If you apply Gauß elimination until the very end, then yes. If you stop earlier as there is no reason to continue, then no.
 
  • #3
fresh_42 said:
If you apply Gauss elimination until the very end, then yes. If you stop earlier as there is no reason to continue, then no.
Sorry for the confusion... in the given example, eliminating the first 8 unknowns you actually get 3 remaining equations: 10 - (8 - 1) = 3 (because of the non-pivot column in the row echelon form attained). Furthermore, as you said, in the general case you get 3 equations in 4 (= 12 - 8) unknowns, but...

take this time a homogeneous square linear system... what if the square matrix ##A## associated with it is symmetric? (##A = A^T##)
 
  • #4
cianfa72 said:
what if the square matrix ##A## associated with it is symmetric? (##A = A^T##)
In what way does this matter? If we have a singular matrix we will always end up with rows of all zeros and an identity submatrix if divisions are allowed, or a diagonal matrix otherwise.
 
  • #5
fresh_42 said:
If we have a singular matrix we will always end up with rows of all zeros and an identity submatrix if divisions are allowed, or a diagonal matrix otherwise.
Just to be clear: the symmetric matrix ##A## I was talking about is the original homogeneous system's matrix... Gaussian elementary row operations should not preserve symmetry, I believe.
 
  • #6
cianfa72 said:
Just to be clear: the symmetric matrix ##A## I was talking about is the original homogeneous system's matrix... Gaussian elementary row operations should not preserve symmetry, I believe.
Depends on whether you allow row and column operations or only one kind. If both are allowed and divisions are allowed, then we end up with ##\begin{bmatrix}I&0\\0&0\end{bmatrix}##, otherwise not.
$$
\begin{bmatrix}1&2\\2&4\end{bmatrix} \longrightarrow \begin{bmatrix}1&2\\0&0\end{bmatrix}\longrightarrow \begin{bmatrix}1&0\\0&0\end{bmatrix}
$$
 
  • #7
fresh_42 said:
Depends on whether you allow row and column operations or only one kind. If both are allowed and divisions are allowed, then we end up with ##\begin{bmatrix}I&0\\0&0\end{bmatrix}##, otherwise not.
$$
\begin{bmatrix}1&2\\2&4\end{bmatrix} \longrightarrow \begin{bmatrix}1&2\\0&0\end{bmatrix}\longrightarrow \begin{bmatrix}1&0\\0&0\end{bmatrix}
$$
As far as I understand, this type of "reduction" (using both row and column operations in order to end up with the matrix ##R##) can be applied to any square matrix ##A##, not just to symmetric ones, right?

By the way, the solution set of ##Rx=0## is different from that of the original ##Ax=0##, I believe...
 
  • #8
cianfa72 said:
As far as I understand, this type of "reduction" (using both row and column operations in order to end up with the matrix ##R##) can be applied to any square matrix ##A##, not just to symmetric ones, right?

By the way, the solution set of ##Rx=0## is different from that of the original ##Ax=0##, I believe...
Yes, to both. Row operations are operations among the equations. Column operations are a change of variables, so one needs to keep track of them. E.g. adding the second column to the first amounts to the change of variable ##x \longmapsto x'=x+y##. To get a solution in the old coordinates, these steps have to be reversed at the end. It's basically only helpful if you want to know the dimensions, the rank and the defect (the dimension of the nullspace), or the eigenvalues if you don't use divisions.

Using only row operations leads to something like ##\begin{bmatrix}a & b & \ldots \\ 0 & c & \ldots \\ 0&0& \ldots \\ \ldots&\ldots&\ldots\\\ldots&\ldots&\ldots\\0&0&0\\0&0&0 \end{bmatrix}##, an upper triangular matrix with possibly some rows of zeros at the end. In any case, symmetry doesn't play a role in this process, but you can still read out all information you would get if you allowed column operations; so column operations are not really needed - only to answer your question as you didn't mention whether they are allowed or not.
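To illustrate the change-of-variables point above, here is a minimal NumPy sketch (the names are purely illustrative) using the matrix ##\begin{bmatrix}1&2\\2&4\end{bmatrix}## from post #6: a column operation replaces ##A## by ##R = AC##, and a solution ##y## of ##Ry=0## maps back to a solution ##x = Cy## of ##Ax=0##.

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])
C = np.array([[1., -2.],      # elementary column operation:
              [0., 1.]])      # add (-2) * column 1 to column 2
R = A @ C                     # [[1, 0], [2, 0]]: second column eliminated

y = np.array([0., 1.])        # R y = 0, read off from the zero column
x = C @ y                     # undo the change of variables
print(R @ y, A @ x)           # both are the zero vector
```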
 
  • #9
fresh_42 said:
Using only row operations leads to something like ##\begin{bmatrix}a & b & \ldots \\ 0 & c & \ldots \\ 0&0& \ldots \\ \ldots&\ldots&\ldots\\\ldots&\ldots&\ldots\\0&0&0\\0&0&0 \end{bmatrix}##, an upper triangular matrix with possibly some rows of zeros at the end. In any case, symmetry doesn't play a role
Just to check I got it correctly: take just the upper triangular matrix you are talking about (forgetting for a moment the possible rows of zeros following it): it will actually be in "row echelon form", and since every square matrix in row echelon form is also upper triangular, we can conclude that it is indeed upper triangular.
 
  • #10
cianfa72 said:
Just to check I got it correctly: take just the upper triangular matrix you are talking about (forgetting for a moment the possible rows of zeros following it): it will actually be in "row echelon form", and since every square matrix in row echelon form is also upper triangular, we can conclude that it is indeed upper triangular.
It has been too long for me to be firm on the names and the different stages of the procedure. Gauß elimination is mainly done to get this upper triangular matrix; it tells you rank, defect, and eigenvalues. And with backward substitution you also get all the variables. Of course you can proceed further and try to make a diagonal matrix out of it, but this isn't always necessary. Here is where your symmetry comes into play: if the matrix is symmetric, then you can achieve such a diagonal matrix at the end of the process.

If you want, there is a matrix calculator on this list:
https://www.physicsforums.com/threa...h-physics-earth-and-other-curiosities.970262/ which allows you to play around with a few examples.
 
  • #11
fresh_42 said:
Here is where your symmetry comes into play: if the matrix is symmetric, then you can achieve such a diagonal matrix at the end of the process.
If the symmetric matrix ##A## admits an ##LDL^T## decomposition then your point is clear: starting from the upper triangular matrix (attained by row-based Gaussian elimination) you can reach the diagonal matrix ##D## via elementary column operations, i.e. right multiplication by ##(L^T)^{-1}##. What if the symmetric matrix ##A## does not admit an ##LDL^T## decomposition (or, equivalently, an ##LU## decomposition that does not involve any permutation matrix ##P##)?
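To make the first part concrete, here is a minimal NumPy sketch (the matrix and its factors are just an illustrative example): row elimination on ##A = LDL^T## leaves ##U = DL^T##, and right multiplication by ##(L^T)^{-1}## then recovers ##D##.

```python
import numpy as np

A = np.array([[4., 2.],
              [2., 3.]])       # symmetric positive definite example
L = np.array([[1., 0.],
              [0.5, 1.]])      # unit lower triangular factor of A = L D L^T
D = np.diag([4., 2.])

U = D @ L.T                     # upper triangular factor left by row elimination (A = L U)
print(np.allclose(L @ U, A))    # True
print(U @ np.linalg.inv(L.T))   # column operations (right multiplication) recover D
```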
 
  • #12
The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.
 
  • #13
fresh_42 said:
The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.
Not sure I understand: take a generic symmetric matrix ##A## having a ##PA=LU## factorization. Starting from the upper triangular matrix ##U##, are we always able to transform it into a diagonal one using just elementary column operations?
Thanks for your help
 
  • #14
cianfa72 said:
What if the symmetric matrix ##A## does not admit an ##LDL^T## decomposition

Over the reals (or even the rational numbers) it always admits an ##LDL^T## factorization. The algorithmic reasoning is close to why real symmetric positive definite matrices admit a Cholesky decomposition. This seems a bit more advanced than vanilla Gaussian elimination.

fresh_42 said:
The transformation matrices to get a diagonal matrix from a real, symmetric matrix are orthogonal matrices, i.e. ##T^{-1}=T^\tau##. I don't know what happens if you exclude permutation matrices.

Think Sylvester's Law of Inertia here, not diagonalization per se. OP is talking about congruence transforms, sort of.
 
  • #15
StoneTemplePython said:
Over the reals (or even the rational numbers) it always admits an ##LDL^T## factorization. The algorithmic reasoning is close to why real symmetric positive definite matrices admit a Cholesky decomposition. This seems a bit more advanced than vanilla
Is the matrix ##L## involved in this factorization a lower triangular one, or just an orthogonal matrix as required by the spectral theorem for real symmetric matrices?
 
  • #16
cianfa72 said:
Is the matrix ##L## involved in this factorization a lower triangular one, or just an orthogonal matrix as required by the spectral theorem for real symmetric matrices?
##L## stands for Lower Triangular. I think you are confusing yourself with many different topics.

As a hint: you should pick up on how I said this factorization exists over the rationals... orthogonal matrices (outside simple ones like permutation matrices) generally don't exist over the rationals because you can't take square roots in general -- i.e. you can't normalize and get the 2-norm to be 1 for each vector in a set of orthogonal vectors.
 
  • #17
StoneTemplePython said:
##L## stands for Lower Triangular. I think you are confusing yourself with many different topics
Sorry for that...

Please, could you help me understand why such a factorization exists for a generic symmetric matrix (having no special properties aside from being symmetric)? Thanks.
 
  • #18
cianfa72 said:
Sorry for that...

Please, could you help me understand why such a factorization exists for a generic symmetric matrix (having no special properties aside from being symmetric)? Thanks.
The typical derivation is algorithmic -- i.e. follow the algorithm exactly, and presto, you get the result. Consider this

https://math.stackexchange.com/questions/2512046/give-an-algorithm-to-compute-the-ldlt-decomposition

using google

- - - - -
I generally advise starting with special cases and building up -- so you may be better off first getting comfortable with the special case of a real symmetric positive definite matrix and deriving Cholesky, which is ##LDL^T## in the special case of ##D = I## (and rescaling so that ##L## has only positive diagonal entries). I think this was a challenge problem last month...
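Following that algorithmic route, here is a minimal unpivoted ##LDL^T## sketch in Python/NumPy (the function name is illustrative); note that it breaks down when a pivot ##d_j## is zero, which is exactly the situation discussed below.

```python
import numpy as np

def ldl_no_pivot(A):
    """Unpivoted LDL^T of a symmetric matrix -- fails if a pivot d[j] is zero."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, np.diag(d)

L, D = ldl_no_pivot([[4., 2.],
                     [2., 3.]])
print(L)                                               # unit lower triangular
print(D)                                               # diag(4, 2)
print(np.allclose(L @ D @ L.T, [[4., 2.], [2., 3.]]))  # True
```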
 
  • #19
StoneTemplePython said:
The typical derivation is algorithmic -- i.e. follow the algorithm exactly, and presto, you get the result.
Take the symmetric matrix ##\begin{bmatrix}0 & 1\\1 & 2\end{bmatrix}##. It seems to me it does not admit any ##LDL^T## factorization. What do you think?
 
  • #20
I told you to try some examples with the matrix calculator:
https://matrixcalc.org/en/#diagonalize({{0,1},{1,2}})
There is no way around practice. You should compute such examples on your own instead of asking others to do it for you. That won't help you at all, and even the matrix calculator is problematic if you don't use it as a helping hand but as a solution machine instead.
 
  • #21
cianfa72 said:
Take the symmetric matrix ##\begin{bmatrix}0 & 1\\1 & 2\end{bmatrix}##. It seems to me it does not admit any ##LDL^T## factorization. What do you think?

I agree with Fresh.

It may also be that the result for general real symmetric (but not positive semidefinite) matrices only holds up to graph isomorphism -- i.e. permutation similarity. Equivalently,

when ##A = A^T##
##PAP^{-1} = PAP^T = LDL^T##

for some permutation matrix ##P##. That is how it is set up in scipy; I haven't looked at the general real symmetric case in a while... and graph isomorphism is practically an equality, as it preserves all the usual invariants and really just means 're-labelling' your problem.
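As a quick numerical check of that, assuming SciPy's scipy.linalg.ldl is available (it computes a pivoted ##LDL^T## in which ##D## may contain 2x2 blocks), applied to the matrix from post #19:

```python
import numpy as np
from scipy.linalg import ldl

A = np.array([[0., 1.],
              [1., 2.]])

lu, d, perm = ldl(A, lower=True)       # pivoted LDL^T factorization
print(lu)                              # lu[perm, :] is lower triangular
print(d)                               # (block-)diagonal factor
print(perm)                            # row permutation used by the pivoting
print(np.allclose(lu @ d @ lu.T, A))   # True: A is reconstructed
```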
 
  • #22
StoneTemplePython said:
It may also be that the result for general real symmetric (but not positive semidefinite) matrices only holds up to graph isomorphism -- i.e. permutation similarity. Equivalently,

when ##A = A^T##
##PAP^{-1} = PAP^T = LDL^T##

for some permutation matrix ##P##.
Can you help me understand why such a permutation matrix ##P## exists? Thanks.
 
  • #23
cianfa72 said:
Can you help me understand why such a permutation matrix ##P## exists? Thanks.
If you put in a serious amount of work and show us what you've done, then yes. If not, no, I won't. Echoing Fresh here, your approach seems very passive and this isn't conducive to you learning anything.

I would, again, suggest you start with the simpler case of Cholesky.
 
  • #24
StoneTemplePython said:
If you put in a serious amount of work and show us what you've done, then yes. If not, no, I won't. Echoing Fresh here, your approach seems very passive and this isn't conducive to you learning anything.

I would, again, suggest you start with the simpler case of Cholesky.
When ##A## is symmetric positive definite (Cholesky), ##P## is ##I## and ##D## in ##LDL^T## has all positive entries, thus we can write ##A=L^*L^{*T}##.

For the general case ##A=A^T## I worked out the following:

##A=QEQ^T## with ##Q## orthogonal and ##E## eigenvalues diagonal

##PAP^{-1}=PAP^T=P(QEQ^T)P^T=Q(PEP^T)Q^T##

Thus via a permutation matrix ##P## we are able to reorder the eigenvalues of ##A##, putting its ##k## possibly zero eigenvalues in the last ##k## diagonal entries.

Now if we could show that the first ##n-k## leading minors of ##PAP^T## had ##\det(A_i)\neq 0##, then it would admit an ##LU## factorization (with ##k## zero rows at the end of ##U##), hence the conclusion ##PAP^T=LDL^T##.
 
  • #25
cianfa72 said:
When ##A## is symmetric positive definite (Cholesky), ##P## is ##I## and ##D## in ##LDL^T## has all positive entries, thus we can write ##A=L^*L^{*T}##.

For the general case ##A=A^T## I worked out the following:

##A=QEQ^T## with ##Q## orthogonal and ##E## eigenvalues diagonal

##PAP^{-1}=PAP^T=P(QEQ^T)P^T=Q(PEP^T)Q^T##...
It is quite rare that ##Q## and ##P## commute.
 
  • #26
StoneTemplePython said:
It is quite rare that ##Q## and ##P## commute.
Sorry, you are right, it was my fault.

Trying again: the permutation matrix ##P## needed to reorder the eigenvalues of ##A## has the special property ##P=P^T=P^{-1}##.

##E=PE_1P##, where ##E_1## is the diagonal eigenvalue matrix ##E## of ##A## reordered so that its ##k## possibly zero eigenvalues are the last ##k## entries.

It turns out the following:
##A=QEQ^T##
##A=QPE_1PQ^T##
##PAP^T=PQPE_1PQ^TP^T=PQPE_1P^TQ^TP^T=(PQP)E_1(PQP)^T##
 
  • #27
cianfa72 said:
Sorry, you are right, it was my fault.

Trying again: the permutation matrix ##P## needed to reorder the eigenvalues of ##A## has the special property ##P=P^T=P^{-1}##.

##E=PE_1P##, where ##E_1## is the diagonal eigenvalue matrix ##E## of ##A## reordered so that its ##k## possibly zero eigenvalues are the last ##k## entries.

It turns out the following:
##A=QEQ^T##
##A=QPE_1PQ^T##
##PAP^T=PQPE_1PQ^TP^T=PQPE_1P^TQ^TP^T=(PQP)E_1(PQP)^T##
Maybe that result really does not help much.
I found the following https://mathoverflow.net/questions/155147/cholesky-decomposition-of-a-positive-semi-definite
Nevertheless it seems to apply only to symmetric positive semi-definite (p.s.d.) matrices, right?
 

1. What is Gaussian elimination for homogeneous linear systems?

Gaussian elimination is a method used to solve systems of linear equations. In the case of homogeneous systems, all of the constants on the right side of the equations are equal to zero. The goal of Gaussian elimination is to transform the system into an upper triangular form, making it easier to solve for the variables.

2. How does Gaussian elimination work?

Gaussian elimination works by using elementary row operations to transform the system of equations into an upper triangular form. These operations include multiplying a row by a non-zero constant, swapping two rows, and adding a multiple of one row to another. Once the system is in upper triangular form, the variables can be solved for by back substitution.
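As a minimal sketch (assuming NumPy; the function name is illustrative), the forward-elimination step described above, with partial pivoting, might look like this:

```python
import numpy as np

def forward_eliminate(A):
    """Reduce A to row echelon form using elementary row operations."""
    A = np.array(A, dtype=float)
    rows, cols = A.shape
    r = 0                                        # current pivot row
    for c in range(cols):
        pivot = r + np.argmax(np.abs(A[r:, c]))  # partial pivoting
        if np.isclose(A[pivot, c], 0.0):
            continue                             # no pivot here: free variable
        A[[r, pivot]] = A[[pivot, r]]            # swap rows
        A[r + 1:] -= np.outer(A[r + 1:, c] / A[r, c], A[r])  # eliminate below
        r += 1
        if r == rows:
            break
    return A

U = forward_eliminate([[1., 2., 1.],
                       [2., 4., 0.],
                       [3., 6., 1.]])
print(U)   # row echelon form; the number of nonzero rows is the rank
```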

3. What are the benefits of using Gaussian elimination for homogeneous linear systems?

Gaussian elimination is a systematic and efficient method for solving systems of linear equations. It reduces the number of calculations needed and avoids the potential for errors that can occur when solving equations by hand. It is also useful for solving larger systems of equations that would be difficult to solve using other methods.

4. Are there any limitations to using Gaussian elimination for homogeneous linear systems?

Gaussian elimination can only be used for systems of linear equations. It cannot be used for non-linear systems or systems with variables in the denominators. Additionally, if the system is ill-conditioned, meaning that small changes in the coefficients cause large changes in the solution, Gaussian elimination in finite-precision arithmetic may produce inaccurate results; partial pivoting mitigates this.

5. Can Gaussian elimination be used for non-homogeneous linear systems?

Yes. For a non-homogeneous system ##Ax=b##, the same row operations are applied to the augmented matrix ##[A\,|\,b]##, and back substitution then yields the solutions (if any exist). Alternatively, one can solve the associated homogeneous system ##Ax=0## and add any particular solution of ##Ax=b##.
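For instance, a minimal illustration with NumPy (values chosen arbitrarily):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

augmented = np.hstack([A, b.reshape(-1, 1)])   # [A | b]: row-reduce this matrix
x = np.linalg.solve(A, b)                      # same result as full elimination
print(augmented)
print(x)                                       # [0.8, 1.4]
```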
