Understanding the Degree of Singularity in Matrix Operations

Summary:
The discussion centers on understanding the concept of "singularity" in matrix operations, particularly in relation to singular value decomposition (SVD). A matrix is considered singular if it is not invertible, which occurs when zero is an eigenvalue of A^TA, indicating that not all singular values are non-zero. The "degree of singularity" is explored through the condition number, defined as the ratio of the largest to the smallest singular values, which measures the sensitivity of solutions to errors in data. The conversation also touches on the idea of "almost singular" matrices, which may appear singular due to computational limitations, as calculators cannot accurately determine very small determinants. Overall, the dialogue emphasizes the need for a deeper mathematical understanding to fully grasp these concepts.
gnome
I understand that the "singular values" used to form the diagonal matrix D in a singular value decomposition of a matrix A are the square roots of the eigenvalues of A^TA.

But I'm having trouble getting a grip on the concept of "singularity" in the context of a matrix. Previously I've only come across singularity as a point where a function is discontinuous and therefore not differentiable. I don't see how that definition applies here. I read in at least one book that "a square matrix A is nonsingular if and only if all its singular values are different from zero." So it sounds like that means that 0 must NOT be an eigenvalue of A^TA. (On second thought, this author is probably not saying that at all; rather just using "A" to represent several different matrices in the course of his discussion.)

But in another book, I got the distinct impression that an SVD for a matrix A can be formed just by using the nonzero singular values of A; i.e. if 0 is an eigenvalue of A^TA, just don't use 0; use the nonzero eigenvalues to form a smaller matrix D.
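For concreteness, here's a small NumPy sketch of that "smaller D" idea as I understand it (my own illustration, not from either book; the example matrix and the 1e-12 tolerance are just things I picked to test with):

```python
import numpy as np

# a rank-1 matrix: one nonzero singular value, the other is (numerically) zero
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])    # 3 x 2, rank 1

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))                   # numerical rank: count the "nonzero" sigmas
print(s)                                     # one large value, one essentially 0

# reduced SVD: keep only the r nonzero singular values and the matching
# columns of U and rows of V^T
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
print(np.allclose(A, A_r))                   # True: the zero sigmas carry no information
```

So dropping the zero singular values apparently loses nothing, if I'm reading that second book correctly.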

I also read that the condition number \frac{\sigma_1}{\sigma_n} is a measure of the degree of singularity of A.

But I have no feel at all for what is meant by "degree of singularity" of A, and what it means qualitatively for a matrix to be singular or nonsingular.

Can anybody give a qualitative explanation of these terms?
 
a singular matrix is one that is not invertible, i.e. the map X \to X^{-1} isn't defined there, just like the map x \to 1/x has a singularity at 0 where it isn't defined. (n.b. singularities and differentiability aren't really linked - a function isn't *defined* at a singularity, so it makes no sense to talk about it being differentiable there. note also that some singularities are removable, that is, there is a continuation to a function that is defined at that point too and is differentiable there, such as (x^2-1)/(x-1), which has a singularity at 1 but can be extended so it doesn't - it is, after all, equivalent to x+1.)
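a quick numerical sketch of that, using NumPy (just my illustration; the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # second row is twice the first, so A is singular

print(np.linalg.det(A))       # 0.0 (up to rounding)
try:
    np.linalg.inv(A)          # the map X -> X^{-1} is undefined here
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```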
 
Thanks Matt.

But what about 'degree of singularity'? Can a matrix be 'almost invertible'?
 
hmm, those terms mean nothing to me to be honest. i could guess something like "dimension of the (generalized) eigenspace of eigenvalue 0, or equivalently the number of zeroes on the diagonal". i don't know of any use of "almost invertible". it might help if you explained what sigma_1 and sigma_n are in your last post.

note, the singular values are the square roots of the numbers t for which A^TA - tI is not invertible, i.e. of the eigenvalues of A^TA.
 
Given an m \times n matrix A with rank r, the singular value decomposition (SVD) of A is a factorization
A = U\Sigma V^T
where \Sigma is an m \times n matrix of the form
\Sigma = \left [ \begin{array}{cc} D &0 \\ 0 &0 \end{array} \right ]
except that the zero blocks pad \Sigma out to size m \times n: the second block row stands for m - r rows of zeros and the second block column for n - r columns of zeros.
D is an r \times r diagonal matrix whose diagonal entries are the square roots of the non-zero eigenvalues of A^TA, arranged in order \sigma_1 \geq \sigma_2 \geq ... \geq \sigma_r > 0
I'm not 100% clear about U and V. One book says that the columns of U are eigenvectors of the m x m matrix AA^T and the columns of V are eigenvectors of the n x n matrix A^TA, arranged in the same order as the singular values in the columns of \Sigma. But that doesn't seem to be the whole story, since it seems to leave some columns unaccounted for, so I'm obviously missing something there.

So, finally, \frac{\sigma_1}{\sigma_n} is the ratio of the biggest to the smallest of the singular values.

I don't know why both books refer to the denominator as \sigma_n rather than \sigma_r; I believe r ≤ the lesser of m and n.
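Here's a quick NumPy check of that factorization as I've written it (my own sketch; the random 5 x 3 matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))   # m = 5, n = 3

U, s, Vt = np.linalg.svd(A)       # full SVD: U is 5 x 5, Vt is 3 x 3
# s holds sigma_1 >= sigma_2 >= ... >= sigma_n >= 0

# rebuild the m x n Sigma with s on the diagonal and zeros elsewhere
Sigma = np.zeros_like(A)
np.fill_diagonal(Sigma, s)

print(np.allclose(A, U @ Sigma @ Vt))   # True
print(s[0] / s[-1])                     # the ratio sigma_1 / sigma_n
print(np.linalg.cond(A))                # NumPy's condition number is this same ratio
```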
 
since the singular matrices form a nowhere dense closed subset of all matrices, and hence the non-singular ones form a dense open subset, it seems all matrices are almost non-singular, and it makes more sense to say a matrix is almost singular, i.e. has very small determinant, or lies very close to the closed subset of singular matrices.
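a numerical sketch of that density (my own illustration in NumPy; the singular matrix and the nudges are arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular: det(A) = 0
for e in [1e-1, 1e-6, 1e-12]:
    B = A + e * np.eye(2)           # an arbitrarily small nudge off the singular set
    print(e, np.linalg.det(B))      # det(B) = e^2 + 5e, nonzero for every e > 0
```

i.e. nudge a singular matrix by any nonzero amount in a generic direction and it becomes nonsingular.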
 
mathwonk,

I get the impression that I must take courses in analysis, set theory, theory of numbers, abstract algebra and topology in order to fully comprehend the replies that you post for me. Not that they wouldn't all be interesting, of course. Perhaps some day ...

Keep 'em coming. :wink:

PS: Have you had any success in getting the latex graphics to display in your browser?
 
Perhaps I can give a qualitative explanation of what mathwonk is saying. Think of n x n matrices as nothing more than n^2-dimensional real space. The criterion for being singular is that det(X) = 0. Det is just a polynomial in the matrix entries, i.e. the coordinates in that n^2-dimensional space. And just like the locus of points y - x^2 = 0 in R^2, such a set defined by the vanishing of one polynomial has smaller "dimension" than the space it sits in (in algebraic terms, the vanishing of a set of polynomials defines something called a variety).

So just like every point in the plane either lies in the complement of the curve y - x^2 = 0 or is arbitrarily close to a point that does (if (u,v) is on the curve then (u+e,v) isn't, for any e \neq 0 other than e = -2u), every matrix either is nonsingular or is arbitrarily close to a nonsingular one.

this SVD thing seems to be a way of picking the bases nicely in both the image and preimage spaces so that the map A has a nicer representation.
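concretely (my own NumPy sketch, with an arbitrary random matrix): in the bases given by the columns of V and U, the map is just scaling by the sigmas, A v_i = \sigma_i u_i.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)

# with the right singular vectors v_i (rows of Vt) as the basis of the source
# and the left singular vectors u_i (columns of U) as the basis of the target,
# A acts by plain scaling: A v_i = sigma_i u_i
for i in range(3):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]))   # True for each i
```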
 
Thanks, that does help clear it up for me.
 
the concept of almost singular comes up in using computers to determine singularity as well. i.e. we think a matrix is singular if the determinant is zero, but a calculator cannot actually tell whether a given number is zero or not, just whether it is smaller than the accuracy of the calculator.

so if your calculator has only 7 decimal places, then any matrix whose determinant is smaller than 1/10,000,000 or so will look singular. so using a calculator, all you can say about any matrix is that it is singular up to the accuracy of the calculator, i.e. almost singular.
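a sketch of that effect in NumPy (my own illustration; the 1e-9 entry is arbitrary):

```python
import numpy as np

eps = 1e-9
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + eps]])   # det(A) = eps: tiny but not zero

print(np.linalg.det(A))    # ~1e-9, below many tolerances, so A "looks" singular
print(np.linalg.cond(A))   # ~4e9: the huge condition number flags the problem
```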

geometrically that means the matrix lies close to the closed subvariety of matrix space defined by the determinant polynomial, which vanishes on the singular matrices.
 
I can't disagree with that. :biggrin:

All kidding aside, that's exactly the context in which I first came across the "condition number" \left (\frac{\sigma_1}{\sigma_n} \right )

Matlab's manual describes it as a measure of the "sensitivity of the solution of a system of linear equations to errors in the data." Now I have a better feel for why that is so.
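To convince myself, I tried a little NumPy sketch (my own; the matrix and the perturbation are made up):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-9]])   # condition number ~ 4e9
b = np.array([2.0, 2.0])

x = np.linalg.solve(A, b)           # exact answer: [2, 0]
b_bad = b + np.array([0.0, 1e-7])   # a tiny error in the data
x_bad = np.linalg.solve(A, b_bad)   # exact answer: [-98, 100]

print(x, x_bad)   # wildly different solutions from nearly identical data
```

A change of 1e-7 in the data moves the solution from about [2, 0] to about [-98, 100], which seems to be exactly the sensitivity the manual is talking about.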

Thanks again.
 
well, that may be something related but different. some matrices have entries that, if approximated slightly badly, yield "approximate" solutions that are nowhere near the right ones.

of course, since you sometimes solve by dividing by the determinant, that could occur when the determinant is very small.

note that, assuming those symbols are eigenvalues, the number you are looking at is a ratio of eigenvalues, not their product.
 
