SVD of a reduced-rank matrix still has non-zero U and V'?

Adel Makram
Given a matrix A, a singular value decomposition (SVD) yields A = USV'. Now let's reduce the rank of the matrix by keeping only one column vector from U, one singular value from S, and one row vector from V'. Then do another SVD of the resulting rank-reduced matrix Ar.

Now, if Ar is the result of multiplying Ur, Sr, and V'r, why does the result, shown in the right picture in the attached doc, still have non-vanishing columns of Ur and non-vanishing rows of V'r? In other words, where do Ur1, Ur2, V'r1, and V'r2 come from, given that the other values of S, namely S2 and S3, are zero?
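To make the setup concrete, here is a minimal sketch with NumPy (the matrix A below is a made-up stand-in, since the actual matrix is only in the attachment):

```python
import numpy as np

# Hypothetical stand-in for the matrix in the attachment.
A = np.array([[4.0, 0.0, 2.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])
U, s, Vt = np.linalg.svd(A)

# Keep only the first column of U, the first singular value,
# and the first row of V': a rank-1 reduction of A.
A_r = s[0] * np.outer(U[:, 0], Vt[0, :])

# SVD of the rank-reduced matrix: Ur and V'r still come back as
# full orthogonal matrices, even though only one singular value
# of A_r is non-zero.
U_r, s_r, Vt_r = np.linalg.svd(A_r)
print(np.round(s_r, 6))  # one non-zero singular value, the rest ~0
```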
 

Stephen Tashi
Adel Makram said:
the singular value decomposition (SVD),
It's better to say "a singular value decomposition" since singular value decompositions are not unique.

On the right-hand side of your page, you could set columns 2 and 3 of U equal to zeroes and rows 2 and 3 of V' equal to zeroes and you'd still have a singular value decomposition.
Adel Makram said:
one singular value from S

It isn't clear what you mean by "keeping" only one of the singular values.

One way to visualize the singular value decomposition M = USV' is to say that the entries of M are a table of data of some sort, and the singular value decomposition of M expresses it as a linear combination of "simple" data tables where the singular values are the coefficients in the linear combination. The simple data tables are imagined to have a list of "row headings" on their left margin and a list of "column headings" across the top, and each entry in a simple table is the product of the corresponding row heading and column heading for that entry. The jth simple table has the jth column of U as its row headings and the jth row of V' as its column headings.
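As a minimal sketch of this "sum of simple tables" view (assuming NumPy; the matrix M is just an arbitrary example):

```python
import numpy as np

# An arbitrary 3x3 example matrix (not the one from the attachment).
M = np.array([[4.0, 0.0, 2.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])
U, s, Vt = np.linalg.svd(M)

# Rebuild M as a linear combination of "simple tables": each term is
# a singular value times the outer product of a column of U (the row
# headings) and the matching row of V' (the column headings).
M_rebuilt = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(len(s)))
print(np.allclose(M, M_rebuilt))  # True
```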

According to that way of looking at things, the way to "keep only one" singular value would be to keep only the corresponding term in the linear combination. This amounts to setting the other singular values equal to zero.

A linear combination where zeroes are allowed lets us write vector equations like (1,2,-4) = (1)(1,2,-4) = (1)(1,2,-4) + (0)(5,6,7) + (0)(9,3,14), where arbitrary vectors can appear as long as their coefficients are zero. A similar statement applies to linear combinations of simple data tables. This implies that some columns of U and some rows of V' can be chosen arbitrarily.
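A small sketch of this point, again with an arbitrary example matrix in NumPy: once the other singular values are set to zero, the corresponding columns of U and rows of V' can be overwritten with random vectors without changing the product:

```python
import numpy as np

# Same arbitrary example matrix as in the sketch above.
M = np.array([[4.0, 0.0, 2.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 5.0]])
U, s, Vt = np.linalg.svd(M)

# Set the other singular values to zero: a rank-1 truncation.
s_r = np.array([s[0], 0.0, 0.0])
A_r = U @ np.diag(s_r) @ Vt

# Columns 2 and 3 of U and rows 2 and 3 of V' now multiply by zero,
# so they can be replaced by arbitrary vectors without changing A_r.
U_mod, Vt_mod = U.copy(), Vt.copy()
U_mod[:, 1:] = np.random.randn(3, 2)
Vt_mod[1:, :] = np.random.randn(2, 3)
print(np.allclose(A_r, U_mod @ np.diag(s_r) @ Vt_mod))  # True
```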
 
Adel Makram
Stephen Tashi said:
A similar statement applies to linear combinations of simple data tables. This implies that some columns of U and some rows of V' can be chosen arbitrarily.
So I have two concerns here:
1) Does that mean we can construct an infinite number of matrices U and V' by arbitrarily choosing orthonormal column vectors of U and row vectors of V'?
2) Back to my question: U and V' are constructed from AA' = U(SS')U' and A'A = V(S'S)V'. But if this is applied to the reduced form of A, where we have only one non-zero singular value, then where do the other columns of U and rows of V', apart from the first ones, come from, given that Ar has only rank 1, and so do ArAr' and Ar'Ar?
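As a hedged numerical sketch of where those extra vectors can come from (NumPy, with made-up unit vectors u1 and v1): for a rank-1 matrix, Ar'Ar has a single non-zero eigenvalue, and the eigenvectors belonging to the zero eigenvalue are simply some orthonormal basis of the null space, which is not determined by Ar:

```python
import numpy as np

# Build a rank-1 matrix A_r = s1 * u1 v1' from made-up unit vectors.
u1 = np.array([1.0, 2.0, 2.0]) / 3.0   # unit vector
v1 = np.array([2.0, -1.0, 2.0]) / 3.0  # unit vector
A_r = 5.0 * np.outer(u1, v1)

# A_r'A_r = 25 * v1 v1' has rank 1: one eigenvalue 25, the rest zero.
eigvals, V = np.linalg.eigh(A_r.T @ A_r)
print(np.round(eigvals, 6))  # [0, 0, 25]

# eigh nevertheless returns a full orthonormal V. The columns paired
# with the zero eigenvalues span the null space of A_r'A_r, and any
# orthonormal basis of that null space would serve equally well --
# those vectors are not determined by A_r.
```

In other words, the rank-1 matrix only pins down u1, v1, and the single non-zero singular value; the remaining columns of U and rows of V' are a free choice of orthonormal completion, consistent with the point above that they can be chosen arbitrarily.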
 