SVD of a reduced rank matrix still has non-zero U and V'?

In summary, a singular value decomposition of a matrix A yields A = USV', where the singular values are the coefficients in a linear combination of simple data tables.
  • #1
Adel Makram
In a given matrix A, the singular value decomposition (SVD) yields A = USV'. Now let's reduce the rank of the matrix by keeping only one column vector from U, one singular value from S, and one row vector from V'. Then do another SVD of the resulting rank-reduced matrix Ar.

Now, if Ar is the product of Ur, Sr, and V'r, then why does the result, shown in the right picture of the attached doc, still have non-vanishing columns of Ur and non-vanishing rows of V'r? In other words, where do Ur1, Ur2, V'r1 and V'r2 come from, given that the other singular values, namely S2 and S3, are zero?
 

Attachments

  • SVD.pdf
  • #2
Stephen Tashi
Adel Makram said:
the singular value decomposition (SVD),
It's better to say "a singular value decomposition" since singular value decompositions are not unique.

In the right-hand side of your page, you could set columns 2 and 3 of [itex] U [/itex] equal to zeroes and rows 2 and 3 of [itex] V' [/itex] equal to zeroes and you'd still have a singular value decomposition.
one singular value from S

It isn't clear what you mean by "keeping" only one of the singular values.

One way to visualize the singular value decomposition [itex] M = USV' [/itex] is to say that the entries of [itex] M [/itex] are a table of data of some sort, and the singular value decomposition of [itex] M [/itex] expresses it as a linear combination of "simple" data tables, where the singular values are the coefficients in the linear combination. The simple data tables are imagined to have a list of "row headings" on their left margin and a list of "column headings" across the top, and each entry in a simple table is the product of the corresponding row heading and column heading for that entry. A given simple table has the jth column of [itex] U [/itex] as its row headings and the jth row of [itex] V' [/itex] as its column headings.
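
Here is a minimal numerical sketch of that picture (my own illustration in numpy, not part of the original post): each term of the sum is one "simple table", the outer product of a column of [itex] U [/itex] with a row of [itex] V' [/itex], weighted by the corresponding singular value.

[code]
import numpy as np

# Sketch: rebuild A from its "simple data tables" s_j * outer(u_j, v_j).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A)  # A = U S V'

# Each term is one simple table: row headings = j-th column of U,
# column headings = j-th row of V', all scaled by the coefficient s_j.
A_rebuilt = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(len(s)))

print(np.allclose(A, A_rebuilt))  # True
[/code]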

According to that way of looking at things, the way to "keep" only one singular value would be to keep only the corresponding term in the linear combination. This amounts to setting the other singular values equal to zero.

A linear combination where zeroes are allowed lets us write vector equations like (1,2,-4) = (1)(1,2,-4) = (1)(1,2,-4) + (0)(5,6,7) + (0)(9,3,14), where arbitrary vectors can appear as long as their coefficients are zero. A similar statement applies to linear combinations of simple data tables. This implies that some columns of [itex] U [/itex] and some rows of [itex] V' [/itex] can be chosen arbitrarily.
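
The same point in code (again my own sketch, not from the post): with zero coefficients, the extra columns of U and rows of V' can be filled with arbitrary junk without changing the product. (In a genuine SVD those vectors would also be chosen orthonormal, but the product does not care.)

[code]
import numpy as np

# One genuine rank-1 piece: s1 = 5 with unit vectors u1 and v1.
rng = np.random.default_rng(1)
u1 = rng.standard_normal(4); u1 /= np.linalg.norm(u1)
v1 = rng.standard_normal(3); v1 /= np.linalg.norm(v1)

# 4x3 singular value matrix with a single non-zero entry.
S = np.zeros((4, 3))
S[0, 0] = 5.0

# Fill the zero-coefficient columns of U and rows of V' with junk vectors.
U = np.column_stack([u1, rng.standard_normal((4, 3))])
Vt = np.vstack([v1, rng.standard_normal((2, 3))])

A1 = U @ S @ Vt
print(np.allclose(A1, 5.0 * np.outer(u1, v1)))  # True: the junk drops out
[/code]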
 
  • #3
Adel Makram
Stephen Tashi said:
A similar statement applies to linear combinations of simple data tables. This implies that some columns of [itex] U [/itex] and some rows of [itex] V' [/itex] can be chosen arbitrarily.
So I have two concerns here:
1) Does that mean we can construct an infinite number of matrices U and V' by arbitrarily choosing orthonormal column vectors of U and row vectors of V'?
2) Back to my question: U and V' are constructed from AA' = USSU' and A'A = VSSV'. But if this is applied to the reduced form of A, where we have only one non-zero singular value, then where do the U and V' vectors other than the first ones arise, given that A has only rank 1, and so do AA' and A'A?
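
A quick numerical check of exactly this situation (my own sketch, assuming numpy; not from the thread): the extra columns of U and rows of V' are eigenvectors of AA' and A'A for the eigenvalue 0, i.e. an orthonormal completion of the basis, so the decomposition routine still returns full orthogonal matrices.

[code]
import numpy as np

# Take a genuinely rank-1 matrix Ar and ask for its SVD.
rng = np.random.default_rng(2)
Ar = np.outer(rng.standard_normal(4), rng.standard_normal(3))  # rank 1

U, s, Vt = np.linalg.svd(Ar)
print(np.round(s, 12))                    # only s[0] is non-zero
print(np.allclose(U.T @ U, np.eye(4)))    # True: U is a full orthogonal 4x4
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # True: so is V'
[/code]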
 

1. What does SVD stand for in relation to a reduced rank matrix?

SVD stands for Singular Value Decomposition. It is a method for factoring a matrix A into a product of three matrices, A = UΣV' (the Σ here is the S used above), where U and V have orthonormal columns and Σ is diagonal with the singular values on its diagonal.
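
A minimal sketch in numpy (an illustration added here, not part of the original answer):

[code]
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)    # full SVD: U is 3x3, Vt is 2x2

Sigma = np.zeros_like(A)       # rebuild the 3x2 diagonal middle factor
Sigma[:2, :2] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vt))  # True: A = U Sigma V'
[/code]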

2. Why does a reduced rank matrix still have non-zero U and V in its SVD?

A reduced rank matrix means that some of its columns or rows are linearly dependent, resulting in a lower rank. The dependence shows up as zero singular values, not as missing columns of U and V: since U and V are required to be orthogonal, the columns and rows paired with the zero singular values must still be filled in with non-zero orthonormal vectors that complete the basis. Therefore, even for a reduced rank matrix, the U and V matrices still have non-zero entries (and, as noted in the thread above, those extra vectors are not unique).
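
A small sketch of that non-uniqueness (my own example): flipping the sign of a column of U together with the matching row of V' gives a different, equally valid decomposition.

[code]
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(A)

U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1   # flip u1 ...
Vt2[0, :] *= -1  # ... and v1 together
print(np.allclose(A, U2 @ np.diag(s) @ Vt2))  # True: still a valid SVD
[/code]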

3. How does SVD help in understanding a reduced rank matrix?

SVD helps in understanding a reduced rank matrix by providing insights into the linear relationships between its columns and rows. The U and V matrices in the SVD represent the left and right singular vectors, respectively, which can reveal the underlying structure and patterns in the data.

4. Can a reduced rank matrix have a zero singular value in its SVD?

Yes. In fact, a reduced rank matrix must have at least one zero singular value in its full SVD: each zero singular value corresponds to a linearly dependent column or row of the matrix, and the corresponding term contributes nothing to the decomposition.
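
A sketch of reading the rank off the singular values (my own example; the tolerance mirrors what np.linalg.matrix_rank uses by default):

[code]
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * row 1, so A has rank 2
              [0.0, 1.0, 1.0]])

s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(float).eps * s[0]
print(np.sum(s > tol))           # 2 numerically non-zero singular values
print(np.linalg.matrix_rank(A))  # 2
[/code]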

5. How is SVD used in practical applications involving reduced rank matrices?

SVD is widely used in various practical applications involving reduced rank matrices, such as image processing, data compression, and data approximation. It helps in reducing the dimensionality of the data and identifying the most significant features or patterns in the data, making it useful for data analysis and machine learning tasks.
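
For instance, a minimal compression sketch (my own illustration): keep only the k largest singular values and store the truncated factors instead of the full matrix.

[code]
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 50)) @ rng.standard_normal((50, 80))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

full = A.size                                 # 100*80 numbers
stored = U[:, :k].size + k + Vt[:k, :].size   # the truncated factors
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"storage {stored}/{full}, relative error {rel_err:.3f}")
[/code]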
