What Happens When the V Matrix is Zero in SVD?

  • Thread starter: SELFMADE
  • Tags: Matrix, SVD, Zero
SELFMADE
The semester is over, but I still want to figure this out.

Prob 8 on here:

http://www.math.uic.edu/~akers/310PracticeFinal.pdf

When trying to SVD that matrix, one of the U or V matrices turns out to be zero, but the answer key just gives the general formula for the SVD.

Can anyone explain? Thanks
 
SELFMADE said:
When trying to SVD that matrix, one of the U or V matrices turns out to be zero, but the answer key just gives the general formula for the SVD.

U and V can't be zero. The definition of the SVD says they are orthogonal matrices.

How are you doing this by hand? One way is to find the eigenvalues and eigenvectors of ##A^TA## and ##AA^T##. The columns of V and U are the eigenvectors of those matrices, respectively, and eigenvectors are nonzero by definition, so it doesn't make any sense for them to be zero.
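The recipe above can be checked numerically. This is a sketch only; the matrix ##A## below is a made-up example, not the one from the linked exam. The columns of U and V come from eigendecomposing ##AA^T## and ##A^TA##, and the signs are then fixed so that ##Av_i = \sigma_i u_i##:

```python
import numpy as np

# Hypothetical example matrix (not the one from the linked exam).
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Eigendecompose A A^T and A^T A; their eigenvectors give the columns of U and V.
# np.linalg.eigh returns eigenvalues in ascending order, so flip to descending.
w_u, U = np.linalg.eigh(A @ A.T)
w_v, V = np.linalg.eigh(A.T @ A)
U, V = U[:, ::-1], V[:, ::-1]
sigma = np.sqrt(w_v[::-1])          # singular values = sqrt of eigenvalues of A^T A

# eigh fixes each eigenvector only up to sign, so flip columns of U
# as needed to enforce A v_i = sigma_i u_i.
for i in range(len(sigma)):
    if sigma[i] > 1e-12 and np.dot(A @ V[:, i], U[:, i]) < 0:
        U[:, i] = -U[:, i]

# Sanity checks: reconstruction and agreement with the library SVD.
assert np.allclose(U @ np.diag(sigma) @ V.T, A)
assert np.allclose(sigma, np.linalg.svd(A, compute_uv=False))
```

Note the sign-fixing step: without it, the independently computed eigenvectors of ##AA^T## and ##A^TA## need not pair up correctly, and the reconstruction can come out wrong even though each factor is individually valid.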
 
Since the eigenvalues of ##O^TO## are all 0, ##\Sigma## would be exactly the same zero matrix as the given one. The SVD is unique only in ##\Sigma## in general, so you can pick any orthonormal bases for your domain and codomain and you have U and V.
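This non-uniqueness is easy to demonstrate: with ##\Sigma = 0##, any orthogonal U and V reconstruct the zero matrix. A minimal sketch (the random rotation angles are arbitrary choices, which is the point):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = np.zeros((2, 2))

def rotation(theta):
    """2x2 rotation matrix, an orthogonal matrix for any angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Pick *arbitrary* orthogonal factors: any choice works when Sigma = 0.
U = rotation(rng.uniform(0, 2 * np.pi))
V = rotation(rng.uniform(0, 2 * np.pi))
Sigma = np.zeros((2, 2))

assert np.allclose(U @ Sigma @ V.T, Z)   # reconstructs the zero matrix regardless
assert np.allclose(U.T @ U, np.eye(2))   # U is orthogonal
assert np.allclose(V.T @ V, np.eye(2))   # V is orthogonal
```

So a valid answer to the exam problem is ##O = U\,0\,V^T## for any orthogonal U and V, e.g. U = V = I.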
 
zcd said:
SVD is unique only to Σ in general, so you can pick any orthonormal bases for your domain and codomain and you have U and V.

Are you sure about that?

##A = U\Sigma V^T##

##A^TA = V\Sigma U^TU\Sigma V^T = V\Sigma^2 V^T##

##A^TA\,V = V\Sigma^2 V^TV = V\Sigma^2##

So ##\Sigma^2## and ##V## hold the eigenvalues and eigenvectors of ##A^TA##.

Except in the special case where there are repeated eigenvalues, ##V## is uniquely defined (apart from sign factors of ##\pm 1##).

Starting with ##AA^T##, the same is true of ##U##.
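The uniqueness claim can be checked numerically for a matrix with distinct singular values. This is a sketch with a made-up matrix: V computed from the eigendecomposition of ##A^TA## should match the library's V column by column, up to a factor of ##\pm 1##:

```python
import numpy as np

# Hypothetical matrix with distinct singular values (sqrt(45) and sqrt(5)).
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# V from the eigendecomposition of A^T A, reordered to descending eigenvalues
# to match np.linalg.svd's convention.
w, V = np.linalg.eigh(A.T @ A)
V = V[:, ::-1]

# V from the library SVD (returned as Vh, whose *rows* are the right singular vectors).
_, s, Vh = np.linalg.svd(A)

# With distinct singular values, each column agrees up to sign only.
for i in range(2):
    assert np.allclose(V[:, i], Vh[i, :]) or np.allclose(V[:, i], -Vh[i, :])
```

For the zero matrix every singular value is the repeated value 0, so this uniqueness argument does not apply there, which is consistent with the earlier point about free choice of bases.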
 
But in the case of ##T:\mathbb{R}^2 \to \mathbb{R}^2## where T is the zero transformation, any two linearly independent vectors in ##\mathbb{R}^2## are eigenvectors of T. Even with the restriction that the two eigenvectors we pick are orthonormal, there is more than one way to pick them aside from ordering. Since the matrix in question is specifically the zero matrix, this pathological example is actually relevant.
 
I think the OP's problem was that he got a zero V matrix, but that is wrong. Since we don't know what he or she actually did to get V = 0, it's impossible to say what the mistake was.

If you look at the question in the OP's link, the matrix does not have any repeated singular values.

I agree that U and V are not unique if there are repeated singular values.
 