# Simple problem using eigenvalue/eigenvector

I have a simple question about eigenvalues and eigenvectors. I have the
eigenvalues of a matrix (which is itself unknown) and I also have its eigenvectors.
I made a small change to the diagonal eigenvalue matrix and would like to construct
a full-rank matrix from the modified eigenvalues. I am not sure how to proceed,
although I do know that the following works.

Given D, the diagonal eigenvalue matrix, and V, the matrix of eigenvectors: if C is
the matrix we need to find, then C*V = V*D. I need to determine C, and I found this
solution in a book:

C = transpose(sqrt(D)*V) * (sqrt(D)*V)

Note that D here is a positive diagonal matrix, and that sqrt(D)*V has been normalized
so that every row has unit length.

Can someone point me to a proof of this equality? I know that for an orthogonal matrix such as V, the inverse equals the transpose. But I still don't see why the transpose is introduced rather than the inverse; if I use the inverse I just get D back:

D*V*inv(V) = D.

Can someone comment?

Thanks.

## Answers and Replies

HallsofIvy
Science Advisor
Homework Helper
It's a little hard to tell exactly what you are asking. You start by talking about the eigenvalues of a matrix but then you seem to be assuming 1) the eigenvalues are all positive and 2) the matrix is diagonalizable.

> It's a little hard to tell exactly what you are asking. You start by talking about the eigenvalues of a matrix but then you seem to be assuming 1) the eigenvalues are all positive and 2) the matrix is diagonalizable.

Allow me to rephrase.
1) I start with a matrix C_hat whose eigenvalues and eigenvectors have been computed. Some of the eigenvalues are negative, and this causes a problem for me because I need to use C_hat in an optimization framework; in particular, C_hat is required to be invertible. The idea is to change C_hat as little as possible so that its eigenvalues become positive. One easy way to do this is to set all negative eigenvalues of C_hat to zero and then build a new matrix from the modified eigenvalues and the old eigenvectors.

2) The new set of eigenvalues (call it D) is nonnegative by construction, and I assumed it to be positive.

3) If V is the eigenvector matrix, then according to the book, V*sqrt(D), normalized so that all the row vectors have unit length, can be used to produce the matrix C via

C = V*sqrt(D) * transpose(V*sqrt(D)).

Now, C should be close enough to C_hat (provided only one or two of the eigenvalues of the original C_hat were negative). Also, C has all nonnegative eigenvalues by construction, so C can be used in the optimization. What I am trying to prove is the equation above.
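As a sanity check, the clipping procedure described in steps 1-3 can be sketched in a few lines of numpy. The matrix C_hat below is a made-up example chosen to have exactly one negative eigenvalue; the specific numbers are illustrative only:

```python
import numpy as np

# Made-up symmetric matrix with one negative eigenvalue
# (the specific entries are illustrative only).
C_hat = np.array([[1.0, 0.9, 0.2],
                  [0.9, 1.0, 0.9],
                  [0.2, 0.9, 1.0]])

d, V = np.linalg.eigh(C_hat)        # eigenvalues d, eigenvectors in the columns of V
d_clipped = np.clip(d, 0.0, None)   # set the negative eigenvalues to zero

# Rebuild with the old eigenvectors and the clipped eigenvalues.
C = V @ np.diag(d_clipped) @ V.T

# C is symmetric and positive semidefinite by construction.
print(np.linalg.eigvalsh(C))
```

The rebuilt C stays close to C_hat when only one or two eigenvalues were clipped, since only those directions of the decomposition are altered.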

Thanks.

HallsofIvy
Science Advisor
Homework Helper
> Allow me to rephrase.
> 1) I start with a matrix C_hat whose eigenvalues and eigenvectors have been computed. Some of the eigenvalues are negative, and this causes a problem for me because I need to use C_hat in an optimization framework; in particular, C_hat is required to be invertible. The idea is to change C_hat as little as possible so that its eigenvalues become positive.

What does "having negative eigenvalues" have to do with not being invertible?

> One easy way to do this is to set all negative eigenvalues of C_hat to zero and then build a new matrix from the modified eigenvalues and the old eigenvectors.

Okay, now you will have changed from a matrix that is invertible (it has no 0 eigenvalues) to one that is not invertible (it has 0 eigenvalues)! Surely that isn't what you mean.


> What does "having negative eigenvalues" have to do with not being invertible?

Thanks for your time.
OK, a matrix does not need to be positive semidefinite to be invertible. Invertibility is a separate requirement, but it is not what I am trying to achieve with this exercise. Let's start from scratch and set the invertibility requirement aside for now. What I need is a positive semidefinite matrix, because the optimization objective that uses this matrix must always be nonnegative. One way to guarantee that is to have a positive semidefinite matrix, since the objective function takes the form

$\sum_i \lambda_i w_i^2 a_i^2$,

where the $\lambda_i$ are the eigenvalues and $w$ is the vector I need to obtain from the optimization. This sum has to be nonnegative at all costs, so making sure that all the eigenvalues are nonnegative is important for me.
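That nonnegativity claim is easy to check numerically. The sketch below uses made-up eigenvalues and random vectors; once every $\lambda_i \ge 0$, every term of the sum is nonnegative, so the whole sum is:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([2.0, 0.5, 0.0, 1.3])   # made-up nonnegative eigenvalues

for _ in range(1000):
    w = rng.standard_normal(4)
    a = rng.standard_normal(4)
    val = np.sum(lam * w**2 * a**2)
    # every term lambda_i * w_i^2 * a_i^2 is >= 0, so the sum is too
    assert val >= 0.0
```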

> Okay, now you will have changed from a matrix that is invertible (it has no 0 eigenvalues) to one that is not invertible (it has 0 eigenvalues)! Surely that isn't what you mean.

I did not mean that all the eigenvalues are zero, only the negative ones. Certainly, if all the eigenvalues were negative (and replaced by zeros), I would have to treat that as an exception and move on to another matrix or solve the problem some other way. But I don't have matrices whose eigenvalues are all negative; only one or two of the 10 or 12 eigenvalues are negative.

So, to recap: I have D, a set of nonnegative real eigenvalues, and V, a set of nonzero eigenvectors. How do I obtain a matrix that has this set of eigenvalues and eigenvectors? What I need to prove is the equation mentioned earlier, C = B*B', where B = V*sqrt(D), normalized so that all the row vectors have unit length. I need the unit-length rows so that the diagonal entries of C are all 1.
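The unit-diagonal claim can be checked directly. Here is a sketch of the recipe as I read it (clip the eigenvalues, form B = V*sqrt(D), normalize the rows, take B*B'), again using a made-up input matrix:

```python
import numpy as np

# Made-up symmetric matrix with one negative eigenvalue.
C_hat = np.array([[1.0, 0.9, 0.2],
                  [0.9, 1.0, 0.9],
                  [0.2, 0.9, 1.0]])

d, V = np.linalg.eigh(C_hat)
d = np.clip(d, 0.0, None)            # clip negative eigenvalues to zero

B = V * np.sqrt(d)                   # same as V @ diag(sqrt(d))
B /= np.linalg.norm(B, axis=1, keepdims=True)  # unit-length rows

C = B @ B.T
print(np.diag(C))                    # each diagonal entry is a squared row norm: all 1
```

Each diagonal entry C[i][i] is the squared Euclidean length of row i of B, so the row normalization is exactly what forces the unit diagonal.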

Any help?

Well, have you tried $A = VDV^{-1}$? That is the most basic form of the eigendecomposition...

> Well, have you tried $A = VDV^{-1}$? That is the most basic form of the eigendecomposition...

Perfect, thanks. That should work, and it does. I suppose in the proof they just replaced inv(V) with transpose(V), since V is orthogonal and the two are the same.
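That substitution is easy to confirm numerically: for a symmetric matrix, `eigh` returns an orthogonal V, so rebuilding with inv(V) and with V.T gives the same result. A quick sketch with a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = (A + A.T) / 2                    # random symmetric matrix

d, V = np.linalg.eigh(S)             # V is orthogonal: V.T @ V == I
S_inv = V @ np.diag(d) @ np.linalg.inv(V)
S_tr = V @ np.diag(d) @ V.T          # transpose in place of the inverse

print(np.allclose(S, S_inv), np.allclose(S, S_tr))  # True True
```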

And regarding the positive/negative eigenvalue discussion: I think you have a matrix of the form $AA^T$, and that is how you get only positive and zero eigenvalues, depending on the rank of a tall matrix A. It is indeed hard to follow your remarks, but anyway, good luck.
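For what it's worth, the $AA^T$ remark is easy to demonstrate: for a tall A, the Gram matrix $AA^T$ has only nonnegative eigenvalues, with zeros filling in beyond the rank of A. A short sketch with a random tall matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))      # tall matrix, rank at most 3
G = A @ A.T                          # 5x5 Gram matrix, rank <= 3

w = np.linalg.eigvalsh(G)
# eigenvalues are nonnegative; at least 5 - 3 = 2 are numerically zero
print(w)
```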

Well, I suppose you would need to see the optimization problem I am dealing with to really get a sense of what I am talking about. I may have been all over the map in trying to explain it, and I agree that can be a bit confusing. In any case, I found the solution.