Gram-Schmidt: getting ##P^TAP## diagonal

nautolian

Homework Statement



In light of the Gram-Schmidt orthogonalization process: if ##A: \mathbb{R}^n \to \mathbb{R}^n## and we have a basis of ##\mathbb{R}^n## consisting of eigenvectors of ##A##, can't we just orthonormalize them and get a matrix ##P## such that ##P^{-1} = P^T##, and thus ##P^TAP## is diagonal?

Homework Equations

The Attempt at a Solution

I believe the answer to this is yes, but I'm not sure how to prove this. Any ideas would be appreciated, thanks!
 
nautolian said:

In light of the Gram-Schmidt orthogonalization process: if ##A: \mathbb{R}^n \to \mathbb{R}^n## and we have a basis of ##\mathbb{R}^n## consisting of eigenvectors of ##A##, can't we just orthonormalize them and get a matrix ##P## such that ##P^{-1} = P^T##, and thus ##P^TAP## is diagonal?

I believe the answer to this is yes, but I'm not sure how to prove it. Any ideas would be appreciated, thanks!

No, that's not true in general. If your eigenvectors have different eigenvalues and aren't already orthogonal, you can apply Gram-Schmidt, but the resulting vectors won't be eigenvectors.
 
Why aren't those eigenvectors? Because you've changed the vectors too much?
 
nautolian said:
Why aren't those eigenvectors? Because you've changed the vectors too much?

Sure, you changed them too much. Why would you think they would be eigenvectors? A linear combination of two eigenvectors is not necessarily an eigenvector.
 
I thought it might be an eigenvector because I was under the impression, for some reason, that the orthonormalization would not necessarily mean using the Gram-Schmidt process, but rather just normalizing each vector (dividing by the square root of the sum of the squares of its entries). Is this not the case?
 
nautolian said:
I thought it might be an eigenvector because I was under the impression, for some reason, that the orthonormalization would not necessarily mean using the Gram-Schmidt process, but rather just normalizing each vector (dividing by the square root of the sum of the squares of its entries). Is this not the case?

That's pretty confused. Look, take the matrix [[1,1],[0,2]]. The eigenvalues are 1 and 2. Find the eigenvectors. It's easy. Now you can normalize them, but you can't force them to be orthogonal.
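A quick numerical sketch of that example (using NumPy; the variable names are my own): the eigenvectors of ##\begin{pmatrix}1 & 1\\ 0 & 2\end{pmatrix}## are (1, 0) for eigenvalue 1 and (1, 1) for eigenvalue 2. Running Gram-Schmidt replaces the second one with (0, 1), and applying ##A## to (0, 1) shows it is no longer an eigenvector:

```python
import numpy as np

# The matrix from the example above: not symmetric, eigenvalues 1 and 2.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Its eigenvectors: (1, 0) for eigenvalue 1, (1, 1) for eigenvalue 2.
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])

# Gram-Schmidt: keep v1, subtract off v2's component along v1, normalize.
q1 = v1 / np.linalg.norm(v1)
q2 = v2 - (v2 @ q1) * q1
q2 = q2 / np.linalg.norm(q2)        # q2 = (0, 1)

# q1 is still an eigenvector, but q2 is not: A @ q2 = (1, 2),
# which is not a scalar multiple of (0, 1).
print(A @ q1)   # parallel to q1
print(A @ q2)   # NOT parallel to q2
```

The failure is exactly the point made above: q2 is a linear combination of eigenvectors with different eigenvalues, so A stretches its two components by different factors and the result points in a new direction.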
 