Different matrices for the same linear operator

octol
Consider the linear operator T on \mathbb{C}^2 with the matrix
\begin{bmatrix} 2 & -3 \\ 3 & 2 \end{bmatrix}
in the standard basis. With the basis vectors
\frac{1}{\sqrt{2}} \begin{bmatrix} i \\ 1 \end{bmatrix}, \quad \frac{1}{\sqrt{2}} \begin{bmatrix} -i \\ 1 \end{bmatrix}
this operator can be written
\begin{bmatrix} 2+3i & 0 \\ 0 & 2-3i \end{bmatrix}

My question is, how can I see this? How can I see that the two matrices represent the same operator? I understand that the columns of such a matrix record what the operator does to the basis vectors, e.g., for the first matrix above we have
T \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3 \begin{bmatrix} 0 \\ 1 \end{bmatrix}.

It has been a long time since I took a class in linear algebra, so I have totally forgotten how to think about these things.
 
Let v be the matrix whose columns contain the new basis vectors: v = \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}, and let
v^* be the complex conjugate transpose of v. Compute
v^* T v / 2, which is the operator in the new basis. In other
words, the matrix v contains the eigenvectors of T, and the matrix of T
in the new basis contains the eigenvalues on the diagonal.
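
For anyone who wants to see this concretely, here is a minimal numerical check of the recipe above; a sketch only, assuming numpy is available and using made-up variable names:

```python
# Minimal check of v* T v / 2 for the matrices in this thread (assumes numpy).
import numpy as np

T = np.array([[2, -3],
              [3,  2]], dtype=complex)      # matrix of T in the standard basis
v = np.array([[1j, -1j],
              [1,   1 ]], dtype=complex)    # columns = the new basis vectors, without the 1/sqrt(2)

A_new = v.conj().T @ T @ v / 2              # v* T v / 2
print(np.round(A_new, 10))                  # ≈ [[2+3j, 0], [0, 2-3j]]
```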
 
Two matrices represent the same operator w.r.t. different bases if and only if they're conjugate. Which is if and only if they have the same Jordan decomposition.
 
matt grime said:
Two matrices represent the same operator w.r.t. different bases if and only if they're conjugate. Which is if and only if they have the same Jordan decomposition.

How do you check this explicitly then in the above case? Sorry, I'm a bit lost here.
 
Sorry, I appear to have written something completely inappropriate.

Just write down the matrix that represents the change of coordinates.

If you have a linear map with matrix M with respect to a basis B_1, and P is the change of basis matrix from B_1 to B_2 for some other basis B_2, then the linear map has matrix PMP^{-1} with respect to B_2.
 
Let's call the standard basis E and the other basis B. And let's say that in the basis E the matrix of T is A and in basis B it's A'. You want to find A'.
If M is the change of coordinates matrix from E to B (which is just the matrix
whose columns are the vectors of B), then A' = M^{-1} A M.
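
Purely as a sanity check on the formulas in the last two posts (a sketch assuming numpy; M below is simply the matrix with the normalized B vectors as columns), the computation reproduces the diagonal matrix from the first post:

```python
# Check of A' = M^{-1} A M with M's columns the (normalized) B vectors (assumes numpy).
import numpy as np

A = np.array([[2, -3],
              [3,  2]], dtype=complex)                  # T in the standard basis E
M = np.array([[1j, -1j],
              [1,   1 ]], dtype=complex) / np.sqrt(2)   # columns = the basis B

A_prime = np.linalg.inv(M) @ A @ M
print(np.round(A_prime, 10))                            # ≈ [[2+3j, 0], [0, 2-3j]]

# Because the columns of M are orthonormal, M is unitary and M^{-1} = M^*,
# so this is the same computation as v* T v / 2 in the earlier post.
```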
 
Beyond the fact that, normally, I can never get my inverses in the right place, in what way does that say something, daniel, that has not already been said?
 
Sorry, I didn't read all the posts before posting :blushing:
 
I'm pretty sure that what you're talking about amounts to the two matrices having the same eigenvalues.
 
  • #10
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.
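
To make this concrete, here is one such pair, sketched with numpy; the matrices are nilpotent Jordan blocks chosen purely for illustration:

```python
# Two 4x4 matrices with the same eigenvalues (only 0) and the same number of
# independent eigenvectors (2 each), yet not conjugate: the Jordan block sizes
# are 3+1 for N1 but 2+2 for N2, which rank(N^2) detects. (Assumes numpy.)
import numpy as np

N1 = np.zeros((4, 4)); N1[0, 1] = N1[1, 2] = 1   # J(0,3) + J(0,1)
N2 = np.zeros((4, 4)); N2[0, 1] = N2[2, 3] = 1   # J(0,2) + J(0,2)

for N in (N1, N2):
    eigenvalues = np.round(np.linalg.eigvals(N), 10)
    num_eigenvectors = 4 - np.linalg.matrix_rank(N)      # dimension of the nullspace
    print(eigenvalues, num_eigenvectors, np.linalg.matrix_rank(N @ N))
# Both lines show eigenvalues [0 0 0 0] and 2 eigenvectors, but rank(N1^2) = 1
# while rank(N2^2) = 0, so no change of basis turns one into the other.
```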
 
  • #11
daniel_i_l said:
...M is the change of coordinates matrix from E to B (which is just a matrix
whose columns are the vectors of B) ...

Is this really correct? I thought it was the other way around? *confused*
I.e., that the matrix to change the basis from B to E is the matrix whose columns are given by the vectors of B?
 
  • #12
octol said:
Is this really correct? I thought it was the other way around? *confused*
I.e., that the matrix to change the basis from B to E is the matrix whose columns are given by the vectors of B?
Darned if I know! I, like matt, can never get the inverses in the right place.

matt grime said:
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.
matt, is it sufficient that the two matrices have the same eigenvalues and the same eigenvectors (not just the same number of eigenvectors) for each eigenvalue?
 
  • #13
All that means is that they have the same number of Jordan blocks, and that the e-vectors for each are 'the same' whatever that might mean*. Two matrices are 'the same' if they have identical Jordan decompositions. This is easy to visualize - there is a bijection between blocks that preserves the diagonal element and the size of the block - but hard to formalize succinctly on an ad hoc basis. I suppose we should say

M~N iff for any decompositions

M \sim \oplus_{i=1}^m J(\lambda_i,r_i)

and

N \sim \oplus_{i=1}^n J(\mu_i,s_i)

then, up to reordering, n=m, and \lambda_i =\mu_i and r_i=s_i

where J(x,y) means the Jordan block of size y with x on the diagonal.




* I really don't know what you mean by 'the same eigenvectors', to be honest: you give me any set of linearly independent vectors, and any other set with the same cardinality, and there is a change of basis (and usually infinitely many) that maps one to the other.
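
For what it's worth, here is one way this criterion could be checked numerically without writing down a Jordan basis; a sketch assuming numpy and that the supplied list covers every eigenvalue of both matrices (the helper names are made up for illustration):

```python
# For each eigenvalue lam, the ranks of (A - lam*I)^k for k = 1, 2, ... determine
# the Jordan block sizes for lam, so two matrices of the same size are conjugate
# iff all these rank profiles agree. (Assumes numpy and known eigenvalues.)
import numpy as np

def rank_profile(A, lam):
    """Ranks of (A - lam*I)^k for k = 1..n; these encode the Jordan block sizes for lam."""
    n = A.shape[0]
    B = A - lam * np.eye(n)
    P, ranks = np.eye(n), []
    for _ in range(n):
        P = P @ B
        ranks.append(int(np.linalg.matrix_rank(P)))
    return ranks

def same_jordan_form(M, N, eigenvalues):
    """True iff M and N have the same Jordan decomposition, up to reordering of blocks."""
    return M.shape == N.shape and all(
        rank_profile(M, lam) == rank_profile(N, lam) for lam in eigenvalues)

# The two nilpotent matrices from the earlier example are told apart at k = 2:
N1 = np.zeros((4, 4)); N1[0, 1] = N1[1, 2] = 1      # blocks of sizes 3 and 1
N2 = np.zeros((4, 4)); N2[0, 1] = N2[2, 3] = 1      # blocks of sizes 2 and 2
print(same_jordan_form(N1, N2, eigenvalues=[0.0]))  # False
```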
 
  • #14
HallsofIvy said:
matt, is it sufficient that the two matrices have the same eigenvalues and the same eigenvectors (not just the same number of eigenvectors) for each eigenvalue?

If two matrices have the same eigenvalues and corresponding eigenvectors, then they must be the same transformation if those eigenvectors are a basis of the space.

However, it's not necessarily the case that the eigenvectors are unique if the eigenvalues aren't distinct. (If there are two eigenvectors \vec{v}_1, \vec{v}_2 with the same eigenvalue \lambda, then any linear combination of those two will also be an eigenvector with the same eigenvalue - trivial examples include the zero or identity matrix in 2 or more dimensions.)

matt grime said:
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.

Yeah, that only works if the matrix is diagonalizable.
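
As a quick illustration of the diagonalizable case (a numpy sketch, nothing more): prescribing the eigenvalues together with a basis of eigenvectors pins the operator down, and for the data in this thread it recovers the original matrix of T.

```python
# Rebuild T from its eigenvalues and an eigenvector basis (assumes numpy).
import numpy as np

P = np.array([[1j, -1j],
              [1,   1 ]], dtype=complex) / np.sqrt(2)   # eigenvector basis from the thread
D = np.diag([2 + 3j, 2 - 3j])                           # the corresponding eigenvalues
print(np.round(P @ D @ np.linalg.inv(P), 10))           # ≈ [[2, -3], [3, 2]]
```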
 