Different matrices for the same linear operator


Homework Help Overview

The discussion revolves around understanding how different matrices can represent the same linear operator in the context of linear algebra, specifically focusing on the operator T on \mathcal{C}^2 and its representation in different bases. Participants explore the relationship between matrices and their corresponding bases, particularly through the lens of eigenvalues and Jordan decomposition.

Discussion Character

  • Conceptual clarification, Mathematical reasoning, Problem interpretation, Assumption checking

Approaches and Questions Raised

  • Participants discuss the transformation of basis vectors and the implications of using different matrices to represent the same operator. There are inquiries about the conditions under which two matrices are considered to represent the same operator, particularly focusing on concepts like conjugacy and Jordan decomposition.

Discussion Status

The discussion is active, with participants offering various perspectives on the relationship between matrices, eigenvalues, and eigenvectors. Some guidance has been provided regarding the change of basis and the implications of having the same eigenvalues, but there remains uncertainty about the specifics of these relationships and their implications.

Contextual Notes

There is mention of confusion regarding the correct formulation of change of basis matrices and the conditions under which matrices can be considered equivalent. Participants express uncertainty about the uniqueness of eigenvectors and the implications of having the same eigenvalues.

octol
Consider the linear operator T on [tex]\mathcal{C}^2[/tex] with the matrix
[tex]\begin{bmatrix} 2 & -3 \\ 3 & 2 \end{bmatrix}[/tex]
in the standard basis. With the basis vectors
[tex]\frac{1}{\sqrt{2}} \begin{bmatrix} i \\ 1 \end{bmatrix}, \quad \frac{1}{\sqrt{2}} \begin{bmatrix} -i \\ 1 \end{bmatrix}[/tex]
this operator can be written
[tex]\begin{bmatrix} 2+3i & 0 \\ 0 & 2-3i \end{bmatrix}[/tex]

My question is, how can I see this? How can I see that the two matrices represent the same operator? I understand that what these matrices represent is how they transform the basis vectors, e.g., for the first matrix above we have
[tex]T \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3 \begin{bmatrix} 0 \\ 1 \end{bmatrix}.[/tex]

It was a long time since I took a class in linear algebra, so I have totally forgotten how to think about these things.
 
Let v be the matrix whose columns are the new basis vectors: [tex]v = \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}[/tex] and let v* be the complex conjugate transpose of v. Compute v* T v / 2, which is the operator in the new basis (the factor 1/2 compensates for the missing [itex]1/\sqrt{2}[/itex] normalization, since here [itex]v^* v = 2I[/itex]). In other words, the matrix v contains the eigenvectors of T, and the matrix of T in the new basis contains the eigenvalues on the diagonal.
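A quick numerical sketch of this recipe (not from the thread; numpy assumed), using the operator and basis from the opening post:

```python
import numpy as np

# T in the standard basis.
T = np.array([[2, -3],
              [3,  2]], dtype=complex)

# Columns are the (unnormalized) new basis vectors.
v = np.array([[1j, -1j],
              [1,   1 ]], dtype=complex)

# Since v* v = 2 I, we have v^{-1} = v*/2, so this is the change of basis.
T_new = v.conj().T @ T @ v / 2
print(np.round(T_new, 10))   # diagonal matrix with 2+3i and 2-3i
```

The rounding only cleans up floating-point noise; the exact result is the diagonal matrix of eigenvalues.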
 
Two matrices represent the same operator w.r.t. different bases if and only if they're conjugate, which holds if and only if they have the same Jordan decomposition.
 
matt grime said:
Two matrices represent the same operator w.r.t. different bases if and only if they're conjugate. Which is if and only if they have the same Jordan decomposition.

How do you check this explicitly then in the above case? Sorry, I'm a bit lost here.
 
Sorry, I appear to have written something completely inappropriate.

Just write down the matrix that represents the change of coordinates.

If you have a linear map with matrix M with respect to a basis B_1, and P is the change of basis matrix from B_1 to B_2 for some other basis B_2, then the linear map has matrix [itex]PMP^{-1}[/itex] with respect to B_2.
 
Let's call the standard basis E and the other basis B, and say that in basis E the matrix of T is A, while in basis B it's A'. You want to find A'.
If M is the change of coordinates matrix from E to B (which is just the matrix whose columns are the vectors of B), then [itex]A' = M^{-1} A M[/itex].
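As a sanity check, here is a small numpy sketch of this formula for the thread's operator (numpy assumed; not part of the original post). It also shows which way M maps coordinates:

```python
import numpy as np

A = np.array([[2, -3],
              [3,  2]], dtype=complex)

s = 1 / np.sqrt(2)
# M's columns are the vectors of B written in the standard basis E.
M = s * np.array([[1j, -1j],
                  [1,   1 ]], dtype=complex)

A_prime = np.linalg.inv(M) @ A @ M   # matrix of T in basis B
print(np.round(A_prime, 10))         # diagonal with 2+3i and 2-3i

# Direction check: M sends B-coordinates to E-coordinates.
# The coordinate vector (1, 0) relative to B comes out as the
# first basis vector of B expressed in E, i.e. the first column of M.
print(M @ np.array([1, 0]))
```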
 
Beyond the fact that I can never get my inverses in the right place, normally, in what way does that say something, daniel, that has not already been said?
 
Sorry, I didn't read all the posts before posting :blushing:
 
I'm pretty sure that what you're talking about amounts to the two matrices having the same eigenvalues.
 
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.
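A concrete instance of this point (an editorial sketch, numpy assumed): the identity matrix and a nontrivial Jordan block both have eigenvalues {1, 1}, yet they cannot be conjugate, since conjugating the identity by any invertible P gives back the identity.

```python
import numpy as np

I2 = np.eye(2)
J  = np.array([[1., 1.],
               [0., 1.]])

print(np.linalg.eigvals(I2))   # [1. 1.]
print(np.linalg.eigvals(J))    # [1. 1.]

# If J = P I2 P^{-1} for some invertible P, then J = I2. But:
print(np.array_equal(J, I2))   # False
```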
 
daniel_i_l said:
...M is the change of coordinates matrix from E to B (which is just a matrix
whose columns are the vectors of B) ...

Is this really correct? I thought it was the other way around? *confused*
i.e that the matrix to change the basis from B to E is the matrix where the columns are given by the vectors of B?
 
octol said:
Is this really correct? I thought it was the other way around? *confused*
i.e that the matrix to change the basis from B to E is the matrix where the columns are given by the vectors of B?
Darned if I know! I, like matt, can never get the inverses in the right place.

matt grime said:
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.
matt, is it sufficient that the two matrices have the same eigenvalues and the same eigenvectors (not just the same number of eigenvectors) for each eigenvalue?
 
All that means is that they have the same number of Jordan blocks, and that the eigenvectors for each are 'the same', whatever that might mean*. Two matrices are 'the same' if they have identical Jordan decompositions. This is easy to visualize - there is a bijection between blocks that preserves the diagonal element and the size of the block - but hard to formalize succinctly on an ad hoc basis. I suppose we should say

M~N iff for any decompositions

[tex]M \sim \oplus_{i=1}^m J(\lambda_i,r_i)[/tex]

and

[tex]N \sim \oplus_{i=1}^n J(\mu_i,s_i)[/tex]

then, up to reordering, n=m, [itex]\lambda_i = \mu_i[/itex], and [itex]r_i = s_i[/itex],

where J(x,y) means the Jordan block of size y with x on the diagonal.




* I really don't know what you mean by 'the same eigenvectors', to be honest: you give me any set of linearly independent vectors, and any other set with the same cardinality, and there is a change of basis (and usually infinitely many) that maps one to the other.
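For the thread's example, this criterion is easy to check numerically (an editorial sketch, numpy assumed): both matrices from the opening post have the two distinct eigenvalues 2 ± 3i, so every Jordan block is 1x1, and equal multisets of eigenvalues already imply identical Jordan forms up to reordering.

```python
import numpy as np

A = np.array([[2, -3],
              [3,  2]], dtype=complex)
D = np.diag([2 + 3j, 2 - 3j])

# sort_complex orders by real part, then imaginary part,
# giving a canonical ordering of each spectrum.
eig_A = np.sort_complex(np.linalg.eigvals(A))
eig_D = np.sort_complex(np.linalg.eigvals(D))
print(np.allclose(eig_A, eig_D))   # True: same eigenvalues, hence conjugate here
```

Note this shortcut relies on the eigenvalues being distinct; with repeated eigenvalues one would also have to compare block sizes, as matt's counterexample shows.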
 
HallsofIvy said:
matt, is it sufficient that the two matrices have the same eigenvalues and the same eigenvectors (not just the same number of eigenvectors) for each eigenvalue?

If two matrices have the same eigenvalues and corresponding eigenvectors, and those eigenvectors form a basis of the space, then they must represent the same transformation.

However, it's not necessarily the case that the eigenvectors are unique if the eigenvalues aren't distinct. (If there are two eigenvectors [itex]\vec{v}_1,\vec{v}_2[/itex] with the same eigenvalue [itex]\lambda[/itex], then any linear combination of those two will also be an eigenvector with the same eigenvalue - trivial examples include the zero or identity matrix in 2 or more dimensions.)

matt grime said:
Not at all. It is easy to construct non-conjugate matrices with the same eigen-values, the same number of eigen-vectors for each eigen-value etc.

Yeah, that only works if the matrix is diagonalizable.
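The non-uniqueness point is easy to see numerically (an editorial sketch, numpy assumed): for the identity matrix, every nonzero vector is an eigenvector with eigenvalue 1, so there is no distinguished eigenbasis.

```python
import numpy as np

I3 = np.eye(3)

# Any arbitrary nonzero vector works as an eigenvector of the identity.
rng = np.random.default_rng(0)
v = rng.standard_normal(3)

print(np.allclose(I3 @ v, 1.0 * v))   # True: v is an eigenvector, eigenvalue 1
```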
 
