Simultaneous Diagonalizability of A and B

SUMMARY

The discussion concerns the simultaneous diagonalizability of commuting diagonalizable linear operators A and B, i.e. operators satisfying AB = BA. Working in a basis in which A is diagonal, the argument shows that B is block diagonal, with block dimensions matching the eigenspaces of A, and that diagonalizing B within each eigenspace produces a common basis of eigenvectors, so both operators can be diagonalized simultaneously.

PREREQUISITES
  • Understanding of diagonalizable linear operators
  • Familiarity with Jordan forms and bases
  • Knowledge of eigenvalues and eigenvectors
  • Comprehension of matrix commutativity
NEXT STEPS
  • Study the properties of commuting matrices in linear algebra
  • Learn about Jordan canonical forms and their applications
  • Explore the implications of eigenvalue multiplicity on diagonalizability
  • Investigate the relationship between eigenvectors of commuting operators
USEFUL FOR

Mathematicians, linear algebra students, and anyone studying operator theory or matrix diagonalization will benefit from this discussion.

Daron

Homework Statement



A and B are commuting diagonalizable linear operators. Prove that they are simultaneously diagonalizable.

Homework Equations



AB = BA


The Attempt at a Solution



We deal with the problem in a Jordan basis of A; since A is diagonalizable, its Jordan form (unique up to the ordering of blocks) is diagonal, so A is diagonal in this basis.

Then, by rearranging the basis vectors so that equal eigenvalues are grouped together, we can treat A as a block diagonal matrix whose blocks are of the form λ_i I.

I aim to prove that if A is diagonal and commutes with B, then B must also be diagonal, so that they share the same Jordan basis.

I can prove that B must also be a block diagonal matrix, with the dimensions of the blocks mirroring those of A.
This is because if a nonzero entry of B existed outside of those blocks, the corresponding entries of AB and BA would be that entry multiplied by two different eigenvalues of A, so AB and BA could not be equal (spelled out entrywise below).
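Spelling out that entrywise argument (the index notation below is mine, not the original post's): write the diagonal entries of A as λ_1, ..., λ_n. Then for any indices k and l,

(AB)_kl = λ_k B_kl   and   (BA)_kl = B_kl λ_l,

so AB = BA forces (λ_k − λ_l) B_kl = 0. Hence B_kl = 0 whenever λ_k ≠ λ_l, which is exactly the block diagonal structure claimed above.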

But from here I don't know what to do next. Is there some restriction saying that a diagonalizable matrix cannot be written in block diagonal form with blocks that are not themselves diagonalizable?
 


You can use the fact that AB = BA to show that if x is an eigenvector of A with eigenvalue λ, then Bx is also an eigenvector of A with eigenvalue λ (or Bx = 0, which still lies in the λ-eigenspace).
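Spelled out, the one-line computation behind that hint uses only the given relations:

A(Bx) = (AB)x = (BA)x = B(Ax) = B(λx) = λ(Bx),

so Bx lies in the λ-eigenspace of A, and is an eigenvector of A with eigenvalue λ whenever Bx ≠ 0.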

If the eigenvalue has multiplicity 1, then what can you say about x in relation to B?

What happens when the multiplicity is greater than 1?
 


If the multiplicity is 1, then A(Bx) = λBx, so Bx is an eigenvector of A with eigenvalue λ; but since the λ-eigenspace is one-dimensional, Bx must be a scalar multiple of x, so x is also an eigenvector of B.

I've moved on to trying to prove that commuting diagonalizable matrices share a basis of eigenvectors, which implies that both are diagonal in that basis.

If the multiplicity is greater than 1, then B can shuffle the eigenvectors among themselves. For example, if x_i and x_j are two independent eigenvectors of A with eigenvalue λ, I can't see why the following can't be true:

Bx_i = λx_j
Bx_j = λx_i.
 


That can be true, but what's important is that B merely shuffles vectors within each eigenspace of A, so that any linear combination of x_i and x_j will still be an eigenvector of A.
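As a concrete continuation of that example (this particular combination is my illustration, not from the thread; assume λ ≠ 0): with Bx_i = λx_j and Bx_j = λx_i, we get

B(x_i + x_j) = λ(x_i + x_j)   and   B(x_i − x_j) = −λ(x_i − x_j),

and both x_i + x_j and x_i − x_j are still eigenvectors of A with eigenvalue λ. So on this eigenspace, both A and B are diagonal in the basis {x_i + x_j, x_i − x_j}.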
 


Is the idea that every eigenvector of B with eigenvalue λ can be written as a linear combination of the eigenvectors of A with eigenvalue λ, and that these combinations are still eigenvectors of A by linearity?
 


Yes, that's the idea.

I'm not sure if you meant to say that the eigenvalue associated with B and the eigenvalue associated with A are equal. They're generally not.
 


They're equal up to a scalar multiple.

So for every eigenvalue λ of A with multiplicity m, we get a system of m relations of the form Bx_i = Σ_j a_ij x_j, where the sum runs over the eigenvectors x_j in the same eigenspace. This says that the λ-eigenspace of A is invariant under B.

And because B is diagonalizable, its restriction to this invariant eigenspace is also diagonalizable (its minimal polynomial divides that of B, so it is a product of distinct linear factors), so we can find a basis of m eigenvectors of B within the eigenspace. Each of these is a linear combination of eigenvectors of A with eigenvalue λ, and hence is still an eigenvector of A.

And then vice versa. Thanks.
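For completeness, here is a small numerical sketch of the construction discussed in this thread (my own illustration, not part of the original posts; the example matrices and the tolerance are arbitrary choices): diagonalize A, restrict B to each eigenspace of A, diagonalize those restrictions, and collect the results into a common eigenbasis.

```python
import numpy as np

# Sketch: find a common eigenbasis for two commuting diagonalizable matrices
# by diagonalizing A and then diagonalizing B on each eigenspace of A.
# The matrices below are arbitrary examples constructed to commute.

P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # invertible change of basis
A = P @ np.diag([2.0, 2.0, 5.0]) @ np.linalg.inv(P)   # repeated eigenvalue 2
B = P @ np.diag([1.0, 3.0, 4.0]) @ np.linalg.inv(P)
assert np.allclose(A @ B, B @ A)         # they commute by construction

eigvals, eigvecs = np.linalg.eig(A)
common_basis = []
for lam in np.unique(np.round(eigvals, 8)):
    idx = np.where(np.isclose(eigvals, lam))[0]
    V = eigvecs[:, idx]                  # basis of the lam-eigenspace of A
    # B maps this eigenspace into itself, so B @ V = V @ M for a small matrix M.
    M = np.linalg.lstsq(V, B @ V, rcond=None)[0]
    _, W = np.linalg.eig(M)              # diagonalize the restriction of B
    common_basis.append(V @ W)           # eigenvectors of both A and B

S = np.hstack(common_basis)
for X in (A, B):
    D = np.linalg.inv(S) @ X @ S
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)
print("Columns of S simultaneously diagonalize A and B.")
```

The key step is the least-squares solve: because each eigenspace of A is invariant under B, B @ V lies in the column space of V, so the solve recovers the restriction of B exactly (up to rounding), which is precisely the invariance argument above.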
 
