Proving Simultaneous Diagonalizability of n×n Matrices A and B with AB = BA

  • Context: Graduate 
  • Thread starter: Bachelier

Discussion Overview

The discussion revolves around the conditions under which two n×n matrices A and B that commute (AB = BA) can be simultaneously diagonalized. Participants explore the implications of diagonalizability and the structure of eigenvectors in relation to the matrices involved.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that if A and B are diagonalizable and commute, then they can be simultaneously diagonalized by a matrix P.
  • One participant proposes that if v is an eigenvector of A, then Bv must lie in the same eigenspace of A, suggesting that a basis of eigenvectors can be constructed.
  • Another participant challenges the claim that P^{-1}BP commutes with arbitrary diagonal matrices, arguing it only holds for specific cases, such as when A has distinct eigenvalues.
  • A participant questions the existence of a non-identity matrix B that commutes with a diagonal matrix A, seeking examples beyond trivial cases.
  • There is a suggestion that if A is the identity matrix, the proof becomes trivial and does not hold for non-identity cases.
  • One participant introduces a claim that if B commutes with any diagonal matrix A, then B must also be diagonal, providing a pseudo-proof based on distinct eigenvalues.
  • Another participant discusses the implications of restricting A to diagonal matrices with identical entries and how this affects the diagonalizability of B.

Areas of Agreement / Disagreement

Participants express differing views on the conditions necessary for simultaneous diagonalizability, with some supporting the initial claim while others raise counterexamples and challenge the assumptions made in the proofs. The discussion remains unresolved, with multiple competing perspectives presented.

Contextual Notes

Limitations include the dependence on the distinctness of eigenvalues and the specific forms of matrices A and B. The proofs presented are conditional and may not apply universally across all cases of diagonalizable matrices.

Bachelier
A, B n×n matrices are called simultaneously diagonalizable if there exists an invertible P such that both P^{-1}AP and P^{-1}BP are diagonal.

Prove that if A and B are diagonalizable and AB = BA, then A, B are simultaneously diagonalizable.
 
Suppose A and B commute, and let v be an eigenvector of A with eigenvalue \lambda. Then we have A(Bv) = (AB)v = (BA)v = B(Av) = \lambda (Bv).

So Bv is in the \lambda-eigenspace of A.

Choose a candidate basis {b_1, b_2, ..., b_n} consisting of eigenvectors of A, ordered so that eigenvectors sharing an eigenvalue are grouped together (i.e., if \lambda_1 has multiplicity 2, then b_1 and b_2 are both eigenvectors corresponding to \lambda_1).

Now this isn't necessarily a basis of eigenvectors of B. But because B maps each eigenspace of A into itself, we can write B in this basis as a block diagonal matrix (where each block is m×m, with m the multiplicity of the corresponding eigenvalue of A). But B is diagonalizable, so its restriction to each block is diagonalizable; diagonalizing each block gives n independent vectors that are eigenvectors of both A and B, so we win.
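A quick numerical illustration of this block-diagonal step (all matrices here are invented for the example, not taken from the thread): build a commuting, diagonalizable pair where A has a repeated eigenvalue, diagonalize A, and observe that B becomes block diagonal in A's eigenbasis.

```python
import numpy as np

S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])        # an invertible change of basis (det = 2)
Sinv = np.linalg.inv(S)

# A has eigenvalue 1 with multiplicity 2 and eigenvalue 2 with multiplicity 1.
A = S @ np.diag([1.0, 1.0, 2.0]) @ Sinv

# In the same basis, B acts as a symmetric 2x2 block on the 1-eigenspace and a
# scalar on the 2-eigenspace, so AB = BA, but B is not diagonal in that basis.
B0 = np.array([[2.0, 1.0, 0.0],
               [1.0, 2.0, 0.0],
               [0.0, 0.0, 5.0]])
B = S @ B0 @ Sinv
assert np.allclose(A @ B, B @ A)

# Diagonalize A, grouping eigenvectors with equal eigenvalues together.
w, P = np.linalg.eig(A)
P = P[:, np.argsort(w)]
Bp = np.linalg.inv(P) @ B @ P

# Bp is block diagonal: the blocks mixing the two eigenspaces vanish.
print(np.round(Bp, 8))
```

Diagonalizing the leading 2×2 block of Bp then yields a common eigenbasis, as the argument above describes.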
 
Or from the commutativity, writing A = P \Lambda_A P^{-1},

P \Lambda_A P^{-1} B = AB = BA = B P \Lambda_A P^{-1}

Since P is invertible, apply a similarity transformation on both sides (pre-multiply with P^{-1} and post-multiply with P):

\Lambda_A P^{-1}BP = P^{-1}BP \Lambda_A

Since P^{-1}BP commutes with an arbitrary diagonal matrix, it is itself a diagonal matrix. Thus P diagonalizes A and B simultaneously.
 
Thanks.
 
trambolin said:
Since P^{-1}BP commutes with an arbitrary diagonal matrix, it is itself a diagonal matrix. Thus P diagonalizes A and B simultaneously.
Why does it commute with an arbitrary diagonal matrix? It commutes with one specific diagonal matrix, namely \Lambda_A, the diagonal matrix whose diagonal entries are the eigenvalues of A.

I think from this you can only conclude that P^{-1}BP is diagonal if all eigenvalues of A are distinct... A simple counterexample to your proof would be taking A = P equal to the identity!
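The objection can be checked numerically (matrices invented for illustration): with A = P = I, the hypotheses of the argument hold, yet P^{-1}BP need not be diagonal, so commuting with \Lambda_A alone proves nothing.

```python
import numpy as np

A = np.eye(2)
P = np.eye(2)                       # P = A = I "diagonalizes" A
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])          # diagonalizable: distinct eigenvalues 1, 2

assert np.allclose(A @ B, B @ A)    # A and B commute, A is diagonalized by P...
Bp = np.linalg.inv(P) @ B @ P       # ...but P^{-1}BP = B is not diagonal
print(Bp)
```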
 
If A is already a diagonal matrix with arbitrary real entries ("arbitrary" meaning "any", which in turn means "choose any diagonalizable A and diagonalize it"; note also that the claim is only sufficient, not necessary), can you give me a nondiagonal matrix B, other than the identity, that commutes with A? Because if you have one, I really need it.

If P = A and P is not diagonal, then P^{-1}AP = A is not diagonal, so the assumption in the original claim is not satisfied. If P is diagonal, then A is diagonal (which in your case leads to the trivial B = B), so we are back to my question.
 
If I read your proof (post #3) with A equal to the identity (so necessarily P=A), then it says
trambolin said:
Or from the commutativity

AB = BA = B = B

Since I is invertible, by a similarity transformation on both sides (pre-multiply with I^{-1} and post-multiply with I),

B = B

Since B commutes with an arbitrary diagonal matrix, it is itself a diagonal matrix. Thus I diagonalizes A and B simultaneously.
which is of course not correct. In your last post you seem to be fixing this by considering different cases (A,B both diagonal, one of them not, or both not), but I am not quite following. Could you elaborate?
 
Sure. Let's limit the discussion to the commutativity part for now and use the notation \mathbb{D} for the set of all diagonal matrices, and \mathbb{D}_{=} \subset \mathbb{D} for the set of all diagonal matrices with identical entries, such as the identity. What I am trying to say is the following.

Claim: If a matrix B commutes with every diagonal matrix A \in \mathbb{D}, then B is also diagonal.

My pseudo-proof goes like this. Suppose A is a diagonal 2x2 matrix with distinct diagonal entries \lambda_1 \neq \lambda_2. Then

AB = \begin{pmatrix} \lambda_1 B_{11} & \lambda_1 B_{12}\\ \lambda_2 B_{21} & \lambda_2 B_{22}\end{pmatrix} \neq \begin{pmatrix} \lambda_1 B_{11} & \lambda_2 B_{12}\\ \lambda_1 B_{21} & \lambda_2 B_{22}\end{pmatrix} = BA

whenever at least one of B_{12}, B_{21} is nonzero.
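A small numerical check of this 2x2 computation (the values are made up for illustration): for a diagonal A with distinct entries, the commutator AB - BA has off-diagonal entries (\lambda_1 - \lambda_2)B_{12} and (\lambda_2 - \lambda_1)B_{21}, so AB = BA forces B_{12} = B_{21} = 0.

```python
import numpy as np

lam1, lam2 = 1.0, 3.0               # distinct, as the pseudo-proof requires
A = np.diag([lam1, lam2])
B = np.array([[4.0, 5.0],
              [6.0, 7.0]])          # nonzero off-diagonal entries

# (AB - BA)[0,1] = (lam1 - lam2)*B[0,1];  (AB - BA)[1,0] = (lam2 - lam1)*B[1,0]
C = A @ B - B @ A
print(C)                            # off-diagonal entries: -10.0 and 12.0
```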


Now, your examples use elements of \mathbb{D}_{=}. But my claim is about A \in \mathbb{D}, hence a bigger set to test with, because we can start with any diagonalizable matrix A, which might have completely distinct eigenvalues. Plugging in an element of this bigger set puts additional constraints on the off-diagonal entries of B, forcing it to be diagonal, as in the small example above.
 
Now for the second part, where A is restricted to \mathbb{D}_{=}. Then you can argue as follows: diagonalize B with a matrix Q. Showing that Q also diagonalizes A is trivial, since Q^{-1}AQ = Q^{-1}QA = A, because A \in \mathbb{D}_{=} commutes with any matrix.
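This scalar case can be sketched numerically as well (matrices invented for illustration): when A = cI, any Q that diagonalizes B leaves A unchanged, and hence diagonal.

```python
import numpy as np

c = 2.0
A = c * np.eye(2)                   # A in D_= : a scalar multiple of I
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # symmetric, hence diagonalizable

w, Q = np.linalg.eigh(B)            # Q diagonalizes B; Q is orthogonal
Bd = Q.T @ B @ Q                    # Q^{-1} = Q^T for orthogonal Q
Ad = Q.T @ A @ Q

assert np.allclose(Ad, A)           # Q^{-1} A Q = A, still diagonal
assert np.allclose(Bd, np.diag(w))  # B is diagonal in the same basis
```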

A mixture of these arguments handles matrices that have some eigenvalues of multiplicity greater than one alongside distinct ones; a slightly more tedious proof using block diagonal matrix arguments leads to a similar confirming answer.
 
