If a matrix A commutes with all n×n matrices, then A must be scalar.

fishshoe

Homework Statement


Prove: If a matrix A commutes with all matrices B \in M_{n×n}(F), then A must be scalar, i.e., A = diag(λ, ..., λ) for some λ \in F.


Homework Equations


If two n×n matrices A and B commute, then AB = BA.


The Attempt at a Solution


I understand that if A is scalar, it will definitely commute with all n×n matrices. But I don't get the intuition behind why commuting with every matrix forces A to be scalar. The way I tried to solve it was by comparing individual entries of the products, (AB)_{ij} = (BA)_{ij}, (AC)_{ij} = (CA)_{ij}, etc. This implies that
Σ_k a_{ik}b_{kj} = Σ_k b_{ik}a_{kj} = ...
But I'm not sure how that implies that A is scalar.
 
Think about eigenvectors. Pick special matrices B. Given any vector v, extend it to a basis {v,b_2,b_3,...b_n} and define the matrix B by Bv=v, Bb_k=0. Can you show that A commuting with B means that v must also be an eigenvector of A?
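The hint above can be tried out numerically. Here is a minimal sketch (the vector v and the basis vectors are made-up example values, assuming numpy): given v, extend it to a basis and build the matrix B that fixes v and kills the other basis vectors.

```python
import numpy as np

# Illustrative numbers: take v and extend it to a basis {v, b2, b3} of R^3.
v = np.array([1.0, 2.0, 3.0])
b2 = np.array([0.0, 1.0, 0.0])
b3 = np.array([0.0, 0.0, 1.0])
P = np.column_stack([v, b2, b3])          # change-of-basis matrix
assert abs(np.linalg.det(P)) > 1e-9       # {v, b2, b3} really is a basis

# In the basis {v, b2, b3}, B is diag(1, 0, 0); convert back to the
# standard basis.  B fixes v and sends the other basis vectors to 0.
B = P @ np.diag([1.0, 0.0, 0.0]) @ np.linalg.inv(P)
print(np.allclose(B @ v, v))    # True: Bv = v
print(np.allclose(B @ b2, 0))   # True
print(np.allclose(B @ b3, 0))   # True
```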
 
I'm trying to figure out what {v, b_2, b_3, ..., b_n} is a basis for. Is it a basis for all n×n matrices?

If Bv=v, then v is an eigenvector of B corresponding to eigenvalue λ=1, and B is the identity operator on the one-dimensional subspace spanned by v.

I know that det(B-I) = 0, so maybe something with determinants?

AB=BA

-> det(AB) = det(BA)

and det(B - I) = 0, so
det(A)det(B - I) = 0
det(A(B - I)) = 0
det(AB - A) = 0

I'm sorry, that's as far as I've gotten with that. Please let me know if I'm on the right track. Thanks!
 
The vectors are just supposed to be a basis for F^n, the vector space your matrices act on. But, yes, the point is that the eigenvectors of B with eigenvalue 1 form a one-dimensional subspace of F^n spanned by v! Now forget about the determinants. BAv = ABv, put together with Bv = v, tells you B(Av) = (Av). So Av is an eigenvector of B with eigenvalue 1. It must lie in the same one-dimensional subspace as v. So?
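The step above can be checked numerically. A rough sketch, reusing the B built from an example v (a scalar A stands in here for "some A that commutes with B"; the numbers are illustrative):

```python
import numpy as np

# B fixes v and kills the rest of the basis {v, e2, e3}.
v = np.array([1.0, 2.0, 3.0])
P = np.column_stack([v, [0, 1, 0], [0, 0, 1]])
B = P @ np.diag([1.0, 0.0, 0.0]) @ np.linalg.inv(P)

A = 4.0 * np.eye(3)                      # a scalar matrix, which commutes with B
assert np.allclose(A @ B, B @ A)

# B(Av) = A(Bv) = Av, so Av sits in the eigenvalue-1 eigenspace of B,
# i.e. the line spanned by v.
Av = A @ v
print(np.allclose(B @ Av, Av))           # True: Av is fixed by B
print(np.allclose(np.cross(Av, v), 0))   # True: Av is a multiple of v
```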
 
So since Av is in the same one-dimensional subspace as v, we know that Av is a scalar multiple of v, and so A is a scalar n×n matrix!

But does this apply to any n×n matrix B? Or does it have something to do with the specific B that we defined, so that we have to generalize further to prove all cases?
 
fishshoe said:
So since Av is in the same one-dimensional subspace as v, we know that Av is a scalar multiple of v, and so A is a scalar n×n matrix!

But does this apply to any n×n matrix B? Or does it have something to do with the specific B that we defined, so that we have to generalize further to prove all cases?

No. You don't have to show anything for all matrices B. You can pick any specific ones you want. A has to commute with all of them. What you have so far is that Av is a multiple of v for ANY v. So ANY vector v is an eigenvector of A. So A is a diagonal matrix in any basis. You haven't shown it's a scalar matrix yet. To do that you have to show all of the eigenvectors of A have the same eigenvalue. Keep going.
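The point that A only needs to commute with a few hand-picked B's can be seen computationally too. A sketch (assuming numpy; E_ij denotes the elementary matrix with a single 1 in position (i, j)): stacking the linear constraints A E_ij = E_ij A and computing the null space shows that only scalar matrices survive. It uses the column-stacking identities vec(XE) = (Eᵀ ⊗ I) vec(X) and vec(EX) = (I ⊗ E) vec(X).

```python
import numpy as np

n = 3
I = np.eye(n)
rows = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        # Linear constraint A E - E A = 0, acting on vec(A) (column-stacked).
        rows.append(np.kron(E.T, I) - np.kron(I, E))
M = np.vstack(rows)

# The null space of M is exactly the set of A commuting with every E_ij.
_, s, Vt = np.linalg.svd(M)
null_dim = int(np.sum(s < 1e-10))
print(null_dim)  # 1: a single degree of freedom

# The surviving solution is a multiple of the identity.
A_sol = Vt[-1].reshape((n, n), order='F')
print(np.allclose(A_sol, A_sol[0, 0] * I))  # True
```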
 
So if A is a diagonal matrix in any two bases β and γ, then

[A]_β = diag(a_1,..., a_n)
and
[A]_γ = diag(b_1,..., b_n)

And for the eigenvectors in any basis,

[A]_βe_i = a_ie_i

But I'm stuck there. How do I show that

a_1 = a_2 = ... = a_n?
 
fishshoe said:
So if A is a diagonal matrix in any bases β and γ, then

[A]_β = diag(a_1,..., a_n)
and
[A]_γ = diag(b_1,..., b_n)

And for the eigenvectors in any basis,

[A]_βe_i = a_ie_i

But I'm stuck there. How do I show that

a_1 = a_2 = ... = a_n?

Suppose A has two linearly independent eigenvectors with two different eigenvalues. We know every vector is an eigenvector of A. See if you can find a contradiction.
 
So if I have A = diag(a_1,...,a_n), then

A\vec{e_1} = a_1\vec{e_1}
A\vec{e_2} = a_2\vec{e_2}
...
A\vec{e_n} = a_n\vec{e_n}

But a vector of all 1's should also be an eigenvector of A.

A * (1,1,...,1)^T = (a_1, a_2, ..., a_n)^T

And therefore this can only be an eigenvector if all the diagonal elements of A are equal! Is that right?
 
fishshoe said:
So if I have A = diag(a_1,...,a_n), then

A\vec{e_1} = a_1\vec{e_1}
A\vec{e_2} = a_2\vec{e_2}
...
A\vec{e_n} = a_n\vec{e_n}

But a vector of all 1's should also be an eigenvector of A.

A * (1,1,...,1)^T = (a_1, a_2, ..., a_n)^T

And therefore this can only be an eigenvector if all the diagonal elements of A are equal! Is that right?

That's right. Put a little more simply, if u and v are eigenvectors with different eigenvalues then u+v can't be an eigenvector.
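That closing remark is easy to see with a quick numerical sketch (numbers made up): take a diagonal A with two distinct eigenvalues and check that the sum of two eigenvectors from different eigenspaces is not an eigenvector.

```python
import numpy as np

A = np.diag([2.0, 5.0, 5.0])
u = np.array([1.0, 0.0, 0.0])   # eigenvector with eigenvalue 2
w = np.array([0.0, 1.0, 0.0])   # eigenvector with eigenvalue 5
s = u + w
As = A @ s                       # = [2, 5, 0], not proportional to [1, 1, 0]

# s would be an eigenvector only if As were parallel to s.
print(np.allclose(np.cross(As, s), 0))  # False
```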
 