Does the Rank of a Commutator Determine Common Eigenvectors?

  • Context: Graduate
  • Thread starter: Hurin
  • Tags: commutator, rank
SUMMARY

The theorem discussed states that for linear operators A and B on a complex vector space V, if the rank of their commutator [A,B] is less than or equal to 1, then A and B share a common eigenvector. The proof employs mathematical induction on the dimension of V, addressing two cases based on the relationship between the kernels of A and B. The discussion clarifies that the rank condition is crucial for establishing the existence of a common eigenvector, and it resolves doubts regarding the assumptions made during the proof process.

PREREQUISITES
  • Understanding of linear algebra concepts, specifically eigenvectors and eigenvalues.
  • Familiarity with commutators in linear operators, denoted as [A,B].
  • Knowledge of vector space dimensions and kernel properties.
  • Experience with mathematical induction techniques in proofs.
NEXT STEPS
  • Study the properties of commutators in linear algebra, focusing on their implications for eigenvectors.
  • Explore the concept of kernels in linear transformations and their role in eigenvalue problems.
  • Learn about induction proofs in mathematics, particularly in the context of linear algebra.
  • Investigate the implications of rank conditions on linear operators and their eigenvectors.
USEFUL FOR

Mathematicians, students of linear algebra, and researchers interested in operator theory and eigenvalue problems will benefit from this discussion.

Hurin
I found this theorem on Prasolov's Problems and Theorems in Linear Algebra:

Let ##V## be a finite-dimensional ##\mathbb{C}##-vector space and ##A,B \in \mathcal{L}(V)## such that ##\operatorname{rank}([A,B])\leq 1##. Then ##A## and ##B## have a common eigenvector.


He gives this proof:
The proof is carried out by induction on ##n=\dim(V)##. He states that we can assume ##\ker(A)\neq \{0\}##; otherwise we can replace ##A## by ##A - \lambda I##. Doubt one: why can we assume that? For ##n=1## the property clearly holds, because ##V = \operatorname{span}(v)## for some ##v##, so every nonzero vector is an eigenvector of both operators. Suppose the property holds for all spaces of dimension less than ##n##. Now he divides into cases:
1. ##\ker(A)\subseteq \ker(C)##; and
2. ##\ker(A)\not\subseteq \ker(C)##.

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division ##\operatorname{rank}([A,B])= 1## or ##\operatorname{rank}([A,B])=0##?

After this division he continues with case 1: ##B(\ker(A))\subseteq \ker(A)##, since if ##A(x) = 0##, then ##[A,B](x) = 0## and ##AB(x) = BA(x) + [A,B](x) = 0##. Doubt three concerns the following step, in which the restriction ##B'## of ##B## to ##\ker(A)## is considered, an eigenvector ##v\in \ker(A)## of ##B'## is selected, and it is stated that ##v## is also an eigenvector of ##A##. This proves case 1.

Now, if ##\ker(A)\not\subseteq \ker(C)##, then ##A(x) = 0## and ##[A,B](x)\neq 0## for some ##x\in V##. Since ##\operatorname{rank}([A,B]) = 1##, we have ##\operatorname{Im}([A,B]) = \operatorname{span}(v)## for some ##v\in V##, where ##v=[A,B](x)##, so that ##y = AB(x) - BA(x) = AB(x) \in \operatorname{Im}(A)##. It follows that ##B(\operatorname{Im}(A))\subseteq \operatorname{Im}(A)##. Now comes doubt four, which is similar to three: he takes the restrictions ##A',B'## of ##A,B## to ##\operatorname{Im}(A)##, then states that ##\operatorname{rank}([A',B'])\leq 1## and therefore, by the inductive hypothesis, the operators ##A'## and ##B'## have a common eigenvector. This proves case 2, concluding the entire proof.
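As a sanity check of the theorem's statement (my own numerical example, not from Prasolov), here is a pair with a rank-1 commutator and an explicit common eigenvector:

```python
import numpy as np

# A is the nilpotent shift and B a diagonal matrix; their commutator has rank 1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

C = A @ B - B @ A                      # the commutator [A, B]
assert np.linalg.matrix_rank(C) == 1   # rank([A, B]) = 1

# e1 = (1, 0) lies in ker(A) and is an eigenvector of B with eigenvalue 1,
# so it is a common eigenvector, as the theorem predicts.
v = np.array([1.0, 0.0])
assert np.allclose(A @ v, 0 * v)       # A v = 0 · v
assert np.allclose(B @ v, 1 * v)       # B v = 1 · v
```

Here the common eigenvector even sits in ##\ker(A)##, which is exactly the situation the proof's case analysis exploits.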

-Thanks
 
Hurin said:
I found this theorem on Prasolov's Problems and Theorems in Linear Algebra:

Let ##V## be a finite-dimensional ##\mathbb{C}##-vector space and ##A,B \in \mathcal{L}(V)## such that ##\operatorname{rank}([A,B])\leq 1##. Then ##A## and ##B## have a common eigenvector. He gives this proof:
The proof is carried out by induction on ##n=\dim(V)##. He states that we can assume ##\ker(A)\neq \{0\}##; otherwise we can replace ##A## by ##A - \lambda I##. Doubt one: why can we assume that?
We need a common eigenvector, not necessarily with the same eigenvalue. If ##A## is injective and ##\lambda## is an eigenvalue of ##A##, which always exists over ##\mathbb{C}##, then
$$
\operatorname{rank}([A-\lambda I, B])=\operatorname{rank}([A,B]) \leq 1
$$
and the condition still holds, while ##A-\lambda I## is not injective. Now assume we have found an eigenvector ##v## for both, i.e. ##(A-\lambda I)(v)=\nu v## and ##B(v)=\mu v##. Then ##A(v)=(\nu+\lambda)v##, so ##v## is an eigenvector of ##A##, too. Hence we may assume that ##A## is not injective, since a result in this case delivers a result for the injective case, too.

For ##n=1## the property clearly holds, because ##V = \operatorname{span}(v)## for some ##v##. Suppose the property holds for all spaces of dimension less than ##n##. Now he divides into cases:
1. ##\ker(A)\subseteq \ker(C)##; and
2. ##\ker(A)\not\subseteq \ker(C)##.

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division ##\operatorname{rank}([A,B])= 1## or ##\operatorname{rank}([A,B])=0##?
What is ##C##? I assume it stands for ##C=[A,B]##?

Anyway. The answer is no, since we can always distinguish these two cases. There is simply no third possibility, regardless of how ##C## is defined.
After this division he continues with case 1: ##B(\ker(A))\subseteq \ker(A)##, since if ##A(x) = 0##, then ##[A,B](x) = 0## and ##AB(x) = BA(x) + [A,B](x) = 0##. Doubt three concerns the following step, in which the restriction ##B'## of ##B## to ##\ker(A)## is considered, an eigenvector ##v\in \ker(A)## of ##B'## is selected, and it is stated that ##v## is also an eigenvector of ##A##. This proves case 1.
Yes. But where is the problem?

Say ##v\in\ker A \subseteq \ker [A,B]## by the assumption of case 1. Thus ##AB(v)=[A,B](v)+B(A(v)) = 0+0=0##, and ##B## can be restricted to ##B':=\left.B\right|_{\ker A}\in \mathcal{L}(\ker A)##, where ##\ker A \neq \{\,0\,\}## and ##\dim (\ker A)<n##, because ##\dim (\ker A)=n## means ##A=0##, in which case all vectors are eigenvectors of ##A## for the eigenvalue ##0## and any eigenvector of ##B## will do. Hence by induction we have a vector ##v\in \ker A## which is an eigenvector of ##B'## and an eigenvector of ##A##: ##B'(v)=B(v)=\mu v## and ##A(v)=0\cdot v=0##. It remains to check the rank condition for the induction step, but on ##\ker A##
$$
[A,B']\big|_{\ker A}=AB'\big|_{\ker A}=AB\big|_{\ker A}=[A,B]\big|_{\ker A}\,,
$$
so ##\operatorname{rank}[A,B']\big|_{\ker A} \leq \operatorname{rank} [A,B] \leq 1##.
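The case-1 mechanics can be illustrated concretely (my own example, not from the thread): when ##\ker(A)\subseteq\ker([A,B])##, the subspace ##\ker(A)## is ##B##-invariant and contains a common eigenvector.

```python
import numpy as np

# ker(A) = span(e1) here, and ker(A) ⊆ ker([A,B]), so we are in case 1:
# A(Bx) = B(Ax) + [A,B]x = 0 for x in ker(A), i.e. B preserves ker(A).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
B = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
C = A @ B - B @ A
assert np.linalg.matrix_rank(C) == 1

x = np.array([1.0, 0.0, 0.0])          # spans ker(A)
assert np.allclose(A @ x, 0)
assert np.allclose(C @ x, 0)           # ker(A) ⊆ ker([A,B]): case 1
assert np.allclose(A @ (B @ x), 0)     # B x stays in ker(A)
assert np.allclose(B @ x, 2 * x)       # e1 is an eigenvector of B (and of A)
```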
Now, if ker(A)\not\subset ker(C) then A(x) = 0 and [A,B](x)\neq 0 for some x\in V. Since rank([A,B]) = 1 then Im([A,B]) = span(v), for some v\in V, where v=[A,B](x), so that y = AB(x) - BA(x) = AB(x) \in Im(A). It follows that B(Im(A))\subseteq Im(A).
##y=v :=[A,B](x)##, with ##x \in \ker A \setminus \{\,0\,\}##.
Now comes doubt four, which is similar to three: he takes the restrictions ##A',B'## of ##A,B## to ##\operatorname{Im}(A)##, then states that ##\operatorname{rank}([A',B'])\leq 1## and therefore, by the inductive hypothesis, the operators ##A'## and ##B'## have a common eigenvector. This proves case 2, concluding the entire proof.

-Thanks
##[A,B](V)## is at most one-dimensional, and since ##[A,B](x)\neq 0## it equals ##\operatorname{span}([A,B](x))##, with ##[A,B](x)=AB(x)\in\operatorname{Im}A##. Since ##\operatorname{Im}A## is invariant under both ##A## and ##B##, the commutator of the restrictions is the restriction of the commutator,
$$
[A',B'] = \left.[A,B]\right|_{\operatorname{Im}A}\,,
$$
so ##\operatorname{rank}[A',B'] \leq \operatorname{rank}[A,B] \leq 1##, and since ##\dim(\operatorname{Im}A) = n - \dim(\ker A) < n##, the induction hypothesis applies. Since we only restricted the transformations to invariant subspaces, all vectors, including the solution eigenvector, are still vectors in ##V##. Note that we remained in the non-injective case for ##A## throughout the entire proof!
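A concrete instance of case 2 (my own example): here ##\ker(A)## is not contained in ##\ker([A,B])##, and ##\operatorname{Im}(A)## is the invariant subspace that carries the common eigenvector.

```python
import numpy as np

# ker(A) = span(e1, e3), and [A,B] does not kill e3, so case 1 fails;
# instead Im(A) = span(e1) is invariant under B and contains the eigenvector.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
C = A @ B - B @ A
assert np.linalg.matrix_rank(C) == 1

x = np.array([0.0, 0.0, 1.0])          # x in ker(A) but C x != 0: case 2
assert np.allclose(A @ x, 0)
assert not np.allclose(C @ x, 0)

# Im(A) = span(e1); B maps it into itself, and e1 is a common eigenvector.
e1 = np.array([1.0, 0.0, 0.0])
assert np.allclose(B @ e1, e1)         # B e1 = e1, so B(Im A) ⊆ Im A
assert np.allclose(A @ e1, 0)          # A e1 = 0: common eigenvector
```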
 
