Does the Rank of a Commutator Determine Common Eigenvectors?

  • Thread starter: Hurin
  • Tags: commutator, rank

Hurin
I found this theorem in Prasolov's Problems and Theorems in Linear Algebra:

Let ##V## be a ##\mathbb{C}##-vector space and ##A,B \in \mathcal{L}(V)## such that ##\operatorname{rank}([A,B])\leq 1##. Then ##A## and ##B## have a common eigenvector.


He gives this proof:
The proof is by induction on ##n=\dim(V)##. He states that we can assume ##\ker(A)\neq \{0\}##; otherwise we can replace ##A## by ##A - \lambda I##. Doubt one: why can we assume that? For ##n=1## the property clearly holds, because ##V = \operatorname{span}(v)## for some ##v##. Suppose the property holds in dimensions below ##n##. Now he splits into cases:
1. ##\ker(A)\subseteq \ker(C)##; and
2. ##\ker(A)\not\subseteq \ker(C)##.

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division into ##\operatorname{rank}([A,B])= 1## or ##\operatorname{rank}([A,B])=0##?

After this division he continues with case 1: ##B(\ker(A))\subseteq \ker(A)##, since if ##A(x) = 0##, then ##[A,B](x) = 0## and ##AB(x) = BA(x) + [A,B](x) = 0##. Doubt three concerns the following step, in which he considers the restriction ##B'## of ##B## to ##\ker(A)##, selects an eigenvector ##v\in \ker(A)## of ##B'##, and states that ##v## is also an eigenvector of ##A##. This proves case 1.

Now, if ##\ker(A)\not\subseteq \ker(C)##, then ##A(x) = 0## and ##[A,B](x)\neq 0## for some ##x\in V##. Since ##\operatorname{rank}([A,B]) = 1##, we have ##\operatorname{Im}([A,B]) = \operatorname{span}(v)## for some ##v\in V##, where ##v=[A,B](x)##, so that ##y = AB(x) - BA(x) = AB(x) \in \operatorname{Im}(A)##. It follows that ##B(\operatorname{Im}(A))\subseteq \operatorname{Im}(A)##. Now comes doubt four, which is similar to doubt three: he takes the restrictions ##A',B'## of ##A,B## to ##\operatorname{Im}(A)##, states that ##\operatorname{rank}([A',B'])\leq 1##, and therefore by the inductive hypothesis the operators ##A'## and ##B'## have a common eigenvector. This proves case 2, concluding the entire proof.

-Thanks
 
Hurin said:
I found this theorem in Prasolov's Problems and Theorems in Linear Algebra:

Let ##V## be a ##\mathbb{C}##-vector space and ##A,B \in \mathcal{L}(V)## such that ##\operatorname{rank}([A,B])\leq 1##. Then ##A## and ##B## have a common eigenvector. He gives this proof:
The proof is by induction on ##n=\dim(V)##. He states that we can assume ##\ker(A)\neq \{0\}##; otherwise we can replace ##A## by ##A - \lambda I##. Doubt one: why can we assume that?
We need a common eigenvector, not necessarily with the same eigenvalue. If ##A## is injective and ##\lambda## is an eigenvalue of ##A##, which always exists over ##\mathbb{C}##, then
$$
\operatorname{rank}([A-\lambda I, B])=\operatorname{rank}([A,B]) \leq 1
$$
and the condition still holds, while ##A-\lambda I## is not injective. Now assume we found an eigenvector ##v \in \mathbb{C}^n## for both, i.e. ##(A-\lambda I)(v)=\nu v\, , \, B(v)=\mu v##. Then ##A(v)=(\nu+\lambda)v##, so ##v## is an eigenvector of ##A##, too. Hence we may assume that ##A## is not injective, since a result in the non-injective case delivers a result for the injective case as well.

For ##n=1## the property clearly holds, because ##V = \operatorname{span}(v)## for some ##v##. Suppose the property holds in dimensions below ##n##. Now he splits into cases:
1. ##\ker(A)\subseteq \ker(C)##; and
2. ##\ker(A)\not\subseteq \ker(C)##.

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division into ##\operatorname{rank}([A,B])= 1## or ##\operatorname{rank}([A,B])=0##?
What is ##C##? I assume it stands for ##C=[A,B]##?

Anyway. The answer is no, since we can always distinguish these two cases. There is simply no third possibility, regardless of how ##C## is defined.
After this division he continues with case 1: ##B(\ker(A))\subseteq \ker(A)##, since if ##A(x) = 0##, then ##[A,B](x) = 0## and ##AB(x) = BA(x) + [A,B](x) = 0##. Doubt three concerns the following step, in which he considers the restriction ##B'## of ##B## to ##\ker(A)##, selects an eigenvector ##v\in \ker(A)## of ##B'##, and states that ##v## is also an eigenvector of ##A##. This proves case 1.
Yes. But where is the problem?

Say ##v\in\ker A \subseteq \ker [A,B]## by the assumption of case 1. Thus ##AB(v)=[A,B](v)+B(A(v)) = 0+0=0##, so ##B## can be restricted to ##B':=\left.B\right|_{\ker A}\in \mathcal{L}(\ker A)##, where ##\ker A \neq \{\,0\,\}##. Moreover ##\dim (\ker A)<n##, because ##\dim (\ker A)=n## would mean ##A=0##, in which case all vectors are eigenvectors of ##A## to the eigenvalue ##0##, i.e. any eigenvector of ##B## will do. Hence by induction we have a vector ##v\in \ker A## which is an eigenvector of ##B'## and an eigenvector of ##A##: ##B'(v)=B(v)=\mu v## and ##A(v)=0\cdot v=0##. It remains to check the rank condition for the induction step, but
$$
\left.[A,B]\right|_{\ker A}=\left.(AB-BA)\right|_{\ker A}=\left.AB\right|_{\ker A}\,,\qquad \operatorname{rank}\left(\left.[A,B]\right|_{\ker A}\right) \leq \operatorname{rank} [A,B] \leq 1
$$
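Case 1 can be checked numerically as well (again with matrices of my own choosing): when ##\ker A \subseteq \ker [A,B]##, the computation ##AB(x)=BA(x)+[A,B](x)=0## really does keep ##B(\ker A)## inside ##\ker A##, and a vector of ##\ker A## that is an eigenvector of ##B## is a common eigenvector.

```python
import numpy as np

A = np.array([[0.0, 0.0], [0.0, 1.0]])  # ker A = span(e1)
B = np.array([[1.0, 1.0], [0.0, 1.0]])

C = A @ B - B @ A  # the commutator [A, B]
assert np.linalg.matrix_rank(C) == 1

e1 = np.array([1.0, 0.0])
# Case 1 hypothesis: e1 lies in ker A and in ker C
assert np.allclose(A @ e1, 0) and np.allclose(C @ e1, 0)
# Hence AB(e1) = BA(e1) + C(e1) = 0, i.e. B(e1) stays in ker A
assert np.allclose(A @ (B @ e1), 0)
# e1 is a common eigenvector: A e1 = 0 * e1 and B e1 = 1 * e1
assert np.allclose(B @ e1, e1)
```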
Now, if ##\ker(A)\not\subseteq \ker(C)##, then ##A(x) = 0## and ##[A,B](x)\neq 0## for some ##x\in V##. Since ##\operatorname{rank}([A,B]) = 1##, we have ##\operatorname{Im}([A,B]) = \operatorname{span}(v)## for some ##v\in V##, where ##v=[A,B](x)##, so that ##y = AB(x) - BA(x) = AB(x) \in \operatorname{Im}(A)##. It follows that ##B(\operatorname{Im}(A))\subseteq \operatorname{Im}(A)##.
Here ##y = v := [A,B](x)## with ##x \in \ker A \setminus \{\,0\,\}##.
Now comes doubt four, which is similar to doubt three: he takes the restrictions ##A',B'## of ##A,B## to ##\operatorname{Im}(A)##, states that ##\operatorname{rank}([A',B'])\leq 1##, and therefore by the inductive hypothesis the operators ##A'## and ##B'## have a common eigenvector. This proves case 2, concluding the entire proof.

-Thanks
##[A,B](V)## is at most one-dimensional, and ##[A,B](x)\neq 0##. Hence
$$
[A,B](V)=\mathbb{C}\cdot [A,B](x)=\mathbb{C}\cdot A(B(x)) \subseteq \operatorname{Im}(A)
$$
and since ##\operatorname{Im}(A)## is invariant under both ##A## and ##B##, the restricted commutator satisfies ##[A',B']=\left.[A,B]\right|_{\operatorname{Im}(A)}##, so ##\operatorname{rank}[A',B'] \leq 1##. Also ##\dim \operatorname{Im}(A) < n##, because ##\ker A \neq \{\,0\,\}##, so the induction hypothesis applies. Since we only restricted the transformations to invariant subspaces, all vectors, including the resulting common eigenvector, are still vectors in ##V##. Note that we remained in the non-injective case for ##A## throughout the entire proof!
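To tie the theorem together, here is a self-contained numerical check (example matrices are my own, not from Prasolov) that a rank-one commutator does come with a common eigenvector:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # Jordan block
B = np.array([[1.0, 0.0], [0.0, 2.0]])  # diagonal

C = A @ B - B @ A
assert np.linalg.matrix_rank(C) <= 1  # hypothesis of the theorem holds

def is_parallel(u, w, tol=1e-9):
    """Two plane vectors are parallel iff their 2x2 determinant vanishes."""
    return abs(u[0] * w[1] - u[1] * w[0]) < tol

# e1 is a common eigenvector here: A e1 = e1 and B e1 = e1
v = np.array([1.0, 0.0])
assert is_parallel(A @ v, v) and is_parallel(B @ v, v)
```

Note the eigenvalues differ (##1## for ##A##, ##1## for ##B## on this particular vector, but in general they need not agree) — the theorem only promises a shared eigen*vector*.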
 