Does the Rank of a Commutator Determine Common Eigenvectors?

In summary: if ##A## and ##B## are linear operators on a finite-dimensional complex vector space and ##\operatorname{rank}([A,B])\leq 1##, then ##A## and ##B## have a common eigenvector. The thread below works through the inductive proof given in Prasolov's *Problems and Theorems in Linear Algebra* and resolves four points of doubt about it.
  • #1
Hurin
I found this theorem on Prasolov's Problems and Theorems in Linear Algebra:

Let [itex]V[/itex] be a finite-dimensional [itex]\mathbb{C}[/itex]-vector space and [itex]A,B \in \mathcal{L}(V)[/itex] such that [itex]rank([A,B])\leq 1[/itex]. Then [itex]A[/itex] and [itex]B[/itex] have a common eigenvector.
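As a quick numerical sanity check of the statement (my own 2×2 example, not from Prasolov; a sketch using NumPy):

```python
import numpy as np

# Two upper-triangular matrices: their commutator is strictly upper
# triangular, hence has rank <= 1 in the 2x2 case.
A = np.array([[1., 2.],
              [0., 3.]])
B = np.array([[4., 0.],
              [0., 5.]])
C = A @ B - B @ A
assert np.linalg.matrix_rank(C) <= 1

# e1 is a common eigenvector: A e1 = 1*e1, B e1 = 4*e1
e1 = np.array([1., 0.])
assert np.allclose(A @ e1, 1.0 * e1)
assert np.allclose(B @ e1, 4.0 * e1)
```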


He gives this proof:
The proof is carried out by induction on [itex]n=dim(V)[/itex]. He states that we can assume [itex]ker(A)\neq \{0\}[/itex], since otherwise we can replace [itex]A[/itex] by [itex]A - \lambda I[/itex]. Doubt one: why can we assume that? For [itex]n=1[/itex] the property clearly holds, because [itex]V = span(v)[/itex] for some [itex]v\neq 0[/itex]. Suppose it holds for all dimensions smaller than [itex]n[/itex]. Now he divides into cases:
1. [itex]ker(A)\subseteq ker(C)[/itex]; and
2. [itex]ker(A)\not\subset ker(C)[/itex].

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division into [itex]rank([A,B]) = 1[/itex] and [itex]rank([A,B]) = 0[/itex]?

After this division he continues with case 1: [itex]B(ker(A))\subseteq ker(A)[/itex], since if [itex]A(x) = 0[/itex], then [itex][A,B](x) = 0[/itex] and [itex]AB(x) = BA(x) + [A,B](x) = 0[/itex]. Now, doubt three concerns the following step, in which he considers the restriction [itex]B'[/itex] of [itex]B[/itex] to [itex]ker(A)[/itex], selects an eigenvector [itex]v\in ker(A)[/itex] of [itex]B'[/itex], and states that [itex]v[/itex] is also an eigenvector of [itex]A[/itex]. This proves case 1.

Now, if [itex]ker(A)\not\subset ker(C)[/itex], then [itex]A(x) = 0[/itex] and [itex][A,B](x)\neq 0[/itex] for some [itex]x\in V[/itex]. Since [itex]rank([A,B]) = 1[/itex], we have [itex]Im([A,B]) = span(v)[/itex] for some [itex]v\in V[/itex], where [itex]v=[A,B](x)[/itex], so that [itex]y = AB(x) - BA(x) = AB(x) \in Im(A)[/itex]. It follows that [itex]B(Im(A))\subseteq Im(A)[/itex]. Now comes doubt four, which is similar to three: he takes the restrictions [itex]A',B'[/itex] of [itex]A,B[/itex] to [itex]Im(A)[/itex] and then states that [itex]rank([A',B'])\leq 1[/itex], so that by the inductive hypothesis the operators [itex]A'[/itex] and [itex]B'[/itex] have a common eigenvector. This proves case 2, concluding the entire proof.

-Thanks
 
  • #2
Hurin said:
I found this theorem on Prasolov's Problems and Theorems in Linear Algebra:

Let V be a [itex]\mathbb{C}[/itex]-vector space and [itex]A,B \in \mathcal{L}(V) [/itex]such that [itex] rank([A,B])\leq 1[/itex]. Then [itex]A[/itex] and [itex]B[/itex] has a common eigenvector. He gives this proof:
The proof will be carried out by induction on [itex]n=dim(V)[/itex]. He states that we can assume that [itex]ker(A)\neq \{0\}[/itex], otherwise we can replace [itex] A[/itex] by[itex] A - \lambda I[/itex]; doubt one: why can we assume that?
We need a common eigenvector, not necessarily with the same eigenvalue. If ##A## is injective and ##\lambda## an eigenvalue of ##A##, which always exists over ##\mathbb{C}##, then
$$
\operatorname{rank}([A-\lambda I, B])=\operatorname{rank}([A,B]) \leq 1
$$
and the condition still holds, and ##A-\lambda I## is not injective. Now let's assume we found an eigenvector ##v \in \mathbb{C}^n## for both, i.e. ##(A-\lambda I)(v)=\nu v\, , \, B(v)=\mu v##. Then ##A(v)=(\nu+\lambda)v## and ##v## is an eigenvector for ##A##, too. Hence we may assume that ##A## is not injective, since a result in this case delivers a result for the injective case, too.
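The invariance used here, ##[A-\lambda I,B]=[A,B]##, is easy to verify numerically as well, since ##\lambda I## commutes with everything (a sketch with random matrices, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
lam = 2.5  # any scalar shift

C = A @ B - B @ A
A_shift = A - lam * np.eye(n)
C_shift = A_shift @ B - B @ A_shift

# lam*I commutes with B, so the commutator (hence its rank) is unchanged
assert np.allclose(C, C_shift)
```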

For [itex]n=1[/itex] the property clearly holds, because [itex]V = span(v)[/itex] for some [itex]v[/itex]. Suppose it holds for all dimensions smaller than [itex]n[/itex]. Now he divides into cases:
1. [itex]ker(A)\subseteq ker(C)[/itex]; and
2. [itex]ker(A)\not\subset ker(C)[/itex].

Doubt two: do cases 1 and 2 come from (or are they equivalent to) the division into [itex]rank([A,B]) = 1[/itex] and [itex]rank([A,B]) = 0[/itex]?
What is ##C##? I assume it stands for ##C=[A,B]##?

Anyway. The answer is no, since we can always distinguish these two cases. There is simply no third possibility, regardless of how ##C## is defined.
After this division he continues with case 1: [itex]B(ker(A))\subseteq ker(A)[/itex], since if [itex]A(x) = 0[/itex], then [itex][A,B](x) = 0[/itex] and [itex]AB(x) = BA(x) + [A,B](x) = 0[/itex]. Now, doubt three concerns the following step, in which he considers the restriction [itex]B'[/itex] of [itex]B[/itex] to [itex]ker(A)[/itex], selects an eigenvector [itex]v\in ker(A)[/itex] of [itex]B'[/itex], and states that [itex]v[/itex] is also an eigenvector of [itex]A[/itex]. This proves case 1.
Yes. But where is the problem?

Say ##v\in\ker A \subseteq \ker [A,B]## by the assumption of case 1. Thus ##AB(v)=[A,B](v)+B(A(v)) = 0+0=0##, i.e. ##B(v)\in \ker A##, so ##B## can be restricted to ##B':=\left.B\right|_{\ker A}\in \mathcal{L}(\ker A)## with ##\ker A \neq \{\,0\,\}##. Moreover ##\dim (\ker A)<n##, because ##\dim (\ker A)=n## would mean ##A=0##, in which case all vectors are eigenvectors of ##A## to the eigenvalue ##0## and any eigenvector of ##B## will do. Hence by induction we have a vector ##v\in \ker A## which is an eigenvector of ##B'##, hence of ##B##, and an eigenvector of ##A##: ##B'(v)=B(v)=\mu v## and ##A(v)=0\cdot v=0##. It remains to check the rank condition for the induction step, but
$$
\left.[A,B']\right|_{\ker A}=\left.AB'\right|_{\ker A}=\left.AB\right|_{\ker A}= \left.[A,B]\right|_{\ker A}
$$
so that ##\operatorname{rank} \left.[A,B']\right|_{\ker A} = \operatorname{rank} \left.[A,B]\right|_{\ker A} \leq \operatorname{rank} [A,B] \leq 1##.
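Case 1 can be illustrated with a small example (my own, chosen for illustration): take ##A## nilpotent and ##B## diagonal, so that ##\ker A\subseteq\ker[A,B]##:

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])        # ker A = span(e1)
B = np.diag([1., 2.])
C = A @ B - B @ A               # C = [[0, 1], [0, 0]]
assert np.linalg.matrix_rank(C) <= 1

e1 = np.array([1., 0.])
assert np.allclose(C @ e1, 0)           # ker A ⊆ ker C: we are in case 1
assert np.allclose(B @ e1, 1.0 * e1)    # B restricts to ker A; e1 eigenvector of B'
assert np.allclose(A @ e1, 0.0 * e1)    # e1 also eigenvector of A (eigenvalue 0)
```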
Now, if [itex]ker(A)\not\subset ker(C)[/itex] then [itex]A(x) = 0[/itex] and [itex][A,B](x)\neq 0[/itex] for some [itex] x\in V[/itex]. Since [itex] rank([A,B]) = 1 [/itex] then [itex] Im([A,B]) = span(v)[/itex], for some [itex]v\in V[/itex], where [itex]v=[A,B](x)[/itex], so that [itex]y = AB(x) - BA(x) = AB(x) \in Im(A)[/itex]. It follows that [itex]B(Im(A))\subseteq Im(A)[/itex].
Here ##y=v :=[A,B](x)##, with ##x \in \ker A \setminus \{\,0\,\}##.
Now comes doubt four, which is similar to three: he takes the restrictions [itex]A',B'[/itex] of [itex]A,B[/itex] to [itex]Im(A)[/itex] and then states that [itex]rank([A',B'])\leq 1[/itex], so that by the inductive hypothesis the operators [itex]A'[/itex] and [itex]B'[/itex] have a common eigenvector. This proves case 2, concluding the entire proof.

-Thanks
##[A,B](V)## is at most one-dimensional, and ##[A,B](x)\neq 0## for our ##x\in\ker A##. Hence
$$
\operatorname{Im}[A,B]=\operatorname{span}(v)\,,\quad v=[A,B](x)=AB(x)\in \operatorname{Im}(A)\,,
$$
so ##\operatorname{Im}[A,B]\subseteq \operatorname{Im}(A)##, and therefore ##BA(w)=AB(w)-[A,B](w)\in \operatorname{Im}(A)## for every ##w\in V##, i.e. ##\operatorname{Im}(A)## is invariant under both ##A## and ##B##. The restrictions satisfy ##[A',B']=\left.[A,B]\right|_{\operatorname{Im}A}##, so ##\operatorname{rank}[A',B'] \leq 1##, and ##\dim \operatorname{Im}(A)<n## because ##\ker A\neq\{0\}##; thus the induction hypothesis applies. Since we only restricted the transformations to invariant subspaces, the resulting common eigenvector of ##A'## and ##B'## is still a vector in ##V## and is a common eigenvector of ##A## and ##B##. Note that we remained in the non injective case for ##A## the entire proof!
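Case 2 needs ##\dim V \geq 3## to occur non-trivially; here is a sketch of an example (my own choice of shift matrices ##A=e_1e_2^T##, ##B=e_2e_3^T##, assumed for illustration):

```python
import numpy as np

A = np.zeros((3, 3)); A[0, 1] = 1.0   # A = e1 e2^T, ker A = span(e1, e3)
B = np.zeros((3, 3)); B[1, 2] = 1.0   # B = e2 e3^T
C = A @ B - B @ A                     # C = e1 e3^T
assert np.linalg.matrix_rank(C) == 1

x = np.array([0., 0., 1.])            # x = e3 in ker A
assert np.allclose(A @ x, 0)
assert not np.allclose(C @ x, 0)      # ker A not contained in ker C: case 2

# Im A = span(e1) is invariant under B, and e1 is a common eigenvector
e1 = np.array([1., 0., 0.])
assert np.allclose(B @ e1, 0)         # B(Im A) ⊆ Im A
assert np.allclose(A @ e1, 0) and np.allclose(B @ e1, 0)
```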
 

