Proving simultaneous eigenvectors for commuting operators

In summary, if two linear operators commute and each has a complete set of eigenvectors, then they admit a common set of eigenvectors. This is a useful result in quantum mechanics, because it lets us label states by the eigenvalues of compatible observables simultaneously. The argument relies on linearity, so it does not automatically extend to anti-linear operators, and in the degenerate case the common eigenvectors are particular linear combinations within each eigenspace rather than arbitrary eigenvectors of one operator.
  • #1
chrisd

Homework Statement


In my quantum class we learned that if two operators commute, we can always find a set of simultaneous eigenvectors for both operators. I'm having trouble proving this for the case of degenerate eigenvalues.

Homework Equations


Commutator: [itex][A,B]=AB-BA [/itex]
Eigenvalue equation: [itex]A \mid v \rangle = a \mid v \rangle[/itex]

The Attempt at a Solution


Start off by assuming operators A and B commute so AB=BA.
I think I have the proof for non-degenerate eigenvalues correct:
[itex]A \mid v \rangle = a \mid v \rangle[/itex]
[itex]BA \mid v \rangle = Ba \mid v \rangle[/itex]
[itex]A(B \mid v \rangle) = a(B \mid v \rangle)[/itex]

So [itex] B \mid v \rangle[/itex] also satisfies the eigenvalue equation of A with eigenvalue a (it is either an eigenvector of A or the zero vector).
If a is non-degenerate, [itex] B \mid v \rangle[/itex] must be the same eigenvector as [itex] \mid v \rangle[/itex], only multiplied by a scalar:

[itex] B \mid v\rangle=b\mid v \rangle[/itex] which is just the eigenvalue equation for [itex]B[/itex].
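As a quick numerical sanity check of the non-degenerate argument (my own sketch, using numpy with made-up commuting matrices rather than anything from a specific problem):

[code=python]
import numpy as np

rng = np.random.default_rng(0)

# Two commuting real symmetric matrices with non-degenerate spectra:
# both are diagonal in the same (random) orthonormal basis U, so AB = BA.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
A = U @ np.diag([1.0, 2.0, 4.0]) @ U.T    # distinct eigenvalues
B = U @ np.diag([7.0, 3.0, 5.0]) @ U.T    # distinct eigenvalues
assert np.allclose(A @ B, B @ A)          # [A, B] = 0

# Every eigenvector v of A should satisfy B v = b v for some scalar b.
_, eigvecs_A = np.linalg.eigh(A)
for v in eigvecs_A.T:
    Bv = B @ v
    b = v @ Bv                            # Rayleigh quotient (v is normalised)
    assert np.allclose(Bv, b * v)         # B v is parallel to v
print("every eigenvector of A is also an eigenvector of B")
[/code]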

For the degenerate case I'm stuck. I can prove that if [itex] \mid v_1 \rangle,\mid v_2 \rangle,...,\mid v_n \rangle[/itex] are eigenvectors of A associated with eigenvalue a, any linear combination of these eigenvectors is also an eigenvector of A with eigenvalue a.
[itex]A (c_1\mid v_1 \rangle + c_2\mid v_2 \rangle) [/itex]
[itex]=c_1A\mid v_1 \rangle + c_2A\mid v_2 \rangle[/itex]
[itex]=c_1a\mid v_1 \rangle + c_2a\mid v_2 \rangle [/itex]
[itex]=a(c_1\mid v_1 \rangle + c_2\mid v_2 \rangle) [/itex]

and so, since [itex] B \mid v_i \rangle[/itex] is again an eigenvector of A with eigenvalue a (or the zero vector), it lies in the eigenspace spanned by these vectors and can be written as a linear combination of them...

[itex] B \mid v_i \rangle= c_1\mid v_1 \rangle + c_2\mid v_2 \rangle + ... + c_n\mid v_n \rangle[/itex]

so I know [itex] B \mid v_i \rangle[/itex] exists within the eigenspace of a, but I'm not sure how to use this to prove that [itex]\mid v_i \rangle[/itex] is an eigenvector of B.
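To make the difficulty concrete, here is a small numerical sketch of my own (the matrices are made-up examples, not from any particular problem): B maps an eigenvector of A back into the degenerate eigenspace, yet that eigenvector is not itself an eigenvector of B.

[code=python]
import numpy as np

# A has a two-fold degenerate eigenvalue a = 2 (eigenspace spanned by e1, e2).
A = np.diag([2.0, 2.0, 7.0])
# B commutes with A: it acts within the degenerate eigenspace of A
# (the upper-left 2x2 block) and separately on the rest.
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
assert np.allclose(A @ B, B @ A)

v1 = np.array([1.0, 0.0, 0.0])          # an eigenvector of A with eigenvalue 2
Bv1 = B @ v1                            # equals (1, 1, 0)
assert np.allclose(A @ Bv1, 2.0 * Bv1)  # B v1 still has A-eigenvalue 2 ...
# ... but B v1 is not a multiple of v1, so v1 is NOT an eigenvector of B.

# Certain linear combinations, however, are eigenvectors of both:
w = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
assert np.allclose(A @ w, 2.0 * w) and np.allclose(B @ w, 2.0 * w)
[/code]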

Any hints would be appreciated.
 
  • #2
Let A and B each have a complete set of eigenvectors. Expand any eigenvector of A in terms of the eigenvectors of B, then use the commutation relation when acting on this expansion; you will find that each term in the expansion is itself an eigenvector of both A and B. The eigenvalues can be degenerate without any loss of generality in this argument.
Alternatively, read page 24 of Quantum Mechanics by Ballentine.
 
  • #3
chrisd said:
so I know [itex] B \mid v_i \rangle[/itex] exists within the eigenspace of a, but I'm not sure how to use this to prove that [itex]\mid v_i \rangle[/itex] is an eigenvector of B.

Any hints would be appreciated.
You can't prove that because it's generally not true for an arbitrary eigenvector of A with eigenvalue a. For example, suppose in some basis, A is represented by the matrix ##\begin{pmatrix} a & 0 \\ 0 & a \end{pmatrix}## and B is represented by the matrix ##\begin{pmatrix} 1 & 1 \\ -1 & 1\end{pmatrix}##. Clearly, A and B commute. Now any linear combination of basis states is still an eigenvector of A, but that's not true for B. There are only certain linear combinations that are eigenvectors of B. Those are the simultaneous eigenstates of A and B.

What you have so far is that B maps an eigenvector of A back into the subspace spanned by eigenvectors with the same eigenvalue, so if you wrote down the matrix representing B with respect to the basis consisting of eigenvectors of A, what will the matrix look like?
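A quick numerical look at the 2x2 example above (a sketch in numpy, with the arbitrary choice a = 2):

[code=python]
import numpy as np

a = 2.0                                   # arbitrary degenerate eigenvalue of A
A = a * np.eye(2)
B = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
assert np.allclose(A @ B, B @ A)          # A is a multiple of the identity, so they commute

# Any vector is an eigenvector of A, but e1 is not an eigenvector of B:
e1 = np.array([1.0, 0.0])
print(B @ e1)                             # [ 1. -1.] -- not parallel to e1

# The eigenvectors of B are proportional to (1, i) and (1, -i), eigenvalues 1 +/- i;
# these particular combinations are the simultaneous eigenvectors of A and B.
eigvals_B, eigvecs_B = np.linalg.eig(B)
for lam, w in zip(eigvals_B, eigvecs_B.T):
    assert np.allclose(B @ w, lam * w)
    assert np.allclose(A @ w, a * w)      # each is still an eigenvector of A
print(eigvals_B)                          # approximately [1.+1.j, 1.-1.j]
[/code]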
 
  • #4
I took a look at pg. 24 of Ballentine, ardie's suggestion. Hope you don't mind, I've summarized:

Let ##V## be a vector space and ##A,B## linear operators on ##V##. Assume that ##A, B## each have a complete set of eigenvectors -- that is, the eigenvectors of ##A## form a basis of ##V## (and similar for ##B##).

Claim: ##A,B## have a common set of eigenvectors.

Note: In quantum mechanics this assumption is reasonable because ##V## is the space of physical states and the linear operators ##A,B## are observables -- assuming ##A,B## each have a complete set of eigenstates says that any physical state can be written as a linear combination of eigenstates of ##A## (or of eigenstates of ##B##).

Also note: the claim does not hold if ##A,B## are not linear. For example, if the potential is real, ##V(x)=V^*(x)##, then time reversal ##T## (which is anti-linear) commutes with the Hamiltonian ##H## (linear), but this alone is not enough to guarantee simultaneous eigenstates.

Proof:
Let ##v## be such that
\begin{equation}(A-aI)v=0.\end{equation}
Since ##B## has a complete set, write
\begin{equation}v = w_1 + \cdots + w_n \end{equation}
where each ##w_i## is an eigenvector of ##B## with eigenvalue ##b_i##. Since a multiple of an eigenvector is still an eigenvector, the expansion coefficients have been absorbed into the ##w_i## for convenience. Also, WLOG we assume that the eigenvalues ##b_i## are distinct (if they are not, clump together any same-eigenvalue ##w##'s into one big ##w##). Note: in general ##n \le \dim(V)##; the expansion need not involve every eigenvector of ##B##.

Combining our two equations gives
\begin{equation}(A-aI)w_1 + \cdots + (A-aI)w_n = 0.\end{equation}
The question now is whether the (nonzero) terms of this sum are linearly independent, which would force each term to vanish separately.

Applying ##B## to individual terms in the sum gives
\begin{eqnarray}
B(A-aI)w_i&=&(A-aI)Bw_i\mbox{ using that BA=AB and B linear}\\
&=&(A-aI)b_iw_i\\
&=&b_i(A-aI)w_i\mbox{ using A linear}
\end{eqnarray}
This shows that each nonzero ##u_i=(A-aI)w_i## is an eigenvector of ##B##, and their eigenvalues ##b_i## are distinct, so the nonzero ##u_i## are linearly independent. Linearly independent vectors cannot sum to zero, so in fact every term must vanish:
\begin{equation}(A-aI)w_i=0,\end{equation}
showing that each ##w_i## is also an eigenvector of ##A## with eigenvalue ##a##.

Since in general ##n < \dim(V)##, this does not yet give all the eigenvectors of ##B##; repeating the argument for every eigenvector ##v## of ##A## builds up the full common set.
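Here is a small numerical illustration of this argument (my own sketch; the commuting matrices below are made-up examples chosen so that ##A## has a degenerate eigenvalue):

[code=python]
import numpy as np

rng = np.random.default_rng(1)

# Two commuting real symmetric matrices, diagonal in the same random basis U.
# A has the degenerate eigenvalue a = 2 (a two-dimensional eigenspace).
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))
A = U @ np.diag([2.0, 2.0, 5.0, 5.0]) @ U.T
B = U @ np.diag([1.0, 3.0, 1.0, 4.0]) @ U.T
assert np.allclose(A @ B, B @ A)

# A generic eigenvector v of A with eigenvalue a = 2 (not an eigenvector of B).
a = 2.0
v = 0.6 * U[:, 0] + 0.8 * U[:, 1]
assert np.allclose(A @ v, a * v)

# Decompose v into its components w_i in the eigenspaces of B,
# i.e. v = w_1 + ... + w_n as in the proof above.
b_vals, b_vecs = np.linalg.eigh(B)
for b in np.unique(np.round(b_vals, 8)):
    P = b_vecs[:, np.isclose(b_vals, b)]   # orthonormal basis of the b-eigenspace of B
    w = P @ (P.T @ v)                      # component of v in that eigenspace
    if np.allclose(w, 0):
        continue
    assert np.allclose(A @ w, a * w)       # each nonzero w_i is an eigenvector of A
print("each B-eigenspace component of v is an eigenvector of A with eigenvalue", a)
[/code]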
 
  • #5
Thank you for your help. First, let's define the eigenspace of A associated with eigenvalue a as E_a, spanned by the set of eigenvectors {v_1, v_2, ..., v_n}. Since Bv_i is a linear combination of {v_1, v_2, ..., v_n} for every i, the operator B maps E_a into itself, i.e. E_a is an invariant subspace of B.

Following vela's hint, write down the matrix of B in the basis consisting of eigenvectors of A. Because B maps each eigenspace of A back into itself, this matrix is block diagonal, with one block for each distinct eigenvalue of A. The block belonging to E_a is the restriction of B to E_a, and since B has a complete set of eigenvectors (for an observable it is Hermitian), this restriction can itself be diagonalized. Diagonalizing it yields a new basis {w_1, w_2, ..., w_n} of E_a consisting of eigenvectors of B, and each w_j still lies in E_a, so it remains an eigenvector of A with eigenvalue a.

So in the degenerate case the original eigenvectors v_i need not themselves be eigenvectors of B, but suitable linear combinations of them are. Carrying this out in every eigenspace of A produces a complete set of simultaneous eigenvectors of both A and B.
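This construction is easy to run numerically. The sketch below (my own illustration, reusing the made-up matrices from the earlier sketch and assuming Hermitian inputs) diagonalizes B within each eigenspace of A and checks that the resulting basis vectors are eigenvectors of both operators:

[code=python]
import numpy as np

def simultaneous_eigenbasis(A, B, tol=1e-9):
    """Common eigenbasis of two commuting Hermitian matrices A and B:
    diagonalize the restriction of B within each eigenspace of A."""
    assert np.allclose(A @ B, B @ A), "A and B must commute"
    a_vals, a_vecs = np.linalg.eigh(A)
    columns = []
    for a in np.unique(np.round(a_vals, 6)):            # distinct eigenvalues of A
        P = a_vecs[:, np.isclose(a_vals, a, atol=tol)]  # orthonormal basis of E_a
        B_on_Ea = P.conj().T @ B @ P                    # matrix of B restricted to E_a
        _, c = np.linalg.eigh(B_on_Ea)                  # diagonalize that block
        columns.append(P @ c)        # eigenvectors of B that still lie in E_a
    return np.hstack(columns)        # columns are simultaneous eigenvectors

# Example: A has a doubly degenerate eigenvalue and B commutes with A.
A = np.diag([2.0, 2.0, 7.0])
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
S = simultaneous_eigenbasis(A, B)
for w in S.T:
    assert np.allclose(A @ w, (w.conj() @ A @ w) * w)   # eigenvector of A
    assert np.allclose(B @ w, (w.conj() @ B @ w) * w)   # eigenvector of B
print("found", S.shape[1], "simultaneous eigenvectors")
[/code]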
 

What does it mean for operators to commute?

Two operators A and B commute if applying them in either order gives the same result, i.e. AB = BA, or equivalently if their commutator vanishes: [A,B] = AB - BA = 0.
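A minimal numerical check of the definition (my own sketch, using Pauli matrices as an example of a non-commuting pair):

[code=python]
import numpy as np

def commutator(A, B):
    return A @ B - B @ A

# Two diagonal matrices commute ...
D1, D2 = np.diag([1.0, 2.0]), np.diag([3.0, 4.0])
print(np.allclose(commutator(D1, D2), 0))   # True

# ... but the Pauli matrices sigma_x and sigma_z do not.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.allclose(commutator(sx, sz), 0))   # False
[/code]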

Why is it important to prove simultaneous eigenvectors for commuting operators?

Commuting operators correspond to compatible observables: they can be diagonalized in a common basis, so states can be labelled by the eigenvalues of both operators at once. This simplifies calculations, sharpens predictions about a physical system, and clarifies the structure of and relationships between different operators.

How do you prove that two operators have simultaneous eigenvectors?

To prove that commuting operators have simultaneous eigenvectors, show that each eigenspace of one operator is mapped into itself by the other, then diagonalize the second operator within each such eigenspace, as in the proofs above; the resulting basis vectors are eigenvectors of both operators.

What are some examples of commuting operators?

Common examples include the three components of momentum with one another, ##L^2## together with any single component such as ##L_z##, and the Hamiltonian of a particle in a central potential with both ##L^2## and ##L_z##. (Position and momentum along the same direction do not commute: ##[x,p]=i\hbar##.)
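As a concrete check (my own sketch, using the spin-1 angular momentum matrices with ##\hbar = 1##):

[code=python]
import numpy as np

# Spin-1 (l = 1) angular momentum matrices in the L_z eigenbasis, hbar = 1.
s = 1 / np.sqrt(2)
Lx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Ly = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz               # L^2 = l(l+1) I = 2 I for l = 1

print(np.allclose(L2 @ Lz - Lz @ L2, 0))       # True:  [L^2, L_z] = 0
print(np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)) # True:  [L_x, L_y] = i L_z (nonzero)
[/code]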

Can operators that do not commute have simultaneous eigenvectors?

If two operators do not commute, they cannot have a complete set of simultaneous eigenvectors, i.e. there is no common eigenbasis. They may still share individual eigenvectors in special cases; for example, ##L_x## and ##L_y## do not commute, yet the ##l=0## state is annihilated by both and is therefore a common eigenvector with eigenvalue zero.
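A small numerical illustration of the generic case (my own sketch with Pauli matrices, which do not commute and share no eigenvector at all):

[code=python]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
assert not np.allclose(sx @ sz, sz @ sx)       # sigma_x and sigma_z do not commute

# Check that no eigenvector of sigma_x is an eigenvector of sigma_z:
_, vx = np.linalg.eigh(sx)                     # columns (1, 1)/sqrt(2) and (1, -1)/sqrt(2)
for v in vx.T:
    szv = sz @ v
    # v is an eigenvector of sigma_z only if sigma_z v is parallel to v,
    # i.e. |<v, sigma_z v>| equals ||sigma_z v|| for normalised v.
    print(np.isclose(abs(v.conj() @ szv), np.linalg.norm(szv)))   # False, False
[/code]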
