Commutator expectation value in an Eigenstate

SUMMARY

The discussion centers on the expectation value of the commutator of two Hermitian operators, $$\hat{A}$$ and $$\hat{B}$$, in an eigenstate $$|A\rangle$$ of $$\hat{A}$$ with eigenvalue $$a$$. The participants explore the implications of non-commuting operators, concluding that if $$[A,B]=cI$$, where $$c$$ is a complex number, then the expectation value can lead to contradictions such as $$1=0$$. They emphasize that the domains of the operators are crucial, as the eigenfunctions of one operator may not reside within the domain of the commutator, affecting the validity of the expectation value calculations.

PREREQUISITES
  • Understanding of Hermitian operators in quantum mechanics
  • Familiarity with the concept of eigenstates and eigenvalues
  • Knowledge of commutators and their significance in quantum mechanics
  • Basic principles of Hilbert spaces and bounded operators
NEXT STEPS
  • Study the implications of non-commuting operators in quantum mechanics
  • Learn about the domains of operators in Hilbert spaces
  • Explore the significance of the C*-algebra in operator theory
  • Investigate the theorem regarding bounded linear operators and their commutators
USEFUL FOR

Quantum physicists, mathematicians specializing in functional analysis, and students studying operator theory in quantum mechanics will benefit from this discussion.

Matterwave
Hi, suppose that $$\hat{A}$$ and $$\hat{B}$$ are Hermitian operators corresponding to observables, and that they do not commute. Suppose further that $$\left|A\right>$$ is an eigenstate of $$\hat{A}$$ with eigenvalue ##a##.

Therefore, isn't the expectation value of the commutator in the eigenstate:

$$\left<A\right|\left[\hat{A},\hat{B}\right]\left|A\right>=\left<A\right|\left(\hat{A}\hat{B}-\hat{B}\hat{A}\right)\left|A\right>$$

Now if I act with the operator ##\hat{A}## to the left and to the right on the eigenstates, I get:

$$=a\left<A\right|\hat{B}\left|A\right>-a\left<A\right|\hat{B}\left|A\right>=0$$

Obviously I went wrong somewhere because the operators do not commute (and, for example for x and p, the commutator is just a number...which obviously can't ever have expectation value 0). But for the life of me I can't figure out why... and this is really bothering me now. So help please?
 
Looks like you've discovered an already famous "proof" of the equality 1=0. If ##[A,B]=cI##, where c is a complex number and I is the identity operator, then
$$1=\frac 1 c\langle a|[A,B]|a\rangle =\frac{a-a}{c}\langle a|B|a\rangle= 0.$$ There are several threads about it already. Here's one: https://www.physicsforums.com/showthread.php?t=420635. There may be newer and better threads about it, but I'm not sure what to search for.
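As a side remark not made in the thread: in finite dimensions, ##[A,B]=cI## with ##c\neq 0## is ruled out by a much simpler observation, since the trace of any commutator vanishes while ##\mathrm{tr}(cI)=c\cdot\dim##. A small numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# tr(AB) = tr(BA), so the trace of any commutator is zero
# (up to floating-point rounding).
comm = A @ B - B @ A
assert abs(np.trace(comm)) < 1e-10

# But tr(cI) = c * dim, so [A,B] = cI forces c = 0 here:
# the 1 = 0 "proof" can only get off the ground in infinite dimensions.
assert np.trace(np.eye(3)) == 3.0
```

This is why all the trouble below involves unbounded operators on infinite-dimensional spaces.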
 
If the operators don't commute, then in general you only have

$$[A,B] = C \quad \text{on} \quad D(AB)\cap D(BA),$$

and the fact that

$$\langle \psi, C\psi\rangle =0$$

for certain ##\psi## doesn't necessarily entail that ##C=0##, because the domain of ##C## isn't necessarily dense in the first place.
 
I found an example in which it's fairly easy to see exactly what's going on. Consider the set of square-integrable complex-valued functions on the interval ##[0,2\pi]##, with the usual vector space structure. Define a bilinear form on this space by
$$\langle f,g\rangle =\int_0^{2\pi} f(x)^*g(x)\mathrm dx.$$ This is a semi-inner product. Define Q and P by
$$(Qf)(x)=xf(x)$$ for all f and all x, and
$$Pf=-if'$$ for all differentiable f such that ##f(2\pi)=f(0)##. (Without this requirement, P isn't self-adjoint). P has eigenfunctions ##f_\lambda=e^{i\lambda x}##, where ##\lambda## is the eigenvalue. In particular, we have ##Pf_0=0##.

We have ##[Q,P]=i I_E##, where ##I_E## is the identity operator on the subspace E on which both QP and PQ are well-defined. So what is this subspace? We need to look at the domains of the operators.

Q: All f
P: All differentiable and periodic f.
QP: All differentiable and periodic f.
PQ: All f such that ##x\mapsto xf(x)## is differentiable and periodic.
[Q,P]: All differentiable and periodic f such that ##x\mapsto xf(x)## is differentiable and periodic.

That last one is the set E. Note that it doesn't contain any of the eigenfunctions of P.

If we make the mistake of thinking that E is the entire vector space, then every step of the following incorrect argument would seem reasonable, even though the conclusion is absurd:
$$1=\frac{1}{2\pi}\langle f_0,f_0\rangle=\frac{-i}{2\pi}\langle f_0,[Q,P]f_0\rangle =0.$$
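The domain bookkeeping above can be checked symbolically. A small sketch (assuming SymPy; ##Q##, ##P##, and ##f_0## are as defined in this post):

```python
import sympy as sp

x = sp.symbols('x', real=True)

def Q(f):
    return x * f

def P(f):
    return -sp.I * sp.diff(f, x)

# f(x) = sin(x) lies in E: it is differentiable and periodic on
# [0, 2*pi], and x*sin(x) vanishes at both endpoints, so it is
# periodic too.
f = sp.sin(x)
comm_f = sp.expand(Q(P(f)) - P(Q(f)))
assert sp.simplify(comm_f - sp.I * f) == 0  # [Q,P]f = i f on E

# The eigenfunction f0 = 1 of P (eigenvalue 0) is NOT in E:
# x*f0 takes the values 0 and 2*pi at the endpoints, so it is
# not periodic, and f0 is outside the domain of PQ.
f0 = sp.Integer(1)
assert (x * f0).subs(x, 0) != (x * f0).subs(x, 2 * sp.pi)
```

So the commutation relation holds where it is defined, and the eigenfunction on which the bogus argument is evaluated is exactly where it is not.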
 
So, is this result generally true? That is, whenever a commutator is simply a number times the identity, do the operators involved necessarily have eigenfunctions that are not in the domain of the commutator?

I thought about this problem in terms of spin, for example, and there it actually works out, because the spin operator on the right-hand side has expectation value 0 in the eigenstates of the other two operators, so you get 0=0 instead of 1=0. In other words:

$$\left[S_x,S_y\right]=i\hbar S_z$$
so that in an eigenstate of ##S_x##:
$$i\hbar \left<S_z\right>=0$$

So it seems this problem only arises if the commutator is a multiple of the identity? Is there a general proof of this? I saw the other thread you posted, and they basically said that the eigenfunctions are non-normalizable and therefore not in the Hilbert space, but I didn't understand their proof of that statement (I know it to be true for x and p, but I don't know it to be true for all such operators).
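The spin case can be verified directly with the 2×2 spin-1/2 matrices (a numerical sketch, taking ##\hbar=1##):

```python
import numpy as np

# Spin-1/2 operators S_i = sigma_i / 2 in units where hbar = 1.
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

# The commutation relation [Sx, Sy] = i Sz.
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz)

# In every eigenstate of Sx, <Sz> = 0, so <[Sx,Sy]> = i<Sz> = 0
# is consistent: the argument yields 0 = 0, not 1 = 0.
vals, vecs = np.linalg.eigh(Sx)
for v in vecs.T:
    assert abs(v.conj() @ Sz @ v) < 1e-12
```

Here everything is bounded and defined everywhere, which is exactly why no contradiction appears.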
 
Matterwave said:
So, is this result generally true? That all commutators which are simply just numbers times the identity arise from operators such that the eigenfunctions of one operator is not in the domain of the commutator?
Not sure.

Matterwave said:
I saw the other thread you posted, and they basically said that the eigenfunctions are non-normalizable and therefore not in the Hilbert space, but I didn't understand their proof for such a statement.
Edit: I should have read your comment more carefully. You're talking about a result about eigenfunctions, and I'm proving a result about operators. D'oh.

OK, this I can explain. I haven't yet thought about how much of this applies to semi-inner product spaces, so I'll prove it for the case of Hilbert spaces. I will need to cover some of the basics: A linear operator A on a Hilbert space H is said to be bounded if there's an M>0 such that
$$\frac{\|Ax\|}{\|x\|}\leq M$$ for all nonzero ##x\in H##. The norm of a bounded linear operator A is defined by
$$\|A\|=\sup_{0\neq x\in H}\frac{\|Ax\|}{\|x\|}.$$ The right-hand side is easily seen to be equal to ##\sup_{\|x\|=1}\|Ax\|##. You just prove this:
$$\big\{\|Ax\|/\|x\| : x\in H,\ x\neq 0\big\}=\big\{\|Ax\|:x\in H,\ \|x\|=1\big\}.$$ If the sets are equal, so are their suprema.

Theorem: If A and B are bounded linear operators on H, then
(a) ##\|Ax\|\leq \|A\|\|x\|## for all ##x\in H##.
(b) ##\|AB\|\leq\|A\|\|B\|##.

Proof: Let ##x\in H## be arbitrary.
(a) We have ##\frac{\|Ax\|}{\|x\|}\leq\|A\|##, because the left-hand side is an element of the set that the right-hand side is the supremum of.
(b) We have ##\|ABx\|\leq \|A\|\|Bx\|\leq\|A\|\|B\|\|x\|##. Since x is arbitrary, this implies that ##\|A\|\|B\|## is an upper bound of the set that ##\|AB\|## is the supremum of.

Lemma: If A and B are bounded linear operators on H such that ##[A,B]=cI##, where c is a complex number and I is the identity operator, then for all ##n\in\mathbb Z^+##, we have ##[A,B^n]=ncB^{n-1}##.

Proof: We will use induction, obviously. For each ##n\in\mathbb Z^+##, let P(n) be the statement ##[A,B^n]=ncB^{n-1}##. Since ##[A,B]=cI=cIB^{1-1}##, P(1) is true. Let ##n\in\mathbb Z^+## be arbitrary and suppose that ##P(n)## is true. Since
$$[A,B^{n+1}]=B[A,B^n]+[A,B]B^n=B\,ncB^{n-1}+cIB^n = (n+1)cB^n,$$ P(n+1) is true. By induction, ##P(k)## is true for all ##k\in\mathbb Z^+##.
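The identity driving the induction, ##[A,B^{n+1}]=B[A,B^n]+[A,B]B^n##, is purely algebraic and holds for arbitrary operators, whether or not ##[A,B]## is a multiple of the identity. A quick numerical sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

def comm(X, Y):
    return X @ Y - Y @ X

def power(M, n):
    return np.linalg.matrix_power(M, n)

# [A, B^(n+1)] = B [A, B^n] + [A, B] B^n for any square matrices:
# expand both sides and the B A B^n terms cancel.
for n in range(1, 6):
    lhs = comm(A, power(B, n + 1))
    rhs = B @ comm(A, power(B, n)) + comm(A, B) @ power(B, n)
    assert np.allclose(lhs, rhs)
```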

Hmm...I see a problem with the final theorem that I haven't had time to solve yet. Here's a theorem and proof that may or may not have a division by zero issue:

"Theorem:" If A and B are bounded linear operators on H, then [A,B] can't be a number times the identity operator on H.

"Proof." Suppose that A and B are bounded linear operators on H. We will prove that there's no complex number c such that [A,B]=cI by deriving a contradiction from the assumption that this is false. So suppose that there's a ##c\in\mathbb C## such that ##[A,B]=cI##. Let ##n\in\mathbb Z^+## be arbitrary. The lemma tells us that ##[A,B^n]=ncB^{n-1}##. This implies that
$$n|c|\|B^{n-1}\|=\|ncB^{n-1}\|=\|[A,B^n]\| =\|AB^n-B^nA\|\leq \|AB^n\|+\|B^nA\|\leq 2\|A\|\|B\|\|B^{n-1}\|.$$ If ##\|B^{n-1}\|\neq 0##, we can divide by it, and since n is arbitrary, this implies that ##\|A\|\|B\|\geq n|c|/2## for all n, contradicting that ##\|A\|## and ##\|B\|## are finite.

The problem here is that if ##\|B^{n-1}\|=0## for some n, then we're dividing by zero. I haven't yet thought about whether this is something we need to worry about. It's possible that we may have to modify the proof and maybe even the theorem a bit to deal with this issue.
 
Fredrik said:
The problem here is that if ##\|B^{n-1}\|=0## for some n, then we're dividing by zero. I haven't yet thought about whether this is something we need to worry about.

No, since then ##B=0## and ##[A,B]=0## which can then certainly not be a (nonzero) multiple of ##I##.
 
micromass said:
No, since then ##B=0## and ##[A,B]=0## which can then certainly not be a (nonzero) multiple of ##I##.
I see that ##\|B\|=0## implies ##B=0##, but is it impossible that ##B\neq 0## and still ##B^n=0## for some n? I assume that it is, but I don't immediately see how to prove it.
 
  • #10
Fredrik said:
I see that ##\|B\|=0## implies ##B=0##, but is it impossible that ##B\neq 0## and still ##B^n=0## for some n? I assume that it is, but I don't immediately see how to prove it.

Yes, that is possible. For example, take the matrix

$$\left(\begin{array}{cc} 0 & 1\\ 0 & 0\end{array}\right)$$

But the crucial part is that such an element is not normal. That is, it doesn't satisfy ##B^* B = BB^*##.
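This matrix can be checked in a couple of lines (a numerical sketch of the example above):

```python
import numpy as np

# The matrix from the text: nonzero, yet B @ B = 0 (nilpotent).
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
assert B.any()              # B != 0
assert not (B @ B).any()    # B^2 == 0

# It fails to be normal: B*B = diag(0, 1) but BB* = diag(1, 0).
left = B.conj().T @ B
right = B @ B.conj().T
assert not np.allclose(left, right)
```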

First, we recall the ##C^*##-identity for bounded operators: so if ##C## is any bounded operator, then ##\|C^*C\|= \|C\|^2##. In particular, if ##C## is self-adjoint, then

$$\|C^2\| = \|C\|^2$$

So by induction, we can now show for any positive integer ##a## that

$$\|C^{2^a}\| = \|C\|^{2^a}$$

So in particular, if ##C## is self-adjoint with ##C^n = 0## for some ##n##, then we can keep multiplying by ##C## until we find an ##a## such that ##C^{2^a} = 0##. Thus

$$\|C\|^{2^a} = 0$$

which implies ##C=0##.

Now, suppose ##B## were merely normal with ##B^n = 0##. Then ##C = B^*B## satisfies ##C^n = (B^*B)^n = (B^n)^*B^n = 0##. But ##C## is self-adjoint, and thus ##B^*B = 0##. Applying the ##C^*##-identity once more, we see that

$$\|B\|^2 = \|B^*B\| = 0$$

and thus ##B=0##.
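The ##C^*##-identity itself can be sanity-checked numerically for matrices, where the operator norm is the largest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# Operator norm of a matrix = its largest singular value.
op_norm = lambda M: np.linalg.norm(M, 2)

# C*-identity: ||C* C|| = ||C||^2.
assert np.isclose(op_norm(C.conj().T @ C), op_norm(C) ** 2)

# For self-adjoint H: ||H^2|| = ||H||^2, hence ||H^(2^a)|| = ||H||^(2^a).
H = C + C.conj().T
assert np.isclose(op_norm(H @ H), op_norm(H) ** 2)
assert np.isclose(op_norm(H @ H @ H @ H), op_norm(H) ** 4)
```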
 
  • #11
Thanks for your proof. If we take your proof to be true (neglecting the divide-by-zero possibility for now), then this shows that for any operators A and B with [A,B]=cI, at least one of A or B is unbounded, right? So does this immediately mean that the expression in my first post is of the form:

$$a\left<A\left|B\right|A\right>-a\left<A\left|B\right|A\right>=\infty-\infty$$

And is therefore indeterminate? This would show why I got the 1=0 error. Are there a couple more steps involved?
 
  • #12
Matterwave said:
Thanks for your proof. If we take your proof to be true (neglecting the divide-by-zero possibility for now), then this shows that for any operators A and B with [A,B]=cI, at least one of A or B is unbounded, right? So does this immediately mean that the expression in my first post is of the form:

$$a\left<A\left|B\right|A\right>-a\left<A\left|B\right|A\right>=\infty-\infty$$

And is therefore indeterminate? This would show why I got the 1=0 error. Are there a couple more steps involved?

I'm not sure what you mean by ##\langle A|B|A\rangle##, could you define this for me?
 
  • #13
It's the expectation value of the operator B for the eigenstate |A> of A.
 
  • #14
Matterwave said:
It's the expectation value of the operator B for the eigenstate |A> of A.

The eigenstate? Surely it is not unique?

But anyway, let ##a## be an eigenvector of ##A## with eigenvalue ##\lambda##; then you want to form

$$\langle a, Ba\rangle.$$

The issue, however, is that ##B## is not defined on all elements of the Hilbert space. In particular, ##Ba## may make no sense because it is not defined. So what you call

$$\langle A|B|A\rangle$$

is not defined for all ##|A\rangle##.
 
  • #15
micromass: Thanks for the proof. It was very easy to understand.

I think that what Matterwave is asking right now is what happens if we assume that A and B are two self-adjoint and not necessarily bounded operators such that [A,B]=I, where I denotes the identity operator on the whole space. If [A,B] is defined on the whole space, then so are AB and BA, and therefore so are both A and B.

Edit: In particular, I think he's asking if the reason why ##\langle a,[A,B]a\rangle## isn't simply equal to 0 is that ##\langle a,Ba\rangle=\infty##.

Matterwave: Feel free to correct me if I'm interpreting you wrong.
 
  • #16
Yea, I should have used "an eigenstate". My question was that will it be the case that for operators A and B and state |A> which I have defined previously, that <A|B|A> will always be undefined?
 
  • #17
Fredrik said:
micromass: Thanks for the proof. It was very easy to understand.

Well, it leaves open the possibility that ##[A,B]=cI## where ##A## and ##B## are both non-normal. I want to find a proof/counterexample for this case too...

I think that what Matterwave is asking right now is what happens if we assume that A and B are two self-adjoint and not necessarily bounded operators such that [A,B]=I, where I denotes the identity operator on the whole space. If [A,B] is defined on the whole space, then so is AB and BA, and therefore both A and B.

OK, but if a self-adjoint operator is defined on the whole space, then it must be bounded. This is Hellinger-Toeplitz: http://en.wikipedia.org/wiki/Hellinger–Toeplitz_theorem

Edit: In particular, he's asking if the reason why ##\langle a,[A,B]a\rangle## isn't simply equal to 0 is that ##\langle a,Ba\rangle=\infty##.

I understand. But that's not the reason. The reason is that ##\langle a,Ba\rangle## is undefined, and thus ##\langle a,[A,B]a\rangle## is also undefined, so in particular it isn't equal to ##0##.
Matterwave said:
Yea, I should have used "an eigenstate". My question was that will it be the case that for operators A and B and state |A> which I have defined previously, that <A|B|A> will always be undefined?

Yes, if ##[A,B]=cI## with ##c\neq 0## and ##A## is Hermitian, then for any eigenvector ##a## of ##A## (with eigenvalue ##\lambda##), ##Ba## is undefined (in particular ##\langle a,Ba\rangle## is undefined).
If it were defined, then ##[A,B]a## would be defined, and thus

$$\begin{aligned}
c\langle a,a\rangle &= \langle a,cIa\rangle\\
&= \langle a,[A,B]a\rangle\\
&= \langle a, ABa - BAa\rangle\\
&= \langle a,ABa\rangle -\lambda \langle a,Ba\rangle\\
&= \langle Aa,Ba\rangle - \lambda \langle a,Ba\rangle\\
&= \lambda \langle a,Ba\rangle - \lambda \langle a,Ba\rangle\\
&= 0,
\end{aligned}$$

which is impossible since ##c\neq 0## and ##a\neq 0##.
 
  • #19
Here is a proof that ##[S,T]=cI## is impossible without using normality of either ##S## or ##T##. I will do this in a general nontrivial unital Banach algebra (and I will not even use completeness). This is called the Wielandt-Wintner theorem.

So, let ##\mathcal{B}## be a nontrivial unital Banach algebra (for example, the bounded operators on a Hilbert space).

Take ##S,T\in \mathcal{B}## and ##c\neq 0## such that

$$ST - TS = cI.$$

I will derive a contradiction. First, I claim that for each ##n\geq 0##,

$$ST^{n+1} - T^{n+1}S = c(n+1)T^n.$$

Indeed, the case ##n=0## follows immediately from the hypothesis. So assume the claim is true for ##n##; then

$$\begin{aligned}
c(n+1)T^{n+1}
& = c(n+1)T^n T\\
& = (ST^{n+1} - T^{n+1}S) T\\
& = ST^{n+2} - T^{n+1}ST\\
& = ST^{n+2} - T^{n+1}(TS + cI)\\
& = ST^{n+2} - T^{n+2}S - cT^{n+1}.
\end{aligned}$$

Thus

$$ST^{n+2} - T^{n+2}S = c(n+1)T^{n+1} + cT^{n+1} = c(n+2)T^{n+1}.$$

There are two cases now. Assume first that ##T^N=0## for some ##N##. Then we can find the smallest ##n## such that ##T^{n+1} = 0##.
Then

$$0 = ST^{n+1} - T^{n+1}S = c(n+1)T^n.$$

Thus ##T^n = 0## which contradicts the choice of ##n##.

On the other hand, assume that ##T^n\neq 0## for all ##n##. Then we get

$$|c|(n+1)\|T^n\| = \|c(n+1)T^n\| = \|ST^{n+1} - T^{n+1}S\| \leq 2\|S\|\|T^n\|\|T\|.$$

Hence ##|c|(n+1)\leq 2\|S\|\|T\|## for each ##n##, which is impossible. Also note that the completeness of ##\mathcal{B}## was never necessary.
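The submultiplicativity estimate used in the last step can be checked numerically (a sketch with random matrices; of course no finite matrices satisfy ##ST-TS=cI## with ##c\neq 0##, so this only tests the inequality itself):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 4))
op_norm = lambda M: np.linalg.norm(M, 2)  # largest singular value

# ||S T^(n+1) - T^(n+1) S|| <= 2 ||S|| ||T^n|| ||T||
# by the triangle inequality and submultiplicativity of the norm.
for n in range(6):
    Tn = np.linalg.matrix_power(T, n)
    Tn1 = Tn @ T
    lhs = op_norm(S @ Tn1 - Tn1 @ S)
    assert lhs <= 2 * op_norm(S) * op_norm(Tn) * op_norm(T) + 1e-9
```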

We cannot generalize this result further by dropping the norm, since there are counterexamples in that case: for example, the famous Weyl algebra: http://en.wikipedia.org/wiki/Weyl_algebra This proof thus shows that the Weyl algebra cannot be normed.

For more information see "A Hilbert Space Problem Book" by Halmos in the chapter on commutators.
 
  • #20
This thread has gone astray. See my last post on the 1st page. The article by E. Galapon is behind a paywall, but fortunately he uploaded a preprint version of it to arxiv.org.
 
