# A peculiarity in the uncertainty principle

Emeritus
PF Gold
P: 9,506
 Quote by K^2 Uhu. Now I take my particle, and I place it in a box of length L. The definitions of operators did not change. The eigenfunctions of P are still complex Bloch waves. (Albeit, with discrete k.) None of the algebra changed. The [X,P] commutator is exactly the same, and the problem persists. Hilbert space - check. Square-integrable - check. What's the catch?
I'm not sure about this. I've been saying (incorrectly) in other threads that the box potential is just a way of saying that the Hilbert space is now ##L^2(\text{box})## instead of ##L^2(\mathbb R)##, but now that I think about it, that seems very wrong. Our wavefunctions aren't just zero or undefined outside of the box. They are required to go to zero as we approach the edge of the box. So if the Hilbert space is either of those two spaces I just mentioned, then the energy eigenstates only span a proper subspace of it. We should probably view one of those subspaces as the system's Hilbert space*. In that case, plane waves aren't included.

*) ...or rather, as the system's semi-inner product space, which can be used to define the system's Hilbert space.

Maybe you had periodic boundary conditions in mind, instead of a box potential. I guess that would solve the problem with momentum eigenvectors, but not position eigenvectors.

Anyway, I'm not so sure that non-existence of eigenvectors is really the issue in the 0=1 "proof". The other thread talks mainly about the domains of the operators, so that's probably the real problem here. The commutator [x,p]=i has to be interpreted as in dextercioby's post (in this thread) to make sense.
 P: 923 I asked a professor. He said that the problem is, as I understood it, that when $[A,B]=cI$, neither of the operators has normalizable eigenvectors. In fact, you can find no A and B for which both $[A,B]=cI$ and $\langle a | a \rangle=1$, $\langle b | b \rangle =1$ are true, where a and b are eigenvectors of A and B. But I would be happier if I saw a proof of the latter.
 P: 3,014 The problem with x and p is that they have continuous spectra, so their eigenvectors are elements of a rigged Hilbert space. You may get rid of your confusion by the following artifice: Imagine you impose periodic boundary conditions, so that x and x + L are physically equivalent points. The consequence of this is that momentum acquires only a discrete set of eigenvalues: $$k_n = \frac{2\pi n}{L}, \ n = 0, \pm 1, \pm 2, \ldots$$ and the corresponding eigenfunctions are $\langle x \vert k_n \rangle = L^{-1/2} e^{i k_n x}$, normalized by the condition: $$\int_{0}^{L} \langle k_m \vert x \rangle \, \langle x \vert k_n \rangle \, dx = \delta_{m,n}$$ Now, let us expand an arbitrary state ket in the basis of (discrete) momentum eigenvectors. This is nothing more than a Fourier series: $$\vert \psi \rangle = \sum_{n = -\infty}^{\infty} a_n \, \vert k_n \rangle, \ a_n = \langle k_n \vert \psi \rangle = L^{-1/2} \, \int_{0}^{L} dx \, e^{-i k_n x} \langle x \vert \psi \rangle$$ Then, the commutator is: $$\langle k_m \vert [ x, p ] \vert \psi \rangle = \hbar \, \sum_{n} a_n (k_n - k_m) \langle k_m \vert x \vert k_n \rangle$$ The matrix element of the position operator between discrete momentum eigenstates is: $$\langle k_m \vert x \vert k_n \rangle = \frac{1}{L} \, \int_{0}^{L} dx \, x \, e^{-i (k_m - k_n) x} = \left\lbrace \begin{array}{lc} \frac{L}{2} & m = n \\ \frac{i}{k_m - k_n} & m \neq n \end{array}\right.$$ Thus, the commutator matrix element is $$\langle k_m \vert [x, p] \vert \psi \rangle = -i \hbar \sum_{n \neq m} a_n$$ If you take into account the following identity $$\sum_{n = -\infty}^{\infty} e^{i \frac{2 \pi n x}{L}} = L \sum_{n = -\infty}^{\infty} \delta(x - n L)$$ you get the following expression: $$[x, p] \vert \psi \rangle = i \hbar \left( \vert \psi \rangle - L \sum_{n = -\infty}^{\infty} e^{-\frac{i}{\hbar} n L p} \vert x = 0 \rangle \right)$$ where the p on the r.h.s. is an operator. Thus, the commutator has changed.
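As a quick sanity check on the matrix elements above (a sketch of mine, not part of the post, using simple trapezoid quadrature), the diagonal element comes out to L/2 (which is 1/2 when L = 1) and the off-diagonal ones to i/(k_m - k_n):

```python
import numpy as np

# Numerical check of <k_m|x|k_n> = (1/L) * integral_0^L x e^{-i(k_m-k_n)x} dx
# on an interval of (arbitrarily chosen) length L.
L = 2.0

def k(n):
    # discrete momenta allowed by the periodic boundary conditions
    return 2 * np.pi * n / L

def trap(y, x):
    # composite trapezoid rule for complex-valued samples y on grid x
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def matrix_element(m, n, num=200001):
    x = np.linspace(0.0, L, num)
    return trap(x * np.exp(-1j * (k(m) - k(n)) * x), x) / L

# diagonal element: L/2
assert abs(matrix_element(3, 3) - L / 2) < 1e-6
# off-diagonal element: i / (k_m - k_n)
assert abs(matrix_element(2, -1) - 1j / (k(2) - k(-1))) < 1e-6
```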
P: 923
 Quote by Dickfore Then, the commutator is: $$\langle k_m \vert [ x, p ] \vert \psi \rangle \hbar \, \sum_{n} a_n (k_n - k_m) \langle k_m \vert x \vert k_n \rangle$$ [...] where the p on the r.h.s. is an operator. Thus, the commutator has changed.
I don't understand what you've written in this part.
Looks like some symbols aren't in their place.
Could you write them again and also explain?

And the other point is that, even with this, we can find other operators with the same illness. Can you cure them the same way too?

Thanks
 Sci Advisor HW Helper P: 11,946 The idea is that they are not cured. It all stems from the x operator being defined the way it is, namely as a mere multiplication by the real variable also denoted by x. This operator has a purely continuous spectrum; its 'eigenvectors' do not fit in the Hilbert space, they are Dirac delta distributions.
 Sci Advisor P: 2,470 Dickfore, what you've written is consistent with results I got numerically earlier by testing X and P operators in discrete space with periodic boundary conditions. The [X,P] operator ended up having zeros on the diagonal, which immediately tells you it has zero expectation in the eigenvectors of the X operator. I've separately verified that it has zero expectation for the eigenvectors of P as well. So in finite, discrete space, Shyan's proof works, and it is consistent with how we expect the commutator to work based on your explanation. So there is nothing strange going on there. I guess I just expected the operators to behave very similarly in continuous space and in discrete space with enough points. It's very interesting to see that there is a qualitative change. I wonder if that says anything about how much we can trust lattice calculations.
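K^2's numerical observation is easy to reproduce. Here is a sketch (my own construction, not K^2's code; P is built from the unitary DFT with ℏ = 1) showing that the diagonal of [X, P] in the position basis vanishes identically, simply because X is diagonal there:

```python
import numpy as np

N = 64                      # number of lattice sites
x = np.arange(N)            # position eigenvalues (lattice units)
X = np.diag(x.astype(complex))

# Momentum operator via the discrete Fourier transform:
# P = F^dagger diag(k) F, with k the discrete wavenumbers.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT matrix
k = 2 * np.pi * np.fft.fftfreq(N)         # discrete momenta (hbar = 1)
P = F.conj().T @ np.diag(k) @ F

C = X @ P - P @ X           # the commutator [X, P]

# Since X is diagonal, (XP - PX)_{jj} = x_j P_{jj} - P_{jj} x_j = 0,
# so every position-basis expectation value of [X, P] vanishes and
# C cannot equal i*I on the whole (finite-dimensional) space.
assert np.allclose(np.diag(C), 0)
assert np.allclose(P, P.conj().T)         # P is Hermitian, as it should be
```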
 Mentor P: 6,248 I am working on a long post that gives an example that uses fairly elementary, but somewhat lengthy, calculations to illustrate the problem with domains. If my wife has work (marking) to do tonight, then I might finish the post tonight; if my wife wants to watch a movie, then I won't finish tonight.
 P: 923 I don't know much math compared to the others who have posted here. I see you guys are really good at this stuff. But let me give it a try too. I've tried to figure out two general operators which satisfy $[A,B]=cI$ with both hermitian. I found the following (maybe not the most general, but general enough): $A=a \frac{d}{d \alpha}+f(\alpha) \\ B=b \alpha$ Now consider: $A \psi(\alpha)=\lambda \psi(\alpha) \Rightarrow \psi(\alpha)=N e^{- \int \frac{f(\alpha)-\lambda}{a} d\alpha}$ You see that the above function isn't normalizable. We also have: $B \psi(\alpha)=\lambda \psi(\alpha) \Rightarrow b \alpha \psi(\alpha)=\lambda \psi(\alpha)$ which can't be true unless $\alpha$ is a constant, which it can't be. So when $[A,B]=cI$ and both A and B are hermitian, one of them will not have normalizable eigenstates and the other doesn't have eigenstates at all, and the assumption of their existence is the wrong part of the proof. I will be happy to hear corrections. Thanks all
HW Helper
P: 11,946
 Quote by Shyan [...] $A=a \frac{d}{d \alpha}+f(\alpha) \\ B=b \alpha$ Now consider: $A \psi(\alpha)=\lambda \psi(\alpha) \Rightarrow \psi(\alpha)=N e^{- \int \frac{f(\alpha)-\lambda}{a} d\alpha}$ You see that the above function isn't normalizable. [...]
Take $f(\alpha) = e^{-\alpha^2}$ with $\alpha\in\mathbb{R}$
 P: 923 I guess you meant: $f(\alpha)=2ca \alpha+\lambda \Rightarrow \psi(\alpha)=N e^{-c \alpha^2}$ which is normalizable. Well, I think I'll wait for the answer!
 Sci Advisor HW Helper P: 11,946 Yes, I thought of the convergent exponential in the shape of a Gaussian. So you can find potentials for which the wavefunction is a well-behaved mathematical object.
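For what it's worth, the square-integrability of this Gaussian eigenfunction is easy to confirm numerically (a sketch with an illustrative value of c; the exact norm follows from the standard Gaussian integral):

```python
import numpy as np

# For f(alpha) = 2*c*a*alpha + lambda, the eigenfunction is
# psi(alpha) = N * exp(-c * alpha**2), square-integrable for c > 0.
c = 0.7
alpha = np.linspace(-20.0, 20.0, 400001)
psi = np.exp(-c * alpha**2)

# ||psi||^2 by the trapezoid rule (N = 1 here):
y = np.abs(psi)**2
norm_sq = np.sum((y[1:] + y[:-1]) * np.diff(alpha)) / 2

# Analytically, integral of e^{-2 c alpha^2} over the real line
# is sqrt(pi / (2 c)) -- finite, so psi is normalizable.
assert abs(norm_sq - np.sqrt(np.pi / (2 * c))) < 1e-6
```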
 P: 923 So we have concluded that the problem is NOT the assumption of the existence of A's and B's normalizable eigenstates?
 P: 923 In Quantum Mechanics: Concepts and Applications by Nouredine Zettili (2nd edition), one can find the following: $| \psi \rangle = \int d^3 r | \vec{r} \rangle \langle \vec{r} | \psi \rangle = \int d^3 r \psi(\vec{r}) | \vec{r} \rangle$ with $\langle \vec{r} | \psi \rangle = \psi(\vec{r})$. I tried to write the paradox with this convention: $1=\langle a | a \rangle=\frac{1}{c} \langle a | cI | a \rangle=\frac{1}{c} \int d^3 r \, a^*(\vec{r}) \left( [A,B] \, a \right)(\vec{r})=\frac{1}{c} \int d^3 r \, a^*(\vec{r}) \left( AB \, a \right)(\vec{r})-\frac{1}{c} \int d^3 r \, a^*(\vec{r}) \left( BA \, a \right)(\vec{r})$ I'm not sure how to continue, but I think it may help. So I give it to more capable hands.
 P: 923 Yeah, again me. This time I asked a professor of pure mathematics. He said the problem is that $[A,B] | \psi \rangle =c |\psi \rangle$ doesn't mean $[A,B]=cI$. I think the reason is that the domain of I is the whole space but the domain of [A,B] is more restricted, so the two operators only have the same prescription but different domains, so they're not equal.
P: 3,014
 Quote by Shyan Yeah, again me. This time I asked a professor of pure mathematics. He said the problem is that $[A,B] | \psi \rangle =c |\psi \rangle$ doesn't mean $[A,B]=cI$. I think the reason is that the domain of I is the whole space but the domain of [A,B] is more restricted, so the two operators only have the same prescription but different domains, so they're not equal.
I think the simplest way to put it is that $\langle a \vert a \rangle$ and $\langle a \vert B \vert a \rangle$, where $\vert a \rangle$ is an eigenvector of A formally do not exist (they diverge, are infinite)!
Calculate the matrix element of this commutation relation between two different eigenstates:
$$\langle a \vert [A, B] \vert a' \rangle = (a - a') \, \langle a \vert B \vert a' \rangle = c \langle a \vert a' \rangle$$
When $a' \neq a$, the r.h.s. is zero, so we conclude $\langle a \vert B \vert a' \rangle = 0, \ a' \neq a$. But, if we take the limit $a' \rightarrow a$, the l.h.s. is an indeterminate form of the type $0 \cdot \infty$, whereas the r.h.s. is $c \, \delta(a - a')$, according to the orthonormalization condition for continuous spectra.

This was the motivation behind making the eigenvalues of at least one of the operators A or B discrete (in my case, the momentum operator p), because then you may normalize everything to a finite norm.
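For the familiar case $A = X$, $B = P$ (so $c = i\hbar$), the standard distributional matrix elements make this limit explicit (my addition, using the identity $u\,\delta'(u) = -\delta(u)$):

```latex
\langle x \vert P \vert x' \rangle = -i\hbar\,\delta'(x - x'),
\qquad
\langle x \vert [X, P] \vert x' \rangle
  = (x - x')\bigl(-i\hbar\,\delta'(x - x')\bigr)
  = i\hbar\,\delta(x - x')
```

so the $0 \cdot \infty$ indeterminate form resolves to $c\,\delta(x - x')$, never to zero, exactly as described above.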
P: 923
 Quote by Dickfore I think the simplest way to put it is that $\langle a \vert a \rangle$ and $\langle a \vert B \vert a \rangle$, where $\vert a \rangle$ is an eigenvector of A formally do not exist (they diverge, are infinite)! [...] This was the motivation behind making the eigenvalues of at least one of the operators A or B discrete (in my case, the momentum operator p), because then you may normalize everything to a finite norm.
Ok, but can you prove $\langle a | B | a \rangle=\infty$ and $\langle a | a \rangle=\infty$ for all A, B and $|a \rangle$ satisfying the conditions?
Mentor
P: 6,248
 Quote by Shyan Consider two hermitian operators A and B and a system in state $|a\rangle$ which is an eigenstate of A with eigenvalue $\lambda$ So we have: $\langle a|[A,B]|a\rangle=\langle a|AB|a\rangle-\langle a |BA|a\rangle=(A^{\dagger}|a\rangle)^{\dagger}B|a \rangle-\lambda \langle a |B|a\rangle=(A|a \rangle)^{\dagger} B | a \rangle-\lambda \langle a |B|a \rangle=\lambda \langle a|B|a \rangle - \lambda \langle a|B|a \rangle=0$
 Quote by Shyan There is still a point here. We can't tell that the assumption that A has an eigenstate causes the paradox, because even if P doesn't have an eigenstate, one can still find an operator that does, do the calculation for that operator and its eigenstate, and again arrive at the paradox. So I think something else should be wrong. I've been trying to understand it by reading the thread that George suggested, but I can't follow the discussion because I don't know enough math. Can somebody present the final result of that thread in simple language? thanks

There is a problem with the step

$$\left\langle a \vert AB \vert a\right\rangle =\left\langle Aa \vert Ba\right\rangle$$
To see the problem in terms of domains, let's work through in detail a fairly elementary example for which the eigenstates and eigenvalues above actually exist. In this example, subtleties with domains definitely come into play.

First, consider something even more elementary, real-valued functions of a single real variable. The domain of such a function $f$ is the (sub)set of all real numbers $x$ on which $f$ is allowed to act. Suppose $f$ is defined by $f\left(x\right) = 1/x$. The domain of $f$ cannot be the set of all real numbers $\mathbb{R}$, but it can be any subset of $\mathbb{R}$ that doesn't contain zero. Take the domain of $f$ to be the set of all non-zero real numbers. Define $g$ by $g\left(x\right) = 1/x$ with domain the set of all positive real numbers. As functions, $f \ne g$, because $f$ and $g$ have different domains, i.e., it takes both a domain and an action to specify a function. As functions, $f = g$ only when $f$ and $g$ have the same actions and the same domains.
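The rule-plus-domain point can be sketched in a few lines of Python (a toy illustration of mine, with made-up helper names, not anything from the thread):

```python
# A "function" is a rule *plus* a domain: the same rule 1/x with two
# different domains gives two different functions.
def make_function(rule, domain):
    def func(x):
        if not domain(x):
            raise ValueError(f"{x} is outside this function's domain")
        return rule(x)
    return func

rule = lambda x: 1.0 / x
f = make_function(rule, lambda x: x != 0)   # domain: nonzero reals
g = make_function(rule, lambda x: x > 0)    # domain: positive reals

assert f(-2.0) == -0.5     # -2 is in f's domain ...
try:
    g(-2.0)                # ... but outside g's domain, so f != g
except ValueError:
    pass
```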

Let the action of the momentum operator be given by $P=-id/dx$ (for convenience, set $\hbar =1$). On what wave functions can $P$ act, i.e., what is the domain, $D_{P}$, of $P$? Since $P$ operates on the Hilbert space $H$ of square-integrable functions, $D_{P}$ must be a subset of the set of square-integrable functions. The action of $P$ has to give as output something that lives in the Hilbert space $H$, i.e., the output has to be square-integrable, and thus $D_{P}$ must be a subset of the set of square-integrable functions whose derivatives are also square-integrable. Already, we see that the domain of $P$ cannot be all of the Hilbert space $H$.

As an observable, we want $P$ to be self-adjoint, i.e., we want $P=P^{\dagger }$. As in the case of functions above, this means that the actions of $P$ and $P^{\dagger }$ must be the same, and that the domains (the states on which $P$ and $P^{\dagger }$ act) $D_{P}$ and $D_{P^{\dagger }}$ must be the same. For concreteness, take wave functions on the interval with endpoints $x=0$ and $x=1$. The adjoint of the momentum operator is defined by

\begin{align} \left\langle P^{\dagger }g \vert f\right\rangle &=\left\langle g \vert Pf\right\rangle \\ &=-i\int_{0}^{1}g^{*} \frac{df}{dx}dx \\ &=-i\left( \left[ g^{*} f\right] _{0}^{1}-\int_{0}^{1}\frac{dg^{*}}{dx} f\,dx\right) \\ & =-i\left[ g^{*} f\right] _{0}^{1}+\left\langle Pg \vert f\right\rangle , \end{align}
where integration by parts has been used.

Consequently, the actions of $P$ and $P^{\dagger }$ are the same as long as the first term in the last line vanishes, i.e., as long as

\begin{align} 0 &= g^{*}\left( 1\right) f\left( 1\right) -g^{*}\left( 0\right) f\left( 0\right) \\ \frac{f\left( 1\right) }{f\left( 0\right) } &= \frac{g^{*}\left( 0\right) }{g^{*}\left( 1\right) } \end{align}
for nonzero $f\left( 0\right)$ and $g\left( 1\right)$. Now, $f\left( 1\right) /f\left( 0\right)$ is some complex number, say $\lambda$, so $f\left( 1\right) = \lambda f\left( 0\right)$. Hence,

\begin{align} \lambda & =\frac{g^{*}\left( 0\right) }{g^{*}\left( 1\right) }\\ \lambda^{*} & =\frac{g\left( 0\right) }{g\left( 1\right)} \\ g\left( 1\right) & =\frac{1}{\lambda^{*} }g\left( 0\right) . \end{align}
From the relation $\left\langle P^{\dagger }g \vert f\right\rangle =\left\langle g \vert Pf\right\rangle$, we see that $g\in D_{P^{\dagger }}$ and $f\in D_{P}$. These domains can be made to be the same if the same $\lambda$ restrictions are placed on $f$ and $g$, i.e., if $\lambda =1/\lambda^{*}$, or $\lambda^{*} \lambda =1$. This means that $\lambda$ can be written as $\lambda =e^{i\theta }$ with $\theta$ real. Different choices of $\theta$ correspond to different boundary conditions, with $\theta =0$ corresponding to the periodic boundary condition used by Dickfore above (with $L=1$). Let's use this choice, so that $f$ is in $D_{P^{\dagger }}=D_{P}$ if $f$ is square-integrable, $f'$ is square-integrable, and $f\left( 1\right) =f\left( 0\right)$. Note that this works even if $0=f\left( 0\right)$. With this choice, $P=P^{\dagger }$, and we can write $\left\langle Pg \vert f\right\rangle =\left\langle g \vert Pf\right\rangle$. We cannot use this when $P$ (on either side) acts on a wave function $h$ that is not in $D_{P^{\dagger }}=D_{P}$. This is the problem with the "proof" above.

In $\left\langle a \vert AB \vert a\right\rangle$, take $A=P$ and $B=X$. Take $f\left( x\right) =e^{2\pi ix}$. Then, $\left( Pf\right) \left( x\right) =2\pi f\left( x\right)$ and $\left( PXf\right) \left( x\right) =P\left( xf\left( x\right) \right)$. However, $h\left( x\right) =xf\left( x\right) =xe^{2\pi ix}$ does not satisfy the boundary condition $h\left( 1\right) =h\left( 0\right)$, so $h$ is not in the domain $D_{P}$, and we cannot just slide $A=P$ to the left.
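The boundary-term argument above can be checked numerically. The following sketch (my own, with ℏ = 1, crude finite differences, and trapezoid quadrature) verifies that the symmetry ⟨g|Pf⟩ = ⟨Pg|f⟩ holds for a periodic f but fails by exactly the boundary term for h(x) = x e^{2πix}:

```python
import numpy as np

# Interval [0, 1], P = -i d/dx (hbar = 1).
x = np.linspace(0.0, 1.0, 200001)

def inner(u, v):
    # <u|v> = integral_0^1 u^* v dx, trapezoid rule
    y = np.conj(u) * v
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def Pop(u):
    # P u = -i du/dx via finite differences
    return -1j * np.gradient(u, x)

f = np.exp(2j * np.pi * x)   # f(1) = f(0): in the domain of P
h = x * f                    # h(1) = 1 != 0 = h(0): NOT in the domain
g = np.exp(4j * np.pi * x)   # another periodic test function

# For f in the domain, <g|Pf> = <Pg|f> up to discretization error:
assert abs(inner(g, Pop(f)) - inner(Pop(g), f)) < 1e-3

# For h outside the domain, the boundary term -i[g^* h]_0^1 survives:
boundary = -1j * (np.conj(g[-1]) * h[-1] - np.conj(g[0]) * h[0])
assert abs((inner(g, Pop(h)) - inner(Pop(g), h)) - boundary) < 1e-3
assert abs(boundary) > 0.5   # here the boundary term equals -i
```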
 P: 923 That's very good George, thanks. But something that still bothers me is that you somewhere assumed one of the operators to be P. Doesn't that restrict the domain of your argument? I mean, it doesn't tell us that for every A and B for which [A,B]=cI we can't switch the order, but only for P and X. And it also doesn't tell us that there is no function on which we can do such calculations, only that some functions are not suitable. So I think you should generalize your argument. (Or am I wrong somewhere?) Thanks again
