# From statistical to ontological: uncertainty principle

1. May 4, 2013

### nomadreid

The derivation of the momentum/position Heisenberg Uncertainty Principle (HUP) is based on the statistical interpretation, which says that if we have a lot of quantum systems in identical states, measure the momentum in half of them to get a distribution with standard deviation σp, and measure the position of the other half to get a distribution with standard deviation σx, then σp·σx ≥ $\hbar$/2. Fine. And obviously both σp and σx must be non-zero. But these are statements about a collection of particles. Purely from the point of view of statistics, a non-zero standard deviation in each of two distributions does not prevent one element from each set from being equal to the respective expected value of its distribution at the same time. Nonetheless, a commonly stated corollary is that a single particle cannot have both a determined position and a determined momentum simultaneously. This transforms the HUP
(a) into a statement about a single particle, thereby
(b) giving the standard deviation an ontological meaning.
The only explanations that I have seen for this corollary are:
(1) confounding the HUP with the observer effect: hence not a corollary
(2) pointing out that there are macroscopic quantum effects, which may be a reason to look for some form of (a), but it does not justify the collapse from a statement about a collection of particles to the same statement about a single particle, and thereby does not justify (b).
(3) hand-waving.
Can anyone give me something better? Thanks.
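For concreteness, the purely statistical reading can be simulated in a few lines (an illustrative sketch, not from any textbook derivation; ħ set to 1): sample position outcomes on half of an ensemble prepared in a minimum-uncertainty Gaussian state and momentum outcomes on the other half. The product of the sample standard deviations sits near ħ/2, yet nothing stops an individual sample from landing exactly on the mean.

```python
import random
import statistics

# Illustrative sketch (hbar = 1).  For a minimum-uncertainty Gaussian state,
# position outcomes are Gaussian with spread sigma_x and momentum outcomes
# Gaussian with spread sigma_p = hbar / (2 * sigma_x).
hbar = 1.0
sigma_x = 0.5
sigma_p = hbar / (2 * sigma_x)

random.seed(0)
N = 100_000
xs = [random.gauss(0.0, sigma_x) for _ in range(N)]  # positions, half the ensemble
ps = [random.gauss(0.0, sigma_p) for _ in range(N)]  # momenta, the other half

product = statistics.pstdev(xs) * statistics.pstdev(ps)
print(product)  # close to hbar/2 = 0.5; yet any single x or p may equal the mean
```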

2. May 4, 2013

### rubi

You don't need the HUP to show that a single particle can't have both a definite position and a definite momentum at the same time. All you need is the fact that the position operator and the momentum operator don't commute and thus can't have common (generalized) eigenstates. So if you are in a state of definite position (a generalized eigenstate of the position operator), then it is mathematically impossible for this state to also be a (generalized) eigenstate of the momentum operator; it is at best a superposition of momentum eigenstates, and thus not a state of definite momentum.
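rubi's argument can be illustrated in a finite-dimensional toy model (a sketch only: genuine x and p need an infinite-dimensional space, so two non-commuting Pauli matrices stand in for a non-commuting pair):

```python
# Finite-dimensional toy model: an eigenvector of one of two non-commuting
# Hermitian matrices cannot be an eigenvector of the other.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

sigma_z = [[1, 0], [0, -1]]   # plays the role of "position"
sigma_x = [[0, 1], [1, 0]]    # plays the role of "momentum"

up = [1, 0]                   # eigenstate of sigma_z (eigenvalue +1)
print(matvec(sigma_z, up))    # [1, 0]: proportional to up, so a definite value
print(matvec(sigma_x, up))    # [0, 1]: not proportional to up, no definite value
```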

3. May 4, 2013

### nomadreid

Thanks, rubi. That finally makes sense.

4. May 4, 2013

### vanhees71

One just has to add that there is no position eigenstate, only "generalized position eigenstates". That's most simply seen in the position representation. The generalized position eigenstate $u_{x'}(x)$ with eigenvalue $x'$ is the Dirac $\delta$ distribution,
$$u_{x'}(x)=\delta(x-x').$$
It's not a square-integrable function but a distribution, for which even taking the square doesn't make sense. That's why a particle can never have a precisely determined position.
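One way to see this concretely (my own illustration): regularize the delta as a narrow normalized Gaussian; its squared norm diverges as the width shrinks,
$$\delta_{\epsilon}(x)=\frac{1}{\sqrt{2\pi}\,\epsilon}\mathrm{e}^{-x^2/(2\epsilon^2)}, \qquad \int_{-\infty}^{\infty}\delta_{\epsilon}(x)^2\,\mathrm{d}x=\frac{1}{2\sqrt{\pi}\,\epsilon}\to\infty \quad (\epsilon\to 0),$$
so no normalizable state can sit exactly at one point.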

That's also consistent with the Heisenberg-Robertson uncertainty relation
$$\Delta x \Delta p \geq \hbar/2.$$
For any state of the particle (pure or mixed), the standard deviations of position and momentum can never be exactly 0. Of course, you can make $\Delta x$ as small as you want (but never exactly 0); then the standard deviation of the momentum necessarily becomes larger, according to the uncertainty relation (and vice versa).
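A quick numerical illustration of this trade-off (a sketch with ħ = 1; Gaussian wavepackets happen to saturate the bound, so the product stays pinned at ħ/2 however narrow you squeeze $\Delta x$):

```python
import math

# Numerical sketch (hbar = 1): for Gaussian wavepackets of any width the
# product of the spreads stays at hbar/2, so squeezing Delta x inflates Delta p.
hbar, dx = 1.0, 0.001
grid = [i * dx for i in range(-10000, 10001)]

for sigma in (0.2, 0.5, 1.0):
    psi = [math.exp(-x * x / (4 * sigma * sigma)) for x in grid]
    norm = math.sqrt(sum(p * p for p in psi) * dx)
    psi = [p / norm for p in psi]
    var_x = sum(x * x * p * p for x, p in zip(grid, psi)) * dx
    # <p^2> = hbar^2 * integral of |psi'(x)|^2 dx (central finite difference)
    dpsi = [(psi[i + 1] - psi[i - 1]) / (2 * dx) for i in range(1, len(psi) - 1)]
    var_p = hbar * hbar * sum(d * d for d in dpsi) * dx
    print(sigma, math.sqrt(var_x * var_p))  # second number stays near 0.5
```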

Everything else is speculation about the interpretation of states in quantum mechanics. I'm a follower of the minimal statistical interpretation: According to quantum theory we can know about physical systems only probabilities for the outcomes of measurements, which are given by Born's rule. If your system is in a state which is an eigenstate of an operator representing an observable, this observable's value is determined, i.e., with probability 100% you find the corresponding eigenvalue of the operator. Any other observable is indeterminate, and when measuring such an indeterminate observable you find, with some probability given by the state of the system according to Born's rule, one of the eigenvalues of the representing operator.
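Born's rule in miniature (an illustrative two-level sketch, not part of the original post): the probability of finding eigenvalue $n$ is $|\langle n|\psi\rangle|^2$, so an eigenstate gives probability 1 and a superposition spreads probability over the eigenvalues.

```python
import math

def born_probs(psi, eigenbasis):
    """Return |<e|psi>|^2 for each eigenstate e in eigenbasis (Born's rule)."""
    return [abs(sum(ei.conjugate() * pi for ei, pi in zip(e, psi))) ** 2
            for e in eigenbasis]

basis = [[1, 0], [0, 1]]            # eigenstates of some observable
print(born_probs([1, 0], basis))    # eigenstate: probabilities [1, 0]
s = 1 / math.sqrt(2)
print(born_probs([s, s], basis))    # equal superposition: ~[0.5, 0.5]
```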

Thus, in general, quantum theory only makes predictions about ensembles of systems prepared independently of each other in a certain (pure or mixed) state. Whether or not there is a deterministic theory which can describe nature as successfully as quantum mechanics, I don't know. If there is, it will be a non-local theory, due to the violation of Bell's inequalities, and so far nobody has come up with a consistent non-local deterministic theory. As long as no such theory exists, we have to accept quantum theory as the most comprehensive description of nature yet.

5. May 4, 2013

### glengarry

The entirety of the uncertainty relation is essentially encapsulated in what are called Zeno's paradoxes of motion. Let's forget about mass, as it's really irrelevant to the problem at hand. The problem boils down to the relationship between position and its first derivative with respect to time: velocity.

We live in a world where there is nothing but motion. (Even things that appear to be stationary can equivalently be said to be in motion relative to another reference frame.) But we are fundamentally incapable of expressing motion *as* motion. We must instead rely upon finding the differences between distinct stationary states. These stationary states are all that we can talk about. Even the principles of calculus are based on the idea of stationary states. The point behind calculus is just that we imagine the differences between states to be so small that we are effectively dealing with continuous functions. But who's to say what is "small enough", and which functions can rightly be called continuous?

I think the essence of the HUP is just that we can be talking about either position or velocity. If we pick one, then the meaning of the other simply vanishes. Therefore, I don't think the mathematical formulation of the HUP (i.e., as a product of simultaneously measured uncertainties) can really have any basis in an ontological understanding of nature.

6. May 4, 2013

### nomadreid

Thank you, vanhees71 and glengarry, for your helpful responses.

vanhees71: It is interesting that you go further than the uncertainty principle, or even rubi's answer (see the first reply to my post), in that you conclude that a particle can never have a precisely determined position at all. That is, you do not invoke the non-commutativity of operators, or any dependence on the momentum, but (if I understood you correctly) say that by its very nature a particle cannot have a specific eigenvalue of the position operator. Is this correct?
You also write that you follow the minimal statistical interpretation. In that case, what does your statement about the standard deviation of positions being greater than zero have to do with an experiment consisting of a single measurement, for which the standard deviation (in its usual statistical sense, not in some ontological sense, which you agree is speculation) would trivially equal zero? I am missing the connection there.

glengarry: I would add one key phrase to your characterisation of calculus: the application of calculus to measurements assumes distances "small enough" to approximate continuity. As a purely mathematical theory, calculus has no arbitrariness in its definition of continuity. The arbitrariness (or, with the quantisation of space, a different solution) comes in when physicists apply it to real measurements. But physicists conclude indeterminacy not only in application but also in theory, where Zeno's paradox poses no difficulty. (Zeno's paradoxes of motion were, as a purely mathematical affair, solved by the calculus.)

7. May 5, 2013

### audioloop

nice reasoning.

8. May 7, 2013

### Naty1

I'll go one step further: the HUP has nothing to do with the measurement of a single particle [system]. It isn't about knowledge of the conjugate observables of a single particle in a single measurement. The uncertainty relation is about the statistical distribution of the results of measurements.

Example: a single scattering experiment consists of shooting a single particle at a target and measuring its angle of scatter. Quantum theory does not deal with such a single experiment, but rather with the statistical distribution of the results of an ensemble of similar experiments.

Here are my favorite descriptions:

From Zapper of these forums:

Misconception of the Heisenberg Uncertainty Principle.

http://physicsandphysicists.blogspot.com/2006/11/misconception-of-heisenberg-uncertainty.html

A complementary description:

Course Lecture Notes, Dr. Donald Luttermoser, East Tennessee State University:

9. May 7, 2013

### Naty1

rubi posted:

As I read it, your 'single particle' implication above does not seem to be the correct interpretation... but it IS a common one...

The commutativity or non-commutativity of operators applies to the distribution of multiple results, not to an individual measurement of an individual particle... [This is how I interpret Zapper's and PAllen's explanations, above]

If I've got the interpretation wrong, somebody please explain...

10. May 7, 2013

### DrChinese

This reasoning, were it correct, would mean that entangled particle pairs should allow one to read both position and momentum (by inference). That doesn't happen, so there must be something "deeper" going on which the HUP *does* properly describe.

Besides, the HUP applies equally to spin. On your reasoning, there is no reason spin components should behave similarly to position/momentum. And yet non-commuting spin components display evidence of the HUP while commuting ones do not, and you can see the effect vary at every combination in between.
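The spin version can be made quantitative with the Robertson bound $\Delta A\,\Delta B \geq |\langle[A,B]\rangle|/2$ (an illustrative sketch, with spin measured in units of ħ/2, i.e. Pauli matrices): a non-commuting pair gets a nonzero lower bound, a commuting pair gets zero.

```python
# Robertson bound |<[A, B]>| / 2 for spin components (illustrative sketch).

def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expect(A, psi):
    """<psi|A|psi> for a 2-component state vector."""
    Apsi = [sum(A[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return sum(psi[i].conjugate() * Apsi[i] for i in range(2))

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
psi = [1, 0]  # spin up along z

AB, BA = mul(sx, sy), mul(sy, sx)
comm = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]
print(abs(expect(comm, psi)) / 2)  # 1.0: sx and sy can't both be sharp here

comm_yy = [[0, 0], [0, 0]]  # [sy, sy] vanishes: a component commutes with itself
print(abs(expect(comm_yy, psi)) / 2)  # 0.0: no lower bound for a commuting pair
```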

11. May 7, 2013

### glengarry

I really do think that the way in which we theoretically "visualize" the nature of quantum objects plays an essential part in how we approach the experimental side of physics. According to Einstein, the mathematical location was always the thing. Then de Broglie wrote his dissertation, followed by the eponymous wavefunction of Schrödinger. These later guys always had a kind of diffuse waviness on the brain. And then you had all the others who refused even to admit the existence of any kind of picture of quantum matter.

What I'm saying is that modern experimenters are not above any of these issues. I don't want to go any farther than to say that there are always issues of interpretation of any kind of experiment. Especially experiments that have the "QM" name brand attached to them.

I personally see all objects in the universe as always "entangled" with each other, because the notion of giving scale a fundamental standing in physical theory is not philosophically pleasing to me. In my book, everything is defined exactly by the bounds of the universe.

As far as the HUP is concerned, I just don't feel that it has anything to add in terms of deep ontological understanding. It may well be of immediate practical utility within specific experimental setups at *this* scale. But I am definitely not an expert in any kind of experimental physics. I am completely transfixed by theory, and everything I know of Heisenberg the man says he had the same kind of attitude.

12. May 9, 2013

### nomadreid

Thanks for all these comments.
glengarry:
Or, to put it in the words of David Mermin, "Shut up and calculate." Perfectly valid, and in keeping with the ensemble (statistical) interpretation of quantum mechanics, which Naty1 favours. Some people may point to ontological descriptions as a useful guide to which direction to develop a theory, but that is not what I wish to bring up. Rather, although there is no compelling reason to ascribe an uncertainty to the properties of a single particle (since we are talking about states, and states are an equivalence class of many particles), I am nonetheless not sure that it is inconsistent to assume that each particle has its own uncertainty, if one modifies the definitions of the symbols used for standard deviation and expected value appropriately. After all, when a single-particle interpretation says that |α|² and |β|² are probabilities in a superposition α|0>+β|1>, it does not mean "probability" in the classical sense, but merely treats the probability amplitudes as components of a vector in Hilbert space. I see nothing that forces the spectrum to be distributed with one eigenvalue per particle, even though this is usually how it is interpreted. But the devil is in the details, and the details are what I am interested in, quite apart from the question of which ontology is "better". I can put my question in two different ways, both of them necessarily rough:
(a) how could one re-define Δ and E(X) so that a single-particle uncertainty could make sense?
(b) It is not a question of the superposition being interpreted as belonging to a single particle instead of to the ensemble of particles, but rather of the superposition being interpreted as belonging to a single particle in addition to the ensemble. In that case, one should be able to derive the probability distribution (or density matrix) for the ensemble from the property (probability, redefined for a single particle) of the constituent particles (all prepared in the same initial state). How?
(I know I am asking for a suspension of disbelief here, and that I have jumped a little from the HUP to superposition, but the spirit of the hypothetical question is the same.)
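A toy version of (b), purely illustrative and with arbitrarily chosen amplitudes: give every particle in the ensemble the single-particle probabilities |α|² and |β|² for the state α|0>+β|1>, and the ensemble's relative frequencies reproduce the usual Born-rule distribution.

```python
import math
import random

# Toy version of question (b): per-particle probabilities |alpha|^2, |beta|^2
# (amplitudes chosen arbitrarily for illustration) recovered as ensemble
# relative frequencies over many identically prepared systems.
alpha, beta = 1 / math.sqrt(3), math.sqrt(2 / 3)
assert abs(alpha ** 2 + beta ** 2 - 1) < 1e-12  # normalised state

random.seed(1)
N = 200_000
ones = sum(1 for _ in range(N) if random.random() < beta ** 2)
print(ones / N)  # relative frequency of outcome |1>, close to 2/3
```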
Thanks.
P.S. Just came across this: http://lanl.arxiv.org/pdf/1111.3328v1.pdf which is of interest to the discussion, and I would very much appreciate comments on that paper's conclusions.
