From statistical to ontological: uncertainty principle

In summary: According to the uncertainty principle, the momentum and position of a particle can never both be sharply determined at the same time. The derivation, however, rests on the statistical interpretation of quantum mechanics: if we measure the momentum on half of an ensemble of identically prepared systems and the position on the other half, the standard deviations of the two resulting distributions satisfy σp·σx ≥ [itex]\hbar[/itex]/2. The discussion below asks how a statement about an ensemble becomes a statement about a single particle.
  • #1
nomadreid
The derivation of the momentum/position Heisenberg Uncertainty Principle (HUP) is based on the statistical interpretation which says that if we have a lot of quantum systems in identical states, and measure the momentum in half of them and get a distribution with standard deviation σp, and measure the position in the other half and get a distribution with standard deviation σx, then σp×σx ≥ [itex]\hbar[/itex]/2. Fine. And obviously both σp and σx must be non-zero. But these are statements about a collection of particles. Purely from the point of view of statistics, a non-zero standard deviation in each of two distributions does not prevent one element from each set from being equal to the respective expected value of its distribution at the same time. Nonetheless, a commonly stated corollary is that a single particle cannot have both a determined position and a determined momentum simultaneously. This then transforms the HUP
(a) into a statement about a single particle, thereby
(b) giving the standard deviation an ontological meaning.
The only explanations that I have seen for this corollary are:
(1) confounding the HUP with the observer effect: hence not a corollary
(2) pointing out that there are macroscopic quantum effects, which may be a reason to look for some form of (a), but it does not justify the collapse from a statement about a collection of particles to the same statement about a single particle, and thereby does not justify (b).
(3) hand-waving.
Can anyone give me something better? Thanks.
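[Editor's aside] For readers who want to see the ensemble statement in numbers, here is a minimal Python sketch (my own illustration, not part of the original post; it assumes a minimum-uncertainty Gaussian preparation and natural units with ħ = 1, for which the momentum width of such a packet is ħ/(2σx)): half of the identically prepared systems get a position measurement, the other half a momentum measurement, and the product of the two sample standard deviations comes out at the bound ħ/2.

[code]
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0
sigma_x = 0.7                    # position width of the prepared Gaussian packet (arbitrary choice)
sigma_p = hbar / (2 * sigma_x)   # momentum width of a minimum-uncertainty packet

n = 100_000
# Each individual outcome is a sharp number; the HUP constrains only the spreads.
x_samples = rng.normal(0.0, sigma_x, n)   # position measurements on one half of the ensemble
p_samples = rng.normal(0.0, sigma_p, n)   # momentum measurements on the other half

sx, sp = x_samples.std(ddof=1), p_samples.std(ddof=1)
print(f"sigma_x = {sx:.4f}, sigma_p = {sp:.4f}, product = {sx*sp:.4f} (hbar/2 = {hbar/2})")
[/code]

For a non-Gaussian preparation the product would simply come out larger than ħ/2; the Gaussian is the case of equality.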
 
  • #2
You don't need the HUP to show that a single particle can't have both a definite position and a definite momentum at the same time. All you need is the fact that the position operator and the momentum operator don't commute and thus can't have common (generalized) eigenstates. So if you are in a state of definite position (a generalized eigenstate of the position operator), then it is mathematically impossible for this state also to be a (generalized) eigenstate of the momentum operator; at best it is a superposition of momentum eigenstates, and hence not a state of definite momentum.
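[Editor's aside] To make the non-commutativity concrete, here is a short Python sketch (my addition, not part of the reply above; the grid size, box length, and Gaussian test function are arbitrary choices, with ħ = 1): discretize x and p = -iħ d/dx on a grid and apply the commutator to a smooth wave function. In the interior of the grid it acts as multiplication by iħ, which is exactly why the two operators cannot share eigenstates.

[code]
import numpy as np

hbar = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

X = np.diag(x)                                              # position operator on the grid
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = -1j * hbar * D                                          # momentum operator (central difference)

psi = np.exp(-x**2)                                         # smooth test function, negligible at the walls
lhs = (X @ P - P @ X) @ psi                                 # commutator [X, P] acting on psi
rhs = 1j * hbar * psi                                       # canonical commutation relation predicts i*hbar*psi

print("max deviation in the interior:", np.max(np.abs(lhs - rhs)[10:-10]))  # only discretization error remains
[/code]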
 
  • #3
Thanks, rubi. That finally makes sense.
 
  • #4
One just must add that there is no position eigenstate but only "generalized position eigenstates". That's most simply seen in the position representation. The generalized position eigenstate [itex]u_{x'}(x)[/itex] with eigenvalue [itex]x'[/itex] is the Dirac ##\delta## distribution,
[tex]u_{x'}(x)=\delta(x-x').[/tex]
It's not a square-integrable function but a distribution, for which even taking the square doesn't make sense. That's why a particle can never have a precisely determined position.

That's also consistent with the Heisenberg-Robertson uncertainty relation
[tex]\Delta x \Delta p \geq \hbar/2.[/tex]
For any state of the particle (pure or mixed), the standard deviations of position and momentum can never be exactly 0. Of course, you can make [itex]\Delta x[/itex] as small as you want (but never exactly 0). Then, according to the uncertainty relation, the standard deviation of the momentum necessarily becomes larger (and vice versa).
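[Editor's aside] A short Python sketch (my addition, in natural units with ħ = 1) makes this tradeoff visible: prepare Gaussian wave packets of decreasing position width, obtain the momentum distribution from the FFT, and watch [itex]\Delta p[/itex] grow while the product stays at the bound ħ/2 from the relation above.

[code]
import numpy as np

hbar = 1.0
N, L = 2**12, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)          # momentum grid conjugate to x

def widths(sigma):
    psi = np.exp(-x**2 / (4 * sigma**2))                 # Gaussian wave function with Delta x = sigma
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)          # normalize
    prob_x = np.abs(psi)**2 * dx
    prob_p = np.abs(np.fft.fft(psi))**2                  # momentum-space probabilities (up to normalization)
    prob_p /= prob_p.sum()
    dX = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x)**2)
    dP = np.sqrt(np.sum(prob_p * p**2) - np.sum(prob_p * p)**2)
    return dX, dP

for sigma in [2.0, 0.5, 0.1]:
    dX, dP = widths(sigma)
    print(f"sigma = {sigma:4}: Delta x = {dX:.3f}, Delta p = {dP:.3f}, product = {dX*dP:.3f}")
[/code]

Squeezing the packet shrinks Δx and inflates Δp, but the product never drops below ħ/2 = 0.5.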

Everything else is speculation about the interpretation of states in quantum mechanics. I'm a follower of the minimal statistical interpretation: According to quantum theory we can know about physical systems only probabilities for the outcomes of measurements, which are given by Born's rule. If your system is in a state which is an eigenstate of an operator representing an observable, then this observable's value is determined, i.e., with a probability of 100% you find the corresponding eigenvalue of the operator. Any other observable is indeterminate, and when measuring such an indeterminate observable you find one of the eigenvalues of the representing operator, with a probability given by the state of the system according to Born's rule.

Thus, in general, quantum theory only makes predictions about ensembles of systems, each prepared independently of the others in a certain (pure or mixed) state. Whether or not there is a deterministic theory which can describe nature as successfully as quantum mechanics, I don't know. If there is, it will be a non-local theory, due to the violation of Bell's inequalities, and so far nobody has come up with a consistent non-local deterministic theory. As long as this is not the case we have to accept quantum theory as the most comprehensive description of nature yet.
 
  • #5
The entirety of the uncertainty relation is essentially encapsulated in what are called Zeno's paradoxes of motion. Let's forget about mass, as it's really irrelevant to the problem at hand. The problem really boils down to the relationship between position and its first derivative with respect to time: velocity.

We live in a world where there is nothing but motion. (Even things that appear to be stationary can equivalently be said to be in motion relative to another reference frame.) But we are fundamentally incapable of expressing motion *as* motion. We must instead rely upon finding the differences between distinct stationary states. These stationary states are all that we can talk about. Even the principles of calculus are based on the idea of stationary states. The point behind calculus is just that we imagine that the differences between states are so small that we are effectively dealing with continuous functions. But who's to say what is "small enough", and what functions can rightly be called continuous?

I think the essence behind the HUP is just that we can either be talking about position or velocity. If we pick one, then the meaning of the other one simply vanishes. Therefore, I don't think the mathematical formulation (i.e., as a product of their simultaneously measured uncertainties) of the HUP can really have any basis in any kind of ontological understanding of nature.
 
  • #6
Thank you, vanhees71 and glengarry, for your helpful responses.

vanhees71: It is interesting that you go further than the uncertainty principle or even rubi's answer (see the first answer to my post), in that you conclude that "a particle can never have a precisely determined position". That is, you do not invoke the non-commutativity of operators, or any dependence on the momentum, but (if I understood you correctly) say that by its very nature one cannot have a specific eigenvalue for a position operator. Is this correct?
You also write that you are a minimalist. In that case, what does your statement about the standard deviation of positions being greater than zero have to do with the idea of an experiment consisting of a single measurement, for which the standard deviation (in its usual statistical sense, not in some ontological sense which you agree is speculation) would trivially equal zero? I am missing the connection there.

glengarry: I would put one key phrase into your characterisation of calculus. I would say that the application of calculus to measurements assumes "arbitrarily small" distances to approximate continuity. As a purely mathematical theory, calculus has no arbitrariness in its definition of continuity. The arbitrariness (or, with quantisation of space, a different solution) comes in when physicists apply it to real measurements. But physicists conclude indeterminacy not only when they deal with the application, but also in the theory, where Zeno's paradox presents no difficulty. (Zeno's paradoxes of motion were, as a purely mathematical affair, solved with the calculus.)
 
  • #7
glengarry said:
The entirety of the uncertainty relation is essentially encapsulated in what are called Zeno's paradoxes of motion. Let's forget about mass, as it's really irrelevant to the problem at hand. The problem really boils down to the relationship between position and its first derivative with respect to time: velocity.

We live in a world where there is nothing but motion. (Even things that appear to be stationary can equivalently be said to be in motion relative to another reference frame.) But we are fundamentally incapable of expressing motion *as* motion. We must instead rely upon finding the differences between distinct stationary states. These stationary states are all that we can talk about. Even the principles of calculus are based on the idea of stationary states. The point behind calculus is just that we imagine that the differences between states are so small that we are effectively dealing with continuous functions. But who's to say what is "small enough", and what functions can rightly be called continuous?

I think the essence behind the HUP is just that we can either be talking about position or velocity. If we pick one, then the meaning of the other one simply vanishes. Therefore, I don't think the mathematical formulation (i.e., as a product of their simultaneously measured uncertainties) of the HUP can really have any basis in any kind of ontological understanding of nature.

nice reasoning.
 
  • #8
You don't need the HUP to show that a single particle can't have both a definite position and a definite momentum at the same time.

I'll go one step further: the HUP has nothing to do with the measurement of a single particle [system]. The HUP isn't about the knowledge of the conjugate observables of a single particle in a single measurement. The uncertainty theorem is about the statistical distribution of the results of measurements.

Example: A single scattering experiment consists of shooting a single particle at a target and measuring its angle of scatter. Quantum theory does not deal with such an experiment but rather with the statistical distribution of the results of an ensemble of similar experiments. Here are my favorite descriptions:

From Zapper of these forums:

Misconception of the Heisenberg Uncertainty Principle.

http://physicsandphysicists.blogspot.com/2006/11/misconception-of-heisenberg-uncertainty.html

...One of the common misconceptions about the Heisenberg Uncertainty Principle (HUP) is that it is the fault of our measurement accuracy. A description that is often used is that a very short wavelength photon has a very high energy, and thus the act of position measurement will simply destroy the accurate information of that electron's momentum.


While this is true (about measurement limitations of equipment), it isn't really a manifestation of the HUP. The HUP isn't about a single measurement and what can be obtained out of that single measurement. It is about how well we can predict subsequent measurements given the 'identical' conditions. In classical mechanics, if you are given a set of identical conditions, the dynamics of a particle will be well defined. The more you know the initial position, the better you will be able to predict its momentum, and vice versa...

What I am trying to get across is that the HUP isn't about the knowledge of the conjugate observables of a single particle in a single measurement. I have shown that there's nothing to prevent anyone from knowing both the position and momentum of a [single] particle in a single measurement with arbitrary accuracy: that is limited only by our technology. However, physics involves the ability to make a dynamical model that allows us to predict when and where things are going to occur in the future. While classical mechanics does not prohibit us from making as accurate of a prediction as we want, QM does! It is this predictive ability that is contained in the HUP. It is an intrinsic part of the QM formulation and not just simply a "measurement" uncertainty, as often misunderstood by many.
A complementary description:
PAllen: If you are measuring position and momentum of the 'same thing' at two different times, the measurements are necessarily timelike. The measurements occur at two times on the world line of the thing measured. This order will never change, no matter what the motion of the observer is. If, instead, they occur for the same time on the "thing's" world line, they are simultaneous for the purposes of the uncertainty principle.

To measure a particle's momentum, we need to interact with it via a detector, which localizes the particle. So we actually do a position measurement (to arbitrary precision). Then we calculate the momentum, which requires that we know something else about the position of the particle at an earlier time (perhaps we passed it through a narrow slit). Both of those position measurements, and the measurement of the time interval, can be done to arbitrary precision, so we can calculate the momentum to arbitrary precision. From this you can see that in principle, there is no limitation on how precisely we can measure the momentum and position of a single particle.

Where the HUP comes into play is that if you then repeat the same sequence of arbitrarily precise measurements on a large number of identically prepared particles (i.e. particles with the same wave function, or equivalently particles sampled from the same probability distribution), you will find that your momentum measurements are not all identical, but rather form a probability distribution of possible values for the momentum. The width of this measured momentum distribution for many particles is what is limited by the HUP. In other words, the HUP says that the product of the width of your measured momentum probability distribution and the width of the position probability distribution associated with your initial wave function can be no smaller than Planck's constant divided by 4 times pi.
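[Editor's aside] The following toy Python simulation (my own sketch, not part of the quoted posts; it ignores further wave-packet spreading during the flight and simply gives the ensemble the minimum momentum spread ħ/(2a) allowed for a Gaussian slit of effective width a, in units with ħ = m = 1) illustrates the point: every single momentum value is inferred from two sharp position readings to arbitrary precision, yet the inferred values scatter across the ensemble with exactly the width the HUP demands.

[code]
import numpy as np

rng = np.random.default_rng(1)
hbar, m, T = 1.0, 1.0, 50.0            # natural units, unit mass, time of flight to the detector
a = 0.2                                # effective (Gaussian) slit width = position uncertainty at the slit
sigma_p = hbar / (2 * a)               # minimum momentum spread the slit imposes on the ensemble

n = 50_000
x1 = rng.normal(0.0, a, n)             # precisely recorded position at the slit
p_true = rng.normal(0.0, sigma_p, n)   # each particle leaves the slit with some definite transverse momentum
x2 = x1 + p_true * T / m               # precisely recorded position at the detector (no extra spreading in this toy)

p_inferred = m * (x2 - x1) / T         # single-particle momentum, inferred to arbitrary precision
print("ensemble spread of the inferred momenta:", p_inferred.std(ddof=1))
print("HUP bound hbar/(2a):                    ", sigma_p)
[/code]

Each individual particle yields a sharp pair (position, inferred momentum); only the scatter over many repetitions is constrained.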
Course Lecture Notes, Dr. Donald Luttermoser, East Tennessee State University:

The HUP strikes at the heart of classical physics: the trajectory. Obviously, if we cannot know the position and momentum of a particle at [itex]t_0[/itex], we cannot specify the initial conditions of the particle and hence cannot calculate the trajectory... Due to quantum mechanics' probabilistic nature, only statistical information about aggregates of identical systems can be obtained. QM can tell us nothing about the behavior of individual systems.

For many more perspectives and details, try these prior discussions of HUP:
[Warning: As I recall, these are many-page discussions.]

https://www.physicsforums.com/showthread.php?t=516224

https://www.physicsforums.com/showthread.php?p=3700586#post3700586
 
  • #9
rubi posted:

You don't need the HUP to show that a single particle can't have both a definite position and a definite momentum at the same time. All you need is the fact that the position operator and the momentum operator don't commute and thus can't have common (generalized) eigenstates.
As I read your 'single particle' implication above, it does not seem to be the correct interpretation...but it IS a common one...

The commutativity or non-commutativity of operators applies to the distribution of multiple results, not to an individual measurement of an individual particle... [This is how I interpret Zapper's and PAllen's explanations, above.]

If I've got the interpretation wrong, somebody please explain...
 
  • #10
glengarry said:
I think the essence behind the HUP is just that we can either be talking about position or velocity. If we pick one, then the meaning of the other one simply vanishes. Therefore, I don't think the mathematical formulation (i.e., as a product of their simultaneously measured uncertainties) of the HUP can really have any basis in any kind of ontological understanding of nature.

This reasoning, were it correct, would mean that entangled particle pairs should allow one to read both position and momentum (by inference). That doesn't happen, so there must be something going on "deeper" which the HUP *does* properly describe.

Besides, the HUP applies equally to spin. Using your thinking, there is no reason spin components should behave similarly to position/momentum. And yet, non-commuting spin components display evidence of the HUP and commuting ones do not. And you can see the effect vary at every combination in between.
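[Editor's aside] To see this "varying at every combination" concretely, here is a small Python sketch (my own illustration, not DrChinese's; the spin-1/2 state and the sweep of component directions are arbitrary choices, with ħ = 1): the Robertson bound ΔA·ΔB ≥ ½|⟨[A,B]⟩| is computed for Sx paired with a component rotated continuously from Sx (commuting, bound zero) to Sy (maximally non-commuting).

[code]
import numpy as np

hbar = 1.0
sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])

def robertson(A, B, psi):
    # Return (dA*dB, 0.5*|<[A,B]>|); the Robertson inequality says the first is >= the second.
    ev = lambda O: np.vdot(psi, O @ psi)
    dA = np.sqrt(ev(A @ A).real - ev(A).real**2)
    dB = np.sqrt(ev(B @ B).real - ev(B).real**2)
    bound = 0.5 * abs(ev(A @ B - B @ A))
    return dA * dB, bound

theta = 0.73                                               # arbitrary spin-1/2 state in the x-z plane
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

# Sweep B from a component commuting with Sx (alpha = 0) to a maximally non-commuting one (alpha = pi/2).
for alpha in np.linspace(0, np.pi / 2, 5):
    B = np.cos(alpha) * sx + np.sin(alpha) * sy
    prod, bound = robertson(sx, B, psi)
    print(f"alpha = {alpha:5.3f}: product = {prod:.4f}, Robertson bound = {bound:.4f}")
[/code]

At alpha = 0 the bound vanishes, so nothing forbids both spreads from being small; as alpha grows, the bound grows with it.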
 
  • #11
DrChinese said:
...there must be something going on "deeper" which the HUP *does* properly describe.
I really do think that the way in which we theoretically "visualize" the nature of quantum objects plays an essential part in how we approach the experimental aspect of physics. According to Einstein, the mathematical location was always the thing. Then de Broglie did his dissertation, followed by the eponymous wavefunction of Schrodinger. These later guys always had a kind of diffuse waviness on the brain. And then you had all the others who refused to even admit to the existence of any kind of picture of quantum matter.

What I'm saying is that modern experimenters are not above any of these issues. I don't want to go any farther than to say that there are always issues of interpretation of any kind of experiment. Especially experiments that have the "QM" name brand attached to them.

I personally see all objects in the universe as always "entangled" with each other, because the notion of giving scale a fundamental standing in physical theory is not philosophically pleasing to me. In my book, everything is defined exactly by the bounds of the universe.

As far as the HUP is concerned, I just don't feel that it has anything to add in terms of deep ontological understanding. It may very well be of eminent practical utility within specific experimental setups that exist on *this* scale. But I am definitely not an expert on any kind of experimental physics. I am completely transfixed by theory, and everything I know of Heisenberg the man says that he had the same kind of attitude.
 
  • #12
Thanks for all these comments.
glengarry:
I really do think that the way in which we theoretically "visualize" the nature of quantum objects plays an essential part in how we approach the experimental aspect of physics.
Or, to put it in the words of David Mermin, "Shut up and calculate." Perfectly valid, and in keeping with the ensemble (statistical) interpretation of quantum mechanics, which Naty1 favours. Some people may point to ontological descriptions as a useful guide for deciding in which direction to develop a theory, but that is not what I wish to bring up. Rather, although there is no compelling reason to ascribe an uncertainty to the properties of a single particle, since we are talking about states and states are an equivalence class of many particles, nonetheless I am not sure that it is inconsistent to assume that each particle has its own uncertainty if one modifies the definitions of the symbols used for standard deviation and expected value appropriately. After all, when a single-particle interpretation says that |α|² and |β|² are probabilities in a superposition α|0>+β|1>, it does not mean "probability" in the classical sense, but is merely using the probability amplitudes as vectors in Hilbert space. There is nothing that I see to force the spectrum to be distributed with one eigenvalue per particle, even though this is usually how it is interpreted. But the devil is in the details, and the details are what I am interested in, quite apart from the question as to which ontology is "better". I can put my question in two different manners, both of them necessarily rough:
(a) how could one re-define Δ and E(X) so that a single-particle uncertainty could make sense?
(b) It is not a question of the superposition being interpreted as belonging to a single particle instead of to the ensemble of particles, but rather of the superposition being interpreted as belonging to a single particle in addition to the ensemble of particles. In the latter case, one should be able to derive the probability distribution (or density matrix) for the ensemble of particles from the property (probability redefined for a single particle) of the constituent particles (all being in the same initial state). How?
(I know that I am asking for a suspension of disbelief here, and that I have jumped a little from the HUP to superposition, but the spirit of the hypothetical question is the same.)
Thanks.
P.S. Just came across this: http://lanl.arxiv.org/pdf/1111.3328v1.pdf which is of interest to the discussion, and I would very much appreciate comments on that paper's conclusions.
 

1. What is the uncertainty principle?

The uncertainty principle, also known as Heisenberg's uncertainty principle, is a fundamental concept in quantum mechanics that states that it is impossible to simultaneously know the exact position and momentum of a particle. This means that there will always be a degree of uncertainty in our measurements of these properties.

2. How does the uncertainty principle relate to statistics?

The uncertainty principle can be seen as a statistical phenomenon, as it deals with the limitations of our ability to measure certain properties of particles. It highlights the inherent uncertainty and randomness in the behavior of particles at the quantum level.

3. What is the difference between statistical and ontological uncertainty?

Statistical uncertainty refers to the limitations in our knowledge and measurements of a system, while ontological uncertainty deals with the inherent randomness and unpredictability of the system itself. In other words, statistical uncertainty is a result of our limited understanding and tools, while ontological uncertainty is a fundamental aspect of the system.

4. How does the uncertainty principle impact scientific research?

The uncertainty principle has significant implications for scientific research, especially in the field of quantum mechanics. It means that there will always be a degree of uncertainty in our measurements and predictions, which can affect the accuracy and reliability of our results. Scientists must take this into account and use statistical methods to analyze and interpret their data.

5. Are there any exceptions to the uncertainty principle?

There are no known violations of the uncertainty principle. Non-commuting pairs of observables, such as position and momentum, are always subject to it, while commuting observables can be known simultaneously with arbitrary precision. The energy-time relation has a somewhat different character, since time is a parameter rather than an operator in quantum mechanics, but it is not an exception. In large-scale systems the principle still holds; the bound is simply far too small to be noticeable, which is why quantum effects appear negligible there.
