# Is Heisenberg's uncertainty principle valid?

1. Nov 2, 2013

### edguy99

An article at physicsworld.com suggests that arbitrarily small measurements can be made.

Ozawa: "My theory suggests if you use your measuring apparatus as suggested by the maker, you can make better measurement than Heisenberg's relation"

Regarding his opposition: "They now prove that if you use it very badly – if, say, you use a microscope instead of a telescope to see the Moon – you cannot violate Heisenberg's relation. Thus, their formulation is not interesting."

The implications of this seem important. It would be interesting to hear some informed comments on this.

2. Nov 2, 2013

3. Nov 2, 2013

### Naty1

Based on multiple discussions in these forums, an individual measurement can be made to arbitrarily small tolerances. It is important to state what the HUP means and what it doesn't; as noted already, there are different formulations.

Here is my own synopsis from several discussions in these forums. If you search these forums for [HUP] you can find many; I've misplaced the link to Zapper's blog on the subject, which is an excellent description, and I have marked as [1] below the excerpts I think came from there.

Synopsis:

It IS possible to simultaneously measure the position and momentum of a single particle. The HUP doesn't say anything about whether you can measure both in a single measurement at the same time. That is a separate issue. [1]

My own synopsis:

A] Get a better instrument and you'll get better results to any accuracy.

B] Quantum theory does not predict the outcomes of single measurements; it only predicts the ensemble [statistical] properties.

C] In classical mechanics we can predict the future position and momentum [for example] of a single particle to arbitrary accuracy; the HUP says you can't: you can only make a statistically based prediction!

It is possible to measure position and momentum simultaneously in a single measurement of a particle. What we can't do is prepare an identical set of states such that we could make an accurate prediction about the result of a position measurement and an accurate prediction about the result of a momentum measurement, for an ensemble of future measurements.

What we call "uncertainty" is a property of a statistical distribution. The HUP isn't about a single measurement and what can be obtained from that single measurement. It is about how well we can predict subsequent measurements given 'identical' initial conditions. The commutativity or non-commutativity of operators applies to the distribution of results, not to an individual measurement. This "inability to repeat identical measurement results" is, in my opinion, better described as an inability to prepare a state which results in identical observables. [1]

The uncertainty principle results from the uncertainties that arise when attempting to prepare a set of identically prepared states from 'identical' initial conditions. The wave function is not associated with an individual particle but rather with the probability of finding particles at a particular position.

What we can't do is to prepare an identical set of states [that yields identical measurements]. NO STATE PREPARATION PROCEDURE IS POSSIBLE WHICH WOULD YIELD AN ENSEMBLE OF SYSTEMS IDENTICAL IN ALL OF THEIR OBSERVABLE PROPERTIES. [Instead, 'identical' state preparation procedures yield a statistical distribution of observables [measurements].]
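That statistical picture can be made concrete with a minimal numerical sketch (my own illustration, not from any post above; the Gaussian state and its width are assumed purely for the example). Each "measurement" is a draw from the Born-rule distribution |psi(x)|^2 of an identically prepared state: individual outcomes differ, but the distribution is reproducible.

```python
import numpy as np

# Hypothetical illustration: an 'identical' preparation procedure does not
# yield identical measurement results. We model a particle prepared in a
# Gaussian state with position spread SIGMA_X; each position measurement
# is a sample from the Born-rule probability density |psi(x)|^2.
rng = np.random.default_rng(seed=0)
SIGMA_X = 1.0          # position standard deviation of the prepared state
N_SYSTEMS = 100_000    # size of the (finite) ensemble

# Every member of the ensemble is prepared identically...
outcomes = rng.normal(loc=0.0, scale=SIGMA_X, size=N_SYSTEMS)

# ...yet individual outcomes differ; only the distribution is reproducible.
print("first three outcomes:", outcomes[:3])
print(f"sample std dev: {outcomes.std():.3f} (state's sigma = {SIGMA_X})")
```

The sample standard deviation converges to the state's sigma as N grows, which is exactly the "ensemble" sense in which quantum theory makes predictions.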

Fredrik: To prepare a state is to bring a particle on which we intend to do a measurement to the measuring device. Different ways of doing that may give us different average results. Two ways of doing it (two preparation procedures) are considered equivalent if no series of measurements can distinguish between them (i.e. if they give us the same wavefunction, or more generally, the same state operator/density matrix). These equivalence classes are often called "states".

The uncertainty principle restricts the degree of statistical homogeneity which it is possible to achieve in an ensemble of similarly prepared systems. A non-destructive position measurement is a state preparation that localizes the particle in the sense that it makes its wavefunction sharply peaked. This of course "flattens" its Fourier transform, so if the Fourier transform was sharply peaked before the position measurement, it isn't anymore.
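The Fourier-transform trade-off described above can be checked numerically. The following is a sketch in units where hbar = 1 (so momentum = wavenumber k); the grid sizes and Gaussian widths are my own choices for illustration. A narrower Gaussian in position space yields a broader momentum-space distribution, with sigma_k = 1/(2*sigma_x) for a Gaussian.

```python
import numpy as np

# Sketch (units hbar = 1): localizing a wavepacket in x flattens its
# Fourier transform. Compare two Gaussian widths.
def momentum_spread(sigma_x, n=4096, L=80.0):
    """Std dev of |FT(psi)|^2 for a Gaussian wavefunction of width sigma_x."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma_x**2))          # Gaussian wavefunction
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize |psi|^2
    phi = np.fft.fftshift(np.fft.fft(psi)) * dx     # momentum-space amplitude
    k = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2 * np.pi
    dk = k[1] - k[0]
    prob_k = np.abs(phi)**2
    prob_k /= np.sum(prob_k) * dk                   # normalize |phi|^2
    mean_k = np.sum(k * prob_k) * dk
    return np.sqrt(np.sum((k - mean_k)**2 * prob_k) * dk)

wide, narrow = momentum_spread(2.0), momentum_spread(0.5)
print(f"sigma_p for sigma_x=2.0: {wide:.4f} (analytic 1/(2*2.0) = 0.25)")
print(f"sigma_p for sigma_x=0.5: {narrow:.4f} (analytic 1/(2*0.5) = 1.00)")
```

Sharpening the position peak by a factor of four broadens the momentum distribution by the same factor, so the product of the spreads stays pinned at the Gaussian minimum of 1/2.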

The Uncertainty Principle finds its natural interpretation as a lower bound on the statistical dispersion among similarly prepared systems resulting from identical state preparation procedures and is not in any real sense related to the possible disturbance of a system by a measurement. The distinction between measurement and state preparation is essential for clarity.

A quantum state (pure or otherwise) represents an ensemble of similarly prepared systems. For example, the system may be a single electron. The ensemble will be the conceptual (infinite) set of all single electrons which have been subjected to some state preparation technique (to be specified for each state), generally by interaction with a suitable apparatus.

Albert Messiah, Quantum Mechanics, p119
“When carrying out a measurement of position or momentum on an individual system represented by psi, no definite prediction can be made about the result. The predictions defined here apply to a very large number [N] of equivalent systems, independent of each other, each system being represented by the same wave function [psi]. If one carries out a position measurement on each one of them, the probability density P[r], or momentum density, gives the distribution of the [N] results of measurements in the limit where the number N of members of this statistical ensemble approaches infinity.”

4. Nov 2, 2013

### ZapperZ

Staff Emeritus
5. Nov 2, 2013

### edguy99

There does appear to be some support for the Ozawa idea in an older related link:

One experiment, carried out in 2012 by a team at the Vienna University of Technology (Nature Phys. 8 185), relied on a tomographic-style technique suggested by Ozawa himself in 2004, while the other by our group at Toronto (Phys. Rev. Lett. 109 100404) used weak measurement, as suggested by Wiseman and his co-worker Austin Lund in 2010, to directly measure the average disturbance experienced by a subensemble.

Even further back (1933), can we still say that Einstein was wrong in his famous discussion with Bohr, given these results:

Suppose two particles are set in motion towards each other with the same, very large, momentum, and that they interact with each other for a very short time when they pass at known positions.
Consider now an observer who gets hold of one of the particles, far away from the region of interaction, and measures its momentum; then, from the conditions of the experiment, he will obviously be able to deduce the momentum of the other particle. If, however, he chooses to measure the position of the first particle, he will be able to tell where the other particle is.
... How can the final state of the second particle be influenced by a measurement performed on the first, after all physical interaction has ceased between them?

Maybe in a couple hundred years with people on the moon and people on mars, we will actually be able to test this "thought experiment"...

6. Nov 2, 2013

### StevieTNZ

In the original post, isn't the article referring to weak measurements, which don't give much information about each individual particle (and only by averaging over a large number do you get definite information)?

It's much like the experiment that supposedly measured the positions of photons (I believe) going through a double-slit apparatus, yet interference still resulted. That was because the positions were 'weakly measured'.

7. Nov 2, 2013

### Naty1

Last edited: Nov 2, 2013
8. Nov 2, 2013

### vanhees71

The funny thing is that Heisenberg's first idea on the uncertainty relation was pretty vague, and that's why you can come to seemingly contradictory conclusions when analyzing it from the modern point of view of quantum theory.

Both Ozawa's and Busch's papers are fully correct, as far as I can see, but they define differently what is to be understood by Heisenberg's "disturbance of an observable A by measuring another observable B that is incompatible with A".

Heisenberg's formulation was pretty vague about how to define this disturbance by measurement. It was based rather on semiclassical "wave-particle dualistic" arguments and helped to establish the Copenhagen interpretation with state collapse. Already this must be considered suspicious, since the collapse assumption introduces more trouble than good into our attempts to understand (or interpret) the quantum description of nature given by the quantum-theoretical formalism. That's why even today you can write papers about how to understand the "error-disturbance uncertainty relation" properly, and this is not an easy business. I'm not an expert in this field, and I still have a hard time fully comprehending both definitions, due to Ozawa and Busch respectively.

The usually taught Heisenberg-Robertson uncertainty relation is much simpler to derive from the quantum-theoretical formalism, but it describes something else: the impossibility of preparing a quantum system in a state such that two incompatible observables both take determined values. The standard deviations of the values found when measuring these observables on an ensemble of equally (but independently) prepared systems are finite, and their product must always exceed some value. The most famous example is the position-momentum uncertainty relation $\Delta x \Delta p_x \geq \hbar/2$, which holds for any (pure or mixed) state of a particle.
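For reference, the Robertson bound mentioned here follows in a few lines from the Cauchy-Schwarz inequality; this is the standard textbook derivation, sketched with the shifted operators $\hat A' = \hat A - \langle A\rangle$ and $\hat B' = \hat B - \langle B\rangle$:

```latex
% Heisenberg-Robertson bound via Cauchy-Schwarz (standard derivation sketch).
\begin{align}
\sigma_A^2\,\sigma_B^2
  &= \langle\psi|\hat A'^2|\psi\rangle\,\langle\psi|\hat B'^2|\psi\rangle
   \;\geq\; \bigl|\langle\psi|\hat A'\hat B'|\psi\rangle\bigr|^2
   && \text{(Cauchy-Schwarz)} \\
  &\geq \Bigl(\operatorname{Im}\langle\hat A'\hat B'\rangle\Bigr)^2
   = \Bigl(\tfrac{1}{2i}\langle[\hat A,\hat B]\rangle\Bigr)^2
   && \text{(since } [\hat A',\hat B'] = [\hat A,\hat B]\text{)} \\
\Longrightarrow\quad
\sigma_A\,\sigma_B &\geq \tfrac{1}{2}\,\bigl|\langle[\hat A,\hat B]\rangle\bigr| .
\end{align}
% With [\hat x, \hat p] = i\hbar this reproduces
% \Delta x \, \Delta p_x \geq \hbar/2 for any state.
```

Note the derivation makes no reference to any disturbance by measurement, only to the variances over an ensemble of identically prepared states, which is exactly the distinction drawn earlier in the thread.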

9. Nov 3, 2013

### DrChinese

This is an early version of the EPR experiment. So already done.