Understanding the meaning of "uncertainty" in Heisenberg's UP

In summary, the uncertainty principle in quantum mechanics states that there is a limit to how sharply two complementary observables can be determined, or "prepared", at the same time: the more sharply one observable is determined, the larger the spread in the other. This is due to the probabilistic nature of quantum mechanics and is quantified by the standard deviations of the observables in the state of the particle. The time-energy uncertainty relation is different in kind from the other uncertainty relations, because time is a parameter rather than an observable; this is tied to the requirement that the energy be bounded from below so that a stable ground state exists. Measurements must be performed on an ensemble of identically prepared particles to determine the standard deviations of the observables.
  • #1
peguerosdc
TL;DR Summary
Why can we approximate the uncertainty (std. dev.) as (1) the difference between two measurements, or (2) the value of a single measurement?
Hi!

I am reading Zettili's explanation of the uncertainty principle, and I am confused about what the "uncertainty" really means. The confusion arises from the following statements:

When introducing the uncertainty principle for the case of position and momentum, the book states that if the x-component of the momentum of a particle is measured with an uncertainty ##\Delta p##, then its x-position cannot, at the same time, be measured more accurately than ##\Delta x = \hbar / (2\Delta p)##:

$$
\Delta x \Delta p \geq \hbar / 2
$$

Similarly for the energy and time:

$$
\Delta E \Delta t \geq \hbar / 2
$$

But the two given examples don't seem to fit with that definition.

The energy example says that if we make two measurements of the energy of a system, and if these measurements are separated by a time interval ##\Delta t##, then the measured energies will differ by an amount ##\Delta E## which can in no way be smaller than ##\hbar / \Delta t##.
Now, this doesn't make sense to me, given that the more formal statement of the uncertainty principle is in terms of the standard deviation ##\sigma##:

$$
\sigma_A \sigma_B \geq \frac {|\langle[A,B]\rangle|} 2
$$

How is the difference between two measurements equivalent to the standard deviation?


Then, the next example calculates the uncertainty in the position of a 50 kg person moving at 2 m/s:

$$
\Delta x \geq \frac{\hbar}{2\Delta p} \approx \frac{\hbar}{2mv} = \frac{\hbar}{2 \times 50\,\mathrm{kg} \times 2\,\mathrm{m\,s^{-1}}}
$$
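As a quick check of the arithmetic (my own sketch, not from the book, taking ##\Delta p \approx p = mv## as Zettili does):

```python
# Sketch: evaluate the book's estimate of the minimum position uncertainty
# for a 50 kg person moving at 2 m/s, assuming Delta p ~ p = m*v.
hbar = 1.054571817e-34  # reduced Planck constant, J*s

m = 50.0   # mass, kg
v = 2.0    # speed, m/s
p = m * v  # momentum, kg*m/s

delta_x = hbar / (2 * p)  # lower bound on the position spread, m
print(f"delta_x >= {delta_x:.2e} m")  # ~5.3e-37 m, utterly negligible
```

The bound is about ##10^{-37}\,\mathrm{m}##, which is why the uncertainty principle is irrelevant for macroscopic objects.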

This seems inconsistent both with the definition in terms of the standard deviation and with the first example.

  • In this case we only have one measurement of the momentum, so, comparing with the previous example: why is the "uncertainty" of p approximated by the value of p itself (instead of by the difference between two measurements)?
  • Comparing with the definition of the uncertainty principle: why are we now approximating the standard deviation of p by the value of p?

Thanks!
 
  • #2
First of all, the uncertainty relation as stated has nothing to do with how accurately you can measure two observables, but with how accurately these two observables can be determined or "prepared". The position-momentum uncertainty relation means that if you prepare the particle in a (pure or mixed) state such that the position is very well determined, then the momentum is not well determined, and vice versa. This is clear from the probabilistic meaning of the state: ##\Delta x## and ##\Delta p## are the standard deviations of position and momentum calculated using the quantum-mechanical state the particle is prepared in.

The general uncertainty relation for two observables ##A## and ##B## is given correctly, i.e.,
$$\Delta A \Delta B \geq \frac{1}{2} |\langle \mathrm{i} [\hat{A},\hat{B}] \rangle|.$$
To measure the standard deviations, you have to prepare an ensemble of particles, always in the same state, and then measure observable ##A## very accurately; the resolution of the measurement device must be much better than the expected ##\Delta A##. Then you have to prepare an ensemble of particles again in this same state and measure observable ##B## very accurately; the resolution of the measurement device must now be much better than the expected ##\Delta B##.
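To make this concrete, here is a minimal numerical sketch (my own, not from the post) of the two-ensemble procedure, assuming the prepared state is a Gaussian wave packet, for which the position and momentum distributions are both Gaussian with ##\sigma_x \sigma_p = \hbar/2##:

```python
import numpy as np

# Sketch of the two-ensemble measurement, assuming a minimum-uncertainty
# Gaussian wave packet: position spread sigma_x, momentum spread
# sigma_p = hbar / (2 * sigma_x). Natural units with hbar = 1.
rng = np.random.default_rng(0)
hbar = 1.0
sigma_x = 0.5                    # position spread of the prepared state
sigma_p = hbar / (2 * sigma_x)   # corresponding momentum spread

# Ensemble 1: measure position on many identically prepared particles.
x_samples = rng.normal(0.0, sigma_x, size=100_000)
# Ensemble 2: measure momentum on a *fresh* ensemble in the same state.
p_samples = rng.normal(0.0, sigma_p, size=100_000)

product = x_samples.std() * p_samples.std()
print(f"sigma_x * sigma_p ~ {product:.4f}  (bound: hbar/2 = {hbar / 2})")
```

Note that no particle is measured twice: position and momentum are each estimated on a separate ensemble prepared in the same state.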

The examples given are pretty handwavy. To make sense of them, one must be much more specific about the concrete case discussed.

Last but not least, it should be clear that the time-energy uncertainty relation is different from the uncertainty relations discussed so far, because in quantum theory time is not an observable but a parameter. The reason is that the energy must be bounded from below in order to have a stable ground state. This would be impossible if you treated time and energy as a canonically conjugate pair (like position and momentum): all energies would then be continuous and take all real values, so the Hamiltonian would not be bounded from below, nor would you get the correct discrete energy levels of bound states (e.g., for the hydrogen atom). Indeed, in standard quantum theory time is always a parameter describing the causal evolution of systems, and the time evolution is generated by the Hamiltonian of the system, which usually represents the energy of the system.
 
  • #3
Thanks for the reply!

vanhees71 said:
First of all, the uncertainty relation as stated has nothing to do with how accurately you can measure two observables, but with how accurately these two observables can be determined or "prepared". The position-momentum uncertainty relation means that if you prepare the particle in a (pure or mixed) state such that the position is very well determined, then the momentum is not well determined, and vice versa. This is clear from the probabilistic meaning of the state: ##\Delta x## and ##\Delta p## are the standard deviations of position and momentum calculated using the quantum-mechanical state the particle is prepared in.
I am not sure whether I should understand this as a subtlety or whether there is a deeper meaning. We could say that "measuring" is a way of "determining" the value of an observable. When you perform a measurement, the wave function collapses and the value of that observable is well determined, as the new state is an eigenstate of its operator (e.g., position). Afterwards, if you try to determine the momentum (directly by measurement or indirectly by other means), you are tied to the probability given by the coefficients of the state in the momentum basis.
So, there is a certain difference between "measuring" and "determining", but I think they are tightly related.

vanhees71 said:
To measure the standard deviations, you have to prepare an ensemble of particles, always in the same state, and then measure observable ##A## very accurately; the resolution of the measurement device must be much better than the expected ##\Delta A##. Then you have to prepare an ensemble of particles again in this same state and measure observable ##B## very accurately; the resolution of the measurement device must now be much better than the expected ##\Delta B##.
I get this. The standard deviation is an ensemble property, so you need a set of measurements: either measuring on an ensemble or measuring on one particle several times. This is precisely why I don't understand how Zettili is tackling his examples.

The book doesn't give more specific details in the problem statements, so I guess there should be a simple explanation.
 
  • #4
peguerosdc said:
When you perform a measurement, the wave function collapses and the value of that observable is well determined as the new state is an eigenstate of its operator (i.e. position).
That is how it works for observables with a discrete spectrum.

However, observables with a continuous spectrum such as position or energy/momentum of a free particle (what we're discussing here) don't work that way. The problem is that the position and momentum "eigenstates" are delta functions in one representation but infinite plane waves in the other, and these are neither normalizable nor physically realizable.

(For a good time, Google "rigged Hilbert space".)
 
  • #5
peguerosdc said:
How is the difference between two measurements equivalent to the standard deviation?

It is not. The standard deviation means that you prepare many independent systems in the same way, measure each of them in the same way, and estimate the standard deviation from the distribution of results.

Formally, there is also no energy-time uncertainty principle.

So the book is being very heuristic (overly heuristic here, I think, but that is perhaps a matter of taste). It is traditional to learn "old quantum theory", a hodgepodge of right and wrong ideas from before proper quantum theory was figured out. Some people, like me, still think it is useful to learn the lore, but not everyone agrees. If you find it irritating, just ignore it and stick to the formal theory (over time you may find that some of the lore can be derived as approximations).
 
  • #6
Thanks everyone for the replies!

Nugatory said:
That is how it works for observables with a discrete spectrum.

However, observables with a continuous spectrum such as position or energy/momentum of a free particle (what we’re discussing here) don’t work that way. The problem is that the position and momentum “eigenstates“ are delta functions in one representation but infinite plane waves in the other, and these are neither normalizable nor physically realizable.

Then, what would be the physical scenario here? I mean, I suspect the wave function must collapse, because when we measure/determine the position of, let's say, an electron, we find a well-defined position in space. So, in the momentum representation, does this mean the state is a superposition of all the sine waves (even if they are not normalizable), or is this not the case?

atyy said:
So the book is being very heuristic (overly heuristic here, I think, but that is perhaps a matter of taste). It is traditional to learn "old quantum theory", a hodgepodge of right and wrong ideas from before proper quantum theory was figured out. Some people, like me, still think it is useful to learn the lore, but not everyone agrees. If you find it irritating, just ignore it and stick to the formal theory (over time you may find that some of the lore can be derived as approximations).

I see! Hmm, I was hoping someone with more experience could shed some light on this, but I guess I'll have to live with it for the sake of my grades. Still, if someone knows the reasoning behind Zettili's examples, I'd appreciate any additional comments :smile: These are the kind of things that I fear could eventually come back and haunt me.
 
  • #7
peguerosdc said:
I suspect the wave function must collapse

Whether or not the wave function collapses is interpretation dependent. The most you can say with just the minimal math of QM is that you can treat the measurement as the preparation of a new state, whose state vector is the eigenvector corresponding to the measurement result.

See this Insights article (particularly rule 7) for more info and discussion:

https://www.physicsforums.com/insights/the-7-basic-rules-of-quantum-mechanics/
 
  • #8
peguerosdc said:
I see! Hmm, I was hoping someone with more experience could shed some light on this, but I guess I'll have to live with it for the sake of my grades. Still, if someone knows the reasoning behind Zettili's examples, I'd appreciate any additional comments :smile: These are the kind of things that I fear could eventually come back and haunt me.

Very roughly, energy in QM is represented by frequency or wavelength. To get an accurate measurement of a wavelength, one often needs several periods, which is a long duration. So the more accurately wavelength is measured, the longer the time needed for the measurement, and the less precise the time.
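A small numerical sketch of this heuristic (my own illustration, not from the thread): the power spectrum of a sinusoid observed for a finite time ##T## has a peak of width roughly ##1/T##, so the shorter the observation, the less precisely the frequency (and hence the energy ##E = hf##) is pinned down.

```python
import numpy as np

# Width of the spectral peak of a sinusoid vs. observation time T.
f0, fs = 5.3, 1000.0  # signal frequency and sampling rate, Hz
n_pad = 1 << 18       # zero-padded FFT length, to resolve the peak shape

for T in (0.5, 2.0, 8.0):  # observation windows, s
    t = np.arange(0.0, T, 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * f0 * t), n=n_pad)) ** 2
    freqs = np.fft.rfftfreq(n_pad, 1.0 / fs)
    # full width at half maximum of the peak, ~ 0.89 / T for a sinc^2 shape
    above_half = freqs[spectrum >= spectrum.max() / 2]
    width = above_half.max() - above_half.min()
    print(f"T = {T:4.1f} s  ->  peak width ~ {width:.3f} Hz")
```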

However, unlike position and momentum, time is not an operator in QM, so there is no energy-time uncertainty that is strictly analogous to position-momentum uncertainty. And yes, you can see that there is no mention of "standard deviation" in my heuristic above, which indicates that it is a sometimes useful misinterpretation of the mathematics.

There is a lengthy discussion in section 3 of "Quantum mechanics: Myths and facts" by Hrvoje Nikolic.
 
  • #9
peguerosdc said:
Similarly for the energy and time:

$$
\Delta E \Delta t \geq \hbar / 2
$$

But the two given examples don't seem to fit with that definition.

The energy example says that if we make two measurements of the energy of a system, and if these measurements are separated by a time interval ##\Delta t##, then the measured energies will differ by an amount ##\Delta E## which can in no way be smaller than ##\hbar / \Delta t##.
Now, this doesn't make sense to me ...
You're right, it makes no sense. Even if energy measurements are random, there is nothing to stop a second energy measurement at some later time from coincidentally being the same as the first. It makes no sense to say that one particular energy range is somehow subsequently forbidden just because you've measured that energy previously. And, for an energy eigenstate, all measurements over time should give the same value in any case.

In his QM book, Griffiths makes it clear that the energy-time uncertainty relation is certainly not this. The ##\Delta t## is the time it takes the system to change substantially.

Let's suppose we prepare a system in an energy eigenstate at time ##t = 0##, and ##\Delta E## is the uncertainty (standard deviation) for an energy measurement at time ##t = 0##. In this case ##\Delta E = 0## and hence ##\Delta t## is infinite, i.e. the system never changes. That makes sense.

On the other hand, if we have a system with a large uncertainty in energy at time ##t = 0##, then it takes only a short time for the system to change substantially (where the concept of "changing substantially" needs to be made more precise).
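To make "changing substantially" concrete, here is a minimal two-level sketch (my own, with made-up energies in natural units): for an equal superposition of two energy eigenstates, the overlap with the initial state is ##|\langle\psi(0)|\psi(t)\rangle|^2 = \cos^2(\Delta E \, t/\hbar)##, so the state becomes orthogonal to its initial self after ##t = \pi\hbar/(2\Delta E)##; the larger ##\Delta E##, the faster the change.

```python
import numpy as np

# Equal superposition of two energy eigenstates E1, E2; its energy
# standard deviation is dE = |E2 - E1| / 2. Natural units, hbar = 1.
hbar = 1.0
E = np.array([0.0, 2.0])        # two energy levels (arbitrary choice)
dE = abs(E[1] - E[0]) / 2       # energy spread of the superposition

c = np.array([1.0, 1.0]) / np.sqrt(2)   # state amplitudes at t = 0
for t in np.linspace(0.0, np.pi * hbar / (2 * dE), 5):
    psi_t = c * np.exp(-1j * E * t / hbar)      # phases evolve
    overlap = abs(np.vdot(c, psi_t)) ** 2       # survival probability
    print(f"t = {t:.3f}   |<psi(0)|psi(t)>|^2 = {overlap:.3f}")
```

An energy eigenstate is the ##\Delta E = 0## limit: the overlap stays 1 forever, matching the argument above.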
 
  • #10
peguerosdc said:
Thanks for the reply!

I am not sure whether I should understand this as a subtlety or whether there is a deeper meaning. We could say that "measuring" is a way of "determining" the value of an observable. When you perform a measurement, the wave function collapses and the value of that observable is well determined, as the new state is an eigenstate of its operator (e.g., position). Afterwards, if you try to determine the momentum (directly by measurement or indirectly by other means), you are tied to the probability given by the coefficients of the state in the momentum basis.
So, there is a certain difference between "measuring" and "determining", but I think they are tightly related.

I get this. The standard deviation is an ensemble property, so you need a set of measurements: either measuring on an ensemble or measuring on one particle several times. This is precisely why I don't understand how Zettili is tackling his examples.

It doesn't give more specific details on the problem statements so I guess there should be a simple explanation.
One should distinguish between the preparation of a system and the measurement of an observable. The preparation procedure puts the particle in a state, described by a statistical operator ##\hat{\rho}##. This defines the statistical properties of the outcomes of measurements: it tells you which observables are determined and with which standard deviations. I meant "determining" in the sense of preparation.

I have the impression that Zettili is very sloppy in his language and maybe confusing, particularly about this mind-boggling issue of the probabilistic meaning of the states.

For the uncertainty relation one has to distinguish between two things. The Heisenberg-Robertson uncertainty relation describes relations between the standard deviations of two observables and refers to the preparation of the system. It has nothing to do with the accuracy with which you can measure these observables, which is a property of the measurement device. To test these uncertainty relations you have to measure the observables with a resolution much better than the standard deviations you want to determine; from a theoretical point of view this is always possible, so it is only a technical question of how well you can measure the observables with your equipment. The misinterpretation of the uncertainty relation as a fundamental impossibility of measuring observables accurately, and as an unavoidable disturbance of the system by the measurement, goes back to Heisenberg himself. Bohr corrected this misunderstanding immediately after Heisenberg published his first paper on the uncertainty relation.

The disturbance of the system through measurement is the second, much more complicated question. It can be handled with more recent developments in measurement theory using positive operator-valued measures rather than the idealized von Neumann filter measurements usually discussed in textbooks. I'm not an expert in this; good sources are the papers and books by Busch, e.g.,

https://arxiv.org/abs/0706.3526
 
  • #11
peguerosdc said:
Then, what would be the physical scenario here? I mean, I suspect the wave function must collapse because when we measure/determine position of let's say an electron, we find a well defined position in space.
We do not get a single position; we get a more narrowly defined range of positions. Any device that measures the position of an electron is doing something equivalent to confining the electron in an infinite square well potential: the narrower the well, the smaller the uncertainty in position. However, the well always has non-zero width, so there is always some uncertainty, and the width of the wave packet representing the electron in position space will always be non-zero.

The ##\Delta{x}\Delta{p}## uncertainty relationship captures the way that as the position space wave function narrows, the momentum space representation widens and vice versa. Measurements of either are exercises in narrowing one while widening the other.
peguerosdc said:
So, in the momentum representation, does this mean the state is a superposition of all the sine waves (even if they are not normalizable), or is this not the case?
In both the position and the momentum representations, the physically realizable states are superpositions of all the sine waves.
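A small sketch of this last point (my own illustration, in natural units with ##\hbar = 1##): superposing plane waves ##e^{ikx}##, each individually non-normalizable, with Gaussian weights in ##k## produces a perfectly normalizable Gaussian wave packet.

```python
import numpy as np

# Build psi(x) = integral dk w(k) exp(i k x) with Gaussian weights w(k).
x = np.linspace(-10.0, 10.0, 2001)
k = np.linspace(-8.0, 8.0, 801)
dk = k[1] - k[0]
dx = x[1] - x[0]

sigma_k = 1.0                                  # momentum-space width
w = np.exp(-k**2 / (4 * sigma_k**2))           # Gaussian amplitudes in k
psi = (w[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0) * dk

prob = np.abs(psi) ** 2
prob /= (prob * dx).sum()                      # finite norm: normalizable
sigma_x = np.sqrt((x**2 * prob * dx).sum())    # position spread
print(f"sigma_x ~ {sigma_x:.3f}, sigma_x * sigma_k ~ {sigma_x * sigma_k:.3f}")
```

The product comes out as ##\sigma_x \sigma_k \approx 1/2##, i.e. a minimum-uncertainty packet, even though every individual plane wave in the superposition has no finite norm at all.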
 

What is the Heisenberg Uncertainty Principle?

The Heisenberg Uncertainty Principle (HUP) is a fundamental principle of quantum mechanics stating that a particle cannot simultaneously have both a sharply defined position and a sharply defined momentum. This means that there will always be a level of uncertainty in these properties, no matter how the particle is prepared.

What does "uncertainty" mean in the context of the HUP?

In the HUP, "uncertainty" refers to the inherent limitations in our ability to precisely measure certain properties of a particle. This uncertainty is not due to any flaws in our measurement tools, but rather is a fundamental aspect of the quantum world.

How does the HUP relate to the concept of determinism?

The HUP challenges the traditional concept of determinism, which states that the future state of a system can be predicted with complete accuracy based on its current state. The HUP suggests that there are inherent uncertainties in the behavior of particles at a quantum level, making it impossible to predict their exact behavior.

What are the implications of the HUP in practical applications?

The HUP has significant implications in fields such as quantum computing and cryptography, where precise measurements of particles are crucial. It also has implications in our understanding of the behavior of particles in the universe and the limitations of our knowledge about them.

Is the HUP accepted by all scientists?

The HUP is a well-established principle in quantum mechanics and is widely accepted by scientists. However, there are ongoing debates and research surrounding its interpretation and implications, and some scientists may have differing opinions on its significance.
