Heisenberg Uncertainty Principle and Gaussian Distributions

Summary
The discussion centers on the use of Gaussian distributions in the derivation of the Heisenberg Uncertainty Principle (HUP). Gaussian distributions are favored because they simplify the mathematics and yield the minimum product of the position and momentum uncertainties, namely deltaX deltaP = hbar/2. The choice of a Gaussian is not made for a physical reason but for its mathematical convenience, and because it aligns with the properties of wavefunctions in quantum mechanics. The conversation also touches on the general uncertainty principle applicable to any two non-commuting operators, emphasizing that the Gaussian is just one of many possible distributions. Ultimately, the participants express frustration over the lack of clarity regarding the historical context of Heisenberg's choice.
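As a quick numerical check of the summary's claim (a sketch in Python with hbar = 1; the grid, the Gaussian width, and the Lorentzian-like comparison shape are arbitrary choices, not from the thread): a Gaussian wave packet saturates deltaX deltaP = hbar/2, while a non-Gaussian shape exceeds the bound.

```python
import numpy as np

def uncertainty_product(psi, x):
    """Return sigma_x * sigma_p for a wavefunction sampled on the grid x (hbar = 1)."""
    dxg = x[1] - x[0]
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dxg)    # normalize
    prob_x = np.abs(psi)**2 * dxg                        # position probabilities
    mean_x = np.sum(x * prob_x)
    var_x = np.sum((x - mean_x)**2 * prob_x)
    # momentum-space amplitudes via FFT; p = 2*pi*f with hbar = 1
    p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dxg))
    prob_p = np.abs(np.fft.fftshift(np.fft.fft(psi)))**2
    prob_p = prob_p / np.sum(prob_p)                     # momentum probabilities
    mean_p = np.sum(p * prob_p)
    var_p = np.sum((p - mean_p)**2 * prob_p)
    return np.sqrt(var_x * var_p)

x = np.linspace(-40.0, 40.0, 4096)
gauss = np.exp(-x**2 / 4)            # Gaussian packet with sigma_x = 1
lorentz_like = 1.0 / (1.0 + x**2)    # an arbitrary non-Gaussian shape
print(uncertainty_product(gauss, x))         # ~0.5, i.e. exactly hbar/2
print(uncertainty_product(lorentz_like, x))  # ~0.71, strictly above hbar/2
```

Only the Gaussian hits 0.5; every other shape tried lands above it, consistent with the Gaussian being the minimizer.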
  • #31
Careful,

Please take my comments kindly.

You have said that the central limit theorem may be relevant. I did make some absurd comment about hats and coats related to that. I thought about it some more, and decided the following.

The Gaussian is used to model the distribution of classical measurements, since random errors creeping into the measurement of the real physical value (which is the expectation of the distribution) tend to be distributed normally. So if we make measurements of a classical observable, say position of some object, and we get x_1, x_2, ..., x_n we expect these to be distributed normally.
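A minimal sketch of this central-limit behaviour (the error sizes and counts below are arbitrary assumptions, not from the discussion): summing many small independent errors drives the measurement distribution toward a Gaussian, with the true value as its expectation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_x = 3.0                         # the "real physical value"
# each measurement = true value + the sum of 50 small independent error terms
errors = rng.uniform(-0.1, 0.1, size=(100_000, 50)).sum(axis=1)
measurements = true_x + errors

m = measurements - measurements.mean()
skew = (m**3).mean() / m.std()**3
excess_kurtosis = (m**4).mean() / m.std()**4 - 3
print(measurements.mean())           # ~3.0: the expectation is the true value
print(skew, excess_kurtosis)         # both ~0, as for a normal distribution
```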

Now let's consider the quantum case. If we measure x_1, x_2, ..., x_n, each time we put the system into the position eigenstates |x_1\rangle, |x_2\rangle, ..., |x_n\rangle. If the wavefunction were to be built up out of previous knowledge of the position of the object, we might well choose a Gaussian form for it. But, as we all well know, the form of the wavefunction should be the delta function (assuming the measurements were perfect).

Now if the measurements weren't perfect, and had their own classical error bounds which were to be distributed randomly, then the wavefunctions could be Gaussian. The only reason we would do this is to show that classical measurement errors had crept into our quantum state description. But the whole point of the HUP is to show that even for classically perfect measurements, there still exists an uncertainty in knowledge of momenta of a system, given that we had taken an ensemble of states and performed position measurements of them. Assuming classical measurement error would show that these are experimental issues in measurement, but the HUP is meant to demonstrate that the uncertainties are fundamental, and not experiment-related.

As you (and others) have said, the history of physics is important to physics. I have yet to see, however, how knowing why Heisenberg chose Gaussians for his initial analysis (given that he has apparently not stated himself why he chose them, which implies it was a relatively arbitrary choice) would help modern physics.
 
  • #32
Hi Masudr,

***
You have said that the central limit theorem may be relevant. I did make some absurd comment about hats and coats related to that. I thought about it some more, and decided the following. ***

I did not bother about your joke.

***
The Gaussian is used to model the distribution of classical measurements, since random errors creeping into the measurement of the real physical value (which is the expectation of the distribution) tend to be distributed normally. So if we make measurements of a classical observable, say position of some object, and we get x_1, x_2, ..., x_n we expect these to be distributed normally. ***

I do not see what this has to do with classical/quantum - in the latter I can add Gaussian noise as well.

***
Now let's consider the quantum case. If we measure x_1, x_2, ..., x_n, each time we put the system into eigenstates of position |x_1\rangle, |x_2\rangle, ..., |x_n\rangle. ***

Perfect measurements do not exist in quantum gravity, so let's not speak about them. If you do a thought experiment, then pick out a physical set up, not one where formalism wins it over modelism.

***
If the wavefunction was to be built up out of previous knowledge of the position of the object, we may well choose a Gaussian form for it (i.e. the wavefunction). ***

The pre-measurement wavefunction is usually not Gaussian, since interactions need to be included; see for example the double-slit experiment. Depending on your approach to the measurement problem, you either collapse the wavefunction or not.


** Now if the measurements weren't perfect, and had their own classical error bounds which were to be distributed randomly, then the wavefunctions could be Gaussian. The only reason we would do this is to show that classical measurement errors had crept into our quantum state description. But the whole point of the HUP is to show that even for classically perfect measurements, there still exists an uncertainty in knowledge of momenta of a system, given that we had taken an ensemble of states and performed position measurements of them. ***

Well, if you mean to say that I cannot know the momentum of every particle separately prior to t=5 (and, by continuity, at t=5), given that I make a perfect position measurement at t=5 and have a prior bias towards the original wave function, then I might disagree with you. Obviously, one has to be careful about what one means by momentum here, since the equations of motion for the ``particles'' are first order, dx/dt = f(x,t), and not second order (so forces are velocity dependent and one has no conserved Hamiltonian). The naive Lagrangian for such a system would be (m/s) int( x dx/dt - V(x,t) ) dt, where V(x,t) = - int_{0}^{x} f(y,t) dy; hence the momentum would be ``m x/s'' itself, where m is the mass and s is some timescale (this is simply a consequence of the absence of the traditional kinetic term). Anyway, you can develop an entire story of classical quantum systems in this way. So basically, I know the path, hence the velocity, the ``energy'' and so on; the mass * velocity might be thought of as ``momentum'', given that the particle is assumed to be free prior to t=5 (although the orbit does not satisfy d^2x/dt^2 = 0).

Alternatively, in the Bohm-de Broglie approach one has m d^2x/dt^2 = - d/dx (V(x) + Q(x,t)), where Q(x,t) is the quantum potential. In case Q depends only on x (actually this is not important at all; just take the Lagrangian int ( (m/2) (dx/dt)^2 - V(x) - Q(x,t) ) dt, which gives momentum m dx/dt), this gives the first integral (m/2) (dx/dt)^2 + V(x) + Q(x), where Q(x) represents the quantum correction to the energy; hence the momentum here is simply m dx/dt.

Now, I have not thought this entirely through, but you might want to add a quantum description of the measurement apparatus and follow the pointer states. Say at t=5 the pointer says x=2; then, taking into account the reaction speed T of the apparatus and so on, you can determine the position of the particle with an accuracy of roughly Tv, where v is the ``typical'' speed of the particles in the ensemble.
Anyway, in this sense it seems hard to me to get an accuracy on x and ``p'' which goes below the Heisenberg uncertainty bound. Of course, you can further restrict to those states which are indeed classical, i.e. which have an ensemble interpretation in terms of a flow of particles satisfying the ordinary second-order Newtonian laws of motion. That is, if one restricts to potentials which are at most quadratic in the position variable (see Moyal), then the evolution equation for the Wigner function of a state of the quantized system coincides with the classical Liouville equation (for higher-order potentials, extra quantum corrections are added). This does not imply, of course, that the Wigner function needs to be positive; that is only so in a few cases.

So, in this case, one can just say that quantum mechanics is nothing but classical physics with inadequate initial information about the positions and momenta. Hence, in this philosophy, one can pose the question for single events at the classical level: can one, in principle, measure the position and momentum of a classical particle at the same instant of time? Even so, this does not conflict with the Heisenberg inequalities, which, as you point out, are merely mathematical inequalities at the level of the *statistics* (there are many particles with the same position but different momenta); the latter merely imply that a delta function in position space cannot be a delta function in momentum space too (trivial Fourier analysis).
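The closing Fourier remark can be illustrated in a few lines (a sketch; the grid size is an arbitrary choice): a discrete spike in position space, the closest analogue of a delta function, has a perfectly flat momentum-space amplitude, the opposite of a spike.

```python
import numpy as np

n = 256
spike = np.zeros(n)
spike[n // 2] = 1.0                   # discrete "delta function" in position space
amp = np.abs(np.fft.fft(spike))       # momentum-space amplitudes
print(amp.min(), amp.max())           # equal: the spectrum is completely flat
```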

**Assuming classical measurement error would show that these are experimental issues in measurement, but the HUP is meant to demonstrate that the uncertainties are fundamental, and not experiment-related.**

These uncertainties are fundamental in the sense that they show up in the statistics.

***
As you (and others) have said, the history of physics is important to physics. I have yet to see, however, how knowing why Heisenberg chose Gaussians for his initial analysis (given that he has apparently not stated himself why he chose them, which implies it was a relatively arbitrary choice) would help modern physics. ***

Haha, my expression there was not in particular referring to *this* specific example and of course I do not know why Heisenberg made this guess either (I made a suggestion).

Anyway, what I wanted to *suggest* by going to the Gaussian is that one might want to see the failure of the particle distribution in the double-slit experiment to be (more or less) two separate Gaussians as a negation of the presumed independence of the different single events (which is a crucial assumption in the central limit theorem). After all, the interactions due to the plate (if one does not take into account a ZPF radiation) do not really influence the statistics of the particles that go through in the first place (they are jolly free), given that I take a CLASSICAL point of view on this experiment. This would bring us to considerations about polarizable media and so on (which has also been proposed as a solution to the EPR paradox), but I think I am going to refrain from further comments here, given that some might not see the connection.

I am not sure if this was a direct concern to Heisenberg but I would be surprised if people did not think in this way about the violation of Gaussianity in experiments where ``particles'' are presumably free at the time. That is what I mean by knowing your history.

Careful
 
  • #33
Well, if RogerPink is not with us anymore, then I'm writing this for nothing, but something intrigues me:


RogerPink said:
Thanks for the advice. The advice I received from my friends was "don't go to forums". I think I'm going to take their advice. I just wanted to point out, though, that I posted an answer to my question in my previous post, which was posted before your post was. It says that it's been shown that hbar/2 is an invalid lower boundary for deltaX deltaP. Heisenberg never wrote this; Kennard did, and he made an assumption (which has been proven to be incorrect) that Gaussian distributions could be used.

I think there is something wrong with the above statement. Not about the historical facts (of which I do not know the details, and hence cannot comment), but about the supposed statement that it has been established that having hbar/2 as a lower boundary is erroneous: there is a simple proof, not assuming a Gaussian or any other distribution, which indicates that this IS the lower boundary.
So how is this statement to be understood?

I read your reference
http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=AJPIAS000070000010000983000001&idtype=cvips&gifs=yes

but I'm afraid it is rather pointless, because it argues against taking the *standard deviation* as a measure of "uncertainty".

Sure, if you take percentiles, or you take FWHM, you will find other results, but that's semantics. The Heisenberg uncertainty principle tells us something about standard deviations. So let's not call it "the Heisenberg Uncertainty principle", but the "Heisenberg standard deviation principle".

In my own field these discussions come up frequently too: what should one take as a "measure of spread" ? My point is always that standard deviation is a good measure, because it is quadratically additive without knowing the underlying distribution. That is: if you have two sources of spread which can be assumed to be independent, then you know you can sum the standard deviations quadratically and you'll have the standard deviation of the result. I know of no other measure of "spread" which has this property: a simple rule of combination, *independent of the given distributions*. It has nothing to do with quantum theory, but just a property of convolution and second order moments.
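A minimal sketch of this quadrature-addition property (the two noise distributions below are arbitrary, deliberately non-Gaussian choices): for independent sources of spread, the standard deviation of the sum equals the quadratic sum of the individual standard deviations, with no assumption about the underlying distributions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
a = rng.uniform(-1.0, 1.0, N)        # one source of spread (uniform)
b = rng.exponential(2.0, N)          # an independent source (exponential)
s = a + b                            # combined result

quadrature = np.sqrt(a.std()**2 + b.std()**2)
print(s.std(), quadrature)           # agree, despite neither source being Gaussian
```

The same experiment with percentiles or FWHM as the "measure of spread" would not combine by any such distribution-independent rule, which is the point being made above.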
 
  • #34
vanesch said:
I think there is something wrong with the above statement. Not about the historical facts (of which I do not know the details, and hence cannot comment), but about the supposed statement that it has been established that having hbar/2 as a lower boundary is erroneous: there is a simple proof, not assuming a Gaussian or any other distribution, which indicates that this IS the lower boundary.
So how is this statement to be understood?

I need to stop coming here but I'll try to address your question, maybe you can help me understand this.

You mention above there is a simple proof to this. As there are several proofs I'm going to assume you mean the Commutator one where:

deltaX deltaP >= (1/2) |<[x,p]>|

Clearly this gives us h-bar/2 as our minimum boundary. But what assumptions (if any) were necessary in the derivation of the relation:

deltaA deltaB >= (1/2) |<[A,B]>|

Are the deltaAdeltaB in this equation the standard deviation of operators or observables? (I don't know, but you might and I would like to know for sure)

Could the assumption of a normal distribution have snuck in in the choices for A' and B' in the derivation or somewhere else? (Again I'm asking, I really don't know).

What spurred me to ask the question was that I read that Kennard assumed Gaussians for the error distributions and used the standard deviation to come up with the exact inequality (at the time I thought it was Heisenberg who did). Further reading showed me that Heisenberg and the Copenhagen crew simply said it was proportional to hbar. So the question was raised in me whether deltaX deltaP >= hbar/2 is a property of the observables themselves, so that the system being observed doesn't matter, or whether the lower boundary will vary from system to system. That's where I'm at at the moment. I wouldn't mind help.
 
  • #35
*** I need to stop coming here but I'll try to address your question, maybe you can help me understand this.

You mention above there is a simple proof to this. As there are several proofs I'm going to assume you mean the Commutator one where:

deltaX deltaP >= (1/2) |<[x,p]>|

Clearly this gives us h-bar/2 as our minimum boundary. But what assumptions (if any) were necessary in the derivation of the relation:

deltaA deltaB >= (1/2) |<[A,B]>|

Are the deltaAdeltaB in this equation the standard deviation of operators or observables? (I don't know, but you might and I would like to know for sure)

Could the assumption of a normal distribution have snuck in in the choices for A' and B' in the derivation or somewhere else? (Again I'm asking, I really don't know).

***

Your question has already been answered twice (or more). You only need A, B to be Hermitian operators, and psi can be any wavefunction whatsoever (it does not need to be an eigenstate of the Hamiltonian or of anything else). delta A = ( <A^2> - <A>^2 )^{1/2}; the proof is a simple mathematical exercise given in all introductory courses on QM.
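A quick numerical check of this statement (a sketch; the dimension and random seed are arbitrary choices): for random Hermitian A, B and a random normalized state psi, the generalized relation deltaA deltaB >= (1/2)|<[A,B]>| holds with no distributional assumption anywhere.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

def rand_herm(n):
    """Random n x n Hermitian matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = rand_herm(n), rand_herm(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)           # an arbitrary normalized state

def expval(O):
    return psi.conj() @ O @ psi      # <psi|O|psi>

def delta(O):
    # standard deviation (<O^2> - <O>^2)^{1/2}, real for Hermitian O
    return np.sqrt(expval(O @ O).real - expval(O).real**2)

lhs = delta(A) * delta(B)
rhs = 0.5 * abs(expval(A @ B - B @ A))
print(lhs >= rhs)                    # prints True
```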
 
  • #36
Never Mind.
 
  • #37
Now, apparently no one has understood what the OP is talking about (post #28). If there is still an issue to be resolved here, then a re-phrasing of the question would be helpful.
Perhaps you might explain the specifics of my lack of understanding. Thank you. Reilly Atkinson
 
  • #38
reilly said:
Perhaps you might explain the specifics of my lack of understanding. Thank you. Reilly Atkinson

Eek. I'm very sorry, I meant post #27 not post #28.
 
  • #39
I don't think I'm cut out for the forum format. Being in a better mood, I think that rather than storm off like a child, I'll give you guys a play-by-play as I try to answer my own question (or as my question evolves). Maybe if you see what I'm looking at, you'll have a better idea of what I'm trying to find out.

http://arxiv.org/PS_cache/quant-ph/pdf/0210/0210044.pdf

An interesting paper, but it contradicts what I thought Kennard did. According to this paper, Kennard generalized the uncertainty relation to all distributions. The paper reformulates the uncertainty principle and lists possible violations.
 
  • #40
Here is a great paper that has answered some of my questions.

Generalized Uncertainty Relations Phys Rev. A vol 35 pg 1486

And just so I'm clear here, I'm no longer asking a question of the forum, I'm just posting things I found helpful in my search for a clearer understanding of the limits of the uncertainty relation. If this is not an appropriate use of the forum, I won't be offended if this thread is killed.
 
  • #41
RogerPink said:
Here is a great paper that has answered some of my questions.

Generalized Uncertainty Relations Phys Rev. A vol 35 pg 1486

And just so I'm clear here, I'm no longer asking a question of the forum, I'm just posting things I found helpful in my search for a clearer understanding of the limits of the uncertainty relation. If this is not an appropriate use of the forum, I won't be offended if this thread is killed.
Great, so why did you not simply ask about gravitational modifications of the uncertainty principle? You have to be careful about what you mean here, since [x,p] = i hbar is valid by definition. In Newtonian gravity coupled to the Schrödinger equation, you are not going to get anything new (what is done in these papers is a classical analysis of error propagation): the momentum here is still the free Euclidean momentum m dx/dt. Moreover, in order to import the Planck scale you need G, c and hbar, that is, at least a relativistic quantum theory coupled to a gravitational background.

In that case, choose a particular coordinate system as well as some state, and you will see that the kinetic term (the mass) receives gravitational corrections. Hence the correct momentum deviates from the ``free'' momentum, just as occurs in gauge theories. So it is obvious that corrections to the uncertainty relations arise for the ``free'' momentum m dx/dt, which you can guess by dimensional analysis.

So, both the question as well as the answer seem to be fairly trivial (we did not need to go into the meaning of the Heisenberg inequalities for that at all, neither about why Heisenberg used a Gaussian to start with !).

Careful
 
  • #42
data tells

RogerPink said:
The uncertainty equation is equal to h-bar over 2 and as I understand it, the 2 comes from the minimum standard deviation for a gaussian distribution. Which is to say the relation would be different if the error for position and momentum were represented by a different kind of distribution. Was there a physical reason for this choice of distribution or did this type of distribution just fit the data. Considering the precision to which Quantum Mechanics has been tested, the gaussian distribution is obviously correct, I'm just wondering if there was a physical reason he chose it.

Isn't this a physical reason: that the data tells us why it is?

Everything starts from here: suppose we have a particle between two walls an infinite distance apart, and we ask what the momentum is. Then we bring the two walls very near to each other, and again ask what the momentum is. In between, all we know is that the particle is between the two walls, and we have measured the distance between the walls. This will produce a normal distribution, whose curve, if we draw it, resembles a bell; so it is also called the bell curve.
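As a numerical aside on this wall picture (a sketch with hbar = 1; the wall separations are arbitrary choices): the ground state of a particle between two walls does give a bell-shaped position density, although not an exact Gaussian, and its uncertainty product is the same for every wall separation L, sitting strictly above hbar/2.

```python
import numpy as np

products = []
for L in (1.0, 0.1):                                  # two wall separations
    x = np.linspace(0.0, L, 100001)
    dx = x[1] - x[0]
    psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)    # infinite-well ground state
    prob = psi**2 * dx
    mean_x = np.sum(x * prob)
    sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob))
    sigma_p = np.pi / L       # <p> = 0, <p^2> = (pi/L)^2 for the ground state
    products.append(sigma_x * sigma_p)

print(products)   # ~0.568 for both L: above hbar/2 = 0.5, independent of L
```

Narrowing the walls widens the momentum spread in exact proportion, so the product never dips below the Heisenberg bound.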
 
