# Heisenberg Uncertainty Principle and Gaussian Distributions

1. Aug 29, 2006

### RogerPink

I was reading about the derivation of the Heisenberg Uncertainty Principle and how Heisenberg used Gaussian Distributions to represent the uncertainty of position and momentum in his calculation. Why is it that Gaussian Distributions were used? There are many different types of distributions out there, why this kind in particular?

2. Aug 30, 2006

### Meir Achuz

The Gaussian is easy to do mathematically, and it turns out that the FT of a Gaussian is a Gaussian. It also turns out that the Gaussian has the minimum product of dxdp (as usually defined). For this reason, the HUP is stated as an inequality. dxdp=hbar/2 only for the Gaussian.
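For reference, here is the standard calculation behind that statement; this is textbook material rather than anything specific to Heisenberg's original paper. Take a Gaussian packet of width $$\sigma$$,

$$\psi(x) = (2\pi\sigma^2)^{-1/4}\, e^{-x^2/4\sigma^2},$$

whose Fourier transform is again a Gaussian, $$\tilde\psi(k) \propto e^{-\sigma^2 k^2}$$. Then $$|\psi(x)|^2$$ has standard deviation $$\Delta x = \sigma$$ and $$|\tilde\psi(k)|^2$$ has $$\Delta k = 1/2\sigma$$, so with $$p = \hbar k$$,

$$\Delta x\,\Delta p = \sigma\cdot\frac{\hbar}{2\sigma} = \frac{\hbar}{2}.$$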

3. Aug 30, 2006

### RogerPink

I'm still confused

The uncertainty product is equal to h-bar over 2, and as I understand it, the 2 comes from the minimum standard deviation for a Gaussian distribution, which is to say the relation would be different if the errors in position and momentum were represented by a different kind of distribution. Was there a physical reason for this choice of distribution, or did this type of distribution just fit the data? Considering the precision to which quantum mechanics has been tested, the Gaussian distribution is obviously correct; I'm just wondering if there was a physical reason he chose it.

4. Aug 30, 2006

### masudr

It's better (in my opinion) to show that for any two operators which don't commute, there exists a corresponding uncertainty principle in the pair of observables those operators represent. In this case, you don't need to worry about specifics, as the result is fairly general.

5. Aug 30, 2006

### RogerPink

I'm sorry but that doesn't really answer my question at all. To phrase my question another way, in the equation:

$$\Delta x \, \Delta p = \hbar/2$$

Where does the 2 come from and why?

6. Aug 30, 2006

### Meir Achuz

The 2 comes from the FT of a Gaussian.
Do it yourself. The math is fairly simple.
The HUP is usually written as "greater than or equal".
Heisenberg picked the Gaussian for the two reasons I gave.
Given any spatially confined wave function, dx dp (suitably defined) can be calculated by FT. If it is not Gaussian, dx dp will be greater than hbar/2.
I'm outta here now.
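If anyone wants to check that last point numerically rather than take it on faith, here is a small sketch (my own illustration, in units where hbar = 1): it computes $$\Delta x\,\Delta p$$ on a grid, using the FFT for the momentum side, for a Gaussian packet and for a non-Gaussian (Lorentzian-shaped) one. The Gaussian should land essentially on 0.5; the other should come out above it.

```python
import numpy as np

HBAR = 1.0  # work in units where hbar = 1

def uncertainty_product(psi, x):
    """Return Delta_x * Delta_p for a wave function sampled on a uniform grid."""
    dx = x[1] - x[0]
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize in position space

    prob_x = np.abs(psi)**2
    mean_x = np.sum(x * prob_x) * dx
    var_x = np.sum((x - mean_x)**2 * prob_x) * dx

    # Momentum-space amplitude via FFT; |phi(k)|^2 is unaffected by the grid's phase factor.
    phi = np.fft.fft(psi)
    k = 2.0 * np.pi * np.fft.fftfreq(len(x), d=dx)
    prob_k = np.abs(phi)**2
    prob_k /= prob_k.sum()                              # discrete normalization suffices for moments
    mean_k = np.sum(k * prob_k)
    var_k = np.sum((k - mean_k)**2 * prob_k)

    return np.sqrt(var_x) * HBAR * np.sqrt(var_k)

x = np.linspace(-40.0, 40.0, 4096)
candidates = {
    "Gaussian":   np.exp(-x**2 / 4.0),   # sigma_x = 1, should saturate the bound
    "Lorentzian": 1.0 / (1.0 + x**2),    # spatially confined but not Gaussian
}
for name, psi in candidates.items():
    print(f"{name:10s}  dx*dp = {uncertainty_product(psi, x):.4f}   (hbar/2 = 0.5)")
```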

7. Aug 30, 2006

### RogerPink

Not impressed with this forum

You have basically answered my question by saying "because the math works out." I've seen the derivation (why do you think I'm asking the question?). I'll see you guys in the literature; this forum is a joke.

8. Aug 30, 2006

### franznietzsche

No, it is not equal. The general uncertainty principle for any two Hermitian operators $$\hat A$$, $$\hat B$$ is

$$\Delta A \, \Delta B \ge \frac{1}{2}\left|\left\langle[\hat A, \hat B]\right\rangle\right|$$

This is a provable fact for any two Hermitian operators on a Hilbert space, regardless of the wave functions involved. You do not have to make any assumptions about the wave functions, except that they are in the Hilbert space.

see:

http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/GenUncertPrinciple.htm

In deriving the general uncertainty principle, no assumptions are made about the wave functions.
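For completeness, the usual derivation (the Robertson inequality; I believe this is essentially what the linked page does) runs roughly as follows. Set $$\hat A' = \hat A - \langle A\rangle$$ and $$\hat B' = \hat B - \langle B\rangle$$; then by the Cauchy-Schwarz inequality,

$$(\Delta A)^2(\Delta B)^2 = \langle \hat A'^2\rangle\,\langle \hat B'^2\rangle \;\ge\; \left|\langle \hat A'\hat B'\rangle\right|^2 \;\ge\; \left|\mathrm{Im}\,\langle \hat A'\hat B'\rangle\right|^2 = \tfrac{1}{4}\left|\langle[\hat A,\hat B]\rangle\right|^2,$$

and taking square roots gives the inequality above. For $$\hat x$$ and $$\hat p$$, with $$[\hat x,\hat p]=i\hbar$$, the right-hand side is $$\hbar/2$$. Nothing about the shape of the wave function enters; the Gaussian only shows up when you ask which states make the inequality an equality.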

There is no physical reason to have used the Gaussian distribution in initially finding the uncertainty principle; it's just the easiest to work with, and it happens to be the distribution that gives the minimal uncertainty.

You won't get far in physics with an attitude like that. Clearly you didn't understand what masudr said at all, and you haven't seen the proper derivation of the general uncertainty principle. The uncertainty principle is a mathematical theorem that applies to any two Hermitian operators on a Hilbert space. If the mathematical assumptions that lead up to it apply to reality, then it applies to reality. It seems that it does. But there is no physical reason behind it; it's a math theorem. Welcome to the world of theoretical physics.

9. Aug 30, 2006

### RogerPink

This will be my last thread on this forum, but in the interest of professionalism, I would like to resolve my question before I go. My question was prompted by the following historical account of the derivation of the Uncertainty Principle found on Wikipedia. It reads:

Heisenberg did not just use any arbitrary number to describe the minimum standard deviation between position and momentum of a particle. Heisenberg knew that particles behaved like waves and he knew that the energy of any wave is the frequency multiplied by Planck's constant. In a wave, a cycle is defined by the return from a certain position to the same position such as from the top of one crest to the next crest. This actually is equivalent to a circle of 360 degrees, or 2π radians. Therefore, dividing h by 2π describes a constant that when multiplied by the frequency of a wave gives the energy of one radian. Heisenberg took ½ of ħ as his standard deviation. This can be written as ħ over 2 as above or it can be written as h/(4π). Normally one will see ħ over 2 as this is simpler.

Two years earlier in 1925 when Heisenberg had developed his matrix mechanics the difference in position and momentum were already showing up in the formula. In developing matrix mechanics Heisenberg was measuring amplitudes of position and momentum of particles such as the electron that have a period of 2π, like a cycle in a wave, which are called Fourier series variables. When amplitudes of position and momentum are measured and multiplied together, they give intensity. However, Heisenberg found that when the position and momentum were multiplied together in that respective order or in the reverse order, there was a difference between the two calculated intensities of h/(2π). In other words, the two quantities position and momentum did not commute. In 1927, to develop the standard deviation for the uncertainty principle, Heisenberg took the gaussian distribution or bell curve for the imprecision in the measurement of the position q of a moving electron to the corresponding bell curve of the measured momentum p.

Please note the last sentence, which says Heisenberg took the Gaussian distribution or bell curve for the imprecision in the measurement of the position q of a moving electron... My question is: why would he do that? Is there a physical reason to expect a Gaussian distribution? That's all I want to know. I'm not some quack trying to rewrite physics; I'm just curious about the history.

I find this forum condescending and insulting. I'm doing research and publishing. You can use my name and look it up (Roger H Pink / Roger Pink). I understand that the Fourier transform of a Gaussian is a Gaussian. I understand that Fourier transforms can be used to derive the uncertainty relation. Neither of these facts tells me the physical reason behind the choice.

10. Aug 30, 2006

### masudr

Well joke or not, QM is a very serious subject. For more details of what I'm talking about, see Shankar, Principles of QM, pgs. 237-239.

Two operators that don't commute have a minimum uncertainty product: the product of the uncertainties in that pair of observables is bounded below by half the magnitude of the expectation value of their commutator, which for position and momentum is $\hbar/2.$ Note that this has nothing to do with Gaussians. It even gives you the kind of state which will have the minimum value in equations (9.2.15), and it still doesn't specify that they must be Gaussians.
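If it helps, the standard statement of that minimum-uncertainty condition (I'm paraphrasing the usual textbook result rather than quoting Shankar's equation) is that equality holds only for states satisfying

$$(\hat B - \langle B\rangle)\,|\psi\rangle = i\lambda\,(\hat A - \langle A\rangle)\,|\psi\rangle$$

with $$\lambda$$ real. For $$\hat A = \hat x$$ and $$\hat B = \hat p$$ this becomes a first-order differential equation in the position representation,

$$-i\hbar\,\psi'(x) - \langle p\rangle\,\psi(x) = i\lambda\,(x - \langle x\rangle)\,\psi(x),$$

whose normalizable solutions (for $$\lambda > 0$$) are exactly the Gaussian wave packets. So the Gaussian is not assumed; it falls out of the general theorem as the unique minimizer.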

EDIT: I started typing this (then took a long break) before franznietzsche's post.

Last edited: Aug 30, 2006
11. Aug 30, 2006

### masudr

If all you want to know is why he would use the Gaussian, and what's the physical reason, then this may help. Firstly, the ground state of the harmonic oscillator is a Gaussian. That's as good a reason as any to try the Gaussian. Secondly, he had to try some function, and why not the Gaussian? Any choice would have you asking the same question.

You're entitled to your opinions. I find this forum very useful. Many of its regulars are people much smarter than me, and when I ask a question, I expect condescending answers. Remember, these people haven't done courses in teaching.

I'm happy for you. I'm not researching nor publishing, merely an undergraduate. Just because you publish, you shouldn't expect special treatment; the fact that you are publishing and in research is largely irrelevant. You shouldn't take the internet personally.

Last edited: Aug 30, 2006
12. Aug 30, 2006

### RogerPink

I think what you're trying to say here is that you don't know. You make some good guesses, but you don't really provide an answer; you just say what you would do.

I'm very good at physics, and I certainly don't need people who don't understand my question insulting me. This isn't a math forum, so it's reasonable to ask for the physical meaning of mathematical choices. I was just hopeful that on a physics forum there might be someone who knew the history behind Heisenberg's derivation. Instead I got a bunch of guys yelling at me about basic quantum mechanics.

13. Aug 30, 2006

### Careful

My guess is that it has something to do with the central limit theorem in statistics (which was rigorously proven in 1901 and well known already in the 18th century).

Careful

14. Aug 30, 2006

### RogerPink

That's interesting. I don't know much about it, so I'll give it a read. One thing I noticed was this:

"The Central Limit Theorem which states that if the sum of the variables has a finite variance, then it will be approximately normally distributed."

But of course we are talking about a product, not a sum, so I'm not sure. Still, at least your answer:

a) doesn't assume I don't know basic quantum mechanics
b) doesn't assume I don't know math

So thanks for that.

15. Aug 30, 2006

### RogerPink

So I read some more and found the following:

"The central limit theorem tells us what to expect about the sum of independent random variables, but what about the product? Well, the logarithm of a product is simply the sum of the logs of the factors, so the log of a product of random variables tends to have a normal distribution, which makes the product itself have a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the product of different random factors, so they follow a log-normal distribution."

According to this, wouldn't he have used a log-normal distribution instead of a Gaussian distribution? Does it make a difference in terms of standard deviation?
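Not an answer to the historical question, but here is a quick sanity check on the quoted statement (my own toy simulation): the sum of many independent positive factors comes out roughly symmetric and normal-looking, while their product is visibly skewed, and only its logarithm looks normal again.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_samples = 50, 100_000

# Independent positive random factors (uniform on [0.5, 1.5] is an arbitrary choice).
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))

sums = factors.sum(axis=1)        # central limit theorem: approximately normal
products = factors.prod(axis=1)   # approximately log-normal
log_prod = np.log(products)       # ... so its log should again look normal

def skewness(data):
    """Third standardized moment; zero for a perfectly symmetric (e.g. normal) distribution."""
    centered = data - data.mean()
    return np.mean(centered**3) / data.std()**3

for name, data in [("sum", sums), ("product", products), ("log(product)", log_prod)]:
    print(f"{name:13s} skewness = {skewness(data):+.3f}")
```

Whether any of this bears on Heisenberg's choice is a separate question; it only illustrates the sum-versus-product distinction above.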

16. Aug 30, 2006

### masudr

Two different subjects there...

P.S. I'd eat my hat and coat if it had anything to do with the central limit theorem: that says that the sum of $N$ independent random variables tends to the normal distribution as $N \rightarrow \infty$; why a wavefunction should be such a sum is anybody's guess.

17. Aug 30, 2006

### chroot

Staff Emeritus
As has been said, the uncertainty product for two non-commuting operators such as position and momentum is not equal to h-bar/2; it is greater than or equal to h-bar/2. The Gaussian distribution is the "best" in this regard, because it achieves this minimum uncertainty. You are free to carry on using any other kind of distribution you want, but you will not achieve this minimal uncertainty with anything but the Gaussian.

That's the reason it's commonly used -- it achieves the minimum uncertainty. That's all.

- Warren

18. Aug 30, 2006

### Careful

I don't know, but I want to see you eating your hat (you can have your coat). As you know, the Gaussian is the only attractor for the convolution product in the space of all probability measures. Therefore, the most natural thing is to expect |psi|^2 to be Gaussian, which determines psi up to a local phase. What chroot says is well known; I could also add that the so-called coherent (and vacuum squeezed) states are the only classical states in QFT, as well as the only ones which saturate the uncertainty bound (and yes, they are all Gaussian). But I am afraid that in the 1920s this was of no concern at all (for example, QFT did not exist yet).

There is a deeper issue related to this remark which has to do with the meaning of statistics, but I shall not get into this now.

BTW, it is of crucial importance to know the HISTORY of the field in order to do good PHYSICS; the two hang very tightly together.

Careful

Last edited: Aug 30, 2006
19. Aug 30, 2006

### masudr

I think you mean relevant history, as different parts of physics may share principles but are often unrelated. Besides, what filled hundreds of pages of last century's physics can be summarised in a few lines today. The history of physics is not as important as many people make out.

20. Aug 30, 2006

### RogerPink

OK Warren, so assuming what you say is correct and that h-bar over 2 is the minimum value that can be calculated for all distributions, what would the Uncertainty Relation look like if log normal distributions were used instead of Gaussians?

And for everyone on this thread, for the last time: everyone here knows that it's an inequality. Everyone here knows that there is a position operator and a momentum operator. Everyone here knows xp - px = i h-bar, so please stop saying it. The original derivation was an expression of inherent uncertainty in the measurement of a system. I'm just trying to understand his reasoning. Heisenberg was literally talking about error when he wrote delta x, just like an experimentalist would. He chose to represent the distribution of that error as a Gaussian, which then leads to the over-2 part of the expression (which comes from the standard deviation of the Gaussian). Different distributions would produce different standard deviations, but this one obviously produced results that agreed with experiment. So how did he know to use it? Is there some sort of statistical rule that says these types of parameters have error distributions like Gaussians?

21. Aug 30, 2006

### chroot

Staff Emeritus
Well, as was mentioned earlier, the central limit theorem says that the sum of many independent random processes tends toward a Gaussian distribution. As a result, many naturally occurring random processes have essentially Gaussian distributions. When anyone models a random process, it makes the most sense to just start with the Gaussian -- unless you know something more specific about the process a priori.

For example, if you had to guess at a model of the jitter of an electronic oscillator, you'd do well to assume it's pretty much Gaussian. The jitter of a physical oscillator is composed of noise contributions from many random processes all added together, and by the central limit theorem the result tends to be Gaussian.

- Warren

22. Aug 30, 2006

### RogerPink

Chroot, please see my earlier response to your central limit suggestion.

Wow, my question was better than I thought. I received some responses from other boards. It turns out that using a Gaussian standard deviation to produce an exact lower limit for the uncertainty relation was:

1. Not done by Heisenberg but by Kennard afterwards
2. Proven to be an incorrect method for determining the lower limit. You can't just assume the error is Gaussian; it depends on the physical system involved.

Here are the links that provide this information.

http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=AJPIAS000070000010000983000001&idtype=cvips&gifs=yes [Broken]

http://plato.stanford.edu/entries/qt-uncertainty/

Last edited by a moderator: May 2, 2017
23. Aug 31, 2006

### Careful

This is the best confession of a lack of knowledge I have ever seen. Moreover, what was disconnected fifty years ago may be "entangled" next year. What is considered irrelevant now may have been important 40 years ago and might revive again next decade. That is how science works, and why understanding the reasons for our choices today is important. What we learn today is just a drop on a plate of interesting ideas which were conceived last century.

Last edited: Aug 31, 2006
24. Aug 31, 2006

### masudr

*sigh*

Have you read Maxwell's original treatise on EM? It's a fairly dry (and useless) read. The modern formulation is a hundred times better.

And please, for your own sake, don't make personal jibes at someone on an anonymous internet forum.

25. Aug 31, 2006

### Careful

Well, I was talking about the *previous* century, not the 19th (Maxwell died in 1879) - and no, I did not read this treatise. In that respect, I can say that the original treatment of tensor calculus by Schouten is still very instructive; that the original papers by Dirac, Feynman and others on quantum field theory (and their worries), the work of Moyal, Wigner and others on the possibility of deterministic quantum mechanics, and the work of realists like Boyer, Marshall, Barut .... on quantum phenomena derived from zero-point radiation are all very useful and quite unknown indeed. Briefly, it is extremely useful to know the detailed history of contemporary theories, especially when they turn out to be problematic; that is, not just the positive reasons why they were accepted, but the negative ones despite which they survived. In my experience, if you think long enough about problems in contemporary physics and how to solve them, you are *bound* to arrive at some alternatives formulated at the time of their "conception" (or not too long after it anyway).

As far as I know, I owe one apology to ttn for suggesting there might be a synchronization problem in the solution of the measurement problem in BM, which was a silly mistake of mine (only local approaches which do not intend to go beyond the psi wave have this, such as MWI or relational QM).

Careful

Last edited: Aug 31, 2006