Heisenberg Uncertainty Principle and Gaussian Distributions


Discussion Overview

The discussion revolves around the Heisenberg Uncertainty Principle (HUP) and the use of Gaussian distributions in its derivation. Participants explore the mathematical and physical reasoning behind the choice of Gaussian distributions for representing uncertainties in position and momentum, as well as the implications of this choice in the context of quantum mechanics.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants note that Gaussian distributions are mathematically convenient, particularly because the Fourier Transform (FT) of a Gaussian is also a Gaussian.
  • Others argue that the Gaussian distribution has the minimum product of uncertainties in position and momentum (dxdp), leading to the formulation of the HUP as an inequality.
  • One participant expresses confusion about the physical reasoning behind using Gaussian distributions, questioning whether it was merely a mathematical convenience or if there was a deeper physical justification.
  • Another participant emphasizes that the uncertainty principle applies generally to any two non-commuting operators, suggesting that the specifics of the distribution may not be as critical.
  • Some participants assert that the factor of 2 in the uncertainty equation arises from the properties of the Gaussian distribution, while others challenge the adequacy of this explanation.
  • A later reply highlights that the general uncertainty principle does not depend on the specific wave functions used, as long as they are in Hilbert space.
  • One participant provides a historical account of Heisenberg's derivation, suggesting that Heisenberg's choice of Gaussian distributions was influenced by the wave-like behavior of particles and the mathematical properties of Fourier series.

Areas of Agreement / Disagreement

Participants express differing views on the reasons for using Gaussian distributions in the context of the HUP. While some agree on the mathematical convenience, others question the lack of a physical rationale. The discussion remains unresolved regarding the necessity of Gaussian distributions versus other types of distributions.

Contextual Notes

Participants mention that the uncertainty principle is a mathematical theorem applicable to any two hermitian operators, which introduces a level of abstraction that may not directly address the choice of distribution in practical terms. The discussion also reflects varying levels of understanding and interpretation of the mathematical derivations involved.

RogerPink
I was reading about the derivation of the Heisenberg Uncertainty Principle and how Heisenberg used Gaussian Distributions to represent the uncertainty of position and momentum in his calculation. Why is it that Gaussian Distributions were used? There are many different types of distributions out there, why this kind in particular?
 
The Gaussian is easy to do mathematically, and it turns out that the FT of a Gaussian is a Gaussian. It also turns out that the Gaussian has the minimum product of dxdp (as usually defined). For this reason, the HUP is stated as an inequality. dxdp=hbar/2 only for the Gaussian.
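Both claims are easy to check numerically. A minimal sketch, assuming Python with NumPy and units where hbar = 1 (so p = k): build a Gaussian wave packet, get its momentum amplitude with the FFT, and verify that the product of the position and momentum spreads comes out at hbar/2.

```python
import numpy as np

# Units where hbar = 1, so p = k. Check that a Gaussian wave packet
# saturates dx * dp = 1/2 when the momentum amplitude is obtained by FFT.
N, span = 4096, 80.0
x = np.linspace(-span / 2, span / 2, N, endpoint=False)
dx = x[1] - x[0]
sigma = 1.3  # any width works; the product is width-independent

# Normalized Gaussian wave packet: integral of |psi|^2 dx = 1
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

def stddev(grid, density, step):
    """Standard deviation of a probability density sampled on a grid."""
    mean = np.sum(grid * density) * step
    return np.sqrt(np.sum((grid - mean) ** 2 * density) * step)

# Momentum-space amplitude phi(k); by Parseval, integral of |phi|^2 dk = 1
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)

delta_x = stddev(x, np.abs(psi) ** 2, dx)
delta_p = stddev(k, np.abs(phi) ** 2, dk)
print(delta_x * delta_p)  # ~0.5, i.e. hbar/2
```

The grid size, span, and width are arbitrary; the product stays at hbar/2 for any Gaussian because the FT of a Gaussian of width sigma is a Gaussian of width 1/(2 sigma).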
 
I'm still confused

Meir Achuz said:
The Gaussian is easy to do mathematically, and it turns out that the FT of a Gaussian is a Gaussian. It also turns out that the Gaussian has the minimum product of dxdp (as usually defined). For this reason, the HUP is stated as an inequality. dxdp=hbar/2 only for the Gaussian.

The uncertainty equation is equal to h-bar over 2 and, as I understand it, the 2 comes from the minimum standard deviation for a Gaussian distribution. Which is to say the relation would be different if the error for position and momentum were represented by a different kind of distribution. Was there a physical reason for this choice of distribution, or did this type of distribution just fit the data? Considering the precision to which quantum mechanics has been tested, the Gaussian distribution is obviously correct; I'm just wondering if there was a physical reason he chose it.
 
It's better (in my opinion) to show that for any two operators which don't commute, there exists a corresponding uncertainty principle in the pair of observables those operators represent. In this case, you don't need to worry about specifics, as the result is fairly general.
 
masudr said:
It's better (in my opinion) to show that for any two operators which don't commute, there exists a corresponding uncertainty principle in the pair of observables those operators represent. In this case, you don't need to worry about specifics, as the result is fairly general.

I'm sorry but that doesn't really answer my question at all. To phrase my question another way, in the equation:

deltaX x deltaP = h-bar/2

Where does the 2 come from and why?
 
The 2 comes from the FT of a Gaussian.
Do it yourself. The math is fairly simple.
The HUP is usually written as "greater than or equal".
H picked G for the two reasons I gave.
Given any spatially confined wave function, dxdp (suitably defined) can be calculated by FT. If it is not Gaussian, dxdp will be greater than hbar/2.
I'm outta here now.
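The claim that a non-Gaussian confined wave function gives dxdp strictly greater than hbar/2 can be checked with the same FT-on-a-grid idea. For psi(x) proportional to e^(-|x|), analytically (hbar = 1) Δx = 1/√2 and Δp = 1, so the product is about 0.707 > 0.5. A minimal sketch, assuming Python with NumPy:

```python
import numpy as np

# A confined but non-Gaussian wave function: psi ~ exp(-|x|).
# Analytically (hbar = 1): dx = 1/sqrt(2), dp = 1, so dx*dp ~ 0.707 > 0.5.
N, span = 8192, 120.0
x = np.linspace(-span / 2, span / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-np.abs(x))
psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize on the grid

def stddev(grid, density, step):
    """Standard deviation of a probability density sampled on a grid."""
    mean = np.sum(grid * density) * step
    return np.sqrt(np.sum((grid - mean) ** 2 * density) * step)

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / (N * dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2 * np.pi)  # momentum amplitude

product = stddev(x, np.abs(psi) ** 2, dx) * stddev(k, np.abs(phi) ** 2, dk)
print(product)  # ~0.707, strictly above hbar/2 = 0.5
```

Swapping in any other non-Gaussian shape (square pulse, triangle, Lorentzian) also lands strictly above 0.5.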
 
Not impressed with this forum

You have basically answered my question by saying "because the math works out." I've seen the derivation (why do you think I'm asking the question?). I'll see you guys in the literature, this forum is a joke.
 
RogerPink said:
The uncertainty equation is equal to h-bar over 2 and, as I understand it, the 2 comes from the minimum standard deviation for a Gaussian distribution. Which is to say the relation would be different if the error for position and momentum were represented by a different kind of distribution. Was there a physical reason for this choice of distribution, or did this type of distribution just fit the data? Considering the precision to which quantum mechanics has been tested, the Gaussian distribution is obviously correct; I'm just wondering if there was a physical reason he chose it.


No, it is not equal. The general uncertainty principle for any two hermitian operators \hat A, \hat B is

\Delta A \, \Delta B \ge \frac{1}{2} \left| \left\langle [\hat A, \hat B] \right\rangle \right|

This is a provable fact for any two hermitian operators in Hilbert space, regardless of the wave functions (so long as, again, the wave functions are in Hilbert space). You do not have to make any other assumptions about the wave functions.

see:

http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/GenUncertPrinciple.htm

In deriving the general uncertainty principle, no assumptions are made about the wave functions.

There is no physical reason to have used the Gaussian distribution in initially finding the uncertainty principle; it's just the easiest to work with, and happens to be the distribution that gives the minimal uncertainty.
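The general inequality quoted above can be spot-checked numerically: take random finite-dimensional hermitian matrices standing in for A and B and a random normalized state (the dimension, matrices, and state below are arbitrary illustrations, not from the thread), and compare the product of spreads against half the magnitude of the expected commutator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # any finite dimension will do

def random_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2  # hermitian by construction

A, B = random_hermitian(n), random_hermitian(n)

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)  # normalized state vector

def expval(op, state):
    """Expectation value <state| op |state>."""
    return state.conj() @ op @ state

def spread(op, state):
    """Standard deviation of the observable `op` in `state`."""
    mean = expval(op, state).real
    return np.sqrt(expval(op @ op, state).real - mean**2)

lhs = spread(A, psi) * spread(B, psi)
rhs = 0.5 * abs(expval(A @ B - B @ A, psi))
print(lhs, rhs)  # lhs >= rhs for every choice of A, B, psi
```

Re-running with any seed, dimension, or pair of hermitian matrices keeps lhs at or above rhs, which is the point of the theorem: no assumption about the state's distribution is needed.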

RogerPink said:
You have basically answered my question by saying "because the math works out." I've seen the derivation (why do you think I'm asking the question?). I'll see you guys in the literature, this forum is a joke.

You won't get far in physics with an attitude like that. Clearly you didn't understand what masudr said at all, and haven't seen the proper derivation of the general uncertainty principle. The uncertainty principle is a mathematical theorem that applies to any two hermitian operators in Hilbert space. If the mathematical assumptions that lead up to it apply to reality, then it applies to reality. It seems that it does. But there is no physical reason behind it; it's a math theorem. Welcome to the world of theoretical physics.
 
franznietzsche said:
You won't get far in physics with an attitude like that. Clearly you didn't understand what masudr said at all, and haven't seen the proper derivation of the general uncertainty principle. The uncertainty principle is a mathematical theorem that applies to any two hermitian operators in Hilbert space. If the mathematical assumptions that lead up to it apply to reality, then it applies to reality. It seems that it does. But there is no physical reason behind it; it's a math theorem. Welcome to the world of theoretical physics.

This will be my last thread on this forum, but in the interest of professionalism, I would like to resolve my question before I go. My question was prompted by the following historical account of the derivation of the Uncertainty Principle found on Wikipedia. It reads:

Heisenberg did not just use any arbitrary number to describe the minimum standard deviation between position and momentum of a particle. Heisenberg knew that particles behaved like waves and he knew that the energy of any wave is the frequency multiplied by Planck's constant. In a wave, a cycle is defined by the return from a certain position to the same position such as from the top of one crest to the next crest. This actually is equivalent to a circle of 360 degrees, or 2π radians. Therefore, dividing h by 2π describes a constant that when multiplied by the frequency of a wave gives the energy of one radian. Heisenberg took ½ of ħ as his standard deviation. This can be written as ħ over 2 as above or it can be written as h/(4π). Normally one will see ħ over 2 as this is simpler.

Two years earlier in 1925 when Heisenberg had developed his matrix mechanics the difference in position and momentum were already showing up in the formula. In developing matrix mechanics Heisenberg was measuring amplitudes of position and momentum of particles such as the electron that have a period of 2π, like a cycle in a wave, which are called Fourier series variables. When amplitudes of position and momentum are measured and multiplied together, they give intensity. However, Heisenberg found that when the position and momentum were multiplied together in that respective order or in the reverse order, there was a difference between the two calculated intensities of h/(2π). In other words, the two quantities position and momentum did not commute. In 1927, to develop the standard deviation for the uncertainty principle, Heisenberg took the gaussian distribution or bell curve for the imprecision in the measurement of the position q of a moving electron to the corresponding bell curve of the measured momentum p.



Please note that last sentence that says Heisenberg took the gaussian distribution or bell curve for the imprecision in the measurement of the position q of a moving electron... My question here is why would he do that. Is there a physical reason to expect a gaussian distribution? That's all I want to know. I'm not some quack trying to rewrite physics, I'm just curious about the history.

I find this forum condescending and insulting. I'm doing research and publishing. You can use my name and look it up (Roger H Pink). I understand that the Fourier transform of a gaussian is a gaussian. I understand that Fourier transforms can be used to derive the uncertainty relation. Neither of these facts tells me the physical reason behind the choice.
 
  • #10
Well joke or not, QM is a very serious subject. For more details of what I'm talking about, see Shankar, Principles of QM, pgs. 237-239.

Two operators that don't commute have a minimum uncertainty, and the product of the uncertainties in that pair of observables is at least \hbar/2. Note that this has nothing to do with Gaussians. It even gives you the kind of state which will have the minimum value in equations (9.2.15), and it still doesn't specify that they must be Gaussians.

EDIT: I started typing this (then took a long break) before franznietzsche's post.
 
  • #11
RogerPink said:
My question here is why would he do that. Is there a physical reason to expect a gaussian distribution? Thats all I want to know.

If all you want to know is why he would use the Gaussian, and what's the physical reason, then this may help. Firstly, the ground state of the harmonic oscillator is the Gaussian. That's as good a reason as any to try the Gaussian. Secondly, he had to try some function, and why not the Gaussian? Any choice would have you asking the same question.
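The harmonic-oscillator point is easy to verify: diagonalize a finite-difference discretization of H = p²/2 + x²/2 (m = ω = ħ = 1; the grid size and span below are arbitrary choices) and compare the ground state to the Gaussian π^(-1/4) e^(-x²/2). A sketch, assuming Python with NumPy:

```python
import numpy as np

# Finite-difference H = -(1/2) d^2/dx^2 + (1/2) x^2  (m = w = hbar = 1)
N, span = 1000, 20.0
x = np.linspace(-span / 2, span / 2, N)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2      # diagonal of H from the 3-point stencil
off = -0.5 / dx**2 * np.ones(N - 1)  # off-diagonals of the stencil
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, V = np.linalg.eigh(H)                 # eigenvalues in ascending order
ground = np.abs(V[:, 0]) / np.sqrt(dx)   # normalize: integral |psi|^2 dx = 1

gauss = np.pi**-0.25 * np.exp(-x**2 / 2)  # exact ground state
print(E[0])                               # ~0.5, i.e. hbar*w/2
print(np.max(np.abs(ground - gauss)))     # small: the ground state is Gaussian
```

The ground energy lands at hbar*w/2 up to O(dx²) discretization error, and the computed eigenvector tracks the Gaussian pointwise.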

I find this forum condescending and insulting.
You're entitled to your opinions. I find this forum very useful. Many of its regulars are people much smarter than me, and when I ask a question, I expect condescending answers. Remember, these people haven't done courses in teaching.

I'm doing research and publishing. You can use my name and look it up (Roger H Pink)(Roger Pink).
I'm happy for you. I'm not researching nor publishing, merely an undergraduate. Just because you publish, you shouldn't expect special treatment; the fact that you are publishing and in research is largely irrelevant. You shouldn't take the internet personally.
 
  • #12
masudr said:
If all you want to know is why he would use the Gaussian, and what's the physical reason, then this may help. Firstly, the ground state of the harmonic oscillator is the Gaussian. That's as good a reason as any to try the Gaussian. Secondly, he had to try some function, and why not the Gaussian? Any choice would have you asking the same question.

I think what you're trying to say here is you don't know. You make some good guesses, but you don't really provide any answer; you just say what you would do.

I'm very good at Physics and I certainly don't need people who don't understand my question insulting me. This isn't a Math forum, so it's reasonable to ask for the physical meaning of mathematical choices. I was just hopeful that on a physics forum there might be someone who knew the history behind Heisenberg's derivation. Instead I got a bunch of guys yelling at me about basic quantum mechanics.
 
  • #13
RogerPink said:
I think what you're trying to say here is you don't know. You make some good guesses, but you don't really provide any answer; you just say what you would do.

I'm very good at Physics and I certainly don't need people who don't understand my question insulting me. This isn't a Math forum, so it's reasonable to ask for the physical meaning of mathematical choices. I was just hopeful that on a physics forum there might be someone who knew the history behind Heisenberg's derivation. Instead I got a bunch of guys yelling at me about basic quantum mechanics.
My guess is that it has something to do with the central limit theorem in statistics (which was rigorously proven in 1901 and well known already in the 18th century).

Careful
 
  • #14
Careful said:
My guess is that it has something to do with the central limit theorem in statistics (which was rigorously proven in 1901 and well known already in the 18th century).

Careful

That's interesting. I don't know much about it, so I'll give it a read. One thing I noticed was this:

"The Central Limit Theorem which states that if the sum of the variables has a finite variance, then it will be approximately normally distributed."

But of course we are talking about a product, not a sum, so I'm not sure. Still, at least your answer:

a) doesn't assume I don't know basic quantum mechanics
b) doesn't assume I don't know math

So thanks for that.
 
  • #15
So I read some more and found the following:

"The central limit theorem tells us what to expect about the sum of independent random variables, but what about the product? Well, the logarithm of a product is simply the sum of the logs of the factors, so the log of a product of random variables tends to have a normal distribution, which makes the product itself have a log-normal distribution. Many physical quantities (especially mass or length, which are a matter of scale and cannot be negative) are the product of different random factors, so they follow a log-normal distribution."

According to this, wouldn't he have used a log-normal distribution instead of a gaussian distribution? Does it make a difference in terms of standard deviation?
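The log-normal claim quoted above is easy to demonstrate by Monte Carlo: a product of many independent positive factors has a logarithm that is a sum of logs, hence approximately normal by the central limit theorem. A sketch, assuming Python with NumPy (the uniform factors and the sample counts are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# 50,000 samples, each a product of 50 independent positive factors.
n_samples, n_factors = 50_000, 50
factors = rng.uniform(0.5, 1.5, size=(n_samples, n_factors))
products = np.prod(factors, axis=1)
logs = np.log(products)  # a sum of logs -> approximately normal by the CLT

def skewness(a):
    """Sample skewness: zero for a symmetric (e.g. normal) distribution."""
    a = a - a.mean()
    return np.mean(a**3) / np.mean(a**2) ** 1.5

print(skewness(logs))      # near 0: the log of the product is roughly normal
print(skewness(products))  # large and positive: the product is right-skewed
```

So the product itself is approximately log-normal while its logarithm is approximately normal, exactly as the quoted passage says.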
 
  • #16
RogerPink said:
I was just hopeful that on a physics forum there might be someone who knew the history behind Heisenberg's derivation.

Two different subjects there...

ps. I'd eat my hat and coat if it had anything to do with the central limit theorem: that says that the sum of many independent random variables tends to the normal distribution as N \rightarrow \infty; why a wavefunction should be like that is arbitrary.
 
  • #17
As has been said, the uncertainty between two non-commuting operators is not equal to h-bar/2, but is strictly greater than or equal to h-bar/2. The Gaussian distribution is the "best" in this regard, because it achieves this minimum uncertainty. You are free to carry on using any other kind of distribution you want, but you will not achieve this minimal uncertainty with anything but the Gaussian.

That's the reason it's commonly used -- it achieves the minimum uncertainty. That's all.

- Warren
 
  • #18
masudr said:
Two different subjects there...

ps. I'd eat my hat and coat if it had anything to do with the central limit theorem: that says that the sum of many independent random variables tends to the normal distribution as N \rightarrow \infty; why a wavefunction should be like that is arbitrary.
I don't know, but I want to see you eating your hat (you can have your coat). As you know, the gaussian is the only attractor for the convolution product in the space of all probability measures. Therefore, the most natural thing is to expect psi^2 to be gaussian, which determines psi up to a local phase. What Chroot says is well known; I could also add that the so-called coherent (and vacuum squeezed) states are the only classical states in QFT, as well as the only ones which saturate the uncertainty bound (and yes, they are all gaussian). But I am afraid that in the 1920s this was of no concern at all (for example, QFT did not exist yet :wink:).

There is a deeper issue related to this remark which has to do with the meaning of statistics, but I shall not get into this now.

BTW it is of crucial importance to know the HISTORY of the field in order to do good PHYSICS; these two hang very tightly together.

Careful

Careful
 
  • #19
Careful said:
BTW it is of crucial importance to know the HISTORY of the field in order to do good PHYSICS; these two hang very tightly together.

I think you mean relevant history, as different parts of physics may share principles but are often unrelated. Besides, what filled hundreds of pages of last century's physics can be summarised in a few lines today. The history of physics is not as important as many people make out.
 
  • #20
chroot said:
As has been said, the uncertainty between two non-commuting operators is not equal to h-bar/2, but is strictly greater than or equal to h-bar/2. The Gaussian distribution is the "best" in this regard, because it achieves this minimum uncertainty. You are free to carry on using any other kind of distribution you want, but you will not achieve this minimal uncertainty with anything but the Gaussian.

That's the reason it's commonly used -- it achieves the minimum uncertainty. That's all.

- Warren

OK Warren, so assuming what you say is correct and that h-bar over 2 is the minimum value that can be calculated for all distributions, what would the Uncertainty Relation look like if log normal distributions were used instead of Gaussians?

And for everyone on this thread, for the last time: everyone here knows that it's an inequality. Everyone here knows that there is a position operator and a momentum operator. Everyone here knows xp-px=ih-bar, so please stop saying it. The original derivation was an expression of inherent uncertainty in the measurement of a system. I'm just trying to understand his reasoning. Heisenberg was literally talking about error when he wrote delta x, just like an experimentalist would. He chose to represent the distribution of that error as a gaussian, which then leads to the over-2 part of the expression (which comes from the standard deviation for the gaussian). Different distributions would produce different standard deviations, but this one obviously produced results that agreed with experiment. So how did he know to use it? Is there some sort of statistical rule that says these types of parameters have error distributions like gaussians?
 
  • #21
Well, as was mentioned earlier, the central limit theorem says that the sum of many independent random processes tends toward a Gaussian distribution. As a result, virtually all naturally-occurring random processes have essentially Gaussian distributions. When anyone uses a model of any kind of random process, it makes the most sense to just start with the Gaussian -- unless you know something more specific about the random process a priori.

For example, if you had to guess at a model of the jitter of an electronic oscillator, you'd do well to assume it's pretty much Gaussian. The jitter of a physical oscillator is comprised of noise contributions from many random processes all added together, and the result has to tend to be Gaussian by the central limit theorem.

- Warren
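The jitter picture can be illustrated numerically: sum several individually non-Gaussian, zero-mean noise sources and check that the total has excess kurtosis close to the Gaussian value of zero. A sketch, assuming Python with NumPy (the particular noise sources are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Several individually non-Gaussian, zero-mean noise sources
sources = [
    rng.uniform(-1, 1, n),            # quantization-like noise
    rng.exponential(1.0, n) - 1.0,    # shot-like noise, shifted to zero mean
    rng.choice([-1.0, 1.0], size=n),  # random-telegraph-like noise
]
sources += [rng.uniform(-1, 1, n) for _ in range(20)]  # many small contributors
total = sum(sources)

def excess_kurtosis(a):
    """0 for a Gaussian; nonzero values flag non-Gaussian shapes."""
    a = a - a.mean()
    return np.mean(a**4) / np.mean(a**2) ** 2 - 3.0

print(excess_kurtosis(sources[0]))  # ~-1.2 for a single uniform source
print(excess_kurtosis(total))       # near 0: the summed "jitter" is near-Gaussian
```

Each individual source is clearly non-Gaussian (flat, one-sided, or two-valued), yet the sum is already hard to distinguish from a Gaussian, which is the central limit theorem at work.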
 
  • #22
Chroot, please see my earlier response to your central limit suggestion.

Wow, my question was better than I thought. I received some responses from other boards. It turns out that using a gaussian standard deviation to produce an exact lower limit for the uncertainty relation was:

1. Not done by Heisenberg but by Kennard afterwards
2. Proven to be an incorrect method for determining the lower limit. You can't just assume the error is gaussian, it depends on the physical system involved.

Here are the links that provide this information.

http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=AJPIAS000070000010000983000001&idtype=cvips&gifs=yes

http://plato.stanford.edu/entries/qt-uncertainty/
 
  • #23
masudr said:
I think you mean relevant history, as different parts of physics may share principles but are often unrelated. Besides, what filled hundreds of pages of last century's physics can be summarised in a few lines today. The history of physics is not as important as many people make out.
This is the best confession of lack of knowledge I have ever seen :bugeye: Moreover, what was disconnected fifty years ago may be "entangled" next year. What is considered to be irrelevant now may have been important 40 years ago and might revive again next decade. That is how science works, and why understanding the reasons for our choices today is important. What we learn today is just a drop on a plate of interesting ideas which were conceived last century.
 
  • #24
*sigh*

Have you read Maxwell's original treatise on EM? It's a fairly dry (and useless) read. The modern formulation is a hundred times better.

And please, for your own sake, don't make personal jibes at someone on an anonymous internet forum.
 
  • #25
masudr said:
*sigh*

Have you read Maxwell's original treatise on EM? It's a fairly dry (and useless) read. The modern formulation is a hundred times better.

And please, for your own sake, don't make personal jibes at someone on an anonymous internet forum.

Well, I was talking about the *previous* century, not the 19th (Maxwell died in 1879) - and no, I did not read this treatise. In that respect, I can say that the original treatment of tensor calculus by Schouten is still very instructive, and that the original papers by Dirac, Feynman and others on quantum field theory (and their worries), the work of Moyal, Wigner and others on the possibility of deterministic quantum mechanics, and that of realists like Boyer, Marshall, Barut ... on quantum phenomena derived from zero-point radiation are all very useful and quite unknown indeed. Briefly, it is extremely useful to know the detailed history of contemporary theories, especially when they turn out to be problematic; that is, not just the positive reasons why they were accepted, but the negative ones despite which they survived. In my experience, if you think long enough about problems in contemporary physics and how to solve them, you are *bound* to arrive at some alternatives formulated in the time of their "conception" (or not too long after it anyway).

As far as I know, I owe one apology to ttn for suggesting there might be a synchronization problem in the solution of the measurement problem in BM, which was a silly mistake of mine (only local approaches which do not intend to go beyond the psi wave have this, such as MWI or relational QM). :blushing:

Careful
 
  • #26
Hello RogerPink,

First of all, a friendly piece of advice: please cool down, and don't take personally any message which you perceive as insinuating that you have a problem. There have been studies about communication through e-mail and typed text on forums and the like, and there is a much higher amount of misunderstanding leading to conflict than in direct or verbal communication, simply due to missing unspoken cues (voice intonation, body language, etc.). All this can contribute to an unfortunate perception of aggression, leading to a totally unnecessary escalation of verbal violence. So start from the idea that people trying to answer your question are genuinely trying to help you, but don't know your background, and might make a wrong guess at your "mileage".


RogerPink said:
OK Warren, so assuming what you say is correct and that h-bar over 2 is the minimum value that can be calculated for all distributions, what would the Uncertainty Relation look like if log normal distributions were used instead of Gaussians?

There's a simple proof, quoted by franznietzsche, that demonstrates exactly the following:

Given two operators A and B corresponding to measurements (hence, hermitian operators), and given any wavefunction, the statistical distributions of the quantities A and B, as described by this wavefunction and the operators through the Born rule, satisfy the following property: the standard deviation of the distribution of A times the standard deviation of the distribution of B will be at least |i/2 <[A,B]>|,
where the last expression stands for the expectation value of the commutator for the given wavefunction.

In the specific case of canonically conjugate observables X and P, where [X,P] = i hbar, this gives us that the standard deviation for X times the standard deviation for P will be at least hbar/2, if you calculate the distributions for X and P for ANY state.

This is one point, which you might or might not be aware of. In this formulation, it applies to ANY statistical distribution of X and P that can be obtained from any thinkable state, through the Born rule, and we're only concerned with the standard deviations of those distributions.

The second point is that the only distribution which satisfies equality, is the gaussian distribution. All other distributions will have a strict inequality. That's simply a property of gaussian distributions and Fourier transforms, a property a priori unrelated to quantum theory.

The third point is that a harmonic oscillator, in quantum theory, happens to have as a solution for its ground state, a gaussian wavefunction. Now, I don't know of any logical reason for this to be related to the previous point (there might be a deeper reason, but I'm not aware of it).

Now the last point has two consequences. The first one is that for any harmonic oscillator situation, the ground state also is the state with "minimum uncertainty", given that - by coincidence or not - its wavefunction is gaussian. The second one is that, given that "small perturbations" of a classical system usually give you in first order, a harmonic oscillator, this solution is found a lot. For QFT, for instance, it is supposed to be the true equation of motion of the free field.

This is, in a summary, what people said here (and what I could add). Now maybe all this is trivial to you. Fine. Maybe not.

And for everyone on this thread, for the last time: everyone here knows that it's an inequality. Everyone here knows that there is a position operator and a momentum operator. Everyone here knows xp-px=ih-bar, so please stop saying it. The original derivation was an expression of inherent uncertainty in the measurement of a system. I'm just trying to understand his reasoning. Heisenberg was literally talking about error when he wrote delta x, just like an experimentalist would. He chose to represent the distribution of that error as a gaussian, which then leads to the over-2 part of the expression (which comes from the standard deviation for the gaussian). Different distributions would produce different standard deviations, but this one obviously produced results that agreed with experiment. So how did he know to use it? Is there some sort of statistical rule that says these types of parameters have error distributions like gaussians?

As to the original motivations of Heisenberg, I'm totally ignorant of them; I'm only (as others did) telling you the modern PoV.
As people pointed out, it is not SILLY to start with a gaussian, because of the central limit theorem (as Careful put it nicely: "it is the attractor of convolution in the space of probability distributions"; in other words, if you add a lot of similar independent errors together, you arrive at a gaussian).
People doing error calculations have a kind of gene that makes them like gaussians. Whether this was the motivation of Heisenberg or not, however, I don't know at all.

If you understand the modern PoV, however, it is - except for historical reasons - totally irrelevant to pick an a priori hypothesis of a gaussian. You will simply arrive at the minimum estimate (the lower bound) when you do so. It might be that Heisenberg - for unrelated reasons - just picked out by coincidence the distribution which yields the equality, establishing hence the correct lower boundary. Maybe Heisenberg just picked something to work with, maybe he had a deeper reason; I'm ignorant of his original motivations.

cheers,
Patrick.
 
  • #27
Patrick,

Thanks for the advice. The advice I received from my friends was "don't go to forums". I think I'm going to take their advice. I just wanted to point out, though, that I posted an answer to my question in my previous post, which was posted before your post was. It says that it's been shown that hbar/2 is an invalid lower boundary for deltax(deltap). Heisenberg never wrote this; Kennard did, and he made an assumption (which has been proven to be incorrect) that gaussian distributions could be used.

Please note, as no one here seems to know what I'm talking about: this in no way changes xp-px=ih-bar. It only says that the exact solution proposed by Kennard is incorrect. Also note that I'm not the one saying it's incorrect; it has been established in the literature. Since this solution never affects any problems anyway, it didn't really matter that it turned out to be wrong.

I have never been so positive that I am wasting my breath as I am right now.

Last post ever,
Roger Pink
 
  • #28
Some of this discussion is above my pay grade, but being a physicist turned statistician, I'd like to point out that the use of a normal distribution is common in statistics even when normality is in doubt -- often it's a case of what else can you do? But a key reason is that for large samples, the distribution of the mean is very nearly normal. So, given how experiments are usually done, the use of the normal distribution in explaining the HUP (see Kemble's old The Fundamental Principles of Quantum Mechanics) makes good intuitive sense, even if it is not formally rigorous, even if it is a bit slippery around the edges.

That is, the use of the normal distribution simply followed standard practice in statistics -- like propagation of errors, the great love of all lab students.

Roger Pink -- There are some of us who 1. have doctorates, 2. have lots of experience, and 3. still find things to learn here, even amongst the rough and tumble. As topics become more advanced, intellectual battles become fiercer, and physics can become a contact sport -- marketplace of ideas and all that. A good idea, a good theory, can and must withstand attacks, aggressive or subtle. Survival in the physics world requires a thick skin, as well as intellect and creativity.

BTW I used to tell my students, undergraduate and graduate alike, that if people (more than one or two) cannot understand your argument, then, chances are that you are not doing a good job of explanation.

I hope that you stick around.
Regards,
Reilly Atkinson
 
  • #29
** Survival in the physics world requires a thick skin, as well as intellect and creativity. **

Unfortunately, these battles almost always emerge either from a lack of effort to understand the other party or from the unwillingness to answer to the prospect of some pitfalls. It still is a mystery to me how (2) and (3) can be commensurable with (1).

Careful
 
  • #30
From what I could discern, the OP's original question was, "Why did Heisenberg use the Gaussian as his distribution for positions and momenta of a system?" (Post #1)

Some people have made posts that didn't answer the OP's question. Other people have made some good guesses. Someone pointed out that Gaussians provide the minimum uncertainty, but the OP said Kennard showed this, not Heisenberg. Others have said that the Gaussian is most often used to model classical measurements, since many random errors will be distributed normally.

It now appears that the OP is concerned with who proved that the equality holds for the Gaussian, and with the validity of that proof. The OP has since answered these latter questions (post #22), which apparently answered his original question.

Now, apparently no one has understood what the OP is talking about (post #28). If there is still an issue to be resolved here, then a re-phrasing of the question would be helpful.
 
