Hello RogerPink,
First of all, some friendly advice: please cool down, and don't take personally any message that you read as insinuating that you have a problem. There have been studies about communication through e-mail and typed text on forums and the like, and there is a much higher rate of misunderstanding leading to conflict than in direct verbal communication, simply because the unspoken channels (voice intonation, body language, etc.) are missing. All this can contribute to an unfortunate perception of aggression, leading to a totally unnecessary escalation into verbal hostility. So start from the idea that the people trying to answer your question are genuinely trying to help you, but don't know your background, and might make a wrong guess at your "mileage".
RogerPink said:
OK Warren, so assuming what you say is correct and that h-bar over 2 is the minimum value that can be calculated for all distributions, what would the Uncertainty Relation look like if log normal distributions were used instead of Gaussians?
There's a simple proof, quoted by Franznietsche, that demonstrates exactly the following:
Given two operators A and B corresponding to measurements (hence, Hermitian operators), and given any wavefunction, the statistical distributions of the measurement outcomes of A and B, as described by this wavefunction and the operators through the Born rule, satisfy the following property: the standard deviation of the distribution of A times the standard deviation of the distribution of B is greater than or equal to (1/2) |<[A,B]>|,
where <[A,B]> stands for the expectation value of the commutator in the given wavefunction.
In the specific case of canonically conjugate observables X and P, where [X,P] = i hbar, this gives us that the standard deviation of X times the standard deviation of P is greater than or equal to hbar/2, if you calculate the distributions of X and of P for ANY state.
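Written out compactly (this is just the statement above in symbols, the general Robertson form and its specialization; nothing new is added):

```latex
\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle\psi|\,[A,B]\,|\psi\rangle\bigr|,
\qquad\text{and with } [X,P] = i\hbar:\qquad
\sigma_X\,\sigma_P \;\ge\; \frac{\hbar}{2}.
```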
This is one point, which you might or might not be aware of. In this formulation, it applies to ANY statistical distribution of X and P that can be obtained from any conceivable state through the Born rule, and we're only concerned with the standard deviations of those distributions.
The second point is that the only distribution which satisfies the equality is the Gaussian one (a Gaussian wave packet); all other distributions give a strict inequality. That's simply a property of Gaussian distributions and Fourier transforms, a property a priori unrelated to quantum theory.
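If you want to see both points numerically rather than take them on faith, here is a minimal sanity check (my own illustration, not anything from the thread): it samples a wavefunction on a grid, gets the momentum distribution by FFT, and computes the product of the two standard deviations, with hbar set to 1. The Gaussian comes out at 1/2; the sech profile is an arbitrary non-Gaussian choice and comes out strictly above 1/2.

```python
import numpy as np

hbar = 1.0
N = 4096
L = 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(N, d=dx) * hbar   # momentum grid matching the FFT

def uncertainty_product(psi_x):
    """Return sigma_x * sigma_p for a position-space wavefunction sampled on the grid."""
    psi_x = psi_x / np.sqrt(np.sum(np.abs(psi_x) ** 2) * dx)   # normalize |psi|^2 to 1
    prob_x = np.abs(psi_x) ** 2
    mean_x = np.sum(x * prob_x) * dx
    sigma_x = np.sqrt(np.sum((x - mean_x) ** 2 * prob_x) * dx)

    psi_p = np.fft.fft(psi_x)               # momentum amplitudes (up to an irrelevant phase)
    prob_p = np.abs(psi_p) ** 2
    prob_p /= np.sum(prob_p)                # normalize the discrete momentum distribution
    mean_p = np.sum(p * prob_p)
    sigma_p = np.sqrt(np.sum((p - mean_p) ** 2 * prob_p))
    return sigma_x * sigma_p

print(uncertainty_product(np.exp(-x ** 2 / 2)))   # Gaussian:            ~0.500 (= hbar/2)
print(uncertainty_product(1.0 / np.cosh(x)))      # sech, non-Gaussian:  ~0.52  (> hbar/2)
```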
The third point is that a harmonic oscillator, in quantum theory, happens to have a Gaussian wavefunction as the solution for its ground state. Now, I don't know of any logical reason for this to be related to the previous point (there might be a deeper reason, but I'm not aware of it).
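For reference, the standard textbook form of that ground state (m the mass, omega the oscillator frequency); plugging it into the relation above shows that it indeed saturates the bound:

```latex
\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\,m\omega x^2/(2\hbar)},
\qquad
\Delta x = \sqrt{\frac{\hbar}{2m\omega}},\quad
\Delta p = \sqrt{\frac{m\omega\hbar}{2}},\quad
\Delta x\,\Delta p = \frac{\hbar}{2}.
```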
This last point has two consequences. The first is that for any harmonic-oscillator situation, the ground state is also the state of "minimum uncertainty", given that - by coincidence or not - its wavefunction is Gaussian. The second is that, given that "small perturbations" of a classical system usually give you, to first order, a harmonic oscillator, this solution turns up a lot. In QFT, for instance, the harmonic oscillator is supposed to give the true equation of motion of the free field.
This is, in summary, what people said here (and what I could add). Now maybe all this is trivial to you. Fine. Maybe not.
RogerPink said:
And for everyone on this thread, for the last time: everyone here knows that it's an inequality. Everyone here knows that there is a position operator and a momentum operator. Everyone here knows xp-px = i hbar, so please stop saying it. The original derivation was an expression of inherent uncertainty in the measurement of a system. I'm just trying to understand his reasoning. Heisenberg was literally talking about error when he wrote delta x, just like an experimentalist would. He chose to represent the distribution of that error as a Gaussian, which then leads to the "over 2" part of the expression (which comes from the standard deviation of the Gaussian). Different distributions would produce different standard deviations, but this one obviously produced results that agreed with experiment. So how did he know to use it? Is there some sort of statistical rule that says these types of parameters have error distributions like Gaussians?
As to Heisenberg's original motivations, I'm totally ignorant of them; I'm only (as others did) telling you the modern PoV.
As people pointed out, it is not SILLY to start with a Gaussian, because of the central limit theorem (as Careful put it nicely: "it is the attractor of convolution in the space of probability distributions"; in other words, if you add a lot of similar independent errors together, you arrive at a Gaussian).
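Here is a small numerical illustration of that "attractor" remark (again my own sketch; the flat error density is an arbitrary starting choice): repeatedly convolving it with itself quickly produces something very close to a Gaussian of the same mean and variance.

```python
import numpy as np

dx = 0.01
base = np.ones(int(1 / dx))          # a flat (uniform) error density on [0, 1]
base /= base.sum() * dx              # normalize to unit area

current = base.copy()
for n in range(1, 9):
    if n > 1:
        current = np.convolve(current, base) * dx   # density of the sum of n errors
    grid = np.arange(current.size) * dx
    mean = (grid * current).sum() * dx
    var = ((grid - mean) ** 2 * current).sum() * dx
    gauss = np.exp(-(grid - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    dev = np.abs(current - gauss).max() / current.max()
    print(f"sum of {n} error(s): peak-relative deviation from a Gaussian = {dev:.3f}")
```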
People doing error calculations have a kind of gene that makes them like Gaussians. Whether this was Heisenberg's motivation or not, however, I don't know at all.
If you understand the modern PoV, however, it is - except for historical reasons - totally irrelevant to pick an a priori hypothesis of a Gaussian. You will simply arrive at the minimum estimate (the lower bound) when you do so. It might be that Heisenberg, for unrelated reasons, just happened to pick the one distribution that gives equality, thereby establishing the correct lower bound. Maybe he just picked something to work with, maybe he had a deeper reason; I'm ignorant of his original motivations.
cheers,
Patrick.