Confusion about getting uncertainty by using differentiation.

kof9595995
My question comes from my homework, but I don't think it's really a homework question, so I'm putting it here; still, I'll include the homework problem in this thread because I think it will help.
HW problem: Show that for a free particle the uncertainty relation can also be written as
\delta\lambda\,\delta x \ge \frac{\lambda^2}{4\pi},
where \lambda is the de Broglie wavelength and \delta\lambda its uncertainty.
My solution is:
\delta\lambda = \left|\frac{d\lambda}{dp}\right|\delta p = \frac{h}{p^2}\,\delta p,
so \delta\lambda\,\delta x = \frac{h}{p^2}\,\delta x\,\delta p \ge \frac{h}{p^2}\cdot\frac{h}{4\pi} = \frac{1}{4\pi}\left(\frac{h}{p}\right)^2 = \frac{\lambda^2}{4\pi}.

Although I got the expected result, I really doubt whether it's legitimate to use differentiation here, because \delta p is a standard deviation, not an increment. Using differentiation just maps the range \delta p to another range \delta\lambda as if they were increments. \delta p is the standard deviation of the p distribution, but how do I know that \delta\lambda is the standard deviation of the \lambda distribution?
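Numerically the rule does seem to hold when the spread in p is small. Here is a quick sketch I tried (the numbers are arbitrary, not from the homework): draw p from a narrow Gaussian, compute \lambda = h/p for each sample, and compare the sample standard deviation of \lambda with the error-propagation estimate \frac{h}{p^2}\delta p.

```python
import numpy as np

# Quick check with arbitrary numbers: draw p from a narrow Gaussian,
# compute lambda = h/p for each sample, and compare the sample standard
# deviation of lambda with the error-propagation estimate (h/p_mean^2)*sigma_p.
rng = np.random.default_rng(0)

h = 6.626e-34            # Planck's constant (J*s)
p_mean = 1.0e-24         # mean momentum (kg*m/s), arbitrary
sigma_p = 1.0e-26        # spread in p, 1% of the mean

p = rng.normal(p_mean, sigma_p, size=1_000_000)
lam = h / p

print("sample std of lambda      :", lam.std())
print("error-propagation estimate:", h / p_mean**2 * sigma_p)
```

The two numbers agree closely here, but I still don't see why this should hold in general, or whether it always does.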


And I tried to work out a counterexample:
Suppose W and G are two physical quantities, and G follows the normal distribution
G = \frac{10}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^2}{2\sigma^2}\right)

W = 100G = \frac{1000}{\sigma\sqrt{2\pi}}\exp\left(-\frac{x^2}{2\sigma^2}\right)

So W and G should have the same standard deviation \sigma (I'm not quite sure, am I correct on this?), but differentiation tells you the standard deviation of W should be 100 times that of G.

EDIT: My counterexample is wrong, please ignore it.
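(Quick check of why it is wrong, with made-up numbers: if W = 100G as random variables, the whole distribution of W is stretched by a factor of 100, so its standard deviation really is 100 times that of G, exactly as the differentiation rule says.)

```python
import numpy as np

# W = 100*G as random variables: W's distribution is G's stretched by 100,
# so std(W) = 100 * std(G), consistent with dW/dG = 100.
rng = np.random.default_rng(1)

sigma = 2.0                           # arbitrary spread for G
G = rng.normal(0.0, sigma, size=1_000_000)
W = 100.0 * G

print("std(G):", G.std())             # approximately 2
print("std(W):", W.std())             # approximately 200
```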
 
Basically, what I'm trying to ask is:
if g = g(f) and I use \delta g = \frac{dg}{df}\delta f, then I have actually presumed
\sqrt{\langle(g-\langle g\rangle)^2\rangle} \approx \left|\frac{dg}{df}\right|\sqrt{\langle(f-\langle f\rangle)^2\rangle}, but I can't see how to prove this.
 
You're correct that it's only approximately true. It's like approximating f(x) near x=a by f(a)+f'(a)(x-a).
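Concretely, expanding g(f) about a = \langle f\rangle (a sketch, assuming the spread in f is small):
g(f) \approx g(\langle f\rangle) + g'(\langle f\rangle)\,(f - \langle f\rangle).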
 
I understand, in the differentiation case, why f(a)+f'(a)(x-a) is the first-order approximation, but I just can't see why a standard deviation should behave like this, even in an approximate sense.
So \delta g = \frac{dg}{df}\delta f just seems to me like an abuse of the \delta notation here.
 
Can anybody help, or is there a really correct way of doing this problem? My head is about to explode; any help is appreciated.
 
Em, so I can use the first-order part of the Taylor expansion to prove it. That's very helpful, thanks.
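For the record, here is the argument spelled out (a sketch, assuming the spread in f is small enough that the first-order term dominates):
g(f) \approx g(\langle f\rangle) + g'(\langle f\rangle)(f - \langle f\rangle),
so taking the expectation gives \langle g\rangle \approx g(\langle f\rangle), and therefore
g - \langle g\rangle \approx g'(\langle f\rangle)(f - \langle f\rangle),
\sqrt{\langle(g-\langle g\rangle)^2\rangle} \approx |g'(\langle f\rangle)|\sqrt{\langle(f-\langle f\rangle)^2\rangle},
which is just \delta g = \left|\frac{dg}{df}\right|\delta f evaluated at the mean.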
 