Operator dispersion

  Oct 14, 2013 #1
    I'm trying to get my head around quantum mechanics with the help of Sakurai's "Modern Quantum Mechanics". It's been good so far, but I came across a formula I don't really understand. When discussing the uncertainty relation (in Section 1.4), the author begins by defining an "operator":

    [itex]\Delta A \equiv A - \left\langle A \right\rangle [/itex]

    where A is an observable. He then defines the dispersion of A to be the expectation value of the square of this operator: [itex]\left\langle \Delta A \right\rangle ^ 2[/itex].

    I'm pretty sure I understand the concept of observable dispersion correctly, but correct me if I'm wrong: it's the average squared deviation of the measurements from the mean (the variance). Of course, this is all computed for a given state (ket). The results of the same measurement (on the same, properly prepared state) performed many times will show a certain spread, and that spread is the dispersion we're talking about. Is this OK? I think it is, because I was even able to arrive at the proper formula (one that agrees with the author's result) by summing the squared deviations from the 'mean' (the expectation value of A) over all eigenkets of this operator, weighted with the probability of each outcome, as in the sum written out below.
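
    In symbols (my notation: [itex]\left|\psi\right\rangle[/itex] is the prepared state and [itex]p_n = \left|\left\langle a_n | \psi \right\rangle\right|^2[/itex] is the probability of obtaining the eigenvalue [itex]a_n[/itex]), the sum I have in mind is

    [tex]\sigma^2_A = \sum_n p_n \left( a_n - \left\langle A \right\rangle \right)^2 .[/tex]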

    However, what I don't understand is the author's derivation, in particular the definition of this new 'delta operator' - let me write it again:

    [itex]\Delta A \equiv A - \left\langle A \right\rangle [/itex]

    How can one subtract the expectation value, which is a number (a scalar), from an operator, which is represented by a matrix? This doesn't seem kosher. Is this common practice? Will I see more examples of such 'flawed' notation? This seems really confusing...
     
  Oct 14, 2013 #2
    It's just shorthand for ##A-\left\langle A \right\rangle\cdot\mathrm{id}##, where ##\mathrm{id}## is the identity operator.

    I think you made a mistake there, since ##\left\langle \Delta A \right\rangle=0##. I guess it's more like ##\sigma^2_A=\left\langle\left( \Delta A\right)^2 \right\rangle##.
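
    As a quick numerical sanity check (a minimal sketch with a made-up 2x2 Hermitian matrix and normalized state, using numpy; not from Sakurai), subtracting the number really does mean subtracting it times the identity: ##\left\langle\Delta A\right\rangle## comes out zero, and ##\left\langle\left(\Delta A\right)^2\right\rangle## matches ##\left\langle A^2\right\rangle-\left\langle A\right\rangle^2##.

[code]
import numpy as np

# A made-up Hermitian observable and a normalized state, purely for illustration.
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])
psi = np.array([0.6, 0.8])               # 0.6**2 + 0.8**2 = 1

exp_A = psi @ A @ psi                    # <A> = <psi|A|psi>
Delta_A = A - exp_A * np.eye(2)          # "A - <A>" means A - <A> * identity

print(psi @ Delta_A @ psi)               # <Delta A> = 0 (up to rounding)
print(psi @ Delta_A @ Delta_A @ psi)     # <(Delta A)^2>
print(psi @ A @ A @ psi - exp_A**2)      # <A^2> - <A>^2, the same number
[/code]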
     
  Oct 15, 2013 #3
    This makes sense now!

    To make sure it's consistent I tried expanding this [itex]\Delta A[/itex] in the eigenbasis of A, using [itex]A = \sum\limits_n a_n \left|a_n\right\rangle \left\langle a_n\right|[/itex], and applying it to a ket - I got the correct result (the sum of squared deviations of the eigenvalues from the mean, weighted by the probabilities)! This is exciting. Thanks!
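
    Spelled out (assuming an orthonormal eigenbasis and writing the identity via the completeness relation [itex]\mathrm{id} = \sum\limits_n \left|a_n\right\rangle \left\langle a_n\right|[/itex]), the expansion is

    [tex]\Delta A = \sum_n \left( a_n - \left\langle A \right\rangle \right) \left|a_n\right\rangle \left\langle a_n\right| \quad\Longrightarrow\quad \left\langle \psi \right| \left( \Delta A \right)^2 \left| \psi \right\rangle = \sum_n \left( a_n - \left\langle A \right\rangle \right)^2 \left| \left\langle a_n | \psi \right\rangle \right|^2 ,[/tex]

    which is exactly the probability-weighted sum of squared deviations from my first post.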

    Of course I made a typo writing [itex]\left\langle \Delta A \right\rangle ^2[/itex] instead of [itex]\left\langle \left( \Delta A \right)^2 \right\rangle [/itex]. Thanks for pointing this out.
     