# Operator dispersion

## Main Question or Discussion Point

I'm trying to get my head around quantum mechanics with the help of Sakurai's "Modern Quantum Mechanics". It's been good so far, but I came across a formula I don't really understand. When discussing the uncertainty relation (in Section 1.4), the author begins by defining an "operator":

$\Delta A \equiv A - \left\langle A \right\rangle$

where A is an observable. He then defines the dispersion of A as the expectation value of the square of this operator: $\left\langle \Delta A \right\rangle ^ 2$.

I'm fairly sure I understand the concept of observable dispersion, but correct me if I'm wrong: it's the average squared deviation of the measurements from the mean (the variance), computed for a given state (ket). Repeating the same measurement on identically prepared states yields outcomes with a certain variance, which is the same as the dispersion we're talking about. Is that right? I think it is, because I was even able to arrive at the proper formula (which agrees with the author's result) by summing the squared deviations from the 'mean' (the expectation value of A) over all eigenkets of the operator, weighted by the probabilities of each outcome.
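As a sanity check of that weighted-sum picture, here is a minimal numerical sketch. The eigenvalues and probabilities are hypothetical, chosen only for illustration (a two-outcome observable such as a spin measurement):

```python
import numpy as np

# Hypothetical two-outcome observable: eigenvalues +1 and -1,
# with outcome probabilities taken from some example state.
eigenvalues = np.array([1.0, -1.0])
probabilities = np.array([0.8, 0.2])  # must sum to 1

# <A>: outcomes weighted by their probabilities
mean = np.sum(probabilities * eigenvalues)

# Dispersion: squared deviations from the mean, weighted by probabilities
variance = np.sum(probabilities * (eigenvalues - mean) ** 2)

print(mean)      # 0.6
print(variance)  # 0.64
```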

However, what I don't understand is the author's derivation, in particular the definition of this new 'delta operator' - let me write it again:

$\Delta A \equiv A - \left\langle A \right\rangle$

How can one subtract the expected value which is a number (scalar) from an operator, which is represented by some matrix? This doesn't seem kosher. Is this a common practice? Will I see more examples of such 'flawed' notation? This seems really confusing...

It's just shorthand for $A-\left\langle A \right\rangle\cdot\mathrm{id}$, where $\mathrm{id}$ is the identity operator.

He then defines the dispersion of A as the expectation value of the square of this operator: $\left\langle \Delta A \right\rangle ^ 2$.
I think you made a mistake there, since $\left\langle \Delta A \right\rangle=0$. I guess it's more like $\sigma^2_A=\left\langle\left( \Delta A\right)^2 \right\rangle$.
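Both facts are easy to verify numerically. Below is a sketch with a hypothetical 2×2 Hermitian observable and an example state (both made up for illustration): $\left\langle \Delta A \right\rangle$ vanishes by construction, while $\left\langle\left(\Delta A\right)^2\right\rangle$ equals $\left\langle A^2\right\rangle - \left\langle A\right\rangle^2$:

```python
import numpy as np

# Hypothetical 2x2 Hermitian observable and a normalized state |psi>
A = np.array([[1.0, 0.5],
              [0.5, -1.0]])
psi = np.array([1.0, 1.0]) / np.sqrt(2)

expA = psi @ A @ psi                 # <A> = <psi|A|psi>
DeltaA = A - expA * np.eye(2)        # Delta A = A - <A> * id

# <Delta A> = <A> - <A> = 0 for any state
print(psi @ DeltaA @ psi)            # ~0.0

# <(Delta A)^2> is the dispersion, equal to <A^2> - <A>^2
var_op = psi @ (DeltaA @ DeltaA) @ psi
var_id = psi @ (A @ A) @ psi - expA ** 2
print(var_op, var_id)
```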

This makes sense now!

To make sure it's consistent, I tried expanding $\Delta A$ in the eigenbasis of A, using $A = \sum\limits_n a_n \left|a_n\right\rangle \left\langle a_n\right|$, and applying it to a ket vector. I got the correct result (the sum of squared deviations of the eigenvalues from the mean, weighted by probabilities)! This is exciting. Thanks!
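That consistency check can also be done numerically: diagonalize a (hypothetical, made-up) observable, compute the dispersion from the spectral form $\sum_n |\langle a_n|\psi\rangle|^2 (a_n - \langle A\rangle)^2$, and compare with the operator form $\left\langle\left(\Delta A\right)^2\right\rangle$:

```python
import numpy as np

# Hypothetical observable; eigh returns eigenvalues a_n and
# eigenkets |a_n> as the columns of vecs
A = np.array([[2.0, 1.0],
              [1.0, 0.0]])
psi = np.array([0.6, 0.8])           # normalized example state

a, vecs = np.linalg.eigh(A)
probs = np.abs(vecs.T @ psi) ** 2    # |<a_n|psi>|^2

# Spectral form: squared deviations weighted by outcome probabilities
mean = np.sum(probs * a)
disp_spectral = np.sum(probs * (a - mean) ** 2)

# Operator form: <psi|(A - <A> id)^2|psi>
DeltaA = A - mean * np.eye(2)
disp_operator = psi @ (DeltaA @ DeltaA) @ psi

print(np.isclose(disp_spectral, disp_operator))  # True
```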

Of course I made a typo writing $\left\langle \Delta A \right\rangle ^2$ instead of $\left\langle \left( \Delta A \right)^2 \right\rangle$. Thanks for pointing this out.