
Equivalent of HUP for spins?

  1. Feb 22, 2010 #1

    nomadreid

    Gold Member

    I know the following question is elementary, but being a dilettante in QM, I am confused about the following. So, I have two questions.

    The first question is whether the following reasoning is correct, and if not, why not.

    First, the Heisenberg Uncertainty Principle (HUP) applies to Hermitian operators. Second, the operators that act on spins are unitary. Third, although Hermitian and unitary operators are not identical, they are closely related: for every unitary operator U there exists a Hermitian operator K such that U = exp(iK). Therefore there should be an equivalent of the HUP that one can apply to the unitary operators used to go from one spin state to another, for example on the Bloch sphere.
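    For concreteness, here is a minimal numerical sketch of that last relation for a spin-1/2 rotation (Python with numpy/scipy; the particular rotation angle and the variable names are just illustrative, not part of the question):

    [code]
import numpy as np
from scipy.linalg import expm, logm

# A spin-1/2 rotation about the y-axis by angle theta is the unitary U = exp(-i*theta*S_y), hbar = 1.
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
theta = 0.7
U = expm(-1j * theta * sy)

# Recover a Hermitian generator K with U = exp(iK) via the (principal) matrix logarithm.
K = logm(U) / 1j
print(np.allclose(K, K.conj().T))    # True: K is Hermitian
print(np.allclose(expm(1j * K), U))  # True: exp(iK) reproduces U
    [/code]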

    The second question is: if the above is correct, then what is this equivalent principle? If it is not, is there any way that the HUP is used on operators on spin states?

    Hope the question does not appear too stupid; thanks in advance for the answers.
     
  3. Feb 23, 2010 #2

    Matterwave

    Science Advisor
    Gold Member

    There is an HUP among the three operators Sx, Sy and Sz.

    The generalized HUP says:

    [tex]<A^2><B^2>\geq \frac{1}{4}|<[A,B]>|^2[/tex]

    Just plug in either Sx or Sy or Sz for either A or B.
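    As a quick numerical illustration (a numpy sketch with hbar = 1; the state and helper names are just illustrative), one can check the inequality for a generic spin-1/2 state:

    [code]
import numpy as np

# Spin-1/2 operators S_i = sigma_i / 2 with hbar = 1.
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

def expval(op, psi):
    """<psi|op|psi> for a normalized column vector psi."""
    return (psi.conj().T @ op @ psi).item()

# A generic normalized spin state cos(t)|up> + e^{ip} sin(t)|down>.
t, p = 0.4, 1.1
psi = np.array([[np.cos(t)], [np.exp(1j * p) * np.sin(t)]])

A, B = sx, sy
lhs = expval(A @ A, psi).real * expval(B @ B, psi).real  # <A^2><B^2> (= 1/16 here, since S_i^2 = I/4)
rhs = 0.25 * abs(expval(A @ B - B @ A, psi)) ** 2        # (1/4)|<[A,B]>|^2 = <S_z>^2 / 4
print(lhs, ">=", rhs)                                    # the inequality holds
    [/code]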
     
  4. Feb 23, 2010 #3

    nomadreid

    Gold Member

    Ah. That makes sense. Thank you, Matterwave.
     
  5. Feb 23, 2010 #4

    nomadreid

    Gold Member

    Postscript. As I said, the reply makes sense.... mathematically. I even see how to derive it. However, my physical intuition is limited. The more traditional version (one substitution further on) can be given an intuitive form by saying something like: "if we prepare a large number of quantum systems in identical states, and then perform measurements of C on some of those systems and of D on others, then the standard deviation of the C results times the standard deviation of the D results will satisfy..." However, when I try to do the same thing for the generalized HUP, I don't come out with anything nearly as concrete. Could you paraphrase it in measurement terms? Thanks again.
     
  6. Feb 23, 2010 #5

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    The interpretation is precisely what you said, no matter what C and D are. I'm a bit confused by the left-hand side in Matterwave's HUP though. See this post for my version. (I don't have time right now to think about if his version is correct too).
     
  7. Feb 23, 2010 #6

    Matterwave

    Science Advisor
    Gold Member

    Yes, the LHS of my equation is not quite good notation. Better notation would be [tex]\sigma_{A}^2\sigma_{B}^2[/tex] or [tex](\Delta A)^2(\Delta B)^2[/tex]

    Taking the square root of the entire inequality yields the regular form of the HUP, i.e.

    [tex]\sigma_A\sigma_B\geq\frac{1}{2}|\langle[A,B]\rangle|[/tex]

    I just copied that form from wikipedia without really thinking about it. My apologies.
     
    Last edited: Feb 23, 2010
  8. Feb 23, 2010 #7

    nomadreid

    Gold Member

    Fredrik: Thanks, but the post you referred me to repeats my statement of the modern interpretation of the full HUP. The derivation is clear to me, but that was not my question. That is, I am fine with
    (*) σ_C∙σ_D ≥ |〈ψ|[C,D] |ψ〉|/2
    when C and D are the momentum and position operators. However, it is the step earlier in that derivation, which can be applied to the spin operators, that I am uncomfortable with: not the mathematics or derivation of it, but the physical interpretation of
    (**) 4 〈ψ|C^2 |ψ〉 〈ψ|D^2 |ψ〉 ≥ |〈ψ|[C,D] |ψ〉 |^2
    when C and D are orthogonal spin operators.
    (Sorry if this is a little clumsily typed: just refer to Matterwave's posts for neater versions.) In my second post I wrote the physical interpretation of (*) merely to show what sort of explanation I was looking for for the physical interpretation of (**), if there is one; that is, in terms of measurement.

    Matterwave: I am not bothered by the different versions of notation. However, I am still puzzled as to the measurement interpretation (is there one?) of (**).

    Thanks for further replies.
     
  9. Feb 23, 2010 #8

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I understood your question, and I thought I answered it by saying that you had already posted the correct interpretation. Why do you think that the interpretation is any different for an arbitrary pair of operators than for x and p?

    I'm not saying that it's obvious how the inequality should be interpreted, but I am saying that it's impossible to understand what

    [tex]\Delta p=\sqrt{\langle(p-\langle p\rangle)^2\rangle}[/tex]

    means without also understanding what

    [tex]\Delta A=\sqrt{\langle(A-\langle A\rangle)^2\rangle}[/tex]

    means. <A> is the average result that we get if we measure A many times on systems that are all prepared in the same state [tex]|\psi\rangle[/tex]. (A-<A>)^2 can be thought of as a mathematical representation of a measurement device that first measures A to get an intermediate result a, then calculates (a-<A>)^2, and presents that as the result of the measurement. [itex]\langle(A-\langle A\rangle)^2\rangle[/itex] is the average result of a large number of measurements performed by such a device, on systems that are all prepared in the same state [tex]|\psi\rangle[/tex] as before. And the uncertainty [itex]\Delta A[/itex] is defined to be equal to the square root of that.

    The uncertainty/standard deviation is defined in this slightly awkward way because the more intuitive quantity, the "average deviation from the average", i.e. <A-<A>>, is always zero.

    It seems that Matterwave's version of the HUP is just my version squared. So there's no significant difference between them.
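    To make the "many measurements on identically prepared systems" picture concrete, here is a small simulation sketch (numpy, hbar = 1): it draws the outcomes of S_x measurements on copies of the spin-up-along-z state, where each outcome ±hbar/2 occurs with probability 1/2, and computes the sample mean and standard deviation.

    [code]
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0

# "Measure" S_x on many copies of |up_z>: outcomes +hbar/2 and -hbar/2, each with probability 1/2.
outcomes = rng.choice([+hbar / 2, -hbar / 2], size=100_000, p=[0.5, 0.5])

mean = outcomes.mean()                 # sample estimate of <S_x>, close to 0
var = ((outcomes - mean) ** 2).mean()  # sample estimate of <(S_x - <S_x>)^2>
print(mean, np.sqrt(var))              # the standard deviation approaches hbar/2 = 0.5
    [/code]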
     
  10. Feb 23, 2010 #9

    SpectraCat

    Science Advisor

    The spectrum of the spin operators for a spin-1/2 particle is discrete, with possible eigenvalues of ±hbar/2. To understand what "uncertainty" means in this context, think about what happens if you take a beam of H-atoms with randomly polarized spins through the following experiment. First put them through a Stern-Gerlach (SG) filter oriented along the z-axis and select the upward-deflected component (SGz+), corresponding to the eigenvalue +hbar/2. Now take that beam and send it through an SG filter oriented along the x-axis. What will be the result of the second measurement, +hbar/2 or -hbar/2? Since the SGz+ state can be written as a symmetric linear combination of the SGx+ and SGx- states, you have equal probability of getting either result, so the spread of the S_x results is as large as it can be: the "uncertainty" in the x-component is hbar/2 even though the z-component was known exactly.

    This analogy is not perfect, because the uncertainty in the first measurement can be said to be zero if you only select one beam, but hopefully it provides you with the physical picture that you are looking for. Essentially, you cannot simultaneously know the components of the angular momentum along orthogonal Cartesian axes with arbitrary precision.
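    Here is a numerical version of this cascaded Stern-Gerlach picture (a numpy sketch; the variable names are just illustrative): starting from the state selected by SGz+, compute the probabilities of the two SGx outcomes and the resulting spread of the S_x results.

    [code]
import numpy as np

hbar = 1.0
up_z = np.array([1, 0], dtype=complex)                # state selected by the SGz+ filter
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)   # S_x eigenstate with eigenvalue +hbar/2
dn_x = np.array([1, -1], dtype=complex) / np.sqrt(2)  # S_x eigenstate with eigenvalue -hbar/2

p_plus = abs(np.vdot(up_x, up_z)) ** 2    # probability of +hbar/2 at the SGx filter
p_minus = abs(np.vdot(dn_x, up_z)) ** 2   # probability of -hbar/2

mean_sx = p_plus * (hbar / 2) + p_minus * (-hbar / 2)
std_sx = np.sqrt(p_plus * (hbar / 2 - mean_sx) ** 2 + p_minus * (-hbar / 2 - mean_sx) ** 2)
print(p_plus, p_minus, std_sx)   # 0.5, 0.5, 0.5 -> Delta S_x = hbar/2, the maximum possible
    [/code]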
     
  11. Feb 24, 2010 #10

    nomadreid

    Gold Member

    Thank you, SpectraCat and Fredrik, for your replies.
    First, SpectraCat: yes, this is the sort of thing I was looking for, something to bite on in my attempt to correlate the mathematical expression with a physical one. Again, thanks. And I like your choice of avatar name for QM.
    Secondly, Fredrik: yes, the expressions of the regular form of the HUP are the same in Matterwave's and your versions, but I am more interested in what Matterwave called the generalized form, which I wrote as (**) in my post. You asked why I should think that the regular form does not apply to the spin operators. Here was my confusion, which probably contains an elementary mistake, but one that escapes me: in the usual form, you take the standard deviation of each operator independently, and only then multiply them. If you did this with the spins, each standard deviation would give exactly h-bar/2 (expected value zero, and each value +/- h-bar/2), and the product would be h-bar^2/4, which doesn't seem to match the usual HUP. So I did not see how to apply the regular form (which I had only seen applied to the continuous case) to a discrete case. I would welcome your correction to my reasoning. (Or anyone else who reads this.)
     
  12. Feb 24, 2010 #11

    Matterwave

    Science Advisor
    Gold Member

    Remember that the HUP is an inequality, and not an equality. The HUP only gives you a minimum possible uncertainty, and this minimum is not always realizable.

    I suppose you can think of the spin as a vector, like angular momentum, that points in an uncertain direction. When you try to measure the z-component of this vector, you get a certain standard deviation. When you try to measure the x or y component of this vector for an identically prepared particle, you get a certain other standard deviation. The product of these will always be greater than or equal to a certain value, as specified by the HUP.
     
  13. Feb 24, 2010 #12

    nomadreid

    Gold Member

    Matterwave: again thank you for helping me. If I may tax your patience a little more: if I understand correctly, your reply is essentially the same as SpectraCat's. The fact that the relation will be an inequality is clear in the continuous case, but it is one of the things I find puzzling when it is applied to a discrete case, as I outlined in my last reply to Fredrik.
     
  14. Feb 24, 2010 #13

    SpectraCat

    Science Advisor

    Just to be completely clear for the OP's sake, the HUP says nothing explicit about the uncertainty of a single measurement. Any single measurement of any observable, commuting or non-commuting, will result in an eigenstate of that observable, and (theoretically) can be made with arbitrary precision. The HUP says there is a lower limit on the product of the widths of the distributions you will observe from multiple measurements of two non-commuting observables.
     
  15. Feb 24, 2010 #14

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I consider that inequality an intermediate step in the derivation of the usual (generalized) uncertainty relation. What remains to be done there is simply to substitute A and B for A-<A> and B-<B> on both sides, and then take the square root of both sides. (I did that in my derivation).

    The interpretation of <A^2> is easier than the interpretation of ΔA. A^2 can be interpreted as a measuring device that first measures A to get a number a, and then presents a^2 as the result of the measurement. And <A^2> is the average of many such results.

    Now you're talking about standard deviations rather than expectation values of squared operators (which is what that "intermediate" inequality is about), so I assume that you're talking about the (generalized) uncertainty relation in its standard form. ("Generalized" just means that the operators don't have to be x and p).

    You seem to be forgetting that all expectation values are computed using the same state. If the state is an eigenstate of Sz, the result of a Sz measurement is 1/2 every time. (I'm setting [itex]\hbar=1[/itex]). So <Sz >=1/2 and

    [tex]\Delta S_z=\sqrt{\langle(S_z-\langle S_z\rangle)^2\rangle}=\sqrt{\langle(S_z-\frac 1 2)^2\rangle}=0[/tex]

    Recall what I said about the interpretation of (A-<A>)^2 in my previous post. If you understand that, you'll understand that when A=Sz, its expectation in an eigenstate of Sz is 0.
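    A quick numerical check of that statement (a numpy sketch with hbar = 1, using the +1/2 eigenstate of S_z):

    [code]
import numpy as np

hbar = 1.0
sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)
up = np.array([[1], [0]], dtype=complex)           # the +hbar/2 eigenstate of S_z

exp_sz = (up.conj().T @ sz @ up).item().real       # <S_z> = +1/2
dev = sz - exp_sz * np.eye(2)                      # the operator S_z - <S_z>
var_sz = (up.conj().T @ dev @ dev @ up).item().real
print(exp_sz, var_sz)                              # 0.5, 0.0 -> Delta S_z = 0 in this state
    [/code]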
     
  16. Feb 25, 2010 #15

    nomadreid

    Gold Member

    Fredrik: Thanks for the continued help. I have two main points:

    First, a question about the following:
    "If the state is an eigenstate of Sz, the result of a Sz measurement is 1/2 every time."

    I thought that Sz (talking about an electron) had two eigenvalues, [tex]\pm\frac{1}{2}[/tex]
    ???

    Now, correct me if I am wrong, but I think there is a problem with notation. When you wrote
    "What remains to be done there is simply to substitute A and B for A-<A> and B-<B> on both sides,"
    it would have been clearer if different variables had been used. That is, something like
    "What remains to be done there is simply to substitute C and D for A-<A> and B-<B>, respectively on both sides,"


    However, I still am a little confused with your notation.
    (By the way, how do you quote to get the quotes in those nice boxes? I haven't used the Forums enough, apparently.)

    Starting with your
    "<A> is the average result that we get if we measure A many times on systems that are all prepared in the same state |psi> . (A-<A>)^2 can be thought of as a mathematical representation of a measurement device that first measures A to get an intermediate result a, then calculates (a-<A>)^2, and presents that as the result of the measurement. <(A-<A>)^2> is the average result of a large number of measurements performed by such a device, on systems that are all prepared in the same state |psi> as before."

    [Hey, how come the latex codes didn't translate when I cut and paste? I will be a little lazy and write things out. ]

    In what way does this definition differ from the usual definition of standard deviation?

    Going on to
    "<A> is the average result that we get if we measure A many times on systems that are all prepared in the same state psi "

    This looks like the definition of the expected value.

    Up to this point, the notation mix-up is not responsible for the confusion. However, perhaps it is responsible for the following?

    The above seems to contradict the following. When you say
    "So <Sz> = 1/2 and

    Delta S_z = sqrt{<(S_z-<S_z>)^2>} = sqrt{<(S_z-1/2)^2>} = 0"

    and "when A=Sz, its expectation in an eigenstate of Sz is 0,"

    you seem to be saying that the definition which I took to be the standard deviation applies after the substitution, whereas in stating it as the expected value, you are thinking of before the substitution?

    Then "And the uncertainty Delta-A is defined to be equal to square root of that."

    "the square root of that" being the definition that I took to be the standard deviation, and
    since the uncertainties in the HUP is defined as the standard deviations, this would seem to confirm my original impression that this is the standard deviation (after the substitution), and the expected value (before the substitution). However, then I run into another notation confusion, since the expected value is written <C>, not Delta-C, which is reserved for the standard deviation. However, you are now saying that Delta S[tex]_{z}[/tex] = 0, so that if you are measuring S[tex]_{z}[/tex] and S[tex]_{x}[/tex], you would end up with [tex]\Delta[/tex]S[tex]_{z}[/tex][tex]\Delta[/tex]S[tex]_{x}[/tex] = 0, which is absurd.

    (Sorry, superscripts rather than subscripts, but you get the idea, as I can't correct this with a simple edit.)

    Where have I gone wrong in interpreting your uses of symbols for expected value and standard deviation?
     
    Last edited: Feb 25, 2010
  17. Feb 25, 2010 #16

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I don't have time to reply to the rest right now, but since you spotted an obvious error, I should at least address that. What I was thinking was "if the state is the positive eigenvalue eigenstate of Sz...", but it came out wrong.
     
  18. Feb 25, 2010 #17

    Matterwave

    Science Advisor
    Gold Member

    One should note that the "uncertainty" in the case of the HUP should always be applied to a series of measurements, rather than one single measurement. The uncertainty principle is a consequence of the statistical (or probabilistic) interpretation of QM.

    The uncertainty in one single measurement is limited by your measuring device but, in principle, could be zero. Suppose I measure the position of my particle and then measure its momentum. Both measurements can independently have (theoretically) zero uncertainty; however, the subsequent measurement of momentum has "erased" the first measurement of position. Although each measurement has zero uncertainty, I still don't know both values to better than the HUP allows, because both values simply do not exist simultaneously.

    Heisenberg and Bohr tried hard to figure out why a measurement of momentum would erase a measurement of position, and vice versa. This is one reason Heisenberg proposed his microscope thought experiment, I believe.
     
  19. Feb 25, 2010 #18

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    We have proved that any two Hermitian operators A and B satisfy an inequality that can be put in the form f(A,B)≥0. If A and B are Hermitian, then so is any operator of the form A+cI or B+cI, where c is a real number and I is the identity operator, so we must also have f(A-<A>,B-<B>)≥0.

    I think you're right that a lot of people would find it easier to understand a proof of f(X,Y)≥0 followed by the choice X=A-<A>, Y=B-<B>. Sometimes I forget that people who are just learning this stuff for the first time lack mathematical maturity. (Yes, I think it's mostly a matter of mathematical maturity. I wouldn't expect a graduate student to get confused by what I said).

    Use quote tags. Click the quote button next to the post you'd like to reply to and you'll see what they look like. (When you do, please delete everything but the stuff that you're actually replying to. It's annoying to see people quote a full-page post in its entirety and then type a one-sentence reply). You can type quote tags manually, but if you use the ones created when you click the quote button, you also get that little link back to the post you're quoting.

    (When I wrote this reply, I copied the quote line manually.)

    Regarding your question about how this differs from the usual definition of standard deviation: it doesn't. It is the usual definition of the quantum mechanical version of "variance". The standard deviation is the square root of the variance, both in QM and statistics.

    It is; that line is indeed the definition of the expected value. Well, technically, I'd say that the definition of the expectation value of [itex]A[/itex] in the state [itex]|\psi\rangle[/itex] is

    [tex]\langle A\rangle=\langle\psi|A|\psi\rangle[/tex]

    and that the reason why we define it this way is that the right-hand side is equal to

    [tex]\sum_a |\langle a|\psi\rangle|^2 a[/tex]

    which is the average result of a large number of measurements.
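    As a sanity check, the two expressions agree numerically; here is a sketch for S_x and an arbitrarily chosen normalized state (numpy, hbar = 1; the state is just an example):

    [code]
import numpy as np

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([0.6, 0.8], dtype=complex)      # any normalized state (just an example)

direct = np.vdot(psi, sx @ psi).real           # <psi|S_x|psi>

evals, evecs = np.linalg.eigh(sx)              # eigenvalues a and eigenvectors |a> of S_x
weighted = sum(abs(np.vdot(evecs[:, i], psi)) ** 2 * evals[i] for i in range(2))
print(direct, weighted)                        # both give 0.48: sum over a of |<a|psi>|^2 * a
    [/code]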

    Your quoting technique has hidden the fact that the word "its" in the last sentence refers to (A-<A>)^2. The next problem is the same mistake (on my part) that you spotted before. It shouldn't be "an eigenstate", it should be "the positive value eigenstate". So what I was trying to say is that when [itex]|\psi\rangle=|\uparrow\rangle[/itex],

    [tex](\Delta S_z)^2=\langle(S_z-\langle S_z\rangle)^2\rangle=\langle(S_z-\frac 1 2)^2\rangle=0[/tex]

    I wanted you to think that, and then realize that it isn't absurd if the right-hand side is 0 too. You only got the first part of that right. :wink: (I realize of course that my mistake of saying "an eigenstate" when I specifically meant [itex]|\uparrow\rangle[/itex] contributed to that). If you want a non-trivial inequality (i.e. not 0≥0), the state can't be an eigenstate of one of the operators.
     
    Last edited: Feb 25, 2010
  20. Feb 25, 2010 #19

    Matterwave

    Science Advisor
    Gold Member

    To make this point clearer, derive the HUP for spin in a spin eigenstate. The relevant commutation relation is:

    [tex][S_z, S_x]=i\hbar S_y[/tex]

    Plugging this into the generalized uncertainty principle, you get:

    [tex]\sigma_z\sigma_x \geq \frac{\hbar}{2}|\langle S_y\rangle|[/tex] and for an eigenstate of [tex]S_z[/tex], the expectation value of [tex]S_y[/tex] is zero (check for yourself by finding the expectation value of the Pauli spin matrix [tex]\sigma_y[/tex] for an eigenstate of [tex]S_z[/tex]).
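    Here is a numerical check of both cases (a numpy sketch; the generic state's parameters are chosen arbitrarily): in an eigenstate of S_z both sides are zero, while for a generic state the inequality is strict.

    [code]
import numpy as np

hbar = 1.0
sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)
sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = (hbar / 2) * np.array([[1, 0], [0, -1]], dtype=complex)

def sigma(op, psi):
    """Standard deviation sqrt(<op^2> - <op>^2) in the state psi."""
    m = np.vdot(psi, op @ psi).real
    m2 = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(max(m2 - m * m, 0.0))

states = [("eigenstate of S_z", np.array([1, 0], dtype=complex)),
          ("generic state", np.array([np.cos(0.4), np.sin(0.4) * np.exp(0.7j)]))]
for label, psi in states:
    lhs = sigma(sz, psi) * sigma(sx, psi)
    rhs = (hbar / 2) * abs(np.vdot(psi, sy @ psi))
    print(label, lhs, ">=", rhs)   # eigenstate: 0 >= 0; generic state: strict inequality
    [/code]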
     
    Last edited: Feb 26, 2010
  21. Feb 26, 2010 #20

    nomadreid

    Gold Member

    OK, Fredrik and Matterwave, things are starting to look a lot clearer after these latest explanations. I am really very grateful to both of you for that, as well as to SpectraCat earlier on in this thread. Now I can go back and keep working on further details in QM, which of course will lead me to further difficulties (more's the fun!), so you will see me again in this section (with neater posts, thanks to Fredrik's indications). For now, thanks a million!
     