Is there an equivalent of the HUP for spins?

In summary, the conversation discusses the Heisenberg Uncertainty Principle and its application to Hermitian and unitary operators, specifically in relation to spin states. The first question posed is whether there is an equivalent principle that can be applied to unitary operators used to go from one spin state to another, and the second question asks for the interpretation of the generalized HUP when applied to spin operators. The conversation also delves into the physical interpretation of the HUP and its mathematical notation.
  • #1
nomadreid
I know the following question is elementary, but being a dilettante in QM, I am confused about the following, so I have two questions.

The first question is whether the following reasoning is correct, and if not, why not.

First, the Heisenberg Uncertainty Principle (HUP) applies to Hermitian operators. Second, operators on spins are unitary. Third, although Hermitian and unitary operators are not identical, there is a correspondence given by the fact that for every unitary operator U there exists a Hermitian operator K such that U = exp(iK). Therefore there should be an equivalent of the HUP that one can apply to the unitary operators which are used to go from one spin state to another, for example on the Bloch sphere.

The second question is: if the above is correct, then what is this equivalent principle? If it is not, is there any way that the HUP is used on operators on spin states?

Hope the question does not appear too stupid; thanks in advance for the answers.
 
  • #2
There is an HUP among the three operators Sx, Sy, and Sz.

The generalized HUP says:

[tex]<A^2><B^2>\geq \frac{1}{4}|<[A,B]>|^2[/tex]

Just plug in either Sx or Sy or Sz for either A or B.
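As a concrete illustration of this inequality for the spin operators, here is a minimal numerical sketch (not from the original post; it assumes hbar = 1, the standard spin-1/2 matrices S_i = σ_i/2, and a randomly chosen normalized state):

[code]
# Minimal sketch (assumptions: hbar = 1, S_i = sigma_i / 2, random normalized state).
# Checks <A^2><B^2> >= (1/4)|<[A,B]>|^2 for each pair of spin-1/2 operators.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                      # normalize |psi>

def expval(op, state):
    """<state|op|state>, real part (op is Hermitian)."""
    return np.vdot(state, op @ state).real

for A, B, name in [(sx, sy, "Sx,Sy"), (sy, sz, "Sy,Sz"), (sz, sx, "Sz,Sx")]:
    lhs = expval(A @ A, psi) * expval(B @ B, psi)              # <A^2><B^2>
    rhs = 0.25 * abs(np.vdot(psi, (A @ B - B @ A) @ psi))**2   # (1/4)|<[A,B]>|^2
    print(f"{name}: {lhs:.4f} >= {rhs:.4f} ->", lhs >= rhs)
[/code]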
 
  • #3
Ah. That makes sense. Thank you, Matterwave.
 
  • #4
Postscript. As I said, the reply makes sense... mathematically. I even see how to derive it. However, my physical intuition is limited. The more traditional version (one substitution further on) can be given an intuitive form by saying something like "if we prepare a large number of quantum systems in identical states, and then perform measurements of C on some of those systems, and of D on others, then the standard deviation of the C results times the standard deviation of the D results will satisfy...". However, when I try to do the same thing for the generalized HUP, I don't come out with anything nearly as concrete. Could you paraphrase it in measurement terms? Thanks again.
 
  • #5
The interpretation is precisely what you said, no matter what C and D are. I'm a bit confused by the left-hand side in Matterwave's HUP though. See this post for my version. (I don't have time right now to think about if his version is correct too).
 
  • #6
Yes, the LHS of my equation is not quite good notation. Better notation would be [tex]\sigma_{A}^2\sigma_{B}^2[/tex] or [tex](\Delta A)^2(\Delta B)^2[/tex].

Taking the square root of the entire inequality yields the regular form of the HUP,

i.e. [tex]\sigma_A\sigma_B\geq\frac{1}{2}|<[A,B]>|[/tex]

I just copied that form from wikipedia without really thinking about it. My apologies.
 
Last edited:
  • #7
Fredrik: Thanks, but the post you referred me to repeats my statement of the modern interpretation of the full HUP, and the derivation is clear to me, but that was not my question. That is, I am fine with
(*) σ_C·σ_D ≥ |⟨ψ|[C,D]|ψ⟩|/2
when C and D are the momentum and position operators. However, it is the step earlier in that derivation, which can be applied to the spin operators, that I am uncomfortable with: not the mathematics or derivation of it, but the physical interpretation of
(**) 4⟨ψ|C^2|ψ⟩⟨ψ|D^2|ψ⟩ ≥ |⟨ψ|[C,D]|ψ⟩|^2
when C and D are orthogonal spin operators.
(Sorry if this is a little clumsily typed: just refer to Matterwave's posts for neater versions.) In my second post I wrote the physical interpretation of (*) merely to show what sort of explanation I was looking for for the physical interpretation of (**), if there is one; that is, in terms of measurement.

Matterwave: I am not bothered by the different versions of notation. However, I am still puzzled as to the measurement interpretation (is there one?) of (**).

Thanks for further replies.
 
  • #8
I understood your question, and I thought I answered it by saying that you had already posted the correct interpretation. Why do you think that the interpretation is any different for an arbitrary pair of operators than for x and p?

I'm not saying that it's obvious how the inequality should be interpreted, but I am saying that it's impossible to understand what

[tex]\Delta p=\sqrt{\langle(p-\langle p\rangle)^2\rangle}[/tex]

means without also understanding what

[tex]\Delta A=\sqrt{\langle(A-\langle A\rangle)^2\rangle}[/tex]

means. <A> is the average result that we get if we measure A many times on systems that are all prepared in the same state [tex]|\psi\rangle[/tex]. (A-<A>)^2 can be thought of as a mathematical representation of a measurement device that first measures A to get an intermediate result a, then calculates (a-<A>)^2, and presents that as the result of the measurement. [itex]\langle(A-\langle A\rangle)^2\rangle[/itex] is the average result of a large number of measurements performed by such a device, on systems that are all prepared in the same state [tex]|\psi\rangle[/tex] as before. And the uncertainty [itex]\Delta A[/itex] is defined to be equal to square root of that.

The uncertainty/standard deviation is defined in this slightly awkward way because the more intuitive quantity, the "average deviation from the average", i.e. <A-<A>> is always =0.

It seems that Matterwave's version of the HUP is just my version squared. So there's no significant difference between them.
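To connect the ΔA definition above to actual measurement statistics, here is a small sketch (again just an illustration, assuming hbar = 1 and taking A = Sx measured on systems prepared in the Sz "up" state): the standard deviation of many simulated measurement results approaches the ΔA computed from the formula.

[code]
# Sketch (assumptions: hbar = 1, A = Sx, state = Sz "up" eigenstate).
# Compares the spread of simulated measurement results with Delta A.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
psi = np.array([1.0, 0.0])                  # |up>, the Sz = +1/2 eigenstate

vals, vecs = np.linalg.eigh(sx)             # possible results and eigenvectors of A
probs = np.abs(vecs.conj().T @ psi) ** 2    # Born-rule probabilities of each result

rng = np.random.default_rng(1)
samples = rng.choice(vals, size=100_000, p=probs)      # simulated measurement results

mean_A = np.vdot(psi, sx @ psi).real                   # <A>
var_A = np.vdot(psi, sx @ sx @ psi).real - mean_A**2   # <(A - <A>)^2>
print("std of simulated results:", samples.std())
print("Delta A from the formula:", np.sqrt(var_A))     # both approach 1/2
[/code]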
 
  • #9
nomadreid said:
...it is the step earlier in that derivation, which can be applied to the spin operators, that I am uncomfortable with: not the mathematics or derivation of it, but the physical interpretation of
(**) 4⟨ψ|C^2|ψ⟩⟨ψ|D^2|ψ⟩ ≥ |⟨ψ|[C,D]|ψ⟩|^2
when C and D are orthogonal spin operators. ... I am still puzzled as to the measurement interpretation (is there one?) of (**).

The spectrum of the spin operators for a spin-1/2 particle is discrete, with possible eigenvalues of ±hbar/2. To understand what "uncertainty" means in this context, think about what happens if you take a beam of H-atoms with randomly polarized spins in the following experiment. First put them through a Stern-Gerlach (SG) filter oriented along the z-axis and select the upward-deflected component (SGz+), corresponding to the eigenvalue +hbar/2. Now take that beam and send it through an SG filter oriented along the x-axis. What will be the result of the second measurement, +hbar/2 or -hbar/2? Since the SGz+ state can be written as a symmetric linear combination of the SGx+ and SGx- states, you have equal probability of getting either result, so the "uncertainty" in the result of the second measurement is clearly nonzero (it works out to hbar/2).

This analogy is not perfect, because the uncertainty in the first measurement can be said to be zero if you only select one beam, but hopefully it provides you with the physical picture that you are looking for. Essentially, you cannot simultaneously know the components of the angular momentum along orthogonal cartesian axes with arbitrary precision.
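SpectraCat's Stern-Gerlach picture can also be put in numbers; a rough sketch (mine, not from the post, with hbar = 1) of the probabilities for the second, x-oriented filter, given the SGz+ input beam:

[code]
# Sketch (assumptions: hbar = 1, first filter selects the Sz = +1/2 state).
# Probabilities of the two possible results of a subsequent Sx measurement.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
z_plus = np.array([1.0, 0.0])               # beam selected by the SGz+ filter

vals, vecs = np.linalg.eigh(sx)             # Sx eigenvalues -1/2 and +1/2
for val, vec in zip(vals, vecs.T):
    p = abs(np.vdot(vec, z_plus)) ** 2      # Born-rule probability
    print(f"P(Sx = {val:+.1f}) = {p:.2f}")  # 0.50 each
[/code]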
 
  • #10
Thank you, SpectraCat and Fredrik, for your replies.
First, SpectraCat: yes, this is the sort of thing I was looking for, something to bite on in my attempt to correlate the mathematical expression with a physical one. Again, thanks. And I like your choice of avatar name for QM.
Secondly, Fredrik: yes, the expressions of the regular form of the HUP are the same in Matterwave's and your versions, but I am more interested in what Matterwave called the generalized form, which I wrote as (**) in my post. You asked why I should think that the regular form does not apply to the spin operators. Here was my confusion, which probably contains an elementary mistake, but one that escapes my notice: in the usual form, you take the standard deviations of each operator independently, and only then multiply them. If you did this with the spins, each standard deviation would give exactly h-bar/2 (expected value zero, and possible values +/- h-bar/2), and the product would be h-bar^2/4, which doesn't seem to match the usual HUP. So I did not see how to apply the regular form (stated for the continuous case) to a discrete case. I would welcome your correction to my reasoning. (Or anyone else who reads this.)
 
  • #11
Remember that the HUP is an inequality, and not an equality. The HUP only gives you a minimum possible uncertainty, and this minimum is not always realizable.

I suppose you can think of the spin as a vector, like angular momentum, that points in an uncertain direction. When you try to measure the z-component of this vector, you get a certain standard deviation. When you try to measure the x or y component of this vector for an identically prepared particle, you get a certain other standard deviation. The product of these will always be greater than or equal to a certain value as specified by the HUP.
 
  • #12
Matterwave: again, thank you for helping me. If I may tax your patience a little more: if I understand correctly, your reply is essentially the same as SpectraCat's. The fact that the relation is an inequality is clear in the continuous case, but it is exactly what I find puzzling when applied to a discrete case, as I outlined in my last reply to Fredrik.
 
  • #13
Matterwave said:
Remember that the HUP is an inequality, and not an equality. The HUP only gives you a minimum possible uncertainty, and this minimum is not always realizable.

Just to be completely clear for the OP's sake, the HUP says nothing explicit about the uncertainty of a single measurement. Any single measurement of any observable, commuting or non-commuting, will result in an eigenstate of that observable, and (theoretically) can be made with arbitrary precision. The HUP says there is a lower limit on the width of the distribution you will observe from multiple measurements.
 
  • #14
nomadreid said:
Secondly, Fredrik: yes, the expressions of the regular form of the HUP are the same in Matterwave's and your versions, but I am more interested in what Matterwave called the generalized form, which I wrote as (**) in my post.
I consider that inequality an intermediate step in the derivation of the usual (generalized) uncertainty relation. What remains to be done there is simply to substitute A and B for A-<A> and B-<B> on both sides, and then take the square root of both sides. (I did that in my derivation).

The interpretation of <A^2> is easier than the interpretation of ΔA. A^2 can be interpreted as a measuring device that first measures A to get a number a, and then presents a^2 as the result of the measurement. And <A^2> is the average of many such results.

nomadreid said:
...you take the standard deviations of each operator, independently, and only then multiply them. If you did this with the spins, each standard deviation would give exactly h-bar/2 (expected value zero, and possible values +/- h-bar/2),
Now you're talking about standard deviations rather than expectation values of squared operators (which is what that "intermediate" inequality is about), so I assume that you're talking about the (generalized) uncertainty relation in its standard form. ("Generalized" just means that the operators don't have to be x and p).

You seem to be forgetting that all expectation values are computed using the same state. If the state is an eigenstate of Sz, the result of a Sz measurement is 1/2 every time. (I'm setting [itex]\hbar=1[/itex]). So <Sz >=1/2 and

[tex]\Delta S_z=\sqrt{\langle(S_z-\langle S_z\rangle)^2\rangle}=\sqrt{\langle(S_z-\frac 1 2)^2\rangle}=0[/tex]

Recall what I said about the interpretation of (A-<A>)^2 in my previous post. If you understand that, you'll understand that when A=Sz, its expectation in an eigenstate of Sz is 0.
 
  • #15
Fredrik: Thanks for the continued help. I have two main points:

First ,a question about the following:
"If the state is an eigenstate of Sz, the result of a Sz measurement is 1/2 every time."

I thought that Sz (talking about an electron) had two eigenvalues, [tex]\pm\frac{1}{2}[/tex]
?

Now, correct me if I am wrong, but I think there is a problem with notation. When you wrote
"What remains to be done there is simply to substitute A and B for A-<A> and B-<B> on both sides,"
it would have been clearer if different variables had been used. That is, something like
"What remains to be done there is simply to substitute C and D for A-<A> and B-<B>, respectively on both sides,"However, I still am a little confused with your notation.
(By the way, how do you quote to get the quotes in those nice boxes? I haven't used the Forums enough, apparently.)

Starting with your
"<A> is the average result that we get if we measure A many times on systems that are all prepared in the same state |psi> . (A-<A>)^2 can be thought of as a mathematical representation of a measurement device that first measures A to get an intermediate result a, then calculates (a-<A>)^2, and presents that as the result of the measurement. <(A-<A>)^2> is the average result of a large number of measurements performed by such a device, on systems that are all prepared in the same state |psi> as before."

[Hey, how come the latex codes didn't translate when I cut and paste? I will be a little lazy and write things out. ]

In what way does this definition differ from the usual definition of standard deviation?

Going on to
"<A> is the average result that we get if we measure A many times on systems that are all prepared in the same state psi "

This looks like the definition of the expected value.

Up to this point, the notation mix-up is not responsible for the confusion. However, perhaps it is responsible for the following?

The above seems to contradict the following, when you say
"So <Sz >=1/2 and

Delta S_z=sqrt{<(S_z-<S_z>)^2>}=sqrt{<(S_z-1/2)^2>}=0"

and "when A=Sz, its expectation in an eigenstate of Sz is 0. "

These seem to say that the definition which I took to be the standard deviation applies after the substitution, whereas in stating it as the expected value, you are thinking of before the substitution?

Then "And the uncertainty Delta-A is defined to be equal to square root of that."

"the square root of that" being the definition that I took to be the standard deviation, and
since the uncertainties in the HUP is defined as the standard deviations, this would seem to confirm my original impression that this is the standard deviation (after the substitution), and the expected value (before the substitution). However, then I run into another notation confusion, since the expected value is written <C>, not Delta-C, which is reserved for the standard deviation. However, you are now saying that Delta S[tex]_{z}[/tex] = 0, so that if you are measuring S[tex]_{z}[/tex] and S[tex]_{x}[/tex], you would end up with [tex]\Delta[/tex]S[tex]_{z}[/tex][tex]\Delta[/tex]S[tex]_{x}[/tex] = 0, which is absurd.

(Sorry, superscripts rather than subscripts, but you get the idea, as I can't correct this with a simple edit.)

Where have I gone wrong in interpreting your uses of symbols for expected value and standard deviation?
 
Last edited:
  • #16
nomadreid said:
First ,a question about the following:
"If the state is an eigenstate of Sz, the result of a Sz measurement is 1/2 every time."

I thought that Sz (talking about an electron) had two eigenvalues, [tex]\pm\frac{1}{2}[/tex]
?
I don't have time to reply to the rest right now, but since you spotted an obvious error, I should at least address that. What I was thinking was "if the state is the positive eigenvalue eigenstate of Sz...", but it came out wrong.
 
  • #17
One should note that the "uncertainty" in the HUP always refers to a series of measurements, rather than one single measurement. The uncertainty principle is a consequence of the statistical (or probabilistic) interpretation of QM.

The uncertainty in one single measurement is limited by your measuring device but could, in principle, be zero. Say I measure the position of my particle and then measure the momentum of my particle. Both measurements can independently have (theoretically) zero uncertainty; however, the subsequent measurement of momentum has "erased" the first measurement of position. Although each measurement has zero uncertainty, I still don't know both values to better than the HUP allows, because both values simply do not exist simultaneously.

Niels Bohr really tried to figure out why my measurement of momentum would erase my measurement of position, and vice versa. This is one reason he hypothesized the Bohr microscope, I believe.
 
  • #18
nomadreid said:
Now, correct me if I am wrong, but I think there is a problem with notation. When you wrote
"What remains to be done there is simply to substitute A and B for A-<A> and B-<B> on both sides,"
it would have been clearer if different variables had been used. That is, something like
"What remains to be done there is simply to substitute C and D for A-<A> and B-<B>, respectively on both sides,"
We have proved that any two Hermitian operators A and B satisfy an inequality that can be put in the form f(A,B)≥0. If A and B are Hermitian, then so is any operator of the form A+cI or B+cI, where c is a real number and I is the identity operator, so we must also have f(A-<A>,B-<B>)≥0.

I think you're right that a lot of people would find it easier to understand a proof of f(X,Y)≥0 followed by the choice X=A-<A>, Y=B-<B>. Sometimes I forget that people who are just learning this stuff for the first time lack mathematical maturity. (Yes, I think it's mostly a matter of mathematical maturity. I wouldn't expect a graduate student to get confused by what I said).

nomadreid said:
(By the way, how do you quote to get the quotes in those nice boxes? I haven't used the Forums enough, apparently.)
Use quote tags. Click the quote button next to the post you'd like to reply to and you'll see what they look like. (When you do, please delete everything but the stuff that you're actually replying to. It's annoying to see people quote a full-page post in its entirety and then type a one-sentence reply). You can type quote tags manually, but if you use the ones created when you click the quote button, you also get that little link back to the post you're quoting.

When I wrote this, I copied the line [noparse][quote=nomadreid][/noparse] and pasted it every time I wanted a new quote box, and ended the quotes by typing [noparse][/quote][/noparse] manually.

nomadreid said:
In what does this definition differ from the usual definition of standard deviation?
This is the usual definition...of the quantum mechanical version of "variance". Standard deviation is the square root of the variance, both in QM and statistics.

nomadreid said:
This looks like the definition of the expected value.
It is. Well, technically, I'd say that the definition of the expectation value of [itex]A[/itex] in the state [itex]|\psi\rangle[/itex] is

[tex]\langle A\rangle=\langle\psi|A|\psi\rangle[/tex]

and that the reason why we define it this way is that the right-hand side is equal to

[tex]\sum_a |\langle a|\psi\rangle|^2 a[/tex]

which is the average result of a large number of measurements.
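(A quick numerical check of that equality, a sketch of my own rather than part of the post, with A = Sx, hbar = 1 and a random normalized spinor, compares ⟨ψ|A|ψ⟩ with the sum over eigenvalues:)

[code]
# Sketch (assumptions: hbar = 1, A = Sx, random normalized state).
# Verifies <psi|A|psi> = sum_a |<a|psi>|^2 * a over the eigenvalues a of A.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
rng = np.random.default_rng(2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

direct = np.vdot(psi, sx @ psi).real        # <psi|A|psi>
vals, vecs = np.linalg.eigh(sx)
spectral = sum(abs(np.vdot(v, psi))**2 * a for a, v in zip(vals, vecs.T))
print(direct, spectral)                     # the two numbers agree
[/code]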

nomadreid said:
The above seems to contradict the following, when you say
"So <Sz >=1/2 and

Delta S_z=sqrt{<(S_z-<S_z>)^2>}=sqrt{<(S_z-1/2)^2>}=0"

and "when A=Sz, its expectation in an eigenstate of Sz is 0. "
Your quoting technique has hidden the fact that the word "its" in the last sentence refers to (A-<A>)^2. The next problem is the same mistake (on my part) that you spotted before. It shouldn't be "an eigenstate", it should be "the positive value eigenstate". So what I was trying to say is that when [itex]|\psi\rangle=|\uparrow\rangle[/itex],

[tex](\Delta S_z)^2=\langle(S_z-\langle S_z\rangle)^2\rangle=\langle(S_z-\frac 1 2)^2\rangle=0[/tex]

nomadreid said:
However, you are now saying that [itex]\Delta S_z[/itex] = 0, so that if you are measuring [itex]S_z[/itex] and [itex]S_x[/itex], you would end up with [itex]\Delta S_z \Delta S_x[/itex] = 0, which is absurd.
I wanted you to think that, and then realize that it isn't absurd if the right-hand side is 0 too. You only got the first part of that right. :wink: (I realize of course that my mistake of saying "an eigenstate" when I specifically meant [itex]|\uparrow\rangle[/itex] contributed to that). If you want a non-trivial inequality (i.e. not 0≥0), the state can't be an eigenstate of one of the operators.
 
Last edited:
  • #19
Fredrik said:
I wanted you to think that, and then realize that it isn't absurd if the right-hand side is 0 too. You only got the first half of that right. :smile: (I realize of course that my mistake of saying "an eigenstate" when I specifically meant [itex]|\uparrow\rangle[/itex] contributed to that). If you want a non-trivial inequality (i.e. not 0≥0), the state can't be an eigenstate.

To make this point more clear, derive the HUP for spin in a spin eigenstate. You will find that since the commutation relations are:

[tex][S_z, S_x]=i\hbar S_y[/tex]

Plugging into the generalized uncertainty principle, you get that:

[tex]\sigma_z\sigma_x \geq \frac{\hbar}{2}|\langle S_y\rangle|[/tex] and for an eigenstate of [tex]S_z[/tex], the expectation value of [tex]S_y[/tex] is zero (check for yourself by finding the expectation value of the Pauli spin matrix σ_y for an eigenstate of Sz).
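A quick way to do that check numerically (a sketch of mine, not from the post, with hbar = 1): in the Sz = +1/2 eigenstate both sides of the spin uncertainty relation vanish, so it reduces to the trivial 0 ≥ 0.

[code]
# Sketch (assumptions: hbar = 1, state = Sz "up" eigenstate).
# Shows <Sy> = 0 and Delta Sz * Delta Sx = 0, so the bound is the trivial 0 >= 0.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
up = np.array([1.0, 0.0])                   # |up>

def expval(op, state):
    return np.vdot(state, op @ state).real

def sigma(op, state):
    return np.sqrt(expval(op @ op, state) - expval(op, state) ** 2)

print("<Sy>                =", expval(sy, up))                   # 0.0
print("Delta Sz * Delta Sx =", sigma(sz, up) * sigma(sx, up))    # 0.0
print("(1/2)|<[Sz,Sx]>|    =", 0.5 * abs(np.vdot(up, (sz @ sx - sx @ sz) @ up)))  # 0.0
[/code]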
 
Last edited:
  • #20
OK, Fredrik and Matterwave, things are starting to look a lot clearer after these latest explanations. I am really very grateful to both of you for that, as well as to SpectraCat earlier on in this thread. Now I can go back and keep working on further details in QM, which of course will lead me to further difficulties (more's the fun!), so you will see me again in this section (with neater posts, thanks to Fredrik's indications). For now, thanks a million!
 

What is the Heisenberg uncertainty principle (HUP) for spins?

The Heisenberg uncertainty principle (HUP) for spins is a fundamental principle in quantum mechanics that states that it is impossible to know, simultaneously and with arbitrary precision, the components of a particle's spin along two different directions (for example Sz and Sx). The more precisely one spin component is determined, the larger the spread in measured values of an orthogonal component, and vice versa.

How is the HUP for spins different from the HUP for position and momentum?

The HUP for spins is different from the HUP for position and momentum in that it applies to a particle's intrinsic angular momentum (spin) rather than to its position and momentum. It involves the uncertainties in measuring the spin along different directions, rather than the uncertainties in measuring position and momentum along a given direction.

What is the equivalent of HUP for spins in terms of spin operators?

The equivalent of the HUP for spins can be expressed in terms of the spin operators. It states that the product of the uncertainties in measuring the spin along two different directions is always greater than or equal to half the absolute value of the expectation value of the commutator of the spin operators along those two directions. This is known as the uncertainty relation for spin operators.

How does the HUP for spins affect measurements of spin particles?

The HUP for spins has important implications for measurements of spin particles. It means that a particle cannot have definite values for all of its spin components at once, so there will always be some spread in the results when non-commuting spin components are measured on identically prepared particles. In other words, certain pairs of spin components cannot be simultaneously determined with arbitrary precision.

Can the HUP for spins be violated?

No, the HUP for spins, like the HUP for position and momentum, is a fundamental principle of quantum mechanics and cannot be violated. It is a consequence of the non-commutativity of the spin operators and is supported by experimental evidence. Attempts to violate the HUP for spins have not been successful.
