wm said:
Jesse, thanks for this; I like it very much. Also: excuse my jumping in and out at the moment; I'm just grabbing bits of time, hopefully to move us ahead. That jumping about should stop once we have agreement re my equations.
I think they are going to be just fine, BUT could you tell me how you want the last correct line (whatever you deem that to be) to be written?
I ask because, going on what's above, it looks like you would like an x in three places? But would that be satisfactory?
Actually, one minor physical issue occurred to me--you have the vectors as 3-vectors, but if you want to mimic the type of spin measurements made in QM, they should really be 2-vectors. This is because, when you measure spin using Stern-Gerlach magnets, the long axis of the magnet has to be aligned parallel to the particle's path, so you only have the freedom to rotate the magnets around this axis by some angle (this is why in discussions of Bell's theorem people often talk about each experimenter choosing 'an angle'--if they had 3 degrees of freedom, they would each have to select 2 distinct angles instead).
So, I'd amend your (5) to look like this:

(5) = - \left\langle \left[ (a_x \;\; a_y) \left( \begin{array}{c} s_x \\ s_y \end{array} \right) \right] \times \left[ (s_x \;\; s_y) \left( \begin{array}{c} b'_x \\ b'_y \end{array} \right) \right] \right\rangle

Of course, since the two quantities in brackets give scalars, strictly speaking the middle \times could be replaced by a *, but leaving it as matrix multiplication makes it easier to go to step (6):

(6) = - (a_x \;\; a_y) \left\langle \left( \begin{array}{c} s_x \\ s_y \end{array} \right) (s_x \;\; s_y) \right\rangle \left( \begin{array}{c} b'_x \\ b'_y \end{array} \right)
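The regrouping from (5) to (6) -- two scalar dot products versus a row vector times an outer-product matrix times a column vector -- is just associativity of matrix multiplication, and can be sanity-checked numerically. Here's a minimal NumPy sketch; the angles are arbitrary example values:

```python
import numpy as np

def unit(theta):
    """Unit 2-vector at angle theta."""
    return np.array([np.cos(theta), np.sin(theta)])

a, s, b = unit(0.3), unit(1.1), unit(2.0)  # arbitrary example angles

# (5)-style grouping: product of two scalar dot products, (a.s)*(s.b')
v5 = (a @ s) * (s @ b)

# (6)-style grouping: a^T (s s^T) b', with the outer product in the middle
v6 = a @ np.outer(s, s) @ b

assert np.isclose(v5, v6)
```

The two groupings agree for any choice of angles, which is what licenses pulling the fixed vectors a and b' outside the ensemble average in (6).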
That's the last step in your proof I'd agree with.
wm said:
The reason that I want this correction is that all I did in my own work was to see that the matrix that resides in the middle (after all correct mathematical processes) is just the equivalent of a unit matrix, obtained by taking the ensemble average inside the matrix and evaluating each element's ensemble average.
The 2x2 matrix in the center actually does not work out to be a unit matrix. One thing to note is that if s is an individual unit vector, then while it's true that (s_x \;\; s_y) \left( \begin{array}{c} s_x \\ s_y \end{array} \right), i.e. the dot product of s with itself, is always 1, it's not true that the 2x2 matrix \left( \begin{array}{c} s_x \\ s_y \end{array} \right) (s_x \;\; s_y) is always a unit matrix; for example, if s_x = 0.5 and s_y = 0.866, the matrix works out to be:
\left( \begin{array}{cc} (0.5)*(0.5) & (0.5)*(0.866) \\
(0.866)*(0.5) & (0.866)*(0.866) \end{array} \right)

or:

\left( \begin{array}{cc} 0.25 & 0.433 \\
0.433 & 0.75 \end{array} \right)
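As a quick check of that example, the outer product can be computed directly with NumPy (a small sketch; 0.5 and 0.866 are just cos(60°) and sin(60°) rounded):

```python
import numpy as np

# Outer product of the unit vector (0.5, 0.866) with itself
s = np.array([0.5, 0.866])
M = np.outer(s, s)
print(M)
# Diagonal entries are 0.25 and ~0.75, off-diagonals ~0.433 -- not the identity
assert not np.allclose(M, np.eye(2))
```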
However, you'd probably point out that we are interested in the average expectation value of this matrix when s is allowed to take any angle from 0 to 2pi. We know that if the angle of s is \theta, then s_x = \cos(\theta ) and s_y = \sin(\theta ). So, the matrix would be:

\left( \begin{array}{cc} \cos^2(\theta ) & \cos(\theta )\sin(\theta ) \\
\sin(\theta )\cos(\theta ) & \sin^2(\theta ) \end{array} \right)
For each of these four components, to find the expectation value we must integrate them from 0 to 2pi, then multiply the result by (1/2pi)...see the end of my previous post for an explanation of why the expectation value of a function based on an arbitrary angle would be calculated in this way.
Using the integrator, we have:
\int \cos^2(\theta ) \, d\theta = \frac{1}{2}(\theta + \cos(\theta )\sin(\theta ))

\int \sin(\theta )\cos(\theta ) \, d\theta = -\frac{1}{2}\cos^2(\theta )

\int \sin^2(\theta ) \, d\theta = \frac{1}{2}(\theta - \cos(\theta )\sin(\theta ))
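Those three antiderivatives are easy to double-check symbolically: differentiating each one should recover the corresponding integrand. A short SymPy sketch:

```python
import sympy as sp

theta = sp.symbols('theta')

# (integrand, claimed antiderivative) pairs from the integrals above
checks = [
    (sp.cos(theta)**2,            sp.Rational(1, 2)*(theta + sp.cos(theta)*sp.sin(theta))),
    (sp.sin(theta)*sp.cos(theta), -sp.Rational(1, 2)*sp.cos(theta)**2),
    (sp.sin(theta)**2,            sp.Rational(1, 2)*(theta - sp.cos(theta)*sp.sin(theta))),
]

# d/dtheta of each antiderivative minus the integrand should simplify to 0
for integrand, antideriv in checks:
    assert sp.simplify(sp.diff(antideriv, theta) - integrand) == 0
```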
So, taking each function f(\theta ) and plugging in the limits of integration f(2pi) - f(0), the expectation value for the matrix is:
\frac{1}{2\pi} \left( \begin{array}{cc} \pi & 0 \\
0 & \pi \end{array} \right)

or:

\left( \begin{array}{cc} \frac{1}{2} & 0 \\
0 & \frac{1}{2} \end{array} \right)
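The same (1/2) times the identity matrix also falls out of a direct numerical average of the outer product over \theta, without doing the integrals by hand; here's a small NumPy sketch:

```python
import numpy as np

# Average of the outer product s s^T over theta uniformly in [0, 2*pi)
thetas = np.linspace(0.0, 2*np.pi, 100_000, endpoint=False)
s = np.stack([np.cos(thetas), np.sin(thetas)])  # shape (2, N), one column per sample

avg = (s @ s.T) / thetas.size  # sum of outer products, divided by sample count
print(avg)

assert np.allclose(avg, 0.5*np.eye(2), atol=1e-6)
```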
So, it looks like this will end up just being another way of proving that -\langle (a \cdot s)(s \cdot b') \rangle is equal to -(1/2)*\cos(a - b), which I had proved earlier by just doing one big integral.
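Putting the pieces together, that final result can be verified numerically for arbitrary detector angles (a sketch; the angles 0.7 and 2.1 below are just example values, and the average over s plays the role of the ensemble average):

```python
import numpy as np

def correlation(alpha, beta, n=200_000):
    """Numerically average -(a.s)(s.b') over the angle of the unit vector s."""
    thetas = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    a = np.array([np.cos(alpha), np.sin(alpha)])
    b = np.array([np.cos(beta), np.sin(beta)])
    s = np.stack([np.cos(thetas), np.sin(thetas)])  # shape (2, n)
    return -np.mean((a @ s) * (b @ s))

alpha, beta = 0.7, 2.1  # arbitrary detector angles

assert np.isclose(correlation(alpha, beta), -0.5*np.cos(alpha - beta), atol=1e-6)
```

This matches the matrix route above: the average of the middle outer-product matrix is (1/2) times the identity, so the whole expression collapses to -(1/2) a.b' = -(1/2)cos(a - b).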