Questions about Bell: Answering Philosophical Difficulties

  • Thread starter: krimianl99
  • Tags: Bell
  • #51
Well, sure, but what does that have to do with not allowing Alice and Bob the opportunity to select the angles for A, B, & C independently?
Are you saying A, B, & C must all be the same (0 or 90 degree shifts) as the three functions used by the other observer?
Don’t you think that predetermining the types of tests they are allowed to use, down to a set of three identical functions, eliminates an important element of independence between Alice and Bob?

From such a starting point it seems more like a gimmick designed to produce (I’m sure not intentionally) an expected or wanted result than a rational evaluation of all the independent variables possible in the problem. I really don’t see where it is any better than the von Neumann proof.
 
  • #52
RandallB said:
Well sure what’s that got to do with not allowing the opportunity for Alice and Bob to select the angle for A, B, & C independently?
They do select them independently. Say on each trial Alice has a 1/3 chance of choosing A, a 1/3 chance of choosing B, and a 1/3 chance of choosing C, and the same goes for Bob. Then on some trials they will make different choices, like Alice-B and Bob-C. But on other trials they will happen to make the same choice, like both choosing B. What I'm saying is that if we look at the subset of trials where they both happen to choose the same angle, they are 100% guaranteed to get opposite spins (or guaranteed to get the same color lighting up in vanesch's example--either way the correlation is perfect). Do you agree?
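
To make the setup concrete, here is a small simulation sketch (my own illustration in Python; the names SETTINGS and make_pair are made up): a local hidden-variable source hands each pair predetermined, opposite answers for A, B and C, and the subset of trials where Alice and Bob happen to pick the same setting then comes out perfectly (anti-)correlated, exactly as assumed above.

```python
import random

SETTINGS = ["A", "B", "C"]

def make_pair():
    # Hypothetical local hidden-variable source: each particle carries a
    # predetermined answer (+1 or -1) for every setting, and the two
    # particles' answers are opposite on every setting.
    alice_answers = {s: random.choice([+1, -1]) for s in SETTINGS}
    bob_answers = {s: -alice_answers[s] for s in SETTINGS}
    return alice_answers, bob_answers

same_setting = 0
opposite_results = 0
for _ in range(100_000):
    a_ans, b_ans = make_pair()
    a_choice = random.choice(SETTINGS)   # Alice picks A, B or C, 1/3 each
    b_choice = random.choice(SETTINGS)   # Bob picks independently
    if a_choice == b_choice:
        same_setting += 1
        opposite_results += (a_ans[a_choice] == -b_ans[b_choice])

print(opposite_results / same_setting)   # prints 1.0: perfect (anti-)correlation
```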
RandallB said:
Are you saying A, B, & C must all be the same (0 or 90 degree shifts) from the three functions used by the other observer?
I don't understand what you're asking here. A, B & C are three distinct angles, like 0, 60, 90 or something. When you say they "must all be the same", are you talking about the assumption that each of the two particles must have a predetermined response for how it will behave if it's measured on any of the three angles? If so, this is just something we must assume if we want to believe in local realism and still explain how the particles' responses are always perfectly correlated when the experimenters happen to pick the same angle.
RandallB said:
Don’t you think that eliminates an important element of independence between Alice and Bob to predetermine the types of tests they are allowed to use down to a set of three identical functions?
I'm not sure what you're asking here either, maybe if you clarify what you meant in the previous part it'll become more clear to me.
 
  • #53
Ian Davis said:
I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling. I've no idea what gluons would manifest themselves as under time reversal, but at a guess they'd provide the nuclear forces associated with anti-protons, etc. Time reversed gravitons at a guess would be good candidates to explain the force of dark energy because their exchange viewed from our perspective would be pulling things apart, while from the perspective of backwards time would have them behaving just like gravitons in creating an attraction between mass. All very lay reasoning, with no math to back it, but I've not encountered the notion that it is more unreasonable to imagine bosons moving backwards in time than fermions so wish to know more, the better to improve my understanding of what can be and what can't be.
All the known fundamental laws of physics are already either time-symmetric (invariant under time-reversal) or CPT-symmetric (invariant under a combination of time reversal, matter/antimatter charge reversal, and parity inversion). For a time-symmetric set of laws, what this means is that if you take a movie of a system obeying those laws and play it backwards, there will be no way for another physicist to know for sure that you are playing the movie backwards rather than forwards, since the system's behavior in the backwards movie is still obeying exactly the same laws (though the backwards movie may appear statistically unlikely if it shows entropy decreasing in an isolated system). This is true of gravitation, which is perfectly time-symmetric--a backwards movie of a gravitating system will not involve the appearance of any kind of "antigravity", despite what you might think. I discussed this in post #68 here:
Actually, gravity is time-symmetric, meaning the laws of gravity are unchanged under a time-reversal transformation--in physical terms, this means that if you look at a film of objects moving under the influence of gravity, there's no way (aside from changes in entropy) to determine if you're watching the film being played forwards or if it's being played backwards. The reason it seems asymmetric is because of entropy, like how a falling object will smack the ground and dissipate most of its kinetic energy as sound and heat--if a falling object had a perfectly elastic collision with the ground so that no kinetic energy was dissipated in this way, each time it hit the ground it would bounce back up to the same height as before, so this would look the same forwards as backwards (and the reversed version of the collision where kinetic energy is dissipated is not ruled out by the laws of physics, it's just statistically unlikely that waves of sound and the random jostling of molecules due to heat would converge to give a sudden push to an object that had previously been resting on the ground...if it did happen, though, it would look just like a reversed movie of an object falling to the ground and ending up resting there). Likewise, any situation where no collisions are involved, like orbits, will still be consistent with the laws of gravity when viewed in reverse.
The idea behind CPT-symmetry is basically similar--if you take a movie of a system obeying CPT-symmetric laws, then play it backwards and take the mirror image so that the +x direction is now labeled -x, the +y now labeled -y and the +z now labeled -z (parity inversion) and you reverse the labels of particles and antiparticles (so that electrons in the original movie are now labeled as positrons in the reversed movie, and vice versa), then the new altered movie will still appear to be obeying the exact same laws as in the unaltered version.
 
  • #54
Ian Davis said:
I am not sure if I understand what you mean by emphasising that the creation and destruction of bosons is problematic in the context of time reversal. I'd understand bosons to be absorbed and emitted (converted to/from energy absorbed or emitted by the fermion of the correct type), but I don't see how this understanding in any sense "screws up as an explanation in any case with bosons".

No, that's not what I wanted to say. I wanted to say that with fermions, one might "hate the idea" of having creation and annihilation, and then one can find an "explanation" for it, namely that fermions sometimes travel back in time. As such, one can then eliminate the need to consider "creation" and "annihilation".
But even allowing for "traveling back in time", one cannot eliminate the need to consider "creation" and "annihilation" of bosons. So if you ANYHOW have to consider creation and annihilation for bosons (which is exactly what you wanted to avoid by adopting the "back in time" explanation), then you can just as well accept it for fermions, and any NEED to consider back-in-time propagation vanishes, as its explanatory power (its possibility of doing away with creation and annihilation) was in any case not working for bosons.

In other words, the assumption that particles go back in time is never needed, as it doesn't explain anything. And we can explain everything in QFT with particles going forward in time, and considering creation/annihilation.

I had thought that photons were particularly good candidates to consider as being time reversed because under time reversal none of their fundamental characteristics change, so logically we can't even suggest which way in time a photon is travelling.

This is correct, so we can just as well take it that it goes forward, no ? It will not be possible to DEMONSTRATE that it goes backward in time, and that it CAN'T be seen as traveling forward in time. And this brings us back to the original article: something that complies with actual theory can never PROVE that it went back in time !
 
  • #55
RandallB said:
Sorry, now you have me totally confused, and rereading your proof and prior posts is of no help. Given this current statement I am at a loss to understand what the purpose of the D-function was in your prior posts.

The whole idea of Bell's proof is that whether the red or the green light lights up at the Alice box is given by a probability that is determined by the "local inputs", which are two-fold: an input that comes from the "central box", and the button that Alice pushes.

That is, GIVEN these inputs, so given the message from the central box, and the choice of Alice, this gives us a probability for there to be "red" as a result (and hence, the complementary probability to have "green" of course).

Now, this can be a genuine probability, like, say, 0.6, or it can be a certainty, which comes down to the probability to be 0 (green for sure) or 1 (red for sure). We leave this open.

So GIVEN the message from the central box (lambda1 if you want), and GIVEN the choice by Alice (X, which is A, B or C), we have a function, which is P(X,lambda1), and gives us that famous probability.

We can hold the same reasoning at Bob's, where the function will be Q(Y,lambda2).

Now, D is the expectation value of the correlation function of Alice's and Bob's outcomes, when they have picked respectively X and Y, and when the message lambda1 was sent to Alice, and the message lambda2 was sent to Bob.

D is nothing else but the probability to have (red,red) times +1 plus the probability to have (green,green) times +1 plus the probability to have (red,green) times -1 plus the probability to have (green,red) times -1, under the assumption that Alice pushed X, that Bob pushed Y, that lambda1 was sent to Alice, and that lambda2 was sent to Bob.

As we assume that the "drawing" is done locally (all "common information" is already taken care of by the messages lambda1 and lambda2, so we only look at the REMAINING uncertainties), we can assume that the probability to have, say, (red,red) is given by:

P(X,lambda1) x Q(Y,lambda2).

The probability to have, red-green is given by:
P(X,lambda1) x (1 - Q(Y,lambda2) )

etc...

And from this, we can calculate the above D function (the expectation over the remaining probabilities, given X, Y, lambda1 and lambda2) and we find:

D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1) ) x (1 - 2 x Q(Y, lambda2))
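
As a quick check on that algebra, here is a small symbolic sketch of my own (assuming nothing beyond the four-term definition of D above):

```python
from sympy import symbols, simplify

P, Q = symbols("P Q")   # P(X,lambda1) and Q(Y,lambda2), each between 0 and 1

# D = (+1) P Q  +  (+1)(1-P)(1-Q)  +  (-1) P (1-Q)  +  (-1)(1-P) Q
D = P*Q + (1 - P)*(1 - Q) - P*(1 - Q) - (1 - P)*Q

print(simplify(D - (1 - 2*P)*(1 - 2*Q)))   # prints 0: the two forms are identical
```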

Now there is a triviality, which seems to be confusing you, which I applied:
we can define a new mathematical structure: lambda = { lambda1, lambda2 }. If lambda1 is a real number, and lambda2 is a real number, then lambda can be seen as a 2-dim vector. If lambda1 is a text file, and lambda2 is a text file, then lambda can be seen as the concatenation of the two text files. It is just NOTATION.

Now, if in all generality, you have a function f(x), you can ALWAYS define a function g(x,y) which is equal to f(x) for all values of y, of course.
So if P(X,lambda1) is a function of lambda1, you can ADD lambda2 as an argument, which doesn't do anything: P'(X,lambda1,lambda2) = P(X,lambda1).
Same for Q, we can define Q'(Y,lambda1,lambda2) = Q(Y, lambda2).

But we have the "vector" notation lambda which stands for {lambda1, lambda2}, so we can write P'(X,lambda) and Q'(Y,lambda). They just have a "useless" argument more, but they are the same function, just as g(x,y) is in fact just f(x), and y doesn't play a role. But if this confuses you, I will continue to write lambda1, lambda2.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P'(X,lambda1,lambda2) ) x (1 - 2 x Q'(Y, lambda1,lambda2))

And we can drop the ', and call P', simply P, and Q' simply Q.

So we can write:
D(X,Y, lambda1, lambda2) = ( 1 - 2 x P(X,lambda1,lambda2) ) x (1 - 2 x Q(Y, lambda1,lambda2))

Ok, so D was the expectation value of the correlation, GIVEN the choice of Alice and Bob, and GIVEN the (hidden) messages sent from the central box.

It is important to note that D is always a real number between -1 and +1. This comes from the fact that P and Q are probabilities, and hence between 0 and 1.

Now, we assume that those messages themselves are randomly sent out with a given probability distribution. That means, there's a certain probability Pc(lambda1,lambda2) to send out a specific couple of messages, namely {lambda1,lambda2}.

Given that Alice and Bob can't see that message, THEIR correlation function (for a given choice X and Y) will be the expectation value of D over this probability distribution of the couples (lambda1, lambda2), right ? Bob and Alice will "average" their correlation function over the messages.

So how does this work out ? Well, you of course have to sum each value of D(X,Y,lambda1,lambda2) multiplied by the probability that the messages sent out will be {lambda1,lambda2}. THIS will give you the correlation function that Bob and Alice will find when they picked X and Y; in other words, C(X,Y).

So we have that:

$$C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2)$$

This "sum" can be an integral over whatever is the set of the couples (lambda1,lambda2). It can be a huge set. In the case of text files, we have to sum over all thinkable couples of textfiles (but some might have probability Pc=0 of course). In the case of real numbers, we have to integrate over the plane. It doesn't matter.

The above expression is valid for the 9 different C(X,Y) values: for C(A,A), for C(A,B),...

But we KNOW certain C values: C(A,A) = 1 for instance. Does C(A,A) = 1 impose a condition on D or on Pc ?

Yes, it does. This is the whole point. Let us write out the above expression for the case C(A,A):

$$C(A,A) = 1 = \sum_{(\lambda_1,\lambda_2)} D(A,A,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2)$$

Now,
$$\sum_{(\lambda_1,\lambda_2)} P_c(\lambda_1,\lambda_2) = 1$$

because it is a probability distribution, all Pc values are between 0 and 1, and D(A,A,lambda1,lambda2) is a number between -1 and 1. Such a sum can only be equal to 1 if ALL D(A,A,lambda1,lambda2) values are equal to 1 (at least, for those lambda1 and lambda2 for which Pc is not equal to 0).

So we know that D(A,A,lambda1, lambda2) = 1 for all lambda1, and all lambda2.

But we also know that D(A,A,lambda1,lambda2) = ( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2))

So we have that:
( 1 - 2 x P(A,lambda1,lambda2) ) x (1 - 2 x Q(A, lambda1,lambda2)) = 1 for all lambda1, and lambda2.

Well, (1 - 2 x) (1 - 2 y), with x and y between 0 and 1, can only be equal to 1 in two different cases:

x = y = 1 OR

x = y = 0.
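
Spelled out, that step uses only the bounds already stated:

$$x, y \in [0,1] \;\Rightarrow\; (1-2x),\,(1-2y) \in [-1,1], \qquad (1-2x)(1-2y) = 1 \;\Rightarrow\; 1-2x = 1-2y = \pm 1 \;\Rightarrow\; x = y = 0 \ \text{ or } \ x = y = 1.$$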

This means that for each couple (lambda1, lambda2) we have only 2 possibilities:

EITHER
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1

OR
P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0

Of course, if you take one couple (lambda1, lambda2), it can be, say, 1, and if you take another couple, it can be 0, but in each case it is one of the two.

So this means we can split the whole set of (lambda1,lambda2) couples into two parts:
those couples that give P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 1 and then the other couples, which necessarily give: P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0.

Concerning P(A,lambda1,lambda2), we hence don't need to know precisely what lambda1 and lambda2 are (text files, numbers, ...), but just whether they fall in the first part or in the second, because in the first part, P(A,lambda1,lambda2) will be equal to 1, and in the second part, it will be 0. In ANY case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2).

So if we know in which of the two parts the couple (lambda1,lambda2) falls, we know enough about it to know the value of P(A,lambda1,lambda2) and Q(A,lambda1,lambda2). It is either 1 or 0. So the split of the set of couples (lambda1,lambda2) comes about because of the fact that we deduced that in any case, P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) can only take on 2 possible values.

Now, we apply the same reasoning to C(B,B) = 1 and then to C(C,C) = 1, and we will now have 3 "partitions" in two of the set of (lambda1,lambda2) couples. The first partition, as we showed, determines the value of P(A,lambda1,lambda2) = Q(A,lambda1,lambda2) = 0 or 1. The second partition will determine the value of P(B,lambda1,lambda2) = Q(B,lambda1,lambda2) = 0 or 1. And the last one will do so for P(C,lambda1,lambda2) = Q(C,lambda1,lambda2) = 0 or 1.

Now, if you apply 3 different partitions in 2 parts to any set, you will end up with at most 8 pieces. So our entire set of couples (lambda1,lambda2) is now cut in 8 pieces, and if we know in which piece a couple falls, we know what will be the results for the 6 functions:
P(A,lambda1,lambda2), P(B,lambda1,lambda2), P(C,lambda1,lambda2), Q(A,lambda1,lambda2), Q(B,lambda1,lambda2), Q(C,lambda1,lambda2).

Each of these functions is constant over each of the 8 different pieces of the set of (lambda1,lambda2) couples (either it is 1 or it is 0).

Now, if we know these 6 values, we know also the 9 values of
D(A,A,lambda1,lambda2), D(A,B,lambda1,lambda2), D(A,C,lambda1,lambda2) ...
D(C,C,lambda1,lambda2).

Each of these functions is CONSTANT over each of the 8 different pieces of our (lambda1,lambda2) set, because they depend on the P and Q functions which are constant. We can call these constant values D(X,Y,firstslice), D(X,Y,secondslice) ...
D(X,Y,8thslice)

Now, pick one of these, say, D(A,B,lambda1,lambda2). This function can only take on at most 8 different values, because we have only 8 different possibilities for P(A,lambda1,lambda2) and Q(B,lambda1,lambda2). But in fact it can take on only 4, because our 8 different possibilities included P(C,lambda1,lambda2), and this value doesn't enter into the calculation of D(A,B,lambda1,lambda2), so our 8 different "slices" will give the same result in pairs (namely, the two slices that differ only in P(C,lambda1,lambda2) will not change the value of D).

Now, if we go back to
$$C(X,Y) = \sum_{(\lambda_1,\lambda_2)} D(X,Y,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2)$$

split the sum over the entire set of couples (lambda1,lambda2) over the 8 different slices:

$$C(X,Y) = \sum_{(\lambda_1,\lambda_2) \in \text{first slice}} D(X,Y,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2) + \sum_{(\lambda_1,\lambda_2) \in \text{second slice}} D(X,Y,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2) + \dots + \sum_{(\lambda_1,\lambda_2) \in \text{8th slice}} D(X,Y,\lambda_1,\lambda_2)\, P_c(\lambda_1,\lambda_2)$$

But within the first slice, D is constant! And within the second slice, too...
So we can bring this outside:

$$C(X,Y) = D(X,Y,\text{first slice}) \sum_{(\lambda_1,\lambda_2) \in \text{first slice}} P_c(\lambda_1,\lambda_2) + D(X,Y,\text{second slice}) \sum_{(\lambda_1,\lambda_2) \in \text{second slice}} P_c(\lambda_1,\lambda_2) + \dots + D(X,Y,\text{8th slice}) \sum_{(\lambda_1,\lambda_2) \in \text{8th slice}} P_c(\lambda_1,\lambda_2)$$

And now the sums that remain, are nothing else but the sum of probabilities of each of the (lambda1,lambda2) couples in the first slice (which we call p1), of each of the (lambda1,lambda2) couples in the second slice (which we call p2), ...

So:
$$C(X,Y) = D(X,Y,\text{first slice})\, p_1 + D(X,Y,\text{second slice})\, p_2 + \dots + D(X,Y,\text{8th slice})\, p_8$$

But let us look a bit deeper into D(X,Y,firstslice). In the first slice, we have that P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 1 = Q(C,lambda1,lambda2)

So this means that D(X,Y,firstslice) = 1 for all X and Y !

Now in the second slice, we have that:
P(A,lambda1,lambda2) = 1 = Q(A,lambda1,lambda2) AND
P(B,lambda1,lambda2) = 1 = Q(B,lambda1,lambda2) AND
P(C,lambda1,lambda2) = 0 = Q(C,lambda1,lambda2)

So this means that D(A,B,secondslice) = 1, D(A,C,secondslice) = -1, ...

Etc,...

In fact, we will find that those famous constants are just 1 or -1, and we can calculate them (using D(X,Y) = (1 - 2 P(X)) (1 - 2 Q(Y))) in each slice. So there aren't even 4 possibilities for D, but only 2!

Given this, it means that we can calculate each of the 9 functions:
C(X,Y) as sums and differences of p1, p2, p3, ... p8.
But of course, we already know that C(A,A) = C(B,B) = C(C,C) = 1, because we imposed this. If you do the calculation (do it as an exercise!) you will find that each time, they come out to be p1 + p2 + ... + p8 = 1. That is because D(A,A,...) = 1 for all of the slices, D(B,B,...) = 1 for all of the slices, and D(C,C,...) = 1 for all of the slices, as we already deduced before.
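
For anyone who wants to do that exercise numerically, here is a small sketch of my own (Python; the names SETTINGS, slices, D and C just follow the notation above): it enumerates the 8 slices, computes the constant D value on each, and forms the 9 correlation functions for an arbitrary choice of p1...p8.

```python
from itertools import product
import random

SETTINGS = ["A", "B", "C"]

# The 8 "slices": each one fixes P(A)=Q(A), P(B)=Q(B), P(C)=Q(C) to 0 or 1.
slices = list(product([0, 1], repeat=3))      # e.g. (1, 1, 0)

def D(X, Y, sl):
    # D(X,Y,slice) = (1 - 2 P(X)) (1 - 2 Q(Y)), with P = Q constant on the slice
    val = dict(zip(SETTINGS, sl))
    return (1 - 2 * val[X]) * (1 - 2 * val[Y])

# An arbitrary probability distribution p1..p8 over the slices
weights = [random.random() for _ in slices]
total = sum(weights)
probs = [w / total for w in weights]

def C(X, Y):
    return sum(D(X, Y, sl) * p for sl, p in zip(slices, probs))

for X, Y in product(SETTINGS, repeat=2):
    print(f"C({X},{Y}) = {C(X, Y):+.3f}")
# C(A,A), C(B,B), C(C,C) come out as +1.000 no matter what the p's are,
# because D(A,A,slice) = D(B,B,slice) = D(C,C,slice) = 1 on every slice;
# the crossed correlations C(A,B), C(B,C), ... are signed sums of the p's.
```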
 
  • #56
Originally Posted by ThomasT
A perfect correlation between coincidental detection and angular difference would be described by a linear function, wouldn't it? The surprise is that the observed correlation function isn't linear, but rather just what one would expect if one were analyzing the same optical properties at both ends for any given pair of detection attributes.

vanesch said:
I have no idea why you think that the correlation function should be linear ?

Where did I say that I think it should be linear? I said that a perfect correlation would be linear. But I wouldn't expect that.

Originally Posted by ThomasT
(And it would seem to follow that if paired attributes have the same properties, then the emissions associated with them were spawned by a common event --eg. their interaction, or by tweaking each member of a pair in the same way, or, as in the Aspect et al experiments, their creation via an atomic transition.)

vanesch said:
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

I don't think that's what Bell's theorem actually analyses, or maybe I just don't understand what you're saying. Anyway, let's continue.

vanesch said:
It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.

Keep in mind that we're not correlating what happens at A with what happens at B. We're correlating angular difference with coincidental detection.

If you only plot coincidence rates corresponding to 0 and 90 degree angular difference, then connect the dots, then you get a straight line, don't you? What does that tell you? It doesn't tell me much of anything necessarily.

vanesch said:
The experimental optical implementation, with approximate sources and detectors, is only a very approximative approach to the very simple quantum-mechanical question of what happens to entangled pairs of particles, which are in the quantum state:

|up>|down> - |down> |up>

In the optical experiment, we are confronted with the fact that our source of entangled photons is emitting randomly, in time and in different directions, of which we can only capture a small fraction, and for which we don't have a priori timing information. But that is not a limitation of principle, it is a limitation of practicality in the lab.

So we use time coincidence as a way to ascertain that we have a high probability of dealing with "two parts of an entangled pair". We also have the limited detection efficiency of the photon detectors, which means the detectors don't trigger every time they receive a member of an entangled pair. But we can have a certain sample of pairs of which we are pretty sure that they ARE from entangled pairs, as they show perfect (anti-)correlation, which would be unexplainable if they were of different origin.

It would be simpler, and it is in principle entirely possible, to SEND OUT entangled pairs of particles ON COMMAND, but we simply don't know how to make such an efficient source.
It would also be simpler if we had 100% efficient particle detectors. In that case, our experimental setup would more closely resemble the "black box" machine of Bell.

I take it, although I'm not sure, that you don't agree with:
There seem to be at least two assumptions made by the quantum model builders. (1) The paired detection attributes had a common (emission) cause; i.e., paired detection attributes are associated with filtration events associated with, e.g., optical disturbances that emerged from, e.g., the same atomic transition. (2) The filters are analyzing/filtering the same property or properties of the commonly caused incident disturbances -- the precise physical nature of these incident disturbances and their properties being necessarily unknown (i.e., unknowable).

So, I'll ask you again:
Don't you think these assumptions, (1) and (2) above, are part (vis a vis classical optics) of the quantum mechanical approach?

Originally Posted by ThomasT
The relations are between the rate of coincidental detection and the angular difference of the settings of the polarizers, aren't they? What is so surprising about this relationship when viewed from the perspective of classical optics?

vanesch said:
Well, how do you explain perfect anti-correlation purely on the grounds of classical optics? If you have, say, an incident pulse on both sides with identical polarisation, which happens to be 45 degrees wrt the (identical) polariser axes at Bob and Alice, which would normally give Alice a 50% chance to see "up" and a 50% chance to see "down", and Bob too, how come they find each time the SAME outcome ? That is, how come that when Alice sees "up" (remember, with 50% chance), Bob ALSO sees "up", and if Alice sees "down" Bob also sees "down" ? You'd expect that you would have a total lack of correlation in this case, no ?

Now, of course, the source won't always send out pairs which are 45 degrees away from Alice's and Bob's axis, but sometimes it will. So how come that we find perfect correlation ?

I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference. That wouldn't mean anything. The rate is always (in the ideal) a certain number associated with a certain angular difference. In ascertaining the correlation between angular dependence and coincidence rate you would want to plot as many rates with respect to different angular differences as you could.

If you know (have produced) the polarization of the incident light, then you can use a classical treatment, can't you? The problem is that we don't know anything about the incident pulses. Quantum theory makes two assumptions: (1) they had a common source, and (2) they are, in effect, the same thing.

Anyway, I was talking about viewing the relationship between angular dependence and coincidence rate from the perspective of classical optics -- not actually calculating the results using classical optics.

Originally Posted by ThomasT
That is, the angular dependency is just what one would expect if A and B are analyzing essentially the same thing with regard to a given pairing. And, isn't it only logical to assume that that sameness was produced at emission because the opposite moving optical disturbances were emitted by the same atom?

vanesch said:
No, this is exactly what Bell analysed ! This would of course have been the straightforward explanation, but it doesn't work.

I think you're wrong about this, because it happens to be exactly what the developers of quantum theory did assume. However, in order to do accurate calculations and develop a consistent mathematical framework for the theory it was necessary to leave out certain details (about polarization for example) that were part of the classical theory, but which led to calculational problems when applied to quantum experimental phenomena. One simply can't say anything about the angle of polarization of the light incident on the polarizers-analyzers.

In place of all the metaphysical stuff of classical physics we have the quantum superposition of states (which doesn't pretend to be anything other than a mathematical contrivance).
 
  • #57
So what exactly is the difference between determinism and superdeterminism?
 
  • #58
ThomasT said:
So what exactly is the difference between determinism and superdeterminism?
I wrote something about this in post #29 of this thread:
From reading the wikipedia article I get the impression that superdeterminism is basically the same as the notion of a "conspiracy" in the initial conditions of the universe, which ensures that the hidden-variables state in which two particles are created will always be correlated with the "choice" of measurements that the experiments decide to make on them. So, for example, in any trial where the experimenters were predetermined to measure the same spin axis, the particles would always be created with opposite spin states on that axis, but in trials where the experimenters were not predetermined to measure the same spin axis, the hidden spin states of the two particles on any given axis would not necessarily be opposite.

Since in a deterministic universe the state of an experimenter's brain which determines his "choice" of what to measure on a given trial can be influenced by a host of factors in his past which have nothing to do with the creation of the particle (what he had for lunch that day, for example), the only way for such correlations to exist would be to pick very special initial conditions of the universe--the correlations would not be explained by the laws of physics alone (unless this constraint on the initial conditions is itself somehow demanded by the laws of physics).
 
  • #59
ThomasT said:
After reading a quote (from a BBC interview) of Bell about superdeterminism, I still don't understand the difference between superdeterminism and determinism. From what Bell said they seem to be essentially the same.

Is it that experimental violations of Bell inequalities show that the spatially separated data streams are statistically dependent?

Or, is it that there is a statistical dependence between coincidental detections and associated angular differences between polarizers (that's the only way that I've seen the correlations mapped)?
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?
 
  • #60
JesseM said:
My understanding is that the "statistical independence" that is violated in superdeterminism is the independence of each particle's state prior to being measured (including the state of any 'hidden variables' associated with the particle) from each experimenter's choice of what angle to set their detector when measuring the particle. In other words, whatever it is that determines the particle's state, it must act as if it does not "know in advance" how the experimenter is going to choose to measure it. If there is a spacelike separation between the two experimenters' measurements, then if we find that the particles always give opposite results whenever the experimenters both happen to choose the same angle, the only way to explain this in a local realist universe is if both particles had predetermined answers to what result they'd give when measured on that angle, and both were assigned opposite predetermined answers when they were created at a common location. But if nature acts as if it doesn't "know in advance" what angles the experimenters will choose, then the only conclusion for a local realist must be that on every trial the particles are assigned predetermined (opposite) answers for what result they'll give when measured on any possible choice of angle.

Do you disagree with any of this?

No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism. So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
 
  • #61
ThomasT said:
No, I don't disagree. But I still wouldn't be able to answer the question: what is the definition of superdeterminism. So, I don't agree either. :smile:

Thanks for the effort. I was looking for something a bit shorter. Is there a clear, straightforward definition for the term or isn't there?
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
 
  • #62
JesseM said:
I wrote something about this in post #29 of this thread:
Thanks, that thread was most helpful. My take on it is that the ideas of superdeterminism and determinism, for the purpose of ascertaining the meaning of Bell's theorem, are essentially synonymous, and, more importantly, unnecessary.
 
  • #63
JesseM said:
I wrote something about this in post #29 of this thread:

JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Put it in general form.
 
  • #64
ThomasT said:
Put it in general form.
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
 
  • #65
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
 
Last edited:
  • #66
JesseM said:
What do you mean by "general form"? If you mean a form that doesn't specifically discuss detector settings of experimenters, I don't think that's possible, the central point of what is meant by the term "superdeterminism" seems to be that experimenters can't treat their choices of measurements as random, that nature can "anticipate" what choice they will make and alter the prior states of the system being measured accordingly.
Random is defined at the instrumental level, isn't it? That being so, then the polarizer settings are random. But, the coincidence rates aren't random.

I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
These are the assumptions that quantum theory makes, and this is as far as it can go in talking about what is happening independent of observation. These assumptions come from the perspective of classical optics, and from these assumptions (and appropriate experimental designs) we would expect to see the observed angular dependency.

So, I don't think I need superdeterminism to avoid nonlocality.
 
  • #67
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?
I think that's all Bell meant by superdeterminism (see here and here), although different authors might not mean exactly the same thing by that word. Sometimes people talk about superdeterminism as a rejection of "counterfactual definiteness", meaning physics can no longer address questions of what would have happened if a different measurement had been made on the system, but I suppose this is just another way of saying that we cannot assume statistical independence between the choice of measurement on a system and the state of the system prior to measurement. Basically I think this amounts to a limitation on allowable initial conditions for the system and the experimenter, in statistical mechanics terms you can no longer assume that all microstates consistent with a given observed macrostate are physically allowable.
 
  • #68
ThomasT said:
Random is defined at the instrumental level, isn't it? That being so, then the polarizer settings are random. But, the coincidence rates aren't random.
No, the randomness here is about whether there's a correlation between the "hidden states" of particles prior to measurement and the experimenter's choice of what measurement setting to use, over a large number of trials. This is not a question that can be addressed "instrumentally", since by definition we have no way to find out what the hidden states on a given trial actually are. But if we take the perspective of an imaginary omniscient observer who knows the hidden states on each trial, it must be true that the observer either will or won't see a correlation between the complete state of a particle prior to measurement and the experimenter's choice of how to measure it--i.e. the particle either will or won't act as if it can "anticipate" in advance what the experimenter will choose.
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Perhaps it would help if you looked at the example involving scratch lotto cards that I gave on another thread:
The key to seeing why you can't explain the results by just imagining the electrons had preexisting spins on each axis is to look at what happens when the two experimenters pick different axes to measure. Here's an analogy I came up with on another thread (for more info, google 'Bell's inequality'):

Suppose we have a machine that generates pairs of scratch lotto cards, each of which has three boxes that, when scratched, can reveal either a cherry or a lemon. We give one card to Alice and one to Bob, and each scratches only one of the three boxes. When we repeat this many times, we find that whenever they both pick the same box to scratch, they always get opposite results--if Bob scratches box A and finds a cherry, and Alice scratches box A on her card, she's guaranteed to find a lemon.

Classically, we might explain this by supposing that there is definitely either a cherry or a lemon in each box, even though we don't reveal it until we scratch it, and that the machine prints pairs of cards in such a way that the "hidden" fruit in a given box of one card is always the opposite of the hidden fruit in the same box of the other card. If we represent cherries as + and lemons as -, so that a B+ card would represent one where box B's hidden fruit is a cherry, then the classical assumption is that each card's +'s and -'s are the opposite of the other--if the first card was created with hidden fruits A+,B+,C-, then the other card must have been created with the hidden fruits A-,B-,C+.

The problem is that if this were true, it would force you to the conclusion that on those trials where Alice and Bob picked different boxes to scratch, they should find opposite fruits on at least 1/3 of the trials. For example, if we imagine Bob's card has the hidden fruits A+,B-,C+ and Alice's card has the hidden fruits A-,B+,C-, then we can look at each possible way that Alice and Bob can randomly choose different boxes to scratch, and what the results would be:

Bob picks A, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

Bob picks A, Alice picks C: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks B, Alice picks A: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks B, Alice picks C: same result (Bob gets a lemon, Alice gets a lemon)

Bob picks C, Alice picks A: opposite results (Bob gets a cherry, Alice gets a lemon)

Bob picks C, Alice picks B: same result (Bob gets a cherry, Alice gets a cherry)

In this case, you can see that in 1/3 of trials where they pick different boxes, they should get opposite results. You'd get the same answer if you assumed any other preexisting state where there are two fruits of one type and one of the other, like A+,B+,C-/A-,B-,C+ or A+,B-,C-/A-,B+,C+. On the other hand, if you assume a state where each card has the same fruit behind all three boxes, like A+,B+,C+/A-,B-,C-, then of course even if Alice and Bob pick different boxes to scratch they're guaranteed to get opposite fruits with probability 1. So if you imagine that when multiple pairs of cards are generated by the machine, some fraction of pairs are created in inhomogeneous preexisting states like A+,B-,C-/A-,B+,C+ while other pairs are created in homogeneous preexisting states like A+,B+,C+/A-,B-,C-, then the probability of getting opposite fruits when you scratch different boxes should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if 100% of all the pairs were created in inhomogeneous preexisting states, it wouldn't make sense for you to get opposite answers in less than 1/3 of trials where you scratch different boxes, provided you assume that each card has such a preexisting state with "hidden fruits" in each box.

But now suppose Alice and Bob look at all the trials where they picked different boxes, and found that they only got opposite fruits 1/4 of the time! That would be the violation of Bell's inequality, and something equivalent actually can happen when you measure the spin of entangled photons along one of three different possible axes. So in this example, it seems we can't resolve the mystery by just assuming the machine creates two cards with definite "hidden fruits" behind each box, such that the two cards always have opposite fruits in a given box.
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
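
As a check on that 1/3 figure, here is a short enumeration sketch of my own (Python; opposite_fraction is a made-up helper): it runs through all 8 possible assignments of hidden fruits to Alice's card, with Bob's card opposite box by box, and computes for each the fraction of different-box scratches that give opposite fruits.

```python
from itertools import product
from fractions import Fraction

BOXES = ["A", "B", "C"]

def opposite_fraction(alice_hidden):
    # Fraction of different-box scratches giving opposite fruits, given Alice's
    # predetermined fruits (+1 = cherry, -1 = lemon) and Bob's card carrying the
    # opposite fruit in every box.
    bob_hidden = {box: -fruit for box, fruit in alice_hidden.items()}
    pairs = [(a, b) for a in BOXES for b in BOXES if a != b]   # the 6 combinations
    opposite = sum(1 for a, b in pairs if alice_hidden[a] != bob_hidden[b])
    return Fraction(opposite, len(pairs))

results = []
for fruits in product([+1, -1], repeat=3):     # all 8 possible hidden assignments
    results.append(opposite_fraction(dict(zip(BOXES, fruits))))

print(min(results))   # 1/3: every assignment gives at least 1/3, so any mixture
                      # of such cards does too, yet the quantum statistics give 1/4
```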

By the way, I also extended the scratch lotto analogy to a different Bell inequality in post #8 of this thread, if it helps.
 
  • #69
JesseM said:
They do select them independently.

I don't understand what you're asking here.
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I’ve made. You insist there are only three possible angles and expect that to represent “They do select them independently”. We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a non-local site such as the other observer. In your example that means a selection of 6 angles (or at least five), such as ALICE (0, 60, 90) and BOB (0, 45, 120). I can see allowing one angle (like 0) to be considered to come up the same by chance. But all three, no; that would risk oversimplifying the problem to the point of making the conclusions unreliable. All I've been saying is that this has been oversimplified and leaves the conclusion incomplete.

I see no point in rereading the same explanations of the same thing with the same predetermined restrictions being enforced on the separate, should have been independent, observers.
Maybe the two of you believe this binary example is conclusive; IMO it is not.

I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION: My opinion; it agrees with scientists such as those on the Kwiat Team who do not consider any existing proof (including this binary one) conclusive, and holds that additional funding and experimental work on Bell-EPR issues, such as the tests at Illinois, are justified.

THE BLUE OPINION: Your apparent position; that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues are no longer justified. Such experiments as exist, along with this binary proof, belong in an undergraduate teaching environment, and advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.

Are you guys in fact picking BLUE as your opinion?

That is all I want your choice on this opinion RED or BLUE. No Green, no Gray, no Red&Blue, no explanations.
I'm satisfied that my choice of Red is reasonable and that a significant number of real practicing scientists share it.

If your choice really is Blue:
Please, I need no further matrix of explanations. Address your concerns to the active scientists who obviously feel differently as new advanced Bell-EPR type testing efforts continue. If you are successful in convincing any of those doing such testing to publicly agree with you that their testing has been unjustified and future funding of that type is no longer justified, then I’ll know I need to relook at your arguments on this approach. No need to add them to this thread; just refer us to any papers you may publish to make your point with the scientists who need to stop wasting their efforts. If the details in your papers are enough to convince the scientific community to change their opinion to BLUE, it will be good reading for the rest of us.

I think we have shared more than enough on this with each other.
Other than looking for your opinion choice RED or BLUE I will unsubscribe from this thread.
 
  • #70
RandallB said:
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I’ve made. You insist there are only three possible angles
I'm not insisting there are only three possible angles, it's just a condition of the experiment that the two experimenters agree ahead of time that they will choose between three particular angles, even though there are many other possible angles they might have measured.
RandallB said:
and expect that to represent “ They do select them independently”.
Yes, they choose which of the three independently. Obviously, the three angles that they are choosing between were not themselves selected independently by the experimenters, as I said they made an agreement ahead of time along the lines of "on each trial, we'll always choose one of the three angles 0, 60, 90" or whatever.
RandallB said:
We are not talking about pushing a button independently; we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a non-local site such as the other observer.
What do you mean by "functions"? They could design their experiment so that each of the three buttons automatically set the detector to one of the three angles--button A might set it to 0, button B might set it to 60, and button C might set it to 90. It doesn't make sense to argue about the setup itself, because Bell's proof assumes this sort of setup, and then shows that the results QM predicts the experimenters will get when using this particular setup are inconsistent with local realism. Are you arguing that given this experimental setup, the results predicted by QM are not inconsistent with local realism?
RandallB said:
In your example that means a selection of 6 angles (or at least five), such as ALICE (0, 60, 90) and BOB (0, 45, 120).
Again, it's just part of the assumed setup that they have each agreed to choose between the same three angles on each trial. If Alice is choosing between 0, 60, and 90, then Bob must have agreed to choose between 0, 60 and 90 as well. So on one trial you might have Alice-60 and Bob-90, on another trial you might have Alice-90 and Bob-0, but there will never be a trial where either of them picks an angle that isn't 0, 60, or 90 (if these are the three angles they have agreed in advance to pick between).
RandallB said:
I will close my input to this thread by requesting a binary opinion choice to clarify our differences and confirm our opinions really are different.

Last year the Kwiat Team at Illinois received and spent over $70,000 in funding on scientific testing aimed at closing “loopholes” in the EPR-Bell question represented in this example:

The Opinion choice is RED OR BLUE. pick only one

THE RED OPINION: My opinion; it agrees with scientists such as those on the Kwiat Team who do not consider any existing proof (including this binary one) conclusive, and holds that additional funding and experimental work on Bell-EPR issues, such as the tests at Illinois, are justified.

THE BLUE OPINION: Your apparent position; that this binary proof is conclusive. Thus the efforts being expended and any additional funding of scientific testing of EPR-Bell issues are no longer justified. Such experiments as exist, along with this binary proof, belong in an undergraduate teaching environment, and advanced labs should be concerned with more important work rather than rehashing old news no one has any doubts about.
I am not addressing the issue of whether actual experiments sufficiently resemble Bell's idealized thought-experiment to constitute experimental refutations of local realism, I'm just talking about theoretical predictions here. In Bell's thought-experiment, Bell's theorem shows definitively that any local realist theory must respect the Bell inequalities, and quantum theory definitively predicts the Bell inequalities will be violated in this experiment. When people talk about "loopholes" in EPR experiments that require better tests, they are pointing out ways in which previous experiments may have fallen short of the ideal thought-experiment (not successfully detecting every pair of particles, for example), they are not arguing that the predicted violations of Bell inequalities by quantum theory don't definitively prove that QM is incompatible with local realism (but experiments are needed to check if QM's predictions are actually correct in the real world). Do you agree that on a theoretical level, Bell's theorem shows beyond a shadow of a doubt that the predictions of QM are inconsistent with local realism?
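
To put numbers on the theoretical side, here is a short sketch of my own (not part of the original exchange): take spin-1/2 particles in the singlet state and assume the three settings correspond to axes 120 degrees apart. The standard quantum prediction for opposite results is cos²(θ/2) for a relative angle θ, which gives certainty on equal settings but only 1/4 on different settings, below the 1/3 local-realist bound of the lotto-card argument.

```python
import math
from itertools import product

# Assumed setup (my own numbers): spin-1/2 singlet pairs, three axes 120 deg apart
AXES = {"A": 0.0, "B": 120.0, "C": 240.0}

def prob_opposite(angle_a, angle_b):
    # Standard singlet-state prediction: P(opposite results) = cos^2(theta/2),
    # where theta is the angle between the two measurement axes.
    theta = math.radians(angle_a - angle_b)
    return math.cos(theta / 2) ** 2

same = [prob_opposite(AXES[x], AXES[y]) for x, y in product(AXES, repeat=2) if x == y]
diff = [prob_opposite(AXES[x], AXES[y]) for x, y in product(AXES, repeat=2) if x != y]

print(min(same))               # 1.0   -> perfect anti-correlation on equal settings
print(sum(diff) / len(diff))   # 0.25  -> below the local-realist bound of 1/3
```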
 
  • #71
ThomasT said:
So, superdeterminism is just a special case of determinism involving Bell's theorem and EPR-Bell tests?

Sort of. Superdeterminism is sometimes offered as a "solution" to Bell's Theorem that restores locality and realism. The problem is that it replaces it with something which is infinitely worse - and makes no sense whatsoever. Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder. As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.
 
  • #72
ThomasT said:
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

Hum, no offense, but I think you totally misunderstood the EPR-Bell type experiments. There's a common source, two opposite arms (or optical fibers or anything) and two experimental setups: Alice's and Bob's.
Each experimental setup can be seen to consist of a polarizing beam splitter which splits the incoming light into an "up" part and a "down" part, and to each of these two channels, there's a photomultiplier. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" wrt the orientation of the beamsplitter, then that photon is (ideally) going to make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, then it is going to make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly either make the up one click and not the down one, or make the down one click and not the up one.

So, if a photon is detected, in any case one of the two photomultipliers will click at Alice's. Never both. That's verified. But sometimes neither clicks, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and Bob: if at Alice something clicks, but not at Bob, we reject it, and also vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at Alice and at Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time that Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time it is Alice's "down" photomultiplier that clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means with "perfect correlation".

This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob's and Alice's analysers, or "anti-aligned". But any couple of photons that would be "in between", say at 45 degrees, will be hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or Bob, why do they do *the same thing* at both sides ?

Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's analysers are parallel. So if the source were emitting only photon pairs perfectly aligned or anti-aligned with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics why, when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
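(A minimal numerical sketch of this point, assuming a naive classical model purely for illustration: each pulse responds to its analyser independently, with the Malus-law probability cos^2 of the angle between its polarization and the analyser. For pulses polarized at 45 degrees to parallel analysers, such a model agrees on both sides only about half the time, not always.)

```python
# Hypothetical illustration, not a description of the real experiments:
# two identical pulses polarized at 45 degrees hit two parallel analysers,
# and each analyser responds INDEPENDENTLY with P(up) = cos^2(angle).
import math
import random

def click(pulse_angle_deg, analyser_angle_deg):
    """One independent 'up'/'down' response following a Malus-law probability."""
    p_up = math.cos(math.radians(pulse_angle_deg - analyser_angle_deg)) ** 2
    return 'up' if random.random() < p_up else 'down'

trials = 100_000
agree = sum(click(45, 0) == click(45, 0) for _ in range(trials))
print(f"same click on both sides: {agree / trials:.3f}")
# Prints roughly 0.5, whereas parallel analysers in the actual EPR-Bell
# setup give perfect correlation (1.0) -- which is exactly the puzzle above.
```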
 
Last edited:
  • #73
RandallB said:
Of course you don't understand - you, like vanesch, are only continuing as before without addressing the point I've made. You insist there are only three possible angles and expect that to represent "They do select them independently". We are not talking about pushing a button independently, we're talking about independently selecting the 3 functions to be used for those buttons, without any interference or suggestions from a non-local site such as the other observer.

But we're not talking about angles here! I'm telling you about a thought experiment with a box which has 3 buttons! No photons. No polarizers.

We just have a black box machine of which we don't know how it works, but we suppose that it complies with some general ideas (the famous Bell assumptions of locality etc...)
Just 3 buttons on each side, labeled A, B or C, an indicator that the experiment is ready, and a red and a green light.

And then the conditions of functioning, that each time Alice and Bob happen to push the same button, they ALWAYS find that the same light lights up. It never happens that Alice and Bob both push the button C, and at Alice the green light lights up, and at Bob, the red one.
And also the condition that over a long run, at Alice, for the cases where she pushed A, she got on average about 50% red and 50% green, in the cases where she pushed B, the same, and in cases where she pushed C, the same.

These are the elements GIVEN for a thought experiment. It's the description of a thinkable setup. I can build you one with a small processor and a few wires and buttons which does this, so it is not an impossible setup.

The question is, what can we derive as conditions for the list of events where Alice happened to push A, and Bob happened to push B. And for the other list where Alice
happened to push B, and Bob happened to push C. etc...

THIS is the derivation of Bell's theorem (or rather, of Bell's inequalities). He derives some conditions on those lists, given the setup and given the conditions.

IT IS A REASONING ON PAPER. So I'm NOT talking about any *experimental* observations in the lab. That's a different story.

Now, it is true of course that the "setup" here corresponds more or less to a setup of principle where there is a common "emitter of pairs of entangled particles", and then two experimental boxes where one can choose between 3 settings of angles (that's equivalent to pushing A, B or C), and get a binary output each time (red or green). It is this idealized setup which is quantified in a simple quantum-mechanical calculation, and which is (more or less well) approximated by real experiments. So these "experimental physics" issues are of course the inspiration for our reasoning. But I repeat: the reasoning presented here has a priori nothing to do with particles, angles, polarizers or anything: just with a black box setup which has certain properties, and of which we try to deduce other properties, under a number of assumptions of the workings of the black box.

So I can answer your "red or blue" question: concerning GENUINE EXPERIMENTS, of course it is a good idea to try to bring the experiment closer to the ideal situation, which is still relatively far away. So yes, in as much as the proposed experiments are indeed improvements, it is a good idea to fund them. But that's a question of how NATURE behaves (does it deviate from quantum mechanics or not).

However, concerning the *formal reasoning*, no there is not much doubt. Quantum mechanics (as a theory) is definitely not compatible with the assumptions of Bell.

In about the same way that there is not much doubt that Pythagoras' theorem follows from the assumptions (axioms) of Euclidean geometry, whether or not "real space" is well-described by that Euclidean model. So while spending money on an experiment that tests whether or not physical space follows the Euclidean prescription might be sensible, spending money to see whether Pythagoras' theorem (on paper) follows from the Euclidean axioms would, I think, be a waste. And there is no link between the two! It is not because the Euclidean axioms turn out not to be correct in real physical space that, suddenly, Pythagoras' proof from Euclid's axioms is wrong!
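(To make the "reasoning on paper" concrete, here is a small sketch, my own illustration rather than vanesch's derivation: it enumerates every possible set of predetermined colours for the three buttons, which is what locality plus the perfect same-button correlation force, and computes the fraction of different-button trials giving the same light. Every assignment gives at least 1/3, and so does any statistical mixture of them, whereas for a suitable entangled photon state and analyser angles of 0, 60 and 120 degrees quantum mechanics predicts 1/4.)

```python
# Hypothetical illustration: the box is assumed to give both sides the same
# predetermined colour for each button (forced by locality + perfect correlation).
from itertools import product

colours = ('red', 'green')
buttons = ('A', 'B', 'C')

worst = 1.0
for assignment in product(colours, repeat=3):          # e.g. ('red', 'red', 'green')
    answer = dict(zip(buttons, assignment))
    pairs = [(x, y) for x in buttons for y in buttons if x != y]
    same = sum(answer[x] == answer[y] for x, y in pairs) / len(pairs)
    worst = min(worst, same)
    print(assignment, f"P(same light | different buttons) = {same:.3f}")

print("minimum over all predetermined assignments:", worst)   # 1/3
# Any mixture of such assignments therefore also gives at least 1/3,
# while the quantum prediction for the analogous setup can be 1/4.
```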
 
Last edited:
  • #74
JesseM said:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)?

The nice thing about the proof presented earlier in this thread (which some here don't seem to understand) is that a priori, one even leaves in place the possibility of some random element in the generation of the results and that the assumption of locality only means that the *probability* of having a cherry or a banana is determined, but not necessarily the outcome, and that it FOLLOWS from the above requirement of perfect (anti-) correlation that they must be pre-determined.

I say this because sometimes a (senseless) objection to Bell's argument is that he *assumes* determinism. No assumption of determinism is necessary: it FOLLOWS from the perfect correlation that the probabilities must be 0 or 1 (once common information is taken into account).
 
  • #75
DrChinese said:
Superdeterminism is sometimes offerred as a "solution" to Bell's Theorem that restores locality and realism.

From a logical point of view it is a solution. No quotes needed.

The problem is that it replaces it with something which is infinitely worse - and makes no sense whatsoever.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Superdeterminism is not really a theory so much as a concept: like God, its value lies in the eyes of the beholder.

Superdeterminism is nothing but the old, classical determinism with a requirement of logical consistency added.

As far as I know, no actual working theory has ever been put forth that passes even the simplest of tests.

This is true, but it says nothing about the possibility that such a theory might exist.
 
  • #76
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

Logically, it is true that there is no argument that superdeterminism cannot hold in a deterministic theory. After all, everything in a deterministic frame is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want. It is on this kind of reasoning that astrology grounds itself.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent. This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence no single cause-effect relationship can be deduced anymore. Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that masquerades as determinism. Call it "hyperdeterminism" :smile:

Now that I come to think of it, it could of course explain a lot of crazy things that happen in the world... :-p
 
  • #77
Having looked up where it was brought up, Super-determinism is not a special case of determinism at all and is actually a fairly simple fourth assumption.

Such a term shouldn't even be used to describe this possibility, it is actually a whole other assumption that is unrelated to the other 3. Perhaps the name is just a way to try and hide this fact.

The assumption being referred to is that there was not something that occurred in the past that both caused the person to choose the detection settings and caused the particles to behave in such a way.

The implications of that being the case are a little far-fetched, but other than that it is just plain old determinism. It is not the same thing as the objective reality assumption, since it could just be in this one case.
 
Last edited:
  • #78
krimianl99 said:
since it could just be in this one case.

No, not really. If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE. As I said in another post, nothing stops me in principle from using the medicine/placebo selection in a medical test to determine at the same time the settings at Bob, and using the results of the medical tests on another set of ill people to determine the settings at Alice. If there have to be correlations between the choices of Alice and Bob in all cases, then also in THIS case, and hence between any medicine/placebo selection procedure on one hand, and medical tests on the other.

But this would mean that any correlation between the outcome of a medical test and whether or not a person received a treatment is never a proof of the medicine working, as I would have found such correlations already between two DIFFERENT sets of patients (namely those at Bob who get the medicine on one hand, and those at Alice who, whether they got better or not, determined Alice's choice).
 
  • #79
vanesch said:
If there is an influence that STRONGLY CORRELATES just ANY technique that I use to make the choices at Bob, the choices at Alice, and the particles sent out, then this means that there are such correlations EVERYWHERE.

How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind-control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

It's just a fourth assumption that has nothing to do with the others. Furthermore it illustrates my point about the differences between the limits of induction and people just making mistakes with deduction.

With more experiences, different points of view, and a lot of practice understanding the limits of induction and using them, the human race can definitely reduce uncertainty caused by the limits of induction.

But that is TOTALLY different than checking for errors in DEDUCTIVE reasoning. In one case you are checking for something similar to a typo, and in the other you are being totally paranoid that anything that you haven't already thought of could be going on.

In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so called "proof" by negation the whole thing is subject to the limits of induction.
 
Last edited:
  • #80
krimianl99 said:
In a real proof, all induction is limited to the premises. As long as you don't have a reason to doubt the premises, the proof holds. In so called "proof" by negation the whole thing is subject to the limits of induction.
What do you mean here? Would you deny that "proof by contradiction" is a deductive argument rather than an inductive one? It's often used in mathematics, for example (look at this proof that there is no largest prime number). And Bell's theorem can be understood as a purely theoretical argument to show that certain classes of mathematical laws cannot generate the same statistical predictions as the theory of QM.
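(For reference, the prime-number argument linked to above runs, in its textbook form, roughly as follows; note that it is purely deductive, with no inductive step.)

```latex
Suppose, for contradiction, that $p_1, p_2, \dots, p_n$ were all the primes.
Let $N = p_1 p_2 \cdots p_n + 1$. Then $N > 1$, so $N$ has some prime divisor $q$.
But $q$ cannot be any of the $p_i$, since dividing $N$ by any $p_i$ leaves remainder $1$.
So $q$ is a prime not on the list, contradicting the assumption.
Hence the primes cannot be finite in number.
```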
 
  • #81
Ian Davis said:
True, but I find just as intriguing the question of which way that same pulse of light is traveling within the constructed medium. The explanation that somehow the tail of the signal contains all the necessary information to construct the resulting complex wave observed, and the coincidence that the back wave visually intersects precisely as it does with the entering wave without in any way interfering with the arriving wave, seems to me a lot less intuitive than that the pulse, on arriving at the front of the medium, travels with a velocity of ~ -2c to the other end, and then exits. The number 2, of all numbers, also seems strange. Why not 1? It seems a case where we are willing to defy Occam's razor in order to defend a priori beliefs. How much energy is packed into that one pulse of light, and how is this energy to be conserved when that one pulse visually becomes three? From where does the tail of the incoming signal derive the strength to form such a strong back signal? Is the signal fractal in the sense that within some small part is the description of the whole? These are questions I can't answer, not being a physicist, but still questions that trouble me with the standard explanations given, about it all being smoke and mirrors.

Likewise I find Feynman's suggestion, that the spontaneous creation and destruction of positron-electron pairs in our world view is in reality electrons changing direction in time as a consequence of absorbing/emitting a photon, both intriguing and rather appealing.

It does seem that our reluctance to have things move other than forwards in time means that we must jump through hoops to explain why despite appearances things like light, electrons and signals cannot move backwards in time. My primary interest is in the question of time itself. I'm not well equipped to understand answers to this question, but it seems to me that time is the one question most demanding and deserving of serious thought by physicists even if that thought produces no subsequent answers.

Re Feynman: the notion of going backwards in time is simply a metaphor. It turns out that the manipulations required to make a Dirac Hamiltonian with only positive energy eigenvalues are equivalent to having negative-energy solutions travel backwards in time -- this is nothing more than turning (-E) into (+E) in the expression exp(iEt). If you go back and review first the old-fashioned perturbation theory, and then its successor, modern covariant field theory, you can see very clearly the origins of Feynman's metaphor. Among other things, you will see how the old-fashioned perturbation theory diagrams combine to produce the usual covariant Feynman diagrams of, say, the Compton effect. You will get a much better idea of what "backwards in time" brings to the table -- in my judgment the idea is a creative fiction, but a very powerful one.

QFT is a somewhat difficult subject. To get even a basic understanding you need to deal with both the technical and the conceptual aspects. I highly recommend Chapter 1 of Vol. I of Weinberg's Quantum Theory of Fields -- he gives a good summary of the basic material you need to know to start to understand QFT. Quite frankly, the physics community embraced Feynman's metaphor rather quickly -- along with the Schwinger and Tomonaga versions -- almost 100%, and it became part of "what everybody knows", as in tacit physics knowledge, as in no big deal. As a metaphor, Feynman's idea is brilliant and powerful; as a statement describing reality it is at least suspect.
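(As a one-line illustration of the sign flip being described, in my notation rather than Reilly's, and with the usual convention that a stationary state carries a factor exp(-iEt):)

```latex
e^{-i(-E)t} \;=\; e^{+iEt} \;=\; e^{-iE(-t)}, \qquad E > 0,
```

so a negative-energy factor evolving forward in time is formally indistinguishable from a positive-energy factor evolving toward earlier times, which is the sense in which the "backwards in time" picture reads as a relabeling rather than new physics.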

Does the usual diagram of an RLC circuit mirror the physical processes of the circuit?

Regards,
Reilly Atkinson
 
Last edited:
  • #82
ueit said:
From a logical point of view it is a solution. No quotes needed.

I've heard many times statements like these but I've heard no valid argument against superdeterminism. Can you present such an argument?

This is true, but it says nothing about the possibility that such a theory might exist.

There is no theory called superdeterminism which has anything to do with particle theory. There is an idea behind it, but no true theory called something like "superdeterministic quantum theory" exists. That is why quotes are needed. You cannot negate a theory which assumes that which it seeks to prove.

Note that superdeterminism is a totally ad hoc theory with no testable components. It adds nothing to our knowledge of particle behavior. And worse, if true, it would require that every particle contain a complete history of the entire universe so it would be capable of matching the proper results for Bell tests - while remaining local.

In addition, there would need to be connections between forces - such as between the weak and the electromagnetic - that are heretofore unknown and not a part of the Standard Model. That is because superdeterminism would lead to all kinds of connections and would itself impose constraints.

Just as Everett's MWI required substantial work to be fleshed out into something that could be taken seriously, and Bohm's mechanics is still being worked on, the same would be required of a "superdeterministic" theory before it would really qualify as viable. I have yet to see a single paper published which seriously takes apart the idea of superdeterminism in a critical manner and builds a version which meets scientific rigor.

Here is a simple counter-example: The detectors of Alice and Bob are controlled by an algorithm based on the radioactive decay of separate uranium samples. Thus, randomness introduced by the weak force (perhaps the time of decay) controls the selection of angle settings. According to superdeterminism, those separated radioactive samples actually independently contain the blueprint for the upcoming Bell test and work together (although locally) to ensure that what appears to be a random event is actually connected.

Please, don't make me laugh any harder. :)
 
  • #83
vanesch said:
Logically, it is true that there is no argument that in a deterministic theory, superdeterminism is not supposed to hold. After all, everything in a deterministic frame is a function of the initial conditions, which can always be picked in exactly such a way as to obtain any correlation you want.

This is not what I have in mind. I don't see EPR explained by the initial conditions, but by a new law of physics that holds regardless of those parameters.

It is based on such kind of reasoning that astrology has a ground.

May be but that is not what I propose.

However, as I pointed out already a few times, it is an empirical observation that things which don't seem to have a direct or indirect causal link happen to be statistically independent.

1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another no matter how far apart they are. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory says so. Now, if a theory says that emission and absorption events share a common cause in the past, then they "seem" causally related because the theory says it is so.

2. We are speaking about microscopic events. We have no direct empirical observation of this world, and the Heisenberg uncertainty principle introduces more limitations. So clearly you need a theory first, and only then can you decide what is causally related and what is not.

This is the only way that we can "disentangle" cause-effect relationships (namely, by observing correlations between the "randomly" selected cause, and the observed effect). In other words, "coincidences" obey statistical laws.

So you wouldn't believe that a certain star can produce a supernova explosion until you "randomly" select a star and start throwing matter in it, right?

It is sufficient that one single kind of phenomenon doesn't follow this rule, and as a consequence no single cause-effect relationship can be deduced anymore.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing. All interpretations bring some strange element but this is not necessarily present in an obvious way at macroscopic level.

Simply because this single effect can always be included in the "selection chain" of any cause-effect relationship and hence "spoil" the statistical independence in that relationship.

So, please show me how the assumption that any emitter-absorber pair has a common "ancestor" "spoils" the statistical independence in a medical test. I think you will need to also assume that a patient has all the emitters and the medic all the absorbers (or at least most of them) to deduce such a thing. But maybe you have some other proof in mind.

So *if* superdeterminism is true, then it is simply amazing that we COULD deduce cause-effect relationships at all, in just any domain of scientific activity. Ok, this could be part of the superdeterminism too, but it would be even MORE conspiratorial: superdeterminism that masquerades as determinism. Call it "hyperdeterminism" :smile:

I think you are using a double standard here. All interpretations have this kind of conspiracy. We have a non-deterministic theory that masquerades as deterministic, a non-local theory that masquerades as local, and a multiverse theory that masquerades as a single, 4D-universe theory. Also, there is basically no difference between determinism and superdeterminism except the fact that the first one can be proven to be logically inconsistent.

I think that the main error in your reasoning comes from a huge extrapolation from microscopic to classical domain. You may have statistical independence at macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.
 
  • #84
ueit said:
1. In order to suspect a causal link you need a theory. In Bohm's theory the motion of one particle directly influences the motion of another no matter how far apart they are. The transactional interpretation has an absorber-emitter information exchange that goes backwards in time. None of these is obvious or intuitive. But we accept them (more or less) because the theory says so. Now, if a theory says that emission and absorption events share a common cause in the past, then they "seem" causally related because the theory says it is so.

Yes, and it is about that class of theories that Bell's inequalities tell us something.

2. We are speaking about microscopic events.

No, we are talking about black boxes with choices by experimenters, binary results, and the correlations between those binary results as a function of the choices of the experimenter. Bell's inequalities are NOT about photons, particles or anything specific. They are about the link that there can exist between *choices of observers* on one hand, and *correlations of binary events* on the other hand.

We have no direct empirical observation of this world, and the Heisenberg uncertainty principle introduces more limitations. So clearly you need a theory first, and only then can you decide what is causally related and what is not.

We consider a *class* of theories: namely those that are local; in which we do not consider superdeterminism of the kind where a distant observer's choice can have a statistical correlation with a local observer's choice and with a possible "central source" (given locality, this can then only happen through "special initial conditions"); and in which there are genuine binary outcomes each time. We now assume that whatever theory describes the functioning of our black box experiment, it is part of this class of theories. Well, if that's the case, then there are relations between certain correlations one can observe that way. The particular relation that interests us here is the one where it is given that for identical choices (Alice A and Bob A, for instance), the correlation is complete.
It then turns out that one has conditions on the OTHER correlations.

I strongly disagree. This is like saying that if a non-local theory is true then we cannot do science anymore because our experiments might be influenced by whatever a dude in another galaxy is doing.

But this is TRUE! The only way in which Newtonian gravity gets out of this is that its influence diminishes with distance. If gravity didn't fall off as 1/r^2 but went, say, as ln(r), it would be totally impossible to ever deduce the equivalent of Newton's laws!


I think that the main error in your reasoning comes from a huge extrapolation from microscopic to classical domain. You may have statistical independence at macroscopic level in a superdeterministic theory just like you can have a local universe based on a non-local fundamental theory.

Of course! The only thing Bell is telling us, is that given the quantum-mechanical predictions, it will not be possible to do this with a non-superdeterministic, local, etc... theory. That's ALL.
 
  • #85
krimianl99 said:
How so? Just because you don't know what could cause such a correlation doesn't mean the correlation would always be there just because it is there when tested. Maybe an event that causes the particles to become entangled radiates a mind-control wave to the experimenters, but only at times when the particles are about to be tested. It's not much more far-fetched than the whole thing to start with.

Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
 
  • #86
DrChinese said:
Note that superdeterminism is a totally ad hoc theory with no testable components. It adds nothing to our knowledge of particle behavior.

Rather like the "theory" of intelligent design in biology.
 
  • #87
vanesch said:
Well, indeed, we make the assumption that no such thing happens. That's the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.

Vanesch, that's nice of you to say. (And I mean that in a good way.)

But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

In no way does the presence or absence of this assumption change anything. Anyone who wants to believe in superdeterminism can, and they will still not have made one iota of change to orthodox quantum theory. The results are still as predicted by QT, and are different than would have been expected by EPR. So the conclusion that "local realism holds" ends up being a Pyrrhic victory (I hope I spelled that right).
 
  • #88
DrChinese said:
Vanesch, that's nice of you to say. (And I mean that in a good way.)
But really, I don't see that "no superdeterminism" is a true assumption any more than it is to assume that "the hand of God" does not personally intervene to hide the true nature of the universe each and every time we perform an experiment. There must be a zillion similar assumptions that could be pulled out of the woodwork ("the universe is really only 10 minutes old, prior history is an illusion"). They are basically all "ad hoc", and in effect, anti-science.

Well, the aim of what I wanted to show in this thread is that there is a logical conclusion that one can draw from a certain number of assumptions (and of course the meta-assumption that logic and so on hold). Whether these assumptions are "reasonable", "evident" or whatever doesn't change the fact of whether they are necessary or not in the logical deduction. And one needs the assumption of no superdeterminism in two instances:
1) When one writes that the residual uncertainty of the couple, say, (red,red) is the product of the probability to have red at Alice (taking into account the common information) and the probability to have red at Bob: in other words, the statistical independence of these two unrelated events;
2) When one assumes that the distribution of lambda itself is statistically independent of the residual probabilities (we can weight D over this probability) and of the choices at Alice and Bob (so it is not a function of X and Y).

This is like showing Pythagoras' theorem: it is not because you might find the fifth axiom of Euclid "so evident as not to be a true assumption", that you don't need it in the logical proof!
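(Written out schematically, with X and Y for Alice's and Bob's choices and lambda for the common information, the two conditions as I read them are:)

```latex
\text{1)}\quad P(\mathrm{red}_A, \mathrm{red}_B \mid X, Y, \lambda)
      \;=\; P(\mathrm{red}_A \mid X, \lambda)\, P(\mathrm{red}_B \mid Y, \lambda),
\qquad
\text{2)}\quad \rho(\lambda \mid X, Y) \;=\; \rho(\lambda).
```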
 
Last edited:
  • #89
vanesch said:
... the assumption of no superdeterminism: that there is no statistical correlation between the brain of an experimenter making a choice, and the emission of a pair of particles.
Could you put this in observable terms? Something like, the assumption is made (in the formulation of Bell inequalities) that there is no connection between a pair of polarizer settings and the paired detection attributes associated with those polarizer settings?

I feel like I'm getting farther and farther away from clarifying this stuff for myself, and I still have to respond to some replies to my queries by you and Jesse. And thanks, by the way.

Anyway, the experimental results seem to make very clear the (quantitative) relationship between joint polarizer settings and associated joint detection attributes. The theoretical approach and test preparation methods inspired by quantum theory yield very close agreement between QM predictions and results. This quantum theoretical approach assumes a common cause, and involves the instrumental analysis-filtering of (assumed) like physical entities by like instrumental analyzers-filters, and timer-controlled pairing techniques.

We know that there is a predictable quantitative relationship between joint polarizer settings and pairs of appropriately associated joint detection attributes. And so, it's assumed that there is a qualitative relationship also. This is the basis for the assumption of common cause and common filtration of common properties.

The experimental violation of Bell inequalities has shown that the assumptions of common cause and common filtration of common properties can't be true if one uses a certain sort of predictive formulation wherein one also assumes that events at A and B (for any given set of paired detection attributes) are independent of each other, so that the probability of coincidental detections would be the product of the separate probabilities at A and B. Of course, the experimental design(s) necessary to produce entanglement preclude such independence. And the quantum mechanical predictive formulation, in association with the first two (common cause) assumptions and considered in light of the experimental results, supports the common cause assumption(s), and therefore the assumption of similar or identical disturbances moving from emitter to filter during any given coincidence interval.
 
Last edited:
  • #90
Originally Posted by ThomasT
I'm not sure what you mean by perfect correlation. There is no perfect correlation between coincidence rate and any one angular difference.

vanesch said:
Hum, no offense, but I think you totally misunderstood the EPR-Bell type experiments.
No offense taken. I realize that I can be a bit, er, dense at times. I'm here to learn, to pass on anything that I have learned and think is ok, and especially to put out here for criticism any insights that I think I might have. I very much appreciate you mentors and advisors, etc., taking the time to explain things.
vanesch said:
There's a common source, two opposite arms (or optical fibers or anything) and two experimental setups: Alice's and Bob's.
Each experimental setup can be seen to consist of a polarizing beam splitter which splits the incoming light into an "up" part and a "down" part, and to each of these two channels, there's a photomultiplier. The angle of the polarizing beam splitter can be rotated.

Now, if a photon/lightpulse/... comes in which is "up" with respect to the orientation of the beamsplitter, then that photon will (ideally) make the "up" photomultiplier click, and not the down one. If the photon is in the "down" direction, it will make the down photomultiplier click and not the up one. If the photon is polarized at 45 degrees, then it will randomly make either the up photomultiplier click (and not the down one), or the down one click (and not the up one).

So, if a photon is detected, in any case exactly one of the two photomultipliers will click at Alice, never both. That's verified. But sometimes neither clicks, because of finite efficiency.

At Bob, we have the same.

Now, we look only at those pulses which are detected both at Alice and Bob: if at Alice something clicks, but not at Bob, we reject it, and also vice versa. This is an item which receives some criticism, but it is due to the finite efficiency of the photomultipliers.

However, what one notices is that if Alice's and Bob's analyzers are parallel, then EACH TIME there is a click at Alice and at Bob, it is the SAME photomultiplier that clicks on both sides. That is, each time that Alice's "up" photomultiplier clicks, it is also Bob's "up" multiplier that clicks, NEVER the "down" one. And each time it is Alice's "down" photomultiplier that clicks, it is also Bob's "down" multiplier that clicks, never his "up" one.

THIS is what one means with "perfect correlation".
Ok. I understand what you're talking about wrt perfect correlation now. This is only applicable when the analyzers are aligned. And, in this case we're correlating 'up' clicks at A with 'up' clicks at B and 'down' clicks at A with 'down' clicks at B.

vanesch said:
This can easily be explained if both photons/lightpulses are always either perfectly aligned with Bob and Alice's analysers, or "anti-aligned". But any pair of photons that is "in between", say at 45 degrees, is hard to explain in a classical way: if each of them has a 50-50% chance to go up or down, at Alice or at Bob, why do they do *the same thing* on both sides?
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get. Quantum mechanics gets around the problem of disturbance-filter angular relationship by not saying anything about specific emission angles. It just says that if the optical disturbances are emitted in opposite directions, then the analyzers will be dealing with essentially the same thing(s). And classical optics tells us that if the light between analyzer A and analyzer B is of a sort, then the results at detector A and detector B will be the same for any given set of paired detection attributes: if there's a detection at A then there will be a detection at B, and if there's no detection at A, then there will be no detection at B.

vanesch said:
Moreover, this happens (with the same source) for all angles, as long as Alice's and Bob's analysers are parallel. So if the source were emitting only photon pairs perfectly aligned or anti-aligned with Alice's and Bob's angles both at, say, 20 degrees (so they are parallel), then it is hard to explain with classical optics why, when both Alice and Bob turn their angles to, say, 45 degrees, they STILL find perfect correlation, no?
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

I think that one can get an intuitive feel for why the quantum mechanical predictions work by viewing them from the perspective of the applicable classical optics laws and experiments. Don't you think so?
 
  • #91
ThomasT said:
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.

Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", with probabilities that depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, on the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right ? You would expect the responses to be statistically independent. Well, the same for two identical pulses sent out to two different (but identical) analysers.

We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

If you take the classical description, then you KNOW what the two pulses are going to do, no ?
 
  • #92
Originally Posted by ThomasT
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.
vanesch said:
Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", with probabilities that depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, on the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right ? You would expect the responses to be statistically independent. Well, the same for two identical pulses sent out to two different (but identical) analysers.
If you're talking about a setup where you have a polarizer between the emitter and the analyzing polarizer, then ok. However, in that setup, the opposite-moving pulses wouldn't be considered identical following their initial polarization in the same sense that they can be considered identical if they remain unaltered until they hit their respective analyzing polarizers. I'm considering the quantum EPR-Bell type setups (eg. Aspect et al experiment using time-varying analyzers, 1984 I think).

For use as an analogy to the EPR-Bell experiments (at least the simplest optical ones) I'm thinking of a polariscopic type setup. It's the only way to be sure that you've got the same optical disturbance incident on (extending between) both polarizers during a certain coincidence interval. I'm trying to understand, among other things, why Heisenberg alludes so frequently to classical optics in various writings on quantum theory. It would seem to give a basis for the so called projection postulate among other things. I mean Heisenberg, Schroedinger, Dirac, Born, Bohr, Pauli, Jordan, etc. didn't just snatch stuff out of thin air. They had reasons for adopting the methods they did, and if something worked then it was retained in the theory. Of course, sometimes their reasons aren't so clear to mortals such as myself. :smile: Heisenberg's explanations are particularly hard for me to understand sometimes. I'm not sure how much of this is due to his style of expression, and the fact that I don't know much German and must rely on translations.

Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

I'll get to the details of my analogy in a future post (if the connection doesn't immediately jump out at you :smile:). It provides a means of understanding that nonlocality (in the spacetime sense of the word) is not necessary to explain the results of EPR-Bell tests.

Originally Posted by ThomasT
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.
vanesch said:
If you take the classical description, then you KNOW what the two pulses are going to do, no ?

Sorry that I didn't state this clearly at first. I'm not looking for a classical description per se. I don't think that's possible. Quantum theory is necessary.

I'm looking more for the classical basis for certain aspects of quantum theory, because, as far as I can tell, the meaning of Bell's theorem is that we can't, in a manner of speaking, count our chickens before they're hatched (sometimes called, most confusingly I think, Bell's realism assumption). Which is one important reason why quantum theoretical methods (eg. superposition of states) are necessary.

The realistic or hidden variable approach is actually, in contrast with quantum theory, the metaphysical speculation approach. Which so far has turned out to be not ok when applied to quantum experimental results.
 
  • #93
JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Can I paraphrase the above as:

Superdeterminism says that, with regard to, say, the 1984 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).

But how is this different from regular old garden variety determinism?

And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval. This (temporal proximity) is how they're chosen to be paired. Even though the settings of the analyzing polarizers are varying perhaps several times during the photon production interval (and while the optical disturbances are en route from emitter to polarizer), there's one and only one setting associated with each photon of the pair (which is determined by temporal proximity to the detection event).
 
Last edited:
  • #94
ThomasT said:
But how is this different from regular old garden variety determinism?
It would be a bizarre form of determinism where nature would have to "predict" the future choices of the experimenters (which presumably would depend on a vast number of factors in the past light cone of their brain state at the moment they make the choice, including things like what they had for lunch) at the moment the photons are created, and select their properties accordingly.
ThomasT said:
And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval.
Wait, are you equating the detection of the photons with their being "produced"? The idea of a local hidden variables theory is to explain the fact that the photons always give the same results when identical detector settings are used by postulating that the photons are assigned identical predetermined answers for each possible detector setting at the moment they are emitted from the source--nature can't wait until they are actually detected to assign them their predetermined answers, because there'd be no way to make sure they get the same answers without FTL being involved (as there is a spacelike separation between the two detection-events). So if you define superdeterminism as:
Superdeterminism says that, with regard to, say, the 1984 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).
...this can only be the correct definition if "photon production" refers to the moment the two photons were created/emitted from a common location, not the moment they were detected. Nature must have assigned them predetermined (and identical) answers for the results they'd give on each detector setting at that moment (there is simply no alternative under local realism--do you understand the logic?), and superdeterminism says that when assigning them their answers, nature acts as if it "knows in advance" what combination of detector settings the two experimenters will later use, so if we look at the subset of trials where the experimenters went on to choose identical settings, the statistical distribution of preassigned answers would be different in this subset than in the subset of the trials where the experimenters went on to choose different settings.
 
  • #95
ThomasT said:
Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

Yes, THIS setup will give you the same results as the EPR setup. But you realize that here we are doing the measurements on the SAME light pulse, while in the EPR setup there are two SEPARATE pulses, right? And in the second setup there's no surprise that polarizer A will have an influence on the light pulse that will be incident on polarizer B, given that it passed through A.

However, in the EPR setup, we are talking about 2 different light pulses, and the light pulse that went to B has never seen setup A.

edit:
So this is a bit as if, when someone demonstrated the use of a faster-than-light telephone, with which he talks to someone on Alpha Centauri and immediately gets an answer, you would say there is nothing surprising about it, because you can think of a similar setup where you have a telephone to the room next door, and it functions in the same way :smile:
 
Last edited:
  • #96
Originally Posted by ThomasT
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.

Originally Posted by JesseM
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Originally Posted by vanesch
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.


---------------------
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices-settings and binary results), and is the situation that the quantum mechanical treatment assumes in dealing with (at least certain sorts of) entangled pairs.

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup.

Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.

The essential lesson I take from experimental violations of Bell inequalities is that physics is a long way from understanding the deep nature of light -- but the general physical bases of quantum entanglement can be understood (in a qualitative, not just quantitative, sense) now.

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
 
Last edited:
  • #97
ThomasT said:
The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices-settings and binary results)
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

Did you ever look over the example involving scratch lotto cards I gave in post #68 on this thread? If so, do you see that the whole point of the example was to try to explain the perfect correlations in terms of a common cause--a source manufacturing cards so that the fruit behind each possible square on two matched cards would always be opposite? Do you see that this assumption naturally leads to the conclusion that when the experimenters pick different boxes to uncover, they will get opposite results at least 1/3 of the time? I gave a slight variation on this proof with an example involving polarized light in post #22 of this thread if you'd find that helpful. It would also be helpful to me if you answered the question I asked at the end of the post with the scratch lotto card example:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
ThomasT said:
Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.
Of course there is. Malus' law doesn't talk about the probability of a yes/no answer, it talks about the reduction in intensity of light getting through a polarizer--to turn it into a yes/no question you'd need something like a light that would go on only if the light coming through the polarizer was above a certain threshold of intensity (like in the example from post #22 of the other thread I mentioned above), or which had a probability of turning on based on the intensity that made it through the polarizer (which could ensure that if the wave is polarized at angle theta and the polarizer is set to angle phi, then the probability the light would go on would be cos^2[theta - phi]). But even if you did this, there'd be no possible choice of the waves' angle theta such that, if two experimenters at different locations set their two polarizer angles to phi and xi and measured two waves with identical polarization angle theta, the probability of both getting the same yes/no answer would be equal to cos^2[phi - xi]; Bell's theorem proves that it's impossible to reproduce this quantum relationship (which is not Malus' law) under local realism. If you don't see why, I'd ask again that you review the lotto card analogy and tell me if you agree that the probabilistic claim about getting correlated results at least 1/3 of the time when the two people pick different boxes to scratch should be guaranteed to hold under local realism; if you agree in that example but don't see how it extends to the case of polarized waves, I can elaborate on that point if you wish.
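(A small numeric check of this claim; the uniformly distributed shared polarization angle and the cos^2 response rule are assumptions made purely for illustration, not a description of any actual experiment.)

```python
# Hypothetical local model: both pulses share one polarization angle theta, and
# each detector independently answers "yes" with probability cos^2(theta - setting).
# Compare P(both answers agree) with the quantum prediction cos^2(phi - xi).
import math

def p_agree_local(phi, xi, steps=10_000):
    total = 0.0
    for k in range(steps):
        theta = math.pi * k / steps              # average over the shared angle
        a = math.cos(theta - phi) ** 2           # P(yes) at one detector
        b = math.cos(theta - xi) ** 2            # P(yes) at the other
        total += a * b + (1 - a) * (1 - b)       # P(both yes) + P(both no)
    return total / steps

for delta_deg in (0, 22.5, 45, 67.5, 90):
    delta = math.radians(delta_deg)
    print(f"{delta_deg:5.1f} deg   local model: {p_agree_local(0.0, delta):.3f}"
          f"   QM: {math.cos(delta) ** 2:.3f}")
# Even at 0 degrees the local model gives only 0.75, not the perfect
# correlation (1.0) that the quantum prediction requires.
```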
ThomasT said:
To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
It contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.
 
  • #98
ThomasT said:
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, …….

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup. ... Viewed from this perspective, there's no mystery at all ……

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
No, the “common emission properties” defined in Local and Realistic (LR) terms are exactly what EPR-Bell tests are intended to search for. And observations so far, as applied to the problem, have yet to reveal any LR means of explaining the violations of Bell inequalities.

I suspect you have been plowing through ideas like “Superdeterminism” and “Local Vs. Non-local” without really understanding the EPR-Bell issues. Example: I don’t think anyone knows what you mean by “polariscopic” where you say;
a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.

Strongly recommend you review the Bell notes like those at http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm .
Focus on figure 3 there, and on explaining the inequality line, especially how the LR approach has yet to resolve the measurements at 22.5 and 67.5 degrees, before claiming you know what “Bell's theorem doesn't contradict”.
Don’t bother with the “easy math A, B, C approach”; stick with the stuff based on real experiments.
 
Last edited by a moderator:
  • #99
JesseM said:
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

It [Bell's theorem] contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.

Right, under the assumptions of (1) statistical independence and (2) the validity of extant attempts at a description of the reality underlying the instrumental preparations, then "it could not possibly explain the correlations seen by QM".

Those assumptions are wrong, that's all. Isn't that what we've been talking about? For a Bell inequality to be violated experimentally, one or more of the assumptions involved in its formulation must be incorrect.

I don't think that analyzing this stuff with analogies like washing socks, or lotto cards, etc., though I appreciate your efforts, will provide any insight into what's happening in optical Bell experiments. The light doesn't care about probabilities of yes/no answers. The light doesn't care how Bertlmann washed his socks. The light in optical Bell experiments is behaving much as it behaves in an ordinary polariscopic setup. The question is, why.

It isn't known what's happening at the level of the light-polarizer interaction. QM assumes only that polarizers A and B are analyzing the same optical disturbance for a given coincidence interval. Along with this goes the assumption of common emitter for any given pair. (In the Aspect et al 1984 experiment using time-varying analyzers the emitters were calcium atoms. Much care was taken in the experimental preparation to ensure that paired photons corresponded to optical disturbances emitted from the same atom.)
 
  • #100
RandallB said:
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.
The first polarizer is adjusted to transmit the maximum intensity. Varying the angle between polarizer A and polarizer B, and then measuring the intensity of the light after transmission (or not) by polarizer B, results in a cos^2 angular dependence. This is how Malus' Law was discovered a couple of hundred years ago, and it is, in my view, strikingly similar to what's happening in simple A-B optical Bell tests.
 