Questions about Bell: Answering Philosophical Difficulties

  • Thread starter: krimianl99
  • Tags: Bell
Summary
Bell's theorem challenges the assumptions of locality, superdeterminism, and objective reality in light of quantum mechanics, revealing contradictions with experimental results. The discussion emphasizes that proof by negation is problematic, as it relies on identifying all non-trivial assumptions, which is often impossible. Non-locality poses significant challenges for relativity, as any exception could undermine its foundational principles. The conversation also highlights the complexities of superdeterminism, suggesting it complicates statistical reasoning in scientific inquiry. Ultimately, the implications of Bell's findings raise profound questions about the nature of reality and the limits of scientific reasoning.
  • #91
ThomasT said:
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.

Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", but the probabilities depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, at the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right? You would expect the responses to be statistically independent. Well, the same goes for two identical pulses sent out to two different (but identical) analysers.
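This point can be checked numerically. A minimal sketch, where the rule that each analyser clicks "up" independently with probability cos^2(theta), Malus-style, is an assumption made purely for illustration:

```python
import math
import random

def classical_match_rate(theta_deg, trials=100_000):
    """Fraction of trials where two independent, identically polarized
    pulses give the SAME up/down result at two identical analysers,
    each clicking 'up' with probability cos^2(theta) per pulse."""
    p_up = math.cos(math.radians(theta_deg)) ** 2
    same = 0
    for _ in range(trials):
        a = random.random() < p_up  # analyser 1: independent draw
        b = random.random() < p_up  # analyser 2: independent draw
        same += (a == b)
    return same / trials

# For theta = 30 deg (p_up = 0.75): expect p^2 + (1-p)^2 = 0.625,
# far from the perfect matching (1.0) seen with entangled pairs.
print(classical_match_rate(30))
```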

We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.

If you take the classical description, then you KNOW what the two pulses are going to do, no?
 
  • #92
Originally Posted by ThomasT
Classically, if the analyzers are aligned and they're analyzing the same optical disturbance, then you would expect just the results that you get.
vanesch said:
Uh, no, not at all! That's the whole point. If you send identical but independent light pulses to an analyser, it will *randomly* click "up" or "down", but the probabilities depend on the precise orientation between the analyser and the polarisation of the pulses.

In other words, imagine that two identical pulses arrive, one after the other, at the same analyser. You wouldn't expect this analyser to click twice "up" or twice "down" in a row, right? You would expect the responses to be statistically independent. Well, the same goes for two identical pulses sent out to two different (but identical) analysers.
If you're talking about a setup where you have a polarizer between the emitter and the analyzing polarizer, then ok. However, in that setup, the opposite-moving pulses wouldn't be considered identical following their initial polarization in the same sense that they can be considered identical if they remain unaltered until they hit their respective analyzing polarizers. I'm considering the quantum EPR-Bell type setups (eg. the Aspect et al experiment using time-varying analyzers, 1982).

For use as an analogy to the EPR-Bell experiments (at least the simplest optical ones) I'm thinking of a polariscopic type setup. It's the only way to be sure that you've got the same optical disturbance incident on (extending between) both polarizers during a certain coincidence interval. I'm trying to understand, among other things, why Heisenberg alludes so frequently to classical optics in various writings on quantum theory. It would seem to give a basis for the so-called projection postulate, among other things. I mean Heisenberg, Schroedinger, Dirac, Born, Bohr, Pauli, Jordan, etc. didn't just snatch stuff out of thin air. They had reasons for adopting the methods they did, and if something worked then it was retained in the theory. Of course, sometimes their reasons aren't so clear to mortals such as myself. :smile: Heisenberg's explanations are particularly hard for me to understand sometimes. I'm not sure how much of this is due to his style of expression and how much to the fact that I don't know much German and must rely on translations.

Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

I'll get to the details of my analogy in a future post (if the connection doesn't immediately jump out at you :smile:). It provides a means of understanding that nonlocality (in the spacetime sense of the word) is not necessary to explain the results of EPR-Bell tests.

Originally Posted by ThomasT
We don't know what the source is emitting. From the experimental results, there's not much that can be said about it. But the assumption is made that the analyzers are analyzing the same thing at both ends during any given coincidence interval.
vanesch said:
If you take the classical description, then you KNOW what the two pulses are going to do, no ?

Sorry that I didn't state this clearly at first. I'm not looking for a classical description per se. I don't think that's possible. Quantum theory is necessary.

I'm looking more for the classical basis for certain aspects of quantum theory, because, as far as I can tell, the meaning of Bell's theorem is that we can't, in a manner of speaking, count our chickens before they're hatched (sometimes called, most confusingly I think, Bell's realism assumption). Which is one important reason why quantum theoretical methods (eg. superposition of states) are necessary.

The realistic or hidden variable approach is actually, in contrast with quantum theory, the metaphysical speculation approach. Which so far has turned out to be not ok when applied to quantum experimental results.
 
  • #93
JesseM said:
To summarize what I was saying in that paragraph, how about defining superdeterminism as something like "a lack of statistical independence between variables associated with the particle prior to measurement and the experimenters' choice of what detector setting to use when making the measurement"?
Can I paraphrase the above as:

Superdeterminism says that, with regard to, say, the 1982 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).

But how is this different from regular old garden variety determinism?

And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval. This (temporal proximity) is how they're chosen to be paired. Even though the settings of the analyzing polarizers are varying perhaps several times during the photon production interval (and while the optical disturbances are en route from emitter to polarizer), there's one and only one setting associated with each photon of the pair (which is determined by temporal proximity to the detection event).
 
  • #94
ThomasT said:
But how is this different from regular old garden variety determinism?
It would be a bizarre form of determinism where nature would have to "predict" the future choices of the experimenters (which presumably would depend on a vast number of factors in the past light cone of their brain state at the moment they make the choice, including things like what they had for lunch) at the moment the photons are created, and select their properties accordingly.
ThomasT said:
And, of course, this is true. Isn't it? A photon pair (a coincidental detection) is produced during a certain time interval.
Wait, are you equating the detection of the photons with their being "produced"? The idea of a local hidden variables theory is to explain the fact that the photons always give the same results when identical detector settings are used by postulating that the photons are assigned identical predetermined answers for each possible detector setting at the moment they are emitted from the source--nature can't wait until they are actually detected to assign them their predetermined answers, because there'd be no way to make sure they get the same answers without FTL being involved (as there is a spacelike separation between the two detection-events). So if you define superdeterminism as:
Superdeterminism says that, with regard to, say, the 1982 Aspect et al experiment that used time-varying analyzers, variables associated with photon production and variables associated with detector setting selection are statistically dependent (ie., strongly correlated).
...this can only be the correct definition if "photon production" refers to the moment the two photons were created/emitted from a common location, not the moment they were detected. Nature must have assigned them predetermined (and identical) answers for the results they'd give on each detector setting at that moment (there is simply no alternative under local realism--do you understand the logic?), and superdeterminism says that when assigning them their answers, nature acts as if it "knows in advance" what combination of detector settings the two experimenters will later use, so if we look at the subset of trials where the experimenters went on to choose identical settings, the statistical distribution of preassigned answers would be different in this subset than in the subset of the trials where the experimenters went on to choose different settings.
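A toy numerical illustration of this statistical dependence may help. Everything below is invented purely for illustration; the point is only that under superdeterminism the distribution of preassigned answers differs between the subsets defined by the experimenters' future choices:

```python
import random
from collections import Counter

ANGLES = [0, 60, 120]

def superdeterministic_source(setting_a, setting_b):
    """Toy source that already 'knows' the settings the experimenters
    WILL choose, and biases the pair's (identical) preassigned answers
    whenever those settings will match. The bias rule is invented."""
    if setting_a == setting_b:
        return {ang: True for ang in ANGLES}              # rigged subset
    return {ang: random.random() < 0.5 for ang in ANGLES}

same_subset, diff_subset = Counter(), Counter()
for _ in range(10_000):
    a, b = random.choice(ANGLES), random.choice(ANGLES)
    answers = superdeterministic_source(a, b)
    (same_subset if a == b else diff_subset)[answers[0]] += 1

# The preassigned answer for angle 0 is always True when the future
# settings match, but roughly 50/50 when they differ: the two subsets
# have different statistics, which no ordinary (non-super)
# deterministic source could arrange.
print(same_subset, diff_subset)
```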
 
  • #95
ThomasT said:
Anyway, I understand that one can't quantitatively reproduce the results of EPR-Bell tests using strictly classical methods. Quantitative predictability isn't the sort of understanding that I'm aiming at here. Quantum theory already gives me that.

Returning to my analogy (banal though it might be), a simple EPR-Bell optical setup might look like this:

detector A <--- polarizer A <--- emitter ------> polarizer B ---> detector B

And a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector

Yes, THIS setup will give you identical results to the EPR setup. But you realize that here we are doing the measurements on the SAME light pulse, while in the EPR setup there are two SEPARATE pulses, right? And in the second setup there's no surprise that polarizer A will have an influence on the light pulse that will be incident on polarizer B, given that it passed through A.

However, in the EPR setup, we are talking about 2 different light pulses, and the light pulse that went to B has never seen setup A.

edit:
So this is a bit as if, when someone demonstrated the use of a faster-than-light telephone, talking to someone on Alpha Centauri and getting an immediate answer, you would say that there is nothing surprising about it, because you can think of a similar setup, where you have a telephone to the room next door, and it functions in the same way :smile:
 
  • #96
Originally Posted by ThomasT
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.

Originally Posted by JesseM
But that's the whole idea that Bell's theorem intends to refute. Bell starts by imagining that the perfect correlation when both experimenters use the same detector setting is due to a common cause--each particle is created with a predetermined answer to what result it will give on any possible angle, and they are always created in such a way that they are predetermined to give opposite answers on each possible angle. But if you do make this assumption, it leads you to certain conclusions about what statistics you'll get when the experimenters choose different detector settings, and these conclusions are violated in QM.

Originally Posted by vanesch
But that's exactly what Bell's theorem analyses! Is it possible that the perfect correlations on one hand (the C(A,A) = C(B,B) = C(C,C) = 1) and the observed "crossed correlations" (C(A,B), C(B,C) and C(A,C) ) can be the result of a common origin ?

It turns out that the answer is no, if the C are those predicted by quantum mechanics for an entangled pair of particles, and we pick the right angles of the analysers.


---------------------
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices of settings and binary results), and is the situation that the quantum mechanical treatment assumes in dealing with (at least certain sorts of) entangled pairs.

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup.

Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.

The essential lesson I take from experimental violations of Bell inequalities is that physics is a long way from understanding the deep nature of light -- but the general physical bases of quantum entanglement can be understood (in a qualitative, not just quantitative, sense) now.

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
 
  • #97
ThomasT said:
The first statement in this post isn't refuted by any formal treatment (Bell or other, involving choices of settings and binary results)
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

Did you ever look over the example involving scratch lotto cards I gave in post #68 on this thread? If so, do you see that the whole point of the example was to try to explain the perfect correlations in terms of a common cause--a source manufacturing cards so that the fruit behind each possible square on two matched cards would always be opposite? Do you see that this assumption naturally leads to the conclusion that when the experimenters pick different boxes to uncover, they will get opposite results at least 1/3 of the time? I gave a slight variation on this proof with an example involving polarized light in post #22 of this thread if you'd find that helpful. It would also be helpful to me if you answered the question I asked at the end of the post with the scratch lotto card example:
Imagine that you are the source manufacturing the cards to give to Alice and Bob (the common cause). Do you agree that if the cards cannot communicate and choose what fruit to show based on what box was scratched on the other card (no nonlocality), and if you have no way to anticipate in advance which of the three boxes Alice and Bob will each choose to scratch (no superdeterminism), then the only way for you to guarantee that they will always get opposite results when they scratch the same box is to predetermine the fruit that will appear behind each box A, B, C if it is scratched, making sure the predetermined answers are opposite for the two cards (so if Alice's card has predetermined answers A+,B+,C-, then Bob's card must have predetermined answers A-,B-,C+)? And if you agree with this much, do you agree or disagree with the conclusion that if you predetermine the answers in this way, this will necessarily mean that when they pick different boxes to scratch they must get opposite fruits at least 1/3 of the time?
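This 1/3 bound can be verified by brute force. A minimal sketch, enumerating all eight ways the source could predetermine Alice's card (Bob's card being forced opposite on every box), and assuming the box choices are made independently of the cards:

```python
from itertools import product

worst = 1.0
for alice in product([True, False], repeat=3):   # answers for boxes A, B, C
    bob = tuple(not x for x in alice)            # opposite on every box
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    frac_opposite = sum(alice[i] != bob[j] for i, j in pairs) / len(pairs)
    worst = min(worst, frac_opposite)

# Every predetermination gives opposite fruits on different boxes at
# least 1/3 of the time, so any statistical mixture of them does too.
print(f"minimum P(opposite | different boxes) = {worst:.4f}")  # 0.3333
```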
ThomasT said:
Viewed from this perspective, there's no mystery at all concerning why the functional relationship between angular difference and coincidence rate is the same as Malus' Law.
Of course there is. Malus' law doesn't talk about the probability of a yes/no answer, it talks about the reduction in intensity of light getting through a polarizer--to turn it into a yes/no question you'd need something like a light that would go on only if the light coming through the polarizer was above a certain threshold of intensity (like in the example from post #22 of the other thread I mentioned above), or which had a probability of turning on based on the intensity that made it through the polarizer (which could ensure that if the wave is polarized at angle theta and the polarizer is set to angle phi, then the probability the light would go on would be cos^2[theta - phi]). But even if you did this, there'd be no possible choice of the waves' angle theta such that, if two experimenters at different locations set their two polarizer angles to phi and xi and measured two waves with identical polarization angle theta, the probability of both getting the same yes/no answer would be equal to cos^2[phi - xi]; Bell's theorem proves that it's impossible to reproduce this quantum relationship (which is not Malus' law) under local realism. If you don't see why, I'd ask again that you review the lotto card analogy and tell me if you agree that the probabilistic claim about getting correlated results at least 1/3 of the time when the two people pick different boxes to scratch should be guaranteed to hold under local realism; if you agree in that example but don't see how it extends to the case of polarized waves, I can elaborate on that point if you wish.
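This can be made concrete with a small simulation. A minimal sketch, assuming the shared polarization theta is drawn uniformly and each lamp fires independently with probability cos^2(theta - setting); a fixed theta fares even worse, since it cannot even give perfect agreement at equal settings:

```python
import numpy as np

def local_same_prob(delta_deg, samples=200_000):
    """P(both lamps give the same yes/no answer) for detectors at
    angles 0 and delta, each firing with prob cos^2(theta - setting),
    with the shared polarization theta uniform over [0, pi)."""
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, np.pi, samples)
    p1 = np.cos(theta) ** 2
    p2 = np.cos(theta - np.radians(delta_deg)) ** 2
    return np.mean(p1 * p2 + (1 - p1) * (1 - p2))

for delta in [0, 22.5, 45, 67.5, 90]:
    qm = np.cos(np.radians(delta)) ** 2
    print(f"delta={delta:5}: local={local_same_prob(delta):.3f}  QM={qm:.3f}")
# The Malus-style local model gives 1/2 + cos(2*delta)/4, which agrees
# with the quantum cos^2(delta) only at delta = 45; in particular it
# cannot reproduce the perfect agreement at delta = 0.
```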
ThomasT said:
To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
It contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.
 
  • #98
ThomasT said:
ThomasT said:
I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing.
Ok, let's forget about super-duper-quasi-hyper-whatever determinism for the moment. The first statement in this post isn't refuted by any formal treatment (Bell or other, …….

I've tried to show how this can be visualized by using the analogy of a polariscopic setup to the simplest optical Bell-EPR setup. ... Viewed from this perspective, there's no mystery at all ……

To reiterate, Bell's theorem doesn't contradict the idea of common emission cause and common emission properties. It does contradict the assignment of specific values, eg., polarization angles, etc., to emitted pairs.
No. The “common emission properties”, defined in Local and Realistic (LR) terms, are exactly what EPR-Bell experiments are intended to search for. And the observations so far, as applied to the problem, have yet to reveal any LR means of explaining Bell inequality violations.

I suspect you have been plowing through ideas like “Superdeterminism” and “Local vs. Non-local” without really understanding the EPR-Bell issues. Example: I don’t think anyone knows what you mean by “polariscopic” where you say:
a polariscopic setup might look like this:

source ------> polarizer A ---------------------> polarizer B ---> detector
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.

Strongly recommend you review the Bell notes like those at http://www.drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm .
Focus on figure 3 there, and on explaining the inequality line, especially how the LR approach has yet to resolve the measurements at 22.5 and 67.5 degrees, before claiming you know what “Bell's theorem doesn't contradict”.
Don’t bother with the “easy math A, B, C approach”; stick with the material based on real experiments.
 
  • #99
JesseM said:
If you're talking about the statement "I think one can understand (sort of) the observed correlation function, and that there is no need for a nonlocal explanation, simply by assuming a common (emission) cause and that the polarizers are analyzing essentially the same thing", that absolutely is refuted by Bell--do you still not understand that Bell started by assuming the very sort of "common cause" you're talking about, and proved that under local realism, it could not possibly explain the correlations seen by QM?

It [Bell's theorem] contradicts the idea that any sort of "common cause" explanation which is consistent with local realism can match the results predicted by QM.

Right: under the assumptions of (1) statistical independence and (2) the validity of extant attempts at a description of the reality underlying the instrumental preparations, "it could not possibly explain the correlations seen by QM".

Those assumptions are wrong, that's all. Isn't that what we've been talking about? For a Bell inequality to be violated experimentally, one or more of the assumptions involved in its formulation must be incorrect.

I don't think that analyzing this stuff with analogies like washing socks, or lotto cards, etc., though I appreciate your efforts, will provide any insight into what's happening in optical Bell experiments. The light doesn't care about probabilities of yes/no answers. The light doesn't care how Bertlmann washed his socks. The light in optical Bell experiments is behaving much as it behaves in an ordinary polariscopic setup. The question is, why.

It isn't known what's happening at the level of the light-polarizer interaction. QM assumes only that polarizers A and B are analyzing the same optical disturbance for a given coincidence interval. Along with this goes the assumption of common emitter for any given pair. (In the Aspect et al 1982 experiment using time-varying analyzers the emitters were calcium atoms. Much care was taken in the experimental preparation to ensure that paired photons corresponded to optical disturbances emitted from the same atom.)
 
  • #100
RandallB said:
Someone should have told you that you cannot send a photon through a second polarizer as the first one completely randomizes the polarization to a new alignment. No useful information can be gained from using a second 'polarizer B'.
The first polarizer is adjusted to transmit the maximum intensity. Varying the angle between polarizer A and polarizer B, and then measuring the intensity of the light after transmission (or maybe not) by polarizer B, results in a cos^2 angular dependence. This is how Malus' Law was discovered about two hundred years ago, and it is, in my view, strikingly similar to what's happening in simple A-B optical Bell tests.
 
  • #101
ThomasT said:
Right: under the assumptions of (1) statistical independence and (2) the validity of extant attempts at a description of the reality underlying the instrumental preparations, "it could not possibly explain the correlations seen by QM".
(1) Only if by "statistical independence" you are referring to the idea that at the moment the particles are created, whatever properties they are assigned (the 'common cause' which makes sure they both later give the same answers when measured on the same angle) are statistically independent of the future choices of the experimenters about what angle to set their polarizers--the source does not have a "crystal ball" to see into the future behavior of the experimenters as in superdeterminism. No other assumptions of statistical independence are being made here.

(2) Can you be more specific about what you mean by "extant attempts at a description of the reality underlying the instrumental preparations"? The only reality assumed by Bell was local realism, and the fact that each particle must, when created, have been given predetermined answers to what response they'd give to each possible detector angle, with the predetermined answers being the same for each particle (the common cause). The second follows from the first, as there is no other way to explain how particles could always give the same response to the same detector angle besides predetermined answers, if you rule out FTL conspiracies between the particles.
ThomasT said:
I don't think that analyzing this stuff with analogies like washing socks, or lotto cards, etc., though I appreciate your efforts, will provide any insight into what's happening in optical Bell experiments.
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
ThomasT said:
The light in optical Bell experiments is behaving much as it behaves in an ordinary polariscopic setup.
No it isn't. In an ordinary polariscopic setup it is impossible to set things up so that each of two experimenters always gets yes/no answers on each trial, and are picking between three possible detector settings, and when they pick the same detector setting they always get the same answer, but when they pick different detector settings they get the same answer less than 1/3 of the time. And yet this is what QM predicts can happen for entangled photons if the three polarizer angles are 0, 60, and 120.
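For reference, the quantum same-answer probabilities for these three settings can be tabulated directly (a minimal sketch; P(same) = cos^2 of the setting difference is the standard QM prediction for this photon arrangement):

```python
import math

angles = [0, 60, 120]
for i, a in enumerate(angles):
    for b in angles[i:]:
        p_same = math.cos(math.radians(a - b)) ** 2
        print(f"settings {a:3}/{b:3}: P(same answer) = {p_same:.2f}")
# Equal settings give 1.00; every unequal pair gives 0.25, which is
# below the 1/3 floor that any predetermined-answer model must respect.
```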
ThomasT said:
It isn't known what's happening at the level of the light-polarizer interaction. QM assumes only that polarizers A and B are analyzing the same optical disturbance for a given coincidence interval.
What are you talking about? Bell didn't make any assumptions about "optical disturbances", he just pointed out that under local realism, if experimenters always get the same answer when they choose the same detector setting and both make their measurements at the same time, that must be because the "things" (you don't have to assume they're 'optical disturbances' or anything so specific) that are measured by each detector must have been assigned identical predetermined answers to what result they'd give for each detector setting at some point when they were in the same local region of space. Do you think it is possible for local realism to be true yet this second assumption to be false? If so, then you haven't thought things through carefully enough, but I can explain why this assumption follows necessarily from local realism if you wish.
 
  • #102
ThomasT said:
This is how Malus' Law was discovered about two hundred years ago, and it is, in my view, strikingly similar to what's happening in simple A-B optical Bell tests.
Well, yeah, duh.
But saying it is “strikingly similar … to Bell” only tells me you do not understand what Bell was looking for, let alone the point the observations apparently make.
Take some time to read Bell and understand what the straight Bell inequality line from 100% to 0% is, and how QM predictions and actual observations produce a “violation” by going above and below that line with the curve of a sine wave across the same range.

And finally, how local theories can only predict the same sine wave shape but limited to a range of 75% to 25%, which keeps it inside (on the 50% side of) the Bell inequality line. The point is that a true local theory has yet to explain how to match the observations as non-local ones can.
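The two angles singled out above can be checked directly against the QM prediction (a minimal sketch; the 75%/25% bounds are the local-realistic limits quoted above from DrChinese's figure):

```python
import math

for deg in (22.5, 67.5):
    qm = math.cos(math.radians(deg)) ** 2
    print(f"{deg} deg: QM coincidence rate = {qm:.4f}")
# 22.5 deg -> 0.8536, above the 0.75 local-realistic ceiling;
# 67.5 deg -> 0.1464, below the 0.25 local-realistic floor.
```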

You need to bring yourself up to speed on the real experiments and these issues or you will never be able to keep up with the discussions. You have a lot more question-asking and understanding to do before claiming you know what “Bell's theorem doesn't contradict”.
Otherwise you will only generate pointless arguments here.
 
  • #103
JesseM said:
(1) Only if by "statistical independence" you are referring to the idea that at the moment the particles are created, whatever properties they are assigned (the 'common cause' which makes sure they both later give the same answers when measured on the same angle) are statistically independent of the future choices of the experimenters about what angle to set their polarizers--the source does not have a "crystal ball" to see into the future behavior of the experimenters as in superdeterminism. No other assumptions of statistical independence are being made here.

The assumption of statistical independence in Bell's formulation has to do with setting the probability of coincidental detection equal to the product of the probability of detection at A and the probability of detection at B: P(A,B) = P(A) P(B).

Factorability of the joint probability has been taken to represent locality. But, it doesn't represent locality. It represents statistical independence between events (probability of detection) at A and events (probability of detection) at B during any given coincidence interval.

That there is no such statistical independence is evident, and is dictated by the experimental design(s) necessary to test Bell's theorem (ie., necessary to produce entangled pairs).

So, as far as I can tell, Bell didn't make a locality assumption per se.


JesseM said:
(2) Can you be more specific about what you mean by "extant attempts at a description of the reality underlying the instrumental preparations"? The only reality assumed by Bell was local realism, and the fact that each particle must, when created, have been given predetermined answers to what response they'd give to each possible detector angle, with the predetermined answers being the same for each particle (the common cause). The second follows from the first, as there is no other way to explain how particles could always give the same response to the same detector angle besides predetermined answers, if you rule out FTL conspiracies between the particles.

We know that attempts, to date, to describe the light-polarizer interactions in realistic mathematical expressions have not matched the QM predictions for all settings.

It's interesting that if we just don't say anything realistic about these interactions (except that the polarizers are interacting with the same disturbance during a given coincidence interval), then, via quantum theory, the probability of joint detection can be fairly accurately calculated.

Apparently whatever is being emitted and is incident on the polarizers is not behaving according to classical polarization theories.


JesseM said:
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
It just seems to me like an unproductive way to think about this stuff. After all, people have been mulling over Bell's theorem for half a century with no agreement as to its meaning. Why not try a different perspective?

Seeing the connection between simple A-B optical Bell tests and the classic polariscope might prove to be quite, er, fruitful. :smile:

JesseM said:
No it isn't. In an ordinary polariscopic setup it is impossible to set things up so that each of two experimenters always gets yes/no answers on each trial, and are picking between three possible detector settings, and when they pick the same detector setting they always get the same answer, but when they pick different detector settings they get the same answer less than 1/3 of the time. And yet this is what QM predicts can happen for entangled photons if the three polarizer angles are 0, 60, and 120.

The similarity between the two setups that I see involves the same light extending between polarizers A and B (forget about the emitter in the optical Bell tests), and the same detection rate angular dependence.

JesseM said:
What are you talking about? Bell didn't make any assumptions about "optical disturbances", he just pointed out that under local realism, if experimenters always get the same answer when they choose the same detector setting and both make their measurements at the same time, that must be because the "things" (you don't have to assume they're 'optical disturbances' or anything so specific) that are measured by each detector must have been assigned identical predetermined answers to what result they'd give for each detector setting at some point when they were in the same local region of space. Do you think it is possible for local realism to be true yet this second assumption to be false? If so, then you haven't thought things through carefully enough, but I can explain why this assumption follows necessarily from local realism if you wish.

The statistical independence representation and the assignment of specific emission property values together constitute what's usually called, misleadingly I think, the assumption of local realism.

The obviously wrong statistical independence assumption has nothing to do with locality. We're left with the possibility of realistic representations being contradicted.

If experimenters always get the same answer when they choose the same polarizer setting during a coincidence interval, that might be because the polarizers are analyzing the same thing (which was produced during the same emission interval). This is what quantum theory assumes.
 
  • #104
RandallB said:
Certainly before claiming you know what “Bell's theorem doesn't contradict” you have a lot more asking questions and understanding to attain before doing that.
One of my contentions is that Bell's theorem doesn't actually make a locality assumption. If you think it does, then point out where you think it is in his formulation.

If Bell's locality condition isn't, in reality, a locality condition, then Bell's theorem doesn't contradict locality.
 
  • #105
ThomasT said:
The assumption of statistical independence in Bell's formulation has to do with setting the probability of coincidental detection equal to the product of the probability of detection at A and the probability of detection at B: P(A,B) = P(A) P(B).
You'd have to be more specific about what "A" and "B" are supposed to represent here. For example, if A="experimenter 1 measures at angle 120, gets result spin-up", and B="experimenter 2 measures at angle 120, gets result spin-down" then it is certainly not true that Bell assumed that P(A,B) = P(A)*P(B)...if each experimenter has a 1/3 chance of choosing angle 120, then P(A) = P(B) = 1/6 (because on any given angle, there is a 1/2 chance of getting spin-up and a 1/2 chance of getting spin-down), but P(A,B) is not 1/6*1/6 = 1/36, but rather 1/18 (because there's a 1/3*1/3 = 1/9 chance that both experimenters choose angle 120, but if they both do it's guaranteed they'll get opposite spins, so there's a 1/2 chance experimenter 1 will get spin-up and experimenter 2 will get spin-down, and a 1/2 chance experimenter 1 will get spin-down and experimenter 2 will get spin-up).
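The arithmetic is easy to confirm with exact fractions (a minimal sketch):

```python
from fractions import Fraction

third, half = Fraction(1, 3), Fraction(1, 2)

p_A = third * half            # experimenter 1 picks 120 AND gets spin-up
p_B = third * half            # experimenter 2 picks 120 AND gets spin-down
# Both pick 120 with probability 1/9; given that, the results are
# perfectly anticorrelated, so (up, down) has conditional probability 1/2.
p_AB = third * third * half

print(p_A, p_B, p_A * p_B, p_AB)  # 1/6 1/6 1/36 1/18
```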
ThomasT said:
We know that attempts, to date, to describe the light-polarizer interactions in realistic mathematical expressions have not matched the QM predictions for all settings.
Really? In what experiments have QM predictions not matched the results?
ThomasT said:
It's interesting that if we just don't say anything realistic about these interactions (except that the polarizers are interacting with the same disturbance during a given coincidence interval), then, via quantum theory, the probability of joint detection can be fairly accurately calculated.
I really have no idea what you're talking about here. Can you actually give the specifics of this calculation that you say gives a "fairly accurate" match for the probability of joint detection?
JesseM said:
This seems like a knee-jerk reaction you haven't put any thought into. After all, if the light has predetermined answers to what response it will give to each of the three possible polarizer settings (as must be true under local realism--do you disagree?), then this is very much like the case where each card has a predetermined hidden fruit under each square. Where, specifically, do you think the analogy breaks down?
ThomasT said:
It just seems to me like an unproductive way to think about this stuff. After all, people have been mulling over Bell's theorem for half a century with no agreement as to its meaning.
Where did you get that idea? As far as I know, all physicists agree on the meaning: that Bell's theorem shows the predictions of QM for entangled particles are inconsistent with local realism. So what lack of agreement are you referring to?

Also, just waving away my argument with the word "unproductive" again suggests a knee-jerk reaction that you haven't put any thought into. It would be like if I presented a proof that there can be no largest prime number, and instead of addressing flaws in the proof, you just said "it seems like an unproductive way to think about prime numbers...people have been mulling over the prime numbers for years with no agreement as to their meaning." Sorry, but proofs are proofs, you have to address the specifics if you want to dispute their conclusions. And everything about my example involving lotto cards maps directly to the real experimental setup involving particles, if you don't see how this works I can explain, though it should be pretty obvious if you give it any thought (to get you started, the two cards map to the two particles being measured, the three possible boxes either person can scratch map to the three possible detector angles each experimenter can choose from, and the hidden fruits behind each box map to the notion that each particle has been assigned a predetermined answer to the result it will give when measured on any of the three possible angles).
ThomasT said:
Seeing the connection between simple A-B optical Bell tests and the classic polariscope might prove to be quite, er, fruitful. :smile:
What "connection" is that? All your statements are so hopelessly vague. Please give specifics, like numerical predictions that you think are the same.
ThomasT said:
The similarity between the two setups that I see involves the same light extending between polarizers A and B (forget about the emitter in the optical Bell tests), and the same detection rate angular dependence.
What detection rate angular dependence? You never give any specifics. There is nothing in the classical optics setup that says that if one experimenter has his polarizer set to angle phi and the other has his polarizer set to angle xi, then the probability of them getting the "same result" in some sense will be cos^2(phi - xi); that is a purely quantum rule for the detection rate angular dependence, which has nothing to do with Malus' law (which concerns the angle between the polarization of the wave and the detector, not the angle between two detectors).
ThomasT said:
The statistical independence representation and the assignment of specific emission property values together constitute what's usually called, misleadingly I think, the assumption of local realism.
As I said, it seems to me that the kind of "statistical independence" you referred to above is not assumed by Bell or any other physicist--I think you're just confused, or else you're being overly vague about what "A" and "B" are supposed to represent.
ThomasT said:
If experimenters always get the same answer when they choose the same polarizer setting during a coincidence interval, that might be because the polarizers are analyzing the same thing (which was produced during the same emission interval). This is what quantum theory assumes.
That's what Bell assumed too, that they were both analyzing two things with properties that were sufficiently "the same" to ensure they had the same predetermined answers to any possible measurement (or opposite answers, depending on the specific experiment being discussed). But he showed that even if you assume this, then under local realism this leads to conflicts with QM over the predicted statistics in trials where the experimenters choose different detector settings. I already explained this with the lotto card example, which you apparently refuse to look at--would you be happier if I performed some trivial editing on that example so that it was no longer about lotto cards, but instead about particles whose spin are measured at two different detectors? None of the math would need to be any different, but if you have some kind of psychological block about mapping analogies to the actual physical situation they're supposed to be analogous to, perhaps it would help you to see that the proof really is completely straightforward.
 
  • #106
ThomasT said:
Factorability of the joint probability has been taken to represent locality. But, it doesn't represent locality. It represents statistical independence between events (probability of detection) at A and events (probability of detection) at B during any given coincidence interval.

...

If Bell's locality condition isn't, in reality, a locality condition, then Bell's theorem doesn't contradict locality.

Actually, I somewhat agree with these statements. I also think that the separability requirement does not strictly represent locality. Bell says that the vital assumption is that the setting of Alice does not affect the outcome at Bob (and vice versa). So I do believe a locality assumption is represented. I usually refer to this as Bell Locality to distinguish it from other possible representations.

But if you are, in fact, a local realist... then it is a little difficult to maintain that Bell's Theorem is not talking to you. The entire idea of Bell was to show that you need to account for Alice and Bob's space-like separated results being correlated in a way that local realism does not allow. Specifically, the results cannot match the predictions of Quantum Mechanics.
 
  • #107
DrChinese said:
Actually, I somewhat agree with these statements. I also think that the separability requirement does not strictly represent locality. Bell says that the vital assumption is that the setting of Alice does not affect the outcome at Bob (and vice versa).
Ah, so when ThomasT wrote P(A,B) = P(A)*P(B), this may have been an equation that was actually presented in a proof of Bell's theorem, but based on what you say here I'd guess it was presented with the understanding that A and B were only supposed to represent the choice of settings made by Alice and Bob, not the results they obtained with these settings. In this case I do agree the equation should hold as long as Alice and Bob are making their choices independently, but I am not sure that ThomasT was clear on the limited scope of the equation. Hopefully you'd agree with my point here:
You'd have to be more specific about what "A" and "B" are supposed to represent here. For example, if A="experimenter 1 measures at angle 120, gets result spin-up", and B="experimenter 2 measures at angle 120, gets result spin-down" then it is certainly not true that Bell assumed that P(A,B) = P(A)*P(B)...if each experimenter has a 1/3 chance of choosing angle 120, then P(A) = P(B) = 1/6 (because on any given angle, there is a 1/2 chance of getting spin-up and a 1/2 chance of getting spin-down), but P(A,B) is not 1/6*1/6 = 1/36, but rather 1/18 (because there's a 1/3*1/3 = 1/9 chance that both experimenters choose angle 120, but if they both do it's guaranteed they'll get opposite spins, so there's a 1/2 chance experimenter 1 will get spin-up and experimenter 2 will get spin-down, and a 1/2 chance experimenter 1 will get spin-down and experimenter 2 will get spin-up).
 
  • #108
JesseM said:
Hopefully you'd agree with my point here:

I don't think we have any significant disagreements on this topic... :)

The issue is really with the person who is arguing that Bell's Theorem does not rule out Local Realism. The burden is really on them to provide a qualifying theory that can match QM. If you already are convinced that either realism or locality can be abandoned, there isn't much left to argue about. It just becomes semantics.

But if you are a local realist, there is a big hill to climb, and attacking Bell's assumptions is a waste of time. So what if there is a little rust around some element of Bell's brilliant paper? Just put forward a qualifying local realistic theory!

So my question is: ThomasT, are you a local realist?
 
  • #109
ThomasT said:
One of my contentions is that Bell's theorem doesn't actually make a locality assumption. If you think it does, then point out where you think it is in his formulation.

If Bell's locality condition isn't, in reality, a locality condition, then Bell's theorem doesn't contradict locality.

Bell uses TWO assumptions to be able to write:
P(A,B,lambda) = P(A,lambda) P(B,lambda).

The first assumption is locality. Now, locality has a slightly different definition depending on whether we are dealing with a deterministic theory or with a stochastic theory. In a deterministic theory, the definition is simple: the time evolution of an ontological physical quantity at a space(time) point is entirely determined by the values of the ontological physical quantities defined in a close neighbourhood of said space(time) point.

This by itself already assumes that we have postulated ontological physical quantities, and that they are fields over space(time). Indeed, it doesn't make sense to talk about locality for physical quantities that are not attached to a point in space(time). It also assumes that we have given the full list of ontological (observable or non-observable) quantities.

In practice, this comes down to requiring that the time evolution of all ontological physical quantities is given by a set of partial differential equations.

Relativity requires, on top of that, "an upper limit of propagation speed", which comes down to requiring that the Green's functions of the partial differential equations vanish outside of the light cone.

This is locality for deterministic theories.

Things become a bit more difficult for stochastic theories. In a stochastic theory, physical quantities are not determined uniquely by the "current state", only their *probabilities* are determined by the "current ontological state". The thing is that probabilities are not physical quantities, because they depend on the conditions one imposes. Well, here one requires the following for locality. We still assume that there are ontological physical quantities associated to each point in space(time).

The conditional probability for an ontological physical quantity at point P to evolve into one or another value, given all the values of the ontological physical quantities within a neighbourhood of point P, remains unchanged when one adds extra conditions concerning the physical values of remote, or past, events.

If that's the case, then the stochastic theory is said to be local.

Let us understand this definition. Assume that we are at point P, at instant t0, and we look at a physical ontological quantity X. At t0+dt, X can take on certain values. Now, if we don't know anything about the physical situation, then we can say for instance that these potential values of X are distributed according to a certain distribution (say, uniform). One would think that "the more we know", the more "refined" our probabilities for X at P and at t0 + dt will be. For instance, the probability to have X0, knowing that at t0 and P, we had another physical quantity Y = Y0, will be different than if we didn't know Y to be equal to Y0. And if we know about Z = Z0 at P and t0, then that changes again our probabilities for X at t0 + dt. And if we know about Z = Z1 at another point, Q, then this still changes our probability of X at t0+dt.
But IF WE TAKE INTO ACCOUNT all the ontological physical quantities in a neighbourhood of P, at time t0, which we call collectively ALL0, then we find a certain probability P(X0|ALL0) to have X = X0 at t0 + dt, and this is "all the useful information we need and that will tell us something about X0". So if now we ADD another condition:
P(X0 | ALL0 AND STUFF), where "STUFF" is a condition on an ontological physical variable somewhere else, or in the past, then:
P(X0 | ALL0) = P(X0 | ALL0 AND STUFF)

In other words, knowing something extra won't change anything to the probability distribution of X anymore. The neighbourhood of P, and all ontological physical variables, specified everything there was to know.

If that's the case, we call our stochastic theory "local". Notice - and that is very important - that if our stochastic theory is actually a deterministic theory, then both definitions of locality coincide. The only difference is that the probability values will be 1 or 0.

Bell needs this definition to be able to write that P(A,lambda) is not dependent on B (the choice at Bob's). But note the "lambda": it stands for "all the ontological physical variables that are present at Alice". Lambda actually contains a bit more (the part sent to Bob), but we know that once we have the "local" part, P normally won't change anymore.

So it is in writing P(A,lambda) (choice at Alice: a local quantity, plus variables dependent on the incoming particle, whatever they are), and not P(A,B,lambda), that Bell uses the locality of a stochastic theory to find the probability of having "up" with choice A.
We can write a similar thing, P(B,lambda), at Bob: the probability for Bob to find "up".

The second thing he needs is that the probability to find, say, (up,up) (written R(A,B,lambda)) is now given by the product of the probability of "up" at Alice and the probability of "up" at Bob.

R(A,B,lambda) = P(A,lambda) x P(B,lambda).

HERE, we use the assumption of stochastic independence of our FULLY DETERMINED probabilities. Note that we don't write: R(A,B) = P(A) x P(B). No, we use lambda: for a given (unknowable in practice, but assumed to be given in theory) fully determined ontological state. This is the assumption of no superdeterminism.

Point is: with R(A,B,lambda), we can't do anything because we don't know lambda. So we will have to average over lambda.

We use again locality in assuming that there is a P(lambda), a certain probability distribution of the ontological physical quantities sent out by the source, which doesn't depend on the choices A and B.

And we use again no superdeterminism when we apply:
integral over lambda of R(A,B,lambda) x P(lambda) to obtain the probability to have "up,up" without any lambda condition.
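This closing recipe, the integral over lambda of R(A,B,lambda) x P(lambda), can be made concrete with one invented lambda-model (a minimal sketch; the sign-of-cosine outcome rule below is purely illustrative, not Bell's):

```python
import numpy as np

def lhv_correlation(a_deg, b_deg, samples=200_000):
    """Monte Carlo estimate of E(a,b) = integral of A(a,l)*B(b,l)*P(l) dl
    for a toy model: lambda is a shared polarization angle, uniform over
    [0, pi), and each side outputs sign(cos 2(setting - lambda)) = +/-1."""
    rng = np.random.default_rng(1)
    lam = rng.uniform(0, np.pi, samples)
    A = np.sign(np.cos(2 * (np.radians(a_deg) - lam)))
    B = np.sign(np.cos(2 * (np.radians(b_deg) - lam)))
    return np.mean(A * B)

for a, b in [(0, 22.5), (0, 45), (0, 67.5)]:
    qm = np.cos(2 * np.radians(a - b))  # QM for photons: E = cos 2(a-b)
    print(f"a={a}, b={b}: toy LHV={lhv_correlation(a, b):+.3f}  QM={qm:+.3f}")
# The factorized model yields a straight (sawtooth) correlation,
# 1 - 4|a-b|/pi, weaker in magnitude than the quantum cosine at every
# angle except 0 and 45 degrees.
```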
 
  • #110
DrChinese said:
So my question is: ThomasT, are you a local realist?
Well, I don't have any new local realistic theory to offer. :smile:

It is, as you've indicated, a problem of semantics.

Thank you to you, Jesse, Randall, vanesch, etc. for taking the time to provide thoughtful comments and criticisms.

Now I will go over vanesch's latest post in this thread point by point.
 
  • #111
vanesch said:
Bell uses TWO assumptions to be able to write:
P(A,B,lambda) = P(A,lambda) P(B,lambda).

The first assumption is locality. Now, locality has a slightly different definition depending on whether we are dealing with a deterministic theory or with a stochastic theory. In a deterministic theory, the definition is simple: the time evolution of an ontological physical quantity at a space(time) point is entirely determined by the values of the ontological physical quantities defined in a close neighbourhood of said space(time) point.

This by itself already assumes that we have postulated ontological physical quantities, and that they are fields over space(time). Indeed, it doesn't make sense to talk about locality for physical quantities that are not attached to a point in space(time). It also assumes that we have given the full list of ontological (observable or non-observable) quantities.

In practice, this comes down to requiring that the time evolution of all ontological physical quantities is given by a set of partial differential equations.

Relativity requires, on top of that, "an upper limit of propagation speed", which comes down to requiring that the Green's functions of the partial differential equations vanish outside of the light cone.

This is locality for deterministic theories.

Things become a bit more difficult for stochastic theories. In a stochastic theory, physical quantities are not determined uniquely by the "current state", only their *probabilities* are determined by the "current ontological state". The thing is that probabilities are not physical quantities, because they depend on the conditions one imposes. Well, here one requires the following for locality. We still assume that there are ontological physical quantities associated to each point in space(time).

The conditional probability for an ontological physical quantity at point P to evolve into one or another value, given all the values of the ontological physical quantities within a neighbourhood of point P, remains unchanged when one adds extra conditions concerning the physical values of remote, or past, events.

If that's the case, then the stochastic theory is said to be local.

Let us understand this definition. Assume that we are at point P, at instant t0, and we look at a physical ontological quantity X. At t0+dt, X can take on certain values. Now, if we don't know anything about the physical situation, then we can say for instance that these potential values of X are distributed according to a certain distribution (say, uniform). One would think that "the more we know", the more "refined" our probabilities for X at P and at t0 + dt will be. For instance, the probability to have X0, knowing that at t0 and P, we had another physical quantity Y = Y0, will be different than if we didn't know Y to be equal to Y0. And if we know about Z = Z0 at P and t0, then that changes again our probabilities for X at t0 + dt. And if we know about Z = Z1 at another point, Q, then this still changes our probability of X at t0+dt.
But IF WE TAKE INTO ACCOUNT all the ontological physical quantities in a neighbourhood of P, at time t0, which we call collectively ALL0, then we find a certain probability P(X0|ALL0) to have X = X0 at t0 + dt, and this is "all the useful information we need and that will tell us something about X0". So if now we ADD another condition:
P(X0 | ALL0 AND STUFF), where "STUFF" is a condition on an ontological physical variable somewhere else, or in the past, then:
P(X0 | ALL0) = P(X0 | ALL0 AND STUFF)

In other words, knowing something extra won't change anything to the probability distribution of X anymore. The neighbourhood of P, and all ontological physical variables, specified everything there was to know.

If that's the case, we call our stochastic theory "local". Notice - and that is very important - that if our stochastic theory is actually a deterministic theory, then both definitions of locality coincide. The only difference is that the probability values will be 1 or 0.

Bell needs this definition to be able to write that P(A,lambda) is not dependent on B (the choice at Bob's). But note the "lambda": it stands for "all the ontological physical variables that are present at Alice". Lambda actually contains a bit more (the part sent to Bob), but we know that once we have the "local" part, P normally won't change anymore.

So it is in writing P(A,lambda) (choice at Alice: a local quantity, plus variables dependent on the incoming particle, whatever they are), and not P(A,B,lambda), that Bell uses the locality of a stochastic theory to find the probability of having "up" with choice A.
We can write a similar thing Q(B,lambda) at Bob: the probability for Bob to find "up".

The second thing he needs is that the probability to find, say, (up,up) (written R(A,B,lambda)) is given by the product of the probability of "up" at Alice and the probability of "up" at Bob:

R(A,B,lambda) = P(A,lambda) x Q(B,lambda).

HERE, we use the assumption of stochastic independence of our FULLY DETERMINED probabilities. Note that we don't write: R(A,B) = P(A) x Q(B). No, we use lambda: for a given (unknowable in practice, but assumed to be given in theory) fully determined ontological state. This is the assumption of no superdeterminism.

The point is: with R(A,B,lambda) alone we can't do anything, because we don't know lambda. So we will have to weight over lambda.

We use locality again in assuming that there is a P(lambda), a certain probability distribution of the ontological physical quantities sent out by the source, which doesn't depend on the choices A and B.

And we use no superdeterminism again when we apply

P(up,up) = integral over lambda of R(A,B,lambda) x P(lambda) dlambda

to obtain the probability of having "up,up" without any lambda condition.
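
To see the machinery in action, here is a short numerical sketch (my own, assuming numpy; I use the CHSH combination of four correlations for definiteness, since no specific inequality is spelled out above). It weights a deterministic local response over P(lambda) and compares the result against the quantum prediction E(a,b) = cos(2(a-b)) for polarization-entangled photons:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = rng.uniform(0, np.pi, 500_000)  # P(lambda): polarization angle from the source

def outcome(setting, lam):
    # Deterministic local response: +1/-1 depends only on the analyser
    # setting and the local hidden variable lambda.
    return np.sign(np.cos(2 * (lam - setting)))

def E_local(a, b):
    # The integral over lambda of R(A,B,lambda) x P(lambda) reduces, for a
    # deterministic model, to averaging the product of the local outcomes.
    return np.mean(outcome(a, lam) * outcome(b, lam))

def E_qm(a, b):
    return np.cos(2 * (a - b))  # quantum prediction for the photon Bell state

a1, a2, b1, b2 = 0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S_local = E_local(a1, b1) - E_local(a1, b2) + E_local(a2, b1) + E_local(a2, b2)
S_qm = E_qm(a1, b1) - E_qm(a1, b2) + E_qm(a2, b1) + E_qm(a2, b2)

print(S_local)  # ~2.0  : the local model sits at the CHSH bound |S| <= 2
print(S_qm)     # ~2.83 : quantum mechanics exceeds it
```

Any model built from the two locality steps plus the no-superdeterminism steps above stays within |S| <= 2; the quantum value of about 2.83 is what the experiments confirm.
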
Thanks for the lengthy explanation. I don't think that Bell's theorem or experimental violations of Bell inequalities tell us anything about whether nature harbours nonlocal or instantaneous action at a distance forces or connections (or whatever).

Some conventions:

A = rate of detection at A per unit of time
B = rate of detection at B per unit of time
(A,B) = rate of coincidental detection per unit of time
a = setting of polarizer at A
b = setting of polarizer at B
(a,b) = |a-b| = joint setting of polarizers = angular difference between settings
P = probability (i.e., rate of detection per unit of time normalized to 1)

In a standard EPR-Bell test:

the individual detection rate
without polarizers
P(A) = A
P(B) = B
P(A) = P(B)

the individual detection rate
with polarizers (averaged over all possible settings)
P(A) = P(A|a) = P(A|b) = P(A|a,b) = .5A = .5
P(B) = P(B|b) = P(B|a) = P(B|a,b) = .5B = .5

the coincidental detection rate
without reference to polarizer settings
P(A,B) = P(A) P(B) = .25

the coincidental detection rate
with reference to polarizer settings (any given (a,b))
P(A,B|a,b) = .5cos^2(a,b) (the joint probability approaches .5cos^2(a,b) as the number of trials approaches infinity; the factor of .5 keeps the average over all settings equal to P(A,B) = .25)

The individual probabilities never change, no matter what conditions are imposed. If you know a or b or (a,b), it doesn't matter.

However, P(A,B) /= P(A,B|a,b).

So, in this case we no longer have a local theory. We have a global one.

Of course, the instantaneous action that's happening in the global case has nothing to do (as far as anyone knows) with ftl or instantaneous physical propagations or connections. It's simply that when we change the setting, a, then we instantaneously change the setting (a,b), and therefore instantaneously change the joint probability.
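
Here is a minimal simulation of those statistics (my own sketch, assuming numpy and the ideal .5cos^2(a,b) joint probability used above; detector efficiencies and other practicalities are ignored). It shows the marginals pinned at .5 while the joint probability tracks the settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pairs(a, b, n=200_000):
    # Ideal polarization-entangled pair:
    #   P(++) = P(--) = .5 cos^2(a-b),  P(+-) = P(-+) = .5 sin^2(a-b)
    c = 0.5 * np.cos(a - b) ** 2
    s = 0.5 * np.sin(a - b) ** 2
    outcomes = rng.choice(4, size=n, p=[c, s, s, c])  # 0:++ 1:+- 2:-+ 3:--
    A = np.isin(outcomes, [0, 1])  # detection ("up") at A
    B = np.isin(outcomes, [0, 2])  # detection ("up") at B
    return A, B

for ab in [0.0, np.pi / 8, np.pi / 4]:
    A, B = sample_pairs(0.0, ab)
    print(f"(a,b)={ab:.3f}  P(A)={A.mean():.3f}  "
          f"P(B)={B.mean():.3f}  P(A,B|a,b)={(A & B).mean():.3f}")
# P(A) and P(B) stay at ~.500 for every setting, while P(A,B|a,b)
# follows .5*cos^2(a-b): ~.500, ~.427, ~.250
```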
 
  • #112
ThomasT said:
.. I don't think that Bell's theorem or experimental violations of Bell inequalities tell us anything about whether nature harbours nonlocal or instantaneous action at a distance forces or connections (or whatever).
No one has claimed that the Bell Theorem, or applying it to EPR-Bell experiments, helps select 1) nonlocal forces, 2) instantaneous action-at-a-distance forces, 3) connections (or whatever) [I assume you include ‘entanglement’ here], or 4) any other QM interpretation of reality.
Only that EPR-Bell experiments applying the Bell Theorem test and question the viability of any Einstein Local [Local and Realistic] explanation of reality. None of 1) through 4) above is compatible with any Einstein Local explanation using Local Realism.

There are very few published Local Realists, but there are some; you would be at odds with them and in agreement with vanesch based on your conclusions here:

So, in this case we no longer have a local theory. We have a global one.

Of course, the instantaneous action that's happening in the global case has nothing to do (as far as anyone knows) with ftl or instantaneous physical propagations or connections. It's simply that when we change the setting, a, then we instantaneously change the setting (a,b), and therefore instantaneously change the joint probability.
I take this to mean you agree the evidence rejects Local Realism (my own preference) as unable to explain how joint probabilities P(A,B) can ‘instantaneously change’ with any change in “a” or “b”. And that reality must be defined by what you call a “Global Theory”.
The net of this "Global Theory" is no different from what vanesch has claimed: namely, that the evidence so far says (and some claim it says so conclusively) that no “Einstein Local” explanation can correctly account for EPR-Bell experimental observations, with no preference towards any ‘Non-Local’ interpretation.
As far as I can tell no one has claimed anything more than that.
 
  • #113
I have to say that, for different reasons, I'm occupied with other stuff, and I'm honestly a bit tired of discussing MWI, Bell/EPR stuff and all that. I have the impression I've been writing the same kind of arguments at least a dozen times on these subjects - which remain nevertheless interesting and fascinating. So sorry not to enter the discussion again, right now...
 
  • #114
RandallB said:
No one has claimed that the Bell Theorem or applying it to EPR-Bell experiments helps select a 1)nonlocal or 2) instantaneous action at a distance forces or 3) connections (or whatever) [I assume you include ‘entanglement’ here] or 4) any other QM interpretation of reality.
Are you saying that no one has claimed that EPR-Bell stuff implies nonlocality in nature? Of course they have. Even some physicists claim this. But I think they're mistaken, and if one takes the time to sort out the language associated with all this EPR-Bell stuff, then one will find that there's really nothing to get excited about. Some local realistic formulations are incompatible with qm -- that's all.

As for quantum entanglement -- it's an experimental fact and can, I think, be understood from a classical perspective.

RandallB said:
There are very few published Local Realist but there are some, but you would be at odds with them and in agreement with vanesch based on your conclusions here:
Except that he, and many others, seem to think that there's a nonlocality problem. But, imho, there isn't.

From what you've written, I'm supposing that we agree that there isn't any sort of nonlocality problem, but there is a problem with making certain classical or realistic formulations compatible with orthodox quantum theory.

RandallB said:
I take this to mean you agree the evidence rejects Local Realism (my own preference) as unable to explain how joint probabilities P(A,B) can ‘instantaneously change’ with any change in “a” or “b”. And that reality must be defined by what you call a “Global Theory”.
I agree that the evidence rejects at least one sort of local realistic formulation. But there are lots of them now. Don't some of them actually correctly predict the EPR-Bell correlations?

There's nothing mysterious about why or how P(A,B) changes instantaneously when a or b are changed. This can be understood locally and (somewhat, if not completely) realistically.

If one is using a global observational perspective, then one will need a global theory. I think that when people refer to Bohmian mechanics as being nonlocal, then what nonlocal really means in this context is global. Everything in the universe is, in a global sense (i.e., wrt the motion of the universe as a whole), entangled with everything else in the universe. But this doesn't in any sense mean that some event here on Earth instantaneously causes some event on the other side of the universe.

The word nonlocal has been used where the word global would have been a better choice. This has created a lot of unnecessary confusion (and fantasies of ftl travel and communication) surrounding EPR-Bell questions.
 
  • #115
Bell, EPR, and all that explained through a sex analogy:
https://www.physicsforums.com/blogs/demystifier-61953/sex-quantum-psychology-and-hidden-variables-1477/
 
  • #116
ThomasT,
With regard to Bell's assumption, you are right.
One of my contentions is that Bell's theorem doesn't actually make a locality assumption. If you think it does, then point out where you think it is in his formulation.

But you've reversed the implication when you assert:
If Bell's locality condition isn't, in reality, a locality condition, then Bell's theorem doesn't contradict locality.
Because locality is one way to assure causal independence in the acts of observation, Bell's "locality" condition (a much broader assumption) is in fact implied by locality, and if it is rejected, so too must locality be.

This having been said, it is indeed the "third" assumption, classical realism, which should be rejected, as it implies either a trivial classical realism (the universe exists and no other objects or objective properties can be well defined) or, with or without locality, the very same ability to factor probabilities under some choice of variables. Any classical correlation matrix can be diagonalized, and thus you can always find pairs of random variables which factor.
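
A small numerical illustration of that diagonalization claim (my own sketch, assuming numpy and, for the factoring of the joint distribution, jointly Gaussian variables, where zero covariance does imply independence):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
y = 0.8 * x + 0.6 * rng.normal(size=100_000)  # y is correlated with x
data = np.stack([x, y])

cov = np.cov(data)                # off-diagonal terms are nonzero
_, eigvecs = np.linalg.eigh(cov)  # diagonalize the covariance matrix
rotated = eigvecs.T @ data        # new pair of random variables

print(np.cov(rotated).round(6))   # off-diagonal ~0: the rotated pair is
                                  # uncorrelated, so (for Gaussians) its
                                  # joint probabilities factor
```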

Locality is just an easy way to assert that you can find a specific pair of variables whose probabilities factor. One could "as easily" assert that say the polarization and momentum of photons are never coupled in a carefully constructed experimental apparatus. Yet they can be entangled prior to the measurements and a violation of Bell's inequality can be demonstrated within QM.

Locality issues are a red herring. It's all about our historic concept of ontological reality getting in the way of understanding the phenomenological behavior of empirical actuality. What happens, happens! And quantum theory describes it quite well. Attempting to "interpret" it in terms of an ontological picture of "reality" breaks down at the quantum level.

I prefer to argue that given science is an epistemological discipline we should excise any non-operational ontological language such as "states" except as tentative shortcuts for classes of empirical phenomena and only when applicable (e.g. when describing the meter reading of a quantum experiment and not the system itself)...(or of course when working within the classical approximation to quantum actuality.)
 
  • #117
ThomasT said:
Are you saying that no one has claimed that EPR-Bell stuff implies nonlocality in nature?
Did you read my post or just react to it?
Perhaps English is a second language so let me repeat: No one has claimed that Bell Theorem applied to EPR-Bell experiments helps in any way to select the best one of many non-local explanations such as your “Global Theory” as better than any other non-local theory.

As I said they are only saying “that EPR-Bell experiments applying the Bell Theorem test and question the viability that any Einstein Local [Local and Realistic] explanation of reality might be possible”.

I suspect you are having a problem with defining the term “Local” and do not understand that redefining a new term, “Global Local”, is NOT the same as “Local”. “Local” implies the meaning intended by “Einstein Local”; even Bohm himself acknowledged that BM was non-local, with or without the super-deterministic version of ‘local’ that you describe with “Global Local”; that is nothing new and not “Einstein Local”. I recommend reviewing https://www.physicsforums.com/showthread.php?t=181904
I agree that the evidence rejects at least one sort of local realistic formulation. But there are lots of them now. Don't some of them actually correctly predict the EPR-Bell correlations?

There's nothing mysterious about why or how P(A,B) changes instantaneously when a or b are changed. This can be understood locally and (somewhat, if not completely) realistically.
Since when are there lots of local realistic (Einstein Local) formulations? Can you name two?
Or just one local realistic formulation that comes even close to being “somewhat, if not completely” locally and realistically understood, and that resolves what you call this “EPR-Bell stuff”?
Remember, that cannot include a formulation that rejects Local Realism, such as the global observational perspective of a “global theory” or a super-deterministic BM interpretation.

I’m not sure if you are looking for a way to support Local Realism, or if you are trying to define the best Non-Local interpretation as one that uses a “Global” / Super-deterministic version of “Local”. If the latter, “Bell” by definition can be of no help to you.
 
  • #118
jambaugh said:
Locality issues are a red herring.

So are reality issues. :smile:

jambaugh said:
It's all about our historic concept of ontological reality getting in the way of understanding the phenomenological behavior of empirical actuality.

Our understanding of things has to do with our being able to see (or at least visualize) them. So there is a natural desire to render instrumental behavior in terms of deeper causes. Unfortunately, in the case of quantum experimental phenomena this hasn't worked too well.
jambaugh said:
I prefer to argue that given science is an epistemological discipline we should excise any non-operational ontological language such as "states" except as tentative shortcuts for classes of empirical phenomena and only when applicable (e.g. when describing the meter reading of a quantum experiment and not the system itself)...(or of course when working within the classical approximation to quantum actuality.)
That's a lot of stuff to excise. :rolleyes: Meanwhile, a certain amount of time will be taken up deciphering and explaining the semantics of quantum lingo -- with operationalism being the order of the day, hopefully.

Anyway if one were to ask, "What do experimental violations of Bell inequalities have to do with nonlocality in nature?" I feel now that I can answer, "Nothing.", with a certain amount of assurance.
 
  • #119
ThomasT said:
So are reality issues. :smile:
I disagree... else what's left of the implications of EPR?
Our understanding of things has to do with our being able to see (or at least visualize) them.
Yes, insofar as our evolutionary method of dealing with our environment goes. But now that we are looking beyond the scope of, say, finding food, avoiding tigers, and knowing when to plant our corn, we must get more formal about the meaning of "understanding of things". In science it can only be measured by our ability to predict. Nothing succeeds like success!...
So there is a natural desire to render instrumental behavior in terms of deeper causes.
Yes exactly... understanding deeper causes and processes rather than a deeper visualization of an ontological reality. Understand the reason behind our desire to visualize... it works at the classical level... then understand when that reason ceases to be applicable... when we push beyond that level. We must then revert to the fundamental epistemological foundation of knowing: our science is based on an epistemology of empirical phenomena (process), not on a Platonic logic of reality (objects).

Unfortunately, in the case of quantum experimental phenomena this hasn't worked too well.
The phenomenological and causal aspects have worked out brilliantly. It's QM conforming to our desire to paint an objective world picture which hasn't worked too well. And that's our failing, not the theory's.

That's a lot of stuff to excise. :rolleyes:
Yes and no. You needn't excise, just qualify... especially when you get too close to the borderline where the issues begin to become important. In short, people should quit trying to (re)"interpret" quantum theory. It already has its phenomenological (operational) interpretation in the Born probability interpretation.

Meanwhile, a certain amount of time will be taken up deciphering and explaining the semantics of quantum lingo -- with operationalism being the order of the day, hopefully.
Yes indeed.
Anyway if one were to ask, "What do experimental violations of Bell inequalities have to do with nonlocality in nature?" I feel now that I can answer, "Nothing.", with a certain amount of assurance.
Yes Indeed!
 
  • #120
jambaugh said:
Locality issues are a red herring.
ThomasT said:
So are reality issues.
jambaugh said:
I disagree... else what's left of the implications of EPR?
What evidence does either of you use to come to these contrary conclusions?
[note: by ‘reality issues’ I assume you guys mean “Realism”, as in the realism of a classical reality versus the possible reality of a multidimensional and/or “FTL” wave function or entanglement collapse.]
Neither of you can use Bell or EPR-Bell as evidence, as it is only able to address “Local” as understood by Einstein, which requires both Locality AND Realism.

jambaugh said:
ThomasT said:
Anyway if one were to ask, "What do experimental violations of Bell inequalities have to do with nonlocality in nature?" I feel now that I can answer, "Nothing.", with a certain amount of assurance.
Yes Indeed!
From what do you derive the assurance that any solution that may come in the future, claiming to be more complete than the Non-Locals (from QM to BM to Strings), should not be required to explain the non-local implications of EPR-Bell, as if non-locality means nothing?
As a Local Realist (the Einstein position) I see that as the exact obligation of any LR explanation. Until a detailed description in LR terms can match the measured EPR-Bell results, the Non-Local solutions currently in use must be considered at least viable, if not most likely complete, regardless of what my or anyone’s personal preference might be.

And IMO any solution that wishes to discredit the current explanation of EPR-Bell results must do so using both locality and realism, in other words find the complete solution Bell himself was originally looking for that demonstrates a more complete hidden variable LR solution as possible.

Arguments trying to decide whether we have a misperception in understanding nature because of nonlocality in nature, versus nature not being based on realism, have nothing to do with Bell, as they only address which Non-Local approach is preferable.
 
