An abstract long-distance correlation experiment

In summary, the basic experimental setting features a sequence of independent, identically distributed signals sent by Norbert to two identically built devices operated by Alice and Bob, located symmetrically more than 1km apart from each other and from Norbert. Devices have pointers that can take three values, and each device has a red and blue light that can potentially light up for a time interval when a signal arrives. Alice and Bob randomly, uniformly, and independently change their pointer settings every ##\Delta t## seconds. Yvonne selects events in the raw data received from Alice and Bob by discarding events whenever their total number within a time interval of ##\delta t## differs from 2, or equals 2 but both events are on the side of Alice or both on the side of Bob.
  • #1
A. Neumaier
Science Advisor
Insights Author
Inspired by stevendaryl's description of an EPR-like setting that doesn't refer to a particle concept, I want to discuss in this thread a generalized form of his setting that features a class of long-distance correlation experiments but abstracts from all distracting elements of reality and from all distracting elements of imagination, thus allowing the analysis to concentrate on the essentials. Pictorially, the setting,
stevendaryl said:
[image: stevendaryl's EPR setup diagram, dee-mccullough.com/EPR1.jpg]
is identical to that pictured in stevendaryl's post. But the interpretation of the figure (to be discussed below) is streamlined so that many of the notions that usually obscure nonlocality discussions cannot even be expressed.

Note that my goal in this discussion is not to prove or disprove local realism in the conventional form, but (in line with the originating thread) to investigate weirdness in quantum mechanics and its dependence on the language chosen, using this specific experimental arrangement.

Please keep this thread free from discussion of other settings for experiments related to nonlocality.

If you think the thread is too long but want to know the outcome from my perspective, you may jump directly to my main conclusions in post #187 (where I conclude that anything nonlocal is due to the intelligence of an observer) and post #197 (where I define a Lorentz invariant notion of causality sufficient to exclude superluminal signalling but far weaker than the unrealistic causality assumptions made in the derivation of Bell-type theorems).
 
Last edited:
  • #2
Below is a description of the basic experimental setting. All participants interested in the discussion, and in particular stevendaryl, are invited to comment on the suitability of the setting to accommodate both experiments that display fully classical behavior and experiments that display fully quantum behavior, representative of experiments probing long-distance nonlocality in Bell's sense. In addition, many other experiments match the setting, e.g., those to check that the devices work individually as prescribed, by sending signals only to one of the devices.

Improvements were made to the original setting to accommodate valid criticism discussed in posts #3-#48 below. Since the basic setting is now stable, it is no longer open for discussion. If you want to go immediately to stage 2, you may continue with post #49, where I introduce additional features that impose more structure, again asking for your participation to make everything as clear and constructive as possible.

The basic setting
1. A source operated by Norbert sends a sequence of independent, identically distributed signals with temporal spacing ##\gg\delta t## but ##\ll\Delta t## seconds to two identically built devices operated by Alice and Bob, located symmetrically more than 1km apart from each other and from Norbert. (Here ##0\ll\delta t\ll\Delta t## are fixed real numbers, with ##\delta t\le 2.99 \mu s##, less than the time light needs to travel 1km.)

2. Each device has a pointer that can take 3 values and a red and blue light that can possibly light up for a time interval ##\ll\delta t## when a signal arrives - if this happens, this is called an event.

3. Alice and Bob randomly, uniformly, and independently change their pointer settings every ##\Delta t## seconds. Both keep a record of the time and the pointer setting of any event on their side, together with the color of the light observed. Both purify their record by omitting all events where two lights light up on their own detector within a time interval of ##\delta t##. They also discard events within ##\delta t## of their own pointer switch. The remaining events are called pure events.

4. After Alice and Bob have independently collected their data for ##\gg\Delta t## seconds, each calculates a ##2\times 3## matrix (##A## and ##B##, respectively) of statistical observables whose entries are the relative frequencies ##X_{is}## of pure events where ##c_X=i,p_X=s## (##X=A,B##). They send their ##2\times 3## matrices to you, the analyzer. In addition, they send their raw data to Yvonne, who evaluates their data according to the following protocol for creating the statistics.

5. Yvonne postselects events in the raw data received from Alice and Bob by discarding events whenever their total number within a time interval of ##\delta t## differs from 2, or equals 2 but both events are on the side of Alice or both on the side of Bob. She also discards events within ##\delta t## of a pointer switch. As a consequence, all remaining events are pure and occur in pairs consisting of one event ##A## on the side of Alice and one event ##B## on the side of Bob, and to each event there are well-defined pointer settings ##p_A,p_B## of the devices of Alice and Bob. Alice and Bob characterize each event through the numbers ##c_A,c_B## defined by ##c_X:=## 1 if ##X## is red, 2 if ##X## is blue.

6. Yvonne summarizes the experiment of Alice and Bob by calculating two ##2\times 2## matrices ##E## and ##F## of statistical observables whose entries are the relative frequency ##E_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A=p_B## and the relative frequency ##F_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A\ne p_B##. In addition, Yvonne checks whether the total number of discarded events is within 10% of the total number of remaining events. She sends the matrices ##E,F## to you, the analyzer, if this is the case; otherwise she reports to you failure of the experiment due to lack of care in the setup.

7. Other ways of analyzing the full experimental record (i.e., before postselection) are acceptable for auxiliary purposes such as checking the efficiency of transmission and detection. However, the sole goal of the experiment is to study the correlations expressed in the matrices ##A,B,E,F##.

8. Alice and Bob perform their experiments synchronously with Norbert's signals, accounting for the delay due to transmission. The devices are shielded from other external influences to the extent current technology allows. The analysis will have to make allowance for corresponding imperfections.

9. Norbert, Alice, Bob, and Yvonne are not human beings but simply names for elementary control programs behind the automatized source control, the two detector controls, and the postselection control, respectively. In particular, they are assumed not to have any artificial intelligence; hence they have neither knowledge nor a capability for being surprised.

10. Some time after the whole experiment is over, you, the analyst of the experiment, read Yvonne's report of the experimental data, including the four matrices with the summary statistics. After checking that no mistake has been made, you publish the four matrices ##A,B,E,F## in a scientific journal. (Possibly you post additional refined statistics in a web supplement.) You and the readers of the journal are the ones who have knowledge and therefore may or may not be surprised about the published findings, depending on your world view.

The matrices ##A,B,E,F## are the published output of the experiment. They
(and potentially more) are to be predicted by various existing or hypothetical theories for how certain specific signals sent by Norbert may affect the observation devices.
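The counting rules in points 3-6 can be made concrete with a small simulation. The sketch below is a toy model only: the signal model (colors perfectly anti-correlated when the pointers agree, independent otherwise) and the detection probability are illustrative assumptions of mine, not part of the setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_experiment(n_signals=100_000, p_detect=0.9):
    """Toy simulation of the basic setting (points 3-6).

    Assumed signal model (illustration only): when the pointers agree,
    the two colors are perfectly anti-correlated; when they differ,
    Bob's color is an independent coin flip.
    """
    # Pointer settings, uniform over {1, 2, 3} (point 3).
    p_A = rng.integers(1, 4, size=n_signals)
    p_B = rng.integers(1, 4, size=n_signals)

    # Which signals produce an event on each side (detector inefficiency).
    hit_A = rng.random(n_signals) < p_detect
    hit_B = rng.random(n_signals) < p_detect

    # Colors: 1 = red, 2 = blue (point 5).
    c_A = rng.integers(1, 3, size=n_signals)
    c_B = np.where(p_A == p_B, 3 - c_A, rng.integers(1, 3, size=n_signals))

    # Yvonne's postselection (point 5): keep only windows with exactly
    # one event on Alice's side and one on Bob's side.
    keep = hit_A & hit_B

    # Point 6: E counts pairs with equal pointers, F those with unequal
    # ones; entries are relative frequencies among the kept pairs.
    E = np.zeros((2, 2))
    F = np.zeros((2, 2))
    for i in (1, 2):
        for k in (1, 2):
            E[i-1, k-1] = np.mean(keep & (c_A == i) & (c_B == k) & (p_A == p_B))
            F[i-1, k-1] = np.mean(keep & (c_A == i) & (c_B == k) & (p_A != p_B))
    return E / keep.mean(), F / keep.mean()
```

In this toy model the diagonal of ##E## vanishes (perfect anti-correlation at equal settings), and the entries of ##E## and ##F## together sum to 1, as relative frequencies over disjoint cases must.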
 
Last edited:
  • #3
Just for historical accuracy, even though I made up this set-up from memory, Bell sketched the same sort of setup in his book "Speakable and Unspeakable in Quantum Mechanics", in the chapter "Bertlmann's socks and the nature of reality", section number 4.
 
  • #4
A. Neumaier said:
The basic setting
1. A source operated by Norbert sends a sequence of signals with temporal spacing ##\gg\delta t## but ##\ll\Delta t## seconds to two identically built devices operated by Alice and Bob, located symmetrically more than 1km apart from each other and from Norbert. (Here ##0\ll\delta t\ll\Delta t## are fixed real numbers.)

Does Norbert send a signal to B every time a signal is sent to A, or is Norbert selecting which direction to send each signal ?
(Yes, I cannot follow simple English.)
 
  • #5
Mentz114 said:
Does Norbert send a signal to B every time a signal is sent to A, or is Norbert selecting which direction to send each signal ?
Nothing specific is being said so far about the nature of the signal.

It could (for example) be no signal at all, a signal sent to one or both of the detectors only, two different signals sent to Alice and Bob, a coherent superposition of quantum states, a mixture of classical or quantum states, or signals loaded with hidden variable information. Whatever the (real or hypothetical) theory underlying the subsequent analysis allows and accounts for.

This will allow an objective discussion without being bogged down by possibly restrictive assumptions on the way the signals are prepared, transmitted, or detected.
 
  • #6
A. Neumaier said:
Nothing specific is being said so far about the nature of the signal.
It could (for example) be no signal at all, a signal sent to one or both of the detectors only, two different signals sent to Alice and Bob, a coherent superposition of quantum states, a mixture of classical or quantum states, or signals loaded with hidden variable information. Whatever the (real or hypothetical) theory underlying the subsequent analysis allows and accounts for.
This will allow an objective discussion without being bogged down by possibly restrictive assumptions on the way the signals are prepared, transmitted, or detected.
Thank you.
 
  • #7
Someone has described this exact thought experiment already - Bell, says stevendaryl, and I seem to recall seeing it elsewhere as well. One of Aspect's retrospectives, perhaps? It seems a good framework for discussion, although it becomes more interesting when you add statements about the correlations that Alice and Bob find.

Are you assuming that Norbert's signals are not transmitted superluminally? And also that the content of each signal is independent of the content of the previous ones (if not, conspiratorial theories will be allowed)?
 
  • #8
Nugatory said:
Are you assuming that Norbert's signals are not transmitted superluminally?
At this point, nothing is assumed. The assumptions will be part of the theory that models the way predictions are made, and can be different for different theories. For example, if the signals consist of thermal waves satisfying a parabolic equation, information transmission is instantaneous, while if your model is relativistic, this is forbidden.

At present I am just creating the framework, interactively with all of you. The main question at present is if the framework is deemed wide enough such that everyone taking part can accommodate on this abstract level one instance of their favorite explanatory theory, be it Bohmian mechanics, or quantum mechanics with collapse, or a particle-free field theory, or whatever one of you may come up with. Concrete matrices and theories may be specified at a later stage.
 
  • #9
I'm guessing that in the case where you don't want postselection, you can be more specific about the signal so that no events are discarded.
 
  • #10
ddd123 said:
in the case you don't want postselection you can be more specific about the signal so that no events are discarded.
Since in the thought experiment there is no need to optimize the number of valid observations, discarding some of the signals doesn't matter; it cannot change the observed asymptotic probabilities. I think postselection is always beneficial since it reduces artifacts coming from signals due to detector inefficiencies and detector sensitivity to signals that come from the environment rather than from Norbert's source. Otherwise I'd have added another statistical observable that counts the number of data mismatches. But I think this number doesn't tell much about matters of principle, hence it can be safely ignored.
 
  • #11
A. Neumaier said:
If necessary, improvements are made to the setting to accommodate valid criticism. Once the basic setting is stable, I'll use the setting to impose more structure, again asking for your participation to make everything as clear and constructive as possible.

Here's a couple of criticisms made from the point of view that this is supposed to be a Bell test. You've stated that you want to keep this open for now, but I assume that the main point is to abstract Bell experiments (possibly among other things) and if it doesn't capture a Bell test then you'll want to change it.
1. A source operated by Norbert sends a sequence of signals with temporal spacing ##\gg\delta t## but ##\ll\Delta t## seconds to two identically built devices operated by Alice and Bob, located symmetrically more than 1km apart from each other and from Norbert. (Here ##0\ll\delta t\ll\Delta t## are fixed real numbers.)

Something you might want to think about: why do you need a Norbert at all? You later state that you make no assumptions about what signals Norbert is emitting or who he is sending them to. Presumably the time of emission also shouldn't be critical to the analysis. So why not just drop Norbert? If you want to keep the scenario as generic and black box as possible, then just have an Alice and a Bob each choosing from a set of possible measurements and recording one of a set of possible results.
2. Each device has a pointer that can take 3 values and a red and blue light that can possibly light up for a time interval ##\ll\delta t## when a signal arrives - if this happens, this is called an event.
5. They postselect events in their records by discarding events whenever their total number within a time interval of ##\delta t## differs from 2, or equals 2 but both events are on the side of Alice or both on the side of Bob. They also discard events within ##\delta t## of a pointer switch.
As a consequence, all remaining events occur in pairs consisting of one event ##A## on the side of Alice and one event ##B## on the side of Bob, and to each event there are well-defined pointer settings ##p_A,p_B## of the devices of Alice and Bob. Alice and Bob characterize each event through the numbers ##c_A,c_B## defined by ##c_X:=## 1 if ##X## is red, 2 if ##X## is blue.

This kind of postselection (deciding what you count as an event based on the results obtained) is dangerous: it's possible for a local hidden variable model to effectively fake a Bell violation if you postselect the results like this. In general you need to decide what will be counted as an event in advance.

The simplest way to do this that fits the requirement is to use predefined time windows: require that Alice and Bob choose measurements ##x_{n}## and ##y_{n}## (##n \in \mathbb{N}##) at or just after times ##t_{0} + n \Delta t## (in some reference frame) and must record corresponding outcomes ##a_{n}## and ##b_{n}## at or before times ##t_{0} + n \Delta t + \delta t##, where ##\delta t <\Delta t## is chosen such that light would take longer than ##\delta t## to travel between Alice and Bob.
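This window-based acceptance rule can be sketched as follows (the function name and record format are my own, purely for illustration): events are assigned to windows fixed in advance, so whether an event counts never depends on the observed outcomes.

```python
def bin_events(event_times, t0, window, accept):
    """Assign timestamped events to predefined time windows.

    Window n opens at t0 + n*window; an event is accepted only if it
    falls within `accept` seconds of the window opening.  The windows
    are fixed before the data are seen, which avoids outcome-dependent
    postselection.
    """
    binned = {}
    for t in event_times:
        n = int((t - t0) // window)
        offset = t - (t0 + n * window)
        if 0 <= offset <= accept:
            binned.setdefault(n, []).append(t)
    return binned

# Example: windows of 1.0 s, acceptance interval 0.2 s.
bins = bin_events([0.1, 0.9, 1.05], t0=0.0, window=1.0, accept=0.2)
# The event at 0.9 s falls outside every acceptance interval and is dropped.
```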

Also, why are you saying that Alice's and Bob's pointers take three values? Why not two (e.g., for CHSH), or ##N##, or even ##N_{\mathrm{A}}## and ##N_{\mathrm{B}}## for Alice and Bob individually?
6. Alice and Bob summarize their experiment by calculating two ##2\times 2## matrices ##E## and ##F## of statistical observables whose entries are the relative frequency ##E_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A=p_B## and the relative frequency ##F_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A\ne p_B##.

Depending on how general you want to be, already defining a summary of the statistics might be premature. If you're willing to assume the underlying explanation for the results is i.i.d. (which is reasonable if you want to keep things simple to begin with), then the usual object of study is the probability ##P(ab \mid xy)## that Alice and Bob get results ##a## and ##b## conditioned on performing measurements ##x## and ##y##. Assuming things are i.i.d., ##P(ab \mid xy)## is in principle well defined and summarises the experimental results. If you want to abandon the i.i.d. assumption then things might get more complicated.
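Under the i.i.d. assumption, ##P(ab \mid xy)## is estimated by conditional relative frequencies. A minimal sketch (the trial-record format is an assumption for illustration):

```python
from collections import Counter

def estimate_conditional(records):
    """Estimate P(a, b | x, y) from a list of i.i.d. trial records,
    each a tuple (x, y, a, b) of settings and outcomes."""
    setting_counts = Counter((x, y) for x, y, a, b in records)
    joint_counts = Counter(records)
    # Relative frequency of (a, b) among the trials with settings (x, y).
    return {(a, b, x, y): joint_counts[(x, y, a, b)] / setting_counts[(x, y)]
            for (x, y, a, b) in joint_counts}
```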
 
Last edited:
  • #12
wle said:
Also, why are you saying that Alice's and Bob's pointers take three values? Why not two (e.g., for CHSH), or ##N##, or even ##N_{\mathrm{A}}## and ##N_{\mathrm{B}}## for Alice and Bob individually?

I can't see anything in the framework that says Bob and Alice must use all 3 settings. Clarification is required though.
 
  • #13
I'm just wondering where the number three comes from. If he's using a small fixed number of measurements for simplicity then the simplest Bell inequality only needs two. If he explicitly wants to be more general then the number of measurements can just as well be ##N##.
 
  • #14
wle said:
I'm just wondering where the number three comes from. If he's using a small fixed number of measurements for simplicity then the simplest Bell inequality only needs two. If he explicitly wants to be more general then the number of measurements can just as well be ##N##.

I always go for three, because it's the easiest way to see the weirdness of quantum statistics. Three was also used by Dr. Chinese in his essay here:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm

The statement of Bell's inequality uses 4 settings: Two for Alice and two for Bob. In my opinion, it's the perfect anti-correlations that are the most stark fact about EPR, and those only show up if Alice and Bob have the possibility of making the same choices. But if Alice can choose settings [itex]\alpha_1[/itex] or [itex]\alpha_2[/itex], and Bob has the same two choices, you don't have enough statistics to rule out hidden variables. The following hidden-variable theory explains the statistics perfectly:
  1. With probability [itex]\frac{1}{2} \cos^2(\frac{\theta}{2})[/itex], Alice's particle will be measured to be spin-up along either axis ([itex]\alpha_1[/itex] or [itex]\alpha_2[/itex]), and Bob's particle will be measured to be spin-down along those axes (where [itex]\theta[/itex] is the angle between the axes).
  2. With probability [itex]\frac{1}{2} \sin^2(\frac{\theta}{2})[/itex], Alice's particle will be measured to be spin-up along axis [itex]\alpha_1[/itex], and will be measured to be spin-down along axis [itex]\alpha_2[/itex], and Bob's particle will be measured to be the opposite.
  3. With probability [itex]\frac{1}{2} \cos^2(\frac{\theta}{2})[/itex], Alice's particle will be measured to be spin-down along either axis ([itex]\alpha_1[/itex] or [itex]\alpha_2[/itex]), and Bob's particle will be measured to be spin-up along those axes.
  4. With probability [itex]\frac{1}{2} \sin^2(\frac{\theta}{2})[/itex], Alice's particle will be measured to be spin-down along axis [itex]\alpha_1[/itex], and will be measured to be spin-up along axis [itex]\alpha_2[/itex], and Bob's particle will be measured to be the opposite.
With three choices, you can show that no hidden variable theory works, but not with just two.
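One can check mechanically that this four-branch model reproduces the singlet statistics for two settings. The sketch below (my own encoding of the branches, with outcomes ##\pm 1##) compares the model against the quantum prediction ##P(a,b \mid j,k) = \frac{1}{4}(1 - ab\cos\theta_{jk})##:

```python
import numpy as np

def hv_vs_qm(theta):
    """Return the largest discrepancy between the four-branch
    hidden-variable model and the singlet-state prediction
    P(a, b | j, k) = (1 - a*b*cos(angle_jk)) / 4 for two settings."""
    c2 = 0.5 * np.cos(theta / 2) ** 2
    s2 = 0.5 * np.sin(theta / 2) ** 2
    # Each branch: (probability, Alice's outcomes on (a1, a2), Bob's).
    branches = [
        (c2, (+1, +1), (-1, -1)),
        (s2, (+1, -1), (-1, +1)),
        (c2, (-1, -1), (+1, +1)),
        (s2, (-1, +1), (+1, -1)),
    ]
    angles = [[0.0, theta], [theta, 0.0]]  # angle between axes j and k
    max_err = 0.0
    for j in range(2):
        for k in range(2):
            for a in (+1, -1):
                for b in (+1, -1):
                    p_hv = sum(p for p, A, B in branches
                               if A[j] == a and B[k] == b)
                    p_qm = (1 - a * b * np.cos(angles[j][k])) / 4
                    max_err = max(max_err, abs(p_hv - p_qm))
    return max_err
```

The discrepancy vanishes (up to floating-point error) for every angle, illustrating that two settings per side with shared axes are not enough to exclude such a model.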
 
  • #15
stevendaryl said:
With three choices, you can show that no hidden variable theory works, but not with just two.

You can with two if the state is non-maximally entangled: http://arxiv.org/pdf/quant-ph/0512025.pdf
 
  • #16
I edited my original setting to make the following amendment
A. Neumaier said:
how certain specific signals sent by Norbert (introduced to be able to make appropriate choices) may affect the observation devices.

7. Other ways of analyzing the full experimental record (i.e., before postselection) are acceptable for auxiliary purposes such as checking the efficiency of transmission and detection. However, the sole goal of the experiment is to study the correlations expressed in the matrices ##E## and ##F##.
in order to meet the criticism of wle.
wle said:
why not just drop Norbert?
wle said:
This kind of postselection (deciding what you count as an event based on the results obtained) is dangerous:
 
  • #17
wle said:
Here's a couple of criticisms made from the point of view that this is supposed to be a Bell test. You've stated that you want to keep this open for now, but I assume that the main point is to abstract Bell experiments (possibly among other things) and if it doesn't capture a Bell test then you'll want to change it.

Something you might want to think about: why do you need a Norbert at all? You later state that you make no assumptions about what signals Norbert is emitting or who he is sending them to. Presumably the time of emission also shouldn't be critical to the analysis. So why not just drop Norbert? If you want to keep the scenario as generic and black box as possible, then just have an Alice and a Bob each choosing from a set of possible measurements and recording one of a set of possible results.

This kind of postselection (deciding what you count as an event based on the results obtained) is dangerous: it's possible for a local hidden variable model to effectively fake a Bell violation if you postselect the results like this. In general you need to decide what will be counted as an event in advance.

The simplest way to do this that fits the requirement is to use predefined time windows: require that Alice and Bob choose measurements ##x_{n}## and ##y_{n}## (##n \in \mathbb{N}##) at or just after times ##t_{0} + n \Delta t## (in some reference frame) and must record corresponding outcomes ##a_{n}## and ##b_{n}## at or before times ##t_{0} + n \Delta t + \delta t##, where ##\delta t <\Delta t## is chosen such that light would take longer than ##\delta t## to travel between Alice and Bob.

Also, why are you saying that Alice's and Bob's pointers take three values? Why not two (e.g., for CHSH), or ##N##, or even ##N_{\mathrm{A}}## and ##N_{\mathrm{B}}## for Alice and Bob individually?

Depending on how general you want to be, already defining a summary of the statistics might be premature. If you're willing to assume the underlying explanation for the results is i.i.d. (which is reasonable if you want to keep things simple to begin with), then the usual object of study is the probability ##P(ab \mid xy)## that Alice and Bob get results ##a## and ##b## conditioned on performing measurements ##x## and ##y##. Assuming things are i.i.d., ##P(ab \mid xy)## is in principle well defined and summarises the experimental results. If you want to abandon the i.i.d. assumption then things might get more complicated.
I introduced Norbert in analogy to Alice and Bob (who are dispensable as well in a minimal setting) so that one can talk about all degrees of freedom in the traditional personalized way. This is only a figure of speech; nothing depends on it: Norbert, Alice and Bob are not human beings but the control programs behind the automatized source control and detector controls, respectively. Actually, to make this perfect, I'll add in a moment another change to the setting, introducing Yvonne, who does the postselection instead of Alice and Bob.

I was taking stevendaryl's picture as blueprint. It contained 3 pointer settings, so I assumed them. This covers the 2-pointer setting, since nothing was specified about how the pointer affects the results. (This is one of the strengths of the setting.) It is easy to wire a concrete detector such that pointers 2 and 3 have exactly the same effect on the lights.

Also, stevendaryl didn't refer to Bell experiments, so I didn't either. Bell is relevant only for one special case of the analysis - when the underlying hypothetical theory is a local hidden variable theory of particles moving along the transmission lines. At the present stage of the discussion, the only thing that needs to be ensured is that, using Nature rather than a hypothetical model, Norbert can prepare at least one kind of signal resulting in matrices ##E## and ##F## violating a Bell-type inequality. I trust that stevendaryl made his original proposal with that in mind.

For simplicity, I also assumed perfect symmetry between Alice and Bob.

I'll add statements about independence and timing.

Which particular items in the postselection protocol give rise to the loophole you claimed exists? I don't see how my postselection scheme is essentially different from yours. Note that the postselection scheme is known in advance, hence what are the final events (pairs counted) is decided in advance, as you required.
 
Last edited:
  • #18
As just promised, I reattributed some items to Yvonne, and added the following additional rule of the game:
A. Neumaier said:
8. Norbert, Alice, Bob, and Yvonne are not human beings but simply names for elementary control programs behind the automatized source control, detector controls, and postselection control, respectively. In particular, they are assumed not to have any artificial intelligence; hence they have neither knowledge nor a capability for being surprised. It is you, the analyst of the experiment, who - some time after the whole experiment is over - sees Yvonne's report of the experimental data including their analysis, and who has this knowledge and may or may not be surprised, depending on your world view.
Since nothing changed in the data collection and analysis, this should not affect the scientific content of the setting, but removes any trace of anthropomorphism.
 
Last edited:
  • #19
I also added the following:
A. Neumaier said:
9. Alice and Bob perform their experiments synchronously with Norbert's signals, accounting for the delay due to transmission. The devices are shielded from other external influences to the extent current technology allows. The analysis will have to make allowance for corresponding imperfections.
 
  • #20
Nugatory said:
Are you assuming that [...] the content of each signal is independent of the content of the previous ones (if not, conspiratorial theories will be allowed)?
This is now required.
 
  • #21
wle said:
why do you need a Norbert at all?
A. Neumaier said:
so that one can talk about all degrees of freedom in the traditional personalized way.
Norbert is needed to be able to talk about the result of the experiment for different choices of Norbert's signals (in repeated experiments) without having to invoke counterfactual reasoning.
 
  • #22
After rereading the 5-point description of stevendaryl's original experiment, I noticed that in my setting one was no longer allowed to speak about what Alice and Bob observe, considered independently (see his point 3). I rectified this. The main changes are in points 3 and 4, but I also made various corresponding changes later.

Therefore please reread the complete description in the updated post #2 and check that it is possible to consistently talk about everything of relevance to the interpretation of the experiment without any allusion to human features before the matrices with the statistical summaries reach you, the human analyzer.
 
Last edited:
  • #23
A. Neumaier said:
6. Yvonne summarizes the experiment of Alice and Bob by calculating two ##2\times 2## matrices ##E## and ##F## of statistical observables whose entries are the relative frequency ##E_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A=p_B## and the relative frequency ##F_{ik}## of pairs where ##c_A=i,~c_B=k,~p_A\ne p_B##. She sends these matrices to you, the analyzer.
Why only two matrices? Yvonne could calculate nine matrices (three for the same settings and six for different settings). For example, in real experiments, visibilities for perfect correlations are reported separately for each measurement angle. And we can always reduce these nine matrices to the two.
 
  • #24
zonde said:
Yvonne could calculate nine matrices (three for the same settings and six for different settings).
Yes, but the fewer observables the simpler the later analysis. You will see that there is nontrivial work to do in the next stage, and I want to minimize this work. Therefore I want to keep the number of observables to the minimum, while featuring the same discriminative power as stevendaryl's original setting had.

Yvonne could calculate a lot more, since she has lots of data and one can take the mean of any function of them. For example, she could make separate statistics for data collected at dawn, at daylight, and at night, if she thinks that there might be problems in shielding the detectors from natural light. (Just a made-up possibility to indicate the kind of observables that are ignored in my setting, and in this particular case also by everyone else.) But when you, the analyzer, try to publish the results in Phys. Rev. Lett., there is a 4-page limit (actually, as wle points out further below, it is now a 3500-word limit) to describe and justify everything - your motivation, your experimental setting, the choices of the details, the statistics, your conclusions, and all references; clearly, the fewer statistics that must be explained and displayed, the better. You just need to pick the statistical observables that produce the aha effect in the most pronounced way. This makes for successful papers - Occam's razor everywhere except where it eliminates the key information. (Einstein: ''Everything should be made as simple as possible, but not simpler''.)

Thus please simply check whether you can find in the current setup a setting where quantum mechanics predicts something weird enough to be worth discussing. If you can find one (and I take stevendaryl's post from which I have taken the setting as an indicator that one can), the setting is sufficient and there is no need for additional observables.
 
Last edited:
  • #25
zonde said:
And we can always reduce the matrices to two from these nine.
By the same token, you should never do any statistics but always only consider the complete raw data, since we can always reduce the raw data to the final statistics.
 
  • #26
A. Neumaier said:
By the same token, you should never do any statistics but always only consider the complete raw data since we can always reduce the raw data to the final ones.
In point 10 the data is published, and it is only the matrices that are published. It would be much better, IMO, if experiments in journals were published with raw data download links. But if not, then at least a bit more data is better than just the absolute minimum necessary to give the key findings of the experiment.

But I suppose you have some idea about the setup and then it depends on you, do you intend to break the symmetry of three measurement settings or not. If you intend to keep possible explanation symmetric then of course there is not much point in giving separate matrices.
 
  • #27
zonde said:
It would be much better if experiments in journals would be published with raw data download links IMO.
To satisfy you, I amended the basic setting to allow posting more data on the web. (But I guess almost nobody will read them, except those interested in repeating the experiment. Did you ever read the web supplement of any article you downloaded?)

I don't think it will make a significant difference to the remainder of our discussion.
 
  • #28
A. Neumaier said:
Did you ever read the web supplement of any article you downloaded?
I did an analysis of the dataset from Weihs's experiment http://arxiv.org/abs/quant-ph/9810080, and it gave me a much better understanding of the experimental side of entanglement.
 
  • #29
stevendaryl said:
The statement of Bell's inequality uses 4 settings: Two for Alice and two for Bob.

This is what I meant.

With three choices, you can show that no hidden variable theory works, but not with just two.

"Three choices" (in total) doesn't make sense unless you claim that one of Alice's measurements is "the same" as one of Bob's. I think this is misleading since, even if you think of the measurements as being along certain spatial axes, the relative orientation between Alice and Bob is completely unimportant in Bell's theorem.

To clarify, note that, by your way of counting, it's possible to obtain the maximal quantum violation of CHSH with just two measurements. The maximal CHSH violation is usually described as happening if Alice and Bob measure the operators ##A = \sigma_{z}## and ##A' = \sigma_{x}##, and ##B = (\sigma_{z} + \sigma_{x}) / \sqrt{2}## and ##B' = (\sigma_{z} - \sigma_{x}) / \sqrt{2}##, respectively on the state ##\lvert \Phi_{+} \rangle = \bigl( \lvert 0 \rangle \lvert 0 \rangle + \lvert 1 \rangle \lvert 1 \rangle \bigr) / \sqrt{2}##, but you could just as well rotate Bob's apparatus so it measures ##B = \sigma_{z}## and ##B' = \sigma_{x}## (like Alice) and rotate Bob's half of the entangled state and get exactly the same result.
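As a numerical sanity check of the quoted value, the CHSH correlator for exactly these operators and this state can be evaluated directly; a minimal sketch (NumPy assumed available, not part of the original discussion):

```python
import numpy as np

# Pauli matrices
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# Alice's and Bob's measurement operators as given above
A, Ap = sz, sx
B = (sz + sx) / np.sqrt(2)
Bp = (sz - sx) / np.sqrt(2)

# |Phi+> = (|00> + |11>)/sqrt(2) in the basis (|00>, |01>, |10>, |11>)
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def corr(OA, OB):
    # Expectation value <Phi+| OA (x) OB |Phi+>
    return phi @ np.kron(OA, OB) @ phi

# CHSH combination S = <AB> + <AB'> + <A'B> - <A'B'>
S = corr(A, B) + corr(A, Bp) + corr(Ap, B) - corr(Ap, Bp)
print(S)  # 2*sqrt(2) ≈ 2.828, the maximal quantum (Tsirelson) value
```

Rotating Bob's operators and his half of the state by the same unitary leaves every `corr` value, and hence `S`, unchanged, which is the point being made above.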
 
  • #30
wle said:
"Three choices" doesn't make sense unless you claim that one of Alice's measurements is "the same" as one of Bob's.

They each choose one axis out of the same set of 3 possible axes.

To clarify, note that, by your way of counting, it's possible to obtain the maximal quantum violation of CHSH with just two measurements. The maximal CHSH violation is usually described as happening if Alice and Bob measure the operators ##A = \sigma_{z}## and ##A' = \sigma_{x}##, and ##B = (\sigma_{z} + \sigma_{x}) / \sqrt{2}## and ##B' = (\sigma_{z} - \sigma_{x}) / \sqrt{2}##, respectively on the state ##\lvert \Phi_{+} \rangle = \bigl( \lvert 0 \rangle \lvert 0 \rangle + \lvert 1 \rangle \lvert 1 \rangle \bigr) / \sqrt{2}##, but you could just as well rotate Bob's apparatus so it measures ##B = \sigma_{z}## and ##B' = \sigma_{x}## (like Alice) and rotate Bob's half of the entangled state and get exactly the same result.

As I said, if Alice and Bob are choosing from disjoint sets of choices, then the perfect anti-correlation doesn't come into play. You're right that you can prove a violation of Bell's inequality in that case. But the advantage (in my opinion) of the three axes, where Alice and Bob choose from the same set, is that it's pure algebra to show that there can't be a hidden variable explanation.

If, for each pair, it is determined at the time of pair creation what Alice's result will be for each of the 3 directions, then there are 8 possible types of hidden variable:
  1. UUU: This type gives spin-up for all three directions.
  2. UUD: This type gives spin-up for the first two directions, and down for the third direction.
  3. UDU:
  4. UDD
  5. DUU
  6. DUD
  7. DDU
  8. DDD
From the rotational symmetry of the three directions, we can argue (with more work, you can show that the statistics imply this) that there are only two distinct probabilities involved:
  1. x = the probability of each of the two types where all three outcomes are the same (all-up or all-down)
  2. y = the probability of each of the six types where two outcomes are the same and the third is different
The statistics show that:
2x + 2y = 1/4 (for any fixed pair of directions, exactly two all-same types and two mixed types give the same result at both, and the observed probability of agreement is 1/4)
2x + 6y = 1 (the probabilities of all eight types must sum to 1)

These constraints imply that ##y = \frac{3}{16}## and ##x = -\frac{1}{16}##, which is impossible, since probabilities must be nonnegative.
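The counting of types and the resulting linear system can be checked mechanically; a minimal sketch in Python (NumPy assumed available, not part of the original argument):

```python
import itertools
import numpy as np

# The 8 hidden-variable types: a predetermined outcome (U or D)
# for each of the three directions.
types = [''.join(t) for t in itertools.product('UD', repeat=3)]

# Types that give the same outcome for directions 1 and 2:
agree = [t for t in types if t[0] == t[1]]
print(agree)  # ['UUU', 'UUD', 'DDU', 'DDD']: two all-same and two mixed types

# Constraints on the per-type probabilities (x, y):
#   2x + 2y = 1/4  (agreement probability for any fixed pair of directions)
#   2x + 6y = 1    (the eight type probabilities sum to 1)
M = np.array([[2.0, 2.0],
              [2.0, 6.0]])
b = np.array([0.25, 1.0])
x, y = np.linalg.solve(M, b)
print(x, y)  # x = -0.0625 = -1/16, y = 0.1875 = 3/16
# x is negative, so no genuine probability assignment to the
# eight types can reproduce the quantum statistics.
```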
 
  • #31
It seems that the discussion of the basic setting has subsided, so that the setting is now stable and can be frozen.

I'd therefore like to ask zonde, ddd123, and stevendaryl, who in the other thread (#246, #251, #267) had expressed interest (in a discussion of stevendaryl's original setting), either to express further reservations to the exposition in post #2, or to agree to the freeze, so that I can go on to stage two of the specification.
 
  • #32
A. Neumaier said:
It seems that the discussion of the basic setting has subsided, so that the setting is now stable and can be frozen.

I'd therefore like to ask zonde, ddd123, and stevendaryl, who in the other thread (#246, #251, #267) had expressed interest (in a discussion of stevendaryl's original setting), either to express further reservations to the exposition in post #2, or to agree to the freeze, so that I can go on to stage two of the specification.

Fine with me.
 
  • #33
A. Neumaier said:
I was taking stevendaryl's picture as blueprint. It contained 3 pointer settings, so I assumed them.

Why are you using stevendaryl's picture as a starting point? The general idea you're starting to describe (i.e., modelling Alice and Bob as black box devices that accept inputs and emit outputs) isn't new. It's how Bell presented his result in essays he wrote between 1975 and 1990. It's also more or less how theorists working in the field think about Bell experiments (see, for instance, this review article, starting with the description of figure 1 on page 2). Have you checked that the problem you're starting to describe hasn't already been solved?

Which particular items in the postselection protocol give rise to the loophole you claimed exists?

I was referring to the detection loophole. You had Alice and Bob (now partially Yvonne) deciding what to record as an event based on when they observe a result and what the result is (specifically, whether and how many lights they see go on). A local hidden variable model can potentially exploit this to fake a Bell violation. This is a well known loophole in experimental Bell tests.

A. Neumaier said:
Yes, but the fewer observables the simpler the later analysis.

Not necessarily. One joint probability distribution ##P(ab \mid xy)## isn't conceptually difficult to reason about. Then the general problem is to decide if a given probability distribution is "strange" or "quantum" or "nonlocal" or whatever it is you want to study. With regard to Bell's theorem, there are some very useful general observations that can be made about local probability distributions before concentrating attention on a more restricted scenario and/or statistic.

But when you, the analyzer, try to publish the results in Phys. Rev. Lett., there is a 4-page limit to describe and justify everything: your motivation, your experimental setting, the choices of the details, the statistics, your conclusions, and all references; clearly, the fewer statistics that must be explained and displayed, the better. You just need to pick the statistical observables that produce the aha effect in the most pronounced way.

In real-life research papers (including PRLs) reporting a Bell experiment, the main result is normally just one statistic (the value of a Bell correlator such as CHSH) along with some measure of confidence. In theoretical research papers on Bell and similar scenarios, it's quite common to start by imagining one has access to the full table of joint conditional probabilities and to take this as the object of study.
 
  • #34
wle said:
Why are you using stevendaryl's picture as a starting point?
Because he created the picture during the discussion that gave rise to the present thread, and because his description was complete but didn't refer to Bell's theorem, to particles, to knowledge, or to entanglement, but just to observable stuff, postulating a particular outcome. If there are earlier such descriptions in the literature, it matters for the history but not for the present discussion.

Concerning loopholes, I don't think this is very relevant for this thread. We are not going to prove that no local realist explanation is possible. The concern of this thread is about weirdness and how it is caused by language, not about nonlocal correlations and how they cannot be caused by local hidden variables.

I don't think the results of nonlocality experiments would be less weird if it were impossible to close all loopholes. At least, 15 years ago, I already thought that Aspect's experiment (together with the fact that QM proved to be the right description of all microscopic phenomena) was proof enough for establishing nonlocal correlations. But I still found everything weird at the time.

As I allowed for supplementary material, the remaining information, while informative, doesn't affect the basic setting. It didn't mention the specific limitations of PRL.

wle said:
it's quite common to start by imagining
whereas I deliberately eliminated any human imagination from my basic setting.
 
  • #35
Well, we can't rule out that one way to remove the weirdness would involve going along the lines of loopholes. But concerning postselection: if, as you say, we will consider asymptotic probabilities, since we have no practical constraints, I don't see much of a possibility for that loophole anymore. Maybe wle disagrees with this.
 
