# An abstract long-distance correlation experiment

1. Jan 17, 2016

### A. Neumaier

Inspired by stevendaryl's description of an EPR-like setting that doesn't refer to a particle concept, I want to discuss in this thread a generalized form of his setting that features a class of long-distance correlation experiments but abstracts from all distracting elements of reality and from all distracting elements of imagination, thus allowing the analysis to concentrate on the essentials. Pictorially, the setting is identical to the one pictured in stevendaryl's post. But the interpretation of the figure (to be discussed below) is optimized, so that one cannot speak of many things that usually obscure nonlocality discussions.

Note that my goal in this discussion is not to prove or disprove local realism in the conventional form, but (in line with the originating thread) to investigate weirdness in quantum mechanics and its dependence on the language chosen, using this specific experimental arrangement.

Please keep this thread free from discussion of other settings for experiments related to nonlocality.

If you think the thread is too long but want to know the outcome from my perspective, you may jump directly to my main conclusions in post #187 (where I conclude that anything nonlocal is due to the intelligence of an observer) and post #197 (where I define a Lorentz invariant notion of causality sufficient to exclude superluminal signalling but far weaker than the unrealistic causality assumptions made in the derivation of Bell-type theorems).

Last edited: Feb 7, 2016
2. Jan 17, 2016

### A. Neumaier

Below is a description of the basic experimental setting. All participants interested in the discussion, and in particular stevendaryl, are invited to comment on whether the setting is suitable to accommodate both experiments that display fully classical behavior and experiments that display fully quantum behavior representative of long-distance nonlocality experiments in Bell's sense. In addition, many other experiments match the setting, e.g., those that check that the devices work individually as prescribed, by sending signals to only one of the devices.

Improvements were made to the original setting to accommodate valid criticism discussed in posts #3-#48 below. Since the basic setting is now stable, it is no longer open for discussion. If you want to go immediately to stage 2, you may continue with post #49, where I introduce additional features that impose more structure, again asking for your participation to make everything as clear and constructive as possible.

The basic setting
1. A source operated by Norbert sends a sequence of independent, identically distributed signals with temporal spacing $\gg\delta t$ but $\ll\Delta t$ to two identically built devices operated by Alice and Bob, located symmetrically more than 1 km apart from each other and from Norbert. (Here $\delta t$ and $\Delta t$ are fixed positive real numbers with $\delta t\ll\Delta t$ and $\delta t\le 2.99\,\mu s$, less than the time light needs to travel 1 km.)

2. Each device has a pointer that can take 3 values and a red and a blue light that can possibly light up for a time interval $\ll\delta t$ when a signal arrives - if this happens, this is called an event.

3. Alice and Bob randomly, uniformly, and independently change their pointer settings every $\Delta t$ seconds. Both keep a record of the time and the pointer setting of any event on their side, together with the color of the light observed. Both purify their record by omitting all events where two lights light up on their own detector within a time interval of $\delta t$. They also discard events within $\delta t$ of their own pointer switch. The remaining events are called pure events.

4. After Alice and Bob have independently collected their data for $\gg\Delta t$ seconds, each calculates a $2\times 3$ matrix of statistical observables ($A$ for Alice, $B$ for Bob) whose entries are the relative frequencies $X_{is}$ of pure events where $c_X=i,~p_X=s$ ($X=A,B$; the color $c_X$ and pointer setting $p_X$ are defined in item 5). They send their $2\times 3$ matrices to you, the analyzer. In addition, they send their raw data to Yvonne, who evaluates them according to the following protocol for creating the statistics.

5. Yvonne postselects events in the raw data received from Alice and Bob by discarding events when their total number within a time interval of $\delta t$ is different from 2, or equals 2 but both events are on Alice's side or both on Bob's side. She also discards events within $\delta t$ of a pointer switch. As a consequence, all remaining events are pure and occur in pairs consisting of one event $A$ on the side of Alice and one event $B$ on the side of Bob, and to each event there are well-defined pointer settings $p_A,p_B$ of the devices of Alice and Bob. Each event is characterized through the numbers $c_A,c_B$ defined by $c_X:=$ 1 if the light of $X$ is red, 2 if it is blue.

6. Yvonne summarizes the experiment of Alice and Bob by calculating two $2\times 2$ matrices $E$ and $F$ of statistical observables whose entries are the relative frequency $E_{ik}$ of pairs where $c_A=i,~c_B=k,~p_A=p_B$ and the relative frequency $F_{ik}$ of pairs where $c_A=i,~c_B=k,~p_A\ne p_B$. In addition, Yvonne checks whether the total number of discarded events is within 10% of the total number of remaining events. She sends the matrices $E,F$ to you, the analyzer, if this is the case; otherwise she reports to you failure of the experiment due to lack of care in the setup.

7. Other ways of analyzing the full experimental record (i.e., before postselection) are acceptable for auxiliary purposes such as checking the efficiency of transmission and detection. However, the sole goal of the experiment is to study the correlations expressed in the matrices $A,B,E,F$.

8. Alice and Bob perform their experiments synchronously with Norbert's signals, accounting for the delay due to transmission. The devices are shielded from other external influences to the extent current technology allows. The analysis will have to make allowance for corresponding imperfections.

9. Norbert, Alice, Bob, and Yvonne are not human beings but simply names for elementary control programs behind the automatized source control, the two detector controls, and the postselection control, respectively. In particular, they are assumed not to have any artificial intelligence; hence they have neither knowledge nor a capability for being surprised.

10. Some time after the whole experiment is over, you, the analyst of the experiment, read Yvonne's report of the experimental data, including the four matrices with the summary statistics. After checking that no mistake has been made, you publish the four matrices $A,B,E,F$ in a scientific journal. (Possibly you post refined statistics in a web supplement in addition.) You and the readers of the journal are the ones who have knowledge and therefore may or may not be surprised about the published findings, depending on your world view.

The matrices $A,B,E,F$ are the published output of the experiment. They
(and potentially more) are to be predicted by various existing or hypothetical theories for how certain specific signals sent by Norbert may affect the observation devices.
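For concreteness, items 1-10 above can be sketched as a toy simulation. The signal model below (one perfectly anticorrelated color pair per window, with detection probability `p_detect`) is purely an illustrative placeholder of mine, not part of the setting, which deliberately leaves the nature of the signals open:

```python
import random

def run_experiment(n_trials=10000, p_detect=0.8, seed=0):
    """Toy sketch of the protocol: one hypothetical signal per window.

    Each window, Alice and Bob pick a pointer in {1,2,3}; each side
    independently registers a colored light (1=red, 2=blue) with
    probability p_detect.  The anticorrelated color model is a
    placeholder; any signal model could be substituted here.
    """
    rng = random.Random(seed)
    a_events, b_events, pairs = [], [], []
    for _ in range(n_trials):
        pa, pb = rng.randint(1, 3), rng.randint(1, 3)  # pointer settings
        ca = rng.randint(1, 2)      # placeholder source: anticorrelated
        cb = 3 - ca                 # colors when both lights fire
        hit_a = rng.random() < p_detect
        hit_b = rng.random() < p_detect
        if hit_a:
            a_events.append((ca, pa))
        if hit_b:
            b_events.append((cb, pb))
        if hit_a and hit_b:         # Yvonne keeps only complete pairs
            pairs.append((ca, cb, pa, pb))

    def freq_matrix(events):
        # 2x3 matrix of relative frequencies: row = color, column = pointer
        m = [[0.0] * 3 for _ in range(2)]
        for c, p in events:
            m[c - 1][p - 1] += 1.0 / len(events)
        return m

    A, B = freq_matrix(a_events), freq_matrix(b_events)
    E = [[0.0, 0.0], [0.0, 0.0]]    # pairs with equal pointer settings
    F = [[0.0, 0.0], [0.0, 0.0]]    # pairs with unequal pointer settings
    for ca, cb, pa, pb in pairs:
        (E if pa == pb else F)[ca - 1][cb - 1] += 1.0 / len(pairs)
    return A, B, E, F
```

With this placeholder source the diagonal of $E$ vanishes (perfect anticorrelation at equal settings); a different choice of source model would fill the matrices differently, which is exactly the freedom the setting is meant to expose.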

Last edited: Jan 20, 2016
3. Jan 17, 2016

### stevendaryl

Staff Emeritus
Just for historical accuracy, even though I made up this set-up from memory, Bell sketched the same sort of setup in his book "Speakable and Unspeakable in Quantum Mechanics", in the chapter "Bertlmann's socks and the nature of reality", section number 4.

4. Jan 17, 2016

### Mentz114

Does Norbert send a signal to B every time a signal is sent to A, or is Norbert selecting which direction to send each signal ?
(Yes, I cannot follow simple English.)

5. Jan 17, 2016

### A. Neumaier

Nothing specific is being said so far about the nature of the signal.

It could (for example) be no signal at all, a signal sent to one or both of the detectors only, two different signals sent to Alice and Bob, a coherent superposition of quantum states, a mixture of classical or quantum states, or signals loaded with hidden variable information. Whatever the (real or hypothetical) theory underlying the subsequent analysis allows and accounts for.

This will allow an objective discussion without being bogged down by possibly restrictive assumptions on the way the signals are prepared, transmitted, or detected.

6. Jan 17, 2016

### Mentz114

Thank you.

7. Jan 17, 2016

### Staff: Mentor

Someone has described this exact thought experiment already - Bell, says stevendaryl, and I seem to recall seeing it elsewhere as well. One of Aspect's retrospectives? It seems a good framework for discussion, although it becomes more interesting when you add statements about the correlations that Alice and Bob find.

Are you assuming that Norbert's signals are not transmitted superluminally? And also that the content of each signal is independent of the content of the previous ones (if not, conspiratorial theories will be allowed)?

8. Jan 17, 2016

### A. Neumaier

At this point, nothing is assumed. The assumptions will be part of the theory that models the way predictions are made, and can be different for different theories. For example, if the signals consist of thermal waves satisfying a parabolic equation, information transmission is instantaneous, while if your model is relativistic, this is forbidden.

At present I am just creating the framework, interactively with all of you. The main question at present is if the framework is deemed wide enough such that everyone taking part can accommodate on this abstract level one instance of their favorite explanatory theory, be it Bohmian mechanics, or quantum mechanics with collapse, or a particle-free field theory, or whatever one of you may come up with. Concrete matrices and theories may be specified at a later stage.

9. Jan 17, 2016

### ddd123

I'm guessing in the case you don't want postselection you can be more specific about the signal so that no events are discarded.

10. Jan 17, 2016

### A. Neumaier

Since in the thought experiment there is no need to optimize the number of valid observations, discarding some of the signals doesn't matter; it cannot change the observed asymptotic probabilities. I think postselection is always beneficial since it reduces artifacts coming from detector inefficiencies and from detector sensitivity to signals that come from the environment rather than from Norbert's source. Otherwise I'd have added another statistical observable that counts the number of data mismatches. But I think this number doesn't tell much about matters of principle, hence can be safely ignored.

11. Jan 17, 2016

### wle

Here's a couple of criticisms made from the point of view that this is supposed to be a Bell test. You've stated that you want to keep this open for now, but I assume that the main point is to abstract Bell experiments (possibly among other things) and if it doesn't capture a Bell test then you'll want to change it.

Something you might want to think about: why do you need a Norbert at all? You later state that you make no assumptions about what signals Norbert is emitting or who he is sending them to. Presumably the time of emission also shouldn't be critical to the analysis. So why not just drop Norbert? If you want to keep the scenario as generic and black box as possible, then just have an Alice and a Bob each choosing from a set of possible measurements and recording one of a set of possible results.

This kind of postselection (deciding what you count as an event based on the results obtained) is dangerous: it's possible for a local hidden variable model to effectively fake a Bell violation if you postselect the results like this. In general you need to decide what will be counted as an event in advance.

The simplest way to do this that fits this requirement is to use predefined time windows: require that Alice and Bob choose measurements $x_{n}$ and $y_{n}$ ($n \in \mathbb{N}$) at or just after times $t_{0} + n \Delta t$ (in some reference frame) and must record corresponding outcomes $a_{n}$ and $b_{n}$ at or before times $t_{0} + n \Delta t + \delta t$, where $\delta t <\Delta t$ is chosen such that light would take longer than $\delta t$ to travel between Alice and Bob.
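The predefined-window rule can be sketched as follows; the function name and record format are my own, with `dt_window` playing the role of $\Delta t$ and `dt_max` the role of $\delta t$:

```python
def bin_events(events, t0, dt_window, dt_max):
    """Assign timestamped outcomes to predefined time windows.

    events: iterable of (time, outcome) pairs.  An outcome counts for
    window n only if it lands in [t0 + n*dt_window, t0 + n*dt_window + dt_max);
    later outcomes in a window are discarded.  The rule is fixed in
    advance and never looks at the outcome values themselves.
    """
    kept = {}
    for t, outcome in events:
        n = int((t - t0) // dt_window)          # window index
        offset = t - (t0 + n * dt_window)       # time since window start
        if 0 <= offset < dt_max and n not in kept:
            kept[n] = outcome
    return kept
```

The point of the sketch is that membership in the statistics is decided by timing alone, so no outcome-dependent postselection can sneak in.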

Also, why are you saying that Alice's and Bob's pointers take three values? Why not two (e.g., for CHSH), or $N$, or even $N_{\mathrm{A}}$ and $N_{\mathrm{B}}$ for Alice and Bob individually?

Depending on how general you want to be, already defining a summary of the statistics might be premature. If you're willing to assume the underlying explanation for the results is i.i.d. (which is reasonable if you want to keep things simple to begin with), then the usual object of study is the probability $P(ab \mid xy)$ that Alice and Bob get results $a$ and $b$ conditioned on performing measurements $x$ and $y$. Assuming things are i.i.d., $P(ab \mid xy)$ is in principle well defined and summarises the experimental results. If you want to abandon the i.i.d. assumption then things might get more complicated.
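Under the i.i.d. assumption, $P(ab \mid xy)$ can be estimated directly from relative frequencies. A minimal sketch (the flat record format `(x, y, a, b)` per trial is my own assumption):

```python
from collections import Counter

def conditional_probs(records):
    """Estimate P(ab|xy) from i.i.d. trial records (x, y, a, b).

    Returns a dict mapping (x, y, a, b) to the relative frequency of
    outcomes (a, b) among the trials where settings (x, y) were used.
    """
    n_xy = Counter((x, y) for x, y, a, b in records)   # trials per setting pair
    n_abxy = Counter(records)                          # trials per full record
    return {(x, y, a, b): n / n_xy[(x, y)]
            for (x, y, a, b), n in n_abxy.items()}
```

If the i.i.d. assumption is dropped, such a single conditional distribution no longer summarizes the data, which is the complication alluded to above.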

Last edited: Jan 17, 2016
12. Jan 17, 2016

### Mentz114

I can't see anything in the framework that says Bob and Alice must use all 3 settings. Clarification is required though.

13. Jan 17, 2016

### wle

I'm just wondering where the number three comes from. If he's using a small fixed number of measurements for simplicity then the simplest Bell inequality only needs two. If he explicitly wants to be more general then the number of measurements can just as well be $N$.

14. Jan 17, 2016

### stevendaryl

Staff Emeritus
I always go for three, because it's the easiest way to see the weirdness of quantum statistics. Three settings were also used by Dr. Chinese in his essay here:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm

The statement of Bell's inequality uses 4 settings: two for Alice and two for Bob. In my opinion, it's the perfect anti-correlations that are the most stark fact about EPR, and those only show up if Alice and Bob have the possibility of making the same choices. But if Alice can choose settings $\alpha_1$ or $\alpha_2$, and Bob has the same two choices, you don't have enough statistics to rule out hidden variables. The following hidden-variable theory explains the statistics perfectly:
1. With probability $\frac{1}{2} \cos^2(\frac{\theta}{2})$, Alice's particle will be measured to be spin-up along either axis ($\alpha_1$ or $\alpha_2$), and Bob's particle will be measured to be spin-down along those axes (where $\theta$ is the angle between the axes).
2. With probability $\frac{1}{2} \sin^2(\frac{\theta}{2})$, Alice's particle will be measured to be spin-up along axis $\alpha_1$ and spin-down along axis $\alpha_2$, and Bob's particle will be measured to be the opposite.
3. With probability $\frac{1}{2} \cos^2(\frac{\theta}{2})$, Alice's particle will be measured to be spin-down along either axis ($\alpha_1$ or $\alpha_2$), and Bob's particle will be measured to be spin-up along those axes.
4. With probability $\frac{1}{2} \sin^2(\frac{\theta}{2})$, Alice's particle will be measured to be spin-down along axis $\alpha_1$ and spin-up along axis $\alpha_2$, and Bob's particle will be measured to be the opposite.
With three choices, you can show that no hidden variable theory works, but not with just two.
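The claim that this four-case model reproduces the two-setting statistics can be checked numerically. A quick sketch (indices 0 and 1 stand for axes $\alpha_1$, $\alpha_2$; outcomes are coded $\pm 1$):

```python
import math

def lhv_cases(theta):
    """The four deterministic hidden-variable cases listed above.

    Each entry is (probability, Alice's outcomes on (a1, a2),
    Bob's outcomes on (a1, a2)), with outcomes coded as +1/-1.
    """
    c = 0.5 * math.cos(theta / 2) ** 2
    s = 0.5 * math.sin(theta / 2) ** 2
    return [
        (c, (+1, +1), (-1, -1)),   # case 1
        (s, (+1, -1), (-1, +1)),   # case 2
        (c, (-1, -1), (+1, +1)),   # case 3
        (s, (-1, +1), (+1, -1)),   # case 4
    ]

def correlation(theta, i, j):
    """Expectation of a*b when Alice measures axis i and Bob axis j."""
    return sum(p * a[i] * b[j] for p, a, b in lhv_cases(theta))
```

Evaluating this gives $E = -1$ for equal axes (perfect anticorrelation) and $E = -\cos\theta$ for different axes, matching the singlet-state predictions for any two settings, which illustrates why a third setting is needed to separate the model from quantum mechanics.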

15. Jan 17, 2016

### ddd123

16. Jan 17, 2016

### A. Neumaier

I edited my original setting to make the following amendment
in order to meet the criticism of wle.

17. Jan 17, 2016

### A. Neumaier

I introduced Norbert in analogy to Alice and Bob (who are dispensable as well in a minimal setting) so that one can talk about all degrees of freedom in the traditional personalized way. This is only done as a figure of speech; nothing depends on it: Norbert, Alice and Bob are not human beings but the control programs behind the automatized source control and detector controls, respectively. Actually, to make this perfect I'll add in a moment another change to the setting, introducing Yvonne, who does the postselection instead of Alice and Bob.

I was taking stevendaryl's picture as a blueprint. It contained 3 pointer settings, so I assumed them. This covers the 2-setting case, since nothing was specified about how the pointer affects the results. (This is one of the strengths of the setting.) It is easy to wire a concrete detector such that pointers 2 and 3 have exactly the same effect on the lights.

Also, stevendaryl didn't refer to Bell experiments, so I didn't either. Bell is relevant only for one special case of the analysis - when the underlying hypothetical theory is a local hidden variable theory of particles moving along the transmission lines. At the present stage of the discussion, the only thing that needs to be ensured is that, using Nature rather than a hypothetical model, Norbert can prepare at least one kind of signal resulting in matrices $E$ and $F$ violating the predictions of Bell's theorem. I trust that stevendaryl made his original proposal with that in mind.

For simplicity, I also assumed perfect symmetry between Alice and Bob.

Which particular items in the postselection protocol give rise to the loophole you claim exists? I don't see how my postselection scheme is essentially different from yours. Note that the postselection scheme is known in advance; hence which events are finally counted (the pairs) is decided in advance, as you required.

Last edited: Jan 17, 2016
18. Jan 17, 2016

### A. Neumaier

As just promised, I reattributed some items to Yvonne, and added the following additional rule of the game:
Since nothing changed in the data collection and analysis, this should not affect the scientific content of the setting, but it removes any trace of anthropomorphism.

Last edited: Jan 17, 2016
19. Jan 17, 2016