billschnieder said:
I did not say they were wasting time. I said if as you suggested earlier, the "event-ready" information was available before the "experiment" (your definition), then it makes no sense for Alice(Bob) to measure the "bad" results -- that would be a waste of time. That they waited until *after* the measurement to filter out the "good" from the "bad" owes to the fact that the "good" signal was only available *after* the measurements had been made, which was my point all along. That is why it is *post*-selection.
Heinera said:
Now it's clear that you don't understand the experiment. No information from the results at Alice or Bob is needed to filter the runs in this experiment. Only information from C is needed, which is available even before Alice and Bob have performed their run.
According to the text and Figure 2 of the arXiv preprint [arXiv:1508.05949], the "go/no go" determination event is spacelike separated from Alice's and Bob's measurement choices and recording of outcomes, so the ##c = \text{go/no go}## outcome should neither causally influence nor be influenced by Alice's and Bob's measurement choices or results.
As far as Bell's theorem is concerned, at least one way you can formally accommodate the postselection is by treating it as part of a three-party Bell scenario. Under the locality hypothesis, this means that the (single-round) joint probability distribution should factorise according to $$P(abc \mid xy) = \int \mathrm{d}\lambda \, \rho(\lambda) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \, P_{\mathrm{C}}(c \mid \lambda) \,,$$ where ##x, y## are Alice's and Bob's inputs (measurement settings), ##a, b## are their outputs, and ##c \in \{\text{go}, \text{no go}\}## is the event-ready outcome. From there, high school-level probability theory will tell you that conditioning on ##c = \text{go}## does not allow you to fake a nonlocal probability distribution between Alice's and Bob's systems: $$\begin{eqnarray*}
P(ab \mid xy; c = \text{go}) &=& \frac{P(ab, c = \text{go} \mid xy)}{P(c = \text{go} \mid xy)} \\
&=& \int \mathrm{d}\lambda \, \rho(\lambda) \, \frac{P_{\mathrm{C}}(c = \text{go} \mid \lambda)}{P(c = \text{go})} \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \\
&=& \int \mathrm{d}\lambda \, \rho(\lambda \mid c = \text{go}) \, P_{\mathrm{A}}(a \mid x; \lambda) \, P_{\mathrm{B}}(b \mid y; \lambda) \,,
\end{eqnarray*}$$ which uses the no-signalling condition (implied by locality) to say that ##P(c = \text{go} \mid xy) = P(c = \text{go})## is independent of ##x## and ##y##, and Bayes' theorem to say that ##\rho(\lambda \mid c = \text{go}) = \rho(\lambda) \, \frac{P_{\mathrm{C}}(c = \text{go} \mid \lambda)}{P(c = \text{go})}##.
The end result, conditioned on "go", has the same local factorisation that is used in the (single-round) derivation of Bell inequalities. (And from there, statistical analyses of the sort done by Gill and others explain how you can turn this into a hypothesis test applicable to a real multi-round Bell experiment.)
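To make the point concrete, here is a small numerical sketch (my own toy illustration, not the actual model or settings of the experiment): a deterministic local hidden-variable model where the event-ready outcome depends only on ##\lambda##, as the factorisation above requires. Postselecting on "go" reshapes the distribution over ##\lambda## (it becomes ##\rho(\lambda \mid c = \text{go})##), but the CHSH value still cannot exceed the local bound of 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local model (illustrative only): a hidden variable lambda, deterministic
# +/-1 outcomes, and an event-ready outcome c that depends only on lambda.
n = 200_000
lam = rng.uniform(0.0, 2.0 * np.pi, n)

def outcome(setting, lam):
    # Deterministic local response function: sign of cos(lambda - setting)
    return np.where(np.cos(lam - setting) >= 0.0, 1.0, -1.0)

# Standard CHSH measurement settings (angles chosen for illustration)
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

# "go" depends only on lambda: keep a biased subset of runs (~2/3 of them)
go = np.cos(lam) > -0.5

def corr(x, y, keep):
    # Correlator E(x, y) estimated on the kept subset of runs
    return np.mean(outcome(x, lam[keep]) * outcome(y, lam[keep]))

def chsh(keep):
    return (corr(a0, b0, keep) + corr(a0, b1, keep)
            + corr(a1, b0, keep) - corr(a1, b1, keep))

all_runs = np.ones(n, dtype=bool)
print(f"CHSH, all runs: {chsh(all_runs):+.3f}")
print(f"CHSH, go only:  {chsh(go):+.3f}")
# Both stay within the local bound |S| <= 2, however the "go" subset is
# carved out of lambda-space, because for every fixed lambda the CHSH
# combination of the four deterministic products is exactly +2 or -2.
```

The key design point mirrors the derivation: because the `go` mask is a function of `lam` alone (not of the settings), conditioning on it is equivalent to reweighting ##\rho(\lambda)##, which keeps the model inside the local polytope.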