I really really don't get the equivocation in your responses, unless it's to intentionally maintain a conflation. I'll demonstrate:
(Going to color code sections of your response, and index my response to those colors.)
DrChinese said:
The Fair Sampling Assumption is: due to some element(s) of the collection and detection apparati, either Alice or Bob (or both) did not register a member of an entangled photon pair that "should" have been seen. AND FURTHER, that photon, if detected, was one which would support the predictions of local realism and not QM. The Fair Sampling concept makes no sense if it is not misleading us.
{RED} - The question specifically avoided any detection failures whatsoever, so this has no bearing on the question, but ok. Except that you got less specific in blue, for some reason.
{BLUE} - Here, when you say "would support", by what model is the "would support" qualified? In fact, one of the many assumptions contained in the "would support" qualifier involves how you chose to define the equivalence of simultaneity between two spatially separated time intervals. Yet with "would support" you are tacitly requiring a whole range of assumptions to be the a priori truth. It logically simplifies to the statement: it's true because I chose definitions to make it true.
DrChinese said:
It is not the Fair Sampling Assumption to say that the entire universe is not sampled. That is a function of the scientific method and applies to all experiments. Somewhat like saying the speed of light is 5 km/hr but that experiments measuring the usual value are biased because we chose an unfair day to sample. The requirement for science is that the experiment is repeatable, which is not in question with Bell tests. The only elements of Fair Sampling that should be considered are as I describe in the paragraph above.
{RED} - But you did not describe any such element in the paragraph above. You merely implied that such elements are contained in the term "would support", and left it to our imagination that, since "would support" defines itself to contain proper methods and assumptions, and contains its own truth specifier, it must be valid. Intellectually absurd.
DrChinese said:
So I think that is a YES to your 1), as you might detect a photon but be unable to match it to its partner. Or it might have been lost before arriving at the detector.
{RED} - I did not specify "a" partner was detected. I explicitly specified that BOTH partners are ALWAYS detected. Yet that still doesn't explicitly require the timing of those detections to match.
{BLUE} - Note the blue doesn't address the case where the detection of the partner photon did not fail. I explicitly specified that this partner detection did not fail, and that only the time window used to identify it as a partner failed.
So granted, you didn't explicitly reject that both partners were detected, but you did explicitly insert interpretations that were explicitly defined not to exist in this context, while remaining vague about the case where both partners are detected but the correlation fails to register.
So, if I accept your yes answer, what does it mean? Does it mean both partners of a correlated pair can be detected, and still not register as a correlation? Does it mean you recognized the truth in the question, and merely chose to conflate the answer with interpretations that are by definition invalid in the stated question, so that you could avoid an explicitly false answer while implicitly justifying the conflation with an entirely different interpretation that was experimentally and a priori defined to be invalid?
I still don't know how to take it, and I think it presumptuous of me to assume a priori that "yes" actually accepts the question as stated. You did after all explicitly input what the question explicitly stated could not possibly be relevant, and referred to pairs in singular form while not acknowledging the success of the detection of both photons. This is a non-answer.
DrChinese said:
For 2), I am not sure I follow the question. I think QM and LR would make the same predictions for likelihood of detection. But I guess that to make the LR model work out, you have to find some difference. But no one was looking for that until Bell tests started blowing away LR theories.
True, and I clearly and repeatedly, usually beginning with "Note", clarified that it did not in any way demonstrate the legitimacy of any particular LR model. All it does is demonstrate that, even with 100% photon detection efficiency, a model involving only offsets in how the photon detections are paired up need not result in any excess or undercount of total photon detections. It was specifically designed (and failed) as an attempt to invalidate a "fair sampling" argument in a case where that "fair sampling" argument did not involve missing detections.
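To make that point concrete, here is a toy sketch (not any particular LR model, and all parameters such as offset_scale_ns and window_ns are made up for illustration): every photon on both sides is always detected, but the relative detection timing picks up a small angle-dependent offset. The per-side totals never change; only the number of pairs that land inside a fixed coincidence window does.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pairs = 100_000          # every pair is emitted and detected (100% efficiency)
window_ns = 1.0            # fixed coincidence window (illustrative value)
offset_scale_ns = 0.6      # hypothetical scale of the angle-dependent timing offset

def coincidence_count(angle_diff_rad):
    """Count pairs whose detection times fall inside the coincidence window.

    Every photon is detected; only the relative timing is shifted, by an
    amount that (in this toy model) grows with the analyzer angle difference.
    """
    emission = np.cumsum(rng.exponential(100.0, n_pairs))   # emission times, ns
    jitter_a = rng.normal(0.0, 0.2, n_pairs)                # ordinary detector jitter
    jitter_b = rng.normal(0.0, 0.2, n_pairs)
    # hypothetical angle-dependent offset, applied on Bob's side only
    offset_b = rng.normal(0.0, offset_scale_ns * abs(np.sin(angle_diff_rad)) + 1e-9, n_pairs)
    t_alice = emission + jitter_a
    t_bob = emission + jitter_b + offset_b
    return int(np.sum(np.abs(t_alice - t_bob) <= window_ns))

for deg in (0, 22.5, 45, 67.5, 90):
    n_coinc = coincidence_count(np.radians(deg))
    print(f"angle diff {deg:5.1f} deg: {n_pairs} detections per side, "
          f"{n_coinc} registered coincidences")
```

The per-side detection totals are identical for every angle; only the pairing inside the window shifts. That is the sense in which this kind of bias involves no missing detections at all.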
There may be, as previously noted, other ways to rule out this type of bias. If the actual time stamps of the unmatched photon detections were recorded and compared, and if this is in fact happening, the time window spread between nearly coincident photon pairs should statistically increase as the angle difference is increased. If the pairs of unmatched detections were truly uncorrelated, then there should be no such statistical variation in the differences between their time stamps. The assumption that they are correlated, even when not registered as such, is what would make such a statistical variation possible. It may be worth investigating experimentally. Pre-existing raw data might be sufficient, depending on whether time stamps were recorded or merely hits/misses.
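If raw time stamps were available, the check I have in mind could look something like the sketch below. It assumes, hypothetically, that each run provides sorted arrays of Alice and Bob time stamps plus the recorded angle difference; the function names and the "near miss" cutoff are my own inventions for illustration. After excluding the registered coincidences, it measures how the near-miss time differences spread out as a function of the angle setting.

```python
import numpy as np

def near_miss_differences(t_alice, t_bob, window_ns):
    """Time differences between each Alice detection and its nearest Bob detection,
    keeping only the 'near misses' that fell just outside the coincidence window.

    t_alice, t_bob : 1-D arrays of detection time stamps (ns)
    window_ns      : coincidence window used by the original experiment
    """
    t_bob = np.sort(t_bob)
    idx = np.clip(np.searchsorted(t_bob, t_alice), 1, len(t_bob) - 1)
    nearest = np.where(np.abs(t_bob[idx] - t_alice) < np.abs(t_bob[idx - 1] - t_alice),
                       t_bob[idx], t_bob[idx - 1])
    dt = t_alice - nearest
    # exclude registered coincidences; ignore isolated singles far from anything
    mask = (np.abs(dt) > window_ns) & (np.abs(dt) < 10 * window_ns)
    return dt[mask]

def spread_vs_angle(runs, window_ns=1.0):
    """runs: iterable of (angle_diff_deg, t_alice, t_bob) tuples, one per run.

    If the 'shifted pairing' bias exists, the spread (here, the standard deviation
    of the near-miss time differences) should grow with the angle difference.
    If the unmatched detections are truly uncorrelated, it should not.
    """
    out = []
    for angle, t_a, t_b in runs:
        dt = near_miss_differences(np.asarray(t_a), np.asarray(t_b), window_ns)
        out.append((angle, dt.std() if dt.size else float("nan")))
    return sorted(out)
```

Plotting the output of spread_vs_angle against the angle difference would show either a flat line (no such bias) or a systematic trend (consistent with it), which is exactly the distinction the hit/miss records alone cannot make.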