1. It is a new paper, and hardly the last word. Certainly would not be considered authoritative at this point. However, I will accord it the courtesy of addressing it on its merits.
That particular paper is new, but Santos, Marshall, Jaynes, and others have been criticizing the EPR-Bell experimental claims since the late 1970s (check the listings there; there are at least a couple dozen papers by the Marshall-Santos group). This was not merely a critique based on artificial, narrow counterexamples to particular experimental claims but a full-fledged local realistic theory of quantum optics phenomena (stochastic electrodynamics; it falls short for massive particles, although Barut's self-field electrodynamics covers fermions and agrees with QED to the orders that have been computed).
Regardless of the ultimate value of stochastic electrodynamics as an alternative theory (it is incomplete as it stands), the mere existence of a local-fields model for the actual EPR-Bell experimental data plainly demonstrates that the claim that the experiments exclude any local realistic mechanism is false.
2. Bell's Inequalities: I did not take away from the Santos paper any real criticism of the Bell derivation.
The Santos-Marshall group makes a distinction between the QM dynamics, which they accept, and the "measurement theory" (the non-dynamical, mystical part, i.e. the projection postulate), which they reject. Bell's theorem needs a collapse of the remote state to achieve its locality violation. They reject such a collapse and point out that it has not been demonstrated by the experiments.
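To make concrete what that collapse buys Bell, here is a minimal numerical sketch (my own illustration in Python/numpy, not anything from the Santos paper): the singlet correlation E(a,b) = -cos(a-b), which feeds the CHSH violation, is obtained precisely by projecting on Alice's outcome and renormalizing the remote composite state, i.e. by the projection postulate applied nonlocally.

[code]
import numpy as np

# Pauli matrices; analyzers confined to the x-z plane (all real arithmetic).
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def spin(theta):
    return np.sin(theta) * sx + np.cos(theta) * sz

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0., 1., -1., 0.]) / np.sqrt(2)

def E(theta_a, theta_b):
    """Correlation computed via the projection postulate: project on
    Alice's outcome, renormalize the (remote) composite state, then
    take Bob's expectation in the collapsed state."""
    vals, vecs = np.linalg.eigh(spin(theta_a))
    total = 0.0
    for val, v in zip(vals, vecs.T):
        proj = np.kron(np.outer(v, v), I2) @ psi  # collapse on Alice's side
        p = proj @ proj                           # probability of outcome val
        collapsed = proj / np.sqrt(p)             # Bob's state is now fixed
        b_avg = collapsed @ np.kron(I2, spin(theta_b)) @ collapsed
        total += p * val * b_avg
    return total                                  # = -cos(theta_a - theta_b)

# CHSH with the standard angles: |S| = 2*sqrt(2) > 2
a0, a1, b0, b1 = 0., np.pi/2, np.pi/4, 3*np.pi/4
print(E(a0,b0) - E(a0,b1) + E(a1,b0) + E(a1,b1))  # ~ -2.828
[/code]

Remove the collapse step (i.e. refuse to assign Bob a definite post-measurement state) and this derivation of the violation does not go through; that is the part Santos-Marshall reject, and the part the experiments were supposed to establish but so far have not.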
The problem nowadays with challenging the general state-collapse hypothesis (the projection postulate) is that it is a key ingredient needed for Quantum Computing to work. If it is not true in full generality, a QC won't work any better than a classical analog computer. Thus the challenge is not merely against ideas but against the funding draw QC has, a sure recipe for getting yourself cut off from the leading journals and conferences. (Before the QC hype there was a healthy debate, and such critiques were published in every major journal.)
There are some pretty big claims here, and I don't think they are warranted. Fair sampling is far from an absurd assumption.
In any deterministic hidden variable theory, the detection probability must by definition depend on some hidden variable value. The "fair sampling" hypothesis is thus an assumption that the hidden variables affecting the detection probability (the probability of triggering the avalanche, and its timing when coincidence time-windows are used for pair detection) are independent of the hidden variables affecting the detected outcome (i.e. the +/- choice).

Therefore that is all the experiments exclude: the local theories in which these two sets of hidden variables are independent of each other. That independence does not hold even for the most simple-minded classical electrodynamics models of polarization and detection (nor for stochastic electrodynamics, nor for Barut's self-field ED).

Thus the assumption is absurd because it lets the experiments exclude something that isn't even among the proposed alternatives. This "exclusion" is no different from the "refinements" of the experiments that use randomly varying polarizer directions (which you brought up earlier): it topples its own strawman, not the actual theories proposed by the opponents.
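To see how this plays out numerically, here is a minimal Monte-Carlo sketch (my own, along the lines of the Gisin & Gisin 1999 detection-loophole model; it is not stochastic electrodynamics itself, just the simplest toy with the same "unfair sampling" mechanism): the outcomes are fully local and deterministic, only Bob's detection probability depends on the shared hidden variable, and the coincidence statistics come out exactly quantum.

[code]
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000

# Shared hidden variable: a unit vector, uniform on the sphere.
lam = rng.normal(size=(N, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)

def unit(theta):                        # analyzer direction, x-z plane
    return np.array([np.sin(theta), 0., np.cos(theta)])

def E_det(a, b):
    A = np.sign(lam @ a)                # Alice: deterministic, always detected
    proj = lam @ b
    B = -np.sign(proj)                  # Bob: deterministic outcome, but...
    det = rng.random(N) < np.abs(proj)  # ...detection probability depends on lam
    return np.mean((A * B)[det])        # correlation over coincidences only

a0, a1 = unit(0.), unit(np.pi/2)
b0, b1 = unit(np.pi/4), unit(3*np.pi/4)
print(E_det(a0,b0) - E_det(a0,b1) + E_det(a1,b0) + E_det(a1,b1))
# ~ -2.83, i.e. the full quantum CHSH value from a purely local model
[/code]

Over the detected pairs E_det(a,b) = -a.b exactly, the singlet correlation, while the full emitted ensemble of course satisfies |S| <= 2; the entire "violation" lives in the lambda-dependence of the detection, which is precisely what "fair sampling" assumes away.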
There has never been a single experimental test of a quantum variable which has even slightly hinted at the existence of a deeper level of reality than is currently predicted by QM. Hardly what I would call "absurd".
QM doesn't offer any "reality," deeper or otherwise. If you believe in any reality, local or not, then the quantum phenomena require an explanation beyond prescriptions for how to calculate the probabilities.
You might say that it is an unwarranted or burdensome requirement. But I don't even follow that line of reasoning. Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.
There is no need for "unwarranted" or "burdensome" attributes in order to analyze what exactly it is that "fair sampling" excludes, purely mathematically: it is an ad hoc constraint on hidden variables, one which hand-waves several proposed alternatives off the table, leaving only strawman local theories (that no one has proposed) for the experiments to refute.
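Spelled out in the standard LHV notation (my paraphrase of the constraint, not a quote from any paper): with outcomes A(a,lambda), B(b,lambda) = +/-1, detection indicators D_A(a,lambda), D_B(b,lambda) in {0,1}, and ensemble density rho(lambda), what the coincidence counts actually estimate is

[code]
E_{det}(a,b) =
  \frac{\int A(a,\lambda)\, B(b,\lambda)\, D_A(a,\lambda)\, D_B(b,\lambda)\, \rho(\lambda)\, d\lambda}
       {\int D_A(a,\lambda)\, D_B(b,\lambda)\, \rho(\lambda)\, d\lambda}
[/code]

whereas Bell's inequalities constrain the full-ensemble average E(a,b) = \int A(a,\lambda) B(b,\lambda) \rho(\lambda) d\lambda. "Fair sampling" is exactly the postulate that D_A D_B is statistically independent of the variables fixing A and B, so that the two expressions coincide; drop that postulate and the detected subensemble can violate the inequalities while the full ensemble obeys them.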
For more discussion of the "fair sampling" hypothesis, and a proposed simple additional experiment to test it on the existing EPR-Bell setups, check the paper by G. Adenier and A. Khrennikov. I haven't yet seen any of the several active quantum optics groups who claim to have established Bell inequality violations check the assumption on their setups. Since the additional tests proposed are quite simple on the existing setups, it is surprising that no one has yet picked up the clear-cut open challenge of the above paper, especially considering that a verification of fair sampling as proposed would eliminate all known plausible LHV theories (they all rely on "unfair" sampling). Or maybe some have tried it and the data didn't come out the way they wished, and they didn't want to be the first with the "bad" news. We'll have to wait and see.
PS: After writing the above, I contacted the authors of the cited paper; the status is that even though they had contacted all the groups which have done or plan to do EPR-Bell experiments, oddly no one was interested in testing the "fair sampling" hypothesis.
Clearly, the requirement is that a LHV theory otherwise provide identical predictions to QM. Fair sampling fits this naturally.
As Santos points out, QM has two sharply divided components: the dynamics and the measurement theory. They reject the measurement theory (in its full generality) and some of its implications. That is precisely what the EPR-Bell tests were supposed to clarify: does the world behave that way? The results so far have not demonstrated the kind of distant collapse (projection of the composite state) that Bell assumed for his inequalities.
The "fair sampling" is an assumption outside of QM (or any other theory or any experiment). The actually proposed alternative theories do not satisfy fair sampling, i.e. the hidden variables do not decouple into independent sets which separately control the detection timing and probability from variables controlling the +/- outcome.