Originally posted by DrChinese
I wonder if that is because many do not accept the "fair sampling" critique as valid?
The "fair sampling" hypothesis is part of all photon experiments so far (often euphemistically labeled the "detection loophole"). It is necessary because the accepted pairs represent only a few percent of all pairs produced by the source. There is also nothing to dispute about what kind of LHVs are eliminated by the "fair sampling" assumption all by itself (before anyone has set foot in the lab) -- it eliminates upfront all LHV theories in which the (hidden) variables determining whether a valid pair is detected are independent of the hidden variables determining the +/- outcome. That means the experiments test only the subset of LHVs in which these two sets of variables are separate and independent. Experiment says nothing about the remaining LHV theories.
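To make concrete what kind of LHV the "fair sampling" assumption excludes, here is a minimal toy simulation. It is a simplified sketch in the spirit of well-known detection-loophole models (Pearle's, for instance); the specific |cos| detection rule is my illustrative choice, not any published model's exact construction. The point is that when the same hidden variable sets both the +/- outcome and the detection probability, the CHSH sum computed on detected coincidences exceeds the local bound of 2, even though the model is purely local:

```python
import math
import random

# Toy local hidden-variable model in which the detection HV and the
# outcome HV are maximally dependent: both are the same shared angle phi.
# Counting only detected coincidences ("fair sampling" violated by
# construction) produces a CHSH value above the local bound of 2.

def run_setting(alpha, beta, n, rng):
    """Conditional correlation E(alpha, beta) over detected pairs only."""
    coinc = corr = 0
    for _ in range(n):
        phi = rng.uniform(0.0, 2.0 * math.pi)   # shared hidden variable
        ca = math.cos(phi - alpha)
        cb = math.cos(phi - beta)
        # outcome is the sign of cos; detection probability is |cos|,
        # so detection is fully correlated with the outcome variable
        if rng.random() < abs(ca) and rng.random() < abs(cb):
            coinc += 1
            corr += (1 if ca >= 0 else -1) * (1 if cb >= 0 else -1)
    return corr / coinc

rng = random.Random(42)
n = 200_000
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (run_setting(a, b, n, rng) - run_setting(a, b2, n, rng)
     + run_setting(a2, b, n, rng) + run_setting(a2, b2, n, rng))
print(f"CHSH S on detected subsample: {S:.3f}")   # exceeds the local bound 2
```

Drop the detection condition (count every pair) and S falls back below 2, which is exactly why the conclusion of the experiments hinges on the fair-sampling assumption rather than on locality alone.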
The above is commonly understood and there is nothing to dispute about it. The only question where taste/judgment has some room is whether the class of LHV theories which has not been tested so far (those LHVs where the two sets of HVs are mutually dependent) has any chance of developing into a more useful and economical/elegant (in postulates) theory than the existing QM/QED.
It has nothing to do with the "validity of the critique." No serious paper on the subject (excluding most popular and pedagogical expositions) takes the "fair sampling" critique head-on and proves it invalid. The critique either doesn't get mentioned at all (often the case in recent years) or it gets acknowledged and dismissed as a "loophole" (i.e., like tax loopholes, something only cheats would look for).
I have read the Santos paper now and I admit I don't agree with a lot of what he is saying. The approach is to blast the existing experiments as if they show nothing, when clearly they show a lot.
He simply states more directly, and without euphemisms, the factual situation -- what kind of LHVs are being tested by the experiments. He and Marshall (and a few other critics) have been saying that for over a decade. There has been no direct counter-argument to refute it. Only silence (including the rejection of submitted papers as "nothing new, it is all well known, nothing to see here, move on") or euphemistic rephrasing of the well-known unpleasant facts (as in Aspect's recent paper).
...The question is whether any local realistic theory can make the same predictions as QM. From my perspective the Bell paper conclusively shows it cannot.
It does show that. But keep in mind that not even a single experiment agrees with the predictions of the QM measurement model used by Bell. The actual data, of course, agree well with the quantum optics predictions, which provide the correct model for these experiments (taking into account the non-sharp/Poissonian or super-Poissonian photon numbers from lasers or from spontaneous emission in atomic gas, detection efficiency, dark current, apertures, etc.). But quantum optics also agrees in this domain with the semi-classical theories (such as stochastic optics, a pure LHV theory; there is a Sudarshan-Glauber theorem from 1963 proving this equivalence). And neither agrees with Bell's QM prediction.
Bell's QM prediction is based on QM measurement theory (the projection postulate). That prediction is not what the actual data show or what the proper quantum optics model predicts. Only data adjusted and heavily extrapolated under additional assumptions, assumptions untested and outside the theory's postulates, finally match Bell's QM prediction.
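For reference, the "Bell QM prediction" at issue is the standard projection-postulate calculation for the singlet state, set against the CHSH bound that any LHV theory must satisfy:

```latex
% CHSH inequality: every local hidden-variable theory obeys
\[
  |S| \;=\; \bigl|E(a,b) - E(a,b') + E(a',b) + E(a',b')\bigr| \;\le\; 2 ,
\]
% while the projection-postulate calculation for the singlet state gives
% E(a,b) = -cos(a-b); at the standard angles a=0, a'=pi/2, b=pi/4, b'=3pi/4:
\[
  E_{\mathrm{QM}}(a,b) = -\cos(a-b)
  \quad\Longrightarrow\quad
  |S_{\mathrm{QM}}| = 2\sqrt{2} \approx 2.83 .
\]
```

It is this idealized 2√2 figure, not the raw coincidence counts, that the adjusted and extrapolated data are matched against.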
On the theoretical side, the critics reject QM measurement theory -- historically its mathematical basis and core motivation was von Neumann's faulty proof of the impossibility of hidden variables. Without that "impossibility" there is no reason to attach postulates about the observer and a non-physical collapse of the wave function to the theory (just as Maxwell's or Newton's theories didn't need to speculate about "observer's consciousness" or any such thing). Unfortunately, once it became clear that the proof was no good (as early as the 1950s, when Bohm developed his theory), there were decades of invested work and armies of "experts" on the (suddenly obsolete) "measurement theory," teaching and publishing. Bohm's result was mostly ignored (or hand-waved away), its implications not talked about, until a new, weaker "proof" (it doesn't eliminate all HVs but only LHVs) was produced by Bell. With this new fallback defense line, suddenly the flaws of von Neumann's proof became somehow obvious to everyone and made it into textbooks.
If someone wants to test the predictions of QM, great. That is the point of science, after all, and I certainly agree that nothing should be held too sacred to question. However, given the body of experimental knowledge, I don't see QM falling anytime soon - at least as regards electrodynamics. And I certainly don't see any experimental evidence here that QM is wrong.
You need finer distinctions among the parts of QM. There is QM/QED dynamics, which is certainly well tested. Born's probability postulate provides the operational interpretation of the wave function/state vector -- that part is solid and necessary as well.
Then there is QM measurement theory (originated by von Neumann), the core of which is the projection postulate -- the non-physical collapse of the wave function, which somehow suspends the dynamical evolution and performs its magic (non-local, instant, non-dynamical) state transformation (by observer's consciousness, as von Neumann suggested; by "irreversible process," per Bohr and others; by universe branching, per Everett; by "subdynamics" of "dissipative, far from equilibrium systems," per Prigogine, ...), after which the newly produced state is released from its momentary spell and returned to the dynamical postulates to carry on evolving as before.
That whole "theory" is what the critics reject. And that "theory" is also the key ingredient of Bell's "QM prediction." There is no other practical use for the "measurement theory" (von Neumann's proof having been shown invalid), nor any direct test of it other than the EPR-Bell experiments.
It props up Bell's "QM prediction," which in turn is all that props up any need for the "measurement theory" (a kind of mutual back-patting society). Otherwise it is an entirely gratuitous, sterile add-on to QM (since the original reason/prop, von Neumann's "HV impossibility," is now acknowledged as invalid). QED, QCD, etc. don't use "measurement theory" (the collapse/projection postulate) but merely the dynamics and Born's postulate. You could drop the closed loop of Bell's theorem and "QM measurement theory" and no harm would be done to the predictive power of quantum physics.
I mean, let's get real here. The repeatable experimental results as adjusted just happen to exactly agree with QM and rule out LHV theories. What an odd coincidence out of all of the possible results we might see!
The results are not merely "adjusted" but are almost entirely the product of extrapolation -- well over 90% of the "data" which reproduce Bell's "QM prediction" (the "idealized" measurement-theory prediction) are not obtained by measurement but are added in by hand (under the "fair sampling" assumption). With that much freedom, 90%+ added by hand, you can match anything. There are astronomically many ways to extrapolate the 90%+ missing data points.
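As a back-of-the-envelope illustration of where figures like "a few percent" and "90%+" come from (the efficiency value below is an assumed round number for illustration, not taken from any particular experiment):

```python
# Illustrative arithmetic only: with overall per-arm photon detection
# efficiency eta (collection optics x filters x detector quantum
# efficiency), the fraction of emitted pairs yielding a coincidence
# scales as eta**2 when the two arms' losses are independent.
eta = 0.20                          # assumed per-arm efficiency
coincidence_fraction = eta ** 2     # pairs registered on both sides
missing = 1.0 - coincidence_fraction
print(f"pairs actually measured: {coincidence_fraction:.1%}")
print(f"filled in under 'fair sampling': {missing:.1%}")
```

Even a generous 20% per arm leaves only 4% of pairs actually measured, so the remaining 96% of the "ideal" ensemble is supplied by the extrapolation assumption rather than by the detectors.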