nightlight said:
You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of the non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters).
No, the logic in my approach is the following.
You claim that we will never have raw EPR data with photons, or with anything else for that matter, and you claim this to be something fundamental. I can easily accept that maybe we'll never have raw EPR data; after all, there might indeed be limits to all kinds of experiments, at least in the foreseeable future. What I have difficulty with is its supposedly fundamental character. I think we both agree that if there are merely individual technical reasons for this inability to obtain raw EPR data, that is not a sufficient reason to conclude that the QM model is wrong. But if it is something fundamental, it should operate on a fundamental level and not depend on technological issues we understand. And (maybe this is where we differ in opinion) if you need a patchwork of different technical reasons for each different setup, I don't consider that fundamental; it looks more like the technical difficulties people once had in making an airplane fly faster than the speed of sound.
You see, what basically bothers me in your approach is that you seem to have one goal in mind: explaining the EPR-like data semiclassically. But for that to be a plausible explanation, it has to fit into ALL THE REST of physics, and so I try to play the devil's advocate by taking each individual explanation you need and looking for counterexamples when it is moved outside of the EPR context (where, as a valid physical principle, it should still apply).
To refute the "fair sampling" hypothesis used to "upconvert" low-efficiency raw data into EPR data, you needed to show that visible-photon detectors are plagued by a FUNDAMENTAL tradeoff between quantum efficiency and dark current. If this is something FUNDAMENTAL, I don't see why the tradeoff should occur for visible light and not for gamma rays; after all, in the zero-point stuff both the number of modes and their "half photon" energy scale up from the eV range to the MeV range. So if it is fundamental, and not tied to a specific technology, you should understand my wondering why it shows up in the case that interests you, namely eV photons, and not in the case that doesn't, namely gamma rays. If the phenomenon doesn't appear for gamma rays, you might not care, because there is some other reason why we cannot do EPR experiments well with gamma rays; but to me, you should still explain why a fundamental property of EM radiation at eV energies suddenly disappears in the MeV range, where you no longer need it for your specific purpose of refuting EPR experiments.
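To make the scaling I have in mind explicit (rough orders of magnitude only, taking the usual half-quantum of zero-point energy per mode):

$$ E_{zp}(\omega) = \tfrac{1}{2}\hbar\omega, \qquad \frac{E_{photon}}{E_{zp}(\omega)} = \frac{\hbar\omega}{\tfrac{1}{2}\hbar\omega} = 2 \quad \text{for any } \omega. $$

Going from visible light (about 1 eV) to gamma rays (about 1 MeV) multiplies both the signal quantum and the zero-point "half photon" by roughly $10^6$, so the ratio a detector would have to discriminate against is the same in both ranges.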
Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of an energy excess.
Let's say that where we agree is that current EPR data do not exclude classical explanations. However, they conform completely to the quantum predictions, including the functioning of the detectors. It seems that it is rather your point of view which has to do strange gymnastics to explain the EPR data together with all the rest of physics.
It is probably correct to claim that no experiment can exclude ALL local realistic models. However, the data all AGREE with the quantum predictions - predictions that you have difficulty explaining in a fundamental way without running into trouble somewhere else, such as the visible-photon versus gamma-photon detection issue.
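Just to be concrete about what "agree with the quantum predictions" means here, the standard CHSH quantity (nothing new, just for reference):

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(local realism)}, \qquad |S| \le 2\sqrt{2} \ \text{(QM)}, $$

and, if one drops the fair-sampling assumption, the usual detection-loophole analysis requires an overall efficiency above $2(\sqrt{2}-1) \approx 83\%$ for a maximally entangled CHSH test before the raw data alone could violate the local bound - which is exactly why the raw-versus-"upconverted" data question matters.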
I asked you where the point-photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter.
No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.
In the first case (which I thought was your point of view, but apparently not), you should find extra correlations beyond the Poisson prediction. In the continuous-beam case, of course, you find the same anti-correlations as in the billiard-ball photon picture.
However, in this case I saw another problem: if all light beams are continuous beams, then how can we obtain extra correlations when we have a 2-photon process (which, I suppose, you deny, and just consider as 2 continuous beams)? This is still opaque to me.
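For the beam-splitter point, the quantity I have in mind is the textbook anticorrelation parameter (sketched here just to fix what "beyond the Poisson prediction" means):

$$ \alpha = \frac{P_c}{P_t\,P_r}, $$

with $P_t$, $P_r$ the singles probabilities in the transmitted and reflected arms and $P_c$ the coincidence probability. For a classical intensity $I(t)$ with independent detections, $\alpha = \langle I^2\rangle/\langle I\rangle^2 \ge 1$: a constant continuous beam gives $\alpha = 1$ (the Poisson level), short fluctuating wavetrains give $\alpha > 1$ (the extra correlations), while the ideal one-photon state gives $\alpha \to 0$.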
You may wonder why it is always the case that something else blocks the non-classicality from manifesting. My guess is that this is because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or another, just as a perpetuum mobile has to.
The reason I have difficulty with this explanation is that the shifting reasons should be fundamental, not "technological". I can understand your viewpoint and the comparison to the perpetuum mobile: for each different construction there is yet ANOTHER reason. That by itself is no problem. However, each time the "other reason" should be fundamental (meaning not dependent on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by the QE/dark-current tradeoff, it should be due to something fundamental - and you say it is due to the ZPF.
However, my legitimate question then was: where is this problem in gamma-photon detectors?
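To be clear about the kind of tradeoff I would expect a ZPF-based explanation to produce (a generic threshold-detector sketch, not your model): if the detector fires whenever signal plus noise (taken Gaussian for the sketch, with standard deviation $\sigma$) exceeds a threshold $\theta$, then roughly

$$ \mathrm{QE} \approx P(\text{signal} + \text{noise} > \theta), \qquad \text{dark rate} \propto P(\text{noise} > \theta) \sim e^{-\theta^2/2\sigma^2}, $$

so raising $\theta$ suppresses the dark counts but also the efficiency, and vice versa. If $\sigma$ is set by the ZPF, the same tradeoff should reappear, suitably scaled, at MeV energies - hence my question.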
cheers,
Patrick.