vanesch: "After all, I can just as well work in the Hamiltonian eigenstates."
The output of a PDC source is not the same as picking a state in Fock space freely. That is why they restricted their analysis to PDC sources, where they can show that the resulting states' Wigner distribution will not go negative beyond what detection and counting calibrated to a null result for the 'vacuum fluctuations alone' would produce. Such a source doesn't produce eigenstates of the free Hamiltonian (consider also the time resolution of such modes with sharp energy). It also doesn't produce gamma photons.
vanesch: "... because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit)."
You're trying to make the argument universal, which it is not. It merely addresses an overlooked effect in the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters). The interaction constants, cross sections, tunneling rates, etc., don't scale with the photon energy. You can have a virtually perfect detector for gamma photons, but you won't have a perfect analyzer or beam splitter. Thus, for gammas you can get nearly perfect particle-like behavior (and very weak wave-like behavior), which is no more puzzling or non-classical than the mirror with holes in the coating scanned by a thin light beam mentioned earlier.
To preempt loose argument shifts of this kind, I will recall the essence of the contention here. We're looking at a setup where a wave packet splits into two equal, coherent parts A and B (packet fragments in orbital space). If brought together in a common area, A and B will produce perfect interference. If any phase shifts are inserted in the paths of A or B, the interference pattern will shift depending on the relative phase shift on the two paths, implying that in each try the two packet fragments propagate on both paths (this is also the propagation that the dynamical/Maxwell equations describe for the amplitude).
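A minimal numerical sketch of this fringe-shift behavior, assuming an ideal 50/50 split of a unit-amplitude packet (the amplitudes and phase values are illustrative, not tied to any particular apparatus):

[code]
import cmath
import math

# Two equal coherent fragments A and B recombined after a relative
# phase shift dphi is inserted in one path.  The detected intensity
# |A + B*exp(i*dphi)|^2 shifts with dphi, which requires that in each
# try both fragments propagate, one on each path.
A = B = 1.0 / math.sqrt(2.0)     # equal coherent split of a unit packet

for step in range(5):
    dphi = step * math.pi / 2    # relative phase: 0, pi/2, ..., 2*pi
    intensity = abs(A + B * cmath.exp(1j * dphi)) ** 2
    print(f"relative phase {dphi:4.2f} rad -> intensity {intensity:.3f}")
[/code]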
The point of contention is what happens if you insert two detectors DA and DB in the paths of A and B. I am saying that the two fragments propagate to their respective detectors, interact with them, and each detector triggers or doesn't trigger, regardless of what happened at the other detector. The dynamical evolution is never suspended, and the triggering is solely a result of the interaction between the local fragment and its detector.
You're saying that, at some undefined stage of the triggering process of detector DA, the dynamical evolution of fragment B will stop, and fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will resume.
The alleged empirical consequence of this conjecture would be the "exclusion" of trigger B whenever trigger A occurs, an "exclusion" such that it cannot be explained by the local mechanism of independent detection under the uninterrupted dynamical evolution of each fragment and its detector.
Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusion, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at Poissonian square-law detectors (which apply to the photon energies relevant here, i.e. those for which nearly perfect coherent splitters exist).
Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try," so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep in mind also that the laser pump feeding the non-linear crystal is a Poissonian source (it produces coherent states, which superpose all photon number states with coefficient magnitudes equal to the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus show a sharp number state in the output).
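To illustrate the coherent-state point, here is a small check (the pump amplitude alpha is a made-up illustrative value) that the photon-number coefficients of a coherent state have squared magnitudes given exactly by the Poissonian with mean |alpha|^2:

[code]
import math

# Coherent state |alpha>: the amplitude on the n-photon state is
# c_n = exp(-|alpha|^2/2) * alpha^n / sqrt(n!).  Its squared magnitude
# equals the Poisson probability with mean |alpha|^2, so the pump (and
# hence the PDC output) is not a sharp photon-number state.
alpha = 0.8                      # hypothetical pump amplitude
mean = abs(alpha) ** 2

for n in range(6):
    c_n = math.exp(-mean / 2) * alpha ** n / math.sqrt(math.factorial(n))
    poisson = mean ** n * math.exp(-mean) / math.factorial(n)
    print(f"n={n}: |c_n|^2={c_n ** 2:.5f}   Poisson={poisson:.5f}")
[/code]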
To avoid the issues of detector dead time or multiphoton capability, imagine we use a perfect coherent splitter to split the beam, then add another perfect splitter in each path, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square-law detectors, as relevant here) triggering in a given try is P(n,k) = n^k exp(-n)/k!, where n is the average number of triggers. A single multiphoton-capable detector with no dead time would show this same distribution of k for a given average rate n.
Let's say we tune down the input power (or the sensitivity of the detectors) to get an average number of "photon 2" detector triggers of n=1. Then the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e~37%, and the probability of no ND trigger is P(n=1,k=0)=1/e~37%. Thus the probability of more than 1 detector triggering is 1-2/e~26%, which doesn't look very "exclusive".
Your suggestion was to lower the average n (e.g. via adjustments of detector thresholds or by lowering the input intensity) to a value much smaller than 1. So let's look at n=0.1, i.e. on average we get 0.1 triggers among the ND detectors for each trigger of the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)~9% and of no trigger P(n=0.1,k=0)~90%.
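These figures, and the single-to-multiple trigger ratios discussed next, follow directly from the Poissonian P(n,k) quoted above; a quick check in Python:

[code]
import math

def P(n, k):
    """Poisson probability of exactly k triggers at average rate n."""
    return n ** k * math.exp(-n) / math.factorial(k)

for n in (1.0, 0.1):
    p0, p1 = P(n, 0), P(n, 1)
    p_multi = 1.0 - p0 - p1      # probability of 2 or more triggers
    print(f"n={n}: P(k=0)={p0:.3f}  P(k=1)={p1:.3f}  "
          f"P(k>1)={p_multi:.4f}  single/multi ratio={p1 / p_multi:.1f}")
[/code]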
Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26~1.4 times more single triggers than multiple triggers. It appears we have greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, so the counts will appear as "exclusive" as we wish. But does this kind of low-intensity exclusivity, which is what your argument keeps returning to, in any way indicate a collapse of the wave packet fragments at the remaining ND-1 detectors as soon as one detector triggers?
Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each detector. Denote by p0=P(m,k=0) the probability that one (specific) detector will not trigger; thus p0=exp(-m). The probability that this particular detector triggers at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high-"exclusivity" (i.e. low-intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.
The probability that none of the ND detectors triggers, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as the no-trigger probability of a single perfect multiphoton (square-law Poissonian) detector capturing all of "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is the binomial coefficient), the probability of exactly k detectors triggering is D(k)=C[ND,k]*p1^k*p0^(ND-k), a binomial distribution with average number of triggers p1*ND. In the low-intensity limit (n->0) and for large ND (corresponding to perfect multiphoton resolution), D(k) becomes (using the Stirling approximation and p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore, the low-intensity exclusivity you keep bringing up is trivial, since it is precisely what independent triggering of each detector predicts, no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).
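A Monte Carlo sketch of this argument (the values of n, ND and the number of tries are arbitrary illustrative choices): each of the ND detectors triggers independently with probability p1=1-exp(-n/ND), and the resulting trigger-count distribution D(k) is compared with the single-detector Poissonian P(n,k):

[code]
import math
import random
from collections import Counter

# ND independent square-law detectors behind the splitter tree, each
# triggering (1 or more "photons") with probability p1 = 1 - exp(-m),
# where m = n/ND.  The number of triggers per try, D(k), is binomial
# and approaches the single-detector Poissonian P(n,k) for large ND.
random.seed(0)
n, ND, tries = 0.1, 64, 200_000  # illustrative values
p1 = 1.0 - math.exp(-n / ND)

counts = Counter()
for _ in range(tries):
    k = sum(1 for _ in range(ND) if random.random() < p1)
    counts[k] += 1

for k in range(4):
    sim = counts[k] / tries
    exact = n ** k * math.exp(-n) / math.factorial(k)  # P(n,k)
    print(f"k={k}: simulated D(k)={sim:.4f}   Poisson P(n,k)={exact:.4f}")
[/code]

At small n the simulated counts reproduce the "exclusive" single-trigger dominance with no collapse of the remote fragments anywhere in the model.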
The real question is how to deal with the apparent sub-Poissonian cases, as in PDC. That is where these kinds of trivial arguments don't help. One has to, as Marshall & Santos do, look at the specific output states and find the precise degree of non-classicality (which they express, for convenience, in the Wigner function formalism). Their ZPF-based ("vacuum fluctuations" in the conventional treatment) detection and coincidence counting model allows for a limited degree of non-classicality in the adjusted counts. Their PDC series of papers shows that for PDC sources all non-classicality is of this apparent type (the same holds for laser/coherent/Poissonian sources and for chaotic/super-Poissonian sources).
Without a universal locality principle, you can only refute the specific overlooked effects of a particular claimed non-classicality setup. This does not mean that nature somehow conspires to thwart non-locality through obscure loopholes. It simply means that a particular experimental design has overlooked some effect, and that the more obscure the effect, the more likely the experiment designer is to overlook it.
In a fully objective scientific system one wouldn't have to bother refuting anything about these flawed experiments, since their data haven't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter, that the failure is a minor technical glitch which future technology will remedy, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.