Cthugha said:
Giustina's does. Kwiat's does not. He even explicitly gives a classical model which models Giustina's results, but not his. It still has another loophole, though.
The "locality" loophole they emphasize as remaining is a red herring. You don't expect that switching randomly polarization angles will change anything much? It never did in other photon experiments. That's relevant only for experiment where the subsystems interact, which is not true for the optical photons.
The real loophole they don't mention is that they are misusing Eberhard's inequality on a state to which it doesn't apply. Their field state is a Poissonian mixture of PDC pairs (which is always the case for a laser source, e.g. see Saleh95), not the pure 2-photon state they claim to have. Short of having their full counts for all settings and detailed calibration data to reconstruct its real parameters more closely (such as the actual distribution in the mixture), the main indicator of this real loophole is a peculiar anomaly (a significant discrepancy with the QM prediction for the pure state) in their data, pointed out in this recent preprint.
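To make the "Poissonian mixture" point concrete, here is a minimal sketch (Python) of how much multi-pair contamination such a source produces per coincidence window; the mean pair number mu is an assumed, purely illustrative value, not a figure from their paper:

from math import exp, factorial

mu = 0.1  # assumed mean number of PDC pairs per coincidence window (illustrative only)

def p(n):
    # Poisson probability of n pairs in one window
    return exp(-mu) * mu**n / factorial(n)

p_multi = 1 - p(0) - p(1)
print(p(1), p_multi, p_multi / p(1))
# for mu = 0.1: ~0.0905 single-pair windows, ~0.0047 multi-pair windows,
# i.e. the multi-pair windows amount to ~5% of the single-pair rate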
As Table 1, p. 2 shows, while almost all of their counts are very close to the QM prediction for the claimed pure state, the critical C(a2,b2) count is 69.79K, which is 5.7 times greater than the QM prediction of 12.25K. The problem is that the excess counts XC = 57.54K are very close to half of their total violation J = -126.7K. Hence there was some source of accidental coincidences (multiple PDC pairs) which stood out on the lowest coincidence count, but is buried in the other 3 coincidence counts, which are more than an order of magnitude larger. The trouble is that those remaining 3 coincidence counts C(a1,b1), C(a1,b2) and C(a2,b1) enter J with negative sign, while C(a2,b2) enters with positive sign. Hence any accidental coincidences with effective net counts XC ~ 57.54K, i.e. of similar magnitude, in the other 3 coincidence values will yield a net effect on J of -3*XC + 1*XC = -2*XC ~ -115K, which is nearly the same as their actual violation.
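For the record, the arithmetic above spelled out as a quick sanity check, using only the figures quoted from the preprint's Table 1:

# all counts in thousands, as quoted above from the preprint's Table 1
C_a2b2_measured = 69.79   # measured C(a2,b2)
C_a2b2_qm       = 12.25   # QM prediction for the claimed pure state
J_reported      = -126.7  # reported value of Eberhard's J

XC = C_a2b2_measured - C_a2b2_qm       # excess on the smallest coincidence count
print(XC)                              # ~57.54
print(C_a2b2_measured / C_a2b2_qm)     # ~5.7

# If a comparable excess XC also hides in the three large counts C(a1,b1),
# C(a1,b2), C(a2,b1), which enter J with negative sign while C(a2,b2) enters
# with positive sign, the net artifact on J is:
print(-3 * XC + 1 * XC, J_reported)    # -2*XC ~ -115 vs. the reported -126.7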
Unfortunately they don't provide the measured count data for all 4 settings or the respective calibrations for each, hence one can't get closer to what went on there. But Eberhard's model (Eb93) is loophole-free only under the assumption of at most a single photon pair per coincidence window, detected in full or partially, but not in excess (such as the one their anomaly shows), i.e. his event space doesn't include double triggers on one side, such as oo, oe, eo and ee on side A, but only o, e and u (undetected) on each side. Hence instead of the 3x3 matrix per setting he is considering, there is in the general case a 7x7 matrix per setting. In a two-channel experiment these double-trigger cases would be detected and subtracted from the accepted counts (leaving a loophole of missing pairs).
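The bookkeeping behind that 3x3 vs. 7x7 statement, as a trivial sketch (outcome labels exactly as in the paragraph above):

from itertools import product

eb93_side = ['o', 'e', 'u']                          # Eb93 per-side outcomes
full_side = ['o', 'e', 'u', 'oo', 'oe', 'eo', 'ee']  # with double triggers allowed

print(len(list(product(eb93_side, eb93_side))))      # 9  -> Eb93's 3x3 matrix per setting
print(len(list(product(full_side, full_side))))      # 49 -> 7x7 matrix in the general case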
Conveniently, Giustina et al. chose a single-channel experiment where they in effect throw away half the data available for measurement (the 2nd detector on each side), so that the only effect of multiple pairs would be those double triggers and two-photon detections on the remaining detector appearing as the single-detection case o, except with artificially amplified detection rates.
Of course, in a single-channel experiment this excess would still show up as a discrepancy from the QM prediction in the coincidence counts, which it does on the count that is most sensitive to any additions, the one that is an order of magnitude lower than the others, C(a2,b2). It would also artificially amplify the ratio of coincidences to singles on either side (by a factor ~(2-f) > 1 for a detector of efficiency f and 2 PDC pairs), which is exactly what one wants in order to obtain the appearance of a violation. Of course, Eb93's expression for J is not valid for multiple pairs, hence any such "amplification" would invalidate the claim, if disclosed. But they didn't even give the basic counts for all 4 settings or backgrounds, let alone a calibration with 2 channels to quantify the excesses (and subtract them).
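The ~(2-f) singles amplification quoted above follows from elementary probability; a one-line check for their stated efficiency range, assuming exactly 2 pairs in the window:

# with 2 photons hitting a detector of efficiency f, P(at least one click) = 1 - (1-f)**2 = f*(2-f)
for f in (0.72, 0.75):
    print(f, (1 - (1 - f)**2) / f, 2 - f)   # amplification relative to the single-pair rate f: ~1.28, ~1.25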
The Kwiat group's replication provides a few more counts for each setting, but fails to violate J>=0 altogether on the reported counts. It also shows exactly the same anomaly in C(a2,b2) as the other experiment.
What's obvious is that they have both tapped into a new loophole peculiar to the Eb93 J>=0 inequality in single-channel experiments, where they get a hidden effective "enhancement" (in the CH74 terminology of loopholes), provided they keep the full data private, don't disclose the full background counts and ignore the anomaly on the lowest coincidence count, just as they did.
While they could fix the experiment without generalizing the Eb93 event space to 7x7, by merely adding the 2nd channel and subtracting the events which fall outside Eb93's 3x3 event space for which his J expression was derived, that would reclassify these events as background counts within the Eb93 model, and Eb93 is extremely sensitive to the smallest background levels (see Fig 1 in Eb93; at their max efficiencies of 75% the max allowed background is ~0.24%). Their excess of ~57K taken as added background would push the required detector efficiency into the 85% range according to Eb93 Table II, which is well beyond their 72-75% efficiencies. Note also that they don't provide actual background levels at all, but merely hint at them indirectly via a visibility figure, which shows that the actual background could be as large as 2.5%; that further boosts the required detector efficiency well beyond what they had, independently of the multiple-pair boost mentioned.
I suppose they can run with this kind of stage magic for a little while, cashing in on the novelty of the loophole, until more complete data emerge or someone publishes an Eb93 update for the 7x7 event space and points out the inapplicability of their expression for J to the actual state they had (for which they don't provide any experimental data either that would establish it).
Even on its face, from a bird's eye view: imagine they first measured both channels and got data with the usual detection efficiency loophole (characteristic of a 2-channel experiment); to fix it, they just turn the efficiency on half of the detectors down to 0, i.e. drop half the data, and suddenly on the remaining data the detector efficiency loophole goes away. It is a bit like a magic show even at that level -- you've got 4 detectors with much too low efficiency, so what to do? Just turn the efficiency down to 0 on half of them and voila, the low efficiency problem is solved. If only the real world worked like that -- you've got 4 credit cards with limits too low to buy what you want at the store, so you just go home, cut two of them into the trash, go back to the store, and suddenly you can buy with the two remaining cards the stuff you couldn't buy before.
Hmm, up to now I thought you were misguided, but you clearly do not understand Glauber's work. The interesting quantities are found in the hierarchy of normalized correlation functions. The first one just gives you the mean. g2 gives you the relative variance. g3 and g4 are proportional to skewness and kurtosis, respectively. This gives you info about the moments of the underlying distribution without running into the problem of detector efficiency in experiments. Most importantly, this really has nothing to do with what Glauber does between 2.60 and 2.61. There he just restricts his discussion to binary detectors in their ground state by neglecting all events where the same detector fires more than once.
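(A minimal numerical illustration of that hierarchy, assuming Poissonian vs. thermal photocount statistics with an arbitrary mean photon number; the normalization is what makes the gk insensitive to detector efficiency, modeled here as binomial thinning:)

import numpy as np

rng = np.random.default_rng(0)
nbar = 3.0                                        # assumed mean photon number (illustrative)

def g(k, n):
    # normalized k-th factorial moment <n(n-1)...(n-k+1)> / <n>**k
    f = np.ones_like(n, dtype=float)
    for j in range(k):
        f *= (n - j)
    return f.mean() / n.mean()**k

poisson = rng.poisson(nbar, 1_000_000)                  # coherent-light photocounts
thermal = rng.geometric(1 / (1 + nbar), 1_000_000) - 1  # thermal (Bose-Einstein) photocounts
lossy   = rng.binomial(thermal, 0.5)                    # the same thermal light seen at 50% efficiency

for k in (2, 3, 4):
    print(k, g(k, poisson), g(k, thermal), g(k, lossy))
# ~1 for coherent light, ~k! for thermal light, and the 50% loss leaves the gk unchanged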
You seem to keep missing the point and going off way downstream, to applications of Gn() in his lectures, away from the eq. (2.60) -> (2.61) transition I was talking about.

The point that (2.60) -> (2.61) makes is that Gn() does not represent the independent, raw event counts taken on n detectors, as the elementary QM MT and Bell assume in the derivation of BI (by requiring the factorization of probabilities corresponding to C4()), but rather that Gn() is a non-locally filtered quantity, extracted by counting into Gn() only certain combinations of n events while discarding all other combinations of n events. This operation (the transition from all event counts in 2.60 to the filtered counts in 2.61), whatever its practical usefulness and rationale, reduces n^n terms to n! terms and is explicitly non-local.
Hence, G4() is not what the elementary QM MT claims it to be in BI derivations -- simply the set of independent counts on 4 detectors -- but rather a quantity extracted via non-local subtractions (as is evident in any experiment). That of course invalidates the probability factorization requirement used in the BI derivation, since it is trivial to violate BI in a classical theory if one can perform the non-local subtractions (exclude or include events on detector A based on events on remote detector B).
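To make that last point concrete, here is a deliberately crude toy (not a model of any experiment, and using the CHSH form of the BI for definiteness): purely local, setting-independent coin flips on each side, followed by a non-local post-selection that keeps or discards a coincidence based on the outcomes at both stations. The surviving "correlations" then exceed the local bound trivially.

import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# purely local, setting-independent outcomes: each side flips a +/-1 coin per setting
A = rng.choice([-1, 1], size=(N, 2))   # column j = outcome side A would report for setting a_j
B = rng.choice([-1, 1], size=(N, 2))   # column j = outcome side B would report for setting b_j

# non-local "filtering": keep a coincidence only if the product of the two remote
# outcomes has the desired sign for that setting pair
target = {(0, 0): +1, (0, 1): +1, (1, 0): +1, (1, 1): -1}

E = {}
for (i, j), sign in target.items():
    prod = A[:, i] * B[:, j]
    E[(i, j)] = prod[prod == sign].mean()   # correlation computed on the kept subset only

S = E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]
print(E, S)   # each kept correlation is exactly +/-1, so S = 4 > 2 (the local CHSH bound)

Nothing non-classical happens in the raw coin flips; the apparent violation is manufactured entirely by which joint events are allowed into the tally.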
You keep going off on tangents about applied usefulness and rationales for the term dropping, which are all good and fine but have nothing to do with the point being made: Gn() is not what the elementary QM MT assumes it to be, the mere combination of n local, independent detection events. Gn() is obviously not that kind of quantity but a non-locally extracted function.
Operator ordering is not a filtering mechanism but just includes the fact that the detection of a photon destroys it. If you think that it is filtering that single-photon detection events do not show up in two-photon counting rates, this is just odd. For detectors which do not destroy photons, you need to use a different ordering. Mandel/Wolf has a short chapter on this.
Of course it is: it removes the contribution of vacuum photons, i.e. it operationally corresponds to the subtraction of background/dark counts due to vacuum/ZPF photons.
No. You still claim that without showing why it should be so.
The response is the point above -- the removal of the (n^n - n!) product terms is a non-local operation, since the factors in each term represent events "we are not interested in" at remote locations. The usefulness and application of the procedure, or any other rationale, are irrelevant to the point being made, namely what kind of quantity Gn() is: it is clearly not the result of simply taking together counts obtained independently at n locations, since it is computed by removing by hand the contributions of the (n^n - n!) product terms.
Are you actually taking the opposite position -- that Gn() is a mere combination count of n events, each obtained locally and independently (i.e. without any regard for the events at the other n-1 locations), as QM MT takes it to be (at least for the derivation of BI)? How can that position be consistent with the subtractions corresponding to the discarded (n^n - n!) terms, whose factors are specific combinations of n absorption events at n locations? The QM independence assumption obviously doesn't hold for the Gn() defined in QED as a non-locally filtered function.
While the Thorn paper is and has always been quite a bad one, physics has been way ahead of that and detectors have become way better. Antibunching has been demonstrated e.g. in: Nature Communications 3, 628 (2012), doi:10.1038/ncomms1637; Phys. Rev. Lett. 108, 093602 (2012); Optics Express 19, 4182-4187 (2011); and a gazillion other papers.
I don't have access to paywalled physics papers at my current job (I work as a researcher in the computer industry, not in academia). If you attach a copy I can check it out. If it doesn't explicitly provide non-filtered counts, or is silent altogether on the subject of loopholes (i.e. how exactly it gets around and defeats the ZPF-based classical models), you don't need to bother; I have seen plenty of that kind that preach to the choir.
That is wrong. G2 consists of the two-photon detection events, so you only consider the events in which both detectors fire. For G2 you are always only interested in two-photon events. This does not at all depend on the field you try to measure. G2, however, does not tell you much about non-classicality.

Aren't G2() and Gn() derived for a general field, not just for the n-photon number state, as you seem to suggest above? The n in Gn() has nothing to do with the field state it is averaged over (which can be any state, such as a photon number state, a coherent state, etc.).
G2() is 0 for single-photon states (e.g. Glauber's lectures, sec. 2.6, pp. 51-52), which is what Thorn et al. were claiming to demonstrate experimentally (see their paper; note that they use the normalized correlation function g2 rather than the non-normalized G2 discussed here). As noted, this is tautologically so, by virtue of the subtraction of the "wrong" numbers of photons, hence there is nothing to demonstrate experimentally. Their entire experiment is thus based on a misreading of what QED predicts. There is no prediction of G2() = 0 on non-filtered data, as they imagine, since G2() is by its basic definition a non-locally filtered function (extracted from all the events on 2 detectors via the non-local subtractions, cf. 2.60 -> 2.61).
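For reference, the textbook statement itself -- that the normally ordered G2() vanishes for a single-photon number state -- can be checked in one line; a minimal numerical sketch in a truncated Fock space (my own illustration, not taken from any of the papers discussed):

import numpy as np

N = 5                                       # Fock-space truncation (plenty for |1>)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator in the number basis
adag = a.conj().T

ket1 = np.zeros(N)
ket1[1] = 1.0                               # single-photon number state |1>

G2 = ket1 @ adag @ adag @ a @ a @ ket1      # <1| a+ a+ a a |1>  (normally ordered)
n1 = ket1 @ adag @ a @ ket1                 # <1| a+ a |1> = 1
print(G2, n1)                               # 0.0 1.0, i.e. g2 = G2 / n1**2 = 0 for |1>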
No, that is not irrelevant. Otherwise you could simply use a device as a detector which throws away the input and simply puts out the same fixed sequence of designed "detection events" which lead to g2<1 every time. Which is very similar to what the "counterexamples" actually do.
That's ridiculous even for a strawman, since such a detector would fail to show the critical interference aspects of the same setup. The combination of the exclusivity g2<1 and interference (if one chooses to remove the detector from each path and put it at some point of overlap) is what makes the claim non-classical/paradoxical. Exclusivity on its own or interference on its own are trivially classical properties (e.g. of classical particles or waves, respectively). The SED detectors do function, although they may show some characteristics that differ from those of a particular physical detector.
For an experimental absolute non-classicality claim (i.e. "this set of data cannot be generated by any local model"), as long as the local model reproduces the data provided (including exclusivity + interference in the beam-splitter/double-slit setup, since it is the combination that is non-classical), it falsifies the absolute impossibility claim by definition. It doesn't matter how it gets the data, as long as it does so via local computations and is capable of reproducing the class of experimental data for which impossibility is claimed. After all, no one knows how nature computes it all anyway. The algorithms that make up our current physics are merely transient, coarse-grained approximations of some aspects of the overall pattern (a pattern which includes the life forms and intelligent beings trying to figure it out) being computed by the chief programmer of the universe.