I don't see what's weird about the double slit experiment

  • #26
Cthugha
Science Advisor
1,956
319
That's mixing up apples and oranges. For people seeking to come up with the next physics at the Planck scale, such as 't Hooft or Wolfram, it is quite relevant whether cellular automata, Planckian networks, or some other distributed, local computational model could at least in principle replicate the empirical facts of existing physics.
So the mystery of the double slit is now physics at the Planck scale? No.

There was no "erroneous statement" that anything you said pointed out. If I missed it, you are welcome to provide a link to the specific demonstration of such an error that you claim to have produced in the discussion.
It is in the material you have not responded to. You claimed there is no demonstration of non-classicality. This is not even true for your "genuine" non-classicality, as there is antibunching.

The following claims are wrong:
- Measuring gn requires non-local filtering. For one, it is not filtering to expand the photon number distribution into a series of moments, just as a Fourier expansion or a Taylor expansion is not filtering. For gn you instead expand in the orders of the distribution: mean, relative variance, skewness, kurtosis and so on. You can restrict the analysis to low orders because the relative variance is already enough to demonstrate sub-Poissonian behaviour. Even the use of a beam splitter and several detectors is not necessary. As shown in the link in my last post, antibunching can be and has been demonstrated using a single detector. One could even use high-photon-number resolving detectors.

- You claimed there are classical counterexamples for actually measured non-classical g2-values. This is not correct. There are examples of simultaneously proposed pairs of light fields and detectors (usually ones with a threshold) which would yield g2-values considered non-classical for other detectors. However, each kind of detector has its own limit of where unambiguous non-classicality starts. Showing that you can get g2 below 1 with an arbitrary detector is not a counterexample. What people would need to show is that the hypothetical detector used in the modeling is indeed a good model of the real detector used. Once this is done, one can start thinking about counterexamples. Unfortunately, all the detectors used for modeling in SED share the same Achilles heel: they cannot explain well how detectors fire, and they have a problem with firing on random noise vs. not firing on the ZPF. See e.g. the Carmichael paper I cited earlier.
Antibunching in sub-Poissonian light as seen in experiments is a demonstration of non-classicality, and there still is no classical counterexample explaining the measurements.
 
  • #27
187
0
So the mystery of the double slit is now physics at the Planck scale? No.
No to the strawman which has no relation to what I said.

It is in the material you have not responded to. You claimed there is no demonstration of non-classicality. This is not even true for your "genuine" non-classicality, as there is antibunching.
We're talking about genuine non-classicality, not about word play that attaches the label 'non-classical' to phenomena which can be simulated via local computation (i.e. local realist models), such as QO 'non-classicality', which can be modeled entirely by SED, as even Yariv admits in his textbook.

The only genuine non-classicality would be a BI violation, and that hasn't happened either (and no, the latest Giustina et al. paper making such a claim has an unmentioned loophole and anomalies beyond the locality loophole they acknowledge; similarly, the newer preprint from Kwiat's group about the same single-channel experiment shows the same anomaly, and its raw data don't even violate the J>=0 inequality).

The following claims are wrong:
- Measuring gn requires non-local filtering. For one, it is not filtering to expand the photon number distribution into a series of moments, just as a Fourier expansion or a Taylor expansion is not filtering. For gn you instead expand in the orders of the distribution: mean, relative variance, skewness, kurtosis and so on.
You seem completely lost as to what Glauber's derivation is doing. It has nothing to do with an expansion of the "photon number distribution into a series of moments." He is expanding the transition probabilities for n detectors interacting with a general quantized EM field, expressed in the interaction picture, in powers of the interaction Hamiltonian. The starting expression (2.60) has n^n terms (it already implicitly includes the single-detector filtering from the earlier chapter via the operator ordering rule); he then provides a rationale for eliminating most of them, ending with n! terms in eq. (2.61), which is ~n^n / e^n, i.e. he retains roughly a (1/e^n)-th fraction of the original terms.

It doesn't actually matter what his rationale for the filtering (aka global term dropping) is; it is plainly non-local filtering on its face, since the elimination of events on one detector depends on the events on the other detectors.

The resulting Gn() is what is being measured, such as G4() in BI violation experiments. But that G4() is the result of non-local filtering which goes from 4^4=256 terms down to 4!=24 terms, simply by requiring that some combinations of multi-detector events not be counted as contributions to the G4() counts.
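To keep the bookkeeping explicit, here is a quick numerical check of that term counting (just a sketch of the combinatorics quoted above, nothing more):

```python
from math import exp, factorial, pi, sqrt

# Sanity check of the term counting above: n^n terms in (2.60) vs the n! terms
# kept in (2.61). By Stirling, n! ~ sqrt(2*pi*n)*(n/e)^n, so the kept fraction
# n!/n^n ~ sqrt(2*pi*n)/e^n, i.e. it falls off essentially like 1/e^n.
for n in (2, 4, 8):
    total = n ** n            # all combinations of n detection events
    kept = factorial(n)       # terms retained in (2.61)
    stirling = sqrt(2 * pi * n) * exp(-n)
    print(f"n={n}: {total} -> {kept}, fraction {kept / total:.4f} (Stirling ~{stirling:.4f})")
# n=4 reproduces the 4^4 = 256 -> 4! = 24 counting quoted for G4().
```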

Note that it is G4() which contains the QM prediction (the 'cos^2() law'). But the above non-local filtering (term dropping) involved in defining and counting G4() violates the independence assumption in Bell's model, i.e. one cannot require factorization of probabilities for remote events which are being filtered non-locally (the events contributing to the G4() counts are selected based on the results on all 4 detectors). Hence the predicted QM violation is entirely unremarkable, since any local classical model can do the same, provided it can filter results on detector A based on results on detector B.


- You claimed there are classical counterexamples for actually measured non-classical g2-values. This is not correct.
Which experiment demonstrates g2 < 1 on non-filtered data? The latest one trying to do that was the one from 2004 by Thorn et al., discussed earlier, in which they contrive a separate setup to prove the exclusivity of detection events.

But for that setup (and that one only) they misconfigure the TAC timings (grossly violating the manufacturer's specified delays) just so that it cannot detect practically any coincidences, resulting in apparent exclusivity, i.e. g2 < 1 without subtractions/filtering (they went out of their way to emphasize that non-filtering feature of the experiment).

Despite being asked by several readers to clarify the timing configuration anomaly, they never stated what the actual settings and the resulting counts with such corrected settings were, but merely kept repeating 'trust me, it was correct' (while acknowledging that it was different from the published one). Strangely, the wrong TAC setting is emphasized in several places in the published paper (and in their website instructions for other universities who wish to use the same demonstration for their students) as important for the experiment. No correction was ever issued. Hence, taking into account everything published and stated in emails, it was a cheap stage trick designed to wow the undergrads.

The mere fact, though, that they had to use such cheap gimmicks in 2004 to get g2<1 on raw counts should tell you that no one knows how to make g2<1 happen without filtering (and the non-local filtering would defeat the very point of the demonstration; I can see students, after seeing how the data is filtered, saying "duh, so what's the big deal about that?").

Note also that Glauber's derivation of Gn() shows that g2=0 doesn't happen without non-local filtering, hence no wonder they couldn't get it either. Namely, according to his term-dropping prescription in going from (2.60) -> (2.61), for G2() measured on a 1-photon state you have to drop all events in which both detectors or no detectors trigger (since that would be a 2-photon or 0-photon state which "we are not interested in"); you go from 2^2=4 terms to 2!=2 terms, i.e. you get perfect exclusivity by definition of G2(), hence g2=0 tautologically, but entirely via non-local filtering.

There are examples of simultaneously proposed pairs of light fields and detectors (usually ones with a threshold) which would yield g2-values considered non-classical for other detectors. However, each kind of detector has its own limit of where unambiguous non-classicality starts. Showing that you can get g2 below 1 with an arbitrary detector is not a counterexample. What people would need to show is that the hypothetical detector used in the modeling is indeed a good model of the real detector used. Once this is done, one can start thinking about counterexamples. Unfortunately, all the detectors used for modeling in SED share the same Achilles heel: they cannot explain well how detectors fire, and they have a problem with firing on random noise vs. not firing on the ZPF. See e.g. the Carmichael paper I cited earlier.
Antibunching in sub-Poissonian light as seen in experiments is a demonstration of non-classicality, and there still is no classical counterexample explaining the measurements.
For the purpose of a counter-example to an absolute non-classicality claim, it is irrelevant whether the SED detector models are generally valid or not. If someone claims the number 57 is prime, it suffices to tell them 'try dividing by 3' to invalidate the claim, even though the messenger may not have a math degree or know much math at all. The counter-example does all it needs to falsify the claim '57 is prime'.
 
Last edited:
  • #28
Cthugha
Science Advisor
1,956
319
The only genuine non-classicality would be a BI violation, and that hasn't happened either (and no, the latest Giustina et al. paper making such a claim has an unmentioned loophole and anomalies beyond the locality loophole they acknowledge; similarly, the newer preprint from Kwiat's group about the same single-channel experiment shows the same anomaly, and its raw data don't even violate the J>=0 inequality).
Giustina's does. Kwiat's does not. He even explicitly gives a classical model which models Giustina's results, but not his. It still has another loophole, though.

You seem completely lost as to what Glauber's derivation is doing. It has nothing to do with an expansion of the "photon number distribution into a series of moments." He is expanding the transition probabilities for n detectors interacting with a general quantized EM field, expressed in the interaction picture, in powers of the interaction Hamiltonian. The starting expression (2.60) has n^n terms (it already implicitly includes the single-detector filtering from the earlier chapter via the operator ordering rule); he then provides a rationale for eliminating most of them, ending with n! terms in eq. (2.61), which is ~n^n / e^n, i.e. he retains roughly a (1/e^n)-th fraction of the original terms.
Hmm, up to now I thought you were misguided, but you clearly do not understand Glauber's work. The interesting quantities are found in the hierarchy of normalized correlation functions. The first one just gives you the mean. g2 gives you the relative variance. g3 and g4 are proportional to skewness and kurtosis, respectively. This gives you info about the moments of the underlying distribution without running into the problem of detector efficiency in experiments. Most importantly, this really has nothing to do with what Glauber does between 2.60 and 2.61. There he just restricts his discussion to binary detectors in their ground state by neglecting all events where the same detector fires more than once.

Operator ordering is not a filtering mechanism; it just includes the fact that the detection of a photon destroys it. If you think that it is filtering that single-photon detection events do not show up in two-photon counting rates, this is just odd. For detectors which do not destroy photons, you need to use a different ordering. Mandel/Wolf has a short chapter on this.

It doesn't actually matter at all what his rationale for filtering (aka global term dropping) is, that is plainly a non-local filtering on its face, since elimination of events on one detector depends on the events on the other detectors.
No. You still claim that without showing why it should be so.

The resulting Gn() is what is being measured, such as G4() in BI violation experiments. But that G4() is result of non-local filtering which goes from 4^4=256 terms down to 4!=24 terms by simply requiring that some combinations of multi-detector events not be counted as contribution to G4() counts.
This is a strawman. For binary counters like SPADs, these terms do not exist. A binary detector cannot fire more than once at the same time. Therefore the terms where it fires 4 times at once are not considered.

Which experiment demonstrates g2 < 1 on non-filtered data? The latest one trying to do that was the one from 2004 by Thorn et al., discussed earlier, in which they contrive a separate setup to prove the exclusivity of detection events.
While the Thorn paper is and has always been quite a bad one, physics has been way ahead of that and detectors have become way better. Antibunching has been demonstrated e.g. in:
Nature Communications 3, Article number: 628 (2012) doi:10.1038/ncomms1637, Phys. Rev. Lett. 108, 093602 (2012), Optics Express, Vol. 19, Issue 5, pp. 4182-4187 (2011) and a gazillion of other papers.

The mere fact, though, that they had to use such cheap gimmicks in 2004 to get g2<1 on raw counts should tell you that no one knows how to make g2<1 happen without filtering (and the non-local filtering would defeat the very point of the demonstration; I can see students, after seeing how the data is filtered, saying "duh, so what's the big deal about that?").
This was a paper published in a pretty low-level journal and was, even at that time, way behind the technical state of the art. I have seen enough raw data already to know that your claim is without substance.

Note also that Glauber's derivation of Gn() shows that g2=0 doesn't happen without non-local filtering, hence no wonder they couldn't get it either. Namely, according to his term-dropping prescription in going from (2.60) -> (2.61), for G2() measured on a 1-photon state you have to drop all events in which both detectors or no detectors trigger (since that would be a 2-photon or 0-photon state which "we are not interested in"); you go from 2^2=4 terms to 2!=2 terms, i.e. you get perfect exclusivity by definition of G2(), hence g2=0 tautologically, but entirely via non-local filtering.
That is wrong. G2 consists of the two-photon detection events, so you only consider the events in which both detectors fire. For G2 you are always only interested in two-photon events. This does not at all depend on the field you try to measure. G2, however, does not tell you much about non-classicality. g2 does, which also includes the other detection events in the normalization (g2 = <:n^2:>/<n>^2, where the : denotes normal ordering of the underlying field operators and the mean photon number is found by taking ALL detection events into account). This is simply a detector-efficiency-invariant measurement of the relative variance of the photon number distribution.
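To illustrate that last sentence, here is a minimal sketch (my own toy illustration, using ideal photon-number distributions and no detector model at all) of g2 = <n(n-1)>/<n>^2 for a few standard fields:

```python
import math

def g2(pn):
    """g2(0) = <n(n-1)>/<n>^2 for a photon-number distribution {n: p_n}."""
    mean = sum(n * p for n, p in pn.items())
    return sum(n * (n - 1) * p for n, p in pn.items()) / mean ** 2

mu, N = 2.0, 80  # mean photon number and truncation of the distributions
poisson = {n: math.exp(-mu) * mu ** n / math.factorial(n) for n in range(N)}
thermal = {n: mu ** n / (1 + mu) ** (n + 1) for n in range(N)}
fock1 = {1: 1.0}  # ideal single-photon state

print(g2(fock1))    # 0.0  -> sub-Poissonian (antibunched)
print(g2(poisson))  # ~1.0 -> coherent light (Poissonian)
print(g2(thermal))  # ~2.0 -> thermal light (bunched)
```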

For the purpose of a counter-example to an absolute non-classicality claim, it is irrelevant whether the SED detector models are generally valid or not. If someone claims the number 57 is prime, it suffices to tell them 'try dividing by 3' to invalidate the claim, even though the messenger may not have a math degree or know much math at all. The counter-example does all it needs to falsify the claim '57 is prime'.
No, that is not irrelevant. Otherwise you could simply use as a detector a device which throws away the input and puts out the same fixed, designed sequence of "detection events" which leads to g2<1 every time. Which is very similar to what the "counterexamples" actually do.
 
Last edited:
  • #29
974
47
Despite decades of analysis and thought about "the experiment", at the time Feynman was giving his lecture about it (Vol. 1, 37-4, An experiment with electrons), he mentioned that "This experiment has never been done in just this way."

It was around this time, in 1961, that Claus Jönsson became the first to do it with something other than light (electrons).
 
  • #30
825
54
In case others haven't come across them, there are macroscopic analogs of single-particle diffraction from Couder's group:

Single-Particle Diffraction and Interference at a Macroscopic Scale
http://people.isy.liu.se/jalar/kurser/QF/assignments/Couder2006.pdf

Or Video:
https://www.youtube.com/watch?v=W9yWv5dqSKk

There are even macroscopic bouncing-droplet experiments showing analogs of tunneling, quantized orbits, non-locality, superposed states, etc.:

A macroscopic-scale wave-particle duality
http://www.physics.utoronto.ca/~colloq/Talk2011_Couder/Couder.pdf
 
Last edited:
  • #31
187
0
Giustina's does. Kwiat's does not. He even explicitly gives a classical model which models Giustina's results, but not his. It still has another loophole, though.
The "locality" loophole they emphasize as remaining is a red herring. You don't expect that switching randomly polarization angles will change anything much? It never did in other photon experiments. That's relevant only for experiment where the subsystems interact, which is not true for the optical photons.

The real loophole they don't mention is that they are misusing Eberhard's inequality on a state to which it doesn't apply. Their field state is a Poissonian mixture of PDC pairs (which is always the case for a laser source, e.g. see Saleh95), not the pure 2-photon state they claim to have. Short of having their full counts for all settings and detailed calibration data to reconstruct its real parameters more closely (such as the actual distribution in the mixture), the main indicator of this real loophole is a peculiar anomaly (a significant discrepancy with QM for the pure state) in their data, pointed out in this recent preprint.

As Table 1, p. 2 shows, while almost all of their counts are very close to the QM prediction for the claimed pure state, the critical C(a2,b2) count is 69.79K, which is 5.7 times greater than the QM prediction of 12.25K. The problem is that the excess counts XC = 57.54K are very close to half of their total violation J = -126.7K. Hence there was some source of accidental coincidences (multiple PDC pairs) which stood out on the lowest coincidence count but is buried in the other 3 coincidence counts, which are more than an order of magnitude larger. The problem with that is that those remaining 3 coincidence counts C(a1,b1), C(a1,b2) and C(a2,b1) enter J with a negative sign, while C(a2,b2) enters with a positive sign. Hence any accidental coincidences with effective net counts XC ~ 57.54K, i.e. of similar magnitude, in the other 3 coincidence values will yield a net effect on J of -3*XC + 1*XC = -2*XC ~ -115K, which is nearly the same as their actual violation.
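For transparency, the arithmetic behind the above can be checked in a few lines (the figures are the ones quoted above, taken at face value; this verifies only the arithmetic of the argument, not the interpretation):

```python
# Checking the arithmetic of the argument above, using the figures as quoted
# (all in units of 1K counts) and the sign convention stated for J.
C_measured = 69.79   # reported C(a2,b2)
C_qm = 12.25         # QM prediction quoted for the pure state
J_reported = -126.7

ratio = C_measured / C_qm        # ~5.7
XC = C_measured - C_qm           # excess ~57.5
# Per the argument: three coincidence counts enter J negatively, one positively,
# so a common excess XC in all four shifts J by -3*XC + 1*XC = -2*XC.
shift = -2 * XC                  # ~ -115, comparable to J_reported
print(ratio, XC, shift, J_reported)
```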

Unfortunately they don't provide the measured count data for all 4 settings or the respective calibrations for each, hence one can't get closer to what went on there. But Eberhard's model (Eb93) is loophole-free only under the assumption of at most a single photon pair per coincidence window, detected fully or partially, but not in excess (such as the excess their anomaly shows), i.e. his event space doesn't include double triggers on one side, such as oo, oe, eo and ee on side A, but only o, e and u (undetected) on each side. Hence, instead of the 3x3 matrix per setting he is considering, there is in the general case a 7x7 matrix per setting. In a two-channel experiment these double-trigger cases would be detected and subtracted from the accepted counts (leaving a loophole of missing pairs).

Conveniently, Giustina et al. chose a single-channel experiment, where they in effect throw away half the data available for measurement (the 2nd detector on each side), and where the only effect of multiple pairs would be that double triggers and two-photon detections on the remaining detector appear as the single-detection case o, except with artificially amplified detection rates.

Of course, in a single-channel experiment this excess would still show up as a discrepancy from the QM prediction in the coincidence counts, which it does on the count most sensitive to any additions, the one that is an order of magnitude lower than the others, C(a2,b2). It would also artificially amplify the ratio of coincidences vs. singles on either side (by a factor ~(2-f) > 1 for a detector of efficiency f and 2 PDC pairs), which is exactly what one wants in order to obtain the appearance of a violation. Of course, Eb93's expression for J is not valid for multiple pairs, hence any such "amplification" would invalidate the claim, if disclosed. But they didn't even give the basic counts for all 4 settings or the backgrounds, let alone a calibration with 2 channels to quantify the excesses (and subtract them).

The replication by Kwiat's group provides a few more counts for each setting, but fails altogether to violate J>=0 on the reported counts. It also shows exactly the same anomaly on C(a2,b2) as the other experiment.

What's obvious is that they have both tapped into a new loophole peculiar to the Eb93 J>=0 inequality in single-channel experiments, where they get a hidden effective "enhancement" (in the CH74 terminology of loopholes), provided they keep the full data private, don't disclose the full background counts, and ignore the anomaly on the lowest coincidence counts, just as they did.

While they could fix the experiment without generalizing the Eb93 event space to 7x7, by merely adding the 2nd channel and subtracting the events which fall outside Eb93's 3x3 event space (for which his J expression was derived), that would reclassify these events into the background counts within the Eb93 model, and Eb93 is extremely sensitive to the smallest background levels (see Fig. 1 in Eb93; at their maximum efficiency of 75% the maximum allowed background is ~0.24%). Their excess of ~57K, taken as added background, would push the required detector efficiency into the 85% range according to Eb93 Table II, which is well beyond their 72-75% efficiencies. Note also that they don't provide the actual background levels at all, but merely hint at them indirectly via the visibility figure, which shows that the actual background could be as large as 2.5%; this further boosts the required detector efficiency well beyond what they had, independently of the multiple-pair boost mentioned above.

I suppose they can run with this kind of stage magic for a little while, cashing in on the novelty of the loophole, until more complete data emerge or someone publishes an Eb93 update for the 7x7 event space and points out the inapplicability of their expression for J to the actual state they had (for which they don't provide any experimental data establishing it either).

Even on its face, from a bird's-eye view: imagine they first measured both channels and got data with the usual detection efficiency loophole (characteristic of 2-channel experiments); to fix it, they just turn the efficiency of half the detectors to 0, i.e. drop half the data, and suddenly on the remaining data the detector efficiency loophole goes away. It is a bit like a magic show even at that level -- you have 4 detectors with much too low efficiency, so what do you do? Just turn the efficiency down to 0 on half of them and voila, the low-efficiency problem is solved. If only the real world worked like that -- you have 4 credit cards with limits too low to buy what you want at the store, so you just go home, cut two of them up, go back to the store, and suddenly you can buy with the two remaining ones the stuff you couldn't before.

Hmm, up to now I thought you were misguided, but you clearly do not understand Glauber's work. The interesting quantities are found in the hierarchy of normalized correlation functions. The first one just gives you the mean. g2 gives you the relative variance. g3 and g4 are proportional to skewness and kurtosis, respectively. This gives you info about the moments of the underlying distribution without running into the problem of detector efficiency in experiments. Most importantly, this really has nothing to do with what Glauber does between 2.60 and 2.61. There he just restricts his discussion to binary detectors in their ground state by neglecting all events where the same detector fires more than once.
You seem to keep missing the point and going way off downstream, to applications of Gn() in his lectures, away from the eq. (2.60) -> (2.61) transition I was talking about.

The point that (2.60) -> (2.61) makes is that Gn() does not represent the independent, raw event counts taken on n detectors, as QM MT and Bell assume in his derivation of BI (by requiring factorization of the probabilities corresponding to C4()); rather, Gn() is a non-locally filtered quantity, extracted by counting into Gn() only certain combinations of n events while discarding all other combinations.

This operation (or transition from all event counts in 2.60 to filtered counts in 2.61), whatever its practical usefulness and rationale, reduces n^n terms to n! terms and is explicitly non-local.

Hence G4() is not what elementary QM MT claims it to be in BI derivations -- simply the set of independent counts on 4 detectors -- but rather a quantity extracted via non-local subtractions (as is evident in any experiment). That of course invalidates the probability factorization requirement used in the BI derivation, since it is trivial to violate BI in a classical theory if one can perform the non-local subtractions (exclude or include events on detector A based on events on remote detector B).

You keep going off on tangents about applied usefulness and rationales for the term dropping, which are all good and fine but have nothing to do with the point being made -- Gn() is not what elementary QM MT assumes it to be, namely the mere combination of n local, independent detection events. Gn() is obviously not that kind of quantity but a non-locally extracted function.

Operator ordering is not a filtering mechanism; it just includes the fact that the detection of a photon destroys it. If you think that it is filtering that single-photon detection events do not show up in two-photon counting rates, this is just odd. For detectors which do not destroy photons, you need to use a different ordering. Mandel/Wolf has a short chapter on this.
Of course it is; it removes the contribution of vacuum photons, i.e. it operationally corresponds to the subtraction of background/dark counts due to vacuum/ZPF photons.

No. You still claim that without showing why it should be so.
The response is the point above -- the removal of the (n^n - n!) product terms is a non-local operation, since the factors in each term represent events "we are not interested in" at remote locations. The usefulness and application of the procedure, or any other rationale, are irrelevant to the point being made, i.e. what kind of quantity Gn() is -- it is clearly not the result of simply collating counts taken independently at n locations, since it is computed by removing by hand the (n^n - n!) product term contributions.

Are you actually taking the opposite position -- that Gn() is a mere combination count of n events, each obtained locally and independently (i.e. without any regard for the events on the other n-1 locations), as QM MT takes it to be (at least for derivation of BI)?

How can that position be consistent with the subtractions corresponding to discarded (n^n-n!) terms, the factors of which are specific combinations of n absorption events at n locations? The QM independence assumption obviously doesn't hold for the Gn() defined in QED as non-locally filtered function.

While the Thorn paper is and has always been quite a bad one, physics has been way ahead of that and detectors have become way better. Antibunching has been demonstrated e.g. in:
Nature Communications 3, Article number: 628 (2012) doi:10.1038/ncomms1637, Phys. Rev. Lett. 108, 093602 (2012), Optics Express, Vol. 19, Issue 5, pp. 4182-4187 (2011) and a gazillion of other papers.
I don't have access to paywalled physics papers at my current job (I work as a researcher in the computer industry, not in academia). If you attach a copy I can check it out. If it doesn't explicitly provide non-filtered counts, or is silent altogether on the subject of loopholes (i.e. how exactly it gets around and defeats the ZPF-based classical models), you don't need to bother; I have seen plenty of that kind that preach to the choir.

That is wrong. G2 consists of the two-photon detection events
G2() or Gn() are derived for a general field, not just for the n-photon number state in the case of Gn(), as you seem to suggest above. The n in Gn() has nothing to do with the field state it is averaged over (which can be any state, such as a photon number state, a coherent state, etc.).

, so you only consider the events in which both detectors fire. For G2 you are always only interested in two-photon events. This does not at all depend on the field you try to measure. G2, however, does not tell you much about non-classicality.
G2() is 0 for single-photon states (e.g. Glauber's lectures, sec. 2.6, pp. 51-52), which is what Thorn et al. were claiming to demonstrate experimentally (see their paper; note that they use the normalized correlation function g2 rather than the non-normalized G2 discussed here). As noted, this is tautologically so, by virtue of the subtraction of events with the wrong number of photons, hence there is nothing to demonstrate experimentally. Their entire experiment is thus based on a misreading of what QED predicts. There is no prediction of G2() = 0 on non-filtered data, as they imagine, since G2() is by its basic definition a non-locally filtered function (extracted from all events on 2 detectors via non-local subtractions, cf. 2.60 -> 2.61).

No, that is not irrelevant. Otherwise you could simply use as a detector a device which throws away the input and puts out the same fixed, designed sequence of "detection events" which leads to g2<1 every time. Which is very similar to what the "counterexamples" actually do.
That's ridiculous even for a strawman, since such a detector would fail to show the critical interference aspects of the same setup. The combination of the exclusivity of g2<1 and interference (if one chooses to remove the detector from each path and put one at some point of overlap) is what makes the claim non-classical/paradoxical. Exclusivity on its own or interference on its own are trivially classical properties (e.g. of classical particles or waves). The SED detectors do function, although they may show some characteristics that differ from those of a particular physical detector.

For an experimental absolute non-classicality claim (i.e. 'this set of data cannot be generated by any local model'), as long as the local model reproduces the data provided (including exclusivity + interference in the beam splitter/double slit setup, since it is the combination which is non-classical), it falsifies the absolute impossibility claim by definition. It doesn't matter how it gets them, as long as it does so via local computations and is capable of reproducing the class of experimental data for which impossibility is claimed. After all, no one knows how nature computes it all anyway. The algorithms that make up our current physics are merely transient, coarse-grained approximations of some aspects of the overall pattern, which includes life forms and intelligent beings trying to figure it out, being computed by the chief programmer of the universe.
 
Last edited:
  • #32
Cthugha
Science Advisor
1,956
319
The "locality" loophole they emphasize as remaining is a red herring. You don't expect that switching randomly polarization angles will change anything much? It never did in other photon experiments. That's relevant only for experiment where the subsystems interact, which is not true for the optical photons.
Sorry, I am still not interested in BI, but maybe your point is of interest to others.

You seem to keep missing the point and going way off downstream, to applications of Gn() in his lectures, away from the eq. (2.60) -> (2.61) transition I was talking about.

The point that (2.60) -> (2.61) makes is that Gn() does not represent the independent, raw event counts taken on n detectors, as QM MT and Bell assume in his derivation of BI (by requiring factorization of the probabilities corresponding to C4()); rather, Gn() is a non-locally filtered quantity, extracted by counting into Gn() only certain combinations of n events while discarding all other combinations.
This is still wrong. For a system of n BINARY detectors which can only distinguish between photons present or no photons present, the hierarchy of all orders of Gn takes all detections into account. What is not considered is the event of (for example) one single detector being hit by four photons simultaneously. This is trivial, as the detector will give the same response as if it were hit by only one photon, and there is no four-photon detection event. This is also why one always needs a good theory of the detector at hand. The events a detector reacts to of course also define where the threshold of non-classical behavior lies. This is also why one needs to do all the math again and get a new criterion for non-classicality when using a different kind of detector. Physics is already WAY beyond these simple binary detectors, although they are still widely used because they are well characterized. In practice, e.g. when using SPADs, one must of course operate in a regime where the probability of additional photons arriving during the detector dead time - the photons which cannot make the detector fire - is small. But that can be quantitatively assessed. It changes the bound of where non-classical behavior starts. This is why - I stress it again - detector theory is of the highest importance.

This operation (or transition from all event counts in 2.60 to filtered counts in 2.61), whatever its practical usefulness and rationale, reduces n^n terms to n! terms and is explicitly non-local.
Non-local? If you insist on using more than one detector and having them spacelike separated, you can introduce non-local stuff if you want to. This is still irrelevant for antibunching, which can even be seen with one detector alone, including photon-number resolving ones. For photon-number resolving detectors you do NOT reduce the terms. If you detect 4 photons, these give you 6 two-photon detections. See, for example, Optics Express, Vol. 18, Issue 19, pp. 20229-20241 (2010) (free) for an example of how such a detector works. That paper discusses only classical light, but it is a reasonable introduction. It even includes a picture of the bare counts that you seem to be so interested in.

You keep going off on tangents about applied usefulness and rationales for the term dropping, which are all good and fine but have nothing to do with the point being made -- Gn() is not what elementary QM MT assumes it to be, namely the mere combination of n local, independent detection events. Gn() is obviously not that kind of quantity but a non-locally extracted function.
Only if you insist on using only technology older than - say - 2000, maybe 1995 (and even there I do not agree). The Gn() derived by Glauber in that paper is ONLY and explicitly for binary detectors - a good model for SPADs. The terms dropped are dropped because the detector is not responsive to them. That is all.

Of course it is; it removes the contribution of vacuum photons, i.e. it operationally corresponds to the subtraction of background/dark counts due to vacuum/ZPF photons.
No, the normal ordering forces you to commute operators once, so that <:n^2:> becomes (for equal times) <n(n-1)>. Normal ordering just takes the destruction of the first photon upon detection into account. You would not use normal ordering for non-destructive weak photon number measurements or even photon amplifiers.
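A quick numerical check of that equal-time identity, in a truncated Fock basis (just a sketch with plain numpy; nothing detector-specific is being modeled here):

```python
import numpy as np

D = 12                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, D)), 1)   # annihilation operator: a|n> = sqrt(n)|n-1>
ad = a.conj().T                            # creation operator
n = ad @ a                                 # number operator

# Equal-time normal ordering: <:n^2:> uses a†a†aa, which equals n(n-1).
print(np.allclose(ad @ ad @ a @ a, n @ (n - np.eye(D))))   # True

# Vacuum expectations: <0|a†a|0> = 0 and <0|a†a†aa|0> = 0, while the
# anti-normally ordered <0|a a†|0> = 1 is the vacuum term that normal
# ordering avoids by construction rather than by subtracting recorded counts.
vac = np.zeros(D); vac[0] = 1.0
print(vac @ n @ vac, vac @ (ad @ ad @ a @ a) @ vac, vac @ (a @ ad) @ vac)
```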

Are you actually taking the opposite position -- that Gn() is a mere combination count of n events, each obtained locally and independently (i.e. without any regard for the events on the other n-1 locations), as QM MT takes it to be (at least for derivation of BI)?

How can that position be consistent with the subtractions corresponding to discarded (n^n-n!) terms, the factors of which are specific combinations of n absorption events at n locations? The QM independence assumption obviously doesn't hold for the Gn() defined in QED as non-locally filtered function.
Gn is a quantity which is designed to be related to experiments. Therefore, Gn is whatever the detector in question allows it to be. If I use a single photon-number resolving detector to investigate antibunching, that is pretty local. If I distribute 30 SPADs all over the world, it is not.

I don't have access to paywalled physics papers at my current job (I work as a researcher in the computer industry, not in academia). If you attach a copy I can check it out. If it doesn't explicitly provide non-filtered counts, or is silent altogether on the subject of loopholes (i.e. how exactly it gets around and defeats the ZPF-based classical models), you don't need to bother; I have seen plenty of that kind that preach to the choir.
They certainly do not explicitly discuss ZPF based models. Nobody does that. They show the directly measured g2 without background subtraction and other corrections for accidental counts. The NComms might be freely available. A version of the PRL might be on the ArXiv.

G2() or Gn() are derived for a general field, not just for the n-photon number state in the case of Gn(), as you seem to suggest above. The n in Gn() has nothing to do with the field state it is averaged over (which can be any state, such as a photon number state, a coherent state, etc.).
Ehm, no. You suggested that Gn depends on the field by saying "for G2() measured on 1 photon state, you have to drop all events in which both detectors or no detectors trigger (since that would be a 2 photon or 0 photon state which "we are not interested in")". For G2 using SPADs you have a look at when both detectors fire (which is usually never for a single photon state). You also do not really drop the other terms as you need them for normalization. For G2 using other detectors you have a look at photon pairs or two-photon combinations, depending on what your detector does.

G2() is 0 for single-photon states (e.g. Glauber's lectures, sec. 2.6, pp. 51-52), which is what Thorn et al. were claiming to demonstrate experimentally (see their paper; note that they use the normalized correlation function g2 rather than the non-normalized G2 discussed here).
Of course they use g2 instead of G2. How would you know that your variance is below that of a Poissonian distribution without having the proper normalization? G2 alone does not indicate non-classical behavior. g2 does.

As noted, this is tautologically so, by virtue of the subtraction of events with the wrong number of photons, hence there is nothing to demonstrate experimentally. Their entire experiment is thus based on a misreading of what QED predicts. There is no prediction of G2() = 0 on non-filtered data, as they imagine, since G2() is by its basic definition a non-locally filtered function (extracted from all events on 2 detectors via non-local subtractions, cf. 2.60 -> 2.61).
For a single photon state G2 is always predicted to be 0, no matter what you do. You detect the photon once. It cannot be detected again. More importantly g2 is predicted to be 0, too. Even in the presence of noise, it will be <1 as long as noise is not dominant. This stays true, if you do not drop terms in Gn.

That's ridiculous even for a strawman, since such a detector would fail to show the critical interference aspects of the same setup. The combination of the exclusivity of g2<1 and interference (if one chooses to remove the detector from each path and put one at some point of overlap) is what makes the claim non-classical/paradoxical. Exclusivity on its own or interference on its own are trivially classical properties (e.g. of classical particles or waves). The SED detectors do function, although they may show some characteristics that differ from those of a particular physical detector.
But this is what the hypothetical detectors in the "counterexamples" do. They predict an insane g1, which is what governs the interference. I must emphasize again that Gn always goes along with the detector in question. Of course you can design detectors that give gn<1 for classical light fields. That is trivial. It also completely misses the point that g2<1 is a non-classicality criterion when using a certain kind of detector. The task is to model a classical light field that beats the assumed non-classicality criterion for a given detector, not to design a detector that gives a response having a different non-classicality criterion.

After all, no one knows how nature computes it all anyway. The algorithms that make up our current physics are merely transient, coarse-grained approximations of some aspects of the overall pattern, which includes life forms and intelligent beings trying to figure it out, being computed by the chief programmer of the universe.
I disagree. I am quite sure the NSA already eavesdropped on the chief programmer.
 
  • #33
187
0
Sorry, I am still not interested in BI, but maybe your point is of interest to others.
You brought it up and repeated their claim.


This is still wrong. For a system of n BINARY detectors which can only distinguish between photons present or no photons present, the hierarchy of all orders of Gn takes all detections into account. What is not considered is the event of (for example) one single detector being hit by four photons simultaneously.
But if you have an n-photon state, then a 2-photon absorption at location 1 implies there will be a zero-photon absorption at some other location 2 (by the pigeonhole principle), i.e. the combinations of detections eliminated include a double hit on detector 1 and no hit on detector 2. That is by definition non-local filtering, i.e. even though detector 1 is treated as a binary detector, that trigger event on detector 1, which appears as a perfectly valid trigger on 1, doesn't contribute to Gn() since there was also a missing trigger on remote detector 2.

Hence the contributions counted in Gn() are specific global combinations of n triggers/non-triggers. In other words, it is obvious that Gn() is not a simple collation of n independent counts from n locations, but a non-locally filtered function which keeps or rejects particular global combinations of triggers.

Whatever you wish to call such a procedure (since you keep quibbling about the term 'non-local filtering'), one cannot apply the probability factorization rule to the resulting Gn() functions, and hence derive BI by assuming that the correlation functions are merely collated counts from independently obtained events at n locations. Glauber's filtering procedure 2.60 -> 2.61, which removes by hand (n^n - n!) terms from all the events at n locations that the full dynamical evolution yields in eq. 2.60 (whether "we are interested in" them or not is irrelevant for this point), results in the Gn() of (2.61), which is tautologically non-classical or non-local not because of any strange physical non-locality or non-classicality, but by the mere choice to drop the terms which depend on detection events at remote (e.g. space-like separated) locations.

The predictions of the unmodified eq. (2.60), i.e. the raw unfiltered counts, cannot show anything non-local (they all evolve via local field equations and all space-like separated operators commute, hence that is already a plain local-realistic model of the setup all by itself). But those of (2.61) certainly can appear to do so, if one ignores that they are not predictions about n independent events at n locations but about a globally filtered subset of all events. Hence, to replicate the unfiltered counts of (2.60) one can legitimately require from a local realist model (one other than the field equations, which are also a local realist model) the independence of detections and thus probability factorization; but for those of (2.61), hence of Gn(), the factorization requirement is illegitimate (since any classical model can violate factorization if allowed to do similar non-local subtractions).

No, the normal ordering forces you to commute operators once, so that <:n^2:> becomes (for equal times) <n(n-1)>. Normal ordering just takes the destruction of the first photon upon detection into account. You would not use normal ordering for non-destructive weak photon number measurements or even photon amplifiers.
The normal ordering does remove vacuum photon contributions; e.g. check Glauber's book, pdf page 23, sec. 1.1, the expression <0| E^2 |0> != 0, where he explicitly calls these "zero point oscillations" (ZPF in SED). The operational counterpart of this normal ordering requirement is the subtraction of dark counts from the contributions to the Gn() counts.

While these are all fine procedures as far as QO applications and the extraction of Gn() from count data go, one has to understand that this is done, and know the exact values, in non-locality/non-classicality experiments. One cannot assume that the reported Gn() are obtained without such subtraction and legitimately require that any local/classical model replicate such an "ideal" subtraction-free case, when these subtractions are mandated by the definition of Gn() (via the normal ordering, along with the other subtractions introduced in the n-photon chapter) and are included in the standard QO counting procedures.


Ehm, no. You suggested that Gn depends on the field by saying "for G2() measured on 1 photon state, you have to drop all events in which both detectors or no detectors trigger (since that would be a 2 photon or 0 photon state which "we are not interested in")".
I am saying that the 'n' in Gn() is unrelated to the photon number of the field state, which you seemed to conflate in a few places. The value of the Gn() function depends, of course, on the field state. But the 'n' in Gn() does not imply anything about what kind of field state rho you can use for the expectation value.

For a single photon state G2 is always predicted to be 0, no matter what you do. You detect the photon once. It cannot be detected again. More importantly g2 is predicted to be 0, too. Even in the presence of noise, it will be <1 as long as noise is not dominant. This stays true, if you do not drop terms in Gn.
Gn() is by definition filtered (i.e. it is defined after dropping the terms in 2.60 -> 2.61, plus requiring normal ordering, which drops additional counts). There is no unfiltered Gn().

There are unfiltered independent/raw counts from n locations, but that is a different quantity from Gn(), since the raw counts correspond to eq. (2.60), before the subtractions which define Gn from (2.61) onward. Once you start choosing whether to drop a result on detector A based on whether it combines as a contribution to Gn() in the "right" or "wrong" way with a result on the remote detector B, all classicality is out the window. You can get g2<1 or any other violation of 'classicality' just for the taking. But there is nothing genuinely non-local or non-classical about any such 'non-classicality' conjured via definitions.

I disagree. I am quite sure the NSA already eavesdropped on the chief programmer.
That's pretty funny. Although I think the computing technology of the CPoU is about 10^80 times better than the NSA's best: assuming Planck-scale gates, his gear works at a linear scale at least 10^20 times smaller than gates made from protons (the smallest we can possibly go with our gates), which means 10^60 times more gates per unit volume than their best, plus a clock that is 10^20 times faster (due to the 10^20 times smaller distances), yielding a net edge of 10^80 times more powerful computing gear for the CPoU. How would they hope to outwit someone at least 10^80 times quicker/smarter? Plus he's got a super-smart bug in every atom of every computer or communication device they own.
 
  • #34
Cthugha
Science Advisor
1,956
319
But if you have an n-photon state, then a 2-photon absorption at location 1 implies there will be a zero-photon absorption at some other location 2 (by the pigeonhole principle), i.e. the combinations of detections eliminated include a double hit on detector 1 and no hit on detector 2. That is by definition non-local filtering, i.e. even though detector 1 is treated as a binary detector, that trigger event on detector 1, which appears as a perfectly valid trigger on 1, doesn't contribute to Gn() since there was also a missing trigger on remote detector 2.
Not even though it is a binary detector, but because it is a binary detector. As already explained several times, these terms do contribute to Gn if you use photon-number resolving detectors. Also, the missed events are not relevant for photon number states. You know what to expect for classical states and how many events you miss, as the "best" or lowest-noise result you can get is that of statistically independent photons (a second-order coherent beam). The number of those events is governed by the variance and the mean photon number of your light field, so you can find out whether your variance is below that of a coherent field, even when using binary detectors.
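As a toy illustration of that point, here is a minimal Monte Carlo sketch (my own idealized model: a pulsed source, a 50/50 splitter and two lossy binary detectors; raw clicks only, no background subtraction or any other correction) showing that the click statistics alone already separate a single-photon source from coherent and thermal light:

```python
import numpy as np
rng = np.random.default_rng(1)

def click_g2(photons_per_pulse, eta=0.6):
    """Pulsed HBT with two BINARY (click/no-click) detectors behind a 50/50
    splitter. g2(0) is estimated as p(A&B)/(p(A)*p(B)) from raw clicks,
    with no subtractions or corrections anywhere."""
    detected = rng.binomial(photons_per_pulse, eta)  # lossy detection, efficiency eta
    to_a = rng.binomial(detected, 0.5)               # 50/50 beam splitter
    to_b = detected - to_a
    A, B = to_a > 0, to_b > 0                        # binary detectors saturate at "click"
    return (A & B).mean() / (A.mean() * B.mean())

pulses, mu = 500_000, 0.2
print(click_g2(np.full(pulses, 1)))                       # single-photon source -> 0
print(click_g2(rng.poisson(mu, pulses)))                  # coherent light       -> ~1
print(click_g2(rng.geometric(1 / (1 + mu), pulses) - 1))  # thermal light -> ~1.9 (2 in the weak-field limit)
```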

Hence the contributions counted in Gn() are specific global combinations of n triggers/non-triggers. In other words, it is obvious that Gn() is not a simple collation of n independent counts from n locations, but a non-locally filtered function which keeps or rejects particular global combinations of triggers.
Just for binary detectors. However, although that is the favorite point of BI-deniers, it is irrelevant for antibunching.

Whatever you wish to call such a procedure (since you keep quibbling about the term 'non-local filtering'), one cannot apply the probability factorization rule to the resulting Gn() functions, and hence derive BI by assuming that the correlation functions are merely collated counts from independently obtained events at n locations. Glauber's filtering procedure 2.60 -> 2.61, which removes by hand (n^n - n!) terms from all the events at n locations that the full dynamical evolution yields in eq. 2.60 (whether "we are interested in" them or not is irrelevant for this point), results in the Gn() of (2.61), which is tautologically non-classical or non-local not because of any strange physical non-locality or non-classicality, but by the mere choice to drop the terms which depend on detection events at remote (e.g. space-like separated) locations.
That is still irrelevant for antibunching.

The normal ordering does remove vacuum photon contributions; e.g. check Glauber's book, pdf page 23, sec. 1.1, the expression <0| E^2 |0> != 0, where he explicitly calls these "zero point oscillations" (ZPF in SED). The operational counterpart of this normal ordering requirement is the subtraction of dark counts from the contributions to the Gn() counts.
There is no subtraction of dark counts as (to quote from Glauber's book) "We may verify immediately from Eq. (1.12) that the rate at which photons are detected in the empty, or vacuum, state vanishes.".

While these are all fine procedures as far as QO applications and the extraction of Gn() from count data go, one has to understand that this is done, and know the exact values, in non-locality/non-classicality experiments. One cannot assume that the reported Gn() are obtained without such subtraction and legitimately require that any local/classical model replicate such an "ideal" subtraction-free case, when these subtractions are mandated by the definition of Gn() (via the normal ordering, along with the other subtractions introduced in the n-photon chapter) and are included in the standard QO counting procedures.
Classical models must be designed for a specific detector. As I have already pointed out several times, the terms you keep in Gn are tied to the specific detector, too.

I am saying that the 'n' in Gn() is unrelated to the photon number of the field state, which you seemed to conflate in a few places.
Ehm, no. I never said that.

The value of the Gn() function depends, of course, on the field state. But the 'n' in Gn() does not imply anything about what kind of field state rho you can use for the expectation value.
Of course not.

Gn() is by definition filtered (i.e. it is defined after dropping the terms in 2.60 -> 2.61, plus requiring normal ordering, which drops additional counts). There is no unfiltered Gn().
Again: This is the definition of Gn for BINARY detectors only. You seem to imply that people are not aware of the limitations of their setups. This is incorrect. A good treatment of n binary detectors is for example given in Phys. Rev. A 85, 023820 (2012) (http://arxiv.org/abs/1202.5106), which explicitly discusses that the non-classicality thresholds for one detector may be very different than for a different detector. They also very quickly hint at Bell measurements in a sidenote. Explicit non-classicality criteria and their verification have been presented in follow-up publications. My point stays: Knowing your detector is of critical importance.

You can get g2<1 or any other violation of 'classicality' just for the taking. But there is nothing genuinely non-local or non-classical about any such 'non-classicality' conjured via definitions.
That is still off. Of course g2<1 demonstrates non-classicality (for some detectors). Not non-locality, though, but that is not what antibunching is about. Is there any reason why you did not respond at all to the important part of my last post, in which I explained why you do not drop terms in Gn when your detector allows you to keep them?
 
Last edited:
  • #35
stevendaryl
Staff Emeritus
Science Advisor
Insights Author
8,407
2,594
If there is an absolute LHV prohibition by the empirically established facts, there is a mystery in the double slit. Otherwise the empirical facts of the double slit experiment, standing on their own, are perfectly compatible with a local model.
As I said, "compatible with a local model" does not mean that there is no mystery. Is there a plausible local model that is consistent with the predictions of quantum mechanics?

The usual pedagogical presentation a la Feynman is misleading with its exclusivity claim (which you seem to have bought). Even though Feynman presented it in the early 1960s as a major mystery (based on von Neumann's faulty no-LHV theorem), in fact even a recent experiment trying to show the above exclusivity had to cheat to achieve that appearance, as discussed in an earlier PF thread.
I'm certainly open to the possibility that the loopholes for local variables have not all been closed. But that's very different from saying that there is a viable, plausible LHV model.


A "cheap" (Einstein's characterization) LHV model is the de Broglie-Bohm theory (it is LHV theory if you reject the no-go claims for LHV theories or the instantaneous wave function collapse since both claims lack empirical support).
Yes, certainly the arguments against LHV are assuming interaction speeds limited by lightspeed. Without that limitation, you can reproduce the predictions of quantum mechanics.

A much better theory of that type is Barut's Self-Field Electrodynamics, which replicates the high-precision QED results (radiative corrections to order alpha^5, which was as far as they were known at the time, in the early 1990s; Barut died in 1995). That was also discussed in the above thread; a related SFED survey post written around the same time, with detailed citations, is on sci.phys.research.
And that predicts the results of EPR-type twin particle experiments, as well?
 
  • #36
126
5
If wave-particle duality exists for all classical particles, atoms, electrons, photons and so on, then my question is: has this duality been observed in sub-atomic particles, mesons and so on? If not, does that mean they are the only 'particles' known that do not display this duality?

And what does that mean?

How can you detect a photon before the slits without destroying it first?

Could it be argued that a photon simply does not exist between its emission and its detection in our physical universe, in that it experiences no time and therefore no speed or distance? If you were a photon, you would not know you had travelled 13 billion light years; I think you would just begin to exist and then instantly cease to exist.
 
