I don't see what's weird about the double slit experiment

In summary, the double slit experiment is considered weird and mysterious because it challenges our traditional understanding of particles and waves. The fact that a single electron can exhibit both wave-like and particle-like behavior, and that its behavior can be influenced by the setup of the experiment, is considered strange and fascinating. It also raises questions about the fundamental nature of particles and the role of observation in altering their behavior.
  • #1
cataclysmic
I've been reading & watching videos about the double slit experiment, and I'm failing to see what's so interesting & strange about it (I hope someone can enlighten me on what I'm missing).

The weirdness is NOT the fact that observing an electron changes its behaviour (right?)

At first I thought the strangeness was that simply observing a single electron could make it go through one slit instead of two (i.e. simply looking at a particle can change its behaviour - wow spooky!). But I recently learned, if I understand correctly, that there's nothing spooky about that at all - it's simply that in order to observe an electron you need to fire a photon at it, and the photon is energy which of course alters the electron in some way. Right? Simple and logical - not weird at all.

Is the weirdness the fact that a single electron goes through both slits?

The only other potential weirdness I can see is that single electrons fired one at a time at the two slits cause an interference pattern. This is a bit weird because the only explanation seems to be that each single electron somehow goes through 2 slits and interferes with itself. But is that actually weird? It doesn't seem any weirder to me than the fact that light sometimes acts like a particle and sometimes like a wave. If light can do it, why not an electron? Why is it so strange that a single electron acts like a wave and goes through both slits? It doesn't seem that weird and mysterious to me.
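For concreteness, here is a minimal numerical sketch (all numbers made up, idealized point-like slits, far-field geometry) of what is at stake in that sentence: adding the two path amplitudes, as the wave picture demands, gives fringes, while adding the two single-slit intensities, as a naive particle picture would, gives a flat distribution.

```python
# Far-field double-slit pattern: a minimal toy sketch (idealized point slits,
# assumed wavelength and geometry, not any specific experiment).
import numpy as np

wavelength = 50e-12                    # ~ a slow-electron de Broglie wavelength (assumed)
d = 1e-6                               # slit separation (assumed)
L = 1.0                                # distance to the screen (assumed)
x = np.linspace(-2e-3, 2e-3, 1001)     # positions on the screen

k = 2 * np.pi / wavelength
phi = k * d * x / L                    # relative phase between the two paths
psi1 = np.ones_like(x, dtype=complex)  # amplitude via slit 1
psi2 = np.exp(1j * phi)                # amplitude via slit 2

wave_picture = np.abs(psi1 + psi2) ** 2                    # amplitudes add -> fringes
particle_picture = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # intensities add -> flat

print(wave_picture.min(), wave_picture.max())          # ~0 ... ~4
print(particle_picture.min(), particle_picture.max())  # 2 ... 2
```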

I'm sure there probably IS something weird/mysterious/fascinating about the double slit experiment since everyone says there is and thinks it's a big deal, so if someone can help me to understand what that is I'd appreciate it.
 
  • #2
Well..

The thing is, you get an interference pattern, something you expect from waves, not from particles (hence wave/particle duality). Back then, we did not know that massive particles had wave properties. The second weird thing is that when you put a detector on one of the two slits, the interference pattern disappears. This doesn't have to be done with a photon (as far as I've heard); it could just as well be through a changing electric field.

The weirdness of the single electron going through both slits:
First you need to, yet again, know that we didn't expect this behaviour from electrons. Physics back then was deterministic; this experiment changed that view.

And photons are massless particles, electrons aren't... But the double slit has always confused me a little bit, so I'd wait for the posts of other members ;)
 
  • #3
There have been versions of the double slit experiment that do not disturb the wavefunction, and simply the potential availability of information about the electron's flight path changes its wave-like (particle-like) behavior.
It's wrong to think of electrons as particles at all times, and it's even more wrong to assume elementary particles are small bullets. It's possible to get partial information about what electrons are doing when not being measured through weak measurements, and you'd be surprised what they seem capable of doing.

Is the weirdness the fact that a single electron goes through both slits?
Aside from its dual nature, a single electron is an undefined concept according to the best knowledge in the field. Its existence cannot be defined outside the measurement environment and this is a very strong hint that electrons and elementary 'particles' are not fundamental.
 
Last edited:
  • #4
cataclysmic said:
I'm sure there probably IS something weird/mysterious/fascinating about the double slit experiment since everyone says there is and thinks it's a big deal, so if someone can help me to understand what that is I'd appreciate it.



If the above didn't help, look into the electron splitting into spin, charge and orbit components moving about separately (wave properties of elementary particles do not prohibit this).


http://www.nature.com/news/not-quite-so-elementary-my-dear-electron-1.10471
 
  • #5
That is an article about quasiparticles in condensed matter, which is quite different from a free electron in a vacuum. In my opinion, the weird part about the double slit experiment is that it works if you fire a single electron (or photon or whatever) at a time.

Also, you don't have to hit the electron with a photon. One could measure the change in the magnetic field. No matter how gentle one tries to make the interaction, wave-like behavior will not be observed if the particular slit can be determined.

Frankly, if it doesn't seem strange it is probably because you have known about it for quite some time. The fact that the Earth goes around the sun doesn't seem weird, but clearly it did for most of the history of humanity. Now we learn this fact from day one so it seems normal.
 
  • #6
Well, whether something is weird or not generally depends on your frame of reference, and that usually changes with your level of knowledge and reflection. I'd like to quote Bohr here:

We are suspended in language in such a way that we cannot say what is up and what is down. The word "reality" is also a word, a word which we must learn to use correctly.

Most of the time, the concept of wave-particle duality seems absurd to many people. But the fact remains: the electron/photon is not a wave and it's not a particle. It's just an electron.

The weirdness is NOT the fact that observing an electron changes its behaviour (right?)
To my mind, the weirdness of the whole wave-particle duality is the fact that it is sufficient for a setup where two paths are open to the electron to exist. If it exists, the electron will show interference.
This of course is not the case with classical waves, where a wave is a wave. Period.
If the dual path enabling interference is removed, even before the electron enters the apparatus, we can be sure that particle behavior will be observed.

I'd refer you to Wheeler's delayed choice experiment: http://en.wikipedia.org/wiki/Wheeler%27s_delayed_choice_experiment

So the bottom line: it's marvelous because the nature of the apparatus is sufficient to decide which type of behavior the electron will show.
 
  • #7
cataclysmic said:
Is the weirdness the fact that a single electron goes through both slits?

The only other potential weirdness I can see is that single electrons fired one at a time at the two slits cause an interference pattern. This is a bit weird because the only explanation seems to be that each single electron somehow goes through 2 slits and interferes with itself. But is that actually weird? It doesn't seem any weirder to me than the fact that light sometimes acts like a particle and sometimes like a wave. If light can do it, why not an electron? Why is it so strange that a single electron acts like a wave and goes through both slits? It doesn't seem that weird and mysterious to me.
Well, if you don't find it weird that light behaves in that way, then, as you said, the similar behavior of the electron is not any weirder. But people usually DO find it weird that light behaves that way.
 
  • #8
cataclysmic said:
Is the weirdness the fact that a single electron goes through both slits?

The only other potential weirdness I can see is that single electrons fired one at a time at the two slits cause an interference pattern. This is a bit weird because the only explanation seems to be that each single electron somehow goes through 2 slits and interferes with itself. But is that actually weird? It doesn't seem any weirder to me than the fact that light sometimes acts like a particle and sometimes like a wave. If light can do it, why not an electron? Why is it so strange that a single electron acts like a wave and goes through both slits? It doesn't seem that weird and mysterious to me.

I'm sure there probably IS something weird/mysterious/fascinating about the double slit experiment since everyone says there is and thinks it's a big deal, so if someone can help me to understand what that is I'd appreciate it.

You are right; there is nothing weird about a single electron going through both slits, IF you treat the electron as a wave:

[Animation: DoubleSlitExperiment_secondspace_2013-01-12.gif]


And it’s also obvious that if we block one of the slits (with a measurement), it will be impossible to create interference, right? And this of course also works for ‘ordinary’ water:

https://www.youtube.com/watch?v=egRFqSKFmWQ


Now, when it comes to a single electron, things get a bit more interesting in the quantum world:

https://www.youtube.com/watch?v=ZJ-0PBRuthc


As you see, the single “electron wave” manifests itself as a localized dot on the detector plane. I guess you can imagine the problem of getting a water wave to act this way, right? And this weirdness goes for single electron (waves) also.
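As a toy illustration of this (arbitrary units, fringe spacing chosen by hand), one can sample single detection positions from a two-slit probability density: each run yields one localized dot, yet the accumulated histogram reproduces the fringes.

```python
# Single detections sampled from a two-slit probability density: every electron
# lands at one point, but the histogram of many dots shows fringes.
# Toy model, arbitrary units; envelope and fringe period chosen by hand.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 2001)            # screen coordinate
envelope = np.exp(-x**2 / 4)            # assumed single-slit diffraction envelope
p = envelope * np.cos(np.pi * x)**2     # ~ |psi1 + psi2|^2, up to normalization
p /= p.sum()

hits = rng.choice(x, size=50_000, p=p)  # one localized dot per "electron"

counts, edges = np.histogram(hits, bins=80)
print(counts)                           # fringe structure emerges only after many dots
```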

But if you have a simple explanation, please let us know. (quickly!)
 
Last edited by a moderator:
  • #9
DevilsAvocado said:
Now, when it comes to a single electron, things get a bit more interesting in the quantum world:

https://www.youtube.com/watch?v=ZJ-0PBRuthc


As you see, the single “electron wave” manifests itself as a localized dot on the detector plane. I guess you can imagine the problem of getting a water wave to act this way, right? And this weirdness goes for single electron (waves) also.

But if you have a simple explanation, please let us know. (quickly!)


There is nothing that would surprise a 19th century physicist about such a picture. Imagine the aftermath of a strong wind in a forest -- there would be a few scattered trees knocked down even though the wind came in continuous waves over the entire forest. Similarly, if you watch crystallization of a uniform solution, the resulting crystals will introduce non-uniform lumps.

That aspect is not the surprise (except perhaps to mesmerized physics undergrads). The surprise is supposed to be in an entirely different aspect of the phenomenon (not the mere granularity or symmetry breaking by the detection process), as discussed in depth in the "Photon 'Wave Collapse' Experiment" PF thread. That was a "pedagogical" experiment in which they cheated to demonstrate the "surprising" aspect. (Un)fortunately, no one has yet shown that kind of experiment to behave in a genuinely surprising (non-classical) way without non-local filtering/post-selection (which misses the point of the "surprise").

The only decisive non-classicality experiment is the genuine violation of Bell inequalities, and even that doesn't work after half a century of attempts, despite the few recent claims of loophole free violations (both had anomalies in the experiments because of using Eberhard's inequality without verifying that their event space matches his model; it doesn't due to multiple PDC pairs evident in the anomaly).
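For readers trying to follow what is being disputed here, and without taking a side, this is the textbook CHSH arithmetic in question: the standard QM prediction for the singlet state, E(a, b) = -cos(a - b), combined at the conventional analyzer angles, exceeds the local-realist bound of 2.

```python
# CHSH combination for the textbook singlet-state correlation E(a, b) = -cos(a - b).
# Local hidden-variable models obey |S| <= 2; the QM prediction below is 2*sqrt(2).
import numpy as np

def E(a, b):
    return -np.cos(a - b)

a, a2 = 0.0, np.pi / 2            # Alice's two analyzer angles (conventional choice)
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyzer angles

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))     # both ~ 2.828, above the classical bound of 2
```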
 
Last edited by a moderator:
  • #10
nightlight said:
There is nothing that would surprise a 19th century physicist about such a picture.

Having slight trouble following the ‘logic’... are you saying that something happened to the 20th century physicists? That made them less intelligent, or what??

Imagine the aftermath of a strong wind in a forest -- there would be a few scattered trees knocked down even though the wind came in continuous waves over the entire forest.

Really?? Could you please tell me where I could find a picture of perfect “wind/forest interference patterns”?? That repeats itself every time...

[Image: Wavepanel.png]



The only decisive non-classicality experiment is the genuine violation of Bell inequalities, and even that doesn't work after half a century of attempts,

So what you’re basically saying is that QM is wrong/false/not working, right??

Unless you have a published paper and Nobel Prize to back up this erroneous claim, it’s nothing but hogwash.

We can agree that your computer (the one you wrote the reply on) is working okay, right? Are you aware that the semiconductor devices in your computer would NOT work if QM were wrong? Or do you have a ‘classical’ explanation for quantum tunneling... banging your head against the wall and sometimes it goes through??

[Animation: EffetTunnel.gif]


https://www.youtube.com/watch?v=K64Tv2mK5h4
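Regarding the tunneling remark above, here is a minimal sketch of the standard rectangular-barrier transmission formula; the barrier height, width and electron energy below are illustrative values only.

```python
# Transmission through a rectangular barrier for E < V0 (standard textbook result):
#   T = 1 / (1 + V0^2 sinh^2(kappa L) / (4 E (V0 - E))),  kappa = sqrt(2 m (V0 - E)) / hbar
# The numbers (1 eV barrier, 1 nm wide, 0.5 eV electron) are illustrative only.
import numpy as np

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
eV   = 1.602176634e-19    # J

V0, E, L = 1.0 * eV, 0.5 * eV, 1e-9
kappa = np.sqrt(2 * m_e * (V0 - E)) / hbar
T = 1.0 / (1.0 + V0**2 * np.sinh(kappa * L)**2 / (4 * E * (V0 - E)))
print(T)   # ~3e-3: classically forbidden, yet a few electrons per thousand get through
```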
 
Last edited by a moderator:
  • #11
DevilsAvocado said:
Having slight trouble following the ‘logic’... are you saying that something happened to the 20th century physicists? That made them less intelligent, or what??

I am saying that the alleged "magic" of discrete detection points is not surprising in the least (a resonance phenomenon with non-linear response). You are missing the point of where the surprise is supposed to be (check the paper linked earlier, the intro section where they explain why the discrete/point-like detections are not surprising and why their "collapse" experiment is needed).

Really?? Could you please tell me where I could find a picture of perfect “wind/forest interference patterns”?? That repeats itself every time...

Ever seen Chladni patterns in acoustics? Or any wave phenomena on water? Even ancient Greeks wouldn't be surprised by that picture.

So what you’re basically saying is that QM is wrong/false/not working, right??

The old QM measurement theory making the "prediction" of BI violations is empirically unfounded. There are actually two quantum measurement theories (MT) -- the older elementary/pedagogical QM MT (due mainly to von Neumann and Lüders in the 1930s) and a newer, more rigorous QED MT developed by Glauber in 1964-5 (ref [1] has his lectures, which is the most detailed derivation and exposition; I will cite from the reprint of the lectures in [2]). Bell and other QM 'magicians' and popularizers use QM MT, while the actual Quantum Opticians, including experimenters, use Glauber's QED MT since that fits better with what they observe.

The principal difference between the two MT-s is how they deal with measurement on composite systems.

In QM MT, such measurement is postulated via the 'projection postulate' and an 'ideal apparatus' which operationally implements such projection in the form of multiple, local, independent subsystem measurements (each a projection, too, but local). Hence, in QM MT, measuring a composite system observable S1xS2 is postulated, for the "ideal apparatus", to consist of two independent, local measurements: S1 on system #1 and S2 on system #2.

In contrast, Glauber's QED MT doesn't postulate anything about measurement on composite system but derives the composite measurement via dynamical treatment, by considering interaction of multiple point detectors (single atoms which ionize) with quantized EM field. In terms of Heisenberg-von Neumann cut between 'classical apparatus' and 'quantum system', Glauber puts the cut after the photo-electrons in each detector (as a classical measurement of resulting electric current) while treating everything below that, from photo-electrons through quantized EM field, dynamically. Hence, the only "measurement" QED MT postulates is the plain, non-controversial local projection (fully local, single electron detection).

After deriving general expression for such interaction ([2], sect. 2.5, eq. 2.60, pp. 47-48; or in [1], p. 85, eq. 5.4), in order to obtain n-point transition probabilities, his correlation functions Gn(x1,x2,...xn), Glauber explicitly drops the terms "we are not interested in", which includes dropping of vacuum induced transitions (via operator ordering rule obtained in the single photon detector chapter 2.4 of [2]) and all transitions involving 'wrong' number of photons (i.e. different than n photons he is "interested in" measuring). The operational meaning of his procedure for extracting Gn() from observed data (transition from eqs. 2.60 -> 2.61 in [2] or 5.4 -> 5.5 in [1]) is a non-local recipe for subtractions or filtering of background/dark counts and all 'accidental' and other 'wrong-n' coincidences.

While in QM MT, this non-local filtering is hand-waved away as being result of momentary technological imperfections relative to the "ideal apparatus" (which is postulated to exist), in QED MT the necessity of the non-local filtering is fundamental and is derived from full QED treatment of the measurement process.

Experimental Quantum Optics uses Glauber's QED MT, hence they perform the non-local filtering he prescribed in [1], while physics students are taught the old QM MT, hence they have an entirely wrong idea of what the 'BI violations' actually mean. They (along with QM theoreticians publishing on the subject) are unaware that the operational procedures for extracting these "non-local" QM correlations out of detector events are fundamentally non-local procedures, which makes such QM "non-locality" tautological and unremarkable, i.e. Bell's independence assumption of two measurements of S1 and S2 (used to require factorization of probabilities), which "predicts" violation of the tautological inequalities (already known for many decades before Bell), is not valid in either QED MT or in the actual experimental procedures used to test it.

Hence, the quantum theory (QED MT) and the experiments on BI violations agree perfectly, as they always did and as a good theory and experiment should -- there are no real BI violations, either as a theoretical prediction of QED MT (since the vital factorization of probabilities is precluded due to non-local filtering) or as an experimental fact.

In contrast, the QM MT claims it predicts genuine BI violation and since experimental facts refuse to go along, it is the experiments which are currently "imperfect", falling short of the "ideal apparatus" (it's one of those, 'do you believe me or your lying eyes'). The most peculiar (unique to QM magic) euphemistic language has evolved to 'splain away the persistent experimental failure of the QM MT prediction, in which "loophole free violation" means what in plain language is called simply "violation" (the "loophole free" euphemism makes the admission of failure sound as if the violation was nearly always observed, except for some obscure, rare "loophole unfree" case).

Unless you have a published paper and Nobel Prize to back up this erroneous claim, it’s nothing but hogwash.

Aren't you conceding the defeat a bit early in the discussion by falling back to ad hominem argument?

Can we agree that your computer (that you wrote the reply on) is working okay, right? Are you aware that the semiconductor devices in your computer would NOT work if QM was wrong? Or do you have a ‘classical’ explanation for quantum tunneling... banging the head against the wall and sometimes it goes thru??

There is nothing in the invention or design of this computer that relies on the QM MT composite system projection postulate via independent local measurements. The latter is a parasitic add-on with no distinguishing consequences other than QM non-locality "predictions" that experiments fail to support. Anything that actually gets measured is done via non-local filtering (if it involves separate sub-systems) as prescribed by the QED MT.

References:
[1] R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.

[2] http://www.scribd.com/doc/144406419/Quantum-Theory-of-Optical-Coherence -- reprint of [1] in Glauber's selected papers book "Quantum Theory of Optical Coherence", 2007 (you can download the PDF file for easier reading & searching).
 
Last edited:
  • #12
nightlight said:
The only decisive non-classicality experiment is the genuine violation of Bell inequalities

That is absolutely incorrect. There is antibunching which is completely non-classical. There are negative probabilities in the Wigner quasiprobability distribution if you perform quantum state tomography of many non-classical light fields like Fock states, Schrödinger cat states or squeezed light and there are also signs of their non-classicality in the Glauber-Sudarshan P-distribution.
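As a quick, hedged illustration of the antibunching criterion: computing g2(0) = <n(n-1)> / <n>^2 from photon-number distributions, any classical field gives g2(0) >= 1, while a one-photon Fock state gives 0.

```python
# g2(0) = <n(n-1)> / <n>^2 from photon-number statistics.
# Thermal light -> 2, coherent light -> 1, single-photon Fock state -> 0 (antibunched).
import numpy as np
from math import factorial

def g2_from_pn(p):
    """g2(0) computed from a photon-number distribution p[n]."""
    n = np.arange(len(p))
    return np.sum(n * (n - 1) * p) / np.sum(n * p) ** 2

nmax, nbar = 60, 1.0
n = np.arange(nmax)

coherent = np.array([np.exp(-nbar) * nbar**k / factorial(k) for k in range(nmax)])
thermal  = nbar**n / (1.0 + nbar) ** (n + 1)
fock1    = np.zeros(nmax)
fock1[1] = 1.0                     # exactly one photon

for name, p in [("thermal", thermal), ("coherent", coherent), ("Fock |1>", fock1)]:
    print(name, g2_from_pn(p))     # ~2, ~1, 0
```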

edit:
nightlight said:
After deriving general expression for such interaction ([2], sect. 2.5, eq. 2.60, pp. 47-48; or in [1], p. 85, eq. 5.4), in order to obtain n-point transition probabilities, his correlation functions Gn(x1,x2,...xn), Glauber explicitly drops the terms "we are not interested in", which includes dropping of vacuum induced transitions (via operator ordering rule obtained in the single photon detector chapter 2.4 of [2]) and all transitions involving 'wrong' number of photons (i.e. different than n photons he is "interested in" measuring). The operational meaning of his procedure for extracting Gn() from observed data (transition from eqs. 2.60 -> 2.61 in [2] or 5.4 -> 5.5 in [1]) is a non-local recipe for subtractions or filtering of background/dark counts and all 'accidental' and other 'wrong-n' coincidences.

No, he absolutely does not. Glauber neglects terms which do not conserve energy. This works well unless you go into the ultrastrong coupling regime (see, e.g. Nature Physics 6, 772–776 (2010) by Niemczyk et al.) where you may encounter interaction strengths on the order of one interaction per light cycle. On that scale these processes need to be considered, but this regime is usually only reached for low energies/long cycle durations. That means microwaves and maybe the very far IR. People would love to get there for visible light, but it is pretty much out of reach. This is completely unrelated to subtracting background counts. The transition to 2.61 is also not related to background counts. The terms dropped are multi-photon absorptions, where one atom absorbs many photons. You can also do detector theory for n-photon absorption of photons which in sum have the necessary energy for the transition in question, but there is no point in discussing these in introductory texts.
 
Last edited:
  • #13
Cthugha said:
That is absolutely incorrect. There is antibunching which is completely non-classical. There are negative probabilities in the Wigner quasiprobability distribution if you perform quantum state tomography of many non-classical light fields like Fock states, Schrödinger cat states or squeezed light and there are also signs of their non-classicality in the Glauber-Sudarshan P-distribution.

The negative probabilities in photon experiments have been modeled in Stochastic Electrodynamics since the 1970s. They are "non-classical" only in a narrow technical/tautological sense of "classical" meaning with zero fields as initial and boundary conditions. If you include ZPF (zero point field, equivalent in energy to 1/2 photon per mode), the negative probabilities arise in locations where energy densities become lower than ZPF (e.g. due to interference with signal fields). To a detector calibrated not to trigger (or have low probability of trigger) below ZPF, such defects are non-events indistinguishable from ZPF/vacuum non-events, yet they are in phase with energy excesses in other places and the two types can thus still interfere with each other. Any such 'negative probability' phenomena can be trivially simulated via localized energy density defects relative to the ZPF level. Of course, experiments subtract the background & accidental coincidences in a non-local manner, hence there cannot be any claim (or factorization of probabilities assumption) of non-locality based on such explicitly non-local operational procedures.

No, he absolutely does not. Glauber neglects terms which do not conserve energy.

Read the cited material, e.g. in [2] the transition between eqs. (2.60) and (2.61) where he explains what he is doing:

Many of these terms, however, have nothing to do with the process we are considering, since we require each atom to participate by absorbing a photon once and only once. Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.

The entire derivation until that point is a fully rigorous dynamical treatment, in the interaction picture, of the quantized EM fields interacting with n atoms, hence there is nothing that preceded eq. (2.60) that would have violated "energy conservation" and that he needs to fix before going to eq. (2.61) by some ad hoc non-local subtractions or filtering of the terms derived (as you claim he is doing). The dynamical evolution described does not violate energy (or any other) conservation, hence nothing needs to be done to maintain or recover such conservation. There are no extra terms that need to be removed to uphold "energy conservation"; the dynamical evolution equations are doing all that just fine.

As he plainly explains, he is simply looking at how to extract from all the events that happen at these n points, contained in the equations derived rigorously, the properties he is interested in, the n-point absorption (detection) processes. Hence, he proposes the non-local filtering of any other events that contradict such n-point absorptions and the resulting expressions obtained after subtractions of the "noise" (the stuff he is "not interested in", but that happens anyway, by the equations derived until 2.60) are the "signal" expressed via his correlation functions Gn().
 
  • #14
nightlight said:
The negative probabilities in photon experiments have been modeled in Stochastic Electrodynamics since the 1970s. They are "non-classical" only in a narrow technical/tautological sense of "classical" meaning with zero fields as initial and boundary conditions. If you include ZPF (zero point field, equivalent in energy to 1/2 photon per mode), the negative probabilities arise in locations where energy densities become lower than ZPF (e.g. due to interference with signal fields). To a detector calibrated not to trigger (or have low probability of trigger) below ZPF, such defects are non-events indistinguishable from ZPF/vacuum non-events, yet they are in phase with energy excesses in other places and the two types can thus still interfere with each other. Any such 'negative probability' phenomena can be trivially simulated via localized energy density defects relative to the ZPF level. Of course, experiments subtract the background & accidental coincidences in a non-local manner, hence there cannot be any claim (or factorization of probabilities assumption) of non-locality based on such explicitly non-local operational procedures.

The negative probabilities occur in the quasiprobability distribution in phase space and are therefore not necessarily bound to specific location. The discussion of a detector triggering or not is pointless. The way to measure these fields is balanced homodyne detection, where you amplify the field of interest by mixing it with a strong local oscillator to very classical values and afterwards get the signal field in terms of the difference current between the two mixed beams at very classical detectors. You do not need detectors working at the single photon level to measure that.
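A toy numerical sketch of that idea, under purely Gaussian statistics and with a made-up local-oscillator amplitude and squeezing parameter: the difference current scales the signal quadrature up by the local-oscillator amplitude, so ordinary classical-level detectors still recover the quadrature variance (here 1/2 for vacuum in the X = (a + a†)/sqrt(2) convention, and e^{-2r}/2 for a squeezed quadrature).

```python
# Toy balanced-homodyne model (Gaussian statistics only, assumed parameters):
# difference photocurrent ~ sqrt(2) * |alpha_LO| * X, so its variance, divided by
# 2 * |alpha_LO|^2, recovers the quadrature variance of the weak signal field.
import numpy as np

rng = np.random.default_rng(1)
alpha_lo = 1e4        # strong local oscillator amplitude (assumed, arbitrary units)
shots = 100_000
r = 1.0               # assumed squeezing parameter

for label, var in [("vacuum", 0.5), ("squeezed quadrature", 0.5 * np.exp(-2 * r))]:
    X = rng.normal(0.0, np.sqrt(var), shots)          # sampled signal quadrature
    i_diff = np.sqrt(2) * alpha_lo * X                # difference photocurrent (toy model)
    print(label, np.var(i_diff) / (2 * alpha_lo**2))  # recovers the quadrature variance
```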

Also I do not understand why you discuss non-locality at the end when this is about non-classicality. There is non-classicality without non-locality.

nightlight said:
Read the cited material, e.g. in [2] the transition between eqs. (2.60) and (2.61) where he explains what he is doing:

When and how to neglect the negative frequency operator is already discussed around (2.40). He does not go into rotating wave approximation at this point, but he does so regularly in his book, starting exactly around (2.61).

nightlight said:
Many of these terms, however, have nothing to do with the process we are considering, since we require each atom to participate by absorbing a photon once and only once. Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.

The entire derivation until that point is a fully rigorous dynamical treatment, in the interaction picture, of the quantized EM fields interacting with n atoms, hence there is nothing that preceded eq. (2.60) that would have violated "energy conservation" and that he needs to fix before going to eq. (2.61) by some ad hoc non-local subtractions or filtering of the terms derived (as you claim he is doing). The dynamical evolution described does not violate energy (or any other) conservation, hence nothing needs to be done to maintain or recover such conservation. There are no extra terms that need to be removed to uphold "energy conservation"; the dynamical evolution equations are doing all that just fine.

Yes, at this point he just gets rid of multi-photon absorption.

nightlight said:
As he plainly explains, he is simply looking at how to extract from all the events that happen at these n points, contained in the equations derived rigorously, the properties he is interested in, the n-point absorption (detection) processes. Hence, he proposes the non-local filtering of any other events that contradict such n-point absorptions and the resulting expressions obtained after subtractions of the "noise" (the stuff he is "not interested in", but that happens anyway, by the equations derived until 2.60) are the "signal" expressed via his correlation functions Gn().

That is plain wrong. He gets rid of multi-photon absorption (and he also does not consider reemission from the atoms here) and similar processes. He explicitly states that:"Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.". He omits all processes where one atom is involved more than once. He, however, does not care whether all detections are caused by a signal field or whether there are noise contributions. Noise will just give you different correlation functions. There have been numerous papers and books about detector theory. Check, for example, the QO book by Vogel and Welsch.

Could you please point out where he explicitly claims that he gets rid of noise here? Otherwise your claim is simply without any substance. While it is true that only n-detection events are considered for the nth order correlation function, this is trivial. n-1 and n+1 detection events are covered by the correlation functions of order n-1 and n+1, respectively (however, n+1 photon detection events also represent n+1 n-photon detection events, obviously). The hierarchy of those gives the description of the total light field. Their normalized version is the one that gives clues about non-classicality.
 
Last edited:
  • #15
Cthugha said:
The negative probabilities occur in the quasiprobability distribution in phase space and are therefore not necessarily bound to specific location.

The quasi-probabilities are functions of space-time and have very limited regions of negativity in space-time (limited essentially by what can escape undetected within Heisenberg uncertainty). Smearing them with a Gaussian distribution, such as in the Husimi variant of quasi-probabilities, turns them positive. The only apparent exceptions are the instances of factors (for a specific degree of freedom, such as spin or polarization, formally considered separately from the spatio-temporal degrees of freedom) of such functions which lack space-time dependency. The "exception" is thus an artifact of the formal separation of degrees of freedom and of ignoring the space-time factors. In actual measurements of these internal degrees of freedom, there is always translation to spatio-temporal factors (via coupling between degrees of freedom).

The discussion of a detector triggering or not is pointless.

It's not irrelevant for experiments claiming to demonstrate violations of classical inequalities which are derived by considering constraints on the event space of single detection events. The obliteration of the distinction between sub-ZPF and ZPF-level fields by the detectors (due to threshold calibration & background subtractions) is critical for such non-classicality demonstrations, since it makes it appear as if the absence of detection counts on, say, one side of the beam splitter implies nothing on that side could cause interference if the detectors were removed and the two beams were brought again into a common region.

Except for Bell inequalities, the violation of which cannot be simulated by SED, all other non-classicalities of QO can be replicated in SED via ZPF effects of classical fields.

The way to measure these fields is balanced homodyne detection, where you amplify the field of interest by mixing it with a strong local oscillator to very classical values and afterwards get the signal field in terms of the difference current between the two mixed beams at very classical detectors. You do not need detectors working at the single photon level to measure that.

Data in homodyne detection cannot violate any classical inequality, i.e. one could easily simulate the results of such measurement via local automata, i.e. a field in the continuum limit. The only way such detection was used for non-classicality claims was by inferring a state of the quantized EM field which, if measured with the "ideal apparatus" of QM MT, which is always beyond present technology, would have violated some classicality condition. The classicality conditions are always derived by looking at constraints on the event space of single detection events.

Also I do not understand why you discuss non-locality at the end when this is about non-classicality. There is non-classicality without non-locality.

Yes, of course there is. But that's not the subject of the thread, which is to explain what is surprising about the double slit experiment, where the system appears localized in detection (as a particle) while passing through separate slits as a wave, i.e. it behaves either as a particle which can somehow sense a remote slit it didn't pass through, or as a wave which collapses globally as soon as it trips one detector, preventing any further detections on other/remote detectors (the particle-like exclusivity of a wave).

In either picture, particle or wave, it seems that there is some action at a distance or non-locality. At least within QM MT story. Making that work in a real experiment is another matter and in pedagogical settings requires a bit of stage magic as illustrated in an earlier thread where the particular sleight of hand in demonstrating the above "surprise" was identified.

When and how to neglect the negative frequency operator is already discussed around (2.40). He does not go into rotating wave approximation at this point, but he does so regularly in his book, starting exactly around (2.61).

As pointed out in the initial post, part of the subtractions is done in the earlier chapter on the single detector. Those results are then used (by being included implicitly via the operator ordering rule he derived earlier) in the n-point case being discussed. While they were a perfectly local filtering procedure in the single point detector case, they become non-local when carried over to the n-detector apparatus. E.g. background subtractions across multiple points can yield the appearance of particle-like exclusivity (sub-Poissonian counts) as discussed in that earlier thread.

Yes, at this point he just gets rid of multi-photon absorption.

That is plain wrong. He gets rid of multi-photon absorption (and he also does not consider reemission from the atoms here) and similar processes. He explicitly states that:"Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.". He omits all processes where one atom is involved more than once.

The relevance is that multi-photon absorption on one detector for the n-photon Fock state of the field also results in missed detections on the remaining detectors. For example, in BI violation experiments where G4() is used (in 2-channel variant) with 2-photon state, the events with two detections on one side of the apparatus as well as no detection on one side within coincidence window have to be discarded (resulting in detection loophole).

All these instances are filtering out of the "wrong" number of absorptions or multiple absorptions on "wrong" detectors and they are obviously discarded not because of technological imperfections of the non-ideal apparatus as claimed by QM MT, but because they are not the measurement "we are interested in" as he puts it.

Hence, the non-local post-selection is how you pick which Gn() you are measuring. It is particular Gn() (which encodes the info about the field state hence about its past interactions) that one wants to measure, not just anything that happens on the n detectors. But to extract the target Gn() requires non-local filtering as his derivation shows -- you have to drop the terms that don't contribute to the Gn() you're "interested in."

While that's all perfectly fine when all one is interested in is extracting the given Gn() from detection events, the non-local filtering procedure invalidates the subsystem independence and locality assumptions of QM MT, i.e. it precludes one from making predictions based on such non-locally filtered Gn() that require factorization of probabilities as used in the derivation of Bell inequalities.

The probabilities on such non-locally filtered counts need not factorize, hence the prediction of violation of classical tautologies cannot be derived within QED MT. It's a unique peculiarity of the old QM MT that allows such a prediction -- the imaginary "ideal apparatus" of QM MT on which S1xS2 can be measured via two local, independent measurements: S1 on subsystem #1 and S2 on subsystem #2. In QED MT, the coincidences expressed in Gn() are obviously not independent, as the non-local filtering required to derive them shows.

He, however, does not care whether all detections are caused by a signal field or whether there are noise contributions. Noise will just give you different correlation functions.

There is no "noise" or "signal" in the exact dynamical treatment of the system of EM field and n atoms which he carries out up to some point. There is simply what happens in the interaction of n detectors with quantized EM field.

The "noise" and "signal" concepts come into picture when one wishes to extract specific Gn() out of all the events on n detectors. That's where the non-local filtering is mandated as his derivation of Gn() shows. Such filtering has nothing to do with the technological imperfections as QM MT suggests. His detectors are already the idealized point-like detectors as perfect as they can be (single atoms).

There have been numerous papers and books about detector theory. Check, for example, the QO book by Vogel and Welsch.

Could you please point out where he explicitly claims that he gets rid of noise here? Otherwise your claim is simply without any substance.

As explained above, there is no "noise" in his dynamical treatment. But he seeks expression containing specific Gn(), i.e. he wishes to perform the measurement he "is interested in" such as the eq. (2.64) he arrives at. Everything else (other terms or events at the detectors) is discarded as not belonging to the target measurement of given Gn().

Similarly, in BI violations where G4() is measured, everything else (such as accidental & background counts, missed detections) is discarded since it doesn't contribute to G4() extraction sought there.

The labels "signal" and "noise" are my characterization of the procedure in which you seek to identify a certain subset of events you are "interested in" ("signal", i.e. his Gn() in 2.64), while dropping events you are "not interested in", i.e. the "noise", as he does from eq. (2.60) (plus vacuum contributions in the single detector chapter, which are implicit in the n-detector chapter).

You seem to be distracted by the arbitrary terms (such as noise/signal), while missing the point -- the fundamental non-local filtering required to extract a given Gn() out of all events on the n ideal point detectors derived via pure dynamical treatment. The point is that Gn() doesn't come for free out of n independent local detection events on n detectors making up some imaginary "ideal apparatus" (his detectors are already ideal) but has to be filtered out non-locally.

But once you have such non-local filtering procedure, required to extract Gn() on n-detector apparatus, the factorization of probabilities assumed by Bell (in his treatment of G4() example) doesn't apply or hold according to Glauber's QED MT.

Hence, there is no prediction within QED MT of BI violation, since the G4() extracted via non-local filtering (2.60) -> (2.61) (plus the implicit vacuum effects subtractions via operator ordering rules) violates the independence assumption of the "ideal apparatus" of QM MT, i.e. the measurement of the S1xS2 observable on a composite system is, within QED MT, not even in principle the same as the independent local measurements of S1 on subsystem #1 and S2 on subsystem #2 as assumed by Bell by using QM MT and its conjectured "ideal apparatus."

While it is true that only n-detection events are considered for the nth order correlation function, this is trivial. n-1 and n+1 detection events are covered by the correlation functions of order n-1 and n+1, respectively (however, n+1 photon detection events also represent n+1 n-photon detection events, obviously). The hierarchy of those gives the description of the total light field. Their normalized version is the one that gives clues about non-classicality.

That paragraph is somewhat non sequitur for the discussion at hand. In any case, you seem to be conflating above the "n detector" correlations, which his Gn() represent (after the non-local filtering) and n-photon field states which one may measure with the above n-detector apparatus. In the BI violations you are "interested in" measuring G4() on 2-photon state.
 
Last edited:
  • #16
nightlight said:
The quasi-probabilities are functions of space-time and have very limited regions of negativity in space-time (limited essentially by what can escape undetected within Heisenberg uncertainty). Smearing them with a Gaussian distribution, such as in the Husimi variant of quasi-probabilities, turns them positive. The only apparent exceptions are the instances of factors (for a specific degree of freedom, such as spin or polarization, formally considered separately from the spatio-temporal degrees of freedom) of such functions which lack space-time dependency. The "exception" is thus an artifact of the formal separation of degrees of freedom and of ignoring the space-time factors. In actual measurements of these internal degrees of freedom, there is always translation to spatio-temporal factors (via coupling between degrees of freedom).

Usually you have short pulses, so this already dictates your spatial and temporal extents. Nevertheless, the negative probabilities occur in phase space where your field quadratures are your two "axes". Of course smearing them turns them positive. This is what happens if you try to measure two orthogonal quadratures which is not possible without adding additional vacuum noise, e.g. by adding another beam splitter. But how is that relevant here? Wigner functions of Fock states have regions of negative probability. This is a sign of non-classicality.
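A small numerical check of the two statements above, in the convention where the vacuum Wigner function is exp(-(x^2 + p^2))/pi: the Wigner function of the one-photon Fock state dips below zero, while convolving it with the vacuum Gaussian (which yields the Husimi Q function) removes the negativity.

```python
# Wigner function of the Fock state |1> and its Gaussian-smoothed (Husimi) version.
# Convention: vacuum W(x, p) = exp(-(x^2 + p^2)) / pi, quadrature variance 1/2.
import numpy as np
from scipy.signal import fftconvolve

x = np.linspace(-6, 6, 241)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2

W_fock1 = (2 * R2 - 1) * np.exp(-R2) / np.pi   # Wigner function of |1>
W_vac   = np.exp(-R2) / np.pi                  # vacuum Wigner function

# Convolution with the vacuum Gaussian gives the Husimi Q function (up to convention).
Q = fftconvolve(W_fock1, W_vac, mode="same") * dx**2

print(W_fock1.min())   # -1/pi ~ -0.32: genuinely negative
print(Q.min())         # ~ 0: non-negative up to numerical error
```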

nightlight said:
Except for Bell inequalities, the violation of which cannot be simulated by SED, all other non-classicalities of QO can be replicated in SED via ZPF effects of classical fields.

That is simply wrong. You cannot simulate antibunching for a stationary field classically. There are numerous crackpot papers claiming that, but failing miserably, but not a single credible one.

nightlight said:
Data in homodyne detection cannot violate any classical inequality, i.e. one could easily simulate the results of such measurement via local automata, i.e. a field in the continuum limit. The only way such detection was used for non-classicality claims was by inferring a state of the quantized EM field which, if measured with the "ideal apparatus" of QM MT, which is always beyond present technology, would have violated some classicality condition. The classicality conditions are always derived by looking at constraints on the event space of single detection events.

That is wrong again. While non-ideal quantum efficiency obviously reduces the amount of non-classicality, current technology indeed allows to identify non-classical fields. See Alex Lvovsky's review article on quantum state tomography (Rev. Mod. Phys. 81, 299–332 (2009)) and references therein for detailed explanations. You do not need single detection events for that.
nightlight said:
Making that work in a real experiment is another matter and in pedagogical settings requires a bit of stage magic as illustrated in an earlier thread where the particular sleight of hand in demonstrating the above "surprise" was identified.

Oh, if I had read that thread earlier that would have saved me some time. I was not aware that you are supporting the crackpot camp. That thread was closed for a very good reason.

nightlight said:
As pointed out in the initial post, part of the subtractions is done in the earlier chapter on the single detector. Those results are then used (by being included implicitly via the operator ordering rule he derived earlier) in the n-point case being discussed. While they were a perfectly local filtering procedure in the single point detector case, they become non-local when carried over to the n-detector apparatus. E.g. background subtractions across multiple points can yield the appearance of particle-like exclusivity (sub-Poissonian counts) as discussed in that earlier thread.

Except for the small problem that the discussion is simply invalid.

nightlight said:
The relevance is that multi-photon absorption on one detector for the n-photon Fock state of the field also results in missed detections on the remaining detectors. For example, in BI violation experiments where G4() is used (in 2-channel variant) with 2-photon state, the events with two detections on one side of the apparatus as well as no detection on one side within coincidence window have to be discarded (resulting in detection loophole).

You can discard them. You do not have to. If using binary photon detectors without photon number resolution, you need these rates to be small, but that is absolutely trivial. If using photon number sensitive detectors, you do not even have to do that.

nightlight said:
All these instances are filtering out of the "wrong" number of absorptions or multiple absorptions on "wrong" detectors and they are obviously discarded not because of technological imperfections of the non-ideal apparatus as claimed by QM MT, but because they are not the measurement "we are interested in" as he puts it.

No, this is nonsense. You can add them if you want to. If you do the proper normalization via the mean count rate, you indeed include them.

nightlight said:
Hence, the non-local post-selection is how you pick which Gn() you are measuring. It is particular Gn() (which encodes the info about the field state hence about its past interactions) that one wants to measure, not just anything that happens on the n detectors. But to extract the target Gn() requires non-local filtering as his derivation shows -- you have to drop the terms that don't contribute to the Gn() you're "interested in."

Terms with fewer photons do not contribute anyway. Terms with more photons can be taken into account. You are only talking about the limited case of binary detectors and thus creating a strawman.

nightlight said:
While that's all perfectly fine when all one is interested in is extracting the given Gn() from detection events, the non-local filtering procedure invalidates the subsystem independence and locality assumptions of QM MT, i.e. it precludes one from making predictions based on such non-locally filtered Gn() that require factorization of probabilities as used in the derivation of Bell inequalities.

Why Bell again? You claimed there is no sign of non-locality besides Bell, but there is very obviously antibunching which does not need nonlocality.

nightlight said:
The probabilities on such non-locally filtered counts need not factorize, hence the prediction of violation of classical tautologies cannot be derived within QED MT. It's a unique peculiarity of the old QM MT that allows such a prediction -- the imaginary "ideal apparatus" of QM MT on which S1xS2 can be measured via two local, independent measurements: S1 on subsystem #1 and S2 on subsystem #2. In QED MT, the coincidences expressed in Gn() are obviously not independent, as the non-local filtering required to derive them shows.

Ehm...just no. You do not need an ideal apparatus for non-classicality. That is the main point of measuring g2. You get a normalized variance which does not care about the mean photon number and detection efficiency. If you get a variance below shot noise, you are in the non-classical regime.

nightlight said:
The "noise" and "signal" concepts come into picture when one wishes to extract specific Gn() out of all the events on n detectors. That's where the non-local filtering is mandated as his derivation of Gn() shows. Such filtering has nothing to do with the technological imperfections as QM MT suggests. His detectors are already the idealized point-like detectors as perfect as they can be (single atoms).

No, it still does not need non-local filtering.

nightlight said:
As explained above, there is no "noise" in his dynamical treatment. But he seeks expression containing specific Gn(), i.e. he wishes to perform the measurement he "is interested in" such as the eq. (2.64) he arrives at. Everything else (other terms or events at the detectors) is discarded as not belonging to the target measurement of given Gn().

Everything else simply belongs to a different Gn. A complete characterization of course involves ALL orders of Gn. However, the second order is already sufficient to identify many non-classical light fields. This is Glauber's great finding.

nightlight said:
That paragraph is somewhat non sequitur for the discussion at hand. In any case, you seem to be conflating above the "n detector" correlations, which his Gn() represent (after the non-local filtering) and n-photon field states which one may measure with the above n-detector apparatus. In the BI violations you are "interested in" measuring G4() on 2-photon state.

And again Bell. This discussion is still not about Bell. So, to summarize: do you have any peer-reviewed publication supporting your point? That would be a real starting point for a discussion. Otherwise this discussion is pointless.
 
  • #17
cataclysmic said:
The only other potential weirdness I can see is that single electrons fired one at a time at the two slits cause an interference pattern. This is a bit weird because the only explanation seems to be that each single electron somehow goes through 2 slits and interferes with itself. But is that actually weird? It doesn't seem any weirder to me than the fact that light sometimes acts like a particle and sometimes like a wave. If light can do it, why not an electron? Why is it so strange that a single electron acts like a wave and goes through both slits? It doesn't seem that weird and mysterious to me.

Well, the weird thing here is indeed that it challenges our naive understanding of what the meaning of the term particle is. You either have to abandon the classical picture of "balls with well defined trajectories" or add a wave-like element like Bohmians do.

If you do not think that this is weird at the single particle level, then there is no hidden more fundamental weirdness.
 
  • #18
Cthugha said:
Usually you have short pulses, so this already dictates your spatial and temporal extents. Nevertheless, the negative probabilities occur in phase space where your field quadratures are your two "axes". Of course smearing them turns them positive. This is what happens if you try to measure two orthogonal quadratures which is not possible without adding additional vacuum noise, e.g. by adding another beam splitter. But how is that relevant here? Wigner functions of Fock states have regions of negative probability. This is a sign of non-classicality. ... That is simply wrong. You cannot simulate antibunching for a stationary field classically. There are numerous crackpot papers claiming that, but failing miserably, but not a single credible one.

It is a purely formal sign, depending on what you call "classicality". If you define it as classical fields without ZPF, then classical ED under such constraints cannot reproduce those effects. But if you drop such gratuitous constraints, then there is nothing non-classical about these effects. Marshall, Boyer, Jaynes and numerous others working on SED have been modeling such formal non-classicality phenomena with purely classical models (ZPF + Maxwell ED) since the 1960s (e.g. check a few refs).

Note that to refute a claim of non-classicality for some observation, any classical model, however contrived (or by whoever, crackpot or otherwise) and specific to the data or phenomenon, suffices as a counter-example. To serve as a direct counter-example to a non-classicality claim, such a model need not be a general theory of the phenomenon. That's like a conjecture in math about, say, some property of prime numbers -- it suffices to show one numeric example on which the conjecture fails in order to refute it. The counter-example need not explain when the conjectured property holds or say anything else about it.

While the SED is in fact insufficient as a general theory of quantized EM fields, that doesn't mean it can't provide classical model counter-examples for specific non-classicality claims of QO.

That is wrong again. While non-ideal quantum efficiency obviously reduces the amount of non-classicality, current technology indeed allows to identify non-classical fields. See Alex Lvovsky's review article on quantum state tomography (Rev. Mod. Phys. 81, 299–332 (2009)) and references therein for detailed explanations. You do not need single detection events for that.

Tomography doesn't yield data (counts) that cannot be reproduced by local realist models. To generate inequalities which exclude local realist models in addition to such reconstructed states, one requires QM MT assumptions on how multiphoton detections work which contradict the actual coincidence measurement procedures of QO, since they assume independence, while the non-local filtering used in QO coincidence measurements outright contradicts such an assumption.

Oh, if I had read that thread earlier that would have saved me some time. I was not aware that you are supporting the crackpot camp. That thread was closed for a very good reason.

Resorting to name calling, eh. Thanks for admitting the lack of counter-points on matters of substance. The "crackpots" on this subject go back to Einstein, de Broglie and Schrödinger, continuing to Barut, Jaynes, 't Hooft, Wolfram and others more recently. Not bad company to be in.

BTW, that thread was closed after it got revived (when Glauber won Nobel prize) many months after it wound down on its own due to someone getting offended. Your observation about it is yet another non sequitur.

Except for the small problem that the discussion is simply invalid.

Well that settles it I suppose, if we were a grade school and you were the teacher.

You can discard them. You do not have to. If using binary photon detectors without photon number resolution, you need these rates to be small, but that is absolutely trivial. If using photon number sensitive detectors, you do not even have to do that. .. No, this is nonsense. You can add them if you want to. If you do the proper normalization via the mean count rate, you indeed include them.

To get anything non-classical in a genuine sense, rather than via some wishful definition of "classical", you need to discard/filter data non-locally. The experiment which could achieve such a violation, which you claim to be routine, would be a loophole-free demonstration of non-classicality, which is still pending.

Why Bell again? You claimed there is no sign of non-locality besides Bell, but there is very obviously antibunching which does not need nonlocality. ... Ehm...just no. You do not need an ideal apparatus for non-classicality. That is the main point of measuring g2. You get a normalized variance which does not care about the mean photon number and detection efficiency. If you get a variance below shot noise, you are in the non-classical regime. ... No, it still does not need non-local filtering. ... Everything else simply belongs to a different Gn. A complete characterization of course involves ALL orders of Gn. However, the second order is already sufficient to identify many non-classical light fields. This is Glauber's great finding.

A sign or hint is a different matter from a loophole-free demonstration of non-classicality in the sense of excluding any type of local realist model of the observation. You are conflating the technical term 'non-classical' as used in QO with genuine non-classicality (with no restriction of initial or boundary conditions to the ZPF-free type). That kind of technical 'non-classicality', as used in QO jargon, is neither genuine non-classicality nor is it surprising.

And again Bell. This discussion is still not about Bell.

It is about whether there is anything surprising in the double slit experiments, which comes down to requiring either particles or waves with action-at-a-distance features. Of course, that's only a qualitative hint at possible non-locality. The only decisive quantitative criterion for whether that is the case is the violation of Bell inequalities, which is what any argument about it, in the double slit or any other such experiment, eventually has to fall back on in order to preclude the assumption of pre-existent values of physical quantities merely manifesting in the measurement (hidden variables). All other QO 'non-classicalities' can be replicated by SED.

So, to summarize: Do you have any peer-reviewed publication supporting your point? That would be a real starting point for a discussion. Otherwise this discussion is pointless.

Check any of the top linked papers above, e.g. those by Marshall & Santos, who have been debunking QO 'non-classicality' claims for over three decades. Even one of the most conventional and authoritative textbooks, Yariv's "Optical Electronics in Modern Communications", reluctantly acknowledges (cf. chap. 20, p. 703) that "Somehow to my surprise, I found that by asking student to accept just one result from quantum mechanics [ZPF], it is possible to treat all the above mentioned phenomena classically and obtain results that agree with those of quantum optics." The one "result" is the ZPF initial & boundary conditions used in SED, which is a classical theory (the ZPF is a plain classical field even though Yariv labels it a "result from quantum mechanics", in the sense of being inspired by QM, I suppose).

The ZPF is actually a concept that goes back to Planck's second theory (of 1911), a purely classical EM theory with a background zero-point field (also a classical field), which he used to re-derive his black-body formula in a classical manner. Over the decades the ZPF idea has been extended to cover each new 'non-classical' phenomenon in QO, and it has sufficed for all of them except the Bell inequality violations (which empirically don't happen anyway).
 
Last edited:
  • #19
nightlight said:
It is a purely formal sign, depending on what you call "classicality". If you define it as classical fields without the ZPF, then classical ED under such constraints cannot reproduce those effects. But if you drop such gratuitous constraints, then there is nothing non-classical about these effects. Marshall, Boyer, Jaynes and numerous others working on SED have been modeling such formally non-classical phenomena with purely classical models (ZPF + Maxwell ED) since the 1960s (e.g. check few refs).

Non-classical, obviously, is everything you cannot reproduce using classical distributions. Conditional homodyne detection gives good examples of that. See e.g. J. Opt. B: Quantum Semiclass. Opt. 6 S645 for a continuous-variable example.

nightlight said:
Note that to refute a claim of non-classicality for some observation, any classical model, however contrived (or by whomever, crackpot or otherwise) and however specific to the data or phenomenon, suffices as a counter-example. To serve as a direct counter-example to a non-classicality claim, such a model need not be a general theory of the phenomenon. It's like a mathematical conjecture about, say, some property of prime numbers -- it suffices to exhibit one number on which the conjecture fails in order to refute it. The counter-example need not explain when the conjectured property holds or say anything else about it.

I disagree. Of course you can come up with a classical model for the photoelectric effect, for some forms of antibunching (especially for nonstationary fields, though not for the general case), for some stuff seen in squeezing and so on. But who is interested in models for very special cases which are well known to be heavily flawed? Who is interested in showing that one can find a parameter set that looks like antibunching, while at the same time producing unphysical first-order coherence distributions not matching the experiment?

nightlight said:
While SED is indeed insufficient as a general theory of quantized EM fields, that doesn't mean it can't provide classical-model counter-examples to specific non-classicality claims of QO.

That just seems odd to me. Physics is about building working models with predictive power, not about building highly specialized approaches fulfilling random constraints. However, if someone desperately wants to do that: ok. I still consider it mathematics, not physics.

nightlight said:
Tomography doesn't yield data (counts) that cannot be reproduced by local realist models.

I never claimed that. You claimed:" The only decisive non-classicality experiment is the genuine violation of Bell inequalities". I am not interested in the Bell inequality question or nonlocality at all. I am claiming that there are other non-classicality experiments not even remotely similar to Bell measurements.

nightlight said:
Resorting to name calling, eh. Thanks for admitting the lack of counter-points on matters of substance. The "crackpots" on this subject go back to Einstein, de Broglie and Schrodinger, continuing to Barut, Jaynes, 't Hooft, Wolfram and others more recently. Not bad company to be in.

BTW, that thread was closed, after it got revived (when Glauber won the Nobel prize) many months after it had wound down on its own, due to someone getting offended. Your observation about it is yet another non sequitur.

That does not make your point more correct or even mainstream, and "goes back to" is quite a stretch. The existence of researchers supporting determinism or local realism at some point does not make it stronger. This is not a forum devoted to off-track physics.

nightlight said:
To get anything non-classical in the genuine sense, rather than via some wishful definition of "classical", you need to discard/filter data non-locally. The experiment you claim to be routine, which could achieve such a violation, would be a loophole-free demonstration of non-classicality, and that is still pending.

Sorry, the race of following what other people think needs to be shown for "genuine" nonclassicality is nothing I intend to join. It will end up with people finally claiming that one has to rule out superdeterminism to show that things are truly nonclassical. If you can provide a consistent classical model of antibunching, that is something worth discussing. The stuff by e.g. Marshall and Santos is not of interest from a physical point of view as it is not consistent. From the math point of view maybe.

nightlight said:
A sign or hint is a different matter from a loophole-free demonstration of non-classicality in the sense of excluding any type of local realist model of the observation. You are conflating the technical term 'non-classical' as used in QO with genuine non-classicality (with no restriction of initial or boundary conditions to the ZPF-free type). That kind of technical 'non-classicality', as used in QO jargon, is neither genuine non-classicality nor is it surprising.

No, I am not conflating them. I am not the slightest bit interested in what you call genuine non-classicality. You made the claim " The only decisive non-classicality experiment is the genuine violation of Bell inequalities" (explicitly not about what you call genuine non-classicality) and I simply oppose that. I am just interested in technical non-classicality as you intend to call it.

nightlight said:
All other QO 'non-classicalities' can be replicated by SED.

Still: no. I do not count qualitative similarity using different (and usually mutually exclusive) parameter sets as replication. A model lacking predictive power is a bad model. Physics is about predicting, not about replicating.

nightlight said:
Check any of the top linked papers above, e.g. those by Marshall & Santos, who have been debunking QO 'non-classicality' claims for over three decades. Even one of the most conventional and authoritative textbooks, Yariv's "Optical Electronics in Modern Communications", reluctantly acknowledges (cf. chap. 20, p. 703) that "Somehow to my surprise, I found that by asking student to accept just one result from quantum mechanics [ZPF], it is possible to treat all the above mentioned phenomena classically and obtain results that agree with those of quantum optics." The one "result" is the ZPF initial & boundary conditions used in SED, which is a classical theory (the ZPF is a plain classical field even though Yariv labels it a "result from quantum mechanics", in the sense of being inspired by QM, I suppose).

The ZPF is actually a concept that goes back to Planck's second theory (of 1911), a purely classical EM theory with a background zero-point field (also a classical field), which he used to re-derive his black-body formula in a classical manner. Over the decades the ZPF idea has been extended to cover each new 'non-classical' phenomenon in QO, and it has sufficed for all of them except the Bell inequality violations (which empirically don't happen anyway).

Yes, I know that stuff. I think my comments above already gave my opinion about SED. Carmichael wrote "The Achilles heel of stochastic electrodynamics is its inability to give a plausible account of the firing of photoelectric detectors." This is still a mild verdict.
 
  • #20
Cthugha said:
Non-classical are obviously all the things you cannot reproduce using classical distributions. Conditional homodyne detection gives good examples of that. See e.g. J. Opt. B: Quantum Semiclass. Opt. 6 S645 for a continuous variable example.

Just because some quasiprobability distribution turns negative, and hence becomes inapplicable as a probabilistic model of the phenomenon, doesn't imply that all conceivable models are inapplicable. The claim to have a phenomenon which excludes all classical models even in principle is an extremely strong claim, and all it takes is a single counter-example, a model however contrived and specialized to the phenomenon, to falsify such an extreme non-classicality claim. That doesn't make such a model more useful than the conventional theory or invalidate the practical usefulness of the phenomenon. It merely falsifies the most extreme form of extrapolation from the phenomenon.

In the above and the rest of your comments you seem to be confusing the lack of practical usefulness or general applicability of some model with its ability to falsify the extreme claims. Such falsification doesn't make the phenomenon unimportant or less interesting in practical applications. It only affects the grand claims of absolute non-classicality (which have yet to be demonstrated empirically).

I disagree. Of course you can come up with a classical model for the photoelectric effect, for some forms of antibunching (especially for nonstationary fields, though not for the general case), for some stuff seen in squeezing and so on. But who is interested in models for very special cases which are well known to be heavily flawed? Who is interested in showing that one can find a parameter set that looks like antibunching, while at the same time producing unphysical first-order coherence distributions not matching the experiment?

It's not about who is or isn't interested in some contrived counter-example, but about whether a counter-example to the claim can exist at all, since the claim it targets says one cannot exist even in principle.

I never claimed that. You claimed:" The only decisive non-classicality experiment is the genuine violation of Bell inequalities". I am not interested in the Bell inequality question or nonlocality at all. I am claiming that there are other non-classicality experiments not even remotely similar to Bell measurements.

The conventional QO non-classicalities based on negativity of P or some such technical criterion are in a different realm -- they explicitly preclude only certain kinds of classical models (such as those without the ZPF). They are irrelevant for the question of this thread -- is there anything surprising or genuinely mysterious about the double slit experiment? My initial comment was that seeing dots (or discrete detections) on the screen or interference patterns isn't the surprising aspect, as some here have imagined (the paper [1] of the earlier thread explains in the intro why neither aspect is actually surprising and why their experiment is needed).

The only truly surprising effect would be a genuine empirical violation of Bell inequalities, which in turn would make the double slit or beam splitter experiment mysterious by precluding local hidden variable models. Namely, Bohm-de Broglie alternative QM theories can explain the particle-wave interplay in the double slit and similar mysteries of early QM via hidden variables. Their main problem is that if the Bell inequalities can be violated (which they don't predict, since they don't use the strong composite-system projection postulate of QM MT but a weak form a la QED MT), then their hidden variables are automatically non-local, which makes them much less convincing as an explanation.

That does not make your point more correct or even mainstream, and "goes back to" is quite a stretch. The existence of researchers supporting determinism or local realism at some point does not make it stronger. This is not a forum devoted to off-track physics.

It was the question posed in the forum that we are supposed to be addressing. You are the one who was calling names. I am merely reminding you that you're shooting way over your head with such attacks. Labeling Einstein, Schrodinger, de Broglie, Bohm, Barut, 't Hooft, Wolfram, etc. crackpots is a bit of a stretch in my view.

Sorry, the race of following what other people think needs to be shown for "genuine" nonclassicality is nothing I intend to join. It will end up with people finally claiming that one has to rule out superdeterminism to show that things are truly nonclassical.

That's yet to be seen. First an actual violation without anomalies, unreported relevant counts data or loopholes has to be shown before any super-determinism comes into play. There is no point debating who believes in which possibility.

If you can provide a consistent classical model of antibunching, that is something worth discussing. The stuff by e.g. Marshall and Santos is not of interest from a physical point of view as it is not consistent. From the math point of view maybe.

As a direct counter-example to absolute non-classicality claims it suffices.

No, I am not conflating them. I am not the slightest bit interested in what you call genuine non-classicality. You made the claim " The only decisive non-classicality experiment is the genuine violation of Bell inequalities" (explicitly not about what you call genuine non-classicality) and I simply oppose that. I am just interested in technical non-classicality as you intend to call it.

But the thread question is whether there is something truly mysterious, not about technical 'non-classicality', such as negativity of particular quasiprobability function or violation of some contrived definition of 'classicality'.

Yes, I know that stuff. I think my comments above already gave my opinion about SED. Carmichael wrote "The Achilles heel of stochastic electrodynamics is its inability to give a plausible account of the firing of photoelectric detectors." This is still a mild verdict.

I am not a supporter of SED either; it's an effective theory for a limited set of QO phenomena. Its main virtue was in taking down the grand non-classicality claims of early QO from the 1960s and putting them in their proper place -- as mere technical 'non-classicalities' (i.e. as terminological conventions). The only 'holy grail' in the non-classicality field is the loophole-free violation of Bell inequalities, and all genuinely surprising aspects of QM arise from the possibility of such a violation, since it would preclude any conceivable local hidden variable model (deterministic or stochastic). That's what would make the double slit discussed in this thread a genuine mystery, since all natural explanations would be eliminated.
 
Last edited:
  • #21
nightlight said:
Just because some quasiprobability distribution turns negative, and hence becomes inapplicable as a probabilistic model of the phenomenon, doesn't imply that all conceivable models are inapplicable. The claim to have a phenomenon which excludes all classical models even in principle is an extremely strong claim, and all it takes is a single counter-example, a model however contrived and specialized to the phenomenon, to falsify such an extreme non-classicality claim. That doesn't make such a model more useful than the conventional theory or invalidate the practical usefulness of the phenomenon. It merely falsifies the most extreme form of extrapolation from the phenomenon.

I disagree. The counterexample must be physical and explain at least the necessary measurements (at the very least g1 and g2). Most importantly, it must come along with the necessary detector theory and be explicitly formulated for a specific detector. It is an important, but often neglected, fact that non-classicality conditions must go along with a precise description of the detector, since strictly speaking each detector has its own non-classicality condition. For example, you get sub-binomial instead of sub-Poissonian light for on-off detector arrays (Phys. Rev. Lett. 109, 093601 (2012), Phys. Rev. Lett. 110, 173602 (2013)). Otherwise people will come along with 13-dimensional entities or the famous how-to-fit-an-elephant formula. This is why I cannot take the "classical counter-example" seriously: you can always assume some detector that can reproduce some experimental result. However, I agree that the importance of the latter part is usually not emphasized enough. On the other hand, it is cumbersome to write (and especially read) in every paper something like: "We demonstrate non-classicality for a two-avalanche-photodiode system with a dead time of 500 ps, a quantum efficiency of 82% in start-stop geometry with bin sizes of 300 ps, at a dark count rate of 300 counts/s, an afterpulsing probability of 0.1%, a mean count rate of 10^7/s per photodiode..." and so on and so forth.

A good counterexample should also be able to explain all measurements of the same field in different settings, like antibunching using a single detector (Phys. Rev. A 86, 053814 (2012)), photon number resolving detectors and two-photon absorption.

nightlight said:
In the above and the rest of your comments you seem to be confusing the lack of practical usefulness or general applicability of some model with its ability to falsify the extreme claims. Such falsification doesn't make the phenomenon unimportant or less interesting in practical applications. It only affects the grand claims of absolute non-classicality (which have yet to be demonstrated empirically).

Well, as said before, I was never discussing absolute non-classicality here. Although I think it is solid, too.

nightlight said:
The conventional QO non-classicalities based on negativity of P or some such technical criterion are in a different realm -- they explicitly preclude only certain kinds of classical models (such as those without the ZPF). They are irrelevant for the question of this thread -- is there anything surprising or genuinely mysterious about the double slit experiment?
[...]
The only truly surprising effect would be a genuine empirical violation of Bell inequalities, which in turn would make the double slit or beam splitter experiment mysterious by precluding local hidden variable models.

So Bell Inequalities are the most interesting thing about the double slit? Come on, you cannot be serious.

nightlight said:
It was the question posed in the forum that we are supposed to be addressing.

No. Not when it comes to the double slit. I just addressed your erroneous claim that there is no non-locality.

nightlight said:
But the thread question is whether there is something truly mysterious, not about technical 'non-classicality', such as negativity of particular quasiprobability function or violation of some contrived definition of 'classicality'.

The world does not seem to share your definition of what is truly mysterious. Most people do not worry about Bell when they discuss the double slit.

nightlight said:
The only 'holy grail' in the non-classicality field is the loophole-free violation of Bell inequalities, and all genuinely surprising aspects of QM arise from the possibility of such a violation, since it would preclude any conceivable local hidden variable model (deterministic or stochastic). That's what would make the double slit discussed in this thread a genuine mystery, since all natural explanations would be eliminated.

Ehm...no. Bell does not add to the double slit. It is an interesting topic of its own. The double slit is not even quantum. As said: I just responded to correct your erroneous statement. And as this is about the double slit and not about Bell and not even about non-classical physics, I will stop this discussion here as it is out of place. If there is anything else to add to the last discussion, please open another thread and/or ask the mods to move the posts. This is moving way too far from the topic at hand.
 
  • #22
Cthugha said:
I disagree. The counterexample must be physical and explain at least the necessary measurements (at the very least g1 and g2). ... A good counterexample should also be able to explain all measurements of the same field in different settings, like antibunching using a single detector (Phys. Rev. A 86, 053814 (2012)), photon number resolving detectors and two-photon absorption.

That's mixing up apples and oranges. For people seeking to come up with the next physics at the Planck scale, such as 't Hooft or Wolfram, it is quite relevant whether cellular automata, Planckian networks, or some other distributed, local computational model could at least in principle replicate the empirical facts of existing physics. If there is something in existing physics which precludes that at a fundamental level, they and others with similar interests would certainly love to know it before pursuing a 'squaring the circle' path.

But any such fundamental prohibition would need to be a rigorous derivation of Bell inequality violations by QM, based squarely on empirically backed requirements, not on some arbitrary or wishful interpretation of the QM measurement process for which a weaker alternative would work just as well as far as the empirical facts go.

The present derivation of the QM prediction which violates BI falls well short in this regard, since it is based on QM MT, which makes very strong assumptions about measurements of composite-system observables -- it assumes that measurement of the S1xS2 observable can be done by an "ideal apparatus" via two independent, local measurements of S1 on subsystem #1 and of S2 on subsystem #2.

In contrast, the more recent and more fundamental QED MT doesn't make the above assumption about S1xS2, but derives the composite-system measurements from a much weaker, empirically well-backed assumption about local detection of photo-electrons. This derivation, as cited earlier, shows that the independence assumption of QM MT does not hold in QED MT for composite systems -- as Glauber shows, measuring Gn() from the counts on n detectors requires non-local filtering of the observed counts.

With that type of non-local filtering procedure derived in QED MT from weaker measurement postulates, any non-locality or non-classicality claims (based on probability factorization requirements for classical models) are no more valid than a magician claiming he can demonstrate telepathic powers, provided he is allowed to read the answer from the remote 'sender' with his regular eyes first, before telepathically divining it via his third eye.

Well, as said before, I was never discussing absolute non-classicality here.

That was the only aspect I was discussing. The thread topic was not whether one can define "surprising" in a way that makes the double slit experiment "surprising" (which is apparently your understanding of the thread question), but whether there is something genuinely surprising about it, as Feynman's lectures and the myriad others inspired by them suggest.

My response is that the experiments are surprising only if you prohibit LHV-based explanations. But the LHV prohibition depends critically on the status of BI violations, since that's the only quantitative, falsifiable criterion so far for such exclusion.

So Bell Inequalities are the most interesting thing about the double slit? Come on, you cannot be serious.

As explained, as a matter of excluding some paths of possible future physics, that's indeed the most relevant aspect -- whether it excludes LHV models in an absolute sense or not. The double slit or beam splitter experiments, at least as pedagogically presented, are qualitatively suggestive of such exclusion. But the only quantitative, falsifiable criterion for fundamental exclusion of LHV models is, at present, the BI violation.

Since that violation hasn't been empirically achieved as yet, while the QM prediction of the violation depends on an unnecessarily strong composite-system measurement postulate, the question of what kind of future physics is possible, specifically whether some local computation (e.g. at the Planck scale) can in principle replicate existing physics, remains open.

No. Not when it comes to the double slit. I just addressed your erroneous claim that there is no non-locality.

What exactly is "erroneous"? I didn't notice anything but a series of non sequitur and ad hominem arguments so far.

As explained above, you are confusing the practical relevance and theoretical generality of LHV counter-examples with the fundamental question of whether they refute absolute non-locality claims (the absolute exclusion of LHV models for the observed phenomena).

To refute absolute claims of the non-existence of such models compatible with some empirical data, all that is needed is to replicate the narrow data in question via an LHV model, however contrived and irrelevant as a general theory such a model may be. The mere existence of an LHV model for empirical data claimed to exclude, as a matter of principle, any such model as an explanation suffices to falsify the absolute non-existence claim.

One can of course weaken a non-classicality claim, as you do throughout the discussion, by saying it merely excludes "interesting" models; that's fine, but that kind of weaker, subjective non-classicality is a mere terminological convention, irrelevant to fundamental questions such as what conceivable paths some future physics may take.

The world does not seem to share your definition of what is truly mysterious. Most people do not worry about Bell when they discuss the double slit.

If you're a practical quantum optician or applied physicist, BI violations may be irrelevant. For those curious about whether future physics may be based on distributed local computations at the Planck scale (such as Wolfram's NKS models), it is quite a relevant question.

Ehm...no. Bell does not add to the double slit. It is an interesting topic of its own. The double slit is not even quantum.

Have you read Feynman's lectures? Or the myriad others based on or inspired by them? The mystery is how to explain the double slit without an LHV model, which is supposedly excluded by BI violations. Since the latter has not been empirically demonstrated, one can drop the no-LHV-allowed requirement, and the double slit mystery goes away.

As said: I just responded to correct your erroneous statement. And as this is about the double slit and not about Bell and not even about non-classical physics, I will stop this discussion here as it is out of place. If there is anything else to add to the last discussion, please open another thread and/or ask the mods to move the posts. This is moving way too far from the topic at hand.

There was no "erroneous statement" that anything you said pointed out. If I missed it you are welcome to provide a link to specific demonstration of such error you claim to have produced in the discussion.
 
  • #23
I don't think that anyone really has answered the question of what's weird about the double slit experiment.

In the experiment, we send a stream of electrons (or a beam of light) through two parallel slits, and the waves or particles or whatever they are get absorbed by a screen (say a photographic plate). There are three experimental facts we discover:

1. At high intensity, a characteristic interference pattern appears on the screen. The pattern changes when you close off one slit or the other.

2. If you drop the intensity sufficiently, the pattern on the screen can be seen to be made up of discrete, localized interactions (dots on the screen).

3. If the intensity is low enough then the appearance of dots on the screen can be seen to be exclusionary; if a dot appears on one spot, then no dot will appear at a different spot at the same time.

What is weird is that we don't have a satisfactory explanation of these facts in terms of localized processes and local interactions.

The interference pattern by itself isn't weird--classical electrodynamics predicts the same pattern. But the discreteness that is observed at low intensity isn't predicted by classical theory.

You could try to incorporate the discreteness by assuming that there is some nondeterminism in the interaction between the wave (showing the interference pattern) and the screen. The interaction results in a discrete dot (or lack thereof), but it is unpredictable what outcome will happen.

But assuming that the dot results from a localized interaction between the wave and the points on the screen fails to explain the exclusionary property. If a dot appears at one point, then a dot will NOT appear at a distant point at the same time. This exclusionary property is required by conservation of energy and/or particle number, but it's hard to see what could enforce it. If the wave interacting with the screen nondeterministically produces a dot, what's to prevent a dot from appearing at a distant point at the same time? It would seem to require a nonlocal interaction to make sure that a dot appearing at one point implies no dot appears at a distant point.

If you try to explain the exclusionary property in terms of local interactions, you seem to be forced to the conclusion that the particle (electron or photon) must have already "decided" which way it was going to go much earlier. That way, you can assure absolute particle number conservation; the electron is at some definite position at every moment (you just don't know where) and so of course if it's at one point making a dot, it can't be at another point making a different dot. This is the hidden variables approach. But that has its own problems. In particular, it's hard to account for the interference pattern; if the electron has a definite position, then it must go through one slit or the other. If it goes through one slit, then why should it make any difference to that electron that the other slit is open?

So it's not so much that there is anything contradictory about the experimental results of the two slit experiment, but those results defy explanation in terms of localized interactions.
 
  • #24
nightlight said:
Have you read Feynman's lectures? Or the myriad others based on or inspired by them? The mystery is how to explain the double slit without an LHV model, which is supposedly excluded by BI violations. Since the latter has not been empirically demonstrated, one can drop the no-LHV-allowed requirement, and the double slit mystery goes away.

That comment doesn't make any sense to me. The mystery doesn't "go away" because there is a loophole in a proof. To make the mystery go away would require that you actually come up with a satisfactory local hidden variables explanation for the empirical facts. Whether or not there is a proof that no such explanation is possible, we have a mystery until we have such an explanation.

Are you saying that there is a satisfactory local hidden variables model for quantum mechanics? Or are you saying that there is no conclusive proof that there is no such model? The latter is a very weak double-negative, and I certainly wouldn't consider that to be a "solution" to the mystery.
 
  • #25
stevendaryl said:
That comment doesn't make any sense to me. The mystery doesn't "go away" because there is a loophole in a proof.

If there is an absolute LHV prohibition by the empirically established facts, there is a mystery in the double slit. Otherwise the empirical facts of the double slit experiment, standing on their own, are perfectly compatible with a local model. The usual pedagogical presentation a la Feynman is misleading with its exclusivity claim (which you seem to have bought). Even though Feynman presented it in the early 1960s as a major mystery (based on von Neumann's faulty no-LHV theorem), in fact even a recent experiment trying to show the above exclusivity had to cheat to achieve such an appearance, as discussed in an earlier PF thread. The exclusivity on non-filtered data is actually no sharper than a Poissonian distribution allows, i.e. the chance of simultaneous hits in two places is at least p^2 (where p is the chance of one hit; it is larger for super-Poissonian sources, such as chaotic light).
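To see the scale of that Poissonian baseline, here is a small simulation (my own sketch; the value of p and the number of bins are arbitrary assumptions, not numbers from any experiment discussed here): two detectors firing independently with per-bin probability p produce raw double hits at a rate of about p^2, so raw coincidences never vanish entirely, they only scale down quadratically.

```python
# Sketch of the Poissonian coincidence baseline (illustration only):
# independent firing with probability p per time bin gives double hits at ~p**2.
import numpy as np

rng = np.random.default_rng(1)
p = 0.05            # assumed per-bin firing probability of a single detector
bins = 2_000_000    # assumed number of time bins

hits_a = rng.random(bins) < p
hits_b = rng.random(bins) < p

print("observed double-hit rate:", np.mean(hits_a & hits_b))  # ~2.5e-3
print("p**2 baseline           :", p ** 2)                    # 2.5e-3
```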

To make the mystery go away would require that you actually come up with a satisfactory local hidden variables explanation for the empirical facts. Whether or not there is a proof that no such explanation is possible, we have a mystery until we have such an explanation.

You have the double slit empirical facts wrong. There is no real (sub-Poissonian) exclusivity in detections as an empirical fact, as you seem to believe. See that thread about the recent experiment and the sleight of hand used to create the illusion of exclusivity (they used a separate setup to check for exclusivity and had cut the timing on the TAC to half of that required by the spec, which superficially eliminated most of the double detections).

Are you saying that there is a satisfactory local hidden variables model for quantum mechanics? Or are you saying that there is no conclusive proof that there is no such model? The latter is a very weak double-negative, and I certainly wouldn't consider that to be a "solution" to the mystery.

A "cheap" (Einstein's characterization) LHV model is the de Broglie-Bohm theory (it is LHV theory if you reject the no-go claims for LHV theories or the instantaneous wave function collapse since both claims lack empirical support).

A much better theory of that type is Barut's Self-Field Electrodynamics, which replicates the high-precision QED results (radiative corrections to order alpha^5, which was as far as they were known at the time in the early 1990s; Barut died in 1995). That was also discussed in the above thread; the related SFED survey post, written around the same time with detailed citations, is on sci.phys.research.

SFED starts with the coupled Maxwell-Dirac equations, treated as interacting classical EM and matter fields, which due to the self-interaction of the Dirac field evolve via non-linear PDEs. Barut shows an ansatz which turns this system into the conventional multiparticle QM (MPQM) phase-space representation, provided one drops the non-linear terms left over after the ansatz. If one retains the non-linearity and iterates the solutions, one gets QED radiative corrections valid to at least order alpha^5. Einstein and Schrodinger also worked on a similar idea (they apparently knew a variant of the same ansatz), except they were much more ambitious, seeking a unified non-linear theory including gravity, which Barut's SFED ignores.
 
  • #26
nightlight said:
That's mixing up apples and oranges. For people seeking to come up with the next physics at the Planck scale, such as 't Hooft or Wolfram, it is quite relevant whether cellular automata, Planckian networks, or some other distributed, local computational model could at least in principle replicate the empirical facts of existing physics.

So the mystery of the double slit is now physics at the Planck scale? No.

nightlight said:
There was no "erroneous statement" that anything you said pointed out. If I missed it you are welcome to provide a link to specific demonstration of such error you claim to have produced in the discussion.

It is in the stuff you have not answered. You claimed there is no demonstration of non-classicality. This is not even true for your "genuine" non-classicality, as there is antibunching.

The following claims are wrong:
- Measuring gn requires non-local filtering. For one, it is not filtering to develop the photon-number distribution into a series of moments, just as a Fourier expansion or a Taylor expansion is not filtering. For gn you instead expand in the orders of the distribution: mean, relative variance, skewness, kurtosis and so on. You can restrict the analysis to low orders because the relative variance is enough to demonstrate sub-Poissonian behaviour. Even the use of a beam splitter and several detectors is not necessary. As shown in the link in my last post, antibunching can be and was demonstrated using a single detector. One could even use detectors that resolve high photon numbers.

- You claimed there are classical counterexamples for actually measured non-classical g2-values. This is not correct. There are examples of simultaneously proposed pairs of light fields and detectors (usually those with a threshold) which would yield g2-values considered non-classical for other detectors. However, each kind of detector has its own limit of where unambiguous non-classicality starts. Showing that you can get g2 below 1 with an arbitrary detector is not a counterexample. What people would need to show is that the hypothetical detector used in the modeling is indeed a good model of the real detector used. Once this is done, one can start thinking about counterexamples. Unfortunately, all the detectors used for modeling in SED share the same Achilles heel: they cannot explain well how detectors fire, and have a problem with the firing-on-random-noise vs. not-firing-on-ZPF issue. See e.g. the Carmichael paper I cited earlier.
Antibunching in sub-Poissonian light as seen in experiments is a demonstration of non-classicality, and there still is no classical counterexample explaining the measurements.
 
  • #27
Cthugha said:
So the mystery of the double slit is now physics at the Planck scale? No.

No to the strawman which has no relation to what I said.

It is in the stuff you have not answered. You claimed there is no demonstration of non-classicality. This is not even true for your "genuine" non-classicality, as there is antibunching.

We're talking about genuine non-classicality, not about word play that applies the label 'non-classical' to phenomena which can be simulated via local computation (i.e. local realist models), such as the QO 'non-classicality' which can be modeled entirely by SED, as even Yariv admits in his textbook.

The only genuine non-classicality would be a BI violation, and that hasn't happened either (and no, the latest Giustina et al. paper making such a claim has an unmentioned loophole and anomalies beyond the locality loophole they acknowledge; similarly, the newer preprint from Kwiat's group on the same single-channel experiment shows the same anomaly, and its raw data don't even violate the J>=0 inequality).

The following claims are wrong:
- Measuring gn requires non-local filtering. For one, it is not filtering to develop the photon-number distribution into a series of moments, just as a Fourier expansion or a Taylor expansion is not filtering. For gn you instead expand in the orders of the distribution: mean, relative variance, skewness, kurtosis and so on.

You seem completely lost as to what Glauber's derivation is doing. It has nothing to do with an expansion of the "photon number distribution into a series of moments." He is expanding the transition probabilities for n detectors interacting with a general quantized EM field, expressed in the interaction picture, in powers of the interaction Hamiltonian for the general EM field. The starting expression (2.60) has n^n terms (it already includes implicitly the single-detector filtering from the earlier chapter via the operator-ordering rule); he then provides a rationale for eliminating most of them, ending with the n! terms of eq. (2.61), which is ~n^n / e^n, i.e. he retains a (1/e^n)-th fraction of the original terms.
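Just to check the counting quoted above (my own arithmetic, not part of Glauber's text): retaining n! of the original n^n terms is, by Stirling's formula, roughly a 1/e^n fraction, up to the slowly growing sqrt(2*pi*n) factor.

```python
# Arithmetic check of the "n! out of n**n terms" counting (illustration only).
import math

for n in (2, 4, 8, 16):
    fraction = math.factorial(n) / n ** n
    print(f"n={n:2d}: n!/n^n = {fraction:.3e}, e^-n = {math.exp(-n):.3e}, "
          f"ratio = {fraction / math.exp(-n):.2f} "
          f"(~ sqrt(2*pi*n) = {math.sqrt(2 * math.pi * n):.2f})")
```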

It doesn't actually matter at all what his rationale for the filtering (aka global term dropping) is; it is plainly non-local filtering on its face, since the elimination of events on one detector depends on the events on the other detectors.

The resulting Gn() is what is being measured, such as G4() in BI violation experiments. But that G4() is the result of non-local filtering which goes from 4^4=256 terms down to 4!=24 terms by simply requiring that some combinations of multi-detector events not be counted as contributions to the G4() counts.

Note that it is G4() which contains the QM prediction (the 'cos^2() law'). But the above non-local filtering (term dropping) involved in defining and counting G4() violates the independence assumption in Bell's model, i.e. one cannot require factorization of probabilities for remote events which are being filtered non-locally (the events contributing to the G4() counts are selected based on the results on all 4 detectors). Hence the predicted QM violation is entirely unremarkable, since any local classical model can do that, provided it can filter the results on detector A based on the results on detector B.
- You claimed there are classical counterexamples for actually measured non-classical g2-values. This is not correct.

Which experiment demonstrates g2 < 1 on non-filtered data? The latest one trying to do that was the one from 2004 by Thorn et al., discussed earlier, in which they contrive a separate setup to prove the exclusivity of detection events.

But for that setup (and that one only) they misconfigure the TAC timings (grossly violating the manufacturer's specified delays) just so that it cannot detect practically any coincidences, resulting in apparent exclusivity, i.e. g2 < 1 without subtractions/filtering (they went out of their way to emphasize that non-filtering feature of the experiment).

Despite being asked by several readers to clarify the timing configuration anomaly, they never stated what the actual settings and the resulting counts with corrected settings were, but merely kept repeating 'trust me, it was correct' (while acknowledging that it was different from the published one). Strangely, the wrong TAC setting is emphasized in several places in the published paper (and in their website instructions for other universities who wish to use the same demonstration for their students) as important for the experiment. No correction was ever issued. Hence, taking into account everything published and stated in emails, it was a cheap stage trick designed to wow the undergrads.

The mere fact, though, that they had to use such cheap gimmicks in 2004 to get g2<1 on raw counts should tell you that no one knows how to make g2<1 happen without filtering (well, the non-local filtering would defeat the very point of the demonstration; I can see students, after seeing how the data is being filtered, saying "duh, so what's the big deal about that?").

Note also that Glauber's derivation of Gn() shows that g2=0 doesn't happen without non-local filtering, hence no wonder they couldn't get it either. Namely, according to his term-dropping prescription in going from (2.60) -> (2.61), for G2() measured on a 1-photon state you have to drop all events in which both detectors or no detectors trigger (since that would be a 2-photon or 0-photon state which "we are not interested in"), so you are going from 2^2=4 terms to 2!=2 terms, i.e. you get perfect exclusivity by definition of G2(), hence g2=0 tautologically, but entirely via non-local filtering.

There are examples of simultaneously proposed pairs of light fields and detectors (usually those with a threshold) which would yield g2-values considered non-classical for other detectors. However, each kind of detector has its own limit of where unambiguous non-classicality starts. Showing that you can get g2 below 1 with an arbitrary detector is not a counterexample. What people would need to show is that the hypothetical detector used in the modeling is indeed a good model of the real detector used. Once this is done, one can start thinking about counterexamples. Unfortunately, all the detectors used for modeling in SED share the same Achilles heel: they cannot explain well how detectors fire, and have a problem with the firing-on-random-noise vs. not-firing-on-ZPF issue. See e.g. the Carmichael paper I cited earlier.
Antibunching in sub-Poissonian light as seen in experiments is a demonstration of non-classicality, and there still is no classical counterexample explaining the measurements.

For the purpose of a counter-example to an absolute non-classicality claim, it is irrelevant whether the SED detector models are generally valid or not. If someone claims the number 57 is prime, it suffices to tell them 'try dividing by 3' to invalidate the claim, even though the messenger may not have a math degree or know much math at all. The counter-example does all it needs to do to falsify the claim '57 is prime'.
 
Last edited:
  • #28
nightlight said:
The only genuine non-classicality would be a BI violation, and that hasn't happened either (and no, the latest Giustina et al. paper making such a claim has an unmentioned loophole and anomalies beyond the locality loophole they acknowledge; similarly, the newer preprint from Kwiat's group on the same single-channel experiment shows the same anomaly, and its raw data don't even violate the J>=0 inequality).

Giustina's does. Kwiat's does not. He even explicitly gives a classical model which models Giustina's results, but not his. It still has another loophole, though.

nightlight said:
You seem completely lost as to what Glauber's derivation is doing. It has nothing to do with an expansion of the "photon number distribution into a series of moments." He is expanding the transition probabilities for n detectors interacting with a general quantized EM field, expressed in the interaction picture, in powers of the interaction Hamiltonian for the general EM field. The starting expression (2.60) has n^n terms (it already includes implicitly the single-detector filtering from the earlier chapter via the operator-ordering rule); he then provides a rationale for eliminating most of them, ending with the n! terms of eq. (2.61), which is ~n^n / e^n, i.e. he retains a (1/e^n)-th fraction of the original terms.

Hmm, up to now I thought you were misguided, but you clearly do not understand Glauber's work. The interesting quantities are found in the hierarchy of normalized correlation functions. The first one just gives you the mean. g2 gives you the relative variance. g3 and g4 are proportional to skewness and kurtosis, respectively. This gives you info about the moments of the underlying distribution without running into the problem of detector efficiency in experiments. Most importantly, this really has nothing to do with what Glauber does between 2.60 and 2.61. There he just restricts his discussion to binary detectors in their ground state by neglecting all events where the same detector fires more than once.

Operator ordering is not a filtering mechanism; it just incorporates the fact that the detection of a photon destroys it. If you think it is filtering that single-photon detection events do not show up in two-photon counting rates, that is just odd. For detectors which do not destroy photons, you need to use a different ordering. Mandel/Wolf has a short chapter on this.

nightlight said:
It doesn't actually matter at all what his rationale for the filtering (aka global term dropping) is; it is plainly non-local filtering on its face, since the elimination of events on one detector depends on the events on the other detectors.

No. You still claim that without showing why it should be so.

nightlight said:
The resulting Gn() is what is being measured, such as G4() in BI violation experiments. But that G4() is the result of non-local filtering which goes from 4^4=256 terms down to 4!=24 terms by simply requiring that some combinations of multi-detector events not be counted as contributions to the G4() counts.

This is a strawman. For binary counters like SPADs, these terms do not exist. A binary detector cannot fire more than once at the same time. Therefore the terms where it fires 4 times at once are not considered.

nightlight said:
Which experiment demonstrates g2 < 1 on non-filtered data? The latest one trying to do that was the one from 2004 by Thorn et al., discussed earlier, in which they contrive a separate setup to prove the exclusivity of detection events.

While the Thorn paper is and has always been quite a bad one, physics has been way ahead of that and detectors have become way better. Antibunching has been demonstrated e.g. in:
Nature Communications 3, Article number: 628 (2012), doi:10.1038/ncomms1637; Phys. Rev. Lett. 108, 093602 (2012); Optics Express, Vol. 19, Issue 5, pp. 4182-4187 (2011); and a gazillion other papers.

nightlight said:
The mere fact, though, that they had to use such cheap gimmicks in 2004 to get g2<1 on raw counts should tell you that no one knows how to make g2<1 happen without filtering (well, the non-local filtering would defeat the very point of the demonstration; I can see students, after seeing how the data is being filtered, saying "duh, so what's the big deal about that?").

This was a paper published in a pretty low-level journal and was, even at that time, way behind the technical state of the art. I have seen enough raw data already to know that your claim is without substance.

nightlight said:
Note also that Glauber's derivation of Gn() shows that g2=0 doesn't happen without non-local filtering, hence no wonder they couldn't get it either. Namely, according to his term-dropping prescription in going from (2.60) -> (2.61), for G2() measured on a 1-photon state you have to drop all events in which both detectors or no detectors trigger (since that would be a 2-photon or 0-photon state which "we are not interested in"), so you are going from 2^2=4 terms to 2!=2 terms, i.e. you get perfect exclusivity by definition of G2(), hence g2=0 tautologically, but entirely via non-local filtering.

That is wrong. G2 consists of the two-photon detection events, so you only consider the events in which both detectors fire. For G2 you are always only interested in two-photon events. This does not at all depend on the field you try to measure. G2, however, does not tell you much about non-classicality. g2 does, and it also includes the other detection events in the normalization (g2 = <:n^2:>/<n>^2, where the colons denote normal ordering of the underlying field operators, and the mean photon number is found by taking into account ALL detection events). This is simply a detector-efficiency-invariant measurement of the relative variance of the photon-number distribution.
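For concreteness, here is a minimal sketch (my own illustration, not from the thread) of that formula, g2 = <:n^2:>/<n>^2 with <:n^2:> = <n(n-1)> for a single mode, evaluated for three textbook photon-number distributions; it lands at 0 for a single-photon state, 1 for coherent light and 2 for thermal light.

```python
# Sketch (illustration only): g2 = <n(n-1)>/<n>^2 for a photon-number
# distribution p(n), evaluated for Fock |1>, coherent and thermal light.
import numpy as np
from math import factorial

def g2_from_distribution(n, p):
    n = np.asarray(n, dtype=float)
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                       # renormalize the truncated tail
    return np.sum(n * (n - 1) * p) / np.sum(n * p) ** 2

n = np.arange(60)
mean_n = 2.0                              # assumed mean photon number

fock_one = np.where(n == 1, 1.0, 0.0)                                  # |1>
coherent = np.array([np.exp(-mean_n) * mean_n**int(k) / factorial(int(k))
                     for k in n])
thermal = mean_n**n / (1.0 + mean_n)**(n + 1)                # Bose-Einstein

print("Fock |1>  g2 =", g2_from_distribution(n, fock_one))   # 0.0
print("Coherent  g2 =", g2_from_distribution(n, coherent))   # ~1.0 (shot noise)
print("Thermal   g2 =", g2_from_distribution(n, thermal))    # ~2.0 (bunched)
```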

nightlight said:
For the purpose of a counter-example to an absolute non-classicality claim, it is irrelevant whether the SED detector models are generally valid or not. If someone claims the number 57 is prime, it suffices to tell them 'try dividing by 3' to invalidate the claim, even though the messenger may not have a math degree or know much math at all. The counter-example does all it needs to do to falsify the claim '57 is prime'.

No, that is not irrelevant. Otherwise you could simply use as a "detector" a device which throws away the input and puts out the same fixed, designed sequence of "detection events" leading to g2<1 every time, which is very similar to what the "counterexamples" actually do.
 
Last edited:
  • #29
Despite decades of analysis and thought about "the experiment", at the time Feynman was giving his lecture about it (Vol. 1, 37-4, An experiment with electrons), he mentioned that "This experiment has never been done in just this way."

It was around this time, in 1961, that Claus Jönsson became the first to do it with something other than light (electrons).
 
  • #30
In case others haven't come across it, there are macroscopic analogs to the single particle diffraction by Couder's group:

Single-Particle Diffraction and Interference at a Macroscopic Scale
http://people.isy.liu.se/jalar/kurser/QF/assignments/Couder2006.pdf

Or Video:
https://www.youtube.com/watch?v=W9yWv5dqSKk

There are even macroscopic bouncing droplet experiments showing analogs to tunneling, quantized orbits, non-locality, superposed state, etc.:

A macroscopic-scale wave-particle duality
http://www.physics.utoronto.ca/~colloq/Talk2011_Couder/Couder.pdf
 
Last edited:
  • #31
Cthugha said:
Giustina's does. Kwiat's does not. He even explicitly gives a classical model which models Giustina's results, but not his. It still has another loophole, though.

The "locality" loophole they emphasize as remaining is a red herring. You don't expect that switching randomly polarization angles will change anything much? It never did in other photon experiments. That's relevant only for experiment where the subsystems interact, which is not true for the optical photons.

The real loophole they don't mention is that they are misusing Eberhard's inequality on a state to which it doesn't apply. Their field state is a Poissonian mixture of PDC pairs (which is always the case for a laser source, e.g. see Saleh95), not the pure 2-photon state they claim to have. Short of having their full counts for all settings and detailed calibration data to reconstruct the real parameters more closely (such as the actual distribution in the mixture), the main indicator of this real loophole is a peculiar anomaly (a significant discrepancy with the QM prediction for the pure state) in their data, pointed out in this recent preprint.

As table 1, p. 2 shows, while almost all of their counts are very close to the QM prediction for the claimed pure state, the critical C(a2,b2) count is 69.79K, which is 5.7 times greater than the QM prediction of 12.25K. The problem is that the excess counts of XC = 57.54K are very close to half of their total violation J = -126.7K. Hence there was some source of accidental coincidences (multiple PDC pairs) which stood out in the lowest coincidence count but is buried in the other 3 coincidence counts, which are more than an order of magnitude larger. The trouble with that is that those remaining 3 coincidence counts C(a1,b1), C(a1,b2) and C(a2,b1) enter J with a negative sign, while C(a2,b2) enters with a positive sign. Hence accidental coincidences of similar magnitude, with effective net counts XC ~ 57.54K, in the other 3 coincidence values will yield a net effect on J of -3*XC + 1*XC = -2*XC ~ -115K, which is nearly the same as their actual violation.
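Just to make the arithmetic in the previous paragraph explicit (this only checks the numbers as stated in this post; the counts themselves are the post's quoted figures, not independently verified against the papers):

```python
# Arithmetic check of the excess-count argument as stated above (numbers are
# those quoted in the post, not independently verified).
measured_c22 = 69.79e3        # quoted C(a2,b2) count
predicted_c22 = 12.25e3       # quoted QM prediction for the pure state
reported_J = -126.7e3         # quoted total violation J

xc = measured_c22 - predicted_c22
print("excess XC              :", xc)                            # ~57.5e3
print("measured/predicted     :", measured_c22 / predicted_c22)  # ~5.7
# C(a1,b1), C(a1,b2), C(a2,b1) enter J with minus signs, C(a2,b2) with plus:
print("net effect -3*XC + XC  :", -3 * xc + xc)                  # ~-115e3
print("reported J             :", reported_J)
```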

Unfortunately they don't provide the measured counts for all 4 settings or the respective calibrations for each, hence one can't get closer to what went on there. But Eberhard's model (Eb93) is loophole-free only under the assumption of at most a single photon pair per coincidence window, detected fully or partially but not in excess (such as the excess their anomaly shows), i.e. his event space doesn't include double triggers on one side, such as oo, oe, eo and ee on side A, but only o, e and u (undetected) on each side. Hence instead of the 3x3 matrix per setting he considers, in the general case there is a 7x7 matrix per setting. In a two-channel experiment these double-trigger cases would be detected and subtracted from the accepted counts (leaving a loophole of missing pairs).

Conveniently, Giustina et al. chose a single-channel experiment, in effect throwing away half the data available for measurement (the 2nd detector on each side), so that the only effect of multiple pairs would be those double triggers and two-photon detections on the remaining detector appearing as the single-detection case o, except with artificially amplified detection rates.

Of course, in a single-channel experiment this excess would still show up as a discrepancy from the QM prediction in the coincidence counts, and it does so on the count most sensitive to any additions, the one that is an order of magnitude lower than the others, C(a2,b2). It would also artificially amplify the ratio of coincidences to singles on either side (by a factor of ~(2-f) > 1 for a detector of efficiency f and 2 PDC pairs), which is exactly what one wants in order to obtain the appearance of a violation. Of course, Eb93's expression for J is not valid for multiple pairs, hence any such "amplification" would invalidate the claim if disclosed. But they didn't even give the basic counts for all 4 settings or the backgrounds, let alone a calibration with 2 channels to quantify the excesses (and subtract them).

The Kwiat group's replication provides a few more counts for each setting, but fails to violate J >= 0 altogether on the reported counts. It also shows exactly the same anomaly in C(a2,b2) as the other experiment.

What's obvious is that they have both tapped into the new loophole peculiar to the Eb93 J >= 0 inequality in single-channel experiments, where they get a hidden effective "enhancement" (in the CH74 terminology of loopholes), provided they keep the full data private, don't disclose the full background counts and ignore the anomaly on the lowest coincidence count, just as they did.

While they could fix the experiment without generalizing the Eb93 event space to 7x7, by merely adding the 2nd channel and subtracting the events which fall outside of Eb93's 3x3 event space for which his J expression was derived, that would reclassify these events into the background counts within the Eb93 model, and Eb93 is extremely sensitive to even the smallest background levels (see Fig 1 in Eb93; at their max efficiencies of 75% the max allowed background is ~0.24%). Their excess of ~57K taken as added background would push the required detector efficiency into the 85% range according to Eb93 Table II, which is well beyond their 72-75% efficiencies. Note also that they don't provide actual background levels at all, but merely hint at them indirectly via the visibility figure, which shows that the actual background could be as large as 2.5%; that further boosts the required detector efficiency well beyond what they had, independently of the multiple-pair boost mentioned above.

I suppose they can run with this kind of stage magic for a little while, cashing in on the novelty of the loophole, until more complete data emerge or someone publishes an Eb93 update for the 7x7 event space and points out the inapplicability of their expression for J to the actual state they had (for which they don't provide any experimental data either).

Even on its face, from a bird's-eye view: imagine they first measured both channels and got data with the usual detection-efficiency loophole (characteristic of a 2-channel experiment); to fix it, they just turn the efficiency of half the detectors to 0, i.e. drop half the data, and suddenly on the remaining data the detector-efficiency loophole goes away. It is a bit like a magic show even at that level -- you have 4 detectors with much too low efficiency, so what do you do? Just turn the efficiency down to 0 on half of them and voila, the low-efficiency problem is solved. If only the real world worked like that -- you have 4 credit cards with limits too low to buy what you want at the store, so you just go home, cut two of them up, go back to the store, and suddenly you can buy with the two remaining cards what you couldn't before.

Hmm, up to now I thought you were misguided, but you clearly do not understand Glauber's work. The interesting quantities are found in the hierarchy of normalized correlation functions. The first one just gives you the mean. g2 gives you the relative variance. g3 and g4 are proportional to skewness and kurtosis, respectively. This gives you info about the moments of the underlying distribution without running into the problem of detector efficiency in experiments. Most importantly, this really has nothing to do with what Glauber does between 2.60 and 2.61. There he just restricts his discussion to binary detectors in their ground state by neglecting all events where the same detector fires more than once.

You seem to keep missing the point and going off way downstream, to applications of Gn() in his lectures, from the eq. (2.60) -> (2.61) transition I was talking about.

The point that (2.60) -> (2.61) makes is that Gn() does not represent the independent, raw event counts taken on n detectors, as QM MT and Bell assume in his derivation of BI (by requiring factorization of probabilities corresponding to C4()), but rather Gn() are non-locally filtered quantities extracted by counting into Gn() only certain combinations of n events, while discarding all other combinations of n events.

This operation (or transition from all event counts in 2.60 to filtered counts in 2.61), whatever its practical usefulness and rationale, reduces n^n terms to n! terms and is explicitly non-local.

Hence, G4() is not what the elementary QM MT claims it to be in BI derivations -- simply the set of independent counts on 4 detectors, but rather it is a quantity extracted via non-local subtractions (as is evident in any experiment). That of course invalidates the probability factorization requirement used in BI derivation since it is trivial to violate BI in a classical theory if one can perform the non-local subtractions (exclude or include events on detector A based on events on remote detector B).

You keep going into tangents about applied usefulness and rationales for term dropping, which are all good and fine but have nothing to do with the point being made -- Gn() is not what elementary QM MT assumes it to be -- the mere combination of n local, independent detection events. Gn() is obviously not that kind of quantity but a non-locally extracted function.
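(The n^n -> n! reduction above is easy to see by brute-force counting. The toy sketch below enumerates every assignment of n absorption events to n detectors and keeps only those in which each detector absorbs exactly one photon; it illustrates only the counting of terms, not the physics behind dropping them.)

```python
from itertools import product
from math import factorial

def term_counts(n: int):
    # all assignments of n absorption events to n detectors: n^n combinations
    all_terms = list(product(range(n), repeat=n))
    # kept terms: each detector absorbs exactly one photon (a permutation)
    kept = [t for t in all_terms if len(set(t)) == n]
    return len(all_terms), len(kept)

for n in (2, 3, 4):
    total, kept = term_counts(n)
    print(f"n={n}: all terms = {total} (= n^n), kept = {kept} (= n! = {factorial(n)})")
```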

Operator ordering is not a filtering mechanism but just includes the fact that the detection of a photon destroys it. If you think that it is filtering that single-photon detection events do not show up in two-photon counting rates, this is just odd. For detectors which do not destroy photons, you need to use a different ordering. Mandel/Wolf has a short chapter on this.

Of course it is, it removes the contribution of vacuum photons i.e. it operationally corresponds to subtraction of background/dark counts due to vacuum/ZPF photons.

No. You still claim that without showing why it should be so.

The response is the point above -- the removal of the (n^n - n!) product terms is a non-local operation since the factors in each term represent events "we are not interested in" on remote locations. The usefulness and application of procedure or any other rationale are irrelevant for the point being made i.e. what kind of quantity Gn() is -- it is clearly not the result of just taking together counts taken independently at n locations since it is computed by taking out by hand the (n^n - n!) product term contributions.

Are you actually taking the opposite position -- that Gn() is a mere combination count of n events, each obtained locally and independently (i.e. without any regard for the events on the other n-1 locations), as QM MT takes it to be (at least for derivation of BI)?

How can that position be consistent with the subtractions corresponding to discarded (n^n-n!) terms, the factors of which are specific combinations of n absorption events at n locations? The QM independence assumption obviously doesn't hold for the Gn() defined in QED as non-locally filtered function.

While the Thorn paper is and has always been quite a bad one, physics has been way ahead of that and detectors have become way better. Antibunching has been demonstrated e.g. in:
Nature Communications 3, Article number: 628 (2012) doi:10.1038/ncomms1637, Phys. Rev. Lett. 108, 093602 (2012), Optics Express, Vol. 19, Issue 5, pp. 4182-4187 (2011) and a gazillion of other papers.

I don't have access to paywalled physics papers at my current job (I work as a researcher in the computer industry, not in academia). If you attach a copy I can check it out. If it doesn't explicitly provide non-filtered counts, or is silent altogether on the subject of loopholes (i.e. how exactly it gets around and defeats the ZPF-based classical models), you don't need to bother; I have seen plenty of that kind that preach to the choir.

That is wrong. G2 consists of the two-photon detection events

G2() or Gn() are derived for a general field, not just for the n-photon number state in the case of Gn(), as you seem to suggest above? The n in Gn() has nothing to do with the field state it is averaged over (which can be any state, such as a photon number state, coherent state, etc).

, so you only consider the events in which both detectors fire. For G2 you are always only interested in two-photon events. This does not at all depend on the field you try to measure. G2, however, does not tell you much about non-classicality.
G2() is 0 for single-photon states (e.g. Glauber's lectures, sec. 2.6, pp. 51-52), which is what Thorn et al. were claiming to demonstrate experimentally (see their paper; note that they use the normalized correlation function g2 rather than the non-normalized G2 discussed here). As noted, this is tautologically so, by virtue of the subtraction of the wrong number of photons, hence there is nothing to demonstrate experimentally. Their entire experiment is thus based on a misreading of what QED predicts. There is no prediction of G2() = 0 on non-filtered data, as they imagine, since G2() is by its basic definition a non-locally filtered function (extracted from all events on 2 detectors via non-local subtractions, cf. 2.60 -> 2.61).
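(For reference, the textbook prediction being argued about can be written down in a few lines: g2(0) = <n(n-1)>/<n>^2, which vanishes for a one-photon Fock state and equals 1 for a coherent state. The numpy sketch below just evaluates that formula on a truncated photon-number distribution; it says nothing about how the counts are collected in a real experiment, which is the actual point of contention here.)

```python
import numpy as np
from math import factorial

N = 30                                   # Fock-space truncation
n_vals = np.arange(N)

def g2(p):
    """g2(0) = <n(n-1)> / <n>^2 for a photon-number distribution p[n]."""
    mean_n = np.sum(p * n_vals)
    return np.sum(p * n_vals * (n_vals - 1)) / mean_n**2

# one-photon Fock state |1>
p_fock = np.zeros(N)
p_fock[1] = 1.0

# coherent state with <n> = 2 (Poissonian statistics), truncated and renormalized
mean = 2.0
p_coh = np.array([np.exp(-mean) * mean**k / factorial(k) for k in range(N)])
p_coh /= p_coh.sum()

print(f"g2 for |1>:             {g2(p_fock):.3f}")   # 0.000
print(f"g2 for coherent state:  {g2(p_coh):.3f}")    # ~1.000
```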

No, that is not irrelevant. Otherwise you could simply use a device as a detector which throws away the input and simply puts out the same fixed sequence of designed "detection events" which lead to g2<1 every time. Which is very similar to what the "counterexamples" actually do.

That's ridiculous even for a strawman, since such a detector would fail to show the critical interference aspects of the same setup. The combination of the exclusivity of g2<1 and interference (if one chooses to remove the detector from each path and put it at some point of overlap) is what makes the claim non-classical/paradoxical. The exclusivity on its own or the interference on its own are trivially classical properties (e.g. of classical particles or waves). The SED detectors do function, although they may show some characteristics that differ from those of a particular physical detector.

For an experimental absolute non-classicality claim (i.e. 'this set of data cannot be generated by any local model'), as long as the local model reproduces the data provided (including exclusivity + interference in the beam splitter/double slit setup, since it is the combination which is non-classical), it falsifies the absolute impossibility claim by definition. It doesn't matter how it gets them, as long as it does so via local computations and is capable of reproducing the class of experimental data claimed to be impossible. After all, no one knows how nature computes it all anyway. The algorithms that make up our current physics are merely transient, coarse-grained approximations of some aspects of the overall pattern, which includes life forms and intelligent beings trying to figure it out, being computed by the chief programmer of the universe.
 
Last edited:
  • #32
nightlight said:
The "locality" loophole they emphasize as remaining is a red herring. You don't expect that switching randomly polarization angles will change anything much? It never did in other photon experiments. That's relevant only for experiment where the subsystems interact, which is not true for the optical photons.

Sorry, I am still not interested in BI, but maybe your point is of interest to others.

nightlight said:
You seem to keep missing the point and going off way downstream, to applications of Gn() in his lectures, from the eq. (2.60) -> (2.61) transition I was talking about.

The point that (2.60) -> (2.61) makes is that Gn() does not represent the independent, raw event counts taken on n detectors, as QM MT and Bell assume in his derivation of BI (by requiring factorization of probabilities corresponding to C4()), but rather Gn() are non-locally filtered quantities extracted by counting into Gn() only certain combinations of n events, while discarding all other combinations of n events.

This is still wrong. For a system of n BINARY detectors which can only distinguish between photons present or no photons present, the hierarchy of all orders of Gn takes all detections into account. What is not considered is the event of (for example) one single detector being hit by four photons simultaneously. This is trivial, as the detector will give the same response as if it were only hit by one photon, and there is no four-photon detection event. This is also why one always needs a good theory of the detector at hand. The events a detector reacts to of course also define what the threshold of non-classical behavior is. This is also why one needs to do all the math again and get a new criterion for non-classicality when using a different kind of detector. Physics is already WAY beyond these simple binary detectors, although they are still widely used because they are well characterized. In practice, e.g. when using SPADs, one must of course operate in such a regime that the probability of additional photons arriving during detector dead time - the photons which cannot make the detector fire - is small. But that can be quantitatively assessed. It changes the bound of where non-classical behavior starts. This is why - I stress it again - detector theory is of highest importance.
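(A one-line way to state the 'binary detector' point: a SPAD-style detector answers only click or no click, so k photons in the same window give the same single click as one photon, with probability 1-(1-eta)^k. A minimal sketch, using an assumed per-photon efficiency eta:)

```python
def p_click_binary(k: int, eta: float) -> float:
    """Probability that a binary (click/no-click) detector fires when k photons
    arrive in one window, each detected independently with efficiency eta."""
    return 1.0 - (1.0 - eta) ** k

eta = 0.6
for k in range(5):
    print(f"{k} photons -> p(click) = {p_click_binary(k, eta):.3f}")
# 0 -> 0.000, 1 -> 0.600, 2 -> 0.840, 3 -> 0.936, 4 -> 0.974
```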

nightlight said:
This operation (or transition from all event counts in 2.60 to filtered counts in 2.61), whatever its practical usefulness and rationale, reduces n^n terms to n! terms and is explicitly non-local.

Non-local? If you insist on using more than one detector and have them spacelike separated, you can introduce non-local stuff if you want to. This is still irrelevant for antibunching, which can even be seen with one detector alone, even a photon-number resolving one. For photon-number resolving detectors you do NOT reduce the terms. If you detect 4 photons, then these give you 6 two-photon detections. See, for example, Optics Express, Vol. 18, Issue 19, pp. 20229-20241 (2010) (free) for an example of how such a detector works. That paper discusses only classical stuff, but it is a reasonable way of getting into it. It even includes a picture of the bare counts that you seem to be so interested in.
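(The '4 photons give 6 two-photon detections' remark is just C(n,2): with a photon-number-resolving record, every window with n counts contributes n(n-1)/2 pairs, so no terms are discarded. A trivial check:)

```python
from math import comb

def pair_count(n_photons: int) -> int:
    """Number of two-photon (pair) detection events contained in a single
    window where a photon-number-resolving detector registers n photons."""
    return comb(n_photons, 2)      # n(n-1)/2

for n in range(1, 6):
    print(f"{n} photons -> {pair_count(n)} two-photon events")
# 1 -> 0, 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10
```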

nightlight said:
You keep going into tangents about applied usefulness and rationales for term dropping, which are all good and fine but have nothing to do with the point being made -- Gn() is not what elementary QM MT assumes it to be -- the mere combination of n local, independent detection events. Gn() is obviously is not that kind of quantity but a non-locally extracted function.

Only if you insist on using only technology older than - say - 2000, maybe 1995 (and even there I do not agree). The Gn() derived by Glauber in that paper is ONLY and explicitly for binary detectors - a good model for SPADs. The terms dropped are dropped because the detector is not responsive to them. That is all.

nightlight said:
Of course it is, it removes the contribution of vacuum photons i.e. it operationally corresponds to subtraction of background/dark counts due to vacuum/ZPF photons.

No, the normal ordering forces you to commute operators once, so that <:n^2:> becomes (for equal times) <n(n-1)>. Normal ordering just takes the destruction of the first photon upon detection into account. You would not use normal ordering for non-destructive weak photon number measurements or even photon amplifiers.
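(A small matrix check of the <:n^2:> = <n(n-1)> statement, using a truncated ladder operator in plain numpy; only low Fock states are tested so the truncation does not matter.)

```python
import numpy as np

N = 10                                            # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator: a|k> = sqrt(k)|k-1>
ad = a.T                                          # creation operator a^dagger (real matrix)
n_op = ad @ a                                     # number operator

normally_ordered = ad @ ad @ a @ a                # :n^2: = a^dagger a^dagger a a

for k in range(4):                                # check on Fock states |0>..|3>
    fock = np.zeros(N)
    fock[k] = 1.0
    lhs = fock @ normally_ordered @ fock          # <k| :n^2: |k>
    rhs = fock @ (n_op @ n_op - n_op) @ fock      # <k| n(n-1) |k>
    print(f"|{k}>:  <:n^2:> = {lhs:.1f},   <n(n-1)> = {rhs:.1f}")
```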

nightlight said:
Are you actually taking the opposite position -- that Gn() is a mere combination count of n events, each obtained locally and independently (i.e. without any regard for the events on the other n-1 locations), as QM MT takes it to be (at least for derivation of BI)?

How can that position be consistent with the subtractions corresponding to discarded (n^n-n!) terms, the factors of which are specific combinations of n absorption events at n locations? The QM independence assumption obviously doesn't hold for the Gn() defined in QED as non-locally filtered function.

Gn is a quantity which is designed to be related to experiments. Therefore, Gn is whatever the detector in question allows it to be. If I use a single photon-number resolving detector to investigate antibunching, that is pretty local. If I distribute 30 SPADs all over the world, it is not.

nightlight said:
I don't have access to paywalled physics papers at my current job (I work as a researcher in the computer industry, not in academia). If you attach a copy I can check it out. If it doesn't explicitly provide non-filtered counts, or is silent altogether on the subject of loopholes (i.e. how exactly it gets around and defeats the ZPF-based classical models), you don't need to bother; I have seen plenty of that kind that preach to the choir.

They certainly do not explicitly discuss ZPF based models. Nobody does that. They show the directly measured g2 without background subtraction and other corrections for accidental counts. The NComms might be freely available. A version of the PRL might be on the ArXiv.

nightlight said:
G2() or Gn() are derived for a general field, not just for the n-photon number state in the case of Gn(), as you seem to suggest above? The n in Gn() has nothing to do with the field state it is averaged over (which can be any state, such as a photon number state, coherent state, etc).

Ehm, no. You suggested that Gn depends on the field by saying "for G2() measured on 1 photon state, you have to drop all events in which both detectors or no detectors trigger (since that would be a 2 photon or 0 photon state which "we are not interested in")". For G2 using SPADs you have a look at when both detectors fire (which is usually never for a single photon state). You also do not really drop the other terms as you need them for normalization. For G2 using other detectors you have a look at photon pairs or two-photon combinations, depending on what your detector does.

nightlight said:
G2() is 0 for single-photon states (e.g. Glauber's lectures, sec. 2.6, pp. 51-52), which is what Thorn et al. were claiming to demonstrate experimentally (see their paper; note that they use the normalized correlation function g2 rather than the non-normalized G2 discussed here).

Of course they use g2 instead of G2. How would you know that your variance is below that of a Poissonian distribution without having the proper normalization? G2 alone does not indicate non-classical behavior. g2 does.

nightlight said:
As noted, this is tautologically so, by virtue of the subtraction of the wrong number of photons, hence there is nothing to demonstrate experimentally. Their entire experiment is thus based on a misreading of what QED predicts. There is no prediction of G2() = 0 on non-filtered data, as they imagine, since G2() is by its basic definition a non-locally filtered function (extracted from all events on 2 detectors via non-local subtractions, cf. 2.60 -> 2.61).

For a single photon state G2 is always predicted to be 0, no matter what you do. You detect the photon once. It cannot be detected again. More importantly g2 is predicted to be 0, too. Even in the presence of noise, it will be <1 as long as noise is not dominant. This stays true, if you do not drop terms in Gn.
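(A rough check of the 'noise not dominant' statement, under the assumption that the background occupies a mode statistically independent of the single-photon signal, so the normally ordered moments simply add: <:n^2:> = g2_s<n_s>^2 + g2_b<n_b>^2 + 2<n_s><n_b>. With a thermal background (g2_b = 2) the resulting g2 stays below 1 exactly as long as the background mean stays below the signal mean.)

```python
def g2_mixture(n_signal: float, n_bg: float,
               g2_signal: float = 0.0, g2_bg: float = 2.0) -> float:
    """g2 of two statistically independent fields added together, using the
    additivity of normally ordered moments:
    <:n^2:> = g2_s*<n_s>^2 + g2_b*<n_b>^2 + 2*<n_s>*<n_b>."""
    num = g2_signal * n_signal**2 + g2_bg * n_bg**2 + 2 * n_signal * n_bg
    return num / (n_signal + n_bg) ** 2

# single-photon signal (<n>=1, g2=0) plus thermal background (g2=2)
for n_bg in (0.0, 0.1, 0.5, 1.0, 2.0):
    print(f"background mean {n_bg:>3}: g2 = {g2_mixture(1.0, n_bg):.3f}")
# 0.000, 0.182, 0.667, 1.000, 1.333 -- g2 < 1 only while the noise stays sub-dominant
```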

nightlight said:
That's ridiculous even for a strawman, since such a detector would fail to show the critical interference aspects of the same setup. The combination of the exclusivity of g2<1 and interference (if one chooses to remove the detector from each path and put it at some point of overlap) is what makes the claim non-classical/paradoxical. The exclusivity on its own or the interference on its own are trivially classical properties (e.g. of classical particles or waves). The SED detectors do function, although they may show some characteristics that differ from those of a particular physical detector.

But this is what the hypothetical detectors in the "counterexamples" do. They predict an insane g1, which is what governs the interference. I must emphasize again that Gn always goes along with the detector in question. Of course you can design detectors that give gn<1 for classical light fields. That is trivial. It also completely misses the point that g2<1 is a non-classicality criterion when using a certain kind of detector. The task is to model a classical light field that beats the assumed non-classicality criterion for a given detector, not to design a detector that gives a response having a different non-classicality criterion.

nightlight said:
After all, no one knows how nature computes it all anyway. The algorithms that make up our current physics are merely transient, coarse-grained approximations of some aspects of the overall pattern, which includes life forms and intelligent beings trying to figure it out, being computed by the chief programmer of the universe.

I disagree. I am quite sure the NSA already eavesdropped on the chief programmer.
 
  • #33
Cthugha said:
Sorry, I am still not interested in BI, but maybe your point is of interest to others.

You brought it up and repeated their claim.

Cthugha said:
This is still wrong. For a system of n BINARY detectors which can only distinguish between photons present or no photons present, the hierarchy of all orders of Gn takes all detections into account. What is not considered is the event of (for example) one single detector being hit by four photons simultaneously.

But if you have an n-photon state, then a 2-photon absorption at location 1 implies there will be a zero-photon absorption at some other location 2 (by the pigeonhole principle), i.e. the combinations of detections eliminated include a double hit on 1 and no hit on 2. That is by definition a non-local filtering: even though detector 1 is treated as a binary detector, that trigger event on detector 1, which appears as a perfectly valid trigger on 1, doesn't contribute to Gn() since there was also a missing trigger on the remote detector 2.

Hence the contributions counted in Gn() are specific global combinations of n triggers/non-triggers. In other words it is obvious that Gn() is not a simple collation of n independent counts from n locations but it is a non-locally filtered function which keeps or rejects particular global combinations of triggers.

Whatever you wish to call such a procedure (since you keep quibbling about the term 'non-local filtering'), one cannot apply the probability factorization rule to the resulting Gn() functions, and hence cannot derive BI by assuming that the correlation functions are merely collated counts from independently obtained events at n locations. Glauber's filtering procedure 2.60 -> 2.61, which removes by hand (n^n - n!) of the terms that the full dynamical evolution yields in eq. 2.60 for all events at n locations (whether "we are interested in" them or not is irrelevant for this point), results in the Gn() of (2.61) which is tautologically non-classical or non-local, not because of any strange physical non-locality or non-classicality, but by the mere choice to drop the terms which depend on detection events at remote (e.g. space-like separated) locations.

The predictions of the unmodified eq. (2.60), i.e. the raw unfiltered counts, cannot show anything non-local (they all evolve via local field equations and all space-like separated operators commute, hence that's already a plain local-realistic model of the setup all by itself). But those of (2.61) certainly can appear to do so, if one ignores that they are not predictions about n independent events at n locations but about a globally filtered subset of all events. Hence, to replicate the unfiltered counts of (2.60) one can legitimately require from a local realist model (one other than the field equations, which are also a local realist model) the independence of detections and thus probability factorization; but for those of (2.61), hence of Gn(), the factorization requirement is illegitimate (since any classical model can violate factorization if allowed to do similar non-local subtractions).

Cthugha said:
No, the normal ordering forces you to commute operators once, so that <:n^2:> becomes (for equal times) <n(n-1)>. Normal ordering just takes the destruction of the first photon upon detection into account. You would not use normal ordering for non-destructive weak photon number measurements or even photon amplifiers.

The normal ordering does remove vacuum photon contributions e.g. check Glauber's book, pdf page 23, sec 1.1, the expression <0| E^2 |0> != 0 where he explicitly calls these "zero point oscillations" (ZPF in SED). The operational counterpart to this normal ordering requirement is the subtraction of dark counts from contributions to Gn() counts.

While these are all fine procedures as far as QO applications and the extraction of Gn() from count data go, one has to be aware that this is done, and know the exact values, in non-locality/non-classicality experiments. One cannot assume that the reported Gn() are obtained without such subtractions and legitimately require that any local/classical model must replicate such an "ideal" subtraction-free case, when these subtractions are mandated by the definition of Gn() (via the normal ordering, along with the other subtractions introduced in the n-photon chapter) and included in the standard QO counting procedures.

Cthugha said:
Ehm, no. You suggested that Gn depends on the field by saying "for G2() measured on 1 photon state, you have to drop all events in which both detectors or no detectors trigger (since that would be a 2 photon or 0 photon state which "we are not interested in")".

I am saying that the 'n' in Gn() is unrelated to the photon number of the field state, as you seemed to conflate in a few places. The value of the Gn() function depends of course on the field state. But the 'n' in Gn() does not imply anything about what kind of field state rho you can use for the expectation value.

Cthugha said:
For a single photon state G2 is always predicted to be 0, no matter what you do. You detect the photon once. It cannot be detected again. More importantly g2 is predicted to be 0, too. Even in the presence of noise, it will be <1 as long as noise is not dominant. This stays true, if you do not drop terms in Gn.

The Gn() is by definition filtered (i.e. it is defined after dropping the terms 2.60 -> 2.61, plus requiring normal ordering which drops additional counts). There is no unfiltered Gn().

There are unfiltered independent/raw counts from n locations, but that's a different quantity from Gn(), since the raw counts correspond to eq. (2.60), before the subtractions which define Gn from (2.61) onward. Once you start picking whether to drop a result on detector A based on whether it combines as a contribution to Gn() in the "right" or "wrong" way with a result on the remote detector B, all classicality is out the window. You can get g2<1 or any other violation of 'classicality' just for the taking. But there is nothing genuinely non-local or non-classical about any such 'non-classicality' conjured via definitions.
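(A toy illustration of the kind of selection described in this post, purely to show how conditioning on the remote outcome changes the statistic; it is not a model of any actual experiment or of Glauber's derivation. Two independent classical 'detectors' give g2 ~ 1 on the raw record; discard every window in which both fired, and the same ratio computed on what remains is 0 by construction.)

```python
import random

random.seed(1)
trials = 100_000
p = 0.3                          # independent click probability per detector per window

record = [(random.random() < p, random.random() < p) for _ in range(trials)]

def g2_ratio(rec):
    pa  = sum(a for a, _ in rec) / len(rec)
    pb  = sum(b for _, b in rec) / len(rec)
    pab = sum(a and b for a, b in rec) / len(rec)
    return pab / (pa * pb)

print(f"raw record:      g2 ~ {g2_ratio(record):.3f}")    # ~1 for independent clicks

# discard every window in which both detectors fired ("events we are not interested in")
filtered = [(a, b) for a, b in record if not (a and b)]
print(f"filtered record: g2 = {g2_ratio(filtered):.3f}")  # 0 by construction
```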

Cthugha said:
I disagree. I am quite sure the NSA already eavesdropped on the chief programmer.

That's pretty funny. Although I think the computing technology of the CPoU is about 10^80 times better than NSA's best: assuming Planck-scale gates, which in linear scale are at least 10^20 times smaller than gates made from protons (the smallest we can possibly go with our gates), that means 10^60 times more gates per unit volume than their best, plus a clock that is 10^20 times faster (due to 10^20 times smaller distances), yielding a net edge of 10^80 times more powerful computing gear for the CPoU. How would they hope to outwit someone that is at least 10^80 times quicker/smarter? Plus he's got a super-smart bug in every atom of every computer or communication device they have.
 
  • #34
nightlight said:
But if you have an n-photon state, then a 2-photon absorption at location 1 implies there will be a zero-photon absorption at some other location 2 (by the pigeonhole principle), i.e. the combinations of detections eliminated include a double hit on 1 and no hit on 2. That is by definition a non-local filtering: even though detector 1 is treated as a binary detector, that trigger event on detector 1, which appears as a perfectly valid trigger on 1, doesn't contribute to Gn() since there was also a missing trigger on the remote detector 2.

Not even though it is a binary detector, but because it is a binary detector. As already explained several times, these terms contribute to Gn if you use photon-number resolving detectors. Also, the missed events are not relevant for photon number states. You know what to expect for classical states and how many events you miss, as the "best" (lowest-noise) result you can get is that of statistically independent photons (a second-order coherent beam). The number of those events is governed by the variance and the mean photon number of your light field, so you can find out whether your variance is below that of a coherent field, even when using binary detectors.

nightlight said:
Hence the contributions counted in Gn() are specific global combinations of n triggers/non-triggers. In other words it is obvious that Gn() is not a simple collation of n independent counts from n locations but it is a non-locally filtered function which keeps or rejects particular global combinations of triggers.

Just for binary detectors. However, although that is the favorite point of BI-deniers, it is irrelevant for antibunching.

nightlight said:
Whatever you wish to call such a procedure (since you keep quibbling about the term 'non-local filtering'), one cannot apply the probability factorization rule to the resulting Gn() functions, and hence cannot derive BI by assuming that the correlation functions are merely collated counts from independently obtained events at n locations. Glauber's filtering procedure 2.60 -> 2.61, which removes by hand (n^n - n!) of the terms that the full dynamical evolution yields in eq. 2.60 for all events at n locations (whether "we are interested in" them or not is irrelevant for this point), results in the Gn() of (2.61) which is tautologically non-classical or non-local, not because of any strange physical non-locality or non-classicality, but by the mere choice to drop the terms which depend on detection events at remote (e.g. space-like separated) locations.

That is still irrelevant for antibunching.

nightlight said:
The normal ordering does remove vacuum photon contributions e.g. check Glauber's book, pdf page 23, sec 1.1, the expression <0| E^2 |0> != 0 where he explicitly calls these "zero point oscillations" (ZPF in SED). The operational counterpart to this normal ordering requirement is the subtraction of dark counts from contributions to Gn() counts.

There is no subtraction of dark counts as (to quote from Glauber's book) "We may verify immediately from Eq. (1.12) that the rate at which photons are detected in the empty, or vacuum, state vanishes.".

nightlight said:
While these are all fine procedures as far as QO applications and the extraction of Gn() from count data go, one has to be aware that this is done, and know the exact values, in non-locality/non-classicality experiments. One cannot assume that the reported Gn() are obtained without such subtractions and legitimately require that any local/classical model must replicate such an "ideal" subtraction-free case, when these subtractions are mandated by the definition of Gn() (via the normal ordering, along with the other subtractions introduced in the n-photon chapter) and included in the standard QO counting procedures.

Classical models must be designed for a specific detector. As I already pointed out several times, the terms you keep in Gn are specific to the detector, too.

nightlight said:
I am saying that the 'n' in Gn() is unrelated to the photon number of the field state, as you seemed to conflate in a few places.

Ehm, no. I never said that.

nightlight said:
The value of the Gn() function depends of course on the field state. But the 'n' in Gn() does not imply anything about what kind of field state rho you can use for the expectation value.

Of course not.

nightlight said:
The Gn() is by definition filtered (i.e. it is defined after dropping the terms 2.60 -> 2.61, plus requiring normal ordering which drops additional counts). There is no unfiltered Gn().

Again: This is the definition of Gn for BINARY detectors only. You seem to imply that people are not aware of the limitations of their setups. This is incorrect. A good treatment of n binary detectors is for example given in Phys. Rev. A 85, 023820 (2012) (http://arxiv.org/abs/1202.5106), which explicitly discusses that the non-classicality thresholds for one detector may be very different from those for a different detector. They also very briefly hint at Bell measurements in a side note. Explicit non-classicality criteria and their verification have been presented in follow-up publications. My point stands: Knowing your detector is of critical importance.

nightlight said:
You can get g2<1 or any other violation of 'classicality' just for taking. But there is nothing genuinely non-local or non-classical about any such 'non-classicality' conjured via definitions.

That is still off. Of course g2<1 demonstrates non-classicality (for some detectors). Not non-locality though, but that is not what antibunching is about. Is there any reason why you did not respond at all to the important part of my last post in which I explained why you do not drop terms in Gn when your detector allows you to do so?
 
Last edited:
  • #35
nightlight said:
If there is absolute LHV prohibition by the empirically established facts, there is mystery in double slit. Otherwise the empirical facts of double slit experiment standing on their own are perfectly compatible with local model.

As I said, "compatible with a local model" does not mean that there is no mystery. Is there a plausible local model that is consistent with the predictions of quantum mechanics?

nightlight said:
The usual pedagogical presentation a la Feynman is misleading with its exclusivity claim (which you seem to have bought). Even though Feynman presented it in the early 1960s as a major mystery (based on von Neumann's faulty no-LHV theorem), in fact even a recent experiment trying to show the above exclusivity had to cheat to achieve such an appearance, as discussed in an earlier PF thread.

I'm certainly open to the possibility that the loopholes for local variables have not all been closed. But that's very different from saying that there is a viable, plausible LHV model.


A "cheap" (Einstein's characterization) LHV model is the de Broglie-Bohm theory (it is LHV theory if you reject the no-go claims for LHV theories or the instantaneous wave function collapse since both claims lack empirical support).

Yes, certainly the arguments against LHV are assuming interaction speeds limited by lightspeed. Without that limitation, you can reproduce the predictions of quantum mechanics.

nightlight said:
A much better theory of that type is Barut's Self-Field Electrodynamics, which replicates the high-precision QED results (radiative corrections to alpha^5 order, which was as far as they were known at the time in the early 1990s; Barut died in 1995). That was also discussed in the above thread; the related SFED survey post written around the same time, with detailed citations, is on sci.phys.research.

And that predicts the results of EPR-type twin particle experiments, as well?
 

