
I don't see what's weird about the double slit experiment

  1. Jul 14, 2013 #1
    I've been reading & watching videos about the double slit experiment, and I'm failing to see what's so interesting & strange about it (I hope someone can enlighten me on what I'm missing).

    The weirdness is NOT the fact that observing an electron changes its behaviour (right?)

    At first I thought the strangeness was that simply observing a single electron could make it go through one slit instead of two (i.e. simply looking at a particle can change its behaviour - wow spooky!). But I recently learned, if I understand correctly, that there's nothing spooky about that at all - it's simply that in order to observe an electron you need to fire a photon at it, and the photon is energy which of course alters the electron in some way. Right? Simple and logical - not weird at all.

    Is the weirdness the fact that a single electron goes through both slits?

    The only other potential weirdness I can see is that single electrons fired one at a time at the two slits cause an interference pattern. This is a bit weird because the only explanation seems to be that each single electron somehow goes through 2 slits and interferes with itself. But is that actually weird? It doesn't seem any weirder to me than that fact that light sometimes acts like a particle and sometimes like a wave. If light can do it, why not an electron? Why is it so strange that a single electron acts like a wave and goes through both slits? It doesn't seem that weird and mysterious to me.
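    The buildup described above is easy to simulate. Below is a minimal sketch (all units and parameters are made up for illustration) that draws single "electrons" one at a time from the two-slit probability density:

```python
import numpy as np

# Toy sketch, arbitrary units: the two-slit amplitude on the screen is the sum
# of the amplitudes through each slit, and each electron is detected at ONE
# random point drawn from |psi1 + psi2|^2. The fringes emerge only in the
# statistics of many single-particle detections.
rng = np.random.default_rng(0)

x = np.linspace(-10, 10, 2001)                   # screen coordinate
k, d = 2.0, 3.0                                  # made-up wavenumber and slit separation
psi = np.exp(1j * k * d * x / 2) + np.exp(-1j * k * d * x / 2)
p = np.abs(psi) ** 2                             # proportional to cos^2(k*d*x/2)
p /= p.sum()                                     # normalize to a probability distribution

hits = rng.choice(x, size=5000, p=p)             # 5000 electrons, fired one at a time
counts, edges = np.histogram(hits, bins=80)      # histogram of dots shows the fringes
```

    Firing them one at a time changes nothing in the statistics, which is exactly the point the thread keeps circling.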

    I'm sure there probably IS something weird/mysterious/fascinating about the double slit experiment since everyone says there is and thinks it's a big deal, so if someone can help me to understand what that is I'd appreciate it.
  3. Jul 14, 2013 #2

    The thing is, you get an interference pattern, something you expect from waves, not from particles (hence wave/particle duality). Back then, we did not know that massive particles had wave properties. The second weird thing is that when you put a detector on one of the two slits, this interference pattern disappears. This doesn't have to be done with a photon (as far as I've heard); it could just as well be done through a changing electric field.

    The weirdness of the single electron going through both slits:
    First you need to, yet again, know that we didn't expect this behaviour from electrons. Life back then was deterministic, this experiment changed that view.

    And photons are massless particles, electrons aren't...

    But double slit has always confused me a little bit, so I'd wait for the post of other members ;)
  4. Jul 14, 2013 #3
    There have been versions of the double slit experiment that do not disturb the wavefunction, and yet the mere potential availability of the electron's flight path changes its wave-like (particle-like) behavior.
    It's wrong to think of electrons as particles at all times, and it's even more wrong to assume elementary particles are small bullets. It's possible to get partial information about what electrons are doing when not being measured through weak measurements, and you'd be surprised what they seem capable of doing.

    Aside from its dual nature, a single electron is an undefined concept according to the best knowledge in the field. Its existence cannot be defined outside the measurement environment and this is a very strong hint that electrons and elementary 'particles' are not fundamental.
    Last edited: Jul 14, 2013
  5. Jul 14, 2013 #4

    If the above didn't help, look into electron splitting into spin, charge and orbit components moving about separately (wave properties of elementary particles do not prohibit this).

  6. Jul 14, 2013 #5
    That is an article about quasiparticles in condensed matter, which is quite different from a free electron in a vacuum.

    In my opinion the weird part about the double slit experiment is that it works if you fire a single electron (or photon or whatever) at a time.

    Also, you don't have to hit the electron with a photon. One could measure the change in the magnetic field. No matter how gentle one tries to make the interaction, wave-like behavior will not be observed if the particular slit can be determined.
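    The which-path point can be made quantitative in a toy model: the fringes live entirely in a cross term, and marking the slit multiplies that term by the overlap of the marker states, independent of how gentle the interaction was. A hedged sketch (not a full decoherence calculation; all parameters made up):

```python
import numpy as np

# Toy sketch, arbitrary units. After a which-path marker interacts with the
# electron, the screen intensity is |psi1|^2 + |psi2|^2 + 2*overlap*Re(psi1* psi2),
# where overlap = <d1|d2> is the scalar product of the two marker states.
x = np.linspace(-10, 10, 2001)
k, d = 2.0, 3.0
psi1 = np.exp(1j * k * d * x / 2)        # amplitude via slit 1
psi2 = np.exp(-1j * k * d * x / 2)       # amplitude via slit 2

overlap = 0.0                            # 0 = perfect which-path record, 1 = no record
intensity = (np.abs(psi1) ** 2 + np.abs(psi2) ** 2
             + 2 * overlap * np.real(np.conj(psi1) * psi2))

full = np.abs(psi1 + psi2) ** 2          # overlap = 1: the full cos^2 fringe pattern
# With overlap = 0 the pattern is flat: the fringes vanish even though no
# energy was "kicked" into the electron in this bookkeeping.
```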

    Frankly, if it doesn't seem strange it is probably because you have known about it for quite some time. The fact that the Earth goes around the sun doesn't seem weird, but clearly it did for most of the history of humanity. Now we learn this fact from day one so it seems normal.
  7. Jul 14, 2013 #6
    Well, whether something is weird or not generally depends on your frame of reference, and that usually changes with your level of knowledge and reflection. I'd like to quote Bohr here:

    Most of the time, the concept of wave-particle duality seems absurd to many people. But the fact remains: the electron/photon is not a wave, and it's not a particle. It's just an electron.

    To my mind, the weirdness of the whole wave-particle duality is the fact that it's sufficient that a setup exists in which two paths are possible for the electron. If it exists, the electron will show interference.
    This of course is not the case with classical waves, where a wave is a wave. Period.
    If the dual path aiding interference is removed, even before the electron enters the apparatus, we can be sure that particle behavior will be observed.

    I'd refer you to Wheeler's delayed choice experiment: http://en.wikipedia.org/wiki/Wheeler%27s_delayed_choice_experiment

    So, the bottom line: it's marvelous because the nature of the apparatus is sufficient to decide which type of behavior the electron will show.
  8. Jul 15, 2013 #7


    Science Advisor

    Well, if you don't find it weird that light behaves that way, then, as you said, the similar behavior of the electron is not any weirder. But people usually DO find it weird that light behaves that way.
  9. Jul 15, 2013 #8


    Gold Member

    You are right; there is nothing weird about a single electron going through both slits, IF you treat the electron as a wave:


    And it’s also obvious that if we block one of the slits (with a measurement), it will be impossible to create interference, right? And this of course also works for ‘ordinary’ water:


    Now, when it comes to a single electron, things get a bit more interesting in the quantum world:


    As you see, the single “electron wave” manifests itself as a localized dot on the detector plane. I guess you can imagine the problem of getting a water wave to act this way, right? And this weirdness goes for single electron (waves) also.

    But if you have a simple explanation, please let us know. (quickly!)
    Last edited by a moderator: Sep 25, 2014
  10. Jul 15, 2013 #9
    There is nothing that would surprise a 19th century physicist about such a picture. Imagine the aftermath of a strong wind in a forest: there would be a few scattered trees knocked down even though the wind came in continuous waves over the entire forest. Similarly, if you watch the crystallization of a uniform solution, the resulting crystals will introduce non-uniform lumps.

    That aspect is not the surprise (except perhaps to mesmerized physics undergrads). The surprise is supposed to be in an entirely different aspect of the phenomenon (not the mere granularity or symmetry breaking by the detection process), as discussed in depth in the "Photon 'Wave Collapse' Experiment" PF thread. That was a "pedagogical" experiment in which they cheated to demonstrate the "surprising" aspect. (Un)fortunately, no one has yet shown that kind of experiment to behave in a genuinely surprising (non-classical) way without non-local filtering/post-selection (which misses the point of the "surprise").

    The only decisive non-classicality experiment is the genuine violation of Bell inequalities, and even that doesn't work after half a century of attempts, despite the few recent claims of loophole-free violations (both had anomalies in the experiments because they used Eberhard's inequality without verifying that their event space matches his model; it doesn't, due to the multiple PDC pairs evident in the anomaly).
    Last edited by a moderator: Sep 25, 2014
  11. Jul 16, 2013 #10


    Gold Member

    I'm having slight trouble following the ‘logic’... are you saying that something happened to the 20th century physicists? That made them less intelligent, or what??

    Really?? :eek: Could you please tell me where I could find a picture of perfect “wind/forest interference patterns”?? That repeats itself every time...


    So what you’re basically saying is that QM is wrong/false/not working, right??

    Unless you have a published paper and Nobel Prize to back up this erroneous claim, it’s nothing but hogwash.

    Can we agree that your computer (the one you wrote the reply on) is working okay, right? Are you aware that the semiconductor devices in your computer would NOT work if QM was wrong? Or do you have a ‘classical’ explanation for quantum tunneling... banging your head against the wall and sometimes it goes through??


    Last edited by a moderator: Sep 25, 2014
  12. Jul 16, 2013 #11
    I am saying that the alleged "magic" of discrete detection points is not surprising in the least (a resonance phenomenon with non-linear response). You are missing the point of where the surprise is supposed to be (check the paper linked earlier, the intro section where they explain why the discrete/point-like detections are not surprising and why their "collapse" experiment is needed).

    Ever seen Chladni patterns in acoustics? Or any wave phenomena on water? Even the ancient Greeks wouldn't be surprised by that picture.

    The old QM measurement theory making the "prediction" of BI violations is empirically unfounded. There are actually two quantum measurement theories (MT): the older elementary/pedagogical QM MT (due mainly to von Neumann and Lüders in the 1930s) and a newer, more rigorous QED MT developed by Glauber in 1964-5 (ref [1] has his lectures, which are the most detailed derivation and exposition; I will cite from the reprint of the lectures in [2]). Bell and other QM 'magicians' and popularizers use QM MT, while the actual quantum opticians, including experimenters, use Glauber's QED MT, since that fits better with what they observe.

    The principal difference between the two MT-s is how they deal with measurement on composite systems.

    In QM MT, such a measurement is postulated via the 'projection postulate' and an 'ideal apparatus' which operationally implements such a projection in the form of multiple, local, independent subsystem measurements (each a projection too, but local). Hence, in QM MT, measuring a composite-system observable S1xS2 is postulated, for the "ideal apparatus", to consist of two independent, local measurements: S1 on system #1 and S2 on system #2.
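    What that recipe amounts to operationally can be sketched in a few lines (a hedged illustration; the state, the angles and the helper name are made up for the example):

```python
import numpy as np

# Sketch of the recipe described above: a joint outcome probability on a
# two-particle state computed as the expectation of a tensor product of LOCAL
# projectors, i.e. S1 x S2 measured as two independent local projections.
def up_proj(theta):
    """Projector onto spin-up along an axis at angle theta (x-z plane)."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return np.outer(v, v)

# Singlet state (|01> - |10>) / sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

a, b = 0.0, np.pi / 3
P_joint = np.kron(up_proj(a), up_proj(b))        # tensor product of local projectors
p_upup = float(psi @ P_joint @ psi)              # joint probability of (up, up)
# Textbook result for the singlet: p_upup = 0.5 * sin^2((a - b) / 2)
```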

    In contrast, Glauber's QED MT doesn't postulate anything about measurement on composite system but derives the composite measurement via dynamical treatment, by considering interaction of multiple point detectors (single atoms which ionize) with quantized EM field. In terms of Heisenberg-von Neumann cut between 'classical apparatus' and 'quantum system', Glauber puts the cut after the photo-electrons in each detector (as a classical measurement of resulting electric current) while treating everything below that, from photo-electrons through quantized EM field, dynamically. Hence, the only "measurement" QED MT postulates is the plain, non-controversial local projection (fully local, single electron detection).

    After deriving a general expression for such an interaction ([2], sect. 2.5, eq. 2.60, pp. 47-48; or in [1], p. 85, eq. 5.4), in order to obtain the n-point transition probabilities, his correlation functions Gn(x1,x2,...xn), Glauber explicitly drops the terms "we are not interested in", which includes dropping vacuum-induced transitions (via the operator ordering rule obtained in the single-photon-detector chapter 2.4 of [2]) and all transitions involving the 'wrong' number of photons (i.e. different from the n photons he is "interested in" measuring). The operational meaning of his procedure for extracting Gn() from observed data (the transition from eqs. 2.60 -> 2.61 in [2] or 5.4 -> 5.5 in [1]) is a non-local recipe for subtraction or filtering of background/dark counts and all 'accidental' and other 'wrong-n' coincidences.

    While in QM MT, this non-local filtering is hand-waved away as being result of momentary technological imperfections relative to the "ideal apparatus" (which is postulated to exist), in QED MT the necessity of the non-local filtering is fundamental and is derived from full QED treatment of the measurement process.

    Experimental quantum optics uses Glauber's QED MT, hence experimenters perform non-local filtering as he prescribed in [1], while physics students are taught the old QM MT, hence they have an entirely wrong idea of what the 'BI violations' actually mean. They (along with the QM theoreticians publishing on the subject) are unaware that the operational procedures for extracting these "non-local" QM correlations out of detector events are fundamentally non-local procedures, which makes such QM "non-locality" tautological and unremarkable. In other words, Bell's independence assumption about the two measurements of S1 and S2 (used to require factorization of probabilities), which "predicts" violation of the tautological inequalities (already known for many decades before Bell), is not valid in either QED MT or in the actual experimental procedures used to test it.
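    The operational procedure being argued about here is coincidence post-selection, which is easy to sketch with made-up rates (a toy illustration of the selection step only, not of either side's conclusions):

```python
import numpy as np

# Toy sketch of coincidence post-selection, with made-up per-window count
# rates. Raw records per coincidence window contain empty windows, singles
# and double counts; only windows with exactly one count on EACH side are
# kept, which is a joint condition on both detectors' records.
rng = np.random.default_rng(1)

n_windows = 100_000
left = rng.poisson(0.1, n_windows)       # counts per window, left detector
right = rng.poisson(0.1, n_windows)      # counts per window, right detector

keep = (left == 1) & (right == 1)        # the post-selected "coincidences"
fraction_kept = keep.mean()              # most windows are discarded
```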

    Hence, the quantum theory (QED MT) and the experiments on BI violations agree perfectly, as they always did and as a good theory and experiment should: there is no real BI violation, either as a theoretical prediction of QED MT (since the vital factorization of probabilities is precluded by the non-local filtering) or as an experimental fact.

    In contrast, the QM MT claims it predicts genuine BI violation and since experimental facts refuse to go along, it is the experiments which are currently "imperfect", falling short of the "ideal apparatus" (it's one of those, 'do you believe me or your lying eyes'). The most peculiar (unique to QM magic) euphemistic language has evolved to 'splain away the persistent experimental failure of the QM MT prediction, in which "loophole free violation" means what in plain language is called simply "violation" (the "loophole free" euphemism makes the admission of failure sound as if the violation was nearly always observed, except for some obscure, rare "loophole unfree" case).
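    For reference, the QM MT prediction referred to above is the standard textbook one; a sketch of the CHSH arithmetic for the singlet state, where the correlation at analyzer angles a and b is E(a, b) = -cos(a - b):

```python
import numpy as np

# Sketch of the textbook prediction at stake (not a claim about any
# particular experiment): at the standard settings, the CHSH combination
# for the singlet reaches 2*sqrt(2), above the local-hidden-variable bound of 2.
def E(a, b):
    return -np.cos(a - b)

a, ap = 0.0, np.pi / 2                   # one side's two analyzer settings
b, bp = np.pi / 4, 3 * np.pi / 4         # the other side's two settings

S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))   # 2*sqrt(2), about 2.83
```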

    Aren't you conceding the defeat a bit early in the discussion by falling back to ad hominem argument?

    There is nothing in the invention or design of this computer that relies on the QM MT composite-system projection postulate via independent local measurements. The latter is a parasitic add-on with no distinguishing consequences other than the QM non-locality "predictions" that experiments fail to support. Anything that actually gets measured is done via non-local filtering (if it involves separate sub-systems), as prescribed by the QED MT.

    -------------- Ref
    [1] R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.

    [2] Online reprint of [1] in Glauber's selected papers book "Quantum Theory of Optical Coherence", 2007 (you can download the PDF file for easier reading & searching).
    Last edited: Jul 16, 2013
  13. Jul 16, 2013 #12


    Science Advisor

    That is absolutely incorrect. There is antibunching which is completely non-classical. There are negative probabilities in the Wigner quasiprobability distribution if you perform quantum state tomography of many non-classical light fields like Fock states, Schrödinger cat states or squeezed light and there are also signs of their non-classicality in the Glauber-Sudarshan P-distribution.
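    The negativity mentioned above is easy to exhibit numerically; a sketch in one common convention (hbar = 1, symmetric ordering), using the Laguerre-polynomial form of the Fock-state Wigner function:

```python
import numpy as np
from numpy.polynomial import laguerre

# Sketch, one common convention (hbar = 1): the Wigner function of the Fock
# state |n> is W_n(x, p) = ((-1)^n / pi) * exp(-(x^2 + p^2)) * L_n(2*(x^2 + p^2)).
# For odd n it is negative at the origin, a standard witness of
# non-classicality, since a classical phase-space density is non-negative.
def wigner_fock(n, x, p):
    r2 = x ** 2 + p ** 2
    Ln = laguerre.lagval(2 * r2, [0.0] * n + [1.0])   # Laguerre polynomial L_n
    return ((-1) ** n / np.pi) * np.exp(-r2) * Ln

w_vacuum = wigner_fock(0, 0.0, 0.0)   # +1/pi: positive, like a classical density
w_single = wigner_fock(1, 0.0, 0.0)   # -1/pi: negative at the origin
```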

    No, he absolutely does not. Glauber neglects terms which do not conserve energy. This works well unless you go into the ultrastrong coupling regime (see, e.g. Nature Physics 6, 772–776 (2010) by Niemczyk et al.) where you may encounter interaction strengths on the order of one interaction per light cycle. On that scale these processes need to be considered, but this regime is usually only reached for low energies/long cycle durations. That means microwaves and maybe the very far IR. People would love to get there for visible light, but it is pretty much out of reach. This is completely unrelated to subtracting background counts. The transition to 2.61 is also not related to background counts. The terms dropped are multi-photon absorptions, where one atom absorbs many photons. You can also do detector theory for n-photon absorption of photons which in sum have the necessary energy for the transition in question, but there is no point in discussing these in introductory texts.
    Last edited: Jul 16, 2013
  14. Jul 16, 2013 #13
    The negative probabilities in photon experiments have been modeled in Stochastic Electrodynamics since the 1970s. They are "non-classical" only in a narrow technical/tautological sense of "classical" meaning zero fields as initial and boundary conditions. If you include the ZPF (zero point field, equivalent in energy to 1/2 photon per mode), the negative probabilities arise in locations where energy densities become lower than the ZPF (e.g. due to interference with signal fields). To a detector calibrated not to trigger (or to have low probability of triggering) below the ZPF, such defects are non-events indistinguishable from ZPF/vacuum non-events, yet they are in phase with energy excesses in other places, and the two types can thus still interfere with each other. Any such 'negative probability' phenomena can be trivially simulated via localized energy density defects relative to the ZPF level. Of course, experiments subtract the background & accidental coincidences in a non-local manner, hence there cannot be any claim of non-locality (or any factorization-of-probabilities assumption) based on such explicitly non-local operational procedures.

    Read the cited material, e.g. in [2] the transition between eqs. (2.60) and (2.61) where he explains what he is doing:

    Many of these terms, however, have nothing to do with the process we are considering, since we require each atom to participate by absorbing a photon once and only once. Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.

    The entire derivation up to that point is a fully rigorous dynamical treatment, in the interaction picture, of the quantized EM field interacting with n atoms, hence there is nothing preceding eq. (2.60) that would have violated "energy conservation" and that he needs to fix before going to eq. (2.61) by some ad hoc non-local subtraction or filtering of the derived terms (as you claim he is doing). The dynamical evolution described does not violate energy (or any other) conservation, hence nothing needs to be done to maintain or recover such conservation. There are no extra terms that need to be removed to uphold "energy conservation"; the dynamical evolution equations are doing all that just fine.

    As he plainly explains, he is simply looking at how to extract, from all the events that happen at these n points (contained in the equations derived rigorously), the properties he is interested in: the n-point absorption (detection) processes. Hence, he proposes the non-local filtering of any other events that contradict such n-point absorptions, and the resulting expressions obtained after subtraction of the "noise" (the stuff he is "not interested in", but that happens anyway, by the equations derived up to 2.60) are the "signal" expressed via his correlation functions Gn().
  15. Jul 16, 2013 #14


    Science Advisor

    The negative probabilities occur in the quasiprobability distribution in phase space and are therefore not necessarily bound to specific location. The discussion of a detector triggering or not is pointless. The way to measure these fields is balanced homodyne detection, where you amplify the field of interest by mixing it with a strong local oscillator to very classical values and afterwards get the signal field in terms of the difference current between the two mixed beams at very classical detectors. You do not need detectors working at the single photon level to measure that.

    Also I do not understand why you discuss non-locality at the end when this is about non-classicality. There is non-classicality without non-locality.

    When and how to neglect the negative frequency operator is already discussed around (2.40). He does not go into rotating wave approximation at this point, but he does so regularly in his book, starting exactly around (2.61).

    Yes, at this point he just gets rid of multi-photon absorption.

    That is plain wrong. He gets rid of multi-photon absorption (and he also does not consider reemission from the atoms here) and similar processes. He explicitly states that:"Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.". He omits all processes where one atom is involved more than once. He, however, does not care whether all detections are caused by a signal field or whether there are noise contributions. Noise will just give you different correlation functions. There have been numerous papers and books about detector theory. Check, for example, the QO book by Vogel and Welsch.

    Could you please point out where he explicitly claims that he gets rid of noise here? Otherwise your claim is simply without any substance. While it is true, that only n-detection events are considered for the nth order correlation function, this is trivial. n-1 and n+1 detection events are covered by the correlation functions of order n-1 and n+1, respectively (however, n+1 photon detection events also represent n+1 n-photon detection events, obviously). The hierarchy of those gives the description of the total light field. Their normalized version is the one that gives clues about non-classicality.
    Last edited: Jul 16, 2013
  16. Jul 16, 2013 #15
    The quasi-probabilities are functions of space-time and have very limited regions of negativity in space-time (limited essentially by what can escape undetected within the Heisenberg uncertainty). Smearing them with a Gaussian distribution, as in the Husimi variant of the quasi-probabilities, turns them positive. The only apparent exceptions are instances of factors of such functions (for a specific degree of freedom, such as spin or polarization, formally considered separately from the spatio-temporal degrees of freedom) which lack space-time dependency. The "exception" is thus an artifact of the formal separation of degrees of freedom and of ignoring the space-time factors. In actual measurements of these internal degrees of freedom, there is always a translation to spatio-temporal factors (via coupling between degrees of freedom).

    It's not irrelevant for experiments claiming to demonstrate violations of classical inequalities which are derived by considering constraints on the event space of single detection events. The obliteration of the distinction between sub-ZPF and ZPF-level fields by the detectors (due to threshold calibration & background subtractions) is critical for such non-classicality demonstrations, since it makes it appear as if the absence of detection counts on, say, one side of the beam splitter implies nothing on that side could cause interference if the detectors were removed and the two beams were brought again into a common region.

    Except for Bell inequalities, the violation of which cannot be simulated by SED, all other non-classicalities of QO can be replicated in SED via ZPF effects of classical fields.

    Data in homodyne detection cannot violate any classical inequality, i.e. one could easily simulate the results of such measurements via local automata, i.e. a field in the continuum limit. The only way such detection was used for non-classicality claims was by inferring a state of the quantized EM field which, if measured with the "ideal apparatus" of QM MT (which is always beyond present technology), would have violated some classicality condition. The classicality conditions are always derived by looking at constraints on the event space of single detection events.

    Yes, of course there is. But that's not the subject of the thread, which is to explain what is surprising about the double slit experiment, where the system appears localized in detection (as a particle) while passing through separate slits as a wave; i.e. it behaves either as a particle which can somehow sense a remote slit it didn't pass through, or as a wave which collapses globally as soon as it trips one detector, preventing any further detections at other/remote detectors (the particle-like exclusivity of a wave).

    In either picture, particle or wave, it seems that there is some action at a distance or non-locality. At least within QM MT story. Making that work in a real experiment is another matter and in pedagogical settings requires a bit of stage magic as illustrated in an earlier thread where the particular sleight of hand in demonstrating the above "surprise" was identified.

    As pointed out in the initial post, part of the subtractions is done in the earlier chapter on the single detector. Those results are then used (being included implicitly via the operator ordering rule he derived earlier) in the n-point case being discussed. While they were a perfectly local filtering procedure in the single-point-detector case, they become non-local when carried over to the n-detector apparatus. E.g. background subtractions across multiple points can yield the appearance of particle-like exclusivity (sub-Poissonian counts), as discussed in that earlier thread.

    The relevance is that multi-photon absorption on one detector, for an n-photon Fock state of the field, also results in missed detections on the remaining detectors. For example, in BI violation experiments where G4() is used (in the 2-channel variant) with a 2-photon state, the events with two detections on one side of the apparatus, as well as those with no detection on one side within the coincidence window, have to be discarded (resulting in the detection loophole).

    All these instances are filtering out of the "wrong" number of absorptions or multiple absorptions on "wrong" detectors and they are obviously discarded not because of technological imperfections of the non-ideal apparatus as claimed by QM MT, but because they are not the measurement "we are interested in" as he puts it.

    Hence, the non-local post-selection is how you pick which Gn() you are measuring. It is a particular Gn() (which encodes the info about the field state, hence about its past interactions) that one wants to measure, not just anything that happens on the n detectors. But extracting the target Gn() requires non-local filtering, as his derivation shows: you have to drop the terms that don't contribute to the Gn() you're "interested in."

    While that's all perfectly fine when all one is interested in is extracting a given Gn() from detection events, the non-local filtering procedure invalidates the subsystem independence and locality assumptions of QM MT, i.e. it precludes one from making predictions, based on such non-locally filtered Gn(), that require the factorization of probabilities used in the derivation of Bell inequalities.

    The probabilities on such non-locally filtered counts need not factorize, hence the prediction of a violation of the classical tautologies cannot be derived within QED MT. It's a unique peculiarity of the old QM MT that allows such a prediction: the imaginary "ideal apparatus" of QM MT on which S1xS2 can be measured via two local, independent measurements, S1 on subsystem #1 and S2 on subsystem #2. In QED MT, the coincidences expressed in Gn() are obviously not independent, as the non-local filtering required to derive them shows.

    There is no "noise" or "signal" in the exact dynamical treatment of the system of EM field and n atoms which he carries out up to some point. There is simply what happens in the interaction of n detectors with quantized EM field.

    The "noise" and "signal" concepts come into picture when one wishes to extract specific Gn() out of all the events on n detectors. That's where the non-local filtering is mandated as his derivation of Gn() shows. Such filtering has nothing to do with the technological imperfections as QM MT suggests. His detectors are already the idealized point-like detectors as perfect as they can be (single atoms).

    As explained above, there is no "noise" in his dynamical treatment. But he seeks expression containing specific Gn(), i.e. he wishes to perform the measurement he "is interested in" such as the eq. (2.64) he arrives at. Everything else (other terms or events at the detectors) is discarded as not belonging to the target measurement of given Gn().

    Similarly, in BI violations where G4() is measured, everything else (such as accidental & background counts, missed detections) is discarded since it doesn't contribute to G4() extraction sought there.

    The labels "signal" and "noise" are my characterization of the procedure in which you seek to identify a certain subset of events you are "interested in" (the "signal", i.e. his Gn() in 2.64), while dropping events you are "not interested in", i.e. the "noise", as he does from eq. (2.60) (plus the vacuum contributions in the single-detector chapter, which are implicit in the n-detector chapter).

    You seem to be distracted by the arbitrary terms (such as noise/signal) while missing the point: the fundamental non-local filtering required to extract a given Gn() out of all events on the n ideal point detectors derived via pure dynamical treatment. The point is that Gn() doesn't come for free out of n independent local detection events on n detectors making up some imaginary "ideal apparatus" (his detectors are already ideal) but has to be filtered out non-locally.

    But once you have such non-local filtering procedure, required to extract Gn() on n-detector apparatus, the factorization of probabilities assumed by Bell (in his treatment of G4() example) doesn't apply or hold according to Glauber's QED MT.

    Hence, there is no QED MT prediction of BI violation, since the G4() extracted via the non-local filtering (2.60) -> (2.61) (plus the implicit vacuum-effect subtractions via operator ordering rules) violates the independence assumption of the "ideal apparatus" of QM MT; i.e. the measurement of the S1xS2 observable on a composite system is, within QED MT, not even in principle the same as the independent local measurements of S1 on subsystem #1 and S2 on subsystem #2, as assumed by Bell using QM MT and its conjectured "ideal apparatus."

    That paragraph is somewhat of a non sequitur for the discussion at hand. In any case, you seem to be conflating the "n detector" correlations, which his Gn() represent (after the non-local filtering), and the n-photon field states which one may measure with the above n-detector apparatus. In the BI violations you are "interested in" measuring G4() on a 2-photon state.
    Last edited: Jul 16, 2013
  17. Jul 16, 2013 #16


    Science Advisor

    Usually you have short pulses, so this already dictates your spatial and temporal extents. Nevertheless, the negative probabilities occur in phase space where your field quadratures are your two "axes". Of course smearing them turns them positive. This is what happens if you try to measure two orthogonal quadratures which is not possible without adding additional vacuum noise, e.g. by adding another beam splitter. But how is that relevant here? Wigner functions of Fock states have regions of negative probability. This is a sign of non-classicality.

    That is simply wrong. You cannot simulate antibunching for a stationary field classically. There are numerous crackpot papers claiming that, all failing miserably, and not a single credible one.

    That is wrong again. While non-ideal quantum efficiency obviously reduces the amount of non-classicality, current technology indeed allows one to identify non-classical fields. See Alex Lvovsky's review article on quantum state tomography (Rev. Mod. Phys. 81, 299–332 (2009)) and references therein for detailed explanations. You do not need single detection events for that.

    Oh, if I had read that thread earlier that would have saved me some time. I was not aware that you are supporting the crackpot camp. That thread was closed for a very good reason.

    Except for the small problem that the discussion is simply invalid.

    You can discard them. You do not have to. If using binary photon detectors without photon number resolution, you need these rates to be small, but that is absolutely trivial. If using photon number sensitive detectors, you do not even have to do that.

    No, this is nonsense. You can add them if you want to. If you do the proper normalization via the mean count rate, you indeed include them.

    Terms with fewer photons do not contribute anyway. Terms with more photons can be taken into account. You are only talking about the limited case of binary detectors and thus creating a strawman.

    Why Bell again? You claimed there is no sign of non-classicality besides Bell, but there is very obviously antibunching, which does not need nonlocality.

    Ehm....just no. You do not need an ideal apparatus for non-classicality. That is the main point of measuring g2: you get a normalized variance which does not care about the mean photon number and detection efficiency. If you get a variance below shot noise, you are in the non-classical regime.
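    As a minimal sketch of that criterion (the toy count statistics are invented for illustration; the estimator g2(0) = <n(n-1)>/<n>^2 and the non-classical threshold g2 < 1 are the standard ones):

```python
import random

random.seed(0)

def g2_zero(counts):
    # Normalized second-order correlation at zero delay, estimated
    # from a record of per-pulse photon counts.
    mean_n = sum(counts) / len(counts)
    mean_pairs = sum(n * (n - 1) for n in counts) / len(counts)
    return mean_pairs / (mean_n * mean_n)

# Poissonian-like (classical coherent) light: g2(0) ~ 1.
coherent = [sum(random.random() < 0.025 for _ in range(200))
            for _ in range(20000)]

# Idealized single-photon source with 50% efficiency: each pulse gives
# 0 or 1 photon, so n*(n-1) = 0 always and g2(0) = 0 (antibunched),
# regardless of the mean count rate or detection efficiency.
single_photon = [int(random.random() < 0.5) for _ in range(20000)]

print(g2_zero(single_photon))               # 0.0 exactly
print(abs(g2_zero(coherent) - 1.0) < 0.05)  # True
```

    The efficiency independence is the point made above: losses scale <n> and <n(n-1)> together in a way that leaves g2 fixed.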

    No, it still does not need non-local filtering.

    Everything else simply belongs to a different Gn. A complete characterization of course involves ALL orders of Gn. However, the second order is already sufficient to identify many non-classical light fields. This is Glauber's great finding.

    And again Bell. This discussion is still not about Bell.

    So, to summarize: Do you have any peer-reviewed publication supporting your point? That would be a real starting point for a discussion. Otherwise this discussion is pointless.
  18. Jul 16, 2013 #17


    Science Advisor

    Well, the weird thing here is indeed that it challenges our naive understanding of what the term "particle" means. You either have to abandon the classical picture of "balls with well defined trajectories" or add a wave-like element, as Bohmians do.

    If you do not think that this is weird at the single particle level, then there is no hidden more fundamental weirdness.
  19. Jul 16, 2013 #18
    It is a purely formal sign, depending on what you call "classicality". If you define it as classical fields without ZPF, then classical ED under such constraints cannot reproduce those effects. But if you drop such gratuitous constraints, then there is nothing non-classical about these effects. Marshall, Boyer, Jaynes and numerous others working on SED have been modeling such formally non-classical phenomena with purely classical models (ZPF + Maxwell ED) since the 1960s (e.g. check a few refs).

    Note that to refute a claim of non-classicality for some observation, any classical model, however contrived (or by whoever, crackpot or otherwise) and however specific to the data or phenomenon, suffices as a counter-example. To serve as a direct counter-example to a non-classicality claim, such a model need not be a general theory of the phenomenon. That's like a conjecture in math about, say, some property of prime numbers -- it suffices to show one numeric example on which the conjecture fails in order to refute it. The counter-example need not explain when the conjectured property holds or say anything else about it.
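    The prime-number analogy can be made literal with a classic example (Euler's polynomial; the trial-division checker below is just the simplest thing that works):

```python
def is_prime(m):
    # Trial division; fine for the small numbers used here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# Conjecture (false): n*n + n + 41 is prime for every n >= 0.
# It survives every test case from n = 0 through n = 39 ...
assert all(is_prime(n * n + n + 41) for n in range(40))

# ... but the single counter-example n = 40 refutes it outright,
# since 40*40 + 40 + 41 = 1681 = 41*41. The counter-example says
# nothing about when the property holds; it just falsifies "always".
print(is_prime(40 * 40 + 40 + 41))  # False
```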

    While the SED is in fact insufficient as a general theory of quantized EM fields, that doesn't mean it can't provide classical model counter-examples for specific non-classicality claims of QO.

    Tomography doesn't yield data (counts) that cannot be reproduced by local realist models. To generate inequalities which exclude local realist models, in addition to such reconstructed states one requires QM MT assumptions about how multiphoton detections work, assumptions which contradict the actual coincidence measurement procedures of QO: they assume independence, while the non-local filtering used in QO coincidence measurements outright contradicts such an assumption.

    Resorting to name calling, eh? Thanks for admitting the lack of counter-points on matters of substance. The "crackpots" on this subject go back to Einstein, de Broglie and Schrodinger, continuing to Barut, Jaynes, 't Hooft, Wolfram and others more recently. Not bad company to be in.

    BTW, that thread was closed after it got revived (when Glauber won Nobel prize) many months after it wound down on its own due to someone getting offended. Your observation about it is yet another non sequitur.

    Well that settles it I suppose, if we were a grade school and you were the teacher.

    To get anything non-classical in a genuine sense, rather than via some wishful definition of "classical", you need to discard/filter data non-locally. The experiment which could achieve the violation you claim to be routine would be a loophole free demonstration of non-classicality, which is still pending.

    A sign or hint is a different matter than a loophole free demonstration of non-classicality, in the sense of excluding any type of local realist model of the observation. You are conflating the technical term 'non-classical' as used in QO with genuine non-classicality (with no restriction of initial or boundary conditions to the ZPF-free type). That kind of technical 'non-classicality', as used in QO jargon, is neither genuine non-classicality nor is it surprising.

    It is about whether there is anything surprising in the double slit experiments, which comes down to requiring either particle or wave with action-at-a-distance features. Of course, that's only a qualitative hint at possible non-locality. The only decisive quantitative criterion as to whether that is the case is the violation of Bell inequalities, which is what any argument about it, in the double slit or any other such experiment, eventually has to fall back on to preclude the assumption of pre-existent values of physical quantities merely manifesting in the measurement (hidden variables). All other QO 'non-classicalities' can be replicated by SED.
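    To make the quantitative criterion concrete, here is a toy Monte Carlo sketch of the CHSH form of the Bell inequality: any local hidden variable model (a shared hidden variable plus purely local outcome rules) stays within |S| <= 2, while QM predicts up to 2*sqrt(2) ~ 2.83 at optimal settings. The particular hidden variable rule and measurement angles below are illustrative assumptions, not a model of any actual experiment.

```python
import math
import random

random.seed(1)

# Illustrative measurement settings (radians) for the two sides.
ANGLES = {"a": 0.0, "a2": math.pi / 3, "b": math.pi / 6, "b2": -math.pi / 6}

def local_outcome(setting, lam):
    # Purely local, deterministic rule: each side sees only its own
    # setting and the shared hidden variable lam.
    return 1 if math.cos(ANGLES[setting] - lam) >= 0 else -1

def corr(s1, s2, n_trials=50000):
    # Estimate E(s1, s2) = <A * B> over many pairs sharing a hidden lam.
    total = 0
    for _ in range(n_trials):
        lam = random.uniform(0.0, 2.0 * math.pi)
        total += local_outcome(s1, lam) * local_outcome(s2, lam)
    return total / n_trials

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
S = corr("a", "b") - corr("a", "b2") + corr("a2", "b") + corr("a2", "b2")
print(abs(S) <= 2.0)  # True: the LHV bound |S| <= 2 holds
```

    For this particular sign-of-cosine rule the pair correlation is E(d) = 1 - 2|d|/pi, so S here comes out around 2/3; no choice of settings pushes any LHV model past 2, which is exactly the bound an empirical BI violation would have to beat.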

    Check any of the top linked papers above, e.g. those by Marshall & Santos, who have been debunking QO 'non-classicality' claims for over three decades. Even the most conventional and authoritative textbooks, such as Yariv's "Optical Electronics in Modern Communications", reluctantly acknowledge (cf. chap. 20, p. 703): "Somehow to my surprise, I found that by asking student to accept just one result from quantum mechanics [ZPF], it is possible to treat all the above mentioned phenomena classically and obtain results that agree with those of quantum optics." The one "result" is the ZPF initial & boundary conditions used in SED, which is a classical theory (the ZPF is a plain classical field, even though Yariv labels it a "result from quantum mechanics", in the sense of being inspired by QM, I suppose).

    The ZPF is actually a concept that goes back to Planck's 2nd theory (of 1911), which was a purely classical EM theory with a background zero point field (also a classical field), which he used to re-derive his black-body formula in a classical manner. Over the decades the ZPF idea has been extended to cover each new 'non-classical' phenomenon in QO, and it has sufficed for all of them except the Bell inequality violations (which empirically don't happen anyway).
    Last edited: Jul 16, 2013
  20. Jul 16, 2013 #19


    Science Advisor

    Non-classical are obviously all the things you cannot reproduce using classical distributions. Conditional homodyne detection gives good examples of that. See e.g. J. Opt. B: Quantum Semiclass. Opt. 6 S645 for a continuous variable example.

    I disagree. Of course you can come up with a classical model for the photoelectric effect, for some forms of antibunching (especially for nonstationary fields, though not for the general case), for some stuff seen in squeezing and so on. But who is interested in models for very special cases which are well known to be heavily flawed? Who is interested in showing that one can find a parameter set looking like antibunching, while at the same time producing unphysical first-order coherence distributions not matching the experiment?

    That just seems odd to me. Physics is about building working models with predictive power, not about building highly specialized approaches fulfilling random constraints. However, if someone desperately wants to do that: ok. I still consider it mathematics, not physics.

    I never claimed that. You claimed: "The only decisive non-classicality experiment is the genuine violation of Bell inequalities". I am not interested in the Bell inequality question or nonlocality at all. I am claiming that there are other non-classicality experiments not even remotely similar to Bell measurements.

    That does not make your point more correct or even mainstream, and "goes back to" is quite a stretch. The existence of researchers supporting determinism or local realism at some point does not make it stronger. This is not a forum devoted to off-track physics.

    Sorry, the race of chasing whatever people next declare must be shown for "genuine" nonclassicality is nothing I intend to join. It will end up with people finally claiming that one has to rule out superdeterminism to show that things are truly nonclassical. If you can provide a consistent classical model of antibunching, that is something worth discussing. The stuff by e.g. Marshall and Santos is not of interest from a physical point of view, as it is not consistent. From the math point of view, maybe.

    No, I am not conflating them. I am not the slightest bit interested in what you call genuine non-classicality. You made the claim " The only decisive non-classicality experiment is the genuine violation of Bell inequalities" (explicitly not about what you call genuine non-classicality) and I simply oppose that. I am just interested in technical non-classicality as you intend to call it.

    Still: no. I do not count qualitative similarity using different (and usually mutually exclusive) parameter sets as replication. A model lacking predictive power is a bad model. Physics is about predicting, not about replicating.

    Yes, I know that stuff. I think my comments above already gave my opinion about SED. Carmichael wrote "The Achilles heel of stochastic electrodynamics is its inability to give a plausible account of the firing of photoelectric detectors." This is still a mild verdict.
  21. Jul 17, 2013 #20
    Just because some quasiprobability distribution turns negative, hence inapplicable as a probabilistic model of the phenomenon, doesn't imply all conceivable models are inapplicable. The claim to have a phenomenon which excludes all classical models even in principle is an extremely strong claim, and all it takes is a single counter-example (a model however contrived and specialized to the phenomenon) to falsify such an extreme non-classicality claim. That doesn't make such a model more useful than the conventional theory or invalidate the practical usefulness of the phenomenon. It merely falsifies the most extreme form of the extrapolation of the phenomenon.

    In the above and in the rest of your comments, you seem to be confusing the lack of practical usefulness or general applicability of some model with its ability to falsify the extreme claims. Such falsification doesn't make the phenomenon unimportant or less interesting in practical applications. It only affects the grand claims of absolute non-classicality (which have yet to be demonstrated empirically).

    That's not about who is interested in some contrived counter-example and who is not, but about whether a counter-example to the claim can exist at all, since the claim it aims at says it cannot exist even in principle.

    The conventional QO non-classicalities based on the negativity of P or some such technical criterion are in a different realm -- they explicitly preclude only certain kinds of classical models (such as those without ZPF). They are irrelevant for the question of this thread -- is there anything surprising or genuinely mysterious about the double slit experiment? My initial comment was that seeing dots (or discrete detections) on the screen or interference patterns isn't the surprising aspect, as some here have imagined (the paper [1] of the earlier thread explains in the intro why neither aspect is actually surprising and why their experiment is needed).

    The only truly surprising effect would be a genuine empirical violation of Bell inequalities, which in turn would make the double slit or beam splitter experiment mysterious by precluding local hidden variable models. Namely, Bohm-de Broglie alternative QM theories can explain the particle-wave interplay in the double slit and similar mysteries of the early QM via hidden variables. Their main problem is that if the Bell inequalities can be violated (which they don't predict, since they don't use the strong composite system projection postulate of QM MT but a weak form a la QED MT), then their hidden variables are automatically non-local, which makes them much less convincing as an explanation.

    It was the question posed in the forum that we are supposed to be addressing. You are the one who was calling names. I am merely reminding you that you're shooting way over your head with such attacks. Labeling Einstein, Schrodinger, de Broglie, Bohm, Barut, 't Hooft, Wolfram, etc. crackpots is a bit of a stretch in my view.

    That's yet to be seen. First an actual violation without anomalies, unreported relevant counts data or loopholes has to be shown, before super-determinism even comes into play. There is no point debating who believes in what possibility.

    As a direct counter-example to absolute non-classicality claims it suffices.

    But the thread question is whether there is something truly mysterious, not about technical 'non-classicality' such as the negativity of a particular quasiprobability function or the violation of some contrived definition of 'classicality'.

    I am not a supporter of SED either; it's an effective theory for a limited set of QO phenomena. Its main virtue was in taking down the grand non-classicality claims of the early QO of the 1960s and putting them in their proper place -- as mere technical 'non-classicalities' (i.e. terminological conventions). The only 'holy grail' in the non-classicality field is the loophole free violation of Bell inequalities, and all genuinely surprising aspects of QM arise from the possibility of such violation, since it would preclude any conceivable local hidden variable model (deterministic or stochastic). That's what would make the double slit discussed in this thread a genuine mystery, since all natural explanations would be eliminated.
    Last edited: Jul 17, 2013
  22. Jul 17, 2013 #21


    Science Advisor

    I disagree. The counterexample must be physical and explain at least the necessary measurements (at the very least g1 and g2). Most importantly, it must come along with the necessary detector theory and be explicitly formulated for a specific detector. It is an important, but often neglected, fact that non-classicality conditions must go along with a precise description of the detector, as each detector strictly speaking has its own non-classicality condition. For example, you get sub-binomial instead of sub-Poissonian light for on-off detector arrays (Phys. Rev. Lett. 109, 093601 (2012), Phys. Rev. Lett. 110, 173602 (2013)). Otherwise people will come along with 13-dimensional entities or the famous how-to-fit-an-elephant formula. This is why I cannot take the "classical counter-example" seriously: you can always assume some detector that can reproduce some experimental result.

    However, I agree that the importance of the latter part is usually not emphasized enough. On the other hand, it is cumbersome to write (and especially read) in every paper something like: "We demonstrate non-classicality for a two avalanche photo diode system with a dead time of 500 ps, quantum efficiency of 82%, in start-stop geometry with bin sizes of 300 ps, at a dark count rate of 300 counts/s, an afterpulsing probability of 0.1%, a mean count rate of 10^7/s per photo diode..." and so on and so forth.

    A good counterexample should also be able to explain all measurements of the same field in different settings, like antibunching using a single detector (Phys. Rev. A 86, 053814 (2012)), photon number resolving detectors and two-photon absorption.

    Well, as said before, I was never discussing absolute non-classicality here. Although I think it is solid, too.

    So Bell Inequalities are the most interesting thing about the double slit? Come on, you cannot be serious.

    No. Not when it comes to the double slit. I just addressed your erroneous claim that there is no non-locality.

    The world does not seem to share your definition of what is truly mysterious. Most people do not worry about Bell when they discuss the double slit.

    Ehm...no. Bell does not add to the double slit. It is an interesting topic of its own. The double slit is not even quantum. As said: I just responded to correct your erroneous statement. And as this is about the double slit and not about Bell and not even about non-classical physics, I will stop this discussion here as it is out of place. If there is anything else to add to the last discussion, please open another thread and/or ask the mods to move the posts. This is moving way too far from the topic at hand.
  23. Jul 17, 2013 #22
    That's mixing up apples and oranges. For people seeking to come up with the next physics at the Planck scale, such as 't Hooft or Wolfram, it is quite relevant whether cellular automata, Planckian networks, or some other distributed, local computational model could at least in principle replicate the empirical facts of existent physics. If there is something in existent physics which precludes it at a fundamental level, they and others with similar interests would certainly love to know it before pursuing a 'squaring of the circle' path.

    But any such fundamental prohibition would need to be a rigorous derivation of Bell inequality violations by QM, based squarely on empirically backed requirements, not on some arbitrary or wishful interpretation of the QM measurement process for which a weaker alternative would work just as well as far as the empirical facts go.

    The present derivation of the QM prediction which violates BI falls well short in this regard, since it is based on QM MT, which makes very strong assumptions about measurements of composite system observables -- it assumes that the measurement of the S1xS2 observable can be done by an "ideal apparatus" via two independent, local measurements of S1 on subsystem #1 and of S2 on subsystem #2.

    In contrast, the more recent and more fundamental QED MT doesn't make the above assumption about S1xS2, but derives the composite system measurements from a much weaker, empirically well backed assumption about local detection of photo-electrons. This derivation, as cited earlier, shows that the independence assumption of QM MT does not hold in QED MT for composite systems -- as Glauber shows, measuring Gn() from the counts on n detectors requires non-local filtering of the observed counts.

    With that type of non-local filtering procedure derived in QED MT from weaker measurement postulates, any non-locality or non-classicality claims (based on probability factorization requirements for classical models) are no more valid than a magician claiming he can demonstrate telepathic powers, provided he is allowed to read the answer from the remote 'sender' with his regular eyes first, before telepathically divining it via his third eye.

    That was the only aspect I was discussing. The thread topic was not whether one can define "surprising" in a way that makes the double slit experiment "surprising" (which is apparently your understanding of the thread question), but whether there is something genuinely surprising about it, as Feynman's lectures and the myriad others inspired by them suggest.

    My response is that they are surprising only if you prohibit LHV based explanations. But the LHV prohibition depends critically on the status of BI violations, since that's the only quantitative, falsifiable criterion so far for such exclusion.

    As explained, as a matter of excluding some paths of possible future physics, that's indeed the most relevant aspect -- whether it excludes LHV models in an absolute sense or not. The double slit or beam splitter experiments, at least as pedagogically presented, are qualitatively suggestive of such exclusion. But the only quantitative, falsifiable criterion for fundamental exclusion of LHV models is at present the BI violation.

    Since that violation hasn't been empirically achieved as yet, while the QM prediction of the violation depends on an unnecessarily strong composite system measurement postulate, the question of what kind of future physics is possible, specifically whether some local computation (e.g. at the Planck scale) can in principle replicate the existent physics, remains open.

    What exactly is "erroneous"? I didn't notice anything but a series of non sequitur and ad hominem arguments so far.

    As explained above, you are confusing the practical relevance and theoretical generality of LHV counter-examples with the fundamental question of whether they refute absolute non-locality claims (the absolute exclusion of LHV models for the observed phenomena).

    To refute absolute claims of non-existence of such models compatible with some empirical data, all that is needed is to replicate the narrow data in question via an LHV model, however contrived and irrelevant as a general theory such a model may be. The mere existence of an LHV model for empirical data claimed to exclude, as a matter of principle, any such model as an explanation suffices to falsify the absolute non-existence claim.

    One can of course weaken a non-classicality claim, as you are doing throughout the discussion, by qualifying it as merely excluding "interesting" models. That's fine, but that kind of weaker, subjective non-classicality is a mere terminological convention, irrelevant for fundamental questions such as what conceivable paths some future physics may take.

    If you're a practical quantum optician or applied physicist, BI violations may be irrelevant. For those curious about whether future physics may be based on distributed local computations at the Planck scale (such as Wolfram's NKS models), it is quite a relevant question.

    Have you read Feynman's lectures? Or the myriad others based on or inspired by them? The mystery is how to explain the double slit without an LHV model, which is supposedly excluded by BI violations. Since the latter has not been empirically demonstrated, one can drop the no-LHV-allowed requirement, and the double slit mystery goes away.

    There was no "erroneous statement" that anything you said pointed out. If I missed it, you are welcome to provide a link to the specific demonstration of the error you claim to have produced in the discussion.
  24. Jul 17, 2013 #23


    Staff Emeritus
    Science Advisor

    I don't think that anyone really has answered the question of what's weird about the double slit experiment.

    In the experiment, we send a stream of electrons (or a beam of light) through two parallel slits, and the waves or particles or whatever they are are absorbed by a screen (say a photographic plate). There are three experimental facts we discover:

    1. At high intensity, a characteristic interference pattern appears on the screen. The pattern changes when you close off one slit or the other.

    2. If you drop the intensity sufficiently, the pattern on the screen can be seen to be made up of discrete, localized interactions (dots on the screen).

    3. If the intensity is low enough, then the appearance of dots on the screen can be seen to be exclusionary: if a dot appears at one spot, then no dot will appear at a different spot at the same time.

    What is weird is that we don't have a satisfactory explanation of these facts in terms of localized processes and local interactions.

    The interference pattern by itself isn't weird--classical electrodynamics predicts the same pattern. But the discreteness that is observed at low intensity isn't predicted by classical theory.

    You could try to incorporate the discreteness by assuming that there is some nondeterminism in the interaction between the wave (showing the interference pattern) and the screen. The interaction results in a discrete dot (or lack thereof), but it is unpredictable what outcome will happen.

    But assuming that the dot results from a localized interaction between the wave and the points on the screen fails to explain the exclusionary property. If a dot appears at one point, then a dot will NOT appear at a distant point at the same time. This exclusionary property is required by conservation of energy and/or particle number, but it's hard to see what could enforce it. If the wave interacting with the screen nondeterministically produces a dot, what's to prevent a dot from appearing at a distant point at the same time? It would seem to require a nonlocal interaction to make sure that a dot appearing at one point implies no dot appears at a distant point.

    If you try to explain the exclusionary property in terms of local interactions, you seem to be forced to the conclusion that the particle (electron or photon) must have already "decided" which way it was going to go much earlier. That way, you can assure absolute particle number conservation; the electron is at some definite position at every moment (you just don't know where) and so of course if it's at one point making a dot, it can't be at another point making a different dot. This is the hidden variables approach. But that has its own problems. In particular, it's hard to account for the interference pattern; if the electron has a definite position, then it must go through one slit or the other. If it goes through one slit, then why should it make any difference to that electron that the other slit is open?

    So it's not so much that there is anything contradictory about the experimental results of the two slit experiment, but those results defy explanation in terms of localized interactions.
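    Facts 1 and 2 are easy to visualize in a toy simulation: draw discrete "dot" positions from a two-slit intensity profile and watch the fringes emerge only in the accumulated statistics. Everything below (the cos^2 far-field pattern, all parameter values, the rejection sampler) is an illustrative assumption, and note that it does not model fact 3, the exclusionary property: each dot is sampled independently.

```python
import math
import random

random.seed(2)

def two_slit_intensity(x, wavelength=1.0, slit_sep=10.0, screen_dist=100.0):
    # Far-field two-slit fringe pattern, single-slit envelope ignored:
    # I(x) ~ cos^2(pi * d * x / (lambda * L)), normalized to max 1.
    return math.cos(math.pi * slit_sep * x / (wavelength * screen_dist)) ** 2

def sample_dot(xmax=20.0):
    # One discrete detection: rejection-sample a landing point from I(x).
    while True:
        x = random.uniform(-xmax, xmax)
        if random.random() < two_slit_intensity(x):
            return x

# Accumulate many single-particle detections into a coarse histogram.
bins = [0] * 20
for _ in range(20000):
    x = sample_dot()
    bins[min(int((x + 20.0) / 2.0), 19)] += 1

# Bright fringe near x = 0 versus dark fringe near x = -5.
print(bins[10] > 3 * bins[7])  # True: fringes emerge in the statistics
```

    Any one run of `sample_dot` gives a single localized dot; only the histogram over many runs shows the interference pattern, which is exactly the tension between facts 1 and 2 described above.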
  25. Jul 17, 2013 #24


    Staff Emeritus
    Science Advisor

    That comment doesn't make any sense to me. The mystery doesn't "go away" because there is a loophole in a proof. To make the mystery go away would require that you actually come up with a satisfactory local hidden variables explanation for the empirical facts. Whether or not there is a proof that no such explanation is possible, we have mystery until we have such an explanation.

    Are you saying that there is a satisfactory local hidden variables model for quantum mechanics? Or are you saying that there is no conclusive proof that there is no such model? The latter is a very weak double-negative, and I certainly wouldn't consider that to be a "solution" to the mystery.
  26. Jul 17, 2013 #25
    If there is an absolute LHV prohibition by the empirically established facts, there is a mystery in the double slit. Otherwise, the empirical facts of the double slit experiment, standing on their own, are perfectly compatible with a local model. The usual pedagogical presentation a la Feynman is misleading with its exclusivity claim (which you seem to have bought). Even though Feynman presented it in the early 1960s as a major mystery (based on von Neumann's faulty no-LHV theorem), in fact even a recent experiment trying to show the above exclusivity had to cheat to achieve such an appearance, as discussed in an earlier PF thread. The exclusivity on non-filtered data is actually no sharper than a Poissonian distribution allows, i.e. the chance of simultaneous hits in two places is at least p^2 (where p is the chance of one hit; it's larger for super-Poissonian sources, such as chaotic light).
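    The p^2 floor is easy to check in a toy Monte Carlo with two detector positions firing independently (all numbers below are illustrative; this models only statistical independence, not any particular source):

```python
import random

random.seed(3)

# Per time window: each of two detector positions independently fires
# with probability p, i.e. no exclusivity is built into the model.
p = 0.05
n_windows = 400000

both = 0
for _ in range(n_windows):
    hit1 = random.random() < p
    hit2 = random.random() < p
    if hit1 and hit2:
        both += 1

coincidence_rate = both / n_windows
# Coincidences occur at a rate ~ p^2, not zero: independent hits show
# no strict one-dot-at-a-time exclusivity.
print(round(coincidence_rate / (p * p)))  # 1
```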

    You have the double slit empirical facts wrong. There is no real (sub-Poissonian) exclusivity in detections as an empirical fact, as you seem to believe. See that thread about the recent experiment and the sleight of hand used to create the illusion of exclusivity (they used a separate setup to check for exclusivity and had cut the timing on the TAC in half from that required by the spec, which superficially eliminated most of the double detections).

    A "cheap" (Einstein's characterization) LHV model is the de Broglie-Bohm theory (it is an LHV theory if you reject the no-go claims for LHV theories or the instantaneous wave function collapse, since both claims lack empirical support).

    A much better theory of that type is Barut's Self-Field Electrodynamics, which replicates the high precision QED results (radiative corrections to alpha^5 order, which was as far as they were known in the early 1990s; Barut died in 1995). That was also discussed in the above thread; a related SFED survey post written around the same time, with detailed citations, is on sci.phys.research.

    The SFED starts with the coupled Maxwell-Dirac equations, treated as interacting classical EM and matter fields, which due to the self-interaction of the Dirac field evolve via non-linear PDEs. Barut shows an ansatz which turns this system into the conventional multiparticle QM (MPQM) phase space representation, provided one drops the non-linear terms left over after the ansatz. If one retains the non-linearity and iterates the solutions, one gets the QED radiative corrections valid to at least alpha^5 order. Einstein and Schrodinger also worked on a similar idea (they apparently knew a variant of the same ansatz), except they were much more ambitious, seeking a unified non-linear theory including gravity, which Barut's SFED ignores.