Photon Wave Collapse Experiment (Yeah sure; AJP Sep 2004, Thorn)

  1. Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

    There was a recent paper claiming to demonstrate the indivisibility of
    photons in a beam splitter experiment (the remote "wave collapse"
    upon "measurement" of "photon" in one of the detectors).

    1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
    "Observing the quantum behavior of light in an undergraduate laboratory"
    Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).
    Experiment Home Page

    The authors claim to violate "classicality" by 377 standard deviations,
    which is by far the largest violation ever for this type of experiment.
    The setup is an archetype of quantum mystery: A single photon arrives at
    a 50:50 beam splitter. One could verify that the two photon wave packet
    branches (after the beam splitter) interfere nearly perfectly, yet if
    one places a photodetector in each path, only one of the two detectors
    will trigger in each try. As Feynman put it - "In reality, it contains
    the only mystery." How does "it" do it? The answer is -- "it" doesn't
    do "it" and the mysterious appearance is no more than a magic trick.

    Unlike the earlier well known variants of this experiment ([2],[3]),
    the present one describes the setup in sufficient detail that the
    sleight of hand can be spotted. The setup is sketched below, but
    you should get the paper since I will refer to figure and formula
    numbers there.

    Code (Text):
         G photon Source   TR photon    PBS  T
    DG <---------- [PDC] ----------------\----------> DT
                                         |
                                         | R
                                         V
                                         DR
     
    The PDC Source generates two photons, G and TR. The G photon is used
    as a "gate" photon, meaning that the trigger of its detector DG defines
    the time windows (of 2.5ns centered around the DG trigger) in which
    to count the events on the detectors DT and DR, which detect the
    photons in the Transmitted and Reflected beams (wave packets). The
    "quantum" effect they wish to show is that after a detector, say,
    DT triggers, the other detector DR will not trigger. That would
    demonstrate the "indivisibility" of the photon and a "collapse"
    of the remote wave packet at DR location as soon as the photon
    was "found" at the location DT.

    In order to quantify the violation of classicality, [1] defines
    a coefficient g2 which is a normalized probability of joint
    trigger of DT and DR (within the windows defined by DG trigger)
    and is given via:

    g2 = P(GTR) / [P(GT)*P(GR)]    ... (1)

    or in terms of the (ideal) counts as:

    g2 = N(GTR)*N(G) / [N(GT)*N(GR)]    ... (2)

    where N(GTR) is the count of triple triggers, N(GT) the count of
    double triggers of DG and DT, etc. The classical prediction is that g2>=1
    (the equality g2=1 would hold for a perfectly steady laser source,
    the "coherent light"). This inequality is eq (AJP.3). The quantum
    "prediction" (eq AJP.8,13) is that for a single photon state TR,
    the g2=0. The paper claims they obtained g2=0.0177 +/- 0.0026.
    The accidental (background) coincidences alone would yield
    g2a=0.0164, so that the g2-g2a is just 0.0013, well within the
    std deviation 0.0026 from the quantum prediction. Perfection.
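
    To make eq (2) concrete, here is a minimal sketch (not the authors'
    analysis code) of how g2 comes out of raw tallies; the count values
    below are hypothetical, chosen only to be of the same order of
    magnitude as the rates quoted in the paper.

    Code (Python):
     # Minimal sketch of eq (2); the tallies are hypothetical, chosen only
     # to match the order of magnitude quoted in the text.
     def g2_from_counts(n_g, n_gt, n_gr, n_gtr):
         """g2 = N(GTR)*N(G) / [N(GT)*N(GR)], eq (2)."""
         return (n_gtr * n_g) / (n_gt * n_gr)

     n_g   = 100_000   # DG (gate) triggers per second
     n_gt  = 4_000     # DG+DT coincidences per second
     n_gr  = 4_000     # DG+DR coincidences per second
     n_gtr = 3         # triple coincidences per second

     print(g2_from_counts(n_g, n_gt, n_gr, n_gtr))   # ~0.019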

    The two tiny, little clouds in this paradise of experimental
    and theoretical perfection are:

    a) while there is a QED prediction of g2=0, it is not for this
    kind of detection setup (that's a separate story which we could
    pursue later), and

    b) the experiment doesn't show that (2) yields g2=0, since they
    didn't measure the actual triple coincidence count N(GTR) at all,
    but just a small part of it.

    Let's see what the sleight of hand in (b) was. We can look at the
    coincidence detection scheme as a sampling of the EM fields T and R,
    where the sampling time windows are defined by the triggers of the
    gate DG. Here the sampling window was 2.5ns and they measured
    around 100,000 counts/sec on DG (and 8000 c/s on DT+DR). Thus
    the sampled EM field represents just 0.25 ms out of each second.
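
    As a quick check of that duty-cycle arithmetic (a sketch using only
    the rates quoted above):

    Code (Python):
     # Fraction of the EM stream actually sampled per second of running.
     gate_rate = 100_000        # DG triggers per second
     window    = 2.5e-9         # coincidence window, seconds
     print(gate_rate * window)  # 2.5e-4 s, i.e. 0.25 ms sampled per second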

    The classical prediction g2>=1 applies for either continuous or
    sampled measurements, provided the samples of GT and GR are
    taken from the same position in the EM stream. For the coincidences
    GT and GR, [1] does seem to select the properly matching sampling
    windows since they tuned the GT and GR coincidence units (TAC/SCA,
    see AJP p.1215-1216, sect C, Fig 5) to maximize each rate (they
    unfortunately don't give any actual counts used to compute their
    final results in AJP Table I via (2), but we'll grant them this).

    Now, having obtained the sequence of properly aligned GT and GR
    samples (say two bit sequences of length N(G), of 0's and 1's),
    one would expect to extract the triple coincidence count N(GTR)
    by simply adding 1 to N(GTR) whenever both GT and GR contain a 1
    at the same position in the bit array.
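
    In other words (a minimal sketch of that obvious procedure; the bit
    sequences here are hypothetical):

    Code (Python):
     # gt[i], gr[i] = 1 if DT (resp. DR) fired inside the i-th DG-defined window.
     def triple_count(gt, gr):
         """N(GTR): positions where both the GT and GR samples recorded a hit."""
         return sum(t & r for t, r in zip(gt, gr))

     gt = [1, 0, 0, 1, 0, 0, 1, 0]   # hypothetical GT samples, N(G) = 8 windows
     gr = [0, 1, 0, 1, 0, 0, 0, 0]   # hypothetical GR samples
     print(triple_count(gt, gr))     # 1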

    But no, that's not what [1] does. For "mysterious" reasons
    they add a third, separate coincidence unit (AJP.Fig 5; GTR)
    which they tune on its own to extract its own separate sample
    of EM fields. That alone is a gigantic loophole, a secret pocket
    in the magician's sleeve. If the sampling windows GT and GR for
    the new GTR unit are different enough from the earlier GT/GR
    windows (e.g. shifted by just 1.25 ns in opposite directions),
    the classical prediction via (2) will also be g2=0 (just background).
    And as they're about to finish, they pause at the door with an
    'and by the way' look and say "There is one last trick used in setting
    up this threefold coincidence unit," where they explain how they
    switch optical fibers, from DR to DG, and then tune on G+T coincidences
    to stand in for the T+R coincidences, because, they say, "we expect an
    absence of coincidences between T and R" (p. 1216). Well, funny they
    should mention it, since after all this, somehow, I am also
    beginning to expect the absence of any GTR coincidences.
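
    To see how sensitive the triple count is to the GTR unit's window
    placement, here is a sketch of the example above: 2.5 ns windows
    shifted by 1.25 ns in opposite directions (the window width is from
    the paper, the shift is the illustrative value mentioned above, not
    a measured setting).

    Code (Python):
     def overlap(a_lo, a_hi, b_lo, b_hi):
         """Overlap (ns) of two time intervals."""
         return max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))

     w = 2.5                                # window width, ns (from the paper)
     t_win = (-w/2, w/2)                    # GT unit's window, centered on the arrival
     t_gtr = (-w/2 + 1.25, w/2 + 1.25)      # GTR unit's T window, shifted +1.25 ns
     r_gtr = (-w/2 - 1.25, w/2 - 1.25)      # GTR unit's R window, shifted -1.25 ns

     print(overlap(*t_win, *t_gtr))         # 1.25 -- only half the original window remains
     print(overlap(*t_gtr, *r_gtr))         # 0.0  -- the GTR unit's own T and R windows
                                            #         share no instant, so simultaneous
                                            #         T/R pulses can't register as triples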

    Also, unlike the GT and GR units, which operated in START/STOP
    TAC mode, the GTR unit operated in START GATE mode, where a
    separate pulse from DG is required to enable the acceptance
    of DT and DR signals (here the DT was used as START and DR as
    STOP input to TAC/SCA, see fig 5, while in the other two units
    G was used for START and T or R for STOP). It surely is getting
    curiouser and curiouser, all these seeming redundancies with all
    their little differences.


    The experiment [3] also had a 3rd GTR unit with its own tuning,
    but they didn't give any details at all. The AJP authors [1] give
    only the qualitative sketch, but no figures on the T and R sampling
    window positions for the GTR unit (e.g. relative to those from GT and
    GR units) were available from the chief author. Since the classical
    prediction is sensitive to the sampling window positions, and can
    easily produce via (2) anything from g2=0 to g2>1, just by changing
    the GTR windows, this is a critical bit of data the experimenters
    should provide, or at least mention how they checked it and what
    the windows were.

    Of course, after that section, I was about ready to drop it
    as yet another phony 'quantum magic show'. Then I noticed
    at the end they give part numbers for their TAC/SCA units
    (p1218, Appendix C): ORTEC TAC/SCA 567. The data sheet for
    the model 567 lists the required delay of 10ns for the START
    (which was here DT signal, see AJP.Fig 5) from the START GATE
    signal (which was here DG signal) in order for START to get
    accepted. But the AJP.Fig 5, and the several places in the
    text give their delay line between DG and DT as 6 ns. That
    means when DG triggers at t0, 6ns later (+/- 1.25ns) the
    DT will trigger (if at all), but the TAC will ignore it
    since it won't be ready for another 4ns. Then,
    at t0+10ns the TAC is finally enabled, but without START
    no event will be registered. The "GTR" coincidence rate
    will be close to the accidental background (slightly above it,
    since if the real T doesn't trip DT and a subsequent
    background DT hit arrives at t0+10ns, then the DR trigger, which
    is now more likely than background, at t0+12ns will
    allow the registration). And that is exactly the g2 they
    claim (other papers claim much smaller violations, and
    only on subtracted data, not the raw data, which is how
    it should be: "nonclassicality" as the Quantum Optics
    term of art, not anything nonclassical for real).
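
    A sketch of that timing argument (the 10 ns figure is the data
    sheet's required gate-to-START delay, the 6 ns is the delay line
    stated in the paper; the acceptance rule below is my simplified
    reading of start-gate mode, not the authors' actual electronics):

    Code (Python):
     # Simplified start-gate rule: a START pulse is accepted only if it arrives
     # at least `min_gate_to_start` after the gate pulse (per the data sheet).
     def start_accepted(t_gate, t_start, min_gate_to_start=10.0):
         return (t_start - t_gate) >= min_gate_to_start

     t_gate = 0.0                            # DG (gate) pulse, ns
     print(start_accepted(t_gate, 6.0))      # False: a genuine DT pulse at +6 ns is ignored
     print(start_accepted(t_gate, 7.25))     # False: even at the +1.25 ns window edge
     print(start_accepted(t_gate, 10.0))     # True: only late (background) DT hits survive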

    So, as described in [1], the GTR unit settings will cut off
    almost all genuine GTR coincidences, yielding the "perfect"
    g2. I did ask the chief experimenter about the 10ns delay
    and the inconsistency. Oh, he knew it all along, and they had
    actually used the proper delay (and not the 6ns stated in the
    paper), but the paper was too long for such details. Lemme see,
    they wrote the paper and the AJP told them to shorten it to such
    and such length. Now, say, the six of them sweated for a day
    editing the text, and just had it at about the right length,
    except for 5 extra characters. Now, having cut out all they could
    think of, they sit at the table wondering, vot to do? Then suddenly
    lightning strikes and they notice that if they were to replace
    the delay they actually used (say 15ns, or anything above 11.5ns,
    thus 2+ digits anyway) with 6 ns they could shorten the text by
    10 characters. Great, let's do it, and so they edit the paper,
    replacing the "real" delay of say 15ns with the fake delay of 6ns.
    Yep, that's how the real delay which was actually and truly greater
    than 10ns must have become the 6ns reported in the paper in 10 places.
    Very plausible - it was the paper length that did it.

    And my correspondent ended his reply with: "If I say I measured
    triple coincidences (or lack thereof) then I did. Period. End of
    discussion." Yes, Sir. Jahwol, Herr Professor.



    --- Additional References:

    2. J. F. Clauser, ``Experimental distinction between the quantum and
    classical field-theoretic predictions for the photoelectric effect,''
    Phys. Rev. D 9, 853-860 (1974).

    3. P. Grangier, G. Roger, and A. Aspect, ``Experimental evidence for
    a photon anticorrelation effect on a beam splitter: A new light on
    single-photon interferences,'' Europhys. Lett. 1, 173-179 (1986).

    4. R. J. Glauber, "Optical coherence and photon statistics," in Quantum Optics and Electronics, ed. C. DeWitt-Morette, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
     
  3. ZapperZ


    If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion.

    Zz.
     
  4. You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would report a shorter time delay because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.
     
  5. ZapperZ


    I have been aware of the paper since the week it appeared online and have been highlighting it ever since. If it is true that they either messed up the timing, or simply didn't publish the whole picture, then the referee didn't do as good a job as he/she should, and AJP should be made aware via a rebuttal. Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

    Furthermore, the section that this paper appeared in has no page limit as far as I can tell, if one is willing to pay the publication cost. So them saying the paper was getting too long is a weak excuse.

    However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

    Zz.
     
  6. vanesch


    Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply ; that doesn't mean you should complain that he forgot to power his devices.
    You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

    It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

    cheers,
    Patrick.
     
  7. Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

    Well, if I write one, I would like to see it, and see the replies, etc.

    However, what I can gather from what you wrote, and what has been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.

    It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim a nearly perfect "quantum" g2, without any subtractions (of accidental coincidences and of unpaired DG triggers, which lowers g2 substantially via eq (2), where N(G) would drop from 100,000 c/s to 8,000 c/s). If true, it would imply genuine physical collapse in a spacelike region. If they had it for real, you would be looking at Nobel prize work. But it just happens to be contrary to the facts, experimental or theoretical.
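
    For illustration, here is the sensitivity of eq (2) to that N(G) figure (the coincidence counts are hypothetical; only the 100,000 and 8,000 c/s gate figures come from the discussion above):

    Code (Python):
     # g2 via eq (2) with all DG triggers counted vs. only the paired ones.
     def g2(n_g, n_gt, n_gr, n_gtr):
         return (n_gtr * n_g) / (n_gt * n_gr)

     n_gt, n_gr, n_gtr = 4_000, 4_000, 3       # hypothetical per-second tallies
     print(g2(100_000, n_gt, n_gr, n_gtr))     # ~0.019  (all DG singles kept)
     print(g2(  8_000, n_gt, n_gr, n_gtr))     # ~0.0015 (unpaired DG singles removed)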

    Check for example a recent preprint by Chiao & Kwiat where they do acknowledge in their version that no real remote collapse was shown, although they still like that collapse imagery (for its heuristic and mnemonic value, I suppose). Note also that they acknowledge that a classical model can account for any g2 >= eta (the setup efficiency, when accounting for the unpaired DG singles in the classical model), which is much smaller than 1. Additional subtraction of background accidentals can lower the classical g2 still further (if one accounts for it in the classical model of the experiment). The experimental g2 would have to go below both effects to show anything nonclassical -- or, put simply, the experiment would have to show the violation on the raw data, and that has never happened. The PDC source, though, cannot show anything nonclassical since it can be perfectly modelled by a local semiclassical theory, the conventional Stochastic Electrodynamics (see also the last chapter in Yariv's Quantum Optics book on this).
     
  8. ZapperZ


    If those ARE the missing delays that are the subject here, then yes, I'd agree with you. *I* personally do not consider those in my electronics since we have calibrated them to make sure we measure only the time parameters of the beam dynamics rather than our electronics delay.

    Again, the important point here is the central point of the result. Would that significantly change if we consider such things? From what I have understood, I don't see how that would happen. I still consider this as a good experiment for undergrads to do.

    Zz.
     
  9. ZapperZ


    OK, so now YOU are the one offering a very weak excuse. I'll make a deal with you. If you write it, and it gets published, *I* will personally make sure you get a copy. Deal?

    There appears to be something self-contradictory here, Caroline. You first do not believe their results, and then you said, look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

    If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

    Zz.
     
  10. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component.

    Not quite so. If the delay between the DG and DT pulses was more than 6ns, say it was 15ns, why was 6ns written in the paper at all? I see no good reason to change the reported delay from 15ns to 6ns. There is no economy in stating it was 6ns.

    I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ;

    There is no effective vs. real delay. It is a plain delay between the DG pulse and the DT pulse. It is not as if it would have added any length to the article to speak of.

    You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

    And a detector has latency and dead time, etc. That's not relevant. Talking about needless elements here: adding the third unit, going to the trouble of tuning its independent sampling of the EM fields on T and R, and describing it all is just fine, but saying that the delay was 15ns instead of 6ns is way too much trouble. Yeah, sure, that sounds very plausible.

    If I wanted to do the collapse magic trick, I would add that third unit, too. There are so many more ways to reach the "perfection" with that unit in there than to simply reuse the already obtained samples from the GT and GR units (as AND-ed signal).

    In any case, are you saying you can show violation (g2<1) without any subtractions, on raw data?
     
  11. ZapperZ


    1. It's interesting that you would even know WHO I was referring to.

    2. You also ignored the self-contradiction that you made.

    3. And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

    Zz.
     
  12. There appears to be something self-contradictory here.... You first do not believe their results, and then you said, look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

    No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (of the accidentals and unpaired singles; the two can be traded off by settings on the detectors, window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1, we define such a phenomenon as "nonclassical"). But a classical model which also does the same subtraction on its predictions will work fine.


    If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

    As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which, from their g2 of nearly 0, I don't think it is, unless they had a few backup "last tricks" already active in the setup). But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).
     
  13. ZapperZ


    Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made. Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

    No, you are not being "nice" at all. In fact, all I see is a cop out. Being "nice" means taking the responsibility to correct something that one sees as a possible error or misleading information and informing the community in question about it. You refused to do that. Instead, you whinned about it here where it possibly will make ZERO impact on anything. So why even bother in the first place? The authors may not make any corrections at all based on what you have described as their response, so most likely the paper will stand AS IS. They will continue to get the recognition, you get NOTHING.

    Zz.
     
  14. ZapperZ


    Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She has also repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics.

    So the person you championed is ignorant of the very subject she's criticizing, by her own admission.

    I agree. The kind of info you are advertising fits very nicely in the bastion of the Usenet.

    It also appears that this conspiracy theory about physics journals has finally reared its ugly head again. Have you ever TRIED submitting to AJP, ever? Or had you already made up your mind and used that prejudice in deciding what they will do? It is ironic that you are criticizing a paper in which you claim they possibly made a biased decision on what to select, and yet you practice the very same thing here. I hate to think you do this often in your profession.

    Zz.
     
  15. DrChinese


    That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason. They were wrong, although many (including Caroline and yourself) can't see it. It is clear the direction things are going in: more efficiency means greater violation of the local realistic position. So naturally you don't want the Thorn paper to be good, it completely undermines your theoretical objections to Bell tests - i.e. photons are really waves, not particles.

    But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory. When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

    I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow. Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers. When he added them, the results did not change. A scientist would conclude that a signal is not being sent from one measuring apparatus to the other at a speed of c or less. Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

    Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look at it as hard as the LR crew has.

    I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work? Hmmm.
     
  16. ZapperZ


    Yes it does, because you were the one who made her out to be what she's not - someone who possesses the knowledge to know what she's talking about. Unless you believe that one can learn physics in bits and pieces, and that its parts are not interconnected, knowing one aspect of it does not necessarily mean one has understood it. This is what she does, and this is whom you are touting.

    But it IS physics - it is the PRACTICE of physics, which I do every single working day. Unlike you, I cannot challenge something based on what you PERCEIVE as the whole raw data. While you have no qualms in already making a definitive statement that the raw data would agree with a "semi-classical" model, I can't! I haven't seen this particular raw data and simply refuse to make conjectures on what it SHOULD agree with.

    Again, I will ask you if you have made ANY dark current or dark counts measurement, and have looked at the dark count energy spectrum, AND based on this, if we CAN make a definitive differentiation between dark counts and actual measurements triggered by the photons we're detecting. And don't tell me this is irrelevant, because it boils down to the justification of making cuts in the detection count not only in this type of experiment, but in high energy physics detectors, in neutrino background measurements, in photoemission spectra, etc., etc.

    Zz.
     
  17. Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made.

    All the PDC source (as well as laser & thermal light) can produce is semi-classical phenomena. You need to read Glauber's derivation (see [4]) of his G functions (called "correlation" functions) and his detection model, and see the assumptions being made. In particular, any detection is modelled as a quantized EM field-atomic dipole interaction (which is OK for his purpose) and expanded perturbatively (to 1st order for a single detector, n-th order for n detectors). The essential points for Glauber's n-point G coincidences are:

    a) All terms with vacuum-produced photons are dropped (which results in the normally ordered products of creation and annihilation EM field operators). This formal procedure corresponds to the operational procedure of subtracting the accidentals and unpaired singles. For a single detector that subtraction is a non-controversial local procedure and it is built into the design, i.e. the detector doesn't trigger if no external light is incident on its cathode. {This of course is only approximate, since the vacuum and the signal fields are superposed when they interact with electrons, so you can't subtract the vacuum part accurately from just knowing the average square amplitude of the vacuum alone (which is hv/2 per EM mode on average) and the square amplitude of the superposition (the vector sum); i.e. knowing only V^2 and (V+S)^2 (V and S are vectors), you can't deduce what S^2, the pure signal intensity, is. Hence, the detector by its design effectively subtracts some average over all possible vectorial additions, which it gets slightly wrong on both sides -- subtractions which are too small result in (vacuum) shot noise, and subtractions which are too large result in missing some of the signal (absolute efficiency less than 1; note that the conventional QE definition already includes background subtractions, so QE could be 1, but with a very large dark rate, see e.g. the QE=83% detector, with noise comparable to the signal photon counts).}

    But for multiple detectors, the vacuum removal procedure built into Glauber's "correlations" is non-local -- you cannot design a "Glauber" detector, not even in principle, which can subtract locally the accidental coincidences or unpaired singles. And these are the "coincidences" predicted by Glauber's Gn() functions -- they predict, by definition, the coincidences modified by the QO subtractions. All their nonlocality comes from a non-local formal operation (the dropping of terms with absorptions of spacelike vacuum photons), or operationally, from inherently non-local subtractions (you need to gather data from all detectors, and only then can you subtract accidentals and unpaired singles). Of course, when you add this same nonlocal subtraction procedure to semi-classical correlation predictions, they have no problem replicating Glauber's Gn() functions. The non-locality of the Gn() "correlations" comes exclusively from non-local subtractions, and it has nothing to do with a particle-like indivisible photon (that assumption never entered Glauber's derivation; it is a metaphysical add-on, with no empirical consequences or a formal counterpart with such properties, grafted on top of the theory for its mnemonic and heuristic value).

    b) Glauber's detector (the physical model behind the Gn(), or the g2=0 of eq. 8 of the AJP paper) produces 1 count if and only if it absorbs the "whole photon" (which is shorthand for the quantized EM field mode; e.g. an approximately localized single photon |Psi.1> is a superposition of vectors Ck(t)|1k>, where Ck(t) are complex functions of time and of the 4-vector k=(w,kx,ky,kz), and |1k> are Fock states in some basis, which depend on k as well; note that the hv quantum for the photon energy is an idealization applicable to an infinite plane wave). This absorption process in Glauber's perturbative derivation of Gn() is a purely dynamical process, i.e. the |Psi.1> is absorbed as an interaction of the EM field with an atomic dipole, all being purely local EM field-matter field dynamics treated perturbatively (in the 2nd quantized formalism). Thus, to absorb the "full photon" the Glauber detector has to interact with all of the photon's field. There is no magic collapse of anything in this dynamics (the von Neumann classical-quantum boundary is moved one layer above the detector) -- the fields merely follow their local dynamics (as captured in the 2nd quantized perturbative approximation).

    Now, what does g2=0 (where g2 is eq. AJP.8) mean for the single photon incident field? It means that T and R are fields of that photon, and the Glauber detector which absorbs and counts this photon, and to which the g2 of eq. (AJP.8) applies, must interact with the entire mode, which is |T> + |R>; which means the Glauber detector which counts 1 for this single photon is spread out to interact with/capture both the T and R beams. By its definition this Glauber detector leaves the EM vacuum as the result of the absorption, thus any second Glauber detector (which would have to be somewhere else, e.g. behind it in space, so no signal EM field would reach it) would absorb and count 0, just the vacuum. That of course is the trivial kind of "anticorrelation" predicted by the so-called "quantum" g2=0 (the g2 of eq. AJP.8). There is no great mystery about it; it is simply a way to label the T and R detectors as one Glauber detector for the single photon |Psi.1>=|T>+|R>, and then declare that it has count 1 when one or both of DT or DR trigger, and count 0 if neither DT nor DR triggers. It is a trivial prediction. You could do the same (as is often done) with photo-electron counts on a single detector: declare 1 when 1 or more photo-electrons are emitted, 0 if none is emitted. The only difference is that here this G detector would be spread out to capture the distant T and R packets (and you don't differentiate their photoelectrons regarding the declared counts of the G detector).

    The actual setup of the AJP experiment has two separate detectors. Neither of them is the Glauber detector for the single mode superposed from the T and R vectors, since they don't interact with the "whole mode" but only with part of it. The QED model of the detector (which is not Glauber's detector any more) for this situation doesn't predict g2=0 but g2>=1, the same as the semiclassical model does: each detector triggers on average half the time, independently of the other (within the gate G window, when both T and R have the same intensity PDC pulse). Since each detector here gets just half of the "signal photon" field, the "missing" energy needed for its trigger is just the vacuum field which is superposed with the signal photon (with its field) on the beam splitter (cf. eq. AJP.10, the a_v operators). If you were to split each of T and R further into halves, and then these 1/4 beams into further halves, and so on for L levels, with N=2^L final beams and N detectors, the number of triggers for each G event would follow the binomial distribution (provided the detection time is short enough that the signal field is roughly constant; otherwise you get a compound binomial, which is super-Poissonian), i.e. the probability of exactly k detectors triggering is p(k,N)=C(N,k)*p^k*(1-p)^(N-k), where p=p0/N, and p0 is the probability of a trigger of a single detector capturing the whole incident photon (which may be defined as 1 for an ideal detector and an ideal 1-photon state). If N is large enough, you can approximate the binomial distribution p(k,N) with the Poissonian distribution p(k)=a^k exp(-a)/k!, where a=N*p=p0, i.e. you get the same result as the Poissonian distribution of the photoelectrons (i.e. these N detectors behave as the N electrons of a cathode of a single detector).
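
    A sketch of that binomial-to-Poissonian limit, in the notation used above (p0 is taken as 1, the ideal detector and ideal one-photon state; N is an arbitrary illustrative power of 2):

    Code (Python):
     from math import comb, exp, factorial

     def p_binomial(k, N, p0=1.0):
         """p(k,N) = C(N,k) * p^k * (1-p)^(N-k), with p = p0/N."""
         p = p0 / N
         return comb(N, k) * p**k * (1 - p)**(N - k)

     def p_poisson(k, a):
         """Poissonian limit, a = N*p = p0."""
         return a**k * exp(-a) / factorial(k)

     N, p0 = 1024, 1.0     # N = 2^L detectors after L = 10 splittings
     for k in range(4):
         print(k, round(p_binomial(k, N, p0), 5), round(p_poisson(k, p0), 5))
     # The two columns nearly coincide: the N weak detectors reproduce the
     # Poissonian photoelectron statistics of a single detector.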

    In conclusion, there is no anticorrelation effect "discovered" by the AJP authors for the setup they had. They imagined it and faked the experiment to prove it (compare the AJP claim to the much more cautious claim of the Chiao & Kwiat preprint cited earlier). The g2 of their eq (8), which does have g2=0, applies to the trivial case of a single Glauber detector absorbing the whole EM field of the "single photon" |T>+|R>. No theory predicts nonclassical anticorrelation, much less anything non-local, for their setup with the two independent detectors. It is pure fiction resulting from an operational misinterpretation of QED for optical phenomena in some circles of Quantum Opticians and QM popularizers.

    Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

    Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of an enumerative kind, like a pigeonhole principle. Namely, if you can't violate the classical inequalities on raw data, you can't possibly claim a violation on subtracted data, since any such subtraction can be added to the classical model; it is a perfectly classical extra operation.

    The only way the "violation" is claimed is to adjust the the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).

    -- Ref

    4. R. J. Glauber, "Optical coherence and photon statistics," in Quantum Optics and Electronics, ed. C. DeWitt-Morette, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
     
  18. ZapperZ


    Wait... so how did what you wrote answer the question I asked?

    This isn't just a S/N ratio issue! It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

    I will ask for the third time: Have you performed a dark-count measurement on a photodetector and measured the dark current spectrum from it?

    Zz.
     
  19. That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason. They were wrong,

    How about substantiating your assertion? Can you give me a specific paper and result, and what was "wrong" (maybe you meant "unpopular" or "unfashionable" instead of "wrong") about it?

    It is clear the direction things are going in: more efficiency means greater violation of the local realistic position.

    Yep, better, like these AJP 377 standard deviations, after they short-circuit the triple coincidences out of the loop altogether. There are a handful of fraudsters selling quantum computers to investors. Let me know when any of it actually works. After 30+ years of claims, your claim sounds just like the perpetuum mobile excuses from centuries ago (before thermodynamics): it's just around the corner, as soon as this friction can be taken out and this reservoir temperature dropped a few thousand degrees lower.

    So naturally you don't want the Thorn paper to be good, it completely undermines your theoretical objections to Bell tests - i.e. photons are really waves, not particles.

    This AJP paper's claim is plain wrong. Their timings don't add up; even the author acknowledges that much. You can check the paper and the data sheet and 'splain to me how that works. Their result contradicts the theory as well -- the g2<1 occurs only for subtracted data, not the raw counts. There is nothing nonclassical about that kind of g2<1, since the semi-classical model can be extended to subtract the same way as done in the experiment, or as done in the QO formalism via Glauber's normal-order prescription for his correlation functions.

    But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory.

    That's what worries me. A bunch of deluded kids trying to become physicists, imagining instant vanishing of EM fields at spacelike distances, just because some far detector somewhere triggered. That effect doesn't exist. The actual collapse of EM fields does occur, as shown in the semi-classical and 2nd quantized theories of the photoeffect -- it occurs through purely local interactions -- the photons do vanish. Check for example what a highly respected Quantum Optician from MIT, Hermann Haus, wrote on this matter.
    When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

    You can tell me when it happens. As it stands in print, it can't work. The g2~0 is due to an experimental error. You can't wave away the 6ns delay figure, given in 10 places in the paper, by blaming the article length restriction. If it was 15ns or 20ns and not 6ns, the article length difference is negligible. It's the most ridiculous excuse I have heard, at least since the age when dogs ate homework.

    I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow.

    Anything to back up your faith in the AJP paper's claim?

    Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers.

    No, that was a red herring, a pretense of fixing something that wasn't a problem. Nobody seriously thought, much less proposed a model or theory, that the far-apart detectors or polarizers are somehow communicating which position they are in, so that they could all conspire to replicate the alleged correlations (which were never obtained in the first place, at least not on the measured data; the "nonclassical correlations" exist only on vastly extrapolated "fair sample" data, which is, if you will pardon the term, imagined data). It was a self-created strawman. Why don't they tackle and test the fair sampling conjecture? There are specific proposals on how to test it, you know (even though the "quantum magic" apologists often claim that "fair sampling" is untestable), e.g. check Khrennikov's papers.

    Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

    Yes, the change is that 30+ years of promises have passed. And oddly, no one wants to touch the "fair sampling" tests. All the fixes fixed what wasn't really challenged. Adding a few std deviations or the aperture depolarization (which PDC fixed relative to the cascades) wasn't a genuine objection. The semiclassical models weren't even speculating in those areas (one would need some strange conspiratorial models to make those work). I suppose it is easier to answer the questions you or your friends pose to yourselves than to answer with some substance the challenges from the opponents.


    Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look at it as hard as the LR crew has.

    Term "loophole" is a verbal smokescreen. It either works or doesn't. The data either violates inequalities or it doesn't (what some imagined data does, doesn't matter, no matter how you call it).

    I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work?

    This experiment is contrary to the QED prediction (see the reply to Zapper, items a and b on Glauber's detectors and the g2=0 setup). You can't obtain g2<1 on raw data. Find someone who does it and you will be looking at a Nobel prize experiment, since it would overturn the present QED -- it would imply the discovery of a mechanism to perform spacelike non-interacting absorption of the quantized EM fields, which according to the existing QED evolve only locally and can be absorbed only through the local dynamics.
     
  20. DrChinese


    You have already indicated that no evidence will satisfy you; you deny all relevant published experimental results! I don't. Yet, time and time again, the LR position must be advocated by way of apology because you have no actual experimental evidence in your favor. Instead, you explain why all counter-evidence is wrong.

    Please tell me, what part of the Local Realist position explains why one would expect all experiments to show false correlations? (On the other hand, please note that all experiments just happen to support the QM predictions out of all of the "wrong" answers possible.)

    The fact is, the Thorn et al experiment is great. I am pleased you have highlighted it with this thread, because it is the simplest way to see that photons are quantum particles. Combine this with the Marcella paper (ZapperZ's ref on the double slit interference) and the picture is quite clear: Wave behavior is a result of quantum particles and the HUP.
     
  21. Wait... so how did what you wrote answer the question I asked? This isn't just a S/N ratio issue!

    Subtracting background accidentals and unpaired singles is irrelevant for the question of the classicality of the experiment, as explained. If the semiclassical model M of the phenomenon explains the raw data RD(M), and you then perform some adjustment operation A and create adjusted data A(RD(M)), then for the classical model to follow, it simply needs to model the adjustment procedure itself on its original prediction, which is always possible (subtracting experimentally obtained numbers from the predicted correlations and removing others which the experiment showed to be unpaired singles -- it will still match because the unadjusted data matched, and the adjustment counts come from the experiment, just as you do to compare to the QO prediction via Glauber's Gn(). The only difference is that Glauber's QO predictions refer directly to subtracted data while the semiclassical ones refer to either, depending on how you treat the noise vs. signal separation, but both have to use the experimentally obtained corrections and the raw experimental correlations, and both can match filtered or unfiltered data).

    It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

    These are interesting and deep topics, but they are just not related to the question of "nonclassicality".

    I will ask for the third time: Have you performed a dark-count measurement on a photodetector and measured the dark current spectrum from it?

    Yes, I had a whole lot of lab work, especially in graduate school at Brown (the undergraduate work in Belgrade had less advanced technology, more classical kinds of experiments, although it did have a better nuclear physics lab than Brown, and I avoided that one as much as I could), including the 'photon counting' experiments with the full data processing, and I hated it at the time. It was only years later that I realized that it wasn't all that bad and that I should have gotten more out of it. I guess that wisdom will have to wait for the next reincarnation.
     