Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

by nightlight
Tags: 2004, experiment, photon, thorn, wave collapse, yeah
P: 186
 Quote by vanesch I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool.
Well, the collapse in the sense of the beam-splitter anticorrelations discussed in this thread. In our detector discussion you definitely took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR. You argued that there was a drop in the DT trigger probability in this case, and that this drop is genuine (in the sense of the anticorrelation not being an artifact of subtractions of accidentals and unpaired singles). As I understand it, you don't believe any more in that kind of non-interacting spacelike anticorrelation (the reduction of the remote subsystem state). You certainly did argue consistently that it was a genuine anticorrelation and not an artifact of subtractions.
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by nightlight You definitely, in our detector discussion took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR.
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

I just happen to be an observer in one of the two branches. But the other one can happily exist. Collapse means that suddenly, that other branch "disappears". It doesn't have to. I will simply not observe it, because I don't happen to be in that branch.

Observationally this is of course indistinguishable from the claim that the branch "I am not in" somehow "doesn't exist anymore". If you do that, you have a collapse. But I find it more pleasing to say that that branch still exists, but is not open to my observation, because I happen to have made a bifurcation in another branch.
It all depends on what you want to stick to: do you stick to the postulate that things can only exist when I can observe them, or do you stick to esthetics in the mathematical formalism ? I prefer the latter. You don't have to.

cheers,
Patrick.
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by nightlight ORTEC TAC/SCA 567. The data sheet for the model 567 lists the required delay of 10ns for the START (which was here DT signal, see AJP.Fig 5) from the START GATE signal (which was here DG signal) in order for START to get accepted. But the AJP.Fig 5, and the several places in the text give their delay line between DG and DT as 6 ns. That means when DG triggers at t0, 6ns later (+/- 1.25ns) the DT will trigger (if at all), but the TAC will ignore it since it won't be ready yet, and for another 4ns. Then, at t0+10ns the TAC is finally enabled, but without START no event will be registered. The "GTR" coincidence rate will be close to accidental background (slightly above since if the real T doesn't trip DT and the subsequent background DT hits at t0+10, then the DG trigger, which is now more likely than background, at t0+12ns will allow the registration).
I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet there is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ? I wouldn't think so ! Doing quite some electronics development myself, I know that when I specify some limits on utilisation, I'm usually sure of a much better performance ! So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
So I'm pretty sure that the 6ns is sufficient to make it actually trigger.
A point is of course that in this particular example, the remedy is extremely simple: use longer delay lines !!
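For reference, the timing issue being argued about can be checked in a few lines. This is only a sketch using the figures quoted in the thread (10 ns gate-to-START setup time from the data sheet, 6 ns delay line, +/- 1.25 ns jitter); the helper function and names are mine, not from the paper:

```python
import math  # not strictly needed; kept for clarity of units below

GATE_SETUP_NS = 10.0   # data-sheet minimum GATE-to-START lead time
DELAY_LINE_NS = 6.0    # DG -> DT delay quoted in the AJP paper
JITTER_NS = 1.25       # timing spread quoted in the thread

def start_accepted(delay_ns, jitter_ns=0.0):
    """True if START arrives after the gate has been active long enough."""
    return (delay_ns - jitter_ns) >= GATE_SETUP_NS

# With the paper's 6 ns delay the spec is violated even without jitter:
print(start_accepted(DELAY_LINE_NS))        # False per the data sheet

# The "longer delay lines" remedy: extra delay needed to meet the spec
# even in the worst jitter case (coax propagates roughly 0.2 m per ns).
extra_ns = GATE_SETUP_NS - (DELAY_LINE_NS - JITTER_NS)
print(f"need >= {extra_ns:.2f} ns more delay")   # 5.25 ns
```

Whether the real unit still triggers at 6 ns, as argued above, is of course exactly the point in dispute; the sketch only restates the data-sheet arithmetic.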
I've already been in such a situation, you know: you do an experiment, everything works OK, you write up your stuff and you submit it. Then there is a guy who points out that you didn't respect a certain specification. So you get pale, and you wonder if you have to retract your submission. You think about it, and then you say: ah, but simply using longer delay lines will do the trick ! So you rush to the lab, you correct for the error, and you find more or less equivalent results ; it didn't matter in the end. What do you do ? Do you write to the editor saying that you made a mistake in the paper, but that it actually doesn't matter, if only you're allowed to change a few things ??
OF COURSE NOT. You only do that if the problem DOES matter.
So that's, in my opinion, what happened, and that's why the author told you that these 6ns delays are not correct, but that everything is all right anyway.
There's also another very simple test to check the coincidence counting: connect the three inputs G, T and R to one single pulse generator. In that case, you should find identical counts in GT, GR and GTR.

After all, they 1) WORK (even if they don't strictly respect the specs of the counter) 2) they are in principle correct if we use fast enough electronics, 3) they are a DETAIL.
Honestly, if it WOULDN'T WORK, then the author has ALL POSSIBLE REASONS to publish it:
1) he would have discovered a major thing, a discrepancy with quantum optics
2) people will soon find out ; so he would stand out as the FOOL THAT MISSED AN IMPORTANT DISCOVERY BECAUSE HE DIDN'T PAY ATTENTION TO HIS APPARATUS.

After all, he suggests, himself, to make true counting electronics. Hey, if somebody pays me enough for it, I'll design it myself ! I'd think that for $20,000 I could make you a circuit without any problem. That's what I do part of my job time.

I'll see if we have such an ORTEC 567 and try to gate it at 6ns.

Now, if you are convinced that it IS a major issue, just try to find out a lab that DID build the set up, and promise them a BIG SURPRISE when they change the cables into longer ones, on condition that you will share the Nobel prize with them.

cheers,
Patrick
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by vanesch So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

cheers,
Patrick.
 P: 186 This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds... Your example shows an additional reason why it is not plausible in this case. In your case there were no ten other vendors with competing designs and specs fighting for the same customer. You had a monopoly and what you say goes, so you're better off being conservative in stating limitations. In a competitive situation, the manufacturers will push the specs as far as they can get away with (i.e. in their cost/benefit analysis they estimate how much they will lose from returned units vs. loss of sales, prestige, and customers to the competition). Just think of what limits you would state had that been a competition for that job -- there were ten candidates, they all design a unit to the given minimum requirements (but they're allowed to improve on them), and the employer will pick the one they like best.
 P: 186 I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ? No, it didn't work by luck. The author acknowledges that the figure 6ns (and thus the 12ns) is wrong and that they have used the correct longer timing. You can ask him at his web page (just be tactful, he got very angry after I asked him about it). The "leave it alone, don't issue errata, since it worked" argument doesn't wash either. It didn't work. The g2=0 of eq (AJP.8) applies to the normalized Glauber correlation function, which means you have to subtract accidentals and remove the unpaired singles (the counts on DG for which there were no T or R events). Otherwise you haven't removed the vacuum photon effects as the G2() does (see his derivation in [4]). Both adjustments lower the g2 in (AJP.14). But they already got a nearly "perfect" result with raw counts. Note that they used N(G), which is 100,000 c/s, for their singles count in eq (AJP.14), while they should have used N(T+R), which is 8000 c/s. Using raw counts in eq (14), they could only get g2>=1. See the Chiao, Kwiat paper, page 11, where they say that the classical g2 >= eta, where eta is the "coincidence-detection efficiency." They also acknowledge that the experiment does have a semi-classical model (after all, Marshall & Santos had already shown that for the Grangier et al. 1986 case, way back then). Thus they had to call upon already established Bell's inequality violations to discount the semi-classical model for their experiment (p 11). They understand that this kind of experiment can't rule out the semi-classical model on its own, since there is no such QED prediction. With Thorn et al, it is so "perfect" it does it all by itself, and even without any subtractions at all. 
Keep also in mind that this is AJP and the experiment was meant to be a template for other undergraduate labs (as the title also says), to show their students, inexpensively and reliably, the anticorrelation. That's why the figure 6ns is all over the article (in 11 places). That's the magic ingredient that makes it "work". Without it, on raw data you will have g2>=1.
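The numerical point about the singles rate in eq (AJP.14) can be sketched in a few lines. The two singles rates are the ones quoted in this thread; the pair- and triple-coincidence rates below are hypothetical placeholders (only the ratio of the two g2 values matters, and it is independent of them):

```python
# g2 estimator in the style of eq. (AJP.14):
# g2 = N(GTR) * N(singles) / (N(GT) * N(GR))

def g2(n_gtr, n_singles, n_gt, n_gr):
    """Eq. (AJP.14)-style degree-of-coherence estimator."""
    return n_gtr * n_singles / (n_gt * n_gr)

N_G = 100_000           # N(G) gate singles rate (c/s), quoted above
N_TR = 8_000            # N(T)+N(R) singles rate (c/s), quoted above
N_GT = N_GR = 4_000.0   # hypothetical pair-coincidence rates
N_GTR = 10.0            # hypothetical triple-coincidence rate

g2_with_NG = g2(N_GTR, N_G, N_GT, N_GR)
g2_with_NTR = g2(N_GTR, N_TR, N_GT, N_GR)
print(g2_with_NG / g2_with_NTR)   # 12.5, the factor discussed in the thread
```

Swapping N(G) for N(T)+N(R) rescales the estimator by 100,000/8,000 = 12.5, whatever the coincidence rates happen to be.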
 P: 186 Ah, yes, and I still do. But that has nothing to do with true collapse or not ! There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. Oh, good ol' MWI. We went this route before and you were backed into solipsism. Anything goes there. There will never be a branch where DR and DT trigger together (apart from double events). If you have an optical lab, try it out. That won't happen. DT and DR will trigger no differently than classically, i.e. your raw GTR coincidence data (nothing removed) plugged into (AJP.14) will give you g2>=1 (you'll probably get at least 1.5). The usual QO S/N enhancements via subtractions should not be done here. Namely, if your raw counts (which is what the classical model with g2>=1 applies to) don't violate g2>=1, there is no point subtracting and checking whether the adjusted g2a goes below 1, since the same subtraction can be added to the classical model (so that it, too, corresponds to what was done with the data in the experiment) and it will follow the adjusted data as well and show g2a<1 (as acknowledged in the Chiao, Kwiat paper). Note also that the conventional QO "correlations", Glauber's Gn(), already include in their definition the subtraction of accidentals and unpaired singles (the vacuum photon effects). All of the so-called nonclassicality of Quantum Optics is the result of this difference in data post-processing conventions -- they compute the classical model for raw counts, while their own QO post-processing convention in Gn() is to count, correlate, and then subtract. So the two predictions will be "different" since by their definitions they refer to different quantities. 
The "classical" g2 of (AJP.1-2) is operationally a different quantity than the "quantum" g2 of eq (AJP.8), but in QO nonclassicality claims they will use the same notation and imply they refer to the same thing (plain raw correlation), then proclaim non-classicality when the experiment using their QO convention (with all subtractions) matches the Gn() version of g2 (by which convention, after all, the data was post-processed), not the classical one (since it didn't use its post-processing convention). You can follow this QO magic-show recipe right in this AJP article, that's exactly how they set it up {but then, unlike most other authors, they get too greedy and want to show the complete, reproducible "perfection", the real magic, and for that you really need to cheat in a more reckless way than just misleading by omission}. The same goes for most others where QO non-classicality (anticorrelations, sub-Poissonian distributions, etc) is claimed as something genuine (as opposed to being a mere terminological convention for the QO use of the word 'non-classical', since that is what it is and there is no more to it). Note also that the "single photon" case, where the "quantum" prediction is g2=0, has operationally nothing to do with this experiment (see my previous explanation). The "single photon" Glauber detector is a detector which absorbs (via atomic dipole and EM interaction, cf [4]) both beams, the whole photon, which is the state |T>+|R>. Thus the Glauber detector for the single photon |T>+|R> with g2=0 is either a single large detector covering both paths, or an external circuit attached to DT+DR which treats the two real detectors DT and DR as a single detector and gives 1 when one or both trigger, i.e. as if they're a single cathode. 
The double trigger T & R case is simply equivalent to having two photoelectrons emitted on this large cathode (the photoelectron distribution is at best Poissonian for a perfectly steady incident field, but it is compound/super-Poissonian for variable incident fields).
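The classical picture described above (a fluctuating intensity split 50/50, with Poissonian photoelectrons in each arm) can be sketched as a toy Monte Carlo. The thermal-like exponential intensity distribution and the mean of 0.5 photoelectrons per arm are illustrative choices of mine; for any classical intensity statistics the raw cross-correlation g2 = <nT*nR>/(<nT><nR>) cannot go below 1:

```python
import math
import random

def poisson(mean):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(0)
TRIALS = 200_000
sum_t = sum_r = sum_tr = 0.0
for _ in range(TRIALS):
    intensity = random.expovariate(1.0)   # thermal-like classical intensity
    n_t = poisson(0.5 * intensity)        # photoelectrons in the T arm
    n_r = poisson(0.5 * intensity)        # photoelectrons in the R arm
    sum_t += n_t
    sum_r += n_r
    sum_tr += n_t * n_r

g2_raw = (sum_tr / TRIALS) / ((sum_t / TRIALS) * (sum_r / TRIALS))
print(round(g2_raw, 2))   # ~2 for thermal light; classically never below 1
```

For a perfectly steady intensity the same simulation would give g2 ~ 1; the fluctuations only push it upward, never below 1.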
 Sci Advisor PF Gold P: 5,148 Nightlight, You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate? Also, is it your opinion that photons are not quantum particles, but are instead waves? -DrC
 P: 186 You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate? The phenomenological PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics (e.g. see papers from the Marshall & Santos group; recent editions of the well-respected Yariv QO textbook have an extra last chapter which for all practical purposes recognizes this equivalence). Also, is it your opinion that photons are not quantum particles, but are instead waves? Photons in QED are quantized modes of the EM field. For a free field you can construct these in any basis of the Hilbert space, so the "photon number" operator [n] depends on the basis convention. Consequently the answer to the question "how many photons are here" depends on the convention for the basis. (No different than you asking me what the speed number of your car is, and I say 2300; this is obviously meaningless, since you need to know what convention I use for my speed units.) For example, if you have a plane wave as a single mode, in its 1st excited state (as a harmonic oscillator), then in that particular basis you have a single photon; the state is an eigenstate of this [n]. But if you pick other bases, then you'll have a superposition of generally infinitely many of their "photons", and the plane wave is not an eigenstate of their [n]. The QO convention then calls "single photon" any superposition of the type |Psi.1> = Sum(k) of Ck(t) |1_k>, where the sum goes over wave vectors k (a 4-vector) k=(w,kx,ky,kz) and |1_k> are eigenstates of some [n] with eigenvalue 1. This kind of photon is quasi-localized (with a spread stretching across many wavelengths). Obviously, here you don't have the E=hv relation any more, since there is no single frequency v superposed into the "single photon" state |Psi.1>. 
If the localization is very rough (many wavelengths superposed) then you could say that |Psi.1> has some dominant and average v0, and one could say approximately E=hv0. But there is no position operator for a point-like photon (it can't be constructed in QED) and no QED process generates a QED "single photon", the Fock state |1> for some basis and its [n] (except as an approximation in the lowest order of perturbation theory). Thus there is no formal counterpart in QED for a point-like entity hiding somewhere inside the EM field operators, much less for some such point being exclusive. A "photon" of laser light stretches out for many miles. The equations these QED and QO "photons" follow in the Heisenberg picture are the plain old Maxwell equations, for free fields or for any linear optical elements (mirrors, beam splitters, polarizers etc). For the EM interactions, the semiclassical and QED formalisms agree to at least alpha^4 order effects (as shown by Barut's version of semiclassical fields which includes self-interaction). That is 8-9 digits of precision (it could well be more if one were to carry out the calculations). Barut unfortunately died in 1994, so that work has stalled. But their results up to 1987 are described in this ICTP preprint. ICTP has scanned 149 of his preprints; you can get the pdfs here (type Barut in "author"; also interesting is his paper on the revival of Schrodinger's original interpretation of Psi; his semiclassical approach starts in papers from 1980 on). In summary, you can't count them except by convention, they appear and disappear in interactions, there is no point they can be said to be at; they have no position but just approximate regions of space defined by a mere convention of "non-zero values" for the field operators (one can call these wave packets as well, since they move by the plain Maxwell wave equations anyway; and they are detected by the same square-law detection as semiclassical EM wave packets). 
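The "spread stretching across many wavelengths" point above can be illustrated numerically: superposing many k-modes with narrow-band weights yields a packet whose envelope is far wider than one wavelength. The Gaussian weights and the 2% bandwidth below are arbitrary illustrative choices, not numbers from the thread:

```python
import math

K0 = 2 * math.pi        # central wavenumber -> wavelength 1.0 (arb. units)
DK = 0.02 * K0          # narrow spread in k (2% bandwidth)

def packet_amplitude(x):
    """|sum_k w(k) e^{ikx}| with Gaussian weights w centered at K0."""
    re = im = 0.0
    n = 200
    for i in range(-n, n + 1):
        k = K0 + 3 * DK * i / n
        w = math.exp(-0.5 * ((k - K0) / DK) ** 2)
        re += w * math.cos(k * x)
        im += w * math.sin(k * x)
    return math.hypot(re, im)

peak = packet_amplitude(0.0)
# Envelope half-width ~ 1/DK ~ 8 wavelengths: the packet is still sizeable
# 8 wavelengths from its center, and only negligible much farther out.
print(packet_amplitude(8.0) / peak)    # ~0.6
print(packet_amplitude(80.0) / peak)   # ~0
```

The narrower the bandwidth DK, the longer the packet, which is the sense in which a nearly monochromatic "photon" stretches over a macroscopic length.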
One can think, I suppose, of point photons as a heuristic, but one has to watch not to take it too far and start imagining, as these AJP authors apparently did, that you have some genuine kind of exclusivity one would have for a particle. That exclusivity doesn't exist either in theory (QED) or in experiments (other than via misleading presentation or outright errors, as in this case). The theoretical non-existence was already explained. In brief, the "quantum" g2 of (AJP.8-11 for n=1) corresponds to a single photon in the incident field. This "single photon" is |Psi.1> = |T> + |R>, where |T> and |R> correspond to the regions of the "single photon" field in the T and R beams. The detector which (AJP.8) models is Glauber's ideal detector, which counts 1 if and only if it absorbs the whole single photon, leaving the vacuum EM field. But this "absorption" is (as derived by Glauber in [4]) a purely dynamical process, a local interaction of the quantized EM field of the "photon" with the atomic dipole, and for the "whole photon" to be absorbed, the "whole EM field" of the "single photon" has to be absorbed (via resonance, a la antenna) by the dipole. (Note that the dipole can be much smaller than the incident EM wavelength, since the resonant absorption will absorb the surrounding area of the order of a wavelength.) So, to absorb the "single photon" |Psi.1> = |T> + |R>, the Glauber detector has to capture both branches of this single field, T and R, interact with them and resonantly absorb them, leaving the EM vacuum as a result, and counting 1. But to do this, the detector will have to be spread out to capture both the T and R beams. Any second detector will get nothing, and you indeed have perfect anticorrelation, g2=0, but it is an entirely trivial effect, with nothing non-classical or puzzling about it (a semi-classical detector will do the same if defined to capture the full photon |T>+|R>). 
You could simulate this Glauber detector capturing the "single photon" |T>+|R> by adding an OR circuit to the outputs of the two regular detectors DT and DR, so that the combined detector is Glauber_D = DT | DR and it reports 1 if either one or both of DT and DR trigger. This, of course, doesn't add anything non-trivial, since this Glauber_D is one possible implementation of the Glauber detector described in the previous paragraph -- its triggers are exclusive relative to a second Glauber detector (e.g. made of another pair of regular detectors placed somewhere, say, behind the first pair). So the "quantum" g2=0 of eq (8) (it is also the semiclassical value, provided one models the Glauber detector semiclassically) is valid but trivial, and it doesn't correspond to the separate detection and counting used in this AJP experiment, or to what the misguided authors (as their students will be after they "learn" it from this kind of fake experiment) had in mind. You can get g2<1, of course, if you subtract accidentals and unpaired singles (the DG triggers for which no DT or DR triggered). This is in fact what Glauber's g2 of eq. (8) already includes in its definition -- it is defined to predict the subtracted correlation, and the matching operational procedure in Quantum Optics is to compare it to subtracted measured correlations. That's the QO convention. The classical g2 of (AJP.2) is defined and derived to model the non-subtracted correlation, so let's call it g2c. The inequality (AJP.3) is g2c>=1 for the non-subtracted correlation. Now, nothing is to stop you from defining another kind of classical "correlation" g2cq which includes the subtraction in its definition, to match the QO convention. Then this g2cq will violate g2cq>=1, but there is nothing surprising here. Say, your subtractions are defined to discard the unpaired singles. 
Therefore in your new eq (14) you will put N(DR)+N(DT) (which was about 8000 c/s) instead of N(G) (which was 100,000 c/s) in the numerator of (14), and you now have a g2cq which is 12.5 times smaller than g2c, and well below 1. But no magic. (The Chiao, Kwiat paper recognizes this and doesn't claim any magic from their experiment.) Note that these subtracted g2's, "quantum" or "classical", are not the g2=0 of the single photon case (eq AJP.11 for n=1), as that was a different way of counting where the perfect anticorrelation is entirely trivial. Therefore, the "nonclassicality" of Quantum Optics is a term of art, a verbal convention for that term (which somehow just happens to make their work sound more ground-breaking). Any well-bred Quantum Optician is thus expected to declare a QO effect "nonclassical" whenever its subtracted correlations (predicted via Gn, or measured and subtracted) violate inequalities for correlations computed classically for the same setup, but without subtractions. But there is nothing genuinely nonclassical about any such "violations". These verbal-gimmick "violations" have nothing to do with theoretically conceivable genuine violations (where QED still might disagree with semiclassical theory). The genuine violations would have to be among the perturbative effects of order alpha^5 or beyond, some kind of tiny difference beyond the 8-9th decimal place, if there is any at all (unknown at present). QO operates mostly with 1st order effects; all its phenomena are plainly semiclassical. All their "Bell inequality violations" with "photons" are just creatively worded magic tricks of the described kind -- they compare subtracted measured correlations with the unsubtracted classical predictions, all wrapped into a whole lot of song and dance about "fair sampling" or the "momentary technological detection loophole" or the "non-enhancement hypothesis"... 
And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for correlations for photons are a special case Gn()) and violate nonsubtracted classical prediction. Duh.
Sci Advisor
PF Gold
P: 5,148
 Quote by nightlight You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate? The phenomenologial PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within the Stochastic Electrodynamics... ... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for correlations for photons are a special case Gn()) and violate nonsubtracted classical prediction. Duh.
Very impressive analysis, seriously, and no sarcasm intended.

But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort. It is strictly an elaborate catch-up to QM/CI, telling us that you can get the same answers a different way. Reminds me of the complex theories of how the sun and planets actually rotate around the Earth. (And all the while you criticize a theory which since 1927 has been accepted as one of the greatest scientific achievements in all history. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

-------

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different? (After all, the Bell Inequality and QM are a lot farther apart than the 7th or 8th decimal place.)

Your value for 0 degrees?
Your value for 22.5 degrees?
Your value for 45 degrees?
Your value for 67.5 degrees?
Your value for 90 degrees?

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM? Yes, because QM makes specific predictions which allow it to be falsified. So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed -- exactly such that a false positive will be registered 100% of the time! And yet, a reasonable person might ask why 2300 photons aren't occasionally seen on one side, when only one is seen on the other if your concept is correct. After all, you specifically say one photon is really many photons.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM? You see, I don't actually believe your theory has the ability to make a single useful prediction that wasn't already in standard college textbooks years ago. (The definition of an AD HOC theory is one designed to fit the existing facts while predicting nothing new in the process.) I freely acknowledge that a future breakthrough might show one of your lines of thinking to have great merit or promise, although nothing concrete has yet been provided.

-------

I am open to persuasion, again no sarcasm intended. I disagree with your thinking, but I am fascinated as to why an intelligent person such as yourself would belittle QM and orthodox scientific views.

-DrC
 P: 186 But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that NOT ONE IOTA adds anything to our present day knowledge, which is the purpose of true scientific effort. I wasn't writing a scientific paper here but merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings here are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise. It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way. Again, I didn't create any theory, much less make claims about "my" theory. I was referring you to the results that exist, in particular Barut's and Jaynes' work in QED and Marshall & Santos' in Quantum Optics. I cited papers and links so you can look them up, learn some more and, if you're doubtful, verify whether I made anything up. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.) That's its basic defect, the incompleteness. On the other hand, the claim that any local field theory must necessarily contradict it depends on how you interpret "it". As you know, there have been impossibility proofs since von Neumann. Their basic problem was and is excessive generalization of the interpretation of the formalism, requiring any future theory to satisfy requirements not implied by any known experiment. Among such generalizations, the remote noninteracting collapse (the projection postulate applied to non-interacting subsystems at spacelike intervals) is the primary source of the nonlocality in nonrelativistic QM. 
If one ignores the trivial kinds of nonlocality arising from nonrelativistic approximations to EM interactions (such as the instantaneous action-at-a-distance Coulomb potential), the generalized projection postulate is the sole source of nonlocality in nonrelativistic QM. Bell's QM prediction (which assumes that the remote subsystem will "collapse" into a pure state, with no interaction and at a spacelike interval) doesn't follow without this remote projection postulate. The only test for that generalization of the projection postulate is the Bell inequality test. When considering optical experiments, the proper formalism is Quantum Optics (although nonrelativistic QM is often used as a heuristic tool here). The photon-pair Bell's QM prediction is derived here in a more rigorous way (as Glauber's two-point correlations, cf [5] for the PDC pair), which makes it clearer that no genuine nonlocality is taking place, in theory or in experiments. The aspect made obvious is that the cos^2(a) (or sin^2(a) in [5]) correlation is computed via Glauber's 2-point "correlation" of normal-ordered (E-) and (E+) operators. What that means is that one is predicting prefiltered relations (between angles and coincidence rates) which filter out any 3- or 4-point events (there are 4 detectors). The use of the normally ordered Glauber G2() further implies that the prediction is made for still additionally filtered data, where the unpaired singles and any accidental coincidences will be subtracted. Thus, what was only implicit in the nonrelativistic QM toy derivation (and what required a generalized projection postulate, while no such additional postulate is needed here) becomes explicit here -- the types of filtering needed to extract the "signal" function cos^2(a) or sin^2(a) ("explicit" provided you understand what Glauber's correlation and detection theory is and what it assumes, cf [4] and the points I made earlier about it). 
Thus Quantum Optics (which is QED applied to optical wavelengths, plus the detection theory for square-law detectors, plus Glauber's filtering conventions) doesn't really predict the cos^2(a) correlation; it predicts merely the existence of the cos^2(a) "signal" buried within the actually measured correlation. It doesn't say how much is going to be discarded, since that depends on the specific detectors, lenses, polarizers, ... and all that real-world stuff, but it says what kinds of things must be discarded from the data to extract the general cos^2() signal function. So, unlike the QM derivation, the QED derivation doesn't predict violation of Bell inequalities for the actual data, but only the existence of a particular signal function. While some of the discarded data can be estimated for a specific setup and technology, no one knows how to make a good enough estimate of all the data which is to be discarded by the theory in order to extract the "signal", to be able to say whether the actual correlation can violate Bell's inequalities. And, on the experimental side, no one has so far obtained an experimental violation either. A violation by the filtered "correlations" doesn't imply anything in particular regarding the non-locality of the correlations, since the Quantum Optics filtering procedure is by definition non-local -- to subtract "accidentals" you need to measure the coincidence rate with the source turned off, but that requires data collection from distant locations. Similarly, to discard 3- or 4-detection events, you need to collect data from distant locations to know it was a 3rd or 4th click. Or, to discard unpaired singles, you need the remote fact that none of the other detectors had triggered. 
Formally, this nonlocality is built into Glauber's correlation functions by virtue of the disposal of all vacuum photon terms (from the full perturbative expression for multiple detections), where these terms refer to photon absorptions at different, spacelike separated locations (e.g. an "accidental" coincidence means a term which has one vacuum photon absorbed on the A+/- detectors and any photon, signal or vacuum, on the B+/- detectors; any such term is dropped in the construction of Glauber's filtering functions Gn()).

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different?

The correct formulas for the time evolution of n detectors interacting with the quantized EM field are only given as a generalized interaction in perturbative expansion, such as ref [6] eq's (4.29)-(4.31). They are of no use in practice since there is too much unknown to do anything with them. The simplified, but just as impractical, versions are Glauber's [4], Lect. V, as computations of scattering amplitudes (eq's 5.2-5.6); then he handwaves his way into the filtered version, starting at eq 5.7. The more rigorous ref [6] (sec IV, pp 327), after using the same approximations, acknowledges regarding the Glauber-Sudarshan correlation functions: "A question which remains unanswered is under what circumstances and how this simplification can be justified." The semiclassical results, which compute the regular, non-adjusted correlations, are derived in [7]. The partially adjusted correlations (with only the local type of subtractions made) are the same as the corresponding QED ones (cf. eq. 4.7, which only excludes single-detector vacuum effects, not the combined nonlocal terms; one could do such removals by introducing some fancy algorithm-like notation, I suppose). Again, as with the QED formulas, these are too general to be useful. What is useful, though, is that these are semiclassical results, thus completely non-mysterious and transparently local, no matter what the specifics of detector design or materials are. The fields themselves (EM and matter fields) are the "hidden" (in plain sight) variables. Any further non-local subtractions made on the data from there on are the only source of non-locality, which is entirely non-mysterious and non-magical. In principle one could say the same for 2nd quantized fields (separate the local from the non-local terms and discard only the local ones as a design property of a single detector), except that now there is an infinitely redundant overkill in the number of such variables compared to the semiclassical fields (1st quantized matter field + EM field). But as a matter of principle, such equations do evolve the system fully locally, so no non-local effect can be deduced from them, except as an artifact of terminological conventions -- say, calling some later non-locally adjusted correlations still "correlations", which would then become non-local "correlations".

Your value for 0 degrees? Your value for 22.5 degrees? Your value for 45 degrees? Your value for 67.5 degrees? Your value for 90 degrees?

Get your calculator, set it to DEG mode, enter your numbers and for each press cos, then x^2. That's what the filtered "signal" functions would be. But, as explained, that implies nothing regarding the Bell inequality violations (which refer to plain data correlations, not some fancy-term "correlations" which include non-local adjustments). As to what any real, nonadjusted data ought to look like, I would be curious to see some.
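The "calculator in DEG mode" recipe above amounts to tabulating cos^2(a) at those angles; a few lines of Python reproduce the same numbers (a quick numerical check, nothing more):

```python
import math

# cos^2(a) "signal" function at the angles asked about above
for deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    a = math.radians(deg)
    print(f"{deg:5.1f} deg -> cos^2 = {math.cos(a) ** 2:.5f}")
# values: 1.00000, 0.85355, 0.50000, 0.14645, 0.00000
```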
(Experimenters in this AJP paper would not give out a single specific count used for their g2, if their life depended on it.) All I know is that, from the theory of multiple detections, QED or semiclassical, there is nothing non-local about them.

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

I agree that the filtered "signal" functions will look like the QM or Quantum Optics prediction (QM is too abstract to make fine distinctions, but the QO "prediction" is explicit in that this is only a "signal" function extracted from correlations, and surely not the plain correlation between counts). But they will also look like the semiclassical prediction, when you apply the same non-local filtering to the semiclassical predictions. Marshall and Santos have SED models that show this equivalence for atomic cascade and for PDC sources (see their numerous preprints on arXiv; I cited several in our earlier discussions).

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM?

This is a quantum optics experiment. Nonrelativistic QM can't tell you precisely enough (other than postulating nonlocal collapse). QO derives what happens, and as explained nothing nonlocal happens with the plain correlations. No one has such an experiment, or a QED/QO derivation of a nonlocal prediction (assuming you understand what the Gn() "correlations" represent and don't get misled by the wishful terminology of QO). I also explained in the previous msg why the "quantum" g2=0 for the 1-photon state (AJP.8) is a trivial kind of "anticorrelation". That the AJP authors (and a few others in QO) misinterpreted it, that's their problem. Ask them why, if you're curious. Quantum Opticians have been known to suffer delusions of magic, as shown by the Hanbury Brown and Twiss affair, when HBT, using semiclassical theory, predicted the HBT correlations in the 1950s.
In no time, the "priesthood" jumped in, papers "proving" it was impossible came out, experiments "proving" HBT wrong were published in a hurry. It can't be so, since photons can't do that... Well, all the mighty "proofs", experimental and theoretical, turned out fake, but not before Purcell published a paper explaining how photons could do it. Well, then, sorry HBT, you were right. But... and then in 1963 came Harvard's Roy Glauber with his wishful terminology "correlation" (for a filtered "signal" function) to confuse students and the shallower grownups for decades to come.

Yes, because QM makes specific predictions which allow it to be falsified.

The prediction for the form of the filtered "signal" function has no implication for Bell's inequalities. The cos^2() isn't a uniquely QM or QED prediction for the signal function. The semiclassical theory predicts the same signal function. The only difference is that QO calls its "signal" function a "correlation", yet still defines it as a non-locally post-processed correlation function. The nonlocality is built into the definition of the Gn() "correlation".

So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed

It is not flawed. It merely doesn't correspond operationally to eq's (8) & (11) with the "single photon" input |Psi.1>=|T>+|R>, which yield g2=0. To understand that, read again the previous msg (and the earlier one referred to there) and check Glauber's [4] to see what (8) and (11) mean operationally. As I said, |Psi.1> has to be absorbed as a whole by one detector. But to do that, it has to interact with the detector -- all of its field. In [4] there is no magic instant collapse; it is QED, relativistic interaction, and "collapse" here turns out to be plain photon absorption, with all steps governed by the EM field interacting with the atom.
It just happens that the nonzero part of the field of |Psi.1> is spread out across the two non-contiguous regions T and R. But it is still "one mode" (by definition, since you plug into (AJP.11) a 1-mode state for the given basis operator). A detector which by definition has to capture the whole mode and leave the vacuum state as result (the Glauber detector to which eq's 8-11 apply) has, in a relativistic theory (QED), to interact with the whole mode, all of its field. In QED dynamics there is no collapse, and AJP.8-11 were the result of a dynamical derivation (cf. [4]), not a postulate that you might twist and turn, so it is clear what they mean -- they mean absorption of the whole photon |Psi.1> via the pure dynamics of the quantized EM field and a matter-field detector (they don't 2nd-quantize the matter field in [4]).

After all, you specifically say one photon is really many photons.

I said that the photon number operator [n] is basis dependent; what is "one photon" in one basis need not be "one photon" in another basis. Again, the car speed example -- is it meaningful to argue whether my car had a "speed number" 2300 yesterday at 8AM? In practice, the basis is selected to best match the problem geometry and physics (such as the eigenstates of the noninteracting Hamiltonian), in order to simplify the computations. There is much convention a student learns over the years, so that one doesn't need to spell out at every turn what is meant (which can be a trap for novices or shallow people of any age). The "Glauber detector" which absorbs "one whole photon" and counts 1 (for which his Gn() apply, thus the eq's AJP.8-11) is therefore ambiguous in the same sense. You need to define the modes before you can speak of what that detector is going to absorb.
If you say you have the state |Psi>=|T>+|R> and this state is your "mode" (you can always pick it as one of the basis vectors, since it is normalized), the "one photon", then you need to use that basis for the photon number operator [n] in AJP.11, and then that gives you g2=0. But all these choices also define how your Glauber detector for this "photon" |Psi> is to operate here -- it has to be spread out to interact with and absorb (via EM field-matter QED dynamics) this |Psi>.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM?

I am not offering a "theory", just explaining the misleading QO terminology and the confusion it could and does cause. I don't have "my theory predictions", but what I am saying is that, when the QO terminological smoke and mirrors are removed, there is nothing non-local predicted by their own formalism. The non-adjusted correlations will always be local, i.e. for them g2>=1. And they will have the same value in the semiclassical and the QED computation, at least to the alpha^4 perturbative QED expansion (if Barut's semiclassical theory is used), thus 8+ digits of precision will be the same, possibly more (unknown at present). What the adjusted g2 will be depends on the adjustments. If you make non-local adjustments (requiring data from multiple locations to compute the amounts to subtract), yes, then you get some g2' which can't be obtained by plugging only locally collected counts into (AJP.14). So what? That has nothing to do with non-locality. It is just the way you define your g2'.
Only if you do what the AJP paper does (along with the other "nonclassicality" claimants in QO), and use the same symbol g2 to label both the "classical" (and also nonadjusted correlation) and the "quantum" (and also adjusted "correlation", or "signal" function Gn) models, then a few paragraphs later you manage to forget (or, more likely, never knew) the "and also" parts and decide the sole difference was "classical" vs "quantum" -- then you will succeed in the self-deception that you have shown "nonclassicality." Otherwise, if you label apples 'a' and oranges 'o', you won't have to marvel at the end why 'a' is different from 'o', as you do when you ask why the g2 from the classical case differs from the g2 from the quantum case and then "conclude" that it must be something "nonclassical" in the quantum case that made the difference.

---
Ref [5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment," Phys. Rev. Lett. 61(1), pp 50-53 (1988).
[6] P.L. Kelly and W.H. Kleiner, "Theory of electromagnetic field measurement and photoelectron counting," Phys. Rev. 136, A316-A334 (1964).
[7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations," Proc. Phys. Soc. 84, 435-444 (1964).
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by nightlight Using raw counts in eq (14), they could only get g2>=1.
You mean that it is impossible to have a series of time clicks that gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??

I can easily provide you with such a series !
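For instance, a toy tally (hypothetical numbers, chosen only so that each gate click is accompanied by a click on exactly one of T or R) already drives the eq. (14) ratio to zero:

```python
# Hypothetical click tallies: every gate (G) window contains a click on
# exactly one of T or R, never on both, so the triple count is zero.
N_g = 1000    # gate clicks
N_gt = 480    # gate+T coincidences
N_gr = 520    # gate+R coincidences
N_gtr = 0     # triple coincidences

g2 = (N_gtr * N_g) / (N_gt * N_gr)
print(g2)  # -> 0.0, far below 1
```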

cheers,
Patrick.
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by vanesch You mean that it is impossible to have a series of time clicks that gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??
In fact, without a lot of "quantum optics", the very fact of having "intensity threshold detectors" which give me a small P_gtr/(P_gt * P_gr) is already an indication that these intensities are not given by the Maxwell equations.
The reason is the following: in order to have a stable interference pattern from the waves arriving at T and R, T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they were anticorrelated -- when there is intensity at T, there isn't any at R, and vice versa -- there would never be a moment with sufficient intensity from both to interfere. If the modulation depth is about 100%, this indicates that the intensities are essentially identical at T and R, on the time scale of the intensity detector (here, a few ns).
Identical intensities on this time scale is necessary to obtain extinction of intensity in the interference pattern (the fields have to cancel at any moment, otherwise some intensity is left).
So no matter how your detector operates, if it gives you a positive logical signal ABOVE an intensity threshold, and a negative logical signal BELOW an intensity threshold, these logical signals have to be correlated by about 100%.

Taking an arbitrary moment in time, and looking at the probabilities to have T AND R = 1, T = 1, and R = 1, calculating P_TR / (P_T P_R) is nothing else but an expression of this correlation of intensities, needed to obtain an interference pattern.
In fact, the probability expressions have the advantage of taking into account "finite efficiencies", meaning that to each intensity over the threshold corresponds only a finite probability of giving a positive logical signal. That's easily verified.

Subsampling these logical signals with just ANY arbitrary sampling sequence G of course gives you a good approximation of these probabilities. It doesn't matter whether G is correlated with T or R, because the logical signals from T and R are identical. If I now sample only at times given by a time series G, then I simply find the formula given in AJP (14): P_gtr/(P_gt P_gr).
This is still close to 100% if I have interfering intensities from T and R.
No matter what time series. So I can just as well use the idler signal as G.
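As a sanity check of this reasoning, one can simulate identical threshold ("logical") signals at T and R driven by one common classical intensity, and subsample them with an arbitrary gate series G (a hedged sketch: the intensity distribution, threshold, and gate rate are made-up illustration values):

```python
import random

random.seed(1)

# One common (classical) intensity drives both detectors, so the
# threshold "logical" signals at T and R are identical in each window.
n_windows = 100_000
threshold = 0.7         # detector intensity threshold (arbitrary units)
gate_rate = 0.3         # probability that an arbitrary G window opens

n_g = n_gt = n_gr = n_gtr = 0
for _ in range(n_windows):
    intensity = random.random()        # common intensity this window
    t = r = intensity > threshold      # identical logical signals
    if random.random() < gate_rate:    # arbitrary, uncorrelated gate G
        n_g += 1
        n_gt += t
        n_gr += r
        n_gtr += t and r

g2 = n_gtr * n_g / (n_gt * n_gr)
print(g2)  # never below 1 for identical signals (here roughly 1/0.3)
```

With identical T and R signals every gated double is also a triple (n_gt = n_gr = n_gtr), so the ratio collapses to n_g/n_gt >= 1, which is the Maxwellian bound the argument relies on.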

Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator.
The reason is that not the entire intensity function of R and T is usable. Most intensities are NOT correlated.
But that doesn't change the story: the only thing I now have to reproduce, is that I have an interference pattern from R and T when I have G-clicks. Then the same explanation holds, but only for the time windows defined by G.

If I have a time series G, defining time windows, in which I can have an interference pattern from T and R with high modulation, the intensities during T and R need to be highly correlated. This means that the expression P_GTR/(P_GT P_GR) needs to be close to 1.

So if I succeed, somehow, to have the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimation of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.

cheers,
Patrick.
Sci Advisor
PF Gold
P: 5,148
 Quote by nightlight I wasn't writing a scientific paper here but merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings here are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise. ...
Thank you for the courtesy of your response. I do not agree with your conclusions, but definitely I want to study your thinking further.

And I definitely AGREE with what you say about "sharpening" above... I get a lot out of these discussions in the same manner. Forces me to consider my views in a critical light, which is good.
P: 186
 Quote by vanesch You mean that it is impossible to have a series of time clicks that gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ?? I can easily provide you with such a series !
It is trivial to have a beam splitter give you nearly perfectly anticorrelated data (e.g. vary the polarization of the optical photons randomly and then set the photodetector to very low noise so it picks only the photons with polarization nearly parallel to the PBS). Clauser's test [2] produced one such.

It can't be done if you also require that T and R are superposed (as opposed to a mixture) and that they carry equal energy in each instance. That part is normally verified by interfering the two beams T and R.

The authors [1] have such an experiment, but they didn't test the T and R EM field samples used for the anticorrelation part. Instead they stuck an interferometer in front of the beam splitter and showed that its T' and R' interfere, but that is a different beam splitter, and a different coincidence setup, with entirely unrelated EM field samples being correlated. It is essential to use the same samples of the EM field from T and R (or at most propagated by r/t adjustments for extended paths), using the same detector and time-window settings, to detect the interference. The "same" samples means: extracted the same way from the G events, without subselecting and rejecting based on data available away from G (such as the content of detections on T and R, e.g. to reject unpaired singles of G events).

The QO magic show usually treats the time windows (defined via the coincidence circuit settings) and the detector settings as free parameters they can tweak on each try until it all "works" and the magic happens as prescribed. Easy to make magic when you can change these between runs or polarizer angles. Following their normal engineering signal-filtering reporting conventions, they don't report in their papers the bits of info which are critical for this kind of test (although routine for their engineering signal processing and "signal" filtering) -- whether they had the same samples of the fields. (This was a rare paper with a few details, and the trick which does the magic immediately jumps out.) Combine that with Glauberized jargon, where "correlation" isn't correlation, and you have all the tools for an impressive Quantum Optics magic show.

They've been pulling the leg of the physics community since the Bell tests started with this kind of phony magic, and have caused vast quantities of nonsense to be written by otherwise good and even great physicists on this topic. A non-Glauberized physicist uses the word correlation in the normal way, so they invariably get taken in by Glauber's "correlation", which doesn't correlate anything but is just an engineering-style filtered "signal" function extracted out of the actual correlations via an inherently non-local filtering procedure (the standard QO subtractions). I worked on this topic for my masters and read all the experimental stuff available, yet I had no clue how truly flaky and nonexistent those violation "facts" were.
 P: 186
 Quote by vanesch T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated, when there is an intensity at T, there isn't any at R, and vice versa, there is never a moment when there is sufficient intensity from both to interfere.
Keep in mind that the 2.5ns time windows are defined on the detectors' output fields; they're the electrical current counterpart to the optical pulses. The PDC optical pulses are thousands of times shorter, though.
 Quote by vanesch Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator. ... So if I succeed, somehow, to have the following: I have a time series generator G ; I can show interference patterns, synchronized with this time series generator, of two light beams T and R ; and I calculate an estimation of P_GTR / (P_GR P_GT), then I should find a number close to 1 if this maxwellian picture holds. Well, the results of the paper show that it is CLOSE TO 0.
The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian for the coherent source.

The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:

  signal pulse:  ----/......\---------------------------------> time
  Gate window:       |--------|
  T window:          |------|
  R window:                   |------|

That obviously will give you a perfect anticorrelation (the T and R windows do not overlap), while still allowing you to show the perfect interference if you change the T and R sampling windows for the interference test (and align them properly).
Even if you keep the windows the "same" for the interference test as for the anticorrelations, you can still get both, provided you partially overlap the T and R windows. Then, by tweaking the detector thresholds, you can still create the appearance of violating the classical visibility vs anticorrelation tradeoff. The basic validity test should be to feed a laser split 50:50 instead of G and TR into the very same setup, optics, detectors, and coincidence circuit, and show that g2=1. If this gives you <1, the setup is cheating. Note that Grangier et al. in [3] had used chaotic light for this test and said they got g2>1, but for chaotic light g2 needs to be >= 2, so this is inconclusive. A stable laser should put you on the very boundary of "fair sampling", and it should be much easier to see cheating. Also, the gratuitous separate sampling via a 3rd circuit for the triple coincidence is cheating all by itself. The triple coincidences should be derived (via software or an AND circuit) from the obtained GT and GR samples, not sampled and tuned on their own. Grangier et al. also used this third coincidence unit, but wisely chose not to give any detail about it. "My" prediction (or rather, it is just the QED/QO prediction, but with the properly assigned operational mapping to the right experiment) is that a correctly gated G-conditioned sample will also give you at best the Poissonian. As explained, Glauber's g2=0 for the "single photon" state |T> + |R> is operationally completely misinterpreted by some of the Quantum Opticians who did the test, and dutifully parroted by the rest (although Glauber, or Chiao & Kwiat, or Mandel use much more cautious language, recognizing that you need the standard QO subtractions to drop into the non-classical g2). The distribution should be the same as if you separated a single cathode (large compared to the wavelengths, but capturing the incident beam on both partitions equally) into two regions and tried to correlate the photoelectron counts from the two sides.
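The split-cathode picture is easy to sanity-check numerically: draw Poissonian photoelectron counts per window, partition each count 50:50 binomially between the two halves, and correlate (a hedged sketch; the mean rate lambda and the 50:50 split are assumed purely for illustration):

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's multiplication-of-uniforms sampler (fine for small lambda)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Poissonian photoelectron counts on one cathode, split 50:50 between a
# T half and an R half; the expectation for the normalized correlation
# <n_T n_R> / (<n_T><n_R>) is exactly 1 (no anticorrelation).
lam, n_windows = 2.0, 200_000
sum_t = sum_r = sum_tr = 0
for _ in range(n_windows):
    n = poisson(lam)
    t = sum(random.random() < 0.5 for _ in range(n))  # counts on T half
    r = n - t                                         # counts on R half
    sum_t += t
    sum_r += r
    sum_tr += t * r

g2 = (sum_tr / n_windows) / ((sum_t / n_windows) * (sum_r / n_windows))
print(round(g2, 3))  # close to 1
```

Binomially thinning a Poisson process yields two independent Poisson streams, which is why the estimate hovers at g2 = 1 rather than dropping below it.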
The photoelectron count will be Poissonian at best (g2=1). Note that if you were to do the experiment and show that the unsubtracted correlations (redundant, since correlations as normally understood should be unsubtracted, but it has to be said here in view of the QO Glauberisms) are perfectly classical, it wouldn't be long before everyone in QO declared they knew it all along and pretended they never thought or wrote exactly the opposite. Namely, suddenly they would discover that the coherent-light pump, with its Poissonian superposition of the Fock states, will simply generate Poissonian PDC pulses (actually they'll probably be Gaussian), and the problem is solved. The g2=0 for single photon states remains OK; it just doesn't apply to this source (they will still continue searching for the magic source in other effects, so they'll blame the source, even though g2=0 for a Glauber detector of "one photon" |T>+|R>, as explained, is a trivial anticorrelation; but they'll still maintain it is just a matter of finding the "right source" and continue with the old misinterpretation of g2=0; the QO "priesthood" is known for these kinds of delusions, e.g. look up the Hanbury Brown and Twiss comedy of errors, or the similar one with the "impossible" interference of independent laser beams, which also caused grand mal convulsions). If you follow up another step from here, once you establish that the anticorrelation doesn't work as claimed but always gives g2>=1 on the actual correlations, you'll discover that no optical Bell test will work either, since you can't get rid of the double +/- Poissonian hits for the same photon without reducing the detection efficiency, which then invalidates it from the other "loophole". This is more transparent in the QO/QED derivations of Bell's QM prediction, where they use only the 2-point Glauber correlation (cf. [5] Ou, Mandel), thus discarding explicitly all triple and quadruple hits, making it clear they're using a sub-sample.
The QO derivation is sharper than the abstract QM 2x2 toy derivation. In particular, the QO/QED derivation doesn't use the remote-subsystem projection postulate but derives the effective "collapse" as a dynamical process of photon absorption, thus a purely local and uncontroversial dynamical collapse. The additional explicit subtractions included in the definition of G2() make it clear that the cos^2() "correlation" isn't a correlation at all but an extracted "signal" function (a la the Wigner distribution reconstructed via quantum tomography); thus one can't plug it into the Bell inequalities as-is without estimating the terms discarded by Glauber's particular convention for the signal vs noise dividing line. With the generic QM projection-based 2x2 abstract derivation, that's all invisible. Also invisible is the constraint (part of Glauber's derivation [4]) that "collapse" is due to local field dynamics; it is just a plain photon absorption through the local EM-atom interaction. The QO/QED derivation also shows explicitly how the non-locality enters into von Neumann's generalized projection postulate (which projects remote noninteracting subsystems) -- it is the result of the manifestly non-local data filtering procedures used in Glauber's Gn() subtraction conventions. That alone disqualifies any usage of such non-locally filtered "signal" functions as a proof of non-locality by plugging them into Bell's inequalities. Earlier I cited Haus (who was one of the few wise Quantum Opticians; he lived in my town but passed away recently, before I knew the QO grandmaster was just a few blocks away), where he plainly said that "collapse" (von Neumann's projection) is a "shortcut" for a genuine dynamical analysis of the detection process and needs to be taken with a good grain of salt.
Emeritus
Sci Advisor
PF Gold
P: 6,238
 Quote by nightlight Keep in mind that the 2.5ns time windows are defined on the detectors output fields, they're electrical current counterpart to the optical pulses. The PDC optical pulses are thousands times shorter, though.
The point was that if the intensities have to be strongly correlated (not to say identical) on the fast timescale (in order to produce interferences), then they will de facto be correlated on longer timescales (which are just sums of smaller timescales).

 The random time window (e.g. if you ignore G beam) won't even be poissonian, since there will be a large vacuum superposition, which has gaussian distribution. So, the unconditional sample (or a random sample) will give you a super-Poissonian for the PDC source and Poissonian for the coherent source.
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or whenever I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

 The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with PDC pulse alignements:
Of course you need to apply the SAME time window to as well G, T and R.
Or you have to make the windows such that, for instance, the GTR window is LARGER than the GT and GR windows, so that you get an overestimation of the quantity which you want to come out low.

But it is the only requirement. If you obtain interference effects within these time windows (by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt) then this cannot be generated by beams in the Maxwellian way.
And you don't need any statistical property of the G intervals, except for a kind of repeatability: namely that they behave in a similar way during the interference test (when detectors r and t are not present) and during the coincidence measurement. The G intervals can be distributed in any way.

cheers,
Patrick.
 P: 186
 Quote by vanesch But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or when I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to in any way satisfy any statistical property.
The subsample of T and R corresponding to the DG-defined windows can and does have different statistics of the T, R events (both for T or R as singles and for TR pairs) than a random sample (unconditioned on G events). For example, in the DG-window samples T or R had 4000 c/s singles rates, and only 250 c/s in the non-DG samples.
 Quote by vanesch But it is the only requirement. If you obtain interference effects within these time windows (by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt) then this cannot be generated by beams in the Maxwellian way.
Well, almost so. There is one effect which could violate this condition, provided you have very closely spaced detectors DT and DR (such as on the order of a wavelength), i.e. they would need to be atomic detectors. Then a resonance absorption at DT would distort the incident field within the near-field range of DT, and consequently DR would get a lower EM flux than with the DT absorption absent. Namely, when you have a dipole which resonates with the incident plane EM wave, the field it radiates superposes coherently with the plane-wave field, resulting in a bending of the Poynting vector toward the dipole, increasing the flux it absorbs much beyond the dipole size d, resulting in an absorbed flux from an area ~lambda^2 instead of d^2. With the electron clouds of a detector (e.g. an atom) there is a positive feedback loop, where the initial weak oscillations of the cloud (from the small forward fronts of the incident field) cause the above EM-sucking distortion, which in turn increases the amplitude of the oscillations, extending its reach farther (the dipole emits stronger fields and bends the flux more toward itself), thus enhancing the EM-sucking effect. So there is a self-reinforcing loop, where the EM-sucking increases exponentially, which finally results in an abrupt breakdown of the electron cloud. When you have N nearby atoms absorbing light, the net effect of the EM-sucking multiplies by N. But due to the initial phase differences of the electron clouds, some atom will have a small initial edge in its oscillations over its neighbors, and due to the exponential nature of the positive feedback, the EM-sucking effect into that single atom will quickly get ahead (like different compound interests) and rob its neighbors of their (N-1) fluxes, thus enhancing the EM-sucking effect approximately N-fold compared to a single-atom absorption. These absorptions will thus strongly anticorrelate between nearby atoms and create an effect similar to Einstein's needle radiation -- as if a pointlike photon had struck just one of the N atoms for each photo-absorption. There is a Russian group which has, over the last ten years, developed a detailed photodetection theory based on this physical picture, capable of explaining the discrete nature of detector pulses (without requiring the point-photon heuristics). Regarding the experiment, another side effect of this kind of absorption is that the absorbing cloud will emit back about half the radiation it absorbed (toward the source, as if paying the vacuum's hv/2; the space behind the atom will have a corresponding reduction of EM flux, a shadow, as if scattering had occurred).
Although I haven't seen the following described, based on the general gist of the phenomenon it is conceivable that these back-emissions, for a very close beam splitter, could end up propagating the positive feedback to the beam splitter, so that the incident EM field, which for each G pulse starts out equally distributed into T and R, superposes with these back-emissions at the beam splitter in such a way as to enhance the flux portion going to the back-emitting detector, in a way analogous to the nearby-atom case. So T and R would start equal during the weak leading front of the TR pulse, but the initial phase imbalance between the cathodes of DT and DR would lead to an amplification of the difference, and to anticorrelation between the triggers by the time the bulk of the TR pulse traverses the beam splitter. But for the interference there would be no separate absorbers for the T and R sides, just a single absorber much farther away, and the interference would still occur. To detect this kind of resonant beam-splitter flip-flop effect, one would have to observe the dependence of any anticorrelation on the distance of the detectors from the beam splitter. Using one-way mirrors in front of the detectors might block the effect, if it exists, at a given distance.
