Photon Wave Collapse Experiment (Yeah sure; AJP Sep 2004, Thorn)

In summary: The authors claim to violate "classicality" by 377 standard deviations, which is by far the largest violation ever for this type of experiment. The setup is an archetype of quantum mystery: A single photon arrives at a 50:50 beam splitter. One could verify that the two photon wave packet branches (after the beam splitter) interfere nearly perfectly, yet if one places a photodetector in each path, only one of the two detectors will trigger in each try. As Feynman put it - "In reality, it contains the only mystery." How does "it" do it? The answer is -- "it" doesn't do "it".
  • #36
nightlight said:
Note that in our earlier discussion, you were claiming, too, that this kind of remote nondynamical collapse does occur.

I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool. All my arguments here, about delayed choice quantum erasers, EPR etc... were oriented to show that there is no remote collapse necessary, although you can use it.

In fact, the "remote collapse" poses problems from the moment you assign any ontology to the wave function. There problems are not unsurmountable (as is shown in Bohm's mechanics) but I prefer not to consider such solutions for the moment, based upon some kind of esthetics, which says that if you stick at all cost to certain symmetries for the wave function formalism, you should not spit on them when considering another rule (such as the guiding equation).
The "remote collapse" poses no problem if you don't assign any ontology to the wave function, and just see it as a calculational tool.

So if I ever talked about "remote collapse" it was because of 2 possible reasons: I was talking about a calculation OR I was drunk.

cheers,
Patrick.
 
  • #37
vanesch said:
I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool.

Well, the collapse in the sense of beam splitter anticorrelations discussed in this thread. You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR. You argued that there was a drop in the probability of a DT trigger in this case and that this drop is genuine (in the sense of the anticorrelation not being an artifact of subtractions of accidentals and unpaired singles). As I understand it, you no longer believe in that kind of non-interacting spacelike anticorrelation (the reduction of the remote subsystem state). You certainly did argue consistently that it was a genuine anticorrelation and not an artifact of subtractions.
 
  • #38
nightlight said:
You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR.

Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

I just happen to be an observer in one of the two branches. But the other one can happily exist. Collapse means that suddenly, that other branch "disappears". It doesn't have to. I will simply not observe it, because I don't happen to be in that branch.

Observationally this is of course indistinguishable from the claim that the branch "I am not in" somehow "doesn't exist anymore". If you do that, you have a collapse. But I find it more pleasing to say that that branch still exists, but is not open to my observation, because I happen to have made a bifurcation in another branch.
It all depends what you want to stick to: do you stick to the postulate that things can only exist when I can observe them, or do you stick to esthetics in the mathematical formalism ? I prefer the latter. You don't have to.

cheers,
Patrick.
 
  • #39
nightlight said:
http://www.ortec-online.com/electronics/tac/567.htm . The data sheet for the model 567 lists a required delay of 10 ns for the START (which was here the DT signal, see AJP Fig. 5) after the START GATE signal (which was here the DG signal) in order for the START to be accepted. But AJP Fig. 5, and several places in the text, give the delay line between DG and DT as 6 ns. That means that when DG triggers at t0, the DT will trigger (if at all) 6 ns later (+/- 1.25 ns), but the TAC will ignore it since it won't be ready yet, and won't be for another 4 ns. Then, at t0+10 ns the TAC is finally enabled, but without a START no event will be registered. The "GTR" coincidence rate will be close to the accidental background (slightly above it, since if the real T doesn't trip DT and a subsequent background DT hit arrives at t0+10 ns, then the DG trigger, which is now more likely than background, at t0+12 ns will allow the registration).

I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ? I wouldn't think so ! Doing quite some electronics development myself, I know that when I specify some limits on utilisation, I'm usually sure of a much better performance ! So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
So I'm pretty sure that the 6 ns is sufficient to actually make it trigger.
A point is of course that in this particular example, the remedy is extremely simple: use longer delay lines !
I've already been in such a situation, you know: you do an experiment, everything works OK, you write up your stuff and you submit it. Then there is a guy who points out that you didn't respect a certain specification. So you get pale, and you wonder if you have to retract your submission. You think about it, and then you say: ah, but simply using longer delay lines will do the trick ! So you rush to the lab, you correct for the error, and you find more or less equivalent results ; it didn't matter in the end. What do you do ? Do you write to the editor saying that you made a mistake in the paper, but that it actually doesn't matter, if only you're allowed to change a few things ??
OF COURSE NOT. You only do that if the problem DOES matter.
So that's, in my opinion, what happened, and that's why the author told you that these 6 ns delays are not correct but that everything is all right nonetheless.
There's also another very simple test to check the coincidence counting: connect the three inputs G, T and R to one single pulse generator. In that case, you should find identical counts in GT, GR and GTR.
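
For what it's worth, here is a minimal sketch of that sanity check, assuming nothing but an idealized software model of the counters (the function names and pulse trains below are hypothetical, not anybody's actual electronics): wire one and the same click train into G, T and R, and the three coincidence counts must come out identical, so the g2 estimator of eq. (14) is exactly 1.

Code:
import random

def make_clicks(n_windows, p_fire, seed=0):
    """Hypothetical pulse generator: one 0/1 click per counting window."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_fire else 0 for _ in range(n_windows)]

def coincidences(*trains):
    """Count the windows in which all of the given click trains fire together."""
    return sum(1 for bits in zip(*trains) if all(bits))

pulses = make_clicks(n_windows=100_000, p_fire=0.05)
g = t = r = pulses                       # the same generator wired to G, T and R

n_g, n_gt = coincidences(g), coincidences(g, t)
n_gr, n_gtr = coincidences(g, r), coincidences(g, t, r)

print(n_gt, n_gr, n_gtr)                 # identical counts, as they should be
print(n_gtr * n_g / (n_gt * n_gr))       # eq. (14) estimator -> exactly 1.0

If a real setup wired this way gives three noticeably different counts, the counting electronics are suspect before any quantum claim enters the picture.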

After all, 1) they WORK (even if they don't strictly respect the specs of the counter), 2) they are in principle correct if we use fast enough electronics, and 3) they are a DETAIL.
Honestly, if it WOULDN'T WORK, then the author has ALL POSSIBLE REASONS to publish it:
1) he would have discovered a major thing, a discrepancy with quantum optics
2) people will soon find out ; so he would stand out as the FOOL THAT MISSED AN IMPORTANT DISCOVERY BECAUSE HE DIDN'T PAY ATTENTION TO HIS APPARATUS.

After all, he suggests, himself, to make true counting electronics. Hey, if somebody pays me enough for it, I'll design it myself ! I'd think that for $20,000 I could make you a circuit without any problem. That's what I do part of my job time.

I'll see if we have such an ORTEC 567 and try to gate it at 6ns.

Now, if you are convinced that it IS a major issue, just try to find out a lab that DID build the set up, and promise them a BIG SURPRISE when they change the cables into longer ones, on condition that you will share the Nobel prize with them.

cheers,
Patrick
 
  • #40
vanesch said:
So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.

This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

cheers,
Patrick.
 
  • #41
This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

Your example shows an additional reason why it is not plausible in this case. In your case there were no ten other vendors with competing designs and specs fighting for the same customer. You had a monopoly and what you say goes, thus you're better off being conservative in stating limitations. In a competitive situation, the manufacturers will push the specs as far as they can get away with (i.e. in the cost/benefit analysis they would estimate how much they will lose from returned units vs loss of sales, prestige, and customers to the competition). Just think of what limits you would state had that been a competition for the job -- ten candidates all designing a unit to given minimum requirements (but allowed to improve on them), with the employer picking the one they like best.
 
  • #42
I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ?

No, it didn't work by luck. The author acknowledges that the figure of 6ns (and thus the 12ns) is wrong and that they have used the correct, longer timing. You can ask him at his web page (just be tactful, he got very angry after I asked him about it).

The "leave it alone, don't issue errata, since it worked" argument doesn't wash either. It didn't work. The g2=0 of eq (AJP.8) applies to the normalized Glauber correlation function, which means you have to do subtractions of accidentals and removal of the unpaired singles (the counts on DG for which there were no T or R events). Otherwise you haven't removed the vacuum photon effects as the G2() does (see his derivation in [4]). Both adjustments lower the g2 in (AJP.14). But they already got a nearly "perfect" result with raw counts. Note that they used N(G), which is 100,000 c/s, for their singles count in eq (AJP.14), while they should have used N(T+R), which is 8000 c/s.
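
To make the normalization point concrete, a hedged numerical sketch of eq. (AJP.14): only the 100,000 c/s for N(G) and the roughly 8000 c/s for N(T)+N(R) are taken from the discussion above; the coincidence rates below are arbitrary placeholders, put in only to show how the choice of singles count rescales the estimator by N(G)/(N(T)+N(R)), i.e. by about 12.5.

Code:
def g2_eq14(n_singles, n_gt, n_gr, n_gtr):
    """The eq. (AJP.14) estimator: g2 = N_GTR * N_singles / (N_GT * N_GR)."""
    return n_gtr * n_singles / (n_gt * n_gr)

N_G = 100_000                          # gate singles per second (figure quoted above)
N_T_plus_R = 8_000                     # paired T+R singles per second (figure quoted above)
N_GT, N_GR, N_GTR = 4000, 4000, 2      # placeholder coincidence rates, illustration only

with_gate_singles = g2_eq14(N_G, N_GT, N_GR, N_GTR)             # what the paper used
with_paired_singles = g2_eq14(N_T_plus_R, N_GT, N_GR, N_GTR)    # what is argued for above

print(with_gate_singles, with_paired_singles)
print(with_gate_singles / with_paired_singles)   # = N_G / (N_T+N_R) = 12.5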

Using raw counts in eq (14), they could only get g2>=1. See http://arxiv.org/abs/quant-ph/0201036, page 11, where they say that the classical g2 >= eta, where eta is the "coincidence-detection efficiency." They also acknowledge that the experiment does have a semi-classical model (after all, Marshall & Santos had already shown that for the Grangier et al. 1986 case, way back then). Thus they had to call upon already established Bell's inequality violations to discount the semi-classical model for their experiment (p 11). They understand that this kind of experiment can't rule out a semi-classical model on its own, since there is [post=529314]no such QED prediction[/post]. With Thorn et al, it is so "perfect" it does it all by itself, and even without any subtractions at all.

Keep also in mind that this is AJP and the experiment was meant to be a template for other undergraduate labs (as the title also says) to show their students, inexpensively and reliably, the anticorrelation. That's why the figure of 6ns is all over the article (in 11 places). That's the magic ingredient that makes it "work". Without it, on raw data you will have g2>=1.
 
  • #43
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR.


Oh good ol' MWI. We went this route before and you were backed into solipsism. Anything goes there.

There will never be a branch where DR and DT trigger together (apart from double events).

If you have an optical lab, try it out. That won't happen. The DT and DR will trigger no differently than classically i.e. your raw GTR coincidence data (nothing removed) plugged into (AJP.14) will give you g2>=1 (you'll probably get at least 1.5). The usual QO S/N enhancements via subtractions should not be done here.
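
For the classical side of that claim, a minimal Monte Carlo sketch (a toy model with made-up parameters, not a simulation of the actual apparatus): a classical field with a fluctuating intensity, split 50:50 onto two square-law detectors whose firing probabilities in a short window are proportional to the incident intensity, always gives a raw P_TR/(P_T*P_R) of at least 1.

Code:
import random

rng = random.Random(1)
n_windows = 200_000
eta = 0.05                              # small toy detection efficiency

n_t = n_r = n_tr = 0
for _ in range(n_windows):
    intensity = rng.expovariate(1.0)    # fluctuating classical intensity (thermal-like)
    p = eta * intensity / 2             # each arm of the 50:50 splitter gets half
    t = rng.random() < p                # detections independent GIVEN the shared intensity
    r = rng.random() < p
    n_t += t
    n_r += r
    n_tr += t and r

p_t, p_r, p_tr = n_t / n_windows, n_r / n_windows, n_tr / n_windows
print(p_tr / (p_t * p_r))   # about <I^2>/<I>^2 = 2 here; a steady intensity gives 1,
                            # and no classical intensity model pushes it below 1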

Namely, if your raw counts (which are what the classical model with g2>=1 applies to) don't violate g2>=1, there is no point subtracting and checking whether the adjusted g2a goes below 1, since the same subtraction can be added to the classical model (so that it, too, corresponds to what was done with the data in the experiment) and it will follow the adjusted data as well and show g2a<1 (as acknowledged in the Chiao-Kwiat paper).

Note also that the conventional QO "correlations", Glauber's Gn(), already include in their definition the subtraction of accidentals and unpaired singles (the vacuum photon effects). All of the so-called nonclassicality of Quantum Optics is a result of this difference in data post-processing convention -- they compute the classical model for raw counts, while their own QO post-processing convention in Gn() is to count and correlate and then to subtract. So the two predictions will be "different" since they refer by their definitions to different quantities. The "classical" g2 of (AJP.1-2) is operationally a different quantity than the "quantum" g2 of eq (AJP.8), but in QO nonclassicality claims they will use the same notation and imply they refer to the same thing (the plain raw correlation), then proclaim non-classicality when the experiment using their QO convention (with all subtractions) matches the Gn() version of g2 (by which convention, after all, the data was post-processed), not the classical one (since it didn't use its post-processing convention).

You can follow this QO magic show recipe right in this AJP article, that's exactly how they set it up (but then, unlike most other authors, they get too greedy and want to show the complete, reproducible "perfection", the real magic, and for that you really need to cheat in a more reckless way than just misleading by omission).

The same goes for most others where the QO non-classicality (anticorrelations, sub-Poissonian distributions, etc) is claimed as something genuine (as opposed to being a mere terminological convention for the QO use of the word 'non-classical', since that is what it is and there is no more to it).

Note also that the "single photon" case, where the "quantum" prediction is g2=0, operationally has nothing to do with this experiment ([post=529314]see my previous explanation[/post]). The "single photon" Glauber detector is a detector which absorbs (via the atomic dipole and EM interaction, cf [4]) both beams, the whole photon, which is the state |T>+|R>. Thus the Glauber detector for the single photon |T>+|R> with g2=0 is either a single large detector covering both paths, or an external circuit attached to DT+DR which treats the two real detectors DT and DR as a single detector and gives 1 when one or both trigger, i.e. as if they were a single cathode. The double trigger T & R case is simply equivalent to having two photoelectrons emitted on this large cathode (the photoelectron distribution is at best Poissonian for a perfectly steady incident field, but is compound/super-Poissonian for variable incident fields).
 
  • #44
Nightlight,

You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

Also, is it your opinion that photons are not quantum particles, but are instead waves?

-DrC
 
  • #45
You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

The phenomenological PDC Hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics (e.g. see papers from the Marshall & Santos group; recent editions of Yariv's well-respected QO textbook have an extra last chapter which for all practical purposes recognizes this equivalence).

Also, is it your opinion that photons are not quantum particles, but are instead waves?

Photons in QED are quantized modes of the EM field. For a free field you can construct these in any basis in Hilbert space, so the "photon number" operator [n] depends on the basis convention. Consequently the answer to the question "how many photons are here" depends on the convention for the basis. (No different than asking me what the speed number of your car is; if I say 2300, that is obviously meaningless, since you need to know what convention I use for my speed units.)

For example if you have plane wave as a single mode, in its 1st excited state (as harmonic oscillator), in that particular base you have single photon, the state is an eigenstate of this [n]. But if you pick other bases, then you'll have a superposition of generally infinitely many of their "photons" and the plane wave is not the eigenstate of their [n].

The QO convention then calls "single photon" any superposition of the type |Psi.1> = Sum(k) of Ck(t) |1_k>, where the sum goes over wave vectors k (a 4-vector, k=(w,kx,ky,kz)) and |1_k> are eigenstates of some [n] with eigenvalue 1. This kind of photon is quasi-localized (with a spread stretching across many wavelengths). Obviously, here you no longer have the E=hv relation since there is no single frequency v superposed into the "single photon" state |Psi.1>. If the localization is very rough (many wavelengths superposed) then you could say that |Psi.1> has some dominant and average v0, and one could say approximately E=hv0.
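
A small numerical sketch of this basis dependence, using nothing beyond two-mode Fock-space bookkeeping (the mode labels a_T, a_R and the truncation are my own toy choices): the state with exactly one photon in the symmetric mode (a_T + a_R)/sqrt(2) carries, on average, only half a photon according to the T-mode number operator and half according to the R-mode one.

Code:
import numpy as np

def destroy(dim):
    """Annihilation operator on a Fock basis truncated to |0>, ..., |dim-1>."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 3                                  # toy truncation: at most 2 photons per mode
a = destroy(dim)
eye = np.eye(dim)
a_T = np.kron(a, eye)                    # annihilation operator of the T mode
a_R = np.kron(eye, a)                    # annihilation operator of the R mode
a_P = (a_T + a_R) / np.sqrt(2)           # the symmetric "T+R" mode behind the splitter

vac = np.zeros(dim * dim); vac[0] = 1.0  # |0_T, 0_R>
one_P = a_P.conj().T @ vac               # exactly one photon in the "+" mode

n_T = a_T.conj().T @ a_T                 # number operator for the T mode
n_R = a_R.conj().T @ a_R                 # number operator for the R mode
n_P = a_P.conj().T @ a_P                 # number operator for the "+" mode

print(one_P @ n_P @ one_P)               # 1.0 : one photon, counted in the + basis
print(one_P @ n_T @ one_P)               # 0.5 : half a photon, counted in the T mode
print(one_P @ n_R @ one_P)               # 0.5 : half a photon, counted in the R mode

This is just the |Psi.1> = |T> + |R> bookkeeping used in the rest of this discussion, written out numerically.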

But there is no position operator for a point-like photon (and it can't be constructed in QED), and no QED process generates a QED "single photon", the Fock state |1> for some basis and its [n] (except as an approximation in the lowest order of perturbation theory). Thus there is no formal counterpart in QED for a point-like entity hiding somewhere inside the EM field operators, much less of some such point being exclusive. A "photon" for laser light stretches out for many miles.

The equations these QED and QO "photons" follow in Heisenberg picture are plain old Maxwell equations for free fields or for any linear optical elements (mirrors, beam splitters, polarizers etc). For the EM interactions, the semiclassical and QED formalisms agree to at least alpha^4 order effects (as shown by Barut's version of semiclassical fields which include self-interaction). That is 8-9 digits of precision (it could well be more if one were to carry out the calculations). Barut unfortunately died in 1994, so that work has stalled. But their results up to 1987 are described in http://library.ictp.trieste.it/DOCS/P/87/248.pdf . ICTP has scanned 149 of his preprints, you can get pdfs here (type Barut in "author"; also interesting is his paper on http://library.ictp.trieste.it/DOCS/P/87/157.pdf ; his semiclassical approach starts in papers from 1980 and on).

In summary, you can't count them except by convention, they appear and disappear in interactions, there is no point they can be said to be at, they have no position but just approximate regions of space defined by a mere convention of "non-zero values" for field operators (one can call these wave packets as well, since they move by the plain Maxwell wave equations, anyway; and they are detected by the same square-law detection as semiclassical EM wave packets).

One can, I suppose, think of point photons as a heuristic, but one has to watch not to take it too far and start imagining, as these AJP authors apparently did, that you have the genuine kind of exclusivity one would have for a particle. That exclusivity doesn't exist either in theory (QED) or in experiments (other than via misleading presentation or outright errors, as in this case).

The theoretical non-existence was [post=529314]already explained[/post]. In brief, the "quantum" g2 of (AJP.8-11 for n=0) corresponds to a single photon in the incident field. This "single photon" is |Psi.1> = |T> + |R>, where |T> and |R> correspond to the regions of the "single photon" field in the T and R beams. The detector which (AJP.8) models is Glauber's ideal detector, which counts 1 if and only if it absorbs the whole single photon, leaving the vacuum EM field. But this "absorption" is (as derived by Glauber in [4]) a purely dynamical process, a local interaction of the quantized EM field of the "photon" with the atomic dipole, and for the "whole photon" to be absorbed, the "whole EM field" of the "single photon" has to be absorbed (via resonance, a la antenna) by the dipole. (Note that the dipole can be much smaller than the incident EM wavelength, since the resonance absorption will absorb the surrounding area of the order of a wavelength.)

So, to absorb "single photon" |Psi.1> = |T> + |R>, the Glauber detector has to capture both branches of this single field, T and R, interact with them and resonantly absorb them, leaving EM vacuum as result, and counting 1. But to do this, the detector will have to be spread out to capture both T and R beams. Any second detector will get nothing, and you indeed have perfect anticorrelation, g2=0, but it is entirely trivial effect, with nothing non-classical or puzzling about it (semi-classical detector will do same if defined to capture full photon |T>+|R>).

You could simulate this Glauber detector capturing the "single photon" |T>+|R> by adding an OR circuit to the outputs of two regular detectors DT and DR, so that the combined detector is Glauber_D = DT | DR and it reports 1 if either one or both of DT and DR trigger. This, of course, doesn't add anything non-trivial since this Glauber_D is one possible implementation of the Glauber detector described in the previous paragraph -- its triggers are exclusive relative to a second Glauber detector (e.g. made of another pair of regular detectors placed somewhere, say, behind the first pair).

So the "quantum" g2=0 eq (8) (it is also semiclassical value, provided one models the Glauber Detector semiclassically), is valid but trivial and it doesn't correspond to the separate detection and counting used in this AJP experiment or to what the misguided authors (as their students will be after they "learn" it from this kind of fake experiment) had in mind.

You can get g2<1, of course, if you subtract accidentals and unpaired singles (the DG triggers for which no DT or DR triggered). This is in fact what Glauber's g2 of eq. (8) already includes in its definition -- it is defined to predict the subtracted correlation, and the matching operational procedure in Quantum Optics is to compare it to subtracted measured correlations. That's the QO convention. The classical g2 of (AJP.2) is defined and derived to model the non-subtracted correlation, so let's call it g2c. The inequality (AJP.3) is g2c>=1 for non-subtracted correlation.

Now, nothing is to stop you from defining another kind of classical "correlation" g2cq which includes subtraction in its definition, to match the QO convention. Then this g2cq will violate g2cq>=1, but there is nothing surprising here. Say, your subtractions are defined to discard the unpaired singles. Therefore in your new eq (14) you will put N(DR)+N(DT) (which was about 8000 c/s) instead of N(G) (which was 100,000 c/s) in the numerator of (14) and you have now g2cq which is 12.5 times smaller than g2c, and well below 1. But no magic. (The Chiao Kwiat paper recognizes this and doesn't claim any magic from their experiment.) Note that these subtracted g2's, "quantum" or "classical" are not the g2=0 of single photon case (eq AJP.11 for n=1), as that was a different way of counting where the perfect anticorrelation is entirely trivial.

Therefore, the "nonclassicality" of Quantum Optics is a term-of-art, a verbal convention for that term (which somehow just happens to make their work sound more ground-breaking). Any well bred Quantum Optician is thus expected to declare a QO effect as "nonclassical" whenever its subtracted correlations (predicted via Gn or measured and subtracted) violate inequalities for correlations computed classically for the same setup, but without subtractions. But there is nothing genuinely nonclassical about any such "violations".

These verbal-gimmick kinds of "violations" have nothing to do with theoretically conceivable genuine violations (where QED still might disagree with semiclassical theory). The genuine violations would have to be for the perturbative effects of order alpha^5 or beyond, some kind of tiny difference beyond the 8th-9th decimal place, if there is any at all (unknown at present). QO operates mostly with 1st order effects; all its phenomena are plain semiclassical. All their "Bell inequality violations" with "photons" are just creatively worded magic tricks of the described kind -- they compare subtracted measured correlations with the unsubtracted classical predictions, all wrapped into a whole lot of song and dance about "fair sampling" or the "momentary technological detection loophole" or the "non-enhancement hypothesis"... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for photon correlations is a special case of Gn()) and violate the nonsubtracted classical prediction. Duh.
 
  • #46
nightlight said:
You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

The phenomenologial PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within the Stochastic Electrodynamics...

... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for correlations for photons are a special case Gn()) and violate nonsubtracted classical prediction. Duh.

Very impressive analysis, seriously, and no sarcasm intended.

But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present day knowledge, which is the purpose of true scientific effort. It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way. Reminds me of the complex theories of how the sun and planets actually rotate around the Earth. (And all the while you criticize a theory which since 1927 has been accepted as one of the greatest scientific achievements in all history. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

-------

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different? (After all, the Bell Inequality and QM are a lot farther apart than the 7th or 8th decimal place.)

Your value for 0 degrees?
Your value for 22.5 degrees?
Your value for 45 degrees?
Your value for 67.5 degrees?
Your value for 90 degrees?

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM? Yes, because QM makes specific predictions which allow it to be falsified. So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed -- exactly such that a false positive will be registered 100% of the time! And yet, a reasonable person might ask why 2300 photons aren't occasionally seen on one side, when only one is seen on the other if your concept is correct. After all, you specifically say one photon is really many photons.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM? You see, I don't actually believe your theory has the ability to make a single useful prediction that wasn't already in standard college textbooks years ago. (The definition of an AD HOC theory is one designed to fit the existing facts while predicting nothing new in the process.) I freely acknowledge that a future breakthrough might show one of your lines of thinking to have great merit or promise, although nothing concrete has yet been provided.

-------

I am open to persuasion, again no sarcasm intended. I disagree with your thinking, but I am fascinated as to why an intelligent person such as yourself would belittle QM and orthodox scientific views.

-DrC
 
  • #47
But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present day knowledge, which is the purpose of true scientific effort.

I wasn't writing a scientific paper here but merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings here are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way.

Again, I didn't create any theory, much less make claims about "my" theory. I was referring you to results that exist, in particular Barut's and Jaynes' work in QED and Marshall & Santos's in Quantum Optics. I cited papers and links so you can look them up, learn some more and, if you're doubtful, verify whether I made anything up.

The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.

That's its basic defect, the incompleteness. On the other hand, the claim that any local field theory must necessarily contradict it depends on how you interpret "it". As you know, there have been impossibility proofs since von Neumann. Their basic problem was and is in excessively generalizing the interpretation of the formalism, requiring any future theory to satisfy requirements not implied by any known experiment.

Among such generalizations, the remote noninteracting collapse (the projection postulate applied to non-interacting subsystems at spacelike intervals) is the primary source of the nonlocality in nonrelativistic QM. If one ignores the trivial kinds of nonlocality arising from nonrelativistic approximations to EM interactions (such as the instantaneous action-at-a-distance Coulomb potential), the generalized projection postulate is the sole source of nonlocality in nonrelativistic QM. Bell's QM prediction (which assumes that the remote subsystem will "collapse" into a pure state, with no interaction and at a spacelike interval) doesn't follow without this remote projection postulate. The only test for that generalization of the projection postulate is the Bell inequality test.

When considering optical experiments, the proper formalism is Quantum Optics (although nonrelativistic QM is often used as a heuristic tool here). The photon pair Bell QM prediction is derived here in a more rigorous way (as Glauber's two point correlations, cf [5] for the PDC pair), which makes it clearer that no genuine nonlocality is taking place, in theory or in experiments. The aspect made obvious is that the cos^2(a) (or sin^2(a) in [5]) correlation is computed via Glauber's 2 point "correlation" of normal-ordered (E-) and (E+) operators. What that means is that one is predicting prefiltered relations (between angles and coincidence rates) which filter out any 3 or 4 point events (there are 4 detectors). The use of Glauber's normally ordered G2() further implies that the prediction is made for still additionally filtered data, where the unpaired singles and any accidental coincidences will be subtracted.

Thus, what was only implicit in the nonrelativistic QM toy derivation (and what required a generalized projection postulate, while no such additional postulate is needed here) becomes explicit here -- the types of filtering needed to extract the "signal" function cos^2(a) or sin^2(a) ("explicit" provided you understand what Glauber's correlation and detection theory is and what it assumes, cf [4] and the points I made earlier about it).

Thus, Quantum Optics (which is QED applied to optical wavelengths, plus the detection theory for square-law detectors, plus Glauber's filtering conventions) doesn't really predict the cos^2(a) correlation; it merely predicts the existence of the cos^2(a) "signal" buried within the actual measured correlation. It doesn't say how much is going to be discarded since that depends on the specific detectors, lenses, polarizers, ... and all that real world stuff, but it says what kind of things must be discarded from the data to extract the general cos^2() signal function.

So, unlike the QM derivation, the QED derivation doesn't predict violation of Bell inequalities for the actual data, but only the existence of a particular signal function. While some of the discarded data can be estimated for a specific setup and technology, no one knows how to make a good enough estimate of all the data which is to be discarded by the theory in order to extract the "signal", to be able to say whether the actual correlation can violate Bell's inequalities. And, on the experimental side, no one has so far obtained an experimental violation either.

The non-locality of the violation by the filtered "correlations" doesn't imply anything in particular regarding non-locality of correlations, since the Quantum Optics filtering procedure is by definition non-local -- to subtract "accidentals" you need to measure the coincidence rate with the source turned off, but that requires data collection from distant locations. Similarly, to discard 3- or 4-detection events, you need to collect data from distant locations to know it was a 3rd or 4th click. Or to discard unpaired singles, you need the remote fact that none of the other detectors had triggered. Formally, this nonlocality is built into the Glauber correlation functions by virtue of the disposal of all vacuum photon terms (from the full perturbative expression for multiple detections), where these terms refer to photon absorptions at different spacelike separated locations (e.g. an "accidental" coincidence means a term which has one vacuum photon absorbed on the A+ detectors and any photon, signal or vacuum, on the B+/- detectors; any such term is dropped in the construction of Glauber's filtering functions Gn()).


1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different?

The correct formulas for the time evolution of n detectors interacting with the quantized EM field are only given as a generalized interaction in a perturbative expansion, such as ref [6], eq's (4.29)-(4.31). They are of no use in practice since there is too much unknown to do anything with them. The simplified versions, but just as impractical, are Glauber's [4], Lect. V, computations of scattering amplitudes (eq's 5.2-5.6); then he handwaves his way into the filtered version, starting at eq 5.7. The more rigorous ref [6] (sec IV, pp 327), after using the same approximations, acknowledges regarding the Glauber-Sudarshan correlation functions: "A question which remains unanswered is under what circumstances and how this simplification can be justified."

The semiclassical results, which compute the regular, non-adjusted correlations, are derived in [7]. The partially adjusted correlations (with only the local type of subtractions made) are the same as the corresponding QED ones (cf. eq. 4.7, which only excludes single-detector vacuum effects, not the combined nonlocal terms; one could do such removals by introducing some fancy algorithm-like notation, I suppose). Again, as with the QED formulas, these are too general to be useful.

What is useful, though, is that these are semiclassical results, thus completely non-mysterious and transparently local, no matter what the specifics of detector design or materials are. The fields themselves (EM and matter fields) are the "hidden" (in plain sight) variables. Any further non-local subtractions on data made from there on are the only source of non-locality, which is entirely non-mysterious and non-magical.

In principle one could say the same for 2nd quantized fields (separate local from non-local terms and discard only the local one as a design property of a single detector), except that now there is an infinitely redundant overkill in the number of such variables compared to the semiclassical fields (1st quantized, the matter field + EM field). But as a matter of principle, such equations do evolve the system fully locally, so there can be no non-local effect deduced from them, except as an artifact of terminological conventions of, say, calling some later non-locally adjusted correlations still "correlations", which then would become non-local "correlations".


Your value for 0 degrees? Your value for 22.5 degrees? Your value for 45 degrees? Your value for 67.5 degrees?
Your value for 90 degrees?


Get your calculator, set it to DEG mode, enter your numbers and for each press cos, then x^2. That's what the filtered "signal" functions would be. But, as explained, that implies nothing regarding the Bell inequality violations (which refer to plain data correlations, not some fancy term "correlations" which includes non-local adjustments). As to what any real, nonadjusted data ought to look like, I would be curious to see some. (The experimenters in this AJP paper would not give out a single specific count used for their g2 if their life depended on it.)
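
For concreteness, the calculator exercise spelled out (this is just cos^2(theta), the shape of the filtered "signal" function discussed above; it says nothing about raw coincidence counts):

Code:
import math

for deg in (0, 22.5, 45, 67.5, 90):
    print(f"{deg:5.1f} deg -> cos^2 = {math.cos(math.radians(deg)) ** 2:.4f}")
# 0.0 -> 1.0000, 22.5 -> 0.8536, 45.0 -> 0.5000, 67.5 -> 0.1464, 90.0 -> 0.0000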

All I know is that, from the theory of multiple detections, QED or semiclassical, there is nothing non-local about them.


I ask because I would like to determine whether you agree or disagree with the predictions of QM.

I agree that filtered "signal" functions will look like QM or Quantum Optics prediction (QM is too abstract to make fine distinctions, but QO "prediction" is explicit in that this is only a "signal" function extracted from correlations, and surely not the plain correlation between counts). But they will also look like the semiclassical prediction, when you apply the same non-local filtering on semiclassical predictions. Marshall and Santos have SED models that show this equivalence for atomic cascade and for PDC sources (see their numerous preprints in arXiv, I cited several in our earlier discussions).

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM?

This is a quantum optics experiment. Nonrelativistic QM can't tell you precisely enough (other than by postulating nonlocal collapse). QO derives what happens, and as explained nothing nonlocal happens with the plain correlations. No one has such an experiment or a QED/QO derivation of a nonlocal prediction (assuming you understand what the Gn() "correlations" represent and you don't get misled by the wishful terminology of QO).

I also explained in the previous message why the "quantum" g2=0 for the 1 photon state (AJP.8) is a trivial kind of "anticorrelation". That the AJP authors (and a few others in QO) misinterpreted it, that's their problem. Ask them why, if you're curious. Quantum Opticians have been known to suffer delusions of magic, as shown by the Hanbury Brown and Twiss affair, when HBT, using semiclassical theory, predicted the HBT correlations in the 1950s. In no time the "priesthood" jumped in, papers "proving" it was impossible came out, experiments "proving" HBT were wrong were published in a hurry. It can't be so since photons can't do that... Well, all the mighty "proofs", experimental and theoretical, turned out fake, but not before Purcell published a paper explaining how photons could do it. Well, then, sorry HBT, you were right. But... and then in 1963 came Harvard's Roy Glauber with his wishful terminology "correlation" (for a filtered "signal" function) to confuse students and the shallower grownups for decades to come.


Yes, because QM makes specific predictions which allow it to be falsified.

The prediction of a form for the filtered "signal" function has no implication for Bell's inequalities. The cos^2() isn't uniquely a QM or QED prediction for the signal function. The semiclassical theory predicts the same signal function. The only difference is that QO calls its "signal" function a "correlation" but still defines it as a non-locally post-processed correlation function. The nonlocality is built into the definition of the Gn() "correlation".

So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed

It is not flawed. It merely doesn't correspond operationally to eq (8) & (11) with the "single photon" input, |Psi.1>=|T>+|R>, which yields g2=0. To understand that, read again the previous message (and the earlier one referred to there) and check Glauber's [4] to see what (8) and (11) mean operationally. As I said, |Psi.1> has to be absorbed as a whole by one detector. But to do that, it has to interact with the detector, all of its field. In [4] there is no magic instant collapse; it is QED, relativistic interaction, and "collapse" here turns out to be plain photon absorption, with all steps governed by the EM field interacting with the atom. It just happens that the nonzero part of the field of |Psi.1> is spread out across the two non-contiguous regions T and R. But it is still "one mode" (by definition, since you plug into (AJP.11) the 1 mode state for the given basis operator). A detector which by definition has to capture the whole mode and leave the vacuum state as a result (the Glauber detector to which eq. 8-11 apply), in relativistic theory (QED) has to interact with the whole mode, all of its field. In QED dynamics there is no collapse, and AJP.8-11 were the result of a dynamical derivation (cf. [4]), not a postulate where you might twist it and turn it, so it is clear what they mean -- they mean absorption of the whole photon |Psi.1> via the pure dynamics of the quantized EM field and a matter-field detector (they don't 2nd quantize the matter field in [4]).

After all, you specifically say one photon is really many photons.

I said that the photon number operator [n] is basis dependent: what is "one photon" in one basis need not be "one photon" in another basis. Again, the car speed example -- is it meaningful to argue whether my car had a "speed number" 2300 yesterday at 8AM?

In practice, the basis is selected to best match the problem geometry and physics (such as the eigenstates of the noninteracting Hamiltonian), in order to simplify the computations. There is much convention a student learns over the years so that one doesn't need to spell out at every turn what is meant (which can be a trap for novices or shallow people of any age). The "Glauber detector" which absorbs "one whole photon" and counts 1 (for which his Gn() apply, thus eq's AJP.8-11) is therefore also ambiguous in the same sense. You need to define the modes before you can speak of what that detector is going to absorb. If you say you have the state |Psi>=|T>+|R> and this state is your "mode" (you can always pick it as one of the basis vectors, since it is normalized), the "one photon", then you need to use that basis for the photon number operator [n] in AJP.11, and then that gives you g2=0. But all these choices also define how your Glauber detector for this "photon" |Psi> is to operate here -- it has to be spread out to interact with and absorb (via EM field-matter QED dynamics) this |Psi>.


3. In fact, can you provide any useful/testable prediction which is different from orthodox QM?

I am not offering a "theory", just explaining the misleading QO terminology and the confusion it could and does cause. I don't have "my theory's predictions", but what I am saying is that, when the QO terminological smoke and mirrors are removed, there is nothing non-local predicted by their own formalism. The non-adjusted correlations will always be local, i.e. for them it will be g2>=1. And they will have the same value in the semiclassical and the QED computation, at least to the alpha^4 perturbative QED expansion (if Barut's semiclassical theory is used), thus 8+ digits of precision will be the same, possibly more (unknown at present).

What the adjusted g2 will be depends on the adjustments. If you make non-local adjustments (requiring data from multiple locations to compute the amounts to subtract), yes, then you get some g2' which can't be obtained by plugging only locally collected counts into (AJP.14). So what? That has nothing to do with non-locality. It is just the way you define your g2'.

Only if you do what the AJP paper does (along with the other "nonclassicality" claimants in QO) and label with the same symbol the g2 for the "classical" (and also nonadjusted) correlation and the g2 for the "quantum" (and also adjusted "correlation", or "signal" function Gn) models, and then a few paragraphs later manage to forget (or, more likely, never knew) the difference in the "and also" parts and decide the sole difference was "classical" vs "quantum", will you succeed in the self-deception that you have shown "nonclassicality." Otherwise, if you label apples 'a' and oranges 'o', you won't have to marvel at the end at why 'a' is different from 'o', as you do when you ask why the g2 from the classical case differs from the g2 from the quantum case and then "conclude" that it must be something "nonclassical" in the quantum case that made the difference.


--- Ref

[5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment", Phys. Rev. Lett. 61(1), 50-53 (1988).

[6] P.L. Kelley and W.H. Kleiner, "Theory of electromagnetic field measurement and photoelectron counting", Phys. Rev. 136, A316-A334 (1964).

[7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations", Proc. Phys. Soc. 84, 435-444 (1964).
 
  • #48
nightlight said:
Using raw counts in eq (14), they could only get g2>=1.

You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??

I can easily provide you with such a series !
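
For instance, a toy click series of the kind Patrick means (purely illustrative numbers, not data from any detector): let G fire in every window while T and R fire in strictly alternating windows, never together, so the triple count vanishes.

Code:
n = 10_000
g = [1] * n                            # gate fires in every window
t = [i % 2 for i in range(n)]          # T fires only in the odd-numbered windows
r = [1 - x for x in t]                 # R fires exactly when T does not

n_g   = sum(g)
n_gt  = sum(gi and ti for gi, ti in zip(g, t))
n_gr  = sum(gi and ri for gi, ri in zip(g, r))
n_gtr = sum(gi and ti and ri for gi, ti, ri in zip(g, t, r))

print(n_gtr * n_g / (n_gt * n_gr))     # 0.0, far below 1

Whether such a series can coexist with the same T and R samples also showing full interference is exactly what the following posts argue about.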

cheers,
Patrick.
 
  • #49
vanesch said:
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??

In fact, without a lot of "quantum optics", the very fact of having "intensity threshold detectors" which give me a small P_gtr/(P_gt * P_gr) is already an indication that these intensities are not given by the Maxwell equations.
The reason is the following: in order to be able to have a stable interference pattern by the waves arriving at T and R, T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated, when there is an intensity at T, there isn't any at R, and vice versa, there is never a moment when there is sufficient intensity from both to interfere. If the modulation depth is about 100%, this indicates that the intensities are essentially identical at T and R, on the time scale of the intensity detector (here, a few ns).
Identical intensities on this time scale is necessary to obtain extinction of intensity in the interference pattern (the fields have to cancel at any moment, otherwise some intensity is left).
So no matter how your detector operates, if it gives you a positive logical signal ABOVE an intensity theshold, and a negative logical signal BELOW an intensity threshold, these logical signals have to be correlated by about 100%.

Taking an arbitrary moment in time, looking at the probabilities of having T AND R = 1, T = 1, and R = 1, and calculating P_TR / (P_T P_R) is nothing else but an expression of this correlation of intensities, needed to obtain an interference pattern.
In fact, the probability expressions have the advantage of taking into account "finite efficiencies", meaning that to each intensity over the threshold corresponds only a finite probability of giving a positive logical signal. That's easily verified.

Subsampling these logical signals with just ANY arbitrary sampling sequence G gives you of course a good approximation of these probabilities. It doesn't matter if G is correlated or not, with T or R, because the logical signals from T and R are identical. If I now sample only at times given by a time series G, then I simply find the formula given in AJP (14): P_gtr/ (P_gt P_gr).
This is still close to 100% if I have interfering intensities from T and R.
No matter what time series. So I can just as well use the idler signal as G.

Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator.
The reason is that not the entire intensity function of R and T is usable. Most intensities are NOT correlated.
But that doesn't change the story: the only thing I now have to reproduce, is that I have an interference pattern from R and T when I have G-clicks. Then the same explanation holds, but only for the time windows defined by G.

If I have a time series G, defining time windows, in which I can have an interference pattern from T and R with high modulation, the intensities during T and R need to be highly correlated. This means that the expression P_GTR/(P_GT P_GR) needs to be close to 1.
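
A hedged toy check of that subsampling point (0/1 logical signals only, no modelling of the fields or of the real gate): if the T and R threshold signals are identical, as the full-visibility interference argument above requires, the eq. (14) estimator N_GTR*N_G/(N_GT*N_GR) cannot fall below 1, whichever gate series G is used for the subsampling.

Code:
import random

rng = random.Random(3)
n = 100_000
t = [rng.random() < 0.3 for _ in range(n)]   # "intensity over threshold" signal at T
r = list(t)                                  # identical logical signal at R

def g2_est(g, t, r):
    n_g   = sum(g)
    n_gt  = sum(a and b for a, b in zip(g, t))
    n_gr  = sum(a and b for a, b in zip(g, r))
    n_gtr = sum(a and b and c for a, b, c in zip(g, t, r))
    return n_gtr * n_g / (n_gt * n_gr)

g_uncorrelated = [rng.random() < 0.1 for _ in range(n)]   # gate ignores T and R
g_correlated   = list(t)                                  # gate tracks T and R exactly

print(g2_est(g_uncorrelated, t, r))   # about 1/0.3 = 3.3, i.e. >= 1
print(g2_est(g_correlated, t, r))     # exactly 1.0; never anywhere near 0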

So if I succeed, somehow, to have the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimate of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.

cheers,
Patrick.
 
  • #50
nightlight said:
I wasn't writing a scientific paper here but merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings here are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

...

--- Ref

[5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment", Phys. Rev. Lett. 61(1), 50-53 (1988).

[6] P.L. Kelley and W.H. Kleiner, "Theory of electromagnetic field measurement and photoelectron counting", Phys. Rev. 136, A316-A334 (1964).

[7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations", Proc. Phys. Soc. 84, 435-444 (1964).


Thank you for the courtesy of your response. I do not agree with your conclusions, but definitely I want to study your thinking further.

And I definitely AGREE with what you say about "sharpening" above... I get a lot out of these discussions in the same manner. Forces me to consider my views in a critical light, which is good.
 
  • #51
vanesch said:
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??
I can easily provide you with such a series !

It is trivial to have a beam splitter give you nearly perfectly anticorrelated data (e.g. vary the polarization of the optical photons randomly and then set the photodetectors to very low noise so each picks up only the photons with polarization nearly parallel to the PBS). Clauser's test [2] produced one such series.

It can't be done if you also require that T and R are superposed (as opposed to a mixture) and that they carry equal energy in each instance. That part is normally verified by interfering the two beams T and R.

The authors [1] have such an experiment but they didn't test the T and R EM field samples used for the anticorrelation part. Instead they stuck an interferometer in front of the beam splitter and showed that its T' and R' interfere, but that is a different beam splitter, and a different coincidence setup with entirely unrelated EM field samples being correlated. It is essential to use the same samples of the EM field from T and R (or at most propagated by r/t adjustments for extended paths), and to use the same detector and time window settings, to detect the interference. The "same" samples means: extracted the same way from the G events, without subselecting and rejecting based on data available away from G (such as the content of detections on T and R, e.g. to reject unpaired singles among the G events).

The QO magic show usually treats the time windows (defined via the coincidence circuit settings) and the detector settings as free parameters they can tweak on each try till it all "works" and the magic happens as prescribed. It is easy to make magic when you can change these between runs or polarizer angles. Following their normal engineering signal-filtering reporting conventions, they do not report in their papers the bits of info which are critical for this kind of test (although routine for their engineering signal processing and "signal" filtering) -- whether the same samples of the fields were used. (This was a rare one with a few details, and the trick which does the magic immediately jumps out.) Combine that with the Glauberized jargon, where "correlation" isn't correlation, and you've got all the tools for an impressive Quantum Optics magic show.

They've been pulling the leg of the physics community since the Bell tests started with this kind of phony magic, and have caused vast quantities of nonsense to be written by otherwise good and even great physicists on this topic. A non-Glauberized physicist uses the word correlation in the normal way, so they invariably get taken in by Glauber's "correlation", which doesn't correlate anything but is just an engineering-style filtered "signal" function extracted out of actual correlations via an inherently non-local filtering procedure (the standard QO subtractions). I worked on this topic for my masters and read all the experimental stuff available, yet I had no clue how truly flaky and nonexistent those violation "facts" were.
 
  • #52
T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated, when there is an intensity at T, there isn't any at R, and vice versa, there is never a moment when there is sufficient intensity from both to interfere.

Keep in mind that the 2.5ns time windows are defined on the detectors' output signals, which are the electrical-current counterparts of the optical pulses. The PDC optical pulses are thousands of times shorter, though.

Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator. ...
So if I succeed, somehow, to have the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimate of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.


The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you a super-Poissonian distribution for the PDC source and a Poissonian one for the coherent source.
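
A quick counting-statistics sketch of those two cases (toy distributions only, no field theory): a Poisson-distributed count has variance equal to its mean, while mixing the Poisson rate over a fluctuating, thermal-like intensity gives a super-Poissonian count with variance well above the mean.

Code:
import math
import random

rng = random.Random(5)

def poisson(lam):
    """Simple inverse-transform Poisson sampler (Knuth's method)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def mean_and_variance(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

n, mean_count = 100_000, 2.0
coherent = [poisson(mean_count) for _ in range(n)]                         # steady intensity
thermal = [poisson(rng.expovariate(1.0 / mean_count)) for _ in range(n)]   # fluctuating intensity

print(mean_and_variance(coherent))   # variance ~ mean            (Poissonian)
print(mean_and_variance(thermal))    # variance ~ mean + mean^2   (super-Poissonian)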

The main item to watch regarding the G-detection-conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:

Code:
           ----
signals  /...\
----------------------------------------------> times
        |--------|    Gate beam timing window
      |------|        Transmitted beam window
             |------| Reflected beam window

That obviously will give you a perfect anticorrelation while still allowing you to show perfect interference, if you change the T and R sampling windows for the interference test (and align them properly).

Even if you keep the windows the "same" for the interference test as for the anticorrelations, you can still get both, provided you partially overlap the T and R windows. Then, by tweaking the detector thresholds, you can still create the appearance of violating the classical visibility vs anticorrelation tradeoff.

The basic validity test should be to feed a laser split 50:50, instead of G and TR, into the very same setup -- optics, detectors, coincidence circuits -- and show that g2=1. If this gives you <1, the setup is cheating. Note that Grangier et al. in [3] had used chaotic light for this test and said they got g2>1, but for chaotic light g2 needs to be >= 2, so this is inconclusive. A stable laser should put you right on the boundary of "fair sampling", and it should be much easier to see cheating.
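
A minimal sketch of what that check should give (toy numbers of my own; constant intensity, independent triggers):

Code:
# Stable laser split 50:50, independent Poissonian triggers on DT and DR:
# the estimator g2 = N_gtr*N_g/(N_gt*N_gr) must come out at ~1. Anything
# clearly below 1 indicts the counting chain, not the light.
import numpy as np

rng = np.random.default_rng(3)
n_g = 1_000_000                  # gate windows
p = 0.02                         # trigger probability per window on each of DT, DR (assumed)

t = rng.random(n_g) < p
r = rng.random(n_g) < p
n_gt, n_gr, n_gtr = t.sum(), r.sum(), (t & r).sum()
print("laser check: g2 ~", round(n_gtr * n_g / (n_gt * n_gr), 2))   # ~ 1.0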

Also, the gratuitous separate sampling via a 3rd circuit for the triple coincidences is cheating all by itself. The triple coincidences should be derived (via software or an AND circuit) from the obtained GT and GR samples, not sampled and tuned on their own. Grangier et al. also used this third coincidence unit, but wisely chose not to give any detail about it.
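
To make the window issue concrete, here is a toy calculation (all numbers invented) in which every G pulse produces detections on BOTH T and R, i.e. a completely classical split, yet a separately tuned triple circuit whose T and R delays are pulled a few ns apart reports almost no triples, so g2 = N_gtr N_g / (N_gt N_gr) comes out near 0:

Code:
# Toy model (assumed jitter and delays). Every gate pulse fires both DT and DR.
import numpy as np

rng = np.random.default_rng(1)
n_g = 50_000                                  # gate (G) detections
w = 2.5                                       # ns, coincidence half-window (assumed)
dt_t = rng.normal(0.0, 1.0, n_g)              # DT firing time relative to G (1 ns jitter, assumed)
dt_r = rng.normal(0.0, 1.0, n_g)              # DR firing time relative to G

# Pair circuits, each tuned onto the real pulses:
n_gt = np.sum(np.abs(dt_t) < w)
n_gr = np.sum(np.abs(dt_r) < w)

# Triples derived from the SAME windows vs. a separately "tuned" triple circuit
# whose T and R delay settings are pulled 4 ns apart:
n_gtr_same  = np.sum((np.abs(dt_t) < w) & (np.abs(dt_r) < w))
n_gtr_tuned = np.sum((np.abs(dt_t + 4.0) < w) & (np.abs(dt_r - 4.0) < w))

for label, n_gtr in [("triples from the same windows", n_gtr_same),
                     ("separately tuned triple unit ", n_gtr_tuned)]:
    print(label, " g2 ~", round(n_gtr * n_g / (n_gt * n_gr), 3))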

"My" prediction (or rather it is just a QED/QO prediction, but with properly assigned operational mapping to the right experiment) is that the correctly gated G conditioned sample will also give you at best the Poissonian.

As explained, Glauber's g2=0 for the "single photon" state |T> + |R> is operationally completely misinterpreted by some Quantum Opticians who did the test, and dutifully parroted by the rest (although Glauber, or Chiao & Kwiat, or Mandel use much more cautious language, recognizing that you need the standard QO subtractions to drop into the non-classical g2). The distribution should be the same as if you separated a single cathode (large compared to the wavelengths but capturing the incident beam equally on both partitions) into two regions and tried to correlate the photoelectron counts from the two sides. The photoelectron counts will be Poissonian at best (g2=1).

Note that if you were to do the experiment and show that the unsubtracted correlations (redundant, since correlations as normally understood should be unsubtracted, but it has to be said here in view of QO Glauberisms) are perfectly classical, it wouldn't be long before everyone in QO would declare they knew it all along and pretend they never thought or wrote exactly the opposite. Namely, they would suddenly discover that a coherent pump, being a Poissonian superposition of Fock states, simply generates Poissonian PDC pulses (actually they'll probably be Gaussian), and the problem is solved. The g2=0 for single-photon states remains OK, it just doesn't apply to this source (they will keep searching for the magic source in other effects, so they'll blame the source, even though g2=0 for a Glauber detector of "one photon" |T>+|R>, as explained, is a trivial anticorrelation; they'll still maintain it is just a matter of finding the "right source" and continue with the old misinterpretation of g2=0; the QO "priesthood" is known for these kinds of delusions, e.g. look up the Hanbury Brown and Twiss comedy of errors, or the similar one with the "impossible" interference of independent laser beams, which also caused grand mal convulsions).

If you follow up another step from here, once you establish that the anticorrelation doesn't work as claimed but always gives g2>=1 on actual correlations, you'll discover that no optical Bell test will work either since you can't get rid of the double +/- Poissonian hits for the same photon without reducing detection efficiency, which then invalidates it from the other "loophole".

This is more transparent in the QO/QED derivations of Bell's QM prediction, where they use only the 2-point Glauber correlation ([post=531880]cf. [5] Ou,Mandel[/post]), thus explicitly discarding all triple and quadruple hits and making it clear they're using a sub-sample. The QO derivation is sharper than the abstract QM 2x2 toy derivation. In particular, the QO/QED derivation doesn't use the remote-subsystem projection postulate but derives the effective "collapse" as a dynamical process of photon absorption, thus a purely local and uncontroversial dynamical collapse. The additional explicit subtractions included in the definition of G2() make it clear that the cos^2() "correlation" isn't a correlation at all but an extracted "signal" function (a la a Wigner distribution reconstructed via quantum tomography), so one can't plug it into the Bell inequalities as is without estimating the terms discarded by Glauber's particular convention for the signal vs noise dividing line. With the generic QM projection-based 2x2 abstract derivation, all of that is invisible. Also invisible is the constraint (part of Glauber's derivation [4]) that the "collapse" is due to local field dynamics; it is just plain photon absorption through the local EM-atom interaction. The QO/QED derivation also shows explicitly how the non-locality enters into von Neumann's generalized projection postulate (which projects remote noninteracting subsystems) -- it is the result of the manifestly non-local data filtering procedures used in Glauber's Gn() subtraction conventions. That alone disqualifies any usage of such non-locally filtered "signal" functions as a proof of non-locality by plugging them into Bell's inequalities.

Earlier I cited [post=529372]Haus[/post] (who was one of the few wise Quantum Opticians; http://web.mit.edu/newsoffice/2003/haus.html ), whose point, roughly, was that this formalism is an idealization of the detection process and needs to be applied with a good dose of salt.
 
Last edited by a moderator:
  • #53
nightlight said:
Keep in mind that the 2.5 ns time windows are defined on the detectors' outputs, the electrical-current counterparts of the optical pulses. The PDC optical pulses are thousands of times shorter, though.

The point was that if the intensities have to be strongly correlated (not to say identical) on the fast timescale (in order to produce interference), then they will de facto be correlated on longer timescales (which are just sums of the smaller timescales).

The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditioned sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian statistics for the coherent source.

But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or when I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup, shown here with the PDC pulse alignments:

Of course you need to apply the SAME time window to G, T and R alike.
Or you have to make the windows such that, for instance, the GTR window is LARGER than the GR and GT windows, so that you get an overestimate of the quantity which you want to come out low.

But it is the only requirement. If you obtain interference effects within these time windows (by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt) then this cannot be generated by beams in the Maxwellian way.
And you don't need any statistical property of the G intervals, except for a kind of repeatability: namely that they behave in a similar way during the interference test (when detectors r and t are not present) and during the coincidence measurement. The G intervals can be distributed in any way.
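
A quick numeric check of this Maxwellian bound (a toy model with assumed efficiencies, not the paper's data): if within each G window both detectors see the same classical intensity (a 50:50 split of one field) and trigger independently with a probability proportional to it, the ratio equals <I^2>/<I>^2, which can never fall below 1, whatever the intensity distribution:

Code:
# Classical intensities + independent local triggers: the estimator is >= 1.
import numpy as np

rng = np.random.default_rng(2)
n_windows = 500_000
eta = 0.05                                   # detection probability per unit intensity (assumed)

for name, I in [("constant (laser-like)    ", np.ones(n_windows)),
                ("exponential (thermal)    ", rng.exponential(1.0, n_windows)),
                ("on/off pulsed, 10% duty  ", (rng.random(n_windows) < 0.1) * 10.0)]:
    p = np.clip(eta * I / 2.0, 0.0, 1.0)     # per-window trigger probability on each of T, R
    t = rng.random(n_windows) < p            # DT fires?
    r = rng.random(n_windows) < p            # DR fires, independently, given the intensity
    print(name, " g2 ~", round((t & r).mean() / (t.mean() * r.mean()), 2))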

cheers,
Patrick.
 
  • #54
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or when I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The subsample of T and R corresponding to the DG-defined windows can and does have different statistics for the T, R events (both for the T or R singles and for the TR pairs) than a random sample (unconditioned on the G events). For example, within the DG window samples T or R had singles rates of 4000 c/s, but only 250 c/s in the non-DG samples.
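
A toy illustration of that difference (rates invented, loosely echoing the figures above): place the 2.5 ns windows either on the DG detections or at random, and see how often a T count falls inside:

Code:
# Pulsed pairs plus background on T; G-gated windows vs. random windows (toy numbers).
import numpy as np

rng = np.random.default_rng(4)
duration = 10.0                     # s of simulated data
w = 2.5e-9                          # s, window width
pair_rate = 4000.0                  # G-T pair rate, c/s (assumed)
bg_rate = 250.0                     # unrelated background rate on T, c/s (assumed)

t_g = rng.uniform(0, duration, rng.poisson(pair_rate * duration))
t_t = np.sort(np.concatenate([
    t_g + rng.normal(0, 0.3e-9, t_g.size),                          # T partners of the G events
    rng.uniform(0, duration, rng.poisson(bg_rate * duration))]))    # background counts on T

def frac_windows_with_t_count(centers):
    lo = np.searchsorted(t_t, centers - w / 2)
    hi = np.searchsorted(t_t, centers + w / 2)
    return np.mean(hi > lo)

random_centers = rng.uniform(0, duration, t_g.size)
print("DG-gated windows containing a T count:", round(frac_windows_with_t_count(t_g), 3))
print("random windows containing a T count  :", round(frac_windows_with_t_count(random_centers), 6))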


But it is the only requirement. If you obtain interference effects within these time windows (by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt) then this cannot be generated by beams in the Maxwellian way.

Well, almost so. There is one effect which could violate this condition, provided you have very close detectors DT and DR (such as on the order of a wavelength apart), i.e. they would need to be atomic detectors. Then a resonant absorption on DT would distort the incident field within the near-field range of DT, and consequently DR would get a lower EM flux than with the DT absorption absent. Namely, when you have a dipole which resonates with the incident plane EM wave, the field it radiates superposes coherently with the plane-wave field, bending the Poynting vector toward the dipole and thus increasing the flux it absorbs far beyond the dipole size d, resulting in an absorbed flux from an area lambda^2 instead of d^2 (where d is the dipole size).

With the electron clouds of a detector (e.g. an atom) there is a positive feedback loop: the initial weak oscillations of the cloud (from the small leading fronts of the incident field) cause the above EM-sucking distortion, which in turn increases the amplitude of the oscillations, extending the reach farther (the dipole emits stronger fields and bends the flux more toward itself), thus enhancing the EM-sucking effect. So there is a self-reinforcing loop in which the EM-sucking grows exponentially, finally resulting in an abrupt breakdown of the electron cloud.

When you have N nearby atoms absorbing light, the net EM-sucking effect multiplies by N. But due to the initial phase differences of the electron clouds, some atom will have a small initial edge in its oscillations over its neighbors, and due to the exponential nature of the positive feedback, the EM-sucking into that single atom quickly gets ahead (like differing compound interest) and robs its neighbors of their (N-1) fluxes, thus enhancing the EM-sucking effect approximately N-fold compared to a single-atom absorption.

These absorptions will thus strongly anticorrelate between nearby atoms and create an effect similar to Einstein's needle radiation -- as if a pointlike photon had struck just one of the N atoms in each photo-absorption. There is a line of work ( http://www.ensmp.fr/aflb/AFLB-26j/aflb26jp115.htm ) which has over the last ten years developed a detailed photodetection theory based on this physical picture, capable of explaining the discrete nature of detector pulses (without requiring the point-photon heuristics).
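
Just to make that verbal picture concrete, here is a cartoon iteration (dynamics invented purely to mimic the description above, not derived from any field equations): a tiny initial edge plus flux-proportional positive feedback typically ends in one absorber taking essentially the whole flux:

Code:
# Winner-take-all cartoon: the share of flux an absorber captures grows with its
# oscillation amplitude, and the amplitude grows with the captured flux.
import numpy as np

rng = np.random.default_rng(5)
n_atoms = 10
amp = 1.0 + 0.01 * rng.random(n_atoms)       # nearly identical initial amplitudes
gain, steps = 1.0, 200                       # feedback strength and iterations (assumed)

for _ in range(steps):
    share = amp**2 / np.sum(amp**2)          # fraction of the total flux each atom captures
    amp = amp * (1.0 + gain * share)         # amplitude grows with the captured flux
    amp = amp / amp.max()                    # renormalize; only the ratios matter

share = amp**2 / np.sum(amp**2)
print("final flux shares, sorted:", np.round(np.sort(share)[::-1], 3))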

Regarding the experiment, another side effect of this kind of absorption is that the absorbing cloud will emit back about half the radiation it absorbed (toward the source, as if paying back the vacuum's 1/2 hv; the space behind the atom will have a corresponding reduction of EM flux, a shadow, as if scattering had occurred).

Although I haven't seen the following described, based on the general gist of the phenomenon it is conceivable that these back emissions, for a very close beam splitter, could end up propagating the positive feedback to the beam splitter, so that the incident EM field, which for each G pulse starts out equally distributed into T and R, superposes with these back-emissions at the beam splitter in such a way as to enhance the flux portion toward the back-emitting detector, analogously to the nearby-atom case. So T and R would start out equal during the weak leading front of the TR pulse, but the initial phase imbalance between the cathodes of DT and DR would amplify the difference and lead to anticorrelation between the triggers by the time the bulk of the TR pulse traverses the beam splitter. For the interference test, by contrast, there would be no separate absorbers on the T and R sides, just a single absorber much farther away, and the interference would still occur.

To detect this kind of resonant beam-splitter flip-flop effect one would have to observe the dependence of any anticorrelation on the distance of the detectors from the beam splitter. Using one-way mirrors in front of the detectors might block the effect if it exists at a given distance.
 
Last edited by a moderator:
  • #55
nightlight said:
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or when I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The subsample of T and R corresponding to the DG-defined windows can and does have different statistics for the T, R events (both for the T or R singles and for the TR pairs) than a random sample (unconditioned on the G events). For example, within the DG window samples T or R had singles rates of 4000 c/s, but only 250 c/s in the non-DG samples.

Yes, but that's not what I'm saying. Of course, in a general setup, you can have different statistics for T, R and TR depending on how you decide to sample them (G). What I'm saying is that if you use ANY sampling method of your choice, in such a way that you get interference between the T and R beams, THEN, if you use the same sampling method to do the counting, and you assume Maxwellian intensities which determine the "counting", you should find a "big number" for the formula N_gtr N_g / (N_gt N_gr), because this number indicates the level of intensity correlation between T and R in the time frames defined by the series G ; and you need a strong intensity correlation in order to get interference effects.
Now, you can object that this is not exactly what is done in the AJP paper ; that's true, but what they do is VERY CLOSE to this, and with some modification of the electronics you can do EXACTLY THAT.
I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

cheers,
Patrick.
 
  • #56
Yes, but that's not what I'm saying. Of course, in a general setup, you can have different statistics for T, R and TR depending on how you decide to sample them (G). What I'm saying is that if you use ANY sampling method of your choice, in such a way that you get interference between the T and R beams, THEN, if you use the same sampling method to do the counting, and you assume Maxwellian intensities which determine the "counting", you should find a "big number" for the formula N_gtr N_g / (N_gt N_gr), because this number indicates the level of intensity correlation between T and R in the time frames defined by the series G ; and you need a strong intensity correlation in order to get interference effects.

That's correct (with possible exception of resonant flip-flop effect).

Now, you can object that this is not exactly what is done in the AJP paper ; that's true, but what they do is VERY CLOSE to this, and with some modification of the electronics you can do EXACTLY THAT.

And you will get g2>1 (before any subtractions are done, of course).

I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

But this is not what the QO photon theory suggests (i.e. the customary interpretation of g2=0 for the single-photon case, which I argued is an incorrect interpretation, and that you won't get anything below 1). They claim you should get g2 nearly 0 with an equal T and R split in each instance (the equal split being demonstrated by showing high-visibility interference on the same sample). Are you saying you don't believe the QO interpretation of g2=0 for the "single photon" case?
 
  • #57
nightlight said:
I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

But this is not what the QO photon theory suggests (i.e. the customary interpretation of g2=0 for the single-photon case, which I argued is an incorrect interpretation, and that you won't get anything below 1). They claim you should get g2 nearly 0 with an equal T and R split in each instance (the equal split being demonstrated by showing high-visibility interference on the same sample). Are you saying you don't believe the QO interpretation of g2=0 for the "single photon" case?

Are you a priest ? I mean, you seem to have this desire to try to convert people, no ? :-))

Sorry to disappoint you. When I say: "close to 0 is not possible if you have clear interference patterns" I mean, when Maxwellian theory holds.
I think you will get something close to 0, and to me, the paper IS convincing because I think they essentially DO the right thing, although I can understand that for someone like you, thinking the priesthood is trying to steal the thinking minds of the poor students, there will always be something that doesn't fit, like these 6ns, or the way to tune the timing windows and so on.

But at least, we got to an experimentally accepted distinction:

IF we have timing samples of T and R, given by a time series G, and T and R (without the detectors) give rise to interference, and with the detectors they give rise to a very small number for N_gtr N_g / (N_gr N_gt), THEN you accept that any Maxwellian description goes wrong.

That's sufficient for me (not for you, you think people are cheating, I don't think they are). Because I know that the setup can give rise to interference (other quantum erasure experiments do that) ; I'm pretty convinced that the setup DOES have about equal time samplings of R and T (you don't, too bad) and that's good enough for me.
However, I agree with you that it is a setup in which you can easily "cheat", and as our mindsets are fundamentally different concerning that aspect, we are both satisfied. You are satisfied because there's still ample room for your theories (priesthood and semiclassical theories) ; I'm ok because there is ample room for my theories (competent scientists and quantum optics).
Let's say that this was a very positive experiment: satisfaction rose on both sides :-))

cheers,
Patrick.
 
  • #58
I'm pretty convinced that the setup DOES have about equal time samplings of R and T (you don't, too bad) and that's good enough for me.

Provided you use the same type of beam splitter for the interference and the anticorrelations. Otherwise, a polarizing beam splitter used for the anticorrelation part of the experiment (and a non-polarizing one for the interference), combined with variation in the incident polarization (which can be due to aperture depolarization), can produce anticorrelation (if the detector thresholds are tuned "right").

However, I agree with you that it is a setup in which you can easily "cheat", and as our mindsets are fundamentally different concerning that aspect, we are both satisfied. You are satisfied because there's still ample room for your theories (priesthood and semiclassical theories) ;

When you find an experiment which shows (on non-adjusted data) the anticorrelation and the interference, with timing windows and polarization issues properly and transparently addressed, let me know.

Note that neither the semiclassical [post=529314]nor the QED[/post] model predicts g2=0 for their two-detector setup with separate counting. The g2=0 in Glauber's detection model and signal-filtering conventions (his Gn()'s, [4]) is a trivial case of anticorrelation for a single detector absorbing the full "single photon" |Psi.1> = |T> + |R> as a plain QED local interaction, thus extending over the T and R paths. It has nothing to do with the imagined anticorrelation in the counts of two separate detectors, neither of which is physically or geometrically configured to absorb the whole mode |Psi.1>. So if you obtain such an effect, you had better brush up on your Swedish and get a travel guide for Stockholm, since the effect is not predicted by the existing theory.

If you wish to predict it from the definition of G2() (based on the QED model of joint detection, e.g. [4] or the similar one in Mandel's QO), explain on what basis, for the absorption of a single mode |Psi.1>=|T>+|R>, you assign DT (or DR) as being such an absorber. How does DT absorb (as a QED interaction) the whole mode |Psi.1>, including its nonzero region R? The g2=0 describes the correlation of events (a) and (b), where (a) = the complete EM field energy of |Psi.1> being absorbed (dynamically and locally) by one Glauber detector, and (b) = no absorption of this EM field by any other Glauber detector (2nd, 3rd, ...). Give me a geometry of such a setup that satisfies the dynamical requirement of such an absorption (all QED dynamics is local). { Note that [4] does not use the remote projection postulate on the photon field. It only uses transition probabilities for photo-electrons, which involve a localized projection of their states. Thus [4] demonstrates what von Neumann's abstract non-local projection postulate of QM really means operationally and dynamically for the photon EM field -- the photon field simply follows the local EM dynamical evolution, including absorption, while its abstract non-local projection is an entirely non-mysterious consequence of the non-local filtering convention built into the definition of Gn() and its operational counterpart, the non-local QO subtraction conventions. } Then show how it corresponds to this AJP setup, or to Grangier's setup, as you claim it does.

So the absence of a theoretical basis for claiming g2=0 in the AJP experiment is not some "new theory" of mine, semiclassical or otherwise; it simply follows from a careful reading of the QO foundation papers (such as [4]) which define the formal entities such as Gn() and derive their properties from QED.

I'm ok because there is ample room for my theories (competent scientists and quantum optics).
Let's say that this was a very positive experiment: satisfaction rose on both sides :-))


Competent scientists (like Aspect, Grangier, Clauser, ...) know how to cheat more competently, not like these six. The chief experimenter here would absolutely not reveal a single actual count used for their g2 computations, for either the reported 6 ns delays or the alleged "real" longer delay, nor disclose what this secret "real" delay was.

You write to them and ask for the counts data and what this "real" delay was (and how was it implemented? as an extra delay wire? inserted where? what about the second delay, from DG to DR: was it still 12 ns, or did that one, too, have a "real" version longer than the published one? what was the "real" value for that one? why was that one wrong in the paper? a paper-length problem again? but that one was already 2 digits long, so was the "real" one a 3-digit delay?) and tell me. The "article was too long" excuse can't possibly explain how, say, a 15 ns delay became 6 ns in 11 places in the paper (to say nothing of the 12 ns one, which would also have to be longer if the 6 ns delay was really more than 10 ns).

If it was an irrelevant detail, why have a 6 ns figure (which they admit is wrong) spread all over the paper? Considering they wrote the paper as an instruction for other undergraduate labs to demonstrate their imaginary effect, it seems this overemphasized 6 ns delay (which was, curiously, the single most repeated quantity in the whole experiment) was the actual magic ingredient they needed to make it "work" as they imagined it ought to.
 
Last edited:
  • #59
vanesch said:
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

Just realized that MWI won't be able to do that for you here. Unlike the abstract QM entanglement between the apparatus (photoelectron) and the "photon" (quantized EM mode), in this case, once you work out a dynamical model of the detection process, as several QED derivations have done (such as [4] Lect 5), the "measurement" here has no QM projection of the EM field state and the EM field doesn't become entangled with its "apparatus" (e.g. the photo-electrons), so von Neumann's abstraction (QM's caricature of the QED) has a plain dynamical explanation. Of course, each electron does entangle, as a QM abstraction, with the rest of its apparatus (the amplifier; here it is a local, interacting entanglement). But the latter is now a set of several separate, independent measurements (of photo-electrons by their local amplifiers) at different spacelike locations. That doesn't help you remove the dynamically obtained distribution of creations of these new QM "measurement setups", which has no "exclusivity" -- these new setups are created by local dynamics independently of each other, e.g. the creation of a new "setup" on the DR cathode does not affect the rates (or probabilities) of creation or non-creation of a new "setup" on the DT cathode.

The only conclusion you can then reach from this (without getting into the reductio ad absurdum of predicting different observable phenomena based on different choices of von Neumann's measurement-chain partition) is that for any given incident fields on T and R (recalling that in [4] there was no non-dynamical EM field vanishing, or splitting into vanished and non-vanished parts, but just plain local EM-atom resonant absorption), the photo-detections at the two locations will be independent of events outside their light cones. Thus the classical inequality g2>=1 will hold, since the probabilities of ionization are now fixed by the incident EM fields, which never "vanished" except through the local, independent dynamics of EM resonant absorption.

This conclusion is also consistent with the explicit QED prediction for this setup. As explained in the previous message, QED predicts the same g2>=1 for the "one photon" setup of the AJP experiment with two separate detectors DT and DR. The Thorn et al. "quantum" g2 (eq. AJP.8), which I will label g2q, doesn't have a numerator (Glauber's G2()) which can be operationally mapped to their setup and their definition of the detectors (separate counting) for the "one photon" incident state |Psi.1> = |T> + |R>. The numerator of g2q, which is G2(T,R), is defined as a term (extracted from the perturbative expansion of the EM - cathode atom interactions) that contains precisely two whole-photon absorptions by two Glauber detectors GD.T and GD.R, which cannot be operationally assigned to DT and DR since neither DT nor DR can (physically or geometrically) absorb the whole mode |Psi.1> = |T> + |R> (normalization factors ignored).

The experimental setup with the DT and DR of the AJP paper interpreted as some Glauber detectors GD.T' and GD.R' can be mapped to Glauber's scheme (cf. [4]) and to (AJP.8-11), but only for a mixed input state such as Rho = |T><T| + |R><R|. In that case DT and DR can absorb the whole mode, since in each try the entire incident EM field mode is localized at either one place or the other; thus their detectors DT and DR can be operationally identified with GD.T' and GD.R', (AJP.8) applies, and g2q=0 follows. But this is a trivial case as well, since the classical g2c' for this mixed state is also 0.
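
As a trivial numeric companion to that point (my own sketch): a classical source which sends each whole pulse to either T or R also gives g2=0, with nothing non-classical involved (and, of course, no T-R interference is possible for it):

Code:
# Classical "switched pulse" source: the whole pulse goes one way per trial.
import numpy as np

rng = np.random.default_rng(6)
n_trials = 100_000
eta = 0.6                                    # detection efficiency (assumed)

goes_to_t = rng.random(n_trials) < 0.5       # the whole pulse picks ONE output per trial
t = goes_to_t & (rng.random(n_trials) < eta)
r = ~goes_to_t & (rng.random(n_trials) < eta)
print("classical switched-pulse g2 =", (t & r).mean() / (t.mean() * r.mean()))   # 0.0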

The superposed |Psi.1> = |T> + |R>, which was their "one photon" state requires different Glauber's detectors, some GD1 and GD2 to compute its G2(x1,x2). Each of these detectors would have to interact and absorb the entire mode to register a count 1, in which case it will leave vacuum state for EM field, and the other Glauber detector will register 0 (since Glauber's detectors, which are the operational counterparts for Gn(), by definition don't trigger on vacuum photons). That again is "anticorrelation" but of a trivial kind, with one detector covering both paths T and R and always registering 1, and the other detector shadowed behind or somewhere else altogether and always registering 0. The classical g2c for this case is obviously also 0.

Note that Glauber, right after defining Gn() ([4] p 88), says: "As a first property of the correlation functions we note that when we have an upper bound on the number of photons present in the field then the functions Gn vanish identically for all orders higher than a fixed order M [the upper limit]." He then notes that this state is (technically) non-classical, but doesn't think, in the remaining 100 pages, to offer this beam-splitter example as an operational counterpart illustrating this technical non-classicality. Probably because he realized that the two obvious experimental variants of implementing this kind of g2=0 described above have classical counterparts with the same prediction g2=0 (the mixed state, and the large GD1 covering both T and R). In any case, he certainly didn't leap, in this or the other QO founding papers, to an operational mapping of the formal g2=0 (of AJP.8-11) for the one-photon state onto the setup of this AJP experiment (simply because QED doesn't predict any such "anticorrelation" phenomenon here, the leaps of a few later "QO magic show" artists and their parroting admirers among "educators" notwithstanding).

Clauser, in his 1974 experiment [2], doesn't rely on g2 from Glauber's theory but comes up with an anticorrelation based on 1st-order perturbation theory of the atomic surface of a single cathode interacting with the incident EM field. It was known, from Fermi's earlier 1932 results { Fermi believed in Schrodinger's interpretation of Psi as a physical field, a la EM, and of |Psi|^2 as a real charge density, a seed which was developed later, first by Jaynes, then by Barut, into a fully working theory matching QED predictions to at least alpha^4 }, that there will be anticorrelation in the photo-ionizations of "nearby" atoms (within wavelength distances, the near-field region; I discussed the physical picture of this [post=533012]phenomenon earlier[/post]). Clauser then leaps from that genuine, but entirely semi-classical, anticorrelation phenomenon to far-apart detectors (certainly many wavelengths apart) and "predicts" anticorrelation, then goes on to show "classicality" being violated experimentally (again another fake, due to having random linear polarizations and a polarizing beam splitter, thus measuring the mixed state Rho = |T><T| + |R><R|, where the classical g2c=0 as well; he didn't report an interference test, of course, since there could be no interference on Rho). When Clauser presented his "discovery" at the next QO conference in Rochester NY, Jaynes wasn't impressed, objecting and suggesting that he needed to try it with circularly polarized light, which would avoid the mixed-state "loophole." Clauser never reported anything on that experiment, though, and generally clammed up on the great "discovery" altogether (not that it stopped, in the slightest, the quantum magic popularizers from using Clauser's experiment as their trump card, at least until Grangier's so-called 'tour de force'), moving on to "proving" the nonclassicality via Bell tests instead.

Similarly, Grangier et al. [3] didn't redo their anticorrelation experiment to fix the loopholes after the objections and a semiclassical model of their results from Marshall & Santos. That model was a straightforward SED description, which classically models the vacuum subtractions built into Gn(), so one can define the same "signal" filtering conventions as those built into Gn(). The non-filtered data doesn't need anything of that sort, though, since the experiments always give g2>=1 on non-filtered data (which is both the semiclassical and the QED prediction for this setup). Marshall & Santos also wanted (and managed) to replicate Glauber's filtered "signal" function for this experiment, which is a stronger requirement than just showing that the raw g2>=1.
 
Last edited:
  • #60
nightlight said:
Unlike the abstract QM entanglement between the apparatus (photoelectron) and the "photon" (quantized EM mode), in this case, once you work out a dynamical model for the detection process, as several QED derivations have done (such as [4] Lect 5), the "measurement" here has no QM projection of EM field state


You have been repeating that several times now. That's simply not true: quantum theory is not "different" for QED than for anything else.

The "dynamical model of the detection process" you always cite is just the detection process in the case of one specific mode which corresponds to a 1-photon state in Fock space, and which hits ONE detector.
Now, assuming that in the case there are 2 detectors, and the real EM field state is a superposition of 2 1-photon states (namely, 1/sqrt(2) for the beam that goes left on the splitter, and 1/sqrt(2) for the beam that goes right), you can of course apply the reasoning to each term individually, assuming that there will be no interaction between the two different detection processes.

What I mean is:
If your incoming EM field is a "pure photon state" |1photonleft>, then you can go and do all the local dynamics with such a state, and after a lot of complicated computations you will find that your LEFT detector gets into a certain state while the right detector doesn't see anything. I'm not going to consider finite efficiencies (yes, yes...); the thing ends up as |detectorleftclick> |norightclick>.

You can do the same for a |1photonright> and then of course we get |noleftclick> |detectorrightclick>.

These two time evolutions:

|1photonleft> -> |detectorleftclick> |norightclick>

|1photonright> -> |noleftclick> |detectorrightclick>

are part of the overall time evolution operator U = exp(- i H t)

Now if we have an incoming beam on a beam splitter, this gives to a very good approximation:

|1incoming photon> -> 1/sqrt(2) {|1photonleft> + |1photonright> }

And, if the two paths are then far enough (a few wavelengths :-) from each other so that we can then assume that the physical processes in the two detectors are quite independent, (so that there are no modifications in the hamiltonian contributions: no "cross interactions" between the two detectors), then, by linearity of U, we find that the end result is:

1/sqrt(2)(|detectorleftclick> |norightclick>+|noleftclick> |detectorrightclick>)

I already know your objections, where you are going to throw Glauber and other correlations around, and call my model a toy model etc...
That's a pity, because the above is the very essence which these mathematical tools put into work in more generality and if you were more willing, you could even trace that through that complicated formalism from the beginning to the end. And the reason is this: QED, and QO, are simply APPLICATIONS of general quantum theory.

cheers,
Patrick.
 
  • #61
That's simply not true: quantum theory is not "different" for QED than for anything else.

Well, then you need some more reading and thinking to do. Let me recall the paper by Haus (who was one of the top Quantum Opticians, from the legendary RLE lab at MIT) which I mentioned earlier (http://prola.aps.org/abstract/PRA/v47/i6/p4585_1):
Quantum-nondemolition measurements and the ``collapse of the wave function''

F. X. Kärtner and H. A. Haus
Department of Electrical Engineering and Computer Science and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

This paper intends to clarify some issues in the theory of quantum measurement by taking advantage of the self-consistent quantum formulation of nonlinear optics. A quantum-nondemolition measurement of the photon number of an optical pulse can be performed with a nonlinear Mach-Zehnder interferometer followed by a balanced detector. The full quantum-mechanical treatment shows that the shortcut in the description of the quantum-mechanical measurement, the so-called ``collapse of the wave function,'' is not needed for a self-consistent interpretation of the measurement process. Coherence in the density matrix of the signal to be measured is progressively reduced with increasing accuracy of the photon-number determination. The quantum-nondemolition measurement is incorporated in the double-slit experiment and the contrast ratio of the fringes is found to decrease systematically with increasing information on the photon number in one of the two paths. The ``gain'' in the measurement can be made arbitrarily large so that postprocessing of the information can proceed classically.

The point there, and in what I was saying, is not that QM is wrong, but that the remote "projection postulate" (collapse) is simply not telling you enough in its abstract postulate form, which only guarantees the existence of such a projection operation (one which implements 'an observable yielding one result'). Specifically, it doesn't tell you what kind of operations are involved in its implementation. If, for example, an operational implementation of an "observable" requires plain classical collection of data from multiple locations and rejections or subtractions based on the values obtained from remote locations, then one cannot base a claim of nonlocality on such a trivially non-local observable (since one can always add the same conventions and their communication channels to any classical model for the raw counts; or, more generally, defining extra data-filtering or post-processing conventions cannot turn raw counts that were previously compatible with a classical model into a result which excludes the 'classical' model).

The typical leap of this kind occurs in the elementary QM proofs of Bell's QM prediction and the use of an abstract "observable", say [AB] = [Pz] x [Pm], where the factor [Pz] measures polarization along z on photon A, and the factor [Pm] measures polarization along m on B. The leap then consists in assuming the existence of "ideal" and local QM detectors implementing the observable [AB] (i.e. yielding raw and purely local counts reproducing statistically the probabilities and correlations of the eigenvalues of the observable for a given state).

Therefore any such "theorems" of QM incompatibility with classical theories based on such "predictions" should be understood as valid 'modulo the existence of ideal and local detectors for the observables involved'. If one were to postulate such 'existence', then of course the theorems for the theory "QM+PostulateX" are indeed incompatible with any classical theory (which could, among other things, also mean that PostulateX is too general for our universe and its physical laws).

The abstract QM projection postulate (for the composite system) doesn't tell you, one way or the other, anything about operational aspect of the projection (to one result), except that the operation exists. But the usual leap (in some circles) is that if it doesn't say anything then it means the "measurement" of [AB] values can be done, at least in principle, with purely local counts on 4 "ideal" detectors (otherwise there won't be Bell inequality violation on raw counts).

The QED/QO derivation in [5] makes it plain (assuming an understanding of the Gn of [4]) that not only are all the nonlocal vacuum-effect subtractions (the "signal" function filtering conventions of QO, built into the standard Gn() definition) included in the prediction of, e.g., the cos^2(a) "correlation", but one also has to take upfront only the 2-point G2() (cf. eq (4) in [5]) instead of the 4-point G4(), even though there are 4 detectors. That means an additional nonlocal filtering convention was added, which requires removal of the triple and quadruple detections (in addition to the accidentals and unpaired singles already built into the G2() they used). Some people, attributing wishful meanings to the abstract QM observables, take this convention (of using G2 instead of G4) to mean that parallel polarizers will give a 100% correlated result. As the QED derivation [5] shows, they surely will correlate 100%, provided you exclude by hand all those results where they don't agree.

With QED/QO derivation one sees all the additional filtering conventions, resulting from the QED dynamics of photon-atom interactions (used in deriving Gn() in [4]), which are needed in order to replicate the abstract prediction of elementary QM, such as cos^2(a) "correlation" (which obviously isn't any correlation of any actual local counts at all, not even in principle).

The abstract QM postulates simply lack information about EM field-atom interaction to tell you any of it. They just tell you observable exists. To find out what it means operationally (which you need in order to make any nonlocality claims via Bell inequalities violations; or in the AJP paper, about violation of classical g2>=1), you need dynamics of the specific system. That's what Glauber's QO detection and correlation modelling via QED provides.

In other words, the "ideal" detectors which would yield the "QM predicted" raw counts (violating the classical inequalities) are necessarily nonlocal devices -- to make their trigger/no-trigger decisions these "ideal" detectors need extra information about results at remote locations. Thus you can't have the imagined "ideal" detectors that make decisions locally, not even in principle (e.g. how would an "ideal" local detector know that its trigger will be the 3rd or 4th trigger, so it had better stay quiet so that its "ideal" counts don't contain doubles and triples? or that its silence will yield an unpaired single, so it had better go off and trigger this time?). Even worse, they may need info from other experiments (e.g. measuring the 'accidental' rates, where the main source is turned off or shifted in the coincidence channel and the accumulated data are subtracted from the total "correlations").

The conclusion is then that Quantum Optics/QED don't make a prediction of violation of Bell inequality (or, as explained earlier, of the g2>=1 inequality). There never was (as is well known to the experts) any violation of Bell inequality (or any other classical inequality) in the QO experiments, either. The analysis of the Glauber's QO detection model shows that no violation can exist, not even in principle, using photons as Bell's particles, since no "ideal" local detector for photons could replicate the abstract (and operationally vague) QM predictions.

The "dynamical model of the detection process" you always cite is just the detection process in the case of one specific mode which corresponds to a 1-photon state in Fock space, and which hits ONE detector.

Well, that is where you're missing it. First, the interaction between EM field and cathode atoms is not an abstract QM measurement (and there is no QM projection of "photon") in this treatment. It is plain local QED interaction, so we can skip all the obfuscatory "measurement theory" language.

Now, the detection process being modeled by Glauber's "correlation" functions Gn() specifies a very particular kind of dynamical occurrence to define the counts that the Gn() functions are correlating. These counts, in a given 4-volume V(x), are QED processes of local absorption of the whole EM mode by the (ideal Glauber) "detector" GD, which means the full mode energy must be transferred to the GD, leaving no energy in that EM mode (other than vacuum). Energy transfers in QED are always local, which implies, in the case of a single-mode field (such as our |Psi.1> = |T> + |R>), that the entire flux of the EM field has to traverse the GD cathode (note that the EM field operators in the Heisenberg picture evolve via the Maxwell equations, for free fields and for linear optical elements such as beam splitters, mirrors, polarizers...).

Therefore, your statement about the mode "which hits ONE detector" needs to account for the fact that, for this "ONE" detector to generate a "Glauber count" of 1 (since that is what defines the Gn() used in this paper, e.g. AJP.8), it has to absorb the whole mode, the full EM energy of |Psi.1> = |T>+|R>. As you can easily verify from the picture of the AJP setup, DT is not a detector configured for such an operation of absorbing the full energy of the mode in question, |Psi.1>. You can't now start splitting universes and invoking your consciousness etc. This is just plain old EM field propagation. Explain how DT can absorb the whole mode |Psi.1>, its full energy, leaving just vacuum after the absorption. (And without that, your Glauber counts for AJP.8 will not be generated.)

It can't just grab it from the region R by willpower, be it Everett's or von Neumann's or yours. The only way to absorb it, and thus generate the Glauber's count +1, is by regular EM field propagation (via Maxwell equations) which brings the entire energy of the mode, its T and R regions, onto the cathode of DT. Which means DT has to spread out to cover both paths.

Therefore, the AJP setup doesn't correspond to any detector absorbing the whole incident mode, thus their setup doesn't implement Glauber's G2() in (AJP.8) for the single mode state |Psi.1> = |T> + |R>. Therefore, the count correlations of DT and DR are not described by eq. (AJP.8) since neither DT nor DR implement Glauber's detector (whose counts are counted and correlated by the AJP.8 numerator). Such detector, whose counts are correlated in (AJP.8), counts 1 if and only if it absorbs the full energy of one EM field mode.

Eq. (AJP.8), with their detector geometry, applies to the mixed incident state Rho = |T><T| + |R><R|. In that state, for each PDC pulse the full single mode switches randomly from try to try between the T and R paths, going only one way in each try, and thus the detectors in their configuration can indeed perform the counts described by their eq. AJP.8. In that case they would get g2=0 (which is also identical to the classical prediction for the mixed state), except that their EM state isn't that mixed state. They're just parroting the earlier erroneous interpretation of that setup for the input state |Psi.1>, and to make it "work" as they imagined, they had to cheat (as anyone else will have to, to get g2<1 on raw counts).

What I mean is: If your incoming EM field is a "pure photon state" |1photonleft>, then you can go and do all the local dynamics with such a state, and after a lot of complicated computations you will find that your LEFT detector gets into a certain state while the right detector doesn't see anything. I'm not going to consider finite efficiencies (yes, yes...); the thing ends up as |detectorleftclick> |norightclick>...

What you're confusing here is the behavior of their detectors DT and DR (which will trigger as configured, even for the single-mode field |Psi.1> = |T> + |R>) with the applicability of their eq. (AJP.8) to the counts their DT and DR produce. The detectors DT and DR will trigger, no one ever said they won't, but the correlations of these triggers are not described by their (AJP.8).

The G2(x1,x2) in the numerator of (AJP.8) describes the correlation of counts of "full mode absorptions" at locations x1 and x2 (which is more evident in the form AJP.9, although you need to read Glauber to understand precisely which dynamical conditions produce the counts entering AJP.8: only a full mode absorption, leaving the EM vacuum for the single-mode input, makes the count 1). But these cannot be their DT and DR, since neither of them is, for this input state, a full-mode absorber. And without that they can't apply (AJP.8) to the counts of their detectors. (You also need to recall that, in general, the input state determines how a detector has to be placed in order to perform as an absorber for that state -- of any kind, whether the full absorber of AJP.8 or the partial absorber which is what they had.)

You can do the same for a |1photonright> and then of course we get |noleftclick> |detectorrightclick>. These two time evolutions:

|1photonleft> -> |detectorleftclick> |norightclick>
|1photonright> -> |noleftclick> |detectorrightclick>

are part of the overall time evolution operator U = exp(- i H t)
Now if we have an incoming beam on a beam splitter, this gives to a very good approximation:

|1incoming photon> -> 1/sqrt(2) {|1photonleft> + |1photonright> }


You're replaying von Neumann's QM measurement model, which is not what is contained in the Glauber detections whose counts are used in (AJP.8-11) -- these contain additional specific conditions and constraints on Glauber's counts (counts of full "signal" field-mode absorptions). There is no von Neumann measurement on the EM field here (e.g. photon number isn't preserved in absorptions or emissions). The (AJP.8) is a dynamically deduced relation, with the precise interpretation given by Glauber in [4], resulting from the EM-atom dynamics.

Glauber's (AJP.8) doesn't apply to the raw counts of the AJP experiment's DT and DR for the single-mode input |Psi.1> = |T> + |R>, since neither DT nor DR as configured can absorb this full single mode. As I mentioned before, you can apply Glauber's detection theory here by properly defining the two Glauber detectors GD1 and GD2, configured to meet the requirements of G2(x1,x2) for this single-mode input state (which happens to span two separate regions of space). You simply define GD1 as a detector which combines (via a logical OR) the outputs of DT and DR from the experiment, thus treating them as a single cathode of odd shape. GD1 can then absorb the full mode (giving you a Poissonian count of photo-electrons, as the theory already predicts for a single photo-cathode).

Now, the second detector GD2 has to be somewhere else, but its x2 can't cover or block the volume x1 used by GD1. In any case, wherever you put it (without blocking GD1), it will absorb the vacuum state (left over from GD1's action) and its Glauber count will be 0 (GDs don't count anything for vacuum photons). Thus you will get the trivial case g2=0 (which is the same as the semi-classical prediction for this configuration of GD1, GD2).

The upshot of this triviality of the GD1+GD2 based g2=0 is that it illustrates a limitation of Glauber's definition of Gn() for these (technically) "nonclassical" states -- as Glauber noted in [4], the most trivial property of these functions is that they are, by definition, 0 when the input state has fewer photons (modes) than there are 'Glauber detectors'. Even though he noted this "nonclassicality" for the Fock states, he never tried to assign it an operational meaning in a setup like the DT and DR of this "anticorrelation" experiment; he knew it doesn't apply here except in the trivial manner of the GD1 and GD2 layout (or the mixed-state Rho layout), in which it shows absolutely nothing non-classical.

It may be puzzling how it can be "nonclassical" yet show nothing genuinely non-classical. This is purely a consequence of the convention Glauber adopted in his definition of Gn. Its technical "nonclassicality" (behavior unlike classical or mathematical correlations) is simply the result of the fact that these Gn() are not really correlation functions of any sequence of local counts at x1 and x2. Namely, their operational definition includes the correlation of the counts followed by the QO subtractions. Formally, this appears in his dropping of terms from the QED prediction for the actual counts. To quote him ([4] p 85):
... we obtain for this Un(t,t0) an expression containing n^n terms, which represent all the ways in which n atoms can participate in an n-th order process. Many of these terms, however, have nothing to do with the process we are considering, since we require each atom [his detector] to participate by absorbing a photon once and only once. Terms involving repetitions of the Hamiltonian for a given atom describe processes other than those we are interested in.

He then goes on to drop all the terms "we are not interested in" and obtains his Gn()'s and their properties. While they're useful in practice, since they do filter out the most characteristic features of the "signal" with all the "noise" removed (formally, and via the QO subtractions), they are not correlation functions of any counts, and they have only a trivial meaning (and value 0) in the cases of having more Glauber detectors than incident modes, or generally when we have detectors which capture only partial modes, such as DT and DR interacting with the |Psi.1> state in this experiment.

The Mandel, Wolf & Sudarshan semiclassical detection theory has identical predictions (Poissonian photo-electron counts, proportionality of the photo-electron counts to the incident intensity, etc.), but it lacks Glauber's "signal" function definition for multipoint detections (MWS do subtract the local detector's vacuum contribution from its own count). For multipoint detections they simply use the plain product of the detection rates of each detector, since these points are at spacelike separations, which gives you the "classical" g2>=1 for this experiment on the actual counts. And that is what you will always measure.

Glauber could have defined the raw count correlations as well, the same product as the MWS semiclassical theory, and derived it from his QED model, but he didn't (the QED unfiltered form is generally not useful for the practical coincidence applications due to great generality and "noise" terms). The correlation functions here would have the unordered operator products instead of normally ordered (as discussed in our earlier thread).

Note that the fact that the semiclassical model (or likewise any non-Glauberized QED model) uses products of trigger rates doesn't mean the counts can't be correlated. They can be, since the local trigger rates are proportional to the local intensities, and these intensities can correlate, e.g. as they do in this experiment between the G and the T + R rates. The correlation in the counts is entirely non-mysterious, due to simple EM amplitude correlations.
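
A small sketch of that last point (toy numbers of mine): each detector below triggers purely locally and independently, given its own intensity, yet the G and T counts come out clearly correlated simply because the two intensities share the same pulse envelope:

Code:
# Locally and independently triggered detectors, correlated only through the
# common classical intensity envelope.
import numpy as np

rng = np.random.default_rng(7)
n_windows = 200_000
pulse_on = rng.random(n_windows) < 0.05          # a pulse illuminates BOTH arms in 5% of windows
I = np.where(pulse_on, 10.0, 0.01)               # common intensity envelope (arbitrary units)
eta_g, eta_t = 0.03, 0.03                        # per-unit-intensity trigger probabilities (assumed)

g = rng.random(n_windows) < np.clip(eta_g * I, 0, 1)   # each detector triggers on its own,
t = rng.random(n_windows) < np.clip(eta_t * I, 0, 1)   # independently, given its local intensity

corr = np.corrcoef(g.astype(float), t.astype(float))[0, 1]
print("correlation of G and T counts ~", round(corr, 2))   # clearly positive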

{ PS: Grangier, who is your countryman and perhaps nearby, is famous for this experiment. Ask him if he still thinks it is genuinely nonclassical (g2<1 on raw counts, and with high enough visibility). Also, whether he has some real QED proof of the existence of any nontrivial (genuinely nonclassical) anticorrelation in this setup, since you don't seem to know how to do it (I don't know how either, but I know that much). }
 
Last edited:
  • #62
nightlight said:
Well, then you need some more reading and thinking to do. Let me recall the paper by Haus (who was one of the top Quantum Opticians, from the legendary RLE lab at MIT) which I mentioned earlier (http://prola.aps.org/abstract/PRA/v47/i6/p4585_1):

Yes, the subject of that paper is the application of decoherence in order to analyse in more depth what is usually a "shortcut" when we apply the projection postulate. I'm fully in agreement with that, I even try to "sell" the idea here. But it doesn't invalidate the RESULTS of the projection postulate (if applied late enough!), it justifies them.

If, for example, an operational implementation of an "observable" requires plain classical collection of data from multiple locations and rejections or subtractions based on the values obtained from remote locations, then one cannot base a claim of nonlocality on such a trivially non-local observable (since one can always add the same conventions and their communication channels to any classical model for the raw counts; or, more generally, defining extra data-filtering or post-processing conventions cannot turn raw counts that were previously compatible with a classical model into a result which excludes the 'classical' model).

It doesn't change the settings. You're right that in order to be completely correct, one should include ALL detectors in the whole calculation. But often this is doing a lot of work just to find out that the ESSENTIAL quantity you're after pops out in a factorized way and corresponds with what you are really doing, and that the experimental corrections you are so heavily protesting against are nothing else but applying these calculations in reverse.
Do it if you like. Calculate the 3-point correlation function, and then find that it factorizes into a 2-point correlation function and a Poisson stream. I won't stop you, it is the correct way of proceeding. And you will then probably also realize that what you will have achieved with a lot of sweat is equivalent to the simple experimental calculation that has been done and which you don't like :-)

Here, do it:

The QED/QO derivation in [5] makes it plain (assuming the understanding of the Gn of [4]) that not only are all the nonlocal vacuum-effect subtractions (the "signal" function filtering conventions of QO, built into the standard Gn() definition) included in the prediction of the cos^2(a) "correlation", but one also has to take upfront only the 2-point G2() (cf. eq. (4) in [5]) instead of the 4-point G4(), even though there are 4 detectors. That means an additional nonlocal filtering convention was added, which requires removal of the triple and quadruple detections (in addition to the accidentals and unpaired singles already built into the G2() they used). Some people, attributing wishful meanings to the abstract QM observables, take this convention (of using G2 instead of G4) to mean that parallel polarizers will give 100% correlated results. As the QED derivation [5] shows, they surely will correlate 100% -- provided you exclude by hand all those results where they don't agree.

Do the full calculation, and see that the quantity that is experimentally extracted is given by the 2-point correlation function to a high degree of accuracy.

In other words, the "ideal" detectors which would yield the "QM predicted" raw counts (the ones violating the classical inequalities) are necessarily nonlocal devices -- to make the trigger/no-trigger decision these "ideal" detectors need extra information about results from remote locations. Thus you can't have the imagined "ideal" detectors that make decisions locally, not even in principle (e.g. how would an "ideal" local detector know that its trigger would be the 3rd or 4th trigger, so it had better stay quiet, so that its "ideal" counts don't contain doubles and triples? Or that its silence would yield an unpaired single, so it had better go off and trigger this time?). Even worse, they may need info from other experiments (e.g. to measure the 'accidental' rates, where the main source is turned off or shifted out of the coincidence channel, and the accumulated data are subtracted from the total "correlations").

But if you really want to, you CAN do the entire calculation. Nothing stops you from calculating a 5-point correlation function, and then work back your way to the quantum expectation value of the quantity under study. Only, people have developed a certain intuition of when they don't need to do so ; when the result is given by the quantity they want to calculate (such as the 2-point function) and some experimental corrections, or trigger conditions. You don't seem to have that intuition, so it is probably a good exercise (because it seems to be so important to you) to go through the long calculation.

See, your objections sound a bit like the following analogy.
One could think of somebody objecting to the "shortcuts taken" when naive student calculations derive Kepler orbits for point particles in a 1/r^2 field and then associate this with the motion of planets in the solar system.
The objections could be: hey, PLANETS ARE NOT POINT PARTICLES. You could delve into continuum mechanics, plate tectonics, and fluid dynamics to show how complicated the material motion is, no way that this can be considered a point particle ! So it should be clear that those student calculations giving rise to Kepler orbits that "fit the data" are bogus: real Newtonian mechanics doesn't do so !

The "dynamical model of the detection process" you always cite is just the detection process in the case of one specific mode which corresponds to a 1-photon state in Fock space, and which hits ONE detector.

Well, that is where you're missing it. First, the interaction between EM field and cathode atoms is not an abstract QM measurement (and there is no QM projection of "photon") in this treatment. It is plain local QED interaction, so we can skip all the obfuscatory "measurement theory" language.

But there is NEVER an abstract QM measurement. All is just entanglement.

Now, the detection process modeled by Glauber's "correlation" functions Gn() involves a very specific kind of dynamical occurrence defining what the counts are that the Gn() functions correlate. These counts at a given 4-volume V(x) are QED processes of local QED absorption of the whole EM mode by the (ideal Glauber) "detector" GD, which means the full mode energy must be transferred to the GD, leaving no energy in that EM mode (other than vacuum). Energy transfers in QED are always local, which implies, in the case of a single-mode field (such as our |Psi.1> = (|T> + |R>)/sqrt(2)), that the entire flux of the EM field has to traverse the GD cathode (note that the EM field operators in the Heisenberg picture evolve via the Maxwell equations, for free fields and for linear optical elements such as beam splitters, mirrors, polarizers...).

This is where you don't understand quantum theory. The "whole mode" can be the "left beam". The "left beam" then interacts LOCALLY with the detector. If you now say that the whole mode is in fact half the left beam plus half the right beam, you can do that, thanks to the superposition principle and the 1-1 relationship between EM modes and 1-photon states.

After all, the LR mode can be written, to a very good approximation, as half an L mode plus half an R mode. I know that this is not EXACTLY true, but it is, to a good enough approximation. So the 1-photon state corresponding to the LR mode is the superposition of the 1-photon states corresponding to the almost-L mode and the almost-R mode. They follow exactly the superpositions of the classical EM fields; it is just a change of basis.

One could make all these details explicit, but usually one has enough "physical intuition" to jump these obvious but tedious steps.

Therefore, your statement about the mode "which hits ONE detector" needs to account for the fact that for this "ONE" detector to generate a "Glauber count" of 1 (since that is what defines the Gn() used in this paper, e.g. AJP.8), it has to absorb the whole mode, the full EM energy of |Psi.1> = |T> + |R>.

That's where YOU are missing the whole point in quantum theory. You cannot talk about the localisation or not of the energy of a quantum state !
That "full mode" can be written in another basis, namely in the L and R basis, where it is the superposition of two states (or modes). And your detector is locally sensitive to ONE of them, so this basis is the "eigenbasis" corresponding to the measurement ; but if you don't like that language (nor do I!), you say that the detector ENTANGLES with each state L and R: for the L state he is in the "click" state, and for the R state he's in the "no click" state.

As you can easily verify from the picture of the AJP setup, DT is not a detector configured for such an operation of absorbing the full energy of the mode in question, the |Psi.1>. You can't start splitting universes and invoking your consciousness etc. now. This is just plain old EM field propagation.

Well, you can object to quantum theory if you want to. But you should first understand how it works! What you have been saying above indicates that you don't. If the detector was absorbing the "entire LR mode" (which could be in principle possible, why not ; it would be a strange setup but ok), then you would NOT measure exactly what we wanted to measure, because you would measure AN EIGENSTATE ! You would always get the same result, namely, well, that there was a photon in the LR mode. Big deal.

Explain how DT can absorb the whole mode |Psi.1>, its full energy, leaving just vacuum after the absorption. (And without that, the Glauber counts for AJP.8 will not be generated.)

Because we were not working in that basis ! So the system was FORCED to be in one of the eigenstates of the LOCAL detectors, namely L OR R. That was the whole point of the setup.

It can't just grab it from the region R by willpower, be it Everett's or von Neumann's or yours. The only way to absorb it, and thus generate the Glauber count of +1, is by regular EM field propagation (via the Maxwell equations) which brings the entire energy of the mode, from both its T and R regions, onto the cathode of DT. Which means DT has to spread out to cover both paths.

I guess that the Pope wrote that in his testament, so it must be true ?

I think that you've been missing the essential content of quantum theory, honestly. Whether or not you think it is true is another issue, but first you should understand the basic premises of quantum theory and how you arrive at predictions in it. You are making fundamental mistakes in its basic application, and then confuse the issues by going to complicated formalisms in quantum optics. Newtonian mechanics DOES predict Kepler orbits.

cheers,
Patrick.
 
  • #63
This is where you don't understand quantum theory. The "whole mode" can be the "left beam"...

You forgot the context. As a general isolated statement, any basis defines modes, so any state is a mode. But that is irrelevant for the usability of (AJP.8) for the given input state. For Glauber's detector to produce a count of +1 it needs to dynamically absorb this entire incident mode, its entire energy, and leave the vacuum.

The reason the numerator G2 of (AJP.8,9) goes to zero is precisely that the 2nd absorber has no mode left to absorb, because it was already absorbed by the 1st absorber (in the G2). Either of the two absorbers of (AJP.9) is therefore absorbing this particular mode, |Psi.1>, and each leaves vacuum when it completes its interaction with |Psi.1> = |T> + |R>. The actual detector DT (or DR) in their setup does not correspond to this type of absorption. The DT of their setup with this input state will never generate a Glauber count of +1 of the kind counted in (AJP.8,9) (DT in the experiment generates counts, of course, but these are not the "counts" being counted by the Glauber detector of AJP.9), since it will never leave the vacuum as the result of its interaction with the mode it is supposed to absorb, the |Psi.1> (since it is not placed to interact with its field in the region R).

The "left beam" then interacts LOCALLY with the detector. If you now say that the whole mode is in fact half the left beam plus half the right beam, you can do that, thanks to the superposition principle and the 1-1 relationship between EM modes and 1-photon states.

You're explaining why the actual DT detector in the setup will trigger. That has nothing to do with the Glauber count in the numerator of (AJP.8). Each "count" being correlated in (AJP.8) counts full absorptions of the mode |Psi.1>, not of just any general mode you can imagine. That's why the numerator of (AJP.8) turns zero -- the first annihilation operator a_t (cf. (AJP.9)) leaves vacuum in the mode it absorbs, and then the 2nd operator a_r yields 0 since it acts on that same mode |Psi.1>. But these are not the absorptions performed by the actual DT and DR of the setup, since each of those absorbs its own mode.

The (AJP.8) has the desired behavior only for the mixed state Rho = 1/2 (|T><T| + |R><R|), where in each try DT or DR will absorb a single whole mode (DT absorbs the whole mode |T>, and DR the mode |R>) and leave the EM state vacuum, so the other absorber acting on this vacuum will yield 0. But here the anticorrelation is also present in the field amplitudes, so that the classical g2 is 0 as well (the single "photon" goes one definite path in each try).

Your whole argument here is based on mixing up the general applicability of (AJP.8) to any conceivable input state (such as the separate modes T and R) with the specific conditions of this setup, where (AJP.8) is required to yield 0 in the numerator when used with the state |Psi.1> = |T> + |R> for the expectation value. In order to make the numerator of (AJP.8) vanish when applied to this input state, the |Psi.1>, which is what their result g2=0 requires, you need both of its annihilation operators to act on the same mode.

The only way the actual DT and DR counts (interpreted as Glauber full mode absorption counts used in AJP.8) of the setup will yield 0 in the numerator of (AJP.8,9), as the paper's g2=0 conclusion demands, is if the input state is Rho, the mixed state.

For the superposed state |Psi.1> = (|T> + |R>)/sqrt(2), with DT absorbing mode |T> and DR absorbing mode |R>, as you are interpreting it, the numerator of (AJP.8-9) yields 1/4, resulting in g2=1. But for the mixed input state Rho, using the same DT as absorber of |T> and DR as absorber of |R>, (AJP.8-9) yields 0 (since each term in Rho yields 0 under the trace against the product a_r*a_t in AJP.9).

Your defense reads as if you were trying to weasel out using trivial non sequitur generalities and slippery Copenhagen photon-speak. So let's get fully specific and concrete in the two questions below:

Question-a) Given the actual DT and DR of their setup, thus DT absorbing mode |T> (hence a_t |T> = |0>, a_t |R> = 0) and DR absorbing mode |R>, what input state (used for <...> in AJP.9) do you need in the numerator of (AJP.9) to obtain their g2=0? Is that input state the same as their "one photon" state |Psi.1> = (|T> + |R>)/sqrt(2)?

My answer: only the mixed state Rho =1/2 (|T><T| + |R><R|) gives g2=0 with their actual DT and DR absorbers in (AJP.9). Their actual input state |Psi.1> used with their DT and DR absorbers in (AJP.9) gives g2=1.

Question-b) To avoid confusion with (Question-a), take their (AJP.9) and replace the subscripts T and R with generic 1 and 2. Then, given their actual "one photon" state |Psi.1> and given their g2=0 conclusion, tell me what kind of absorbers (i.e. what basis defines the annihilation operators) a1 and a2 you need to obtain 0 in the numerator of (AJP.9) on the |Psi.1> (used for the averaging). What is the spatial extent of the Glauber detector realizing such an absorber? { As a check, consider placing just that first detector by itself at your chosen place and examine its effect on |Psi.1>, i.e. after the QED absorption of |Psi.1>, does |Psi.1> become |0> or some other state? E.g. the DT of their setup does not produce the required effect, the |0> as the final field state after acting on |Psi.1>. Also check that your design for the GD1 detector (implementing this absorption) does not violate the QED evolution equations for the field operators in the region R, which are the Maxwell equations.}

My answer: To get 0, a1 (or a2) must be an absorber of mode |Psi.1>, hence a2 a1|Psi.1> = a2 |0> = 0. The corresponding Glauber detector performing a1 absorption must extend spatially to receive the entire EM flux of |Psi.1>, which happens to cover T and R paths.

Note also the cheap trick they used to mask the triviality of the "anticorrelation" obtained in (b), through their (AJP.11): they transformed the mode absorbers from the post-splitter fields (n_t and n_r) to the pre-splitter field (n_i). This corresponds to placing a detector in front of the beam splitter, so that a single detector absorbs that single mode via n_i (and yields their g2=0). They don't want a student to wonder about the actual spatial layout of the absorbers of (AJP.9), which have to absorb this single mode after the beam splitter and give g2=0 in (AJP.9). That is also why I asked in (Q-b) that you put non-suggestive labels on the absorbers, in order to decide what they ought to be to yield 0 in the numerator of (AJP.9).

Their related interference experiment used the same trick (cf. their web page) -- they stick the interferometer in front of the beam splitter, not after it, so they don't test whether they have a genuine 50:50 superposition (in their case that is a minor loophole, given the main cheat with the gratuitous third coincidence unit).


It doesn't change the settings. You're right that in order to be completely correct, one should include, in the whole calculation, ALL detectors. But often, this is doing a lot of work just to find out that the ESSENTIAL quantity you're looking after...

The whole front section of your response has entirely lost the context of the discussion. You are arguing that the filtering rules of Glauber's conventions for the Gn() are useful. Indeed they are.

But that usefulness has nothing to do with the topic of non-classicality we were discussing.

The first point of my comments was that the non-local (formally and operationally) filtering defining the G2(), which in turn yields the QED version of Bell's QM prediction, turns the claim "the cos^2() value of G2 filtered out from the counts data implies nonlocality" into a trivial tautology. That cos^2(), interpreted as a genuine "correlation" (despite the fact that it isn't one, as the QED derivation makes transparent while the abstract QM "measurement" obscures it), is defined as a non-local "signal" function, filtered out from the actual counts via the non-local QO procedures (or extracted formally in [4] by dropping non-local coincidence terms). So, yes, it is trivial that a "correlation" computed from the measured counts using nonlocal operations (the QO subtractions) may not, in the general case, also be obtainable as a direct correlation of the purely local counts (and thus it will violate Bell's inequality). So what?

This is the same kind of apples-vs-oranges "discovery" as claiming that because you can get g2<1 on adjusted data, the effect must be nonlocal. The error is that you're comparing the classical g2c of (AJP.2), which is not defined as a filtered value, against the g2q of (AJP.8), which Glauber's conventions require to filter out the unpaired singles (a DG trigger with no DT or DR trigger, which decreases the numerator of AJP.14 and thus decreases g2q) and the accidental coincidences. Then you "discover" the quantum magic by labeling both g2c and g2q as g2 (as this paper does), and act surprised that the second one is smaller. Duh. There is as much "discovery" and "mystery" here as in getting excited after "discovering" that three centimeters are shorter than three inches.
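For readers keeping track of what is actually computed from the counts, here is the bookkeeping in code form. This is only a sketch: I am assuming the count-ratio form g2 = N_GTR*N_G / (N_GT*N_GR) for the quantity referred to above as (AJP.14); the function and variable names are my own and the window logic is purely illustrative.

[code]
import numpy as np

def g2_from_counts(g_click, t_click, r_click):
    """Count-ratio estimate of g2(0) from per-window click records.

    g_click, t_click, r_click: boolean arrays, one entry per coincidence
    window, True if that detector fired in the window.  Assumes the
    estimator g2 = N_GTR * N_G / (N_GT * N_GR); check the exact form and
    any accidental-subtraction step against the paper's own equations.
    """
    N_G = np.sum(g_click)
    N_GT = np.sum(g_click & t_click)
    N_GR = np.sum(g_click & r_click)
    N_GTR = np.sum(g_click & t_click & r_click)
    return (N_GTR * N_G) / (N_GT * N_GR)
[/code]

Whether that number should then be compared against an unfiltered classical bound, or against a classically filtered counterpart, is exactly the apples-vs-oranges question being argued here.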

The second point being made with Haus' quote and my comments was that the dynamical model of detection explains the "collapse" of the "photon" state, giving the exact operational interpretation of the abstract QM projection postulate (which merely assures the existence of some such operation without telling you what kind of operation it is, especially regarding the locality of the procedures that implement the abstract QM observable).

Therefore the dynamical approach is quite relevant in the discussion of whether some formal result implies non-locality -- to find that out you need a very precise operational mapping between the formal results and the experimental procedures and their data. The bare existence statement that the abstract QM projection postulate provides gives you no information to decide whether the cos^2(a) correlation it yields implies any non-locality or not. You need to find out precisely how such a cos^2(a) maps to the experiment and its data for a given type of system, such as photons. The elementary non-relativistic QM doesn't have an adequate model of the EM fields to flesh out the precise operational requirements for the realizations of its abstract observables of the EM fields. That's what is provided by the QED analysis and the dynamical model of photodetection and coincidence measurements, as demonstrated in [4].

And the conclusion from that, as shown in the previous note, is that QED doesn't predict either the Bell inequality violation or the g2>=1 violation for photons. In both cases, the operational mapping provided by the QED models (of [4]) shows that the locality violating results obtained in QM have a trivial source of non-locality already built into the procedures needed to actually implement the abstract QM observables in a manner compatible with QED.
 
  • #64
nightlight said:
Each "count" being correlated in AJP.8) counts of full absorbtions of mode |Psi.1>, not just any general mode you can imagine.

Yes, and that's the essential, basic part of quantum theory that you refuse to see, because you make an identification with the classical case, I presume.

I really don't need all that Glauber stuff, which only renders this basic reasoning opaque, so I'm not going to plunge into it for the moment. It is just a way of putting on a more formal basis the very simple reasoning I'm trying, without result, to explain to you here.

To every (complex) classical field configuration ("mode"), up to an arbitrary (complex) amplitude coefficient, corresponds a 1-photon state; but they are not the same physical situation, of course. Only, there is a bijective relationship between them, so we can LABEL each 1-photon state with its classical field configuration. "Modes" are a BASIS of the vector space of classical field configurations, and the equivalent 1-photon states are a basis of the Hilbert space of 1-photon states.

So GIVEN A 1-photon state, we can always expand it in any basis we like.

A physical detector, of small size, will split the hilbert space of 1-photon states in two orthogonal eigenspaces: one will contain all the modes that ARE COMPLETELY ABSORBED, the other will contain all modes that COMPLETELY MISS THE DETECTOR.
Let us call those two orthogonal eigenspaces E1 and E0 of the detector.
So each mode that belongs to E1 will give, with certainty if the detector is 100% opaque, a "detection" ; E0 will give, with certainty, NO "detection".

WHAT IS SEMICLASSICALLY DONE, and which corresponds to the full quantum calculation, is that we take a mode in E1 (one that will hit the detector fully).
We don't bother about E0, we know that it will miss the detector.

Let us now consider a second detector, somewhere else. It too has associated with it, two eigenspaces, let us call them F1 and F0, which are orthogonal and "span the entire hilbert space" of 1-photon states.

We now have 4 subspaces of the hilbert space: E0 and E1, F0 and F1.
Clearly, if the detectors are in different locations, E1 and F1 have no element in common: no beam which fully hits D1 also fully hits D2. However, E0 and F0 (which are much bigger) do overlap.
This means that from these 4 sets, we can construct 3 ORTHOGONAL sets:
E1, F1 and EF0.

Now let us consider our actual incoming beam and beamsplitter. This corresponds to a certain EM field configuration, psi1.
Because E1, F1 and EF0 are orthogonal and complete, it is possible to write psi1 as a sum of 3 terms, each in one of the above.

So in all generality:
|psi1> = a |e1> + b|f1> + c |ef0>

However, if the beams are well-aligned, they *always end up in one of the two detectors*, so the |ef0> part can be discarded. If the thing is balanced, moreover, we have:

|psi1> = 1/sqrt(2) (|R> + |T>) with R in E1, T in F1.

We have split up the hilbert space in 3 orthogonal components: E1, F1 and EF0, and our measurement apparatus (the 2 detectors) will react in the following way:

in E1: we have D1 clicks, D2 doesn't.
in F1: we have D1 doesn't click, D2 does.
in EF0: we have nor D1, nor D2 click.

Note that there is no state in which D1 and D2 can click, and that's because E1 and F1 are orthogonal (the detectors are in different locations).

If we now submit a state like |psi1> to this measurement, we apply the general procedure in quantum theory, which is the same, for quantum mechanics, for QED and every other application of quantum theory:

We write the state in the eigenbasis of the measurement (is done here), and we assign probabilities to each of the eigenstates, corresponding to each of the terms' coefficients, squared:

50% probability that D1 clicks, D2 doesn't ;
50% probability that D2 clicks, D1 doesn't.

Here we applied the projection postulate, which is justified, but if you want to do decoherence, you can also do so. The detector state space can be split into two orthogonal subspaces, D0 and D1, where D0 corresponds to essentially all physical states of the detector with "no click" and D1 to those with "a click".
We have the two detectors De and Df.
We take it that initially the detectors reside somehow in their D0 space.

You consider a time evolution operator U in which:
each mode in E1, |e1>, makes a De0 state evolve in a De1 state ;
each mode in E0 makes a De0 state evolve into another De0 state;
each mode in F1, |f1>, makes a Df0 state evolve in a Df1 state;
each mode in F0, makes a Df0 state evolve into another Df0 state.

The photons end up absorbed, so we return to the vacuum mode |0>.

So we start out with the state (EM state) x (D1 state) x (D2 state):

|psi1> |De0> |Df0> = 1/sqrt(2) {|R> + |T>}|De0> |Df0>

Applying the (linear) time evolution operator of the measurement interaction, we end up in:

1/sqrt(2) {|0>|De1>|Df0> + |0>|De0>|Df1>}

= |0> 1/sqrt(2) { |De1>|Df0> + |De0> |Df1> }

So, there is one branch in which D1 clicked and D2 didn't, and there is another branch in which D1 didn't click, but D2 did.

I didn't need any Glauber or other formalism, which does just the same, but in a more formal setting, in order to deal with more complicated cases. THIS is elementary quantum theory, but there's nothing "naive" or "toy" about it. It is fundamental.
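To make the time-evolution step above concrete, here is a minimal toy sketch of it in code (my own construction: a three-state field {|T>, |R>, |vac>}, two two-state detectors, and the measurement interaction modeled as a basis-permutation unitary; none of this is the real detector dynamics, it only mirrors the bookkeeping above):

[code]
import numpy as np
from itertools import product

# Basis labels: field in {T, R, vac}, each detector in {0 = no click, 1 = click}
field_states = ["T", "R", "vac"]
det_states = [0, 1]
basis = list(product(field_states, det_states, det_states))
index = {b: i for i, b in enumerate(basis)}
dim = len(basis)  # 12

def ket(label):
    v = np.zeros(dim)
    v[index[label]] = 1.0
    return v

# Measurement interaction as a permutation unitary (a stand-in for the real
# absorption dynamics): |T,0,0> <-> |vac,1,0> and |R,0,0> <-> |vac,0,1>,
# identity on everything else.
U = np.eye(dim)
for a, b in [(("T", 0, 0), ("vac", 1, 0)), (("R", 0, 0), ("vac", 0, 1))]:
    i, j = index[a], index[b]
    U[i, i] = U[j, j] = 0.0
    U[i, j] = U[j, i] = 1.0

psi_in = (ket(("T", 0, 0)) + ket(("R", 0, 0))) / np.sqrt(2)
psi_out = U @ psi_in

p_only_De = abs(psi_out[index[("vac", 1, 0)]]) ** 2   # 0.5
p_only_Df = abs(psi_out[index[("vac", 0, 1)]]) ** 2   # 0.5
p_both    = abs(psi_out[index[("vac", 1, 1)]]) ** 2   # 0.0

print(p_only_De, p_only_Df, p_both)
[/code]

The final state carries equal weight in the "De clicked, Df didn't" and "Df clicked, De didn't" branches, and zero amplitude for a joint click, which is the point of the derivation above.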

I should add a point, because it is the very essence of your misunderstanding of quantum theory (at least that's how I see it from your arguments). It is the following: I said above:

A physical detector, of small size, will split the hilbert space of 1-photon states in two orthogonal eigenspaces: one will contain all the modes that ARE COMPLETELY ABSORBED, the other will contain all modes that COMPLETELY MISS THE DETECTOR.
Let us call those two orthogonal eigenspaces E1 and E0 of the detector.
So each mode that belongs to E1 will give, with certainty if the detector is 100% opaque, a "detection" ; E0 will give, with certainty, NO "detection".

Now, it should be clear that, although E0 and E1 are orthogonal eigenspaces, they don't of course COVER all of Hilbert space (in the same way that the X axis and the Y axis don't cover the plane). So there are many modes for which we don't say how they "interact" with the detector, and this is the typical case you're complaining about. But don't forget that 1-photon states are NOT "EM fields". They are all the quantum states of the EM quantum field which are eigenstates of the (free) Hamiltonian with energy hbar omega above the ground state (if we limit ourselves to one frequency), and it just happens that there is a bijective relationship with the classical EM field configurations.
And here now comes in the very fundamental postulate of quantum theory: the superposition principle. Namely that if we KNOW WHAT HAPPENS FOR A CERTAIN SET OF BASIS STATES, then we know what happens for any superposition. So we happen to know what happens for those specific field configurations (basis states) which FULLY hit the detector, or which completely MISS the detector, and THAT FIXES ENTIRELY the behaviour for ALL 1-photon states because these "fully hit" and "completely miss" configurations SPAN the whole state space.
This is fundamental, and related to the superposition principle in quantum theory, and it is entirely different in classical field theory, where "what happens to the lump of field at A" normally has nothing to do with "what happens to the lump of field at B" -- which is what you are always referring to. The classical fields are just handy bookkeeping devices for the state space of 1-photon states, as can be seen in the following way:
for a 1-photon state, which is an eigenstate of the EM free field Hamiltonian, we know that the AVAILABLE ENERGY in the entire quantum field equals hbar omega. This is true for ALL 1-photon states, in its entire Hilbert space (limiting ourselves, again, to the monochromatic case). So no matter what happens, if we go from any 1-photon state to the vacuum state, the EM quantum field loses exactly hbar omega in energy. And that is the reason why it can only interact with ONE SINGLE detector ; why geometrically different detectors correspond to orthogonal "fully absorb" eigenspaces, and, in fine, why there is this correspondence between classical EM field configurations and 1-photon states.

Now, something else: if there is a 1-1 relationship between the 1-photon state space and classical EM field configurations, does that mean that the best quantum description of a classical field is its corresponding 1-photon state ? The answer is no. The best quantum description is a coherent state which is a sum over n-photon states.
This indicates already that in the case of 1-photon states, you do not have a classical description, and hence your "grabbing the (classical) field energy at the other side" doesn't make much sense in 1-photon settings.

It can be shown that starting from a classical beam (a coherent state), and using a process which generates a transition from 1-photon to 2-photon states (such as a PDC), detecting a hit of a 2-photon state on 1 detector gives you an experimental situation, in this short time interval, which is very close to a 1-photon state. There is still some contamination of coherent states and so on, but not much. And THIS is what is reproduced with the G detector. Now you can rederive all this stuff again, or you can assume this known, use your G-detector to select the time intervals, and then consider that on the other side you have an almost pure 1-photon state. THIS is what is done in the AJP article.
And once the almost pure 1-photon state is in the RT arm of the experiment, you don't need to go through all the Glauber stuff, which is fairly complicated: my analysis above is completely correct, and can be traced back to all that creator / annihilator stuff.

cheers,
Patrick.
 
  • #65
I didn't need any Glauber or other formalism, which does just the same, but in a more formal setting, in order to deal with more complicated cases. THIS is elementary quantum theory, but there's nothing "naive" or "toy" about it. It is fundamental.

I see, you can't answer the two direct questions on how to get zero in the numerator of (AJP.9) along with a coherent operational mapping to the conditions of the experiment (including the given input field and the detector placements).

The nonrelativistic QM handwaving just won't do, no matter how much Copenhagen fog you blow around it, since nonrelativistic QM doesn't model the constraints of the EM field dynamics (such as the field operator evolution equations, commutation at spacelike separations, EM field absorption, etc.).

If you have a bit of trouble interpreting (AJP.9), ask your friendly quantum optician at work and see if, with his help, the two of you can answer questions Q-a and Q-b, and thus show me how QED (and not elementary nonrelativistic QM) predicts such a genuine anticorrelation effect by producing zero in the numerator of (AJP.9). Or ask Grangier if he can do it; he is famous for this experiment, so he must have believed back in 1986 that the (genuine) anticorrelation effect existed, and thus he must have a valid QED proof of its existence in this experiment.

... my analysis above is completely correct, and can be traced back to all that creator / annihilator stuff.

Your analysis again deals with the question of how DT or DR triggers, which is not at issue. They do trigger, we agree. The question is what QED predicts for their correlations (its formal answer is contained in AJP.9). I said that QED predicts g2>=1 for their detectors DT and DR and the input state |Psi.1> (see [post=536215]Question-a[/post]).

That's why, to avoid such non sequiturs, I had put the main issue into a very specific form, [post=536215]the two direct and concrete questions[/post] on how to get zero in the numerator of (AJP.9) and provide a coherent operational mapping (consistent with Glauber's definitions and conventions [4], used in AJP.9) of the formal result to the conditions of the experiment.

I gave you the two answers there, both showing that G2=0 is trivially classical in both cases, so no genuine anticorrelation exists as a QED effect. Give it some thought and find your own third way, if you can, to get zero there, thus G2=0, and let's see if it holds.
 
  • #66
nightlight said:
The nonrelativistic QM handwaving just won't do, no matter how much Copenhagen fog you blow around it, since nonrelativistic QM doesn't model the constraints of the EM field dynamics (such as the field operator evolution equations, commutation at spacelike separations, EM field absorption, etc.).

What I've written down is NOT non-relativistic QM. It is quantum theory, for short. QED is a specific application of quantum theory, and the Fock space (and its subspace of 1-photon states) is entirely part of it.
Non-relativistic quantum theory is the application of the same formalism to the configuration space of an n-particle system, with a time evolution operator U whose derivative is the non-relativistic Hamiltonian.
QED is the application of quantum theory to the configuration space of the EM field (and the Dirac field of the electron, but we only use the EM part here), with a time evolution operator which describes all EM interactions.
But, understand this very well, QED is based on exactly the same quantum theory which contains:
- a hilbert state space
- a linear time evolution operator based upon the hamiltonian of the system
- the superposition principle
- the Born postulate linking hilbert states to probabilities of observation, as related to the eigenspaces of the operator corresponding to the measurement.

The above stuff is part of EVERY quantum theory, be it non-relativistic quantum mechanics, quantum field theories such as QED, string theory, or anything else: whatever is a quantum theory takes on the above form.

The EM field dynamics you are always talking about (all the things you mention above) comes in through the specific structure of the Hamiltonian, but that doesn't change one iota of the superposition principle, or of the linearity of the time evolution operator, and these were the only aspects I needed.

I specifically don't want to plunge into the Glauber formalism because it is not needed. Forget about that g2. I have simply shown you where the anticorrelation of detection in 1-photon states comes from, and it is NOT a shortcut, it is not handwaving, and it is not non-relativistic quantum mechanics. It is full QED, with some abstract notations of the state vectors. The very fact that you don't understand this means that you have a fundamental misunderstanding of quantum theory in general, and de facto about QED in particular.
As I said, there's a difference between contesting the validity of a certain theory's predictions and misunderstanding the predictions of the theory. You're in the second case.

cheers,
Patrick.
 
  • #67
I specifically don't want to plunge into the Glauber formalism because it is not needed. Forget about that g2.

Oh, well, OK. Thanks, though, for the spirited battle, which made me dig some more into Glauber's lectures [4] (which are the holy of holies of Quantum Optics) to clarify a few of my own misconceptions.
 
  • #68
nightlight said:
Oh, well, OK. Thanks, though, for the spirited battle, which made me dig some more into Glauber's lectures [4] (which are the holy of holies of Quantum Optics) to clarify a few of my own misconceptions.

Well, I can tell you the following: one of the reasons why I don't want to delve into it is that I don't have a very profound understanding of it myself. I more or less see how it is handled, but I think it would be too dangerous for me to go and talk much about it.
However, what I tried to point out (and it is no "pedagogical weaseling out") is that for THIS PARTICULAR CASE (the anti-correlation of detections predicted by QED), you really don't need that machinery. I'm really convinced that what I've written down is correct, as a QED prediction, because I really need only very few elements of quantum theory to make the point, and I'm really amazed to see you contest it each time.

That said, I'll try to plunge deeper into the Glauber formalism to try to address the points you raise.

cheers,
Patrick.
 
  • #69
nightlight said:
Question-a) Given the actual DT and DR of their setup, thus DT absorbing mode |T> (hence a_t |T> = |0>, a_t |R> = 0) and DR absorbing mode |R>, what input state (used for <...> in AJP.9) do you need in the numerator of (AJP.9) to obtain their g2=0? Is that input state the same as their "one photon" state |Psi.1> = (|T> + |R>)/sqrt(2)?

This is a particularly easy question to answer.

The numerator is < a_t^dagger a_r^dagger a_r a_t >

and the state is 1/sqrt(2) (|t> + |r>)

This is nothing else but the squared norm of the following vector:

a_r a_t 1/sqrt(2) (|t> + |r>)

apply distributivity (linearity of the a operators):

1/sqrt(2) ( a_r a_t |t> + a_r a_t |r> )

Now take the first term: a_t |t> = |0> and a_r |0> = 0

the second term: note that a_t |r> = 0, so a_r a_t |r> = 0

So it seems that we have the null vector. Its norm is 0.

QED :smile:
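The same algebra can be cross-checked numerically in a small two-mode Fock space (a sketch; the truncation to at most one photon per mode and the matrix construction are illustrative choices of mine, not anything from the paper):

[code]
import numpy as np

# Two-mode Fock space, each mode truncated to occupation {0, 1}.
# Single-mode annihilation operator on that truncated space: a|1> = |0>, a|0> = 0.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
I2 = np.eye(2)

a_t = np.kron(a, I2)   # acts on the T mode
a_r = np.kron(I2, a)   # acts on the R mode

def ket(n_t, n_r):
    v_t = np.zeros(2); v_t[n_t] = 1.0
    v_r = np.zeros(2); v_r[n_r] = 1.0
    return np.kron(v_t, v_r)

# The one-photon input state (|T> + |R>)/sqrt(2), i.e. (|1,0> + |0,1>)/sqrt(2)
psi = (ket(1, 0) + ket(0, 1)) / np.sqrt(2)

# Numerator of the normalized correlation: <a_t^dag a_r^dag a_r a_t>
num = psi @ (a_t.conj().T @ a_r.conj().T @ a_r @ a_t) @ psi
print(num)  # 0.0 -- the two annihilators null the one-photon state
[/code]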

cheers,
Patrick.
 
  • #70
nightlight said:
Question-b) To avoid confusion with (Question-a), take their (AJP.9) and replace the subscripts T and R with generic 1 and 2. Then, given their actual "one photon" state |Psi.1> and given their g2=0 conclusion, tell me what kind of absorbers (i.e. what basis defines the annihilation operators) a1 and a2 you need to obtain 0 in the numerator of (AJP.9) on the |Psi.1> (used for the averaging). What is the spatial extent of the Glauber detector realizing such an absorber? { As a check, consider placing just that first detector by itself at your chosen place and examine its effect on |Psi.1>, i.e. after the QED absorption of |Psi.1>, does |Psi.1> become |0> or some other state? E.g. the DT of their setup does not produce the required effect, the |0> as the final field state after acting on |Psi.1>. Also check that your design for the GD1 detector (implementing this absorption) does not violate the QED evolution equations for the field operators in the region R, which are the Maxwell equations.}

My answer: To get 0, a1 (or a2) must be an absorber of mode |Psi.1>, hence a2 a1|Psi.1> = a2 |0> = 0. The corresponding Glauber detector performing a1 absorption must extend spatially to receive the entire EM flux of |Psi.1>, which happens to cover T and R paths.

This is also simple to answer. ANY single-photon mode, with any detector setup, will give 0.

The reason is quite simple: you have TWO ANNIHILATION OPERATORS in (9). A single-photon state (no matter which one) is always annihilated by two annihilation operators.

The reason is this: imagine a basis of EM modes, labeled q1...
Now, a_q5 which annihilates the 5th mode has the following behaviour:

a_q5 |mode 5> = |0>
a_q5 |mode not 5> = 0

Now, write a general 1-photon state in the basis of the |q> modes, and let's say that our two annihilators are a_7 a_5. Now, either psi contains a term with mode 5, say c |mode5>. Then a_7 a_5 will lead to a_7 c |0> (all the other terms are already put to 0 by a_5).
Next a_7, acting on |0> gives you 0.
Or psi does not contain a term with mode 5. Then a_5 |psi> will already be 0.

So this comes down to saying that for a 1-photon state, you can't have 2 detectors detect it together. Or, as Grangier put it: you can detect one photon only once.

So, for a 1-photon state, no matter which one, g2(0) is always 0.
Simply because you have 2 annihilators.

BTW, for fun, compare this to my orthogonal E1 and F1 spaces :-)

cheers,
Patrick.

EDIT: it seems that your mistake is somehow that you think that an annihilation operator, working on a state, only produces the vacuum state if the state is its own mode; and that if it works on another mode, it is "transparent". Although the first part is correct, the second isn't: an annihilator of a certain mode, working on a state of another mode, just gives you the null vector.

a_q |q> = |0>, but a_q |r> = 0 and not, as you seem to think, a_q |r> = |r>.
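Both points -- that two annihilators null any one-photon state, and that a_q acting on a one-photon state of a different mode gives the null vector rather than leaving it untouched -- can be checked numerically in a small truncated Fock space (a sketch; the mode count and cutoff are arbitrary illustrative choices):

[code]
import numpy as np
from functools import reduce

n_modes, cutoff = 4, 2          # 4 modes, occupations {0, 1}
a1 = np.array([[0.0, 1.0],
               [0.0, 0.0]])     # single-mode annihilator on the truncated space
I2 = np.eye(cutoff)

def annihilator(mode):
    """a_mode acting on the full n_modes-fold tensor product space."""
    ops = [a1 if m == mode else I2 for m in range(n_modes)]
    return reduce(np.kron, ops)

def one_photon_ket(mode):
    """|0...1...0> with the single photon in the given mode."""
    vecs = []
    for m in range(n_modes):
        v = np.zeros(cutoff)
        v[1 if m == mode else 0] = 1.0
        vecs.append(v)
    return reduce(np.kron, vecs)

# The EDIT's point: a_q on a one-photon state of a *different* mode gives the
# null vector (all zeros), not that state back.
print(np.allclose(annihilator(0) @ one_photon_ket(1), 0))    # True

# The general claim: any one-photon state is nulled by a product of two
# annihilators of distinct modes, so the g2 numerator vanishes.
rng = np.random.default_rng(1)
c = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
c /= np.linalg.norm(c)
psi = sum(ci * one_photon_ket(m) for m, ci in enumerate(c))
print(np.allclose(annihilator(2) @ annihilator(3) @ psi, 0))  # True
[/code]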
 