Photon Wave Collapse Experiment (Yeah sure; AJP Sep 2004, Thorn)

Summary
The discussion centers on a 2004 paper by Thorn et al. claiming to demonstrate the indivisibility of photons through a beam splitter experiment, asserting a significant violation of classicality. The authors report a g2 value of 0.0177, suggesting a collapse of the wave function upon measurement, but critics argue that the experimental setup contains flaws, particularly in how coincidences were measured. Key issues include the use of different sampling windows for the GTR unit, which could skew results to falsely support the quantum prediction of g2=0. Additionally, discrepancies in reported delay times raise concerns about the validity of the findings. Overall, the experiment is viewed skeptically, with calls for a formal rebuttal to clarify these issues.
  • #31
nightlight said:
Of course I did address it, already twice. I explained why there are limits to such differentiation and consequently why the dark counts and missed detections (combined in a tradeoff of the experimenter's choice) are unavoidable. That's as far as the mildly on-topic aspect of that line of questions goes. { And, no, I am not going to rush out to find a lab to look at the "power spectra." Can't you get off it? At least in this thread (just start another one on "power spectra" if you just have to do it).}

And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction, the data look very agreeable to QM predictions, whereas before, you claim that all the garbage looks like the "classical" description. Did you or did you not make such a claim?

If you can question the validity of making such cuts, then why isn't it valid for me to question whether you actually know the physics of photodetectors and WHY such cuts are valid in the first place? So why would this be off-topic in this thread?

Zz.
 
  • #32
nightlight said:
The photoeffect was modeled by a patent office clerk, who was a reject from academia.

I guess the conspirators silenced him too. Ditto Bell, whose seminal paper received scant attention for years after release.

Anyone who can come up with a better idea will be met with open arms. But one should expect to do their homework on the idea first. Half a good idea is still half an idea. I don't see the good idea here.

The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact. How can you really expect to convince folks that the experiment is generating a false positive? The results should not even be close if the idea is wrong. And yet a peer-reviewed paper has appeared in a reputable journal.

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criteria are applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong). That is why you have half a good idea, and not a good idea.
 
  • #33
ZapperZ said:
And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction, the data look very agreeable to QM predictions, whereas before, you claim that all the garbage looks like the "classical" description. Did you or did you not make such a claim? Zz.

You have tangled yourself up in the talmudic logic trap of Quantum Optics and QM magic non-classicality experimental claims. It is a pure word game, comparing three inches to three centimeters and discovering the two are different, as explained in several recent messages to DrChinese.

You can see the same gimmick in the verbal setup of nearly every such claim -- they define "classical correlations" conventionally, which means the prediction is made assuming no subtractions. Then they do the experiment, do the standard subtractions, and show that the results match the predictions of Glauber's "correlations" Gn(..). But the Gn()'s are defined differently from the ones they used for the "classical" prediction -- the Gn()'s include the subtractions in their definition, so of course they'll match the subtracted data, while the "classical" prediction, which, again by the common convention, doesn't include the subtractions, won't match it. Big surprise.

This AJP paper and experiment followed precisely this same common recipe for the good old Quantum Optics magic show. Their eqs. (1)-(3) are "classical", and by definition no subtractions are included in the model here. Then they go to the "quantum" expression for g2, eq. (8), and label it the same as the classical one, but it is an entirely different thing -- it is Glauber's normal-ordered expectation value, and that is the prediction for the correlation plus the vacuum-effects subtractions (which operationally are the usual QO subtractions). The authors, following the established QO magic show playbook, use the same label g2(0) for two different things, one which models the data post-processing recipe, the other which doesn't. The two g2's of (1) vs (8) are entirely different quantities.
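
To make the bookkeeping concrete, here is a minimal numerical sketch (Python; every count rate is illustrative, since the authors released no raw counts, and I am assuming eq (14) has its usual form g2 = N_GTR*N_G/(N_GT*N_GR)). The same raw counts give two entirely different g2's depending solely on which convention you plug them in under:

[code]
# Illustrative rates only -- not the paper's (unreleased) data.
N_G   = 100_000   # gate singles, c/s (figure discussed later in this thread)
N_TR  = 8_000     # paired singles N(T) + N(R), c/s (ditto)
N_GT  = 3_000     # G-T coincidences, c/s (made up)
N_GR  = 3_000     # G-R coincidences, c/s (made up)
N_GTR = N_GT * N_GR / N_G   # triples at the classical/accidental floor

def g2(n_triple, n_singles, n_gt, n_gr):
    return n_triple * n_singles / (n_gt * n_gr)

# Convention of eqs (1)-(3): raw counts, no subtractions.
print(g2(N_GTR, N_G, N_GT, N_GR))    # 1.0 -- the classical bound holds

# Convention behind eq (8): unpaired gate singles discarded, i.e. N_G
# replaced by N(T)+N(R). Same data, g2 drops by N_G/N_TR = 12.5.
print(g2(N_GTR, N_TR, N_GT, N_GR))   # 0.08 -- "violation" by bookkeeping
[/code]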

But these authors then went one step beyond the tried and true QO show playbook: by adding in the redundant third unit and then misconfiguring the delays, they achieve the Nobel-prize result (if it were real) -- they get nearly maximal violation before they have even subtracted the accidentals or the unpaired singles (on the DG detector, when neither DT nor DR triggers). It's a completely made-up effect, one that doesn't exist in QED theory ([post=529314]as already explained[/post]) or in any experiment (other than the [post=529069]rigged "data"[/post] they put out).

I also already explained what g2(0)=0 on a single-photon state means operationally. That prediction of full anticorrelation has nothing to do with the AJP paper's setup and the computations via eq (14). The g2=0 applies to an operationally entirely different procedure (see my [post=529314]earlier reply[/post] for details).

If you can question the validity of making such cuts,

I am not questioning some generic "validity" (say for some other purpose) of the conventional QO subtractions.

I am questioning their comparing of apples and oranges -- defining the "classical" g2() in eq (AJP.1) so that it doesn't include subtractions, and then labeling the g2() in eq (AJP.8) the same way, implicitly suggesting they share the same convention for subtractions. Eq (8) includes the subtractions by definition; eq (1) doesn't. The inequality (3), modelling by convention the non-subtracted data, is not violated by any non-subtracted data. If you want to model the subtracted data via a modified "classical" g2, you can easily do that, but then the inequality (3) is no longer g2>=1 but g2>=eta (the overall setup efficiency), which is much smaller than one (see the Chiao & Kwiat preprint, page 10, eq (11) corresponding to AJP.14, and page 11 giving g2>=eta, corresponding to AJP.3). Chiao and Kwiat stayed within the limits of the regular QO magic show rules, thus their "violations" are of the term-of-art kind, a matter of defining the custom term "violation" in this context, without really claiming any genuine nonlocal collapse, as they acknowledge (page 11):
And in fact, such a local realistic model can account for the results of this experiment with no need to invoke a nonlocal collapse.
Then, after that recognition of the inadequacy of PDC + beam splitter as a proof of collapse, following the tried and true QO magic show recipe, they do invoke Bell inequality violations, taking the violations for granted, as having been shown. But that's another story, not their experiment, and it is also well known that no Bell test data has actually violated the inequality; only the imagined "data" (reconstructed under the "fair sampling" assumption, which no one wishes to put to a test) did violate the inequalities. As with the collapse experiments, the semiclassical model violates Bell inequalities, too, once you allow it the same subtractions that are assumed in the QO prediction (which includes the non-local vacuum-effects subtractions in its definition).
 
  • #34
The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact.

The AJP experiment shows that if you misconfigure the triple-coincidence unit timings so that it registers nothing but accidentals, the g2 via eq (14) will come out nearly the same as what accidentals alone would produce (their "best" g2 is within half a standard deviation of the g2 for accidentals alone).
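
For a sense of scale, here is a rough back-of-envelope sketch (Python; my own estimate, with every rate illustrative since no actual counts were released) of the g2 that accidentals alone produce in eq (14), assuming uncorrelated triples:

[code]
# Given a GT pair, an unrelated R count lands in a window of width dt with
# probability N_R*dt, so N_GTR_acc ~ N_GT*N_R*dt and the accidental floor is
#   g2_acc ~ dt * N_R * N_G / N_GR  -- tiny, with no quantum input at all.
dt   = 2.5e-9    # coincidence window, s (the +/-1.25 ns figure quoted below)
N_G  = 100_000   # gate singles, c/s
N_R  = 4_000     # reflected singles, c/s (illustrative half of 8000)
N_GT = 3_000     # G-T coincidences, c/s (made up)
N_GR = 3_000     # G-R coincidences, c/s (made up)

g2_acc = (N_GT * N_R * dt) * N_G / (N_GT * N_GR)
print(g2_acc)    # ~3e-4 here: a near-perfect "violation" from timing alone
[/code]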

How can you really expect to convince folks that the experiment is generating a false positive?

Just read the data sheet and note their 6 ns delay, brought up in 10 places in the text plus once in the figure. Show me how it works with the delays given. Or explain why they would put the wrong delay in the paper in so many places while somehow having used the correct delay in the experiment. See the Chiao-Kwiat paper mentioned in my previous message for what this experiment can prove.
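
If you want the arithmetic spelled out, here is a minimal timing sketch (Python; the 10 ns gate-to-start requirement is from the model 567 data sheet, quoted in full later in this thread):

[code]
GATE_SETUP_NS = 10.0   # data sheet: START GATE must lead START by >= 10 ns
DELAY_NS      = 6.0    # the DG-to-DT delay line the paper quotes everywhere

def start_accepted(gate_t_ns, start_t_ns, setup_ns=GATE_SETUP_NS):
    """TAC accepts START only if the gate has been up long enough."""
    return (start_t_ns - gate_t_ns) >= setup_ns

t0 = 0.0                                   # DG fires, opening the gate
print(start_accepted(t0, t0 + DELAY_NS))   # False: the true GTR pair is missed
[/code]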

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criteria are applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

You're arguing as if we're discussing literary criticism. No substance, no pertinent point, just meta-discussion and psychologizing, euphemizing, ad hominem "conspiracy" labeling, etc.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong).

Did you read this thread? I have [post=529314]explained what g2=0 [/post]means operationally and why it is irrelevant for their setup and their use of it via eq (AJP.14). The classical prediction is perfectly fine: the g2 of unsubtracted data used via (AJP.14) will be >=1. See also the [post=530058]Chiao-Kwiat acknowledgment of that plain fact I already quoted[/post].
 
  • #35
vanesch said:
It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

It wouldn't be unusual for pedagogical materials to cheat, for what is in their minds a justifiable cause. After all, how many phony derivations are done in regular textbooks? It doesn't mean the authors don't know better, but they have to get it simple enough, or at least mnemonic enough, for students to absorb. If they believe the final result is right, they don't hesitate to take cheap shortcuts to get there.

The same probably holds for the experimenters. Except that here, there is no such effect to be observed, [post=529314]in theory[/post] or in [post=530058]practice[/post] (see the section on the Chiao-Kwiat paper).

Another bit that raises eyebrows -- the chief author would not provide even a single actual count figure from the experiment. He just stuck with the final results for g2, Table I; neither the paper nor the email requests yielded a single actual count used to compute them. All the answers I got were in the style I quoted in the initial post -- 'we did it right, don't ask, period.' That doesn't sound like a scientific experiment at all.

There is no way their formula (14) -- in which there are neither accidental subtractions nor subtractions of unpaired singles (since they use N_G for the singles rate; the Chiao-Kwiat experiment subtracts unpaired singles, which would correspond to putting N_T+N_R in the numerator of (AJP.14)) -- could yield g2<1.

The only hypothesis that fits this kind of 'too good to be true' perfection (contrary to other similar experiments and the theory) and their absolutely unshakable determination not to expose any count figures used to compute g2 is what I stated at the top of this thread -- they used the wrong delay of 6ns in the experiment, not just in its AJP description.

Their article-length excuse for the timing inconsistency is completely lame -- that explanation would work only if they hadn't given the 6ns figure at all. Then one could indeed say -- well, it's one of those experimental details we didn't provide, along with many others. But they did put the 6ns figure in the article, not once but in 10 places plus in figure 5. Why over-emphasize a completely bogus figure so much, especially if they knew it was bogus? So, clearly, the repetition of 6ns was meant as an instruction to other student labs, a sure-fire magic ingredient which makes it always "work" to absolute "perfection", no matter what the accidentals and the efficiencies are.

-- comment added:

Note that in our earlier discussion, you too were claiming that this kind of remote nondynamical collapse does occur. I gather from some of your messages in other threads that you have now accepted my locality argument that the local QED field dynamics cannot yield such a prediction without cornering itself into self-contradiction (since, among other things, that would imply that the actual observable phenomena depend on the position of the von Neumann boundary). If you follow that new understanding one more step, you will realize that this implies that von Neumann's general projection postulate for composite systems, such as that used for deducing Bell's QM prediction (the noninteracting projection of a spacelike-remote subsystem), fails for the same reason -- it contradicts von Neumann's very own deduction of the independence of phenomena from the position of his boundary. In other words, von Neumann's general projection postulate is invalid since it contradicts the QED dynamics. The only projection which remains valid is one which has a consistent QED dynamical counterpart, such as photon absorption or fermion localization (e.g. a free electron captured into a bound state by an ion).
 
  • #36
nightlight said:
Note that in our earlier discussion, you too were claiming that this kind of remote nondynamical collapse does occur.

I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool. All my arguments here, about delayed choice quantum erasers, EPR etc... were oriented to show that there is no remote collapse necessary, although you can use it.

In fact, "remote collapse" poses problems from the moment you assign any ontology to the wave function. These problems are not insurmountable (as is shown by Bohmian mechanics) but I prefer not to consider such solutions for the moment, based upon a kind of esthetics which says that if you stick at all cost to certain symmetries in the wave function formalism, you should not spit on them when considering another rule (such as the guiding equation).
The "remote collapse" poses no problem if you don't assign any ontology to the wave function, and just see it as a calculational tool.

So if I ever talked about "remote collapse" it was because of 2 possible reasons: I was talking about a calculation OR I was drunk.

cheers,
Patrick.
 
  • #37
vanesch said:
I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool.

Well, the collapse in the sense of the beam splitter anticorrelations discussed in this thread. In our detector discussion you definitely took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR. You argued that there was a drop in the probability of a DT trigger in this case and that this drop is genuine (in the sense of the anticorrelation not being an artifact of subtractions of accidentals and unpaired singles). As I understand it, you no longer believe in that kind of non-interacting spacelike anticorrelation (the reduction of the remote subsystem's state). You certainly did argue consistently that it was a genuine anticorrelation and not an artifact of subtractions.
 
  • #38
nightlight said:
In our detector discussion you definitely took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR.

Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

I just happen to be an observer in one of the two branches. But the other one can happily exist. Collapse means that suddenly, that other branch "disappears". It doesn't have to. I will simply not observe it, because I don't happen to be in that branch.

Observationally this is of course indistinguishable from the claim that the branch "I am not in" somehow "doesn't exist anymore". If you do that, you have a collapse. But I find it more pleasing to say that that branch still exists, but is not open to my observation, because I happen to have made a bifurcation in another branch.
It all depends what you want to stick to: do you stick to the postulate that things can only exist when I can observe them, or do you stick to esthetics in the mathematical formalism ? I prefer the latter. You don't have to.

cheers,
Patrick.
 
  • #39
nightlight said:
http://www.ortec-online.com/electronics/tac/567.htm . The data sheet for the model 567 lists the required delay of 10ns for the START (which was here the DT signal, see AJP Fig. 5) from the START GATE signal (which was here the DG signal) in order for the START to get accepted. But AJP Fig. 5 and several places in the text give their delay line between DG and DT as 6 ns. That means when DG triggers at t0, 6ns later (+/- 1.25ns) DT will trigger (if at all), but the TAC will ignore it since it won't be ready yet, and won't be for another 4ns. Then, at t0+10ns the TAC is finally enabled, but without a START no event will be registered. The "GTR" coincidence rate will be close to the accidental background (slightly above, since if the real T doesn't trip DT and a subsequent background DT hit comes at t0+10, then the DG trigger, which is now more likely than background, at t0+12ns will allow the registration).

I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet there is a requirement of 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ? I wouldn't think so ! Doing quite some electronics development myself, I know that when I specify some limits on utilisation, I'm usually sure of a much better performance ! So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
So I'm pretty sure that the 6ns is sufficient to actually make it trigger.
A point is of course that in this particular example, the remedy is extremely simple: use longer delay lines !
I've already been in such a situation, you know: you do an experiment, everything works OK, you write up your stuff and you submit it. Then there is a guy who points out that you didn't respect a certain specification. So you get pale, and you wonder if you have to retract your submission. You think about it, and then you say: ah, but simply using longer delay lines will do the trick ! So you rush to the lab, you correct for the error, and you find more or less equivalent results ; it didn't matter in the end. What do you do ? Do you write to the editor saying that you made a mistake in the paper, but that it actually doesn't matter, if only you're allowed to change a few things ??
OF COURSE NOT. You only do that if the problem DOES matter.
So that's in my opinion what happened, and that's why the author told you that the 6ns delays are not correct, but that everything is all right.
There's also another very simple test to check the coincidence counting: connect the three inputs G, T and R to one single pulse generator. In that case, you should find identical counts in GT, GR and GTR.

After all, they 1) WORK (even if they don't strictly respect the specs of the counter), 2) are in principle correct if we use fast enough electronics, and 3) are a DETAIL.
Honestly, if it DIDN'T WORK, then the author would have ALL POSSIBLE REASONS to publish it:
1) he would have discovered a major thing, a discrepancy with quantum optics
2) people will soon find out ; so he would stand out as the FOOL THAT MISSED AN IMPORTANT DISCOVERY BECAUSE HE DIDN'T PAY ATTENTION TO HIS APPARATUS.

After all, he himself suggests making true counting electronics. Hey, if somebody pays me enough for it, I'll design it myself ! I'd think that for $20000,- I could make you a circuit without any problem. That's what I do part of my job time.

I'll see if we have such an ORTEC 567 and try to gate it at 6ns.

Now, if you are convinced that it IS a major issue, just try to find a lab that DID build the setup, and promise them a BIG SURPRISE when they change the cables into longer ones, on condition that you will share the Nobel prize with them.

cheers,
Patrick
 
  • #40
vanesch said:
So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.

This is not babbling in the air! For instance, together with a colleague, I developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

cheers,
Patrick.
 
  • #41
This is not babbling in the air! For instance, together with a colleague, I developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

Your example shows an additional reason why that is not plausible in this case. In your case there were no ten other vendors with competing designs and specs fighting for the same customer. You had a monopoly and what you say goes; thus you're better off being conservative in stating limitations. In a competitive situation, manufacturers will push the specs as far as they can get away with (i.e. in a cost/benefit analysis they estimate how much they will lose from returned units vs. loss of sales, prestige, and customers to the competition). Just think of what limits you would have stated had there been a competition for that job -- ten candidates all designing a unit to given minimum requirements (but allowed to improve on them), with the employer picking the one they like best.
 
  • #42
I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet there is a requirement of 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ?

No, it didn't work by luck. The author acknowledges that the 6ns figure (and thus the 12ns one) is wrong and that they used the correct, longer timing. You can ask him via his web page (just be tactful; he got very angry after I asked him about it).

The "leave it alone, don't issue an erratum, since it worked" argument doesn't wash either. It didn't work. The g2=0 of eq (AJP.8) applies to the normalized Glauber correlation function, which means you have to subtract the accidentals and remove the unpaired singles (the counts on DG for which there were no T or R events). Otherwise you haven't removed the vacuum-photon effects the way G2() does (see his derivation in [4]). Both adjustments lower the g2 in (AJP.14). But they already got a nearly "perfect" result with raw counts. Note that they used N(G), which is 100,000 c/s, for their singles count in eq (AJP.14), while they should have used N(T+R), which is 8000 c/s.

Using raw counts in eq (14), they could only get g2>=1. See http://arxiv.org/abs/quant-ph/0201036, page 11, where they say that the classical g2 >= eta, where eta is the "coincidence-detection efficiency." They also acknowledge that the experiment does have a semi-classical model (after all, Marshall & Santos had already shown that for the Grangier et al. 1986 case, way back then). Thus they had to call upon already-established Bell inequality violations to discount the semi-classical model for their experiment (p 11). They understand that this kind of experiment can't rule out the semi-classical model on its own, since there is [post=529314]no such QED prediction[/post]. With Thorn et al, it is so "perfect" it does it all by itself, even without any subtractions at all.

Keep in mind also that this is AJP and the experiment was meant to be a template for other undergraduate labs (as the title also says), to show their students the anticorrelation inexpensively and reliably. That's why the 6ns figure is all over the article (in 11 places). That's the magic ingredient that makes it "work". Without it, on raw data you will get g2>=1.
 
  • #43
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR.


Oh good ol' MWI. We went this route before and you were backed into solipsism. Anything goes there.

There will never be a branch where DR and DT trigger together (apart from double events).

If you have an optical lab, try it out. That won't happen. DT and DR will trigger no differently than classically, i.e. your raw GTR coincidence data (nothing removed) plugged into (AJP.14) will give you g2>=1 (you'll probably get at least 1.5). The usual QO S/N enhancements via subtractions should not be done here.

Namely, if your raw counts (which are what the classical model with g2>=1 applies to) don't violate g2>=1, there is no point subtracting and checking whether the adjusted g2a goes below 1, since the same subtraction can be added to the classical model (so that it, too, corresponds to what was done with the data in the experiment) and it will follow the adjusted data as well and show g2a<1 (as acknowledged in the Chiao-Kwiat paper).
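
A minimal sketch of that classical bound (Python; a toy intensity model, not a model of their apparatus): for any classical intensity fluctuations, a 50/50 splitter feeds correlated halves I/2 to each detector, so the raw g2 = <I_T I_R>/(<I_T><I_R>) = <I^2>/<I>^2, which is >=1 by the Cauchy-Schwarz inequality:

[code]
import random

# Thermal-like intensity fluctuations (exponential distribution); a perfectly
# steady beam would give exactly 1, and fluctuations only push g2 upward.
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean_I  = sum(samples) / len(samples)
mean_I2 = sum(x * x for x in samples) / len(samples)
print(mean_I2 / mean_I ** 2)   # ~2 here; never below 1 for any classical I
[/code]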

Note also that the conventional QO "correlations", Glauber's Gn(), already include in their definition the subtraction of accidentals and unpaired singles (the vacuum-photon effects). All of the so-called nonclassicality of Quantum Optics is the result of this difference in data post-processing conventions -- they compute the classical model for raw counts, while their own QO post-processing convention in Gn() is to count, correlate, and then subtract. So the two predictions will be "different" since by their definitions they refer to different quantities. The "classical" g2 of (AJP.1-2) is an operationally different quantity than the "quantum" g2 of eq (AJP.8), but in QO nonclassicality claims they use the same notation and imply that both refer to the same thing (the plain raw correlation), then proclaim non-classicality when the experiment using their QO convention (with all the subtractions) matches the Gn() version of g2 (by which convention, after all, the data was post-processed) and not the classical one (which didn't use that post-processing convention).

You can follow this QO magic show recipe right in this AJP article; that's exactly how they set it up {but then, unlike most other authors, they got too greedy and wanted to show complete "perfection", reproducible, the real magic, and for that you really need to cheat in a more reckless way than just misleading by omission}.

The same goes for most others where QO non-classicality (anticorrelations, sub-Poissonian distributions, etc.) is claimed as something genuine (as opposed to a mere terminological convention for the QO use of the word 'non-classical', since that is what it is and there is no more to it).

Note also that the "single photon" case, where the "quantum" prediction is g2=0, has operationally nothing to do with this experiment ([post=529314]see my previous explanation[/post]). The "single photon" Glauber detector is a detector which absorbs (via the atomic dipole and EM interaction, cf. [4]) both beams, the whole photon, which is the state |T>+|R>. Thus the Glauber detector for the single photon |T>+|R> with g2=0 is either a single large detector covering both paths, or an external circuit attached to DT+DR which treats the two real detectors DT and DR as a single detector and gives 1 when one or both trigger, i.e. as if they were a single cathode. The double-trigger T & R case is then simply equivalent to having two photoelectrons emitted from this large cathode (the photoelectron distribution is at best Poissonian for a perfectly steady incident field, and compound/super-Poissonian for variable incident fields).
 
  • #44
Nightlight,

You appear not to accept the PDC technology as being any more persuasive than Aspect's when it comes to pair production. Is that accurate?

Also, is it your opinion that photons are not quantum particles, but are instead waves?

-DrC
 
  • #45
You appear not to accept the PDC technology as being any more persuasive than Aspect's when it comes to pair production. Is that accurate?

The phenomenological PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics (e.g. see papers from the Marshall & Santos group; recent editions of Yariv's well-respected QO textbook have an extra last chapter which for all practical purposes recognizes this equivalence).

Also, is it your opinion that photons are not quantum particles, but are instead waves?

Photons in QED are quantized modes of the EM field. For a free field you can construct these in any basis in Hilbert space, so the "photon number" operator [n] depends on the basis convention. Consequently the answer to the question "how many photons are here" depends on the convention for the basis. (No different from asking me what the speed number of your car is; if I say 2300, that is obviously meaningless until you know what convention I use for my speed units.)

For example, if you have a plane wave as a single mode, in its 1st excited state (as a harmonic oscillator), then in that particular basis you have a single photon; the state is an eigenstate of this [n]. But if you pick other bases, then you'll have a superposition of generally infinitely many of their "photons", and the plane wave is not an eigenstate of their [n].
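
Here is a tiny numerical check of that basis point (Python/numpy; two modes, Fock space truncated at 2 photons per mode, which is enough for a one-photon state). One photon in the mode (1+2)/sqrt(2) is an eigenstate of that mode's number operator, but not of mode 1's:

[code]
import numpy as np

a  = np.diag([1.0, np.sqrt(2.0)], k=1)     # annihilation operator, dim 3
I3 = np.eye(3)
a1, a2 = np.kron(a, I3), np.kron(I3, a)    # the two "old basis" modes
a_p = (a1 + a2) / np.sqrt(2)               # one "new basis" mode

vac = np.zeros(9); vac[0] = 1.0
psi = a_p.T @ vac                          # one photon in the "+" mode

exp = lambda op: psi @ op @ psi            # expectation value (psi is real)
n1, np_ = a1.T @ a1, a_p.T @ a_p
print(exp(np_), exp(np_ @ np_) - exp(np_)**2)  # 1.0, 0.0: an eigenstate
print(exp(n1),  exp(n1 @ n1)  - exp(n1)**2)    # 0.5, 0.25: not "one photon"
[/code]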

The QO convention then calls a "single photon" any superposition of the type |Psi.1> = Sum(k) of Ck(t) |1_k>, where the sum goes over wave vectors k (a 4-vector, k=(w,kx,ky,kz)) and the |1_k> are eigenstates of some [n] with eigenvalue 1. This kind of photon is quasi-localized (with a spread stretching across many wavelengths). Obviously, here you no longer have the E=hv relation, since there is no single frequency v superposed into the "single photon" state |Psi.1>. If the localization is very rough (many wavelengths superposed) then you could say that |Psi.1> has some dominant and average v0, and one could say approximately E=hv0.

But there is no position operator for a point-like photon (it can't be constructed in QED), and no QED process generates a QED "single photon", the Fock state |1> for some basis and its [n] (except as an approximation in the lowest order of perturbation theory). Thus there is no formal counterpart in QED for a point-like entity hiding somewhere inside the EM field operators, much less for some such point being exclusive. A "photon" of laser light stretches out for many miles.

The equations these QED and QO "photons" follow in the Heisenberg picture are the plain old Maxwell equations, for free fields or for any linear optical elements (mirrors, beam splitters, polarizers, etc.). For the EM interactions, the semiclassical and QED formalisms agree to at least alpha^4-order effects (as shown by Barut's version of semiclassical fields, which includes self-interaction). That is 8-9 digits of precision (it could well be more if one were to carry out the calculations). Barut unfortunately died in 1994, so that work has stalled. But their results up to 1987 are described in http://library.ictp.trieste.it/DOCS/P/87/248.pdf . ICTP has scanned 149 of his preprints; you can get the pdfs here (type Barut in "author"; also interesting is his paper at http://library.ictp.trieste.it/DOCS/P/87/157.pdf ; his semiclassical approach starts in papers from 1980 on).

In summary, you can't count them except by convention, they appear and disappear in interactions, there is no point at which they can be said to be, and they have no position, just approximate regions of space defined by a mere convention of "non-zero values" for field operators (one can call these wave packets as well, since they move by the plain Maxwell wave equations anyway, and they are detected by the same square-law detection as semiclassical EM wave packets).

One can, I suppose, think of point photons as a heuristic, but one has to watch not to take it too far and start imagining, as these AJP authors apparently did, that you have the genuine kind of exclusivity one would have for a particle. That exclusivity doesn't exist either in theory (QED) or in experiments (other than via misleading presentation or outright errors, as in this case).

The theoretical non-existence was [post=529314]already explained[/post]. In brief, the "quantum" g2 of (AJP.8-11 for n=0) corresponds to a single photon in the incident field. This "single photon" is |Psi.1> = |T> + |R>, where |T> and |R> correspond to the regions of the "single photon" field in the T and R beams. The detector which (AJP.8) models is Glauber's ideal detector, which counts 1 if and only if it absorbs the whole single photon, leaving the vacuum EM field. But this "absorption" is (as derived by Glauber in [4]) a purely dynamical process, a local interaction of the quantized EM field of the "photon" with the atomic dipole, and for the "whole photon" to be absorbed, the "whole EM field" of the "single photon" has to be absorbed (via resonance, a la antenna) by the dipole. (Note that the dipole can be much smaller than the incident EM wavelength, since the resonant absorption will absorb from the surrounding area of the order of a wavelength.)

So, to absorb the "single photon" |Psi.1> = |T> + |R>, the Glauber detector has to capture both branches of this single field, T and R, interact with them and resonantly absorb them, leaving the EM vacuum as the result, and counting 1. But to do this, the detector has to be spread out to capture both the T and R beams. Any second detector will get nothing, and you indeed have perfect anticorrelation, g2=0, but it is an entirely trivial effect, with nothing non-classical or puzzling about it (a semi-classical detector will do the same if defined to capture the full photon |T>+|R>).

You could simulate this Glauber detector capturing the "single photon" |T>+|R> by adding an OR circuit to the outputs of two regular detectors DT and DR, so that the combined detector is Glauber_D = DT | DR and it reports 1 if either one or both of DT and DR trigger. This, of course, doesn't add anything non-trivial, since this Glauber_D is one possible implementation of the Glauber detector described in the previous paragraph -- its triggers are exclusive relative to a second Glauber detector (e.g. made of another pair of regular detectors placed somewhere, say, behind the first pair).
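
A toy simulation of that point (Python; a classical caricature, not a model of the real apparatus): the two OR-combined "Glauber detectors" anticorrelate trivially, because the second one only sees what the first failed to absorb, while the raw DT/DR pair shows no such exclusivity:

[code]
import random

eta, trials = 0.3, 100_000   # illustrative click probability per pulse
dt_and_dr = front_and_back = 0

for _ in range(trials):
    # Semiclassical picture: DT and DR each respond independently to their
    # half of the field.
    dT = random.random() < eta
    dR = random.random() < eta
    front = dT or dR                 # Glauber_D = DT | DR
    # A second combined detector behind the first sees only what the first
    # one failed to absorb.
    back = (not front) and (random.random() < eta)
    dt_and_dr      += dT and dR
    front_and_back += front and back

print(dt_and_dr / trials)        # ~eta^2 > 0: DT and DR do coincide
print(front_and_back / trials)   # 0.0: the two Glauber detectors anticorrelate
[/code]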

So the "quantum" g2=0 eq (8) (it is also semiclassical value, provided one models the Glauber Detector semiclassically), is valid but trivial and it doesn't correspond to the separate detection and counting used in this AJP experiment or to what the misguided authors (as their students will be after they "learn" it from this kind of fake experiment) had in mind.

You can get g2<1, of course, if you subtract the accidentals and the unpaired singles (the DG triggers for which no DT or DR triggered). This is in fact what Glauber's g2 of eq. (8) already includes in its definition -- it is defined to predict the subtracted correlation, and the matching operational procedure in Quantum Optics is to compare it to the subtracted measured correlations. That's the QO convention. The classical g2 of (AJP.2) is defined and derived to model the non-subtracted correlation, so let's call it g2c. The inequality (AJP.3) is g2c>=1 for the non-subtracted correlation.

Now, nothing stops you from defining another kind of classical "correlation", g2cq, which includes the subtractions in its definition, to match the QO convention. Then this g2cq will violate g2cq>=1, but there is nothing surprising there. Say your subtractions are defined to discard the unpaired singles. Then in your new eq (14) you will put N(DR)+N(DT) (which was about 8000 c/s) instead of N(G) (which was 100,000 c/s) in the numerator of (14), and you now have a g2cq which is 12.5 times smaller than g2c, and well below 1. But no magic. (The Chiao-Kwiat paper recognizes this and doesn't claim any magic from their experiment.) Note that these subtracted g2's, "quantum" or "classical", are not the g2=0 of the single-photon case (eq AJP.11 for n=1), as that was a different way of counting, where the perfect anticorrelation is entirely trivial.

Therefore, the "nonclassicality" of Quantum Optics is a term-of-art, a verbal convention for that term (which somehow just happens to make their work sound more ground-breaking). Any well bred Quantum Optician is thus expected to declare a QO effect as "nonclassical" whenever its subtracted correlations (predicted via Gn or measured and subtracted) violate inequalities for correlations computed classically for the same setup, but without subtractions. But there is nothing genuinely nonclassical about any such "violations".

These verbal-gimmick "violations" have nothing to do with theoretically conceivable genuine violations (where QED still might disagree with semiclassical theory). The genuine violations would have to be perturbative effects of order alpha^5 or beyond, some kind of tiny difference beyond the 8th-9th decimal place, if there is any at all (unknown at present). QO operates mostly with 1st-order effects; all its phenomena are plainly semiclassical. All their "Bell inequality violations" with "photons" are just creatively worded magic tricks of the described kind -- they compare subtracted measured correlations with unsubtracted classical predictions, all wrapped in a whole lot of song and dance about "fair sampling" or the "momentary technological detection loophole" or the "non-enhancement hypothesis"... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for photon correlations is a special case of Gn()) and violate the nonsubtracted classical prediction. Duh.
 
  • #46
nightlight said:
You appear not to accept the PDC technology as being any more persuasive than Aspect's when it comes to pair production. Is that accurate?

The phenomenological PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics...

... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for photon correlations is a special case of Gn()) and violate the nonsubtracted classical prediction. Duh.

Very impressive analysis, seriously, and no sarcasm intended.

But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort. It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way. Reminds me of the complex theories of how the sun and planets actually rotate around the Earth. (And all the while you criticize a theory which since 1927 has been accepted as one of the greatest scientific achievements in all history. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

-------

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different? (After all, the Bell Inequality and QM are a lot farther apart than the 7th or 8th decimal place.)

Your value for 0 degrees?
Your value for 22.5 degrees?
Your value for 45 degrees?
Your value for 67.5 degrees?
Your value for 90 degrees?

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM? Yes, because QM makes specific predictions which allow it to be falsified. So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed -- flawed exactly such that a false positive will be registered 100% of the time! And yet a reasonable person might ask why 2300 photons aren't occasionally seen on one side when only one is seen on the other, if your concept is correct. After all, you specifically say one photon is really many photons.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM? You see, I don't actually believe your theory has the ability to make a single useful prediction that wasn't already in standard college textbooks years ago. (The definition of an AD HOC theory is one designed to fit the existing facts while predicting nothing new in the process.) I freely acknowledge that a future breakthrough might show one of your lines of thinking to have great merit or promise, although nothing concrete has yet been provided.

-------

I am open to persuasion, again no sarcasm intended. I disagree with your thinking, but I am fascinated as to why an intelligent person such as yourself would belittle QM and orthodox scientific views.

-DrC
 
  • #47
But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort.

I wasn't writing a scientific paper here, merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way.

Again, I didn't create any theory, much less make claims about "my" theory. I was referring you to the results that exist, in particular Barut's and Jaynes' work in QED and Marshall & Santos' in Quantum Optics. I cited papers and links so you can look them up and learn some more and, if you're doubtful, verify whether I made anything up.

The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

That's its basic defect, the incompleteness. On the other hand, the claim that any local field theory must necessarily contradict it depends on how you interpret "it". As you know, there have been impossibility proofs since von Neumann. Their basic problem was and is excessive generalization of the interpretation of the formalism, requiring any future theory to satisfy requirements not implied by any known experiment.

Among such generalizations, the remote noninteracting collapse (the projection postulate applied to non-interacting subsystems at spacelike intervals) is the primary source of the nonlocality in nonrelativistic QM. If one ignores the trivial kinds of nonlocality arising from nonrelativistic approximations to EM interactions (such as the instantaneous action-at-a-distance Coulomb potential), the generalized projection postulate is the sole source of nonlocality in nonrelativistic QM. Bell's QM prediction (which assumes that the remote subsystem will "collapse" into a pure state, with no interaction and at a spacelike interval) doesn't follow without this remote projection postulate. The only test for that generalization of the projection postulate is the Bell inequality test.

When considering optical experiments, the proper formalism is Quantum Optics (although nonrelativistic QM is often used as a heuristic tool here). The photon-pair Bell QM prediction is derived here in a more rigorous way (as Glauber's two-point correlations, cf. [5] for the PDC pair), which makes it clearer that no genuine nonlocality is taking place, in theory or in experiments. The aspect made obvious is that the cos^2(a) (or sin^2(a) in [5]) correlation is computed via Glauber's 2-point "correlation" of normal-ordered (E-) and (E+) operators. What that means is that one is predicting prefiltered relations (between angles and coincidence rates) which filter out any 3- or 4-point events (there are 4 detectors). The use of the normally ordered Glauber G2() further implies that the prediction is made for still additionally filtered data, where the unpaired singles and any accidental coincidences will be subtracted.

Thus, what was only implicit in the nonrelativistic QM toy derivation (and what required a generalized projection postulate, while no such additional postulate is needed here) becomes explicit here -- the types of filtering needed to extract the "signal" function cos^2(a) or sin^2(a) ("explicit" provided you understand what Glauber's correlation and detection theory is and what it assumes, cf. [4] and the points I made earlier about it).

Thus, Quantum Optics (which is QED applied to optical wavelengths, plus the detection theory for square-law detectors, plus Glauber's filtering conventions) doesn't really predict the cos^2(a) correlation; it merely predicts the existence of the cos^2(a) "signal" buried within the actual measured correlation. It doesn't say how much is going to be discarded, since that depends on the specific detectors, lenses, polarizers, ... and all that real-world stuff, but it says what kinds of things must be discarded from the data to extract the general cos^2() signal function.

So, unlike the QM derivation, the QED derivation doesn't predict a violation of Bell inequalities for the actual data, but only the existence of a particular signal function. While some of the discarded data can be estimated for a specific setup and technology, no one knows how to make a good enough estimate of all the data which is to be discarded by the theory in order to extract the "signal", to be able to say whether the actual correlation can violate Bell's inequalities. And on the experimental side, no one has so far obtained an experimental violation either.

The non-locality of the violation by the filtered "correlations" doesn't imply anything in particular regarding the non-locality of the correlations, since the Quantum Optics filtering procedure is by definition non-local -- to subtract "accidentals" you need to measure the coincidence rate with the source turned off, but that requires data collection from distant locations. Similarly, to discard 3- or 4-detection events, you need to collect data from distant locations to know it was a 3rd or 4th click. Or, to discard unpaired singles, you need the remote fact that none of the other detectors had triggered. Formally, this nonlocality is built into Glauber's correlation functions by virtue of the disposal of all vacuum-photon terms (from the full perturbative expression for multiple detections), where these terms refer to photon absorptions at different spacelike-separated locations (an "accidental" coincidence means, e.g., a term which has one vacuum photon absorbed on the A+ detector and any photon, signal or vacuum, on the B+/- detectors; any such term is dropped in the construction of Glauber's filtering functions Gn()).


1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different?

The correct formulas for the time evolution of n detectors interacting with the quantized EM field are only given as a generalized interaction in perturbative expansion, such as ref [6] eqs. (4.29)-(4.31). They are of no use in practice since too much is unknown to do anything with them. The simplified, but equally impractical, versions are Glauber's [4], Lect. V, computations of scattering amplitudes (eqs. 5.2-5.6); he then handwaves his way to the filtered version, starting at eq. 5.7. The more rigorous ref [6] (sec. IV, pp. 327), after using the same approximations, acknowledges regarding the Glauber-Sudarshan correlation functions: "A question which remains unanswered is under what circumstances and how this simplification can be justified."

The semiclassical results, which compute the regular, non-adjusted correlations, are derived in [7]. The partially adjusted correlations (with only the local type of subtractions made) are the same as the corresponding QED ones (cf. eq. 4.7, which only excludes single-detector vacuum effects, not the combined nonlocal terms; one could do such removals by introducing some fancy algorithm-like notation, I suppose). Again, as with the QED formulas, these are too general to be useful.

What is useful, though, is that these are semiclassical results, thus completely non-mysterious and transparently local, no matter what the specifics of the detector design or materials are. The fields themselves (EM and matter fields) are the "hidden" (in plain sight) variables. Any further non-local subtractions made on the data from there on are the only source of non-locality, which is entirely non-mysterious and non-magical.

In principle one could say the same for 2nd-quantized fields (separate the local from the non-local terms and discard only the local ones as a design property of a single detector), except that now there is an infinitely redundant overkill in the number of such variables compared to the semiclassical fields (1st-quantized, the matter field + EM field). But as a matter of principle, such equations do evolve the system fully locally, so no non-local effect can be deduced from them, except as an artifact of terminological conventions of, say, calling some later non-locally adjusted correlations still "correlations", which then become non-local "correlations".


Your value for 0 degrees? Your value for 22.5 degrees? Your value for 45 degrees? Your value for 67.5 degrees?
Your value for 90 degrees?


Get your calculator, set it to DEG mode, enter your numbers and for each press cos, then x^2. That's what the filtered "signal" functions would be. But, as explained, that implies nothing regarding Bell inequality violations (which refer to plain data correlations, not some fancy term-of-art "correlations" which include non-local adjustments). As to what any real, nonadjusted data ought to look like, I would be curious to see some. (The experimenters in this AJP paper would not give out a single specific count used for their g2 if their lives depended on it.)
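
Or, the same check in a couple of lines (Python), for the angles you listed:

[code]
import math
for a in (0, 22.5, 45, 67.5, 90):
    print(a, round(math.cos(math.radians(a)) ** 2, 4))
# 0 -> 1.0, 22.5 -> 0.8536, 45 -> 0.5, 67.5 -> 0.1464, 90 -> 0.0
[/code]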

All I know is that, from the theory of multiple detections, QED or semiclassical, there is nothing non-local about them.


I ask because I would like to determine whether you agree or disagree with the predictions of QM.

I agree that the filtered "signal" functions will look like the QM or Quantum Optics prediction (QM is too abstract to make fine distinctions, but the QO "prediction" is explicit in that this is only a "signal" function extracted from the correlations, and surely not the plain correlation between counts). But they will also look like the semiclassical prediction, once you apply the same non-local filtering to the semiclassical predictions. Marshall and Santos have SED models that show this equivalence for the atomic-cascade and PDC sources (see their numerous preprints on arXiv; I cited several in our earlier discussions).

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM?

This is a quantum optics experiment. Nonrelativistic QM can't tell you precisely enough (other than by postulating nonlocal collapse). QO derives what happens, and as explained, nothing nonlocal happens with the plain correlations. No one has such an experiment, or a QED/QO derivation of a nonlocal prediction (assuming you understand what the Gn() "correlations" represent and don't get misled by the wishful terminology of QO).

I also explained in the previous message why the "quantum" g2=0 for the 1-photon state (AJP.8) is a trivial kind of "anticorrelation". That the AJP authors (and a few others in QO) misinterpreted it, that's their problem. Ask them why, if you're curious. Quantum Opticians have been known to suffer delusions of magic, as shown by the Hanbury Brown and Twiss affair, when HBT, using semiclassical theory, predicted the HBT correlations in the 1950s. In no time the "priesthood" jumped in: papers "proving" it was impossible came out, and experiments "proving" HBT wrong were published in a hurry. It can't be so, since photons can't do that... Well, all the mighty "proofs", experimental and theoretical, turned out fake, but not before Purcell published a paper explaining how photons could do it. Well then, sorry HBT, you were right. But... and then in 1963 came Harvard's Roy Glauber with his wishful terminology "correlation" (for a filtered "signal" function) to confuse students and the shallower grownups for decades to come.


Yes, because QM makes specific predictions which allow it to be falsified.

The prediction of a form for the filtered "signal" function has no implication for Bell's inequalities. The cos^2() isn't a uniquely QM or QED prediction for the signal function; the semiclassical theory predicts the same signal function. The only difference is that QO calls its "signal" function a "correlation" while still defining it as a non-locally post-processed correlation function. The nonlocality is built into the definition of the Gn() "correlations".

So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed

It is not flawed. It merely doesn't correspond operationally to eqs (8) & (11) with the "single photon" input, |Psi.1>=|T>+|R>, which yield g2=0. To understand that, read the previous message again (and the earlier one referred to there) and check Glauber's [4] to see what (8) and (11) mean operationally. As I said, |Psi.1> has to be absorbed as a whole by one detector. But to do that, it has to interact with the detector -- all of its field. In [4] there is no magic instant collapse; it is QED, relativistic interaction, and "collapse" here turns out to be plain photon absorption, with all steps governed by the EM field interacting with the atom. It just happens that the nonzero part of the field of |Psi.1> is spread out across the two non-contiguous regions T and R. But it is still "one mode" (by definition, since you plug into (AJP.11) a 1-mode state for the given basis operator). A detector which by definition has to capture the whole mode and leave the vacuum state as the result (the Glauber detector to which eqs. 8-11 apply), in relativistic theory (QED), has to interact with the whole mode, all of its field. In QED dynamics there is no collapse, and AJP.8-11 are the result of a dynamical derivation (cf. [4]), not a postulate you might twist and turn, so it is clear what they mean -- they mean absorption of the whole photon |Psi.1> via the pure dynamics of the quantized EM field and a matter-field detector (they don't 2nd-quantize the matter field in [4]).

After all, you specifically say one photon is really many photons.

I said that the photon number operator [n] is basis dependent: what is "one photon" in one basis need not be "one photon" in another basis. Again, the car speed example -- is it meaningful to argue whether my car had a "speed number" of 2300 yesterday at 8 AM?

In practice, the basis is selected to best match the problem geometry and physics (such as the eigenstates of the noninteracting Hamiltonian), in order to simplify the computations. There is much convention a student learns over the years, so that one doesn't need to spell out at every turn what is meant (which can be a trap for novices or for shallow people of any age). The "Glauber detector" which absorbs "one whole photon" and counts 1 (the one for which his Gn() apply, thus the eqs AJP.8-11) is therefore ambiguous in the same sense. You need to define the modes before you can say what that detector is going to absorb. If you say you have the state |Psi>=|T>+|R> and this state is your "mode" (you can always pick it as one of the basis vectors, since it is normalized), the "one photon", then you need to use that basis for the photon number operator [n] in AJP.11, and then that gives you g2=0. But all these choices also define how your Glauber detector for this "photon" |Psi> is to operate here -- it has to be spread out to interact with and absorb (via EM field-matter QED dynamics) this |Psi>.


3. In fact, can you provide any useful/testable prediction which is different from orthodox QM?

I am not offering a "theory", just explaining the misleading QO terminology and the confusion it could and does cause. I don't have "my theory's predictions", but what I am saying is that, when the QO terminological smoke and mirrors are removed, there is nothing non-local predicted by their own formalism. The non-adjusted correlations will always be local, i.e., for them it will be g2>=1. And they will have the same value in the semiclassical and the QED computation, at least to the alpha^4 perturbative QED expansion (if Barut's semiclassical theory is used); thus 8+ digits of precision will be the same, possibly more (unknown at present).
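For reference, the classical bound just invoked is nothing but the Cauchy-Schwarz inequality; a one-line sketch for an ideal 50:50 split, where both outputs see the same classical intensity I/2:

Code:
g^{(2)}(0) = \frac{\langle I_T I_R \rangle}{\langle I_T \rangle \langle I_R \rangle}
           = \frac{\langle I^2 \rangle / 4}{\langle I \rangle^2 / 4}
           = 1 + \frac{\mathrm{Var}(I)}{\langle I \rangle^2} \ge 1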

What the adjusted g2 will be depends on the adjustments. If you make non-local adjustments (requiring data from multiple locations to compute the amounts to subtract), then yes, you get some g2' which can't be obtained by plugging only locally collected counts into (AJP.14). So what? That has nothing to do with non-locality. It's just the way you define your g2'.

Only if you do what the AJP paper does (along with the other "nonclassicality" claimants in QO), and label with the same symbol g2 both the "classical" (and also non-adjusted correlation) and the "quantum" (and also adjusted "correlation", the "signal" function Gn) models, and then a few paragraphs later manage to forget (or, more likely, never knew) the "and also" parts and decide the sole difference was "classical" vs "quantum" -- only then will you succeed in the self-deception that you have shown "nonclassicality." Otherwise, if you label apples 'a' and oranges 'o', you won't have to marvel at the end why 'a' is different from 'o', as you do when you ask why the g2 from the classical case differs from the g2 from the quantum case and then "conclude" that it must be something "nonclassical" in the quantum case that made the difference.


--- Ref

[5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment," Phys. Rev. Lett. 61(1), 50-53 (1988).

[6] P.L. Kelley and W.H. Kleiner, "Theory of Electromagnetic Field Measurement and Photoelectron Counting," Phys. Rev. 136, A316-A334 (1964).

[7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations," Proc. Phys. Soc. 84, 435-444 (1964).
 
  • #48
nightlight said:
Using raw counts in eq (14), they could only get g2>=1.

You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr)??

I can easily provide you with such a series !
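For instance, a minimal sketch (hypothetical click streams, nothing from the actual data of [1]) where the eq. (14) estimator comes out at 0:

Code:
import numpy as np

# Hypothetical gated click series: G fires in every window, while T and R
# click in alternating windows (perfect anticorrelation), so no triples occur.
N = 100_000
g = np.ones(N, dtype=bool)
t = (np.arange(N) % 2) == 0          # T clicks in even windows
r = ~t                               # R clicks in odd windows

n_g, n_gt = g.sum(), (g & t).sum()
n_gr, n_gtr = (g & r).sum(), (g & t & r).sum()
print(n_gtr * n_g / (n_gt * n_gr))   # 0.0 -- formally far below 1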

cheers,
Patrick.
 
  • #49
vanesch said:
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr)??

In fact, without a lot of "quantum optics", the very fact of having "intensity threshold detectors" which give me a small P_gtr/(P_gt * P_gr) is already an indication that these intensities are not given by the Maxwell equations.
The reason is the following: in order to have a stable interference pattern from the waves arriving at T and R, T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated (when there is intensity at T there isn't any at R, and vice versa), there is never a moment when there is sufficient intensity from both to interfere. If the modulation depth is about 100%, this indicates that the intensities are essentially identical at T and R, on the time scale of the intensity detector (here, a few ns).
Identical intensities on this time scale are necessary to obtain extinction of intensity in the interference pattern (the fields have to cancel at every moment, otherwise some intensity is left).
So no matter how your detector operates, if it gives you a positive logical signal ABOVE an intensity threshold, and a negative logical signal BELOW an intensity threshold, these logical signals have to be correlated by about 100%.

Taking an arbitrary moment in time, looking at the probabilities to have T AND R = 1, T = 1, and R = 1, and calculating P_TR / (P_T P_R), is nothing else but an expression of this correlation of intensities, needed to obtain an interference pattern.
In fact, the probability expressions have the advantage of taking into account "finite efficiencies", meaning that to each intensity over the threshold corresponds only a finite probability of giving a positive logical signal. That's easily verified.

Subsampling these logical signals with just ANY arbitrary sampling sequence G gives you, of course, a good approximation of these probabilities. It doesn't matter whether G is correlated with T or R or not, because the logical signals from T and R are identical. If I now sample only at the times given by a time series G, then I simply find the formula given in AJP (14): P_gtr/ (P_gt P_gr).
This is still close to 100% if I have interfering intensities from T and R.
No matter what time series. So I can just as well use the idler signal as G.

Now, you can say: if it's that simple, why do we go through the pain of generating idlers at all? Why not use a random pulse generator?
The reason is that not the entire intensity function of R and T is usable. Most intensities are NOT correlated.
But that doesn't change the story: the only thing I now have to reproduce, is that I have an interference pattern from R and T when I have G-clicks. Then the same explanation holds, but only for the time windows defined by G.

If I have a time series G defining time windows in which I can have an interference pattern from T and R with high modulation, then the intensities at T and R during those windows need to be highly correlated. This means that the expression P_GTR/(P_GT P_GR) needs to be close to 1.
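A minimal Monte Carlo sketch of this point (all parameters hypothetical: a common exponential intensity modulation reaching both T and R, threshold detectors with independent finite efficiency, and an arbitrary gate series G):

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 200_000                        # time windows
I = rng.exponential(1.0, N)        # common classical intensity modulation
thresh, eta = 1.5, 0.3             # detection threshold, efficiency

over = I > thresh                  # the same intensity reaches T and R
t = over & (rng.random(N) < eta)   # independent finite-efficiency clicks
r = over & (rng.random(N) < eta)
g = rng.random(N) < 0.05           # arbitrary gate series (any subsample)

p_gt  = (g & t).sum() / g.sum()
p_gr  = (g & r).sum() / g.sum()
p_gtr = (g & t & r).sum() / g.sum()
print(p_gtr / (p_gt * p_gr))       # ~ 1/P(I > thresh) >= 1, nowhere near 0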

So if I succeed, somehow, in having the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimate of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this Maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.

cheers,
Patrick.
 
  • #50
nightlight said:
I wasn't writing a scientific paper here, merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

...



Thank you for the courtesy of your response. I do not agree with your conclusions, but I definitely want to study your thinking further.

And I definitely AGREE with what you say about "sharpening" above... I get a lot out of these discussions in the same manner. Forces me to consider my views in a critical light, which is good.
 
  • #51
vanesch said:
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr)??
I can easily provide you with such a series !

It is trivial to have a beam splitter give you nearly perfectly anticorrelated data (e.g. vary the polarization of the optical photons randomly, then set the photodetector to very low noise so it picks up only photons polarized nearly parallel to the PBS axis). Clauser's test [2] produced one such.

It can't be done if you also require that T and R are superposed (as opposed to a mixture) and that they carry equal energy in each instance. That part is normally verified by interfering the two beams T and R.

The authors [1] have such an experiment, but they didn't run the interference test on the T and R EM field samples used for the anticorrelation part. Instead they stuck an interferometer in front of the beam splitter and showed that its T' and R' interfere; but that is a different beam splitter, and a different coincidence setup, with entirely unrelated EM field samples being correlated. It is essential to detect interference using the same samples of the EM field from T and R (or at most propagated, via r/t adjustments, over extended paths), with the same detector and time-window settings. The "same" samples means: extracted the same way from G events, without subselecting and rejecting based on data available away from G (such as the content of detections on T and R, e.g. to reject unpaired singles among G events).

The QO magic show usually treats the time windows (defined via the coincidence circuit settings) and the detector settings as free parameters they can tweak on each try till it all "works" and the magic happens as prescribed. It is easy to make magic when you can change these between runs or polarizer angles. Following their normal engineering signal-filtering reporting conventions, they leave out of their papers the bits of info which are critical for this kind of test (although routine for their engineering signal processing and "signal" filtering): whether the same samples of the fields were used. (This was a rare paper with a few details, and the trick which does the magic immediately jumps out.) Combine that with the Glauberized jargon, where "correlation" isn't correlation, and you have all the tools for an impressive Quantum Optics magic show.

They've been pulling the leg of the physics community with this kind of phony magic since the Bell tests started, and have caused vast quantities of nonsense to be written by otherwise good and even great physicists on this topic. A non-Glauberized physicist uses the word correlation in the normal way, so they invariably get taken in by Glauber's "correlation", which doesn't correlate anything but is just an engineering-style filtered "signal" function, extracted out of the actual correlations via an inherently non-local filtering procedure (the standard QO subtractions). I worked on this topic for my masters and read all the experimental stuff available, yet I had no clue how truly flaky and nonexistent those violation "facts" were.
 
  • #52
T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated, when there is an intensity at T, there isn't any at R, and vice versa, there is never a moment when there is sufficient intensity from both to interfere.

Keep in mind that the 2.5ns time windows are defined on the detectors' output fields; they're the electrical-current counterpart to the optical pulses. The PDC optical pulses are thousands of times shorter, though.

Now, you can say: if it's that simple, why do we go through the pain of generating idlers at all? Why not use a random pulse generator? ...
So if I succeed, somehow, in having the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimate of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this Maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.


The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian for the coherent source.
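A quick sketch of that counting-statistics distinction (hypothetical mean count nbar; sampling Poisson counts on an exponentially fluctuating intensity yields the thermal, i.e. Bose-Einstein, distribution):

Code:
import numpy as np

rng = np.random.default_rng(1)
N, nbar = 100_000, 2.0

coherent = rng.poisson(nbar, N)                  # Poissonian counts
thermal = rng.poisson(rng.exponential(nbar, N))  # super-Poissonian counts

for name, n in (("coherent", coherent), ("thermal", thermal)):
    print(name, n.var() / n.mean())   # Fano factor: ~1 vs ~1 + nbar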

The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:

Code:
           ----
signals  /...\
----------------------------------------------> times
        |--------|    Gate beam timing window
      |------|        Transmitted beam window
             |------| Reflected beam window

That obviously will give you a perfect anticorrelation while still allowing you to show perfect interference, if you change the T and R sampling windows for the interference test (and align them properly).

Even if you keep the windows the "same" for the interference test as for the anticorrelations, you can still get both, provided you partially overlap the T and R windows. Then, by tweaking the detector thresholds, you can still create the appearance of violating the classical visibility vs anticorrelation tradeoff.
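A toy sketch of this timing artifact (all window values hypothetical; only the geometry of the diagram above matters): both detectors see every pulse, yet the non-overlapping T and R windows alone manufacture g2 = 0:

Code:
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
t_pulse = rng.normal(0.0, 2.0, N)        # pulse arrival jitter (ns)

g = np.abs(t_pulse) < 4.0                # gate window straddles the pulse
t = (t_pulse > -6.0) & (t_pulse <= 0.0)  # T window shifted early
r = (t_pulse > 0.0) & (t_pulse < 6.0)    # R window shifted late

n_g, n_gt = g.sum(), (g & t).sum()
n_gr, n_gtr = (g & r).sum(), (g & t & r).sum()
print(n_gtr * n_g / (n_gt * n_gr))       # 0.0: "anticorrelation" by windowing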

The basic validity test should be to feed a laser split 50:50, instead of G and TR, into the very same setup (optics, detectors, coincidence circuit) and show that g2=1. If this gives you <1, the setup is cheating. Note that Grangier et al. in [3] used chaotic light for this test and said they got g2>1; but for chaotic light g2 needs to be >= 2, so this is inconclusive. A stable laser should put you right on the boundary of "fair sampling", making it much easier to see cheating.

Also, the gratuitous separate sampling via a 3rd circuit for the triple coincidence is cheating all by itself. The triple coincidences should be derived (via software or an AND circuit) from the obtained GT and GR samples, not sampled and tuned on their own. Grangier et al. also used this third coincidence unit, but wisely chose not to give any detail about it.

"My" prediction (or rather it is just a QED/QO prediction, but with properly assigned operational mapping to the right experiment) is that the correctly gated G conditioned sample will also give you at best the Poissonian.

As explained, Glauber's g2=0 for the "single photon" state |T> + |R> is operationally completely misinterpreted by some of the Quantum Opticians who did the test, and dutifully parroted by the rest (although Glauber, or Chiao & Kwiat, or Mandel use much more cautious language, recognizing that you need the standard QO subtractions to drop into the non-classical g2). The distribution should be the same as if you separated a single cathode (large compared to the wavelengths, but capturing the incident beam on both partitions equally) into two regions and tried to correlate the photoelectron counts from the two sides. The photoelectron count will be Poissonian at best (g2=1).

Note that if you were to do the experiment and show that the unsubtracted correlations (redundant, since correlations as normally understood should be unsubtracted, but it has to be said here in view of the QO Glauberisms) are perfectly classical, it wouldn't be long before everyone in QO declared they knew it all along, pretending they never thought or wrote exactly the opposite. Namely, suddenly they would discover that the coherent-light pump, with its Poissonian superposition of Fock states, simply generates Poissonian PDC pulses (actually they'll probably be Gaussian), and the problem is solved. The g2=0 for single-photon states remains OK, it just doesn't apply to this source (they will still continue searching for the magic source in other effects; they'll blame the source, even though g2=0 for a Glauber detector of "one photon" |T>+|R>, as explained, is a trivial anticorrelation; they'll maintain it is just a matter of finding the "right source" and continue with the old misinterpretation of g2=0). The QO "priesthood" is known for these kinds of delusions; e.g. look up the Hanbury Brown and Twiss comedy of errors, or the similar one with the "impossible" interference of independent laser beams, which also caused grand mal convulsions.

If you follow up another step from here, once you establish that the anticorrelation doesn't work as claimed but always gives g2>=1 on actual correlations, you'll discover that no optical Bell test will work either, since you can't get rid of the double +/- Poissonian hits for the same photon without reducing the detection efficiency, which then invalidates the test through the other "loophole".

This is more transparent in the QO/QED derivations of Bell's QM prediction, where they use only the 2-point Glauber correlation (cf. [5] Ou, Mandel), thus explicitly discarding all triple and quadruple hits and making it clear they're using a sub-sample. The QO derivation is sharper than the abstract QM 2x2 toy derivation. In particular, the QO/QED derivation doesn't use the remote-subsystem projection postulate but derives the effective "collapse" as a dynamical process of photon absorption, thus a purely local and uncontroversial dynamical collapse. The additional explicit subtractions included in the definition of G2() make it clear that the cos^2() "correlation" isn't a correlation at all but an extracted "signal" function (a la the Wigner distribution reconstructed via quantum tomography); thus one can't plug it into the Bell inequalities as is, without estimating the terms discarded by Glauber's particular convention for the signal vs noise dividing line. With the generic, projection-based 2x2 abstract QM derivation, all of that is invisible. Also invisible is the constraint (part of Glauber's derivation [4]) that "collapse" is due to local field dynamics; it is just plain photon absorption through a local EM-atom interaction. The QO/QED derivation also shows explicitly how the non-locality enters into von Neumann's generalized projection postulate (which projects remote noninteracting subsystems): it is the result of the manifestly non-local data-filtering procedures built into Glauber's Gn() subtraction conventions. That alone disqualifies any usage of such non-locally filtered "signal" functions as a proof of non-locality by plugging them into Bell's inequalities.

Earlier I cited Haus (who was one of the few wise Quantum Opticians; see http://web.mit.edu/newsoffice/2003/haus.html ) to the effect that Glauber's formalism is one particular model of the detection process and needs to be applied with a good dose of salt.
 
  • #53
nightlight said:
Keep in mind that the 2.5ns time windows are defined on the detectors' output fields; they're the electrical-current counterpart to the optical pulses. The PDC optical pulses are thousands of times shorter, though.

The point was that if the intensities have to be strongly correlated (not to say identical) on the fast timescale (in order to produce interference), then they will de facto be correlated on longer timescales (which are just sums of smaller timescales).

The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian for the coherent source.

But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or whenever I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:

Of course you need to apply the SAME time window to G, T and R alike.
Or you have to make the windows such that, for instance, the GTR window is LARGER than the GT and GR windows, so that you get an overestimate of the quantity which you want to come out low.

But it is the only requirement. If you obtain interference effects within these time windows (defined by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt), then this cannot be generated by beams in the Maxwellian way.
And you don't need any statistical property of the G intervals, except for a kind of repeatability: namely, that they behave in a similar way during the interference test (when detectors r and t are not present) and during the coincidence measurement. The G intervals can be distributed in any way.

cheers,
Patrick.
 
  • #54
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or whenever I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The subsample of T and R corresponding to the DG-defined windows can and does have different statistics of T, R events (both for the T or R singles and for the TR pairs) than a random sample (unconditioned on G events). For example, in the DG window samples T or R had 4000 c/s singles rates, but only 250 c/s in the non-DG samples.


But it is the only requirement. If you obtain interference effects within these time windows (defined by G) and you get a low value for the quantity N_gtr N_g/ (N_gr N_gt), then this cannot be generated by beams in the Maxwellian way.

Well, almost so. There is one effect which could violate this condition, provided you have very close detectors DT and DR (within the order of a wavelength, i.e. they would need to be atomic detectors). Then a resonant absorption at DT would distort the incident field within the near-field range of DT, and consequently DR would get a lower EM flux than with the DT absorption absent. Namely, when you have a dipole which resonates with the incident plane EM wave, the field it radiates superposes coherently with the plane-wave field, resulting in a bending of the Poynting vector toward the dipole and increasing the flux it absorbs well beyond the dipole size d, so that it absorbs flux from an area of order lambda^2 instead of d^2 (where d is the dipole size).

With the electron clouds of a detector (e.g. an atom) there is a positive feedback loop: the initial weak oscillations of the cloud (from the small forward fronts of the incident field) cause the above EM-sucking distortion, which in turn increases the amplitude of the oscillations, extending their reach farther (the dipole emits stronger fields and bends the flux more toward itself), thus enhancing the EM-sucking effect. There is thus a self-reinforcing loop in which the EM-sucking grows exponentially, finally resulting in an abrupt breakdown of the electron cloud.

When you have N nearby atoms absorbing light, the net EM-sucking effect multiplies by N. But due to the initial phase differences of the electron clouds, some atom will have a small initial edge in its oscillations over its neighbors, and due to the exponential nature of the positive feedback, the EM-sucking into that single atom will quickly get ahead (like differing compound interest rates) and rob its neighbors of their (N-1) fluxes, thus enhancing the EM-sucking effect approximately N-fold compared to a single-atom absorption.
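A toy iteration of this compound-interest picture (purely illustrative, not taken from any of the cited theories): three absorbers start with nearly equal shares of a fixed incident flux, and each share feeds back into its own gain:

Code:
import numpy as np

s = np.array([0.34, 0.33, 0.33])   # initial flux shares; #1 has a tiny edge
for _ in range(500):
    w = s * (1.0 + 2.0 * s)        # positive feedback: share breeds gain
    s = w / w.sum()                # total incident flux stays fixed
print(s.round(3))                  # -> [1. 0. 0.]: the leader robs the rest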

These absorptions will thus strongly anticorrelate between nearby atoms and create an effect similar to Einstein's needle radiation -- as if a pointlike photon had struck just one of the N atoms in each photo-absorption. There is a group (see http://www.ensmp.fr/aflb/AFLB-26j/aflb26jp115.htm ) which has over the last ten years developed a detailed photodetection theory based on this physical picture, capable of explaining the discrete nature of detector pulses (without requiring the point-photon heuristic).

Regarding the experiment, another side effect of this kind of absorption is that the absorbing cloud will emit back about half the radiation it absorbed (toward the source, as if paying the vacuum's 1/2 hv; the space behind the atom will have a corresponding reduction of EM flux, a shadow, as if scattering had occurred).

Although I haven't seen the following described, based on the general gist of the phenomenon it is conceivable that these back-emissions, for a very close beam splitter, could end up propagating the positive feedback to the beam splitter, so that the incident EM field, which for each G pulse starts out equally distributed into T and R, superposes with these back-emissions at the beam splitter in such a way as to enhance the flux portion going to the back-emitting detector, analogously to the nearby-atom case. So T and R would start out equal during the weak leading front of the TR pulse, but the initial phase imbalance between the cathodes of DT and DR would lead to an amplification of the difference, and to anticorrelation between the triggers, by the time the bulk of the TR pulse traverses the beam splitter. But for the interference test there would be no separate absorbers for the T and R sides, just a single absorber much farther away, and the interference would still occur.

To detect this kind of resonant beam-splitter flip-flop effect, one would have to observe the dependence of any anticorrelation on the distance of the detectors from the beam splitter. Using one-way mirrors in front of the detectors might block the effect, if it exists, at a given distance.
 
  • #55
nightlight said:
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or whenever I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

The subsample of T and R corresponding to the DG-defined windows can and does have different statistics of T, R events (both for the T or R singles and for the TR pairs) than a random sample (unconditioned on G events). For example, in the DG window samples T or R had 4000 c/s singles rates, but only 250 c/s in the non-DG samples.

Yes, but that's not what I'm saying. Of course, in a general setup, you can have different statistics for T, R and TR depending on how you decide to sample them (G). What I'm saying is that if you use ANY sampling method of your choice, such that you get interference between the T and R beams, THEN, if you use the same sampling method to do the counting, and you assume Maxwellian intensities which determine the "counting", you should find a "big number" for the formula N_gtr N_g / (N_gt N_gr), because this number indicates the level of intensity correlation between T and R in the time frames defined by the series G; and you need a strong intensity correlation in order to be able to get interference effects.
Now, you can object that this is not exactly what is done in the AJP paper ; that's true, but what they do is VERY CLOSE to this, and with some modification of the electronics you can do EXACTLY THAT.
I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

cheers,
Patrick.
 
  • #56
Yes, but that's not what I'm saying. Of course, in a general setup, you can have different statistics for T, R and TR depending on how you decide to sample them (G). What I'm saying is that if you use ANY sampling method of your choice, such that you get interference between the T and R beams, THEN, if you use the same sampling method to do the counting, and you assume Maxwellian intensities which determine the "counting", you should find a "big number" for the formula N_gtr N_g / (N_gt N_gr), because this number indicates the level of intensity correlation between T and R in the time frames defined by the series G; and you need a strong intensity correlation in order to be able to get interference effects.

That's correct (with the possible exception of the resonant flip-flop effect).

Now, you can object that this is not exactly what is done in the AJP paper ; that's true, but what they do is VERY CLOSE to this, and with some modification of the electronics you can do EXACTLY THAT.

And you will get g2>1 (before any subtractions are done, of course).

I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

But this is not what the QO photon theory suggests (the customary interpretation of g2=0 for the single-photon case, which I argued is an incorrect interpretation; with actual counts you won't get anything below 1). They claim you should get g2 nearly 0 with an equal T and R split in each instance (which is demonstrated by showing high-visibility interference on the same sample). Are you saying you don't believe the QO interpretation of g2=0 for the "single photon" case?
 
  • #57
nightlight said:
I don't say that you should strictly find >1. I'm saying you should find a number close to 1. So if you'd find 0.8, that wouldn't be such big news. However, finding something close to 0 is not possible if you have clear interference patterns.

But this is not what the QO photon theory suggests (the customary interpretation of g2=0 for the single-photon case, which I argued is an incorrect interpretation; with actual counts you won't get anything below 1). They claim you should get g2 nearly 0 with an equal T and R split in each instance (which is demonstrated by showing high-visibility interference on the same sample). Are you saying you don't believe the QO interpretation of g2=0 for the "single photon" case?

Are you a priest ? I mean, you seem to have this desire to try to convert people, no ? :-))

Sorry to disappoint you. When I say "close to 0 is not possible if you have clear interference patterns", I mean: when Maxwellian theory holds.
I think you will get something close to 0, and to me the paper IS convincing, because I think they essentially DO the right thing, although I can understand that for someone like you, who thinks the priesthood is trying to steal the thinking minds of the poor students, there will always be something that doesn't fit, like these 6ns, or the way the timing windows are tuned, and so on.

But at least, we got to an experimentally accepted distinction:

IF we have timing samples of T and R, given by a time series G, and T and R (without detectors) give rise to interference, while with detectors they give a very small number for N_gtr N_g / (N_gr N_gt), then you accept that any Maxwellian description goes wrong.

That's sufficient for me (not for you: you think people are cheating, I don't think they are). Because I know that the setup can give rise to interference (other quantum-erasure experiments do that); I'm pretty convinced that the setup DOES have about equal time samplings of R and T (you aren't, too bad) and that's good enough for me.
However, I agree with you that it is a setup in which you can easily "cheat", and as our mindsets are fundamentally different concerning that aspect, we are both satisfied. You are satisfied because there's still ample room for your theories (priesthood and semiclassical theories); I'm OK because there is ample room for my theories (competent scientists and quantum optics).
Let's say that this was a very positive experiment: satisfaction rose on both sides :-))

cheers,
Patrick.
 
  • #58
I'm pretty convinced that the setup DOES have about equal time samplings of R and T (you aren't, too bad) and that's good enough for me.

Provided you use the same type of beam splitter for the interference and the anticorrelations. Otherwise, a polarizing beam splitter used for the anticorrelation part of the experiment (and a non-polarizing one for the interference), combined with variation in the incident polarization (which can be due to aperture depolarization), can produce anticorrelation (if the detector thresholds are tuned "right").

However, I agree with you that it is a setup in which you can easily "cheat", and as our mindsets are fundamentally different concerning that aspect, we are both satisfied. You are satisfied because there's still ample room for your theories (priesthood and semiclassical theories);

When you find an experiment which shows (on non-adjusted data) the anticorrelation and the interference, with timing windows and polarization issues properly and transparently addressed, let me know.

Note that neither the semiclassical nor the QED model predicts g2=0 for their two-detector setup with separate counting. The g2=0 in Glauber's detection model and signal-filtering conventions (his Gn()'s, [4]) is a trivial case of anticorrelation: a single detector absorbing the full "single photon" |Psi.1> = |T> + |R> as a plain QED local interaction, thus extending over both the T and R paths. It has nothing to do with the imagined anticorrelation in the counts of two separate detectors, neither of which is physically or geometrically configured to absorb the whole mode |Psi.1>. So, if you obtain such an effect, you had better brush up on your Swedish and get a travel guide for Stockholm, since the effect is not predicted by the existing theory.

If you wish to predict it from the definition of G2() (based on the QED model of joint detection, e.g. [4] or the similar one in Mandel's QO), explain on what basis, for a single-mode |Psi.1>=|T>+|R> absorption, you assign DT (or DR) as being such an absorber. How does DT absorb (as a QED interaction) the whole mode |Psi.1>, including its nonzero region R? The g2=0 describes the correlation of events (a) and (b), where (a) = the complete EM field energy of |Psi.1> being absorbed (dynamically and locally) by one Glauber detector, and (b) = no absorption of this EM field by any other Glauber detector (2nd, 3rd, ...). Give me a geometry of such a setup that satisfies the dynamical requirement of such an absorption (all QED dynamics is local). { Note that [4] does not use a remote projection postulate for the photon field. It only uses transition probabilities for the photo-electrons, which involve a localized projection of their states. Thus [4] demonstrates what von Neumann's abstract non-local projection postulate of QM really means, operationally and dynamically, for the photon EM field: the photon field always simply follows the local EM dynamical evolution, including absorption, while its abstract non-local projection is an entirely non-mysterious consequence of the non-local filtering convention built into the definition of Gn() and its operational counterpart, the non-local QO subtraction conventions.} Then show how this corresponds to the AJP setup, or to Grangier's setup, as you claim it does.

So, the absence of a theoretical basis for claiming g2=0 in the AJP experiment is not some "new theory" of mine, semiclassical or otherwise; it simply follows from a careful reading of the QO foundation papers (such as [4]) which define the formal entities such as Gn() and derive their properties from QED.

I'm OK because there is ample room for my theories (competent scientists and quantum optics).
Let's say that this was a very positive experiment: satisfaction rose on both sides :-))


Competent scientists (like Aspect, Grangier, Clauser, ...) know how to cheat more competently, not like these six. The chief experimenter here would absolutely not reveal a single actual count used for their g2 computations, for either the reported 6ns delays or the alleged "real" longer delay, nor disclose what this secret "real" delay was.

Write to them and ask for the counts data and what this "real" delay was (and how was it implemented? as an extra delay wire? inserted where? and what about the second delay, from DG to DR: was it still 12ns, or did that one, too, have a "real" version longer than the published one? what was the "real" value for that one? why was that one wrong in the paper? a paper-length problem again? but that one was already 2 digits long, so was the "real" one a 3-digit delay?) and tell me. The "article was too long" excuse can't possibly explain how, say, a 15ns delay became 6ns in 11 places in the paper (to say nothing of the 12ns one, which would also have to be longer if the 6ns delay was really more than 10ns).

If it was an irrelevant detail, why have the 6ns (which they admit is wrong) spread out all over the paper? Considering they wrote the paper as an instruction for other undergraduate labs to demonstrate their imaginary effect, it seems this overemphasized 6ns delay (which was, curiously, the single most repeated quantity for the whole experiment) was the actual magic ingredient they needed to make it "work" as they imagined it ought to.
 
  • #59
vanesch said:
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

Just realized that MWI won't be able to do that for you here. Unlike the abstract QM entanglement between the apparatus (photoelectron) and the "photon" (quantized EM mode), in this case, once you work out a dynamical model for the detection process, as several QED derivations have done (such as [4] Lect 5), the "measurement" involves no QM projection of the EM field state, and the EM field doesn't become entangled with its "apparatus" (e.g. the photo-electrons), so von Neumann's abstraction (the QM caricature of the QED) has a plain dynamical explanation. Of course, each electron does entangle, as a QM abstraction, with the rest of its apparatus (amplifier; here it is a local, interacting entanglement). But the latter is now a set of several separate, independent measurements (of photo-electrons by their local amplifiers) at different spacelike locations. That doesn't help you remove the dynamically obtained distribution of creations of these new QM "measurement setups", which has no "exclusivity": these new setups are created by local dynamics independently of each other, e.g. the creation of a new "setup" on the DR cathode does not affect the rates (or probabilities) of creation or non-creation of a new "setup" on the DT cathode.

The only conclusion you can then reach from this (without getting into the reductio ad absurdum of predicting different observable phenomena based on different choices of von Neumann's measurement-chain partition) is that for any given incident fields on T and R (recalling that in [4] there was no non-dynamical EM field vanishing, or splitting into vanished and non-vanished parts, but just plain EM-atom local resonant absorption), the photo-detections at the two locations will be independent of events outside their light cones; thus the classical inequality g2>=1 will hold, since the probabilities of ionization are now fixed by the incident EM fields, which never "vanished" except via the local, independent dynamics of EM resonant absorption.

This conclusion is also consistent with the explicit QED prediction for this setup. As explained in the previous message, QED predicts the same g2>=1 for the "one photon" setup of the AJP experiment with the two separate detectors DT and DR. The Thorn et al. "quantum" g2 (eq AJP.8), which I will label g2q, doesn't have a numerator (Glauber's G2()) which can be operationally mapped to their setup and their definition of detectors (separate counting) for the "one photon" incident state |Psi.1> = |T> + |R>. The numerator of g2q, which is G2(T,R), is defined as a term (extracted from the perturbative expansion of the EM - cathode-atom interactions) that has precisely two whole-photon absorptions by two Glauber detectors GD.T and GD.R, which cannot be operationally assigned to DT and DR, since neither DT nor DR can (physically or geometrically) absorb the whole mode |Psi.1> = |T> + |R> (normalization factors ignored).

The experimental setup with the DT and DR of the AJP paper, interpreted as some Glauber detectors GD.T' and GD.R', can be mapped to Glauber's scheme (cf [4]) and (AJP.8-11), but only for a mixed input state such as Rho = |T><T| + |R><R|. In that case DT and DR can absorb the whole mode, since in each try you have the entire incident EM field mode localized at either one place or the other; thus the detectors DT and DR can be operationally identified as GD.T' and GD.R', (8) applies, and g2q=0 follows. But this is a trivial case as well, since the classical g2c' for this mixed state is also 0.

The superposed |Psi.1> = |T> + |R>, which was their "one photon" state, requires different Glauber detectors, some GD1 and GD2, to compute its G2(x1,x2). Each of these detectors would have to interact with and absorb the entire mode to register a count of 1, in which case it would leave the EM field in the vacuum state, and the other Glauber detector would register 0 (since Glauber detectors, which are the operational counterparts of Gn(), by definition don't trigger on vacuum photons). That again is an "anticorrelation", but of a trivial kind, with one detector covering both paths T and R and always registering 1, and the other detector shadowed behind it, or somewhere else altogether, always registering 0. The classical g2c for this case is obviously also 0.
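For reference, the formal origin of this "trivial" g2=0 (a standard single-mode computation: the normally ordered moment in the numerator of AJP.8 reduces to a factorial moment of the photon number, which vanishes for the one-photon state):

Code:
g^{(2)} = \frac{\langle \hat a^\dagger \hat a^\dagger \hat a \hat a \rangle}{\langle \hat a^\dagger \hat a \rangle^2}
        = \frac{\langle \hat n (\hat n - 1) \rangle}{\langle \hat n \rangle^2}
        = \frac{1 \cdot 0}{1^2} = 0 \qquad \text{for } |n{=}1\rangle

The dispute here is only about which physical detector arrangement this single mode corresponds to operationally.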

Note that Glauber, right after defining Gn() ([4] p 88), says: "As a first property of the correlation functions we note that when we have an upper bound on the number of photons present in the field then the functions Gn vanish identically for all orders higher than a fixed order M [the upper limit]." He then notes that this state is (technically) non-classical, but can't think up, for the remaining 100 pages, this beam-splitter example as an operational counterpart illustrating the technical non-classicality. Probably because he realized that the two obvious experimental variants of implementing this kind of g2=0, described above, have classical counterparts with the same prediction g2=0 (the mixed state, and the large GD1 covering T and R). In any case, he certainly didn't leap, in this or the other QO founding papers, to an operational mapping of the formal g2=0 (of AJP.8-11) for the one-photon state onto the setup of this AJP experiment (simply because QED doesn't predict any such "anticorrelation" phenomenon here, the leaps of a few later "QO magic show" artists and their parroting admirers among "educators" notwithstanding).

Clauser, in his 1974 experiment [2], doesn't rely on the g2 from Glauber's theory but comes up with an anticorrelation based on 1st-order perturbation theory of the atomic surface of a single cathode interacting with the incident EM field. It was known from Fermi's earlier 1932 results { Fermi believed in Schrodinger's interpretation of Psi as a physical field, a la EM, and of |Psi|^2 as a real charge density, a seed which was developed later, first by Jaynes, then by Barut, into a fully working theory matching QED predictions to at least alpha^4 } that there will be anticorrelation in the photo-ionizations of "nearby" atoms (within wavelength distances, the near-field region; I discussed the physical picture of this phenomenon earlier). Clauser then leaps from that genuine, but entirely semi-classical, anticorrelation phenomenon to far-apart detectors (certainly more than many wavelengths) and "predicts" anticorrelation, then goes on to show "classicality" being violated experimentally (again another fake, due to having random linear polarizations and a polarizing beam splitter, thus measuring the mixed state Rho=|T><T| + |R><R|, for which the classical g2c=0 as well; he didn't report an interference test, of course, since there could be no interference on Rho). When Clauser presented his "discovery" at the next QO conference in Rochester NY, Jaynes wasn't impressed, objecting and suggesting that he try it with circularly polarized light, which would avoid the mixed-state "loophole." Clauser never reported anything on that experiment, though, and generally clammed up on the great "discovery" altogether (not that this stopped, in the slightest, the quantum-magic popularizers from using Clauser's experiment as their trump card, at least until Grangier's so-called 'tour de force'), moving on to "proving" the nonclassicality via Bell tests instead.

Similarly, Grangier et al [3] didn't redo their anticorrelation experiment to fix the loopholes after the objections and a semiclassical model of their results from Marshall & Santos. That model was done via a straightforward SED description, which models classically the vacuum subtractions built into Gn(), so one can define the same "signal" filtering conventions as those built into Gn(). The non-filtered data doesn't need anything of that sort, though, since the experiments always give g2>=1 on non-filtered data (which is the semiclassical and the QED prediction for this setup). Marshall & Santos wanted (and managed) also to replicate Glauber's filtered "signal" function for this experiment, which is a stronger requirement than just showing that the raw g2>=1.
 
  • #60
nightlight said:
Unlike the abstract QM entanglement between the apparatus (photoelectron) and the "photon" (quantized EM mode), in this case, once you work out a dynamical model for the detection process, as several QED derivations have done (such as [4] Lect 5), the "measurement" involves no QM projection of the EM field state


You have been repeating that several times now. That's simply not true: quantum theory is not "different" for QED than for anything else.

The "dynamical model of the detection process" you always cite is just the detection process in the case of one specific mode which corresponds to a 1-photon state in Fock space, and which hits ONE detector.
Now, assuming that there are 2 detectors, and the real EM field state is a superposition of 2 one-photon states (namely, 1/sqrt(2) for the beam that goes left at the splitter, and 1/sqrt(2) for the beam that goes right), you can of course apply the reasoning to each term individually, assuming that there will be no interaction between the two different detection processes.

What I mean is:
If your incoming EM field is a "pure photon state" |1photonleft>, then you can go and do all the local dynamics with such a state, and after a lot of complicated computations, you will find that your LEFT detector gets into a certain state while the right detector didn't see anything. I'm not going to consider finite efficiencies (yes, yes...); the thing ends up as |detectorleftclick> |norightclick>.

You can do the same for a |1photonright> and then of course we get |noleftclick> |detectorrightclick>.

These two time evolutions:

|1photonleft> -> |detectorleftclick> |norightclick>

|1photonright> -> |noleftclick> |detectorrightclick>

are part of the overall time evolution operator U = exp(- i H t)

Now if we have an incoming beam on a beam splitter, this gives to a very good approximation:

|1incoming photon> -> 1/sqrt(2) {|1photonleft> + |1photonright> }

And, if the two paths are then far enough apart (a few wavelengths :-) so that we can assume the physical processes in the two detectors are quite independent (no modifications in the Hamiltonian contributions: no "cross interactions" between the two detectors), then, by linearity of U, we find that the end result is:

1/sqrt(2)(|detectorleftclick> |norightclick>+|noleftclick> |detectorrightclick>)

I already know your objections: you are going to throw Glauber and other correlations around, and call my model a toy model, etc.
That's a pity, because the above is the very essence of what these mathematical tools put to work in more generality, and if you were more willing, you could even trace it through that complicated formalism from the beginning to the end. And the reason is this: QED and QO are simply APPLICATIONS of general quantum theory.
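A minimal sketch of just that linearity step (a two-level click/no-click toy space per detector, nothing more):

Code:
import numpy as np

noclick = np.array([1.0, 0.0])   # detector basis: index 0 = no click
click   = np.array([0.0, 1.0])   # index 1 = click

# The two known evolutions, written as final states of the detector pair:
out_left  = np.kron(click, noclick)   # |1photonleft>  -> |Lclick>|noRclick>
out_right = np.kron(noclick, click)   # |1photonright> -> |noLclick>|Rclick>

# Beam splitter output (|left> + |right>)/sqrt(2); extend U by linearity:
out = (out_left + out_right) / np.sqrt(2)
print(out.reshape(2, 2))   # rows: left detector, columns: right detector
# amplitude 1/sqrt(2) on (click, noclick) and (noclick, click);
# exactly 0 on (click, click): no branch with joint clicks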

cheers,
Patrick.
 
