# Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

by nightlight
Tags: 2004, experiment, photon, thorn, wave collapse, yeah
P: 186
 Quote by vanesch I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool.
Well, the collapse in the sense of the beam splitter anticorrelations discussed in this thread. You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR. You argued that there was a drop in the probability of a DT trigger in this case and that this drop is genuine (in the sense of the anticorrelation not being an artifact of subtractions of accidentals and unpaired singles). As I understand it, you no longer believe in that kind of non-interacting spacelike anticorrelation (the reduction of the remote subsystem state). You certainly did argue consistently that it was a genuine anticorrelation and not an artifact of subtractions.
PF Patron
Emeritus
P: 6,203
 Quote by nightlight You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR.
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

I just happen to be an observer in one of the two branches. But the other one can happily exist. Collapse means that suddenly, that other branch "disappears". It doesn't have to. I will simply not observe it, because I don't happen to be in that branch.

Observationally this is of course indistinguishable from the claim that the branch "I am not in" somehow "doesn't exist anymore". If you do that, you have a collapse. But I find it more pleasing to say that that branch still exists, but is not open to my observation, because I happen to have made a bifurcation in another branch.
It all depends on what you want to stick to: do you stick to the postulate that things can only exist when I can observe them, or do you stick to esthetics in the mathematical formalism ? I prefer the latter. You don't have to.

cheers,
Patrick.
 Quote by nightlight ORTEC TAC/SCA 567. The data sheet for the model 567 lists a required lead of 10 ns for the START GATE signal (which was here the DG signal) ahead of the START (which was here the DT signal, see AJP Fig. 5) in order for the START to get accepted. But AJP Fig. 5, and several places in the text, give the delay line between DG and DT as 6 ns. That means that when DG triggers at t0, 6 ns later (+/- 1.25 ns) the DT will trigger (if at all), but the TAC will ignore it since it won't be ready yet, and won't be for another 4 ns. Then, at t0+10 ns the TAC is finally enabled, but without a START no event will be registered. The "GTR" coincidence rate will be close to the accidental background (slightly above it, since if the real T doesn't trip DT and a subsequent background DT hits at t0+10, then the DG trigger, which is now more likely than background, at t0+12 ns will allow the registration).
I read your analysis here in detail, and it is correct in a certain sense. Indeed, the data sheet does state a requirement of 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6 ns, you're guaranteed NOT to have a count ? I wouldn't think so ! Doing quite some electronics development myself, I know that when I specify limits on utilisation, I'm usually sure of a much better performance ! So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
So I'm pretty sure that the 6 ns is sufficient to actually make the trigger happen.
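For concreteness, the timing conflict under discussion can be written out as a few lines of arithmetic. This is only a sketch of the numbers quoted above (the 10 ns gate-to-start lead from the 567 data sheet, the 6 ns delay line from AJP Fig. 5), not anything from the paper itself:

```python
# Timing check for nightlight's reading of the ORTEC 567 data sheet.
# All figures are taken from the discussion above; nothing is measured.
gate_opens = 0.0                  # DG trigger opens the START GATE at t0 (ns)
start_arrives = gate_opens + 6.0  # DT arrives after the 6 ns delay line
required_lead = 10.0              # minimum gate-to-start lead per data sheet

within_spec = (start_arrives - gate_opens) >= required_lead
shortfall = required_lead - (start_arrives - gate_opens)
print(within_spec, shortfall)     # False 4.0 -- on paper, START is 4 ns early
```

Of course, as argued above, a conservative data-sheet margin may mean the unit still triggers at 6 ns in practice; the arithmetic only shows the spec is formally violated.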
A point is of course that in this particular example, the remedy is extremely simple: use longer delay lines !!
I've already been in such a situation, you know: you do an experiment, everything works OK, you write up your stuff and you submit it. Then there is a guy who points out that you didn't respect a certain specification. So you go pale, and you wonder if you have to retract your submission. You think about it, and then you say: ah, but simply using longer delay lines will do the trick ! So you rush to the lab, you correct the error, and you find more or less equivalent results ; it didn't matter in the end. What do you do ? Do you write to the editor saying that you made a mistake in the paper, but that it actually doesn't matter, if only you're allowed to change a few things ??
OF COURSE NOT. You only do that if the problem DOES matter.
So that's in my opinion what happened, and that's why the author told you that everything is all right: these 6 ns delays do not respect the spec, but everything works nevertheless.
There's also another very simple test to check the coincidence counting: connect the three inputs G, T and R to one single pulse generator. In that case, you should find identical counts in GT, GR and GTR.
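That pulse-generator check can be sketched in a few lines; this is an illustration of the idea, not code from the experiment. With all three inputs driven by one and the same pulse train, every coincidence counter must report the same count, and the g2 estimator of AJP eq. (14) must come out exactly 1:

```python
# Sanity check: G, T and R wired to one pulse generator. Any g2 other
# than 1 from this configuration would flag a problem in the counting setup.
pulses = set(range(5_000))          # one shared train of pulse times
g = t = r = pulses                  # all three inputs see identical clicks

N_g, N_gt = len(g), len(g & t)
N_gr, N_gtr = len(g & r), len(g & t & r)
assert N_gt == N_gr == N_gtr        # identical coincidence counts, as expected

g2 = N_gtr * N_g / (N_gt * N_gr)
print(g2)                           # 1.0
```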

After all: 1) they WORK (even if they don't strictly respect the specs of the counter), 2) they are in principle correct if we use fast enough electronics, and 3) they are a DETAIL.
Honestly, if it WOULDN'T WORK, then the author has ALL POSSIBLE REASONS to publish it:
1) he would have discovered a major thing, a discrepancy with quantum optics
2) people will soon find out ; so he would stand out as the FOOL THAT MISSED AN IMPORTANT DISCOVERY BECAUSE HE DIDN'T PAY ATTENTION TO HIS APPARATUS.

After all, he suggests, himself, to make true counting electronics. Hey, if somebody pays me enough for it, I'll design it myself ! I'd think that for $20,000 I could make you a circuit without any problem. That's what I do for part of my job time.

I'll see if we have such an ORTEC 567 and try to gate it at 6ns.

Now, if you are convinced that it IS a major issue, just try to find out a lab that DID build the set up, and promise them a BIG SURPRISE when they change the cables into longer ones, on condition that you will share the Nobel prize with them.

cheers,
Patrick
 Quote by vanesch So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

cheers,
Patrick.
 Quote by vanesch This is not babbling in the air! For instance, together with a colleague, we developed a card for charge-division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...
Your example shows an additional reason why it is not plausible in this case. In your case there were no ten other vendors with competing designs and specs fighting for the same customer. You had a monopoly and what you say goes, so you're better off being conservative in stating limitations. In a competitive situation, the manufacturers will push the specs as far as they can get away with (i.e. in the cost/benefit analysis they estimate how much they will lose from returned units vs loss of sales, prestige, and customers to the competition). Just think of what limits you would state had that been a competition for that job -- ten candidates all designing a unit to given minimum requirements (but allowed to improve on them), with the employer picking the one they like best.
 PF Patron Sci Advisor P: 5,057 Nightlight, You appear not to accept the PDC technology as being any more persuasive than Aspect's when it comes to pair production. Is that accurate? Also, is it your opinion that photons are not quantum particles, but are instead waves? -DrC
 Quote by nightlight The phenomenological PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation functions Gn (Bell's QM cos^2() correlation for photons is a special case of Gn()) and violate the unsubtracted classical prediction. Duh.
Very impressive analysis, seriously, and no sarcasm intended.

But as science, I consider it nearly useless. No amount of "semi-classical" explanation will ever cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort. It is strictly an elaborate catch-up to QM/CI, telling us that you can get the same answers a different way. It reminds me of the complex theories of how the sun and planets actually rotate around the Earth. (And all the while you criticize a theory which since 1927 has been accepted as one of the greatest scientific achievements in all of history. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

-------

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different? (After all, the Bell Inequality and QM are a lot farther apart than the 7th or 8th decimal place.)

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM? Yes, because QM makes specific predictions which allow it to be falsified. So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed -- exactly such that a false positive will be registered 100% of the time! And yet, a reasonable person might ask why 2300 photons aren't occasionally seen on one side, when only one is seen on the other if your concept is correct. After all, you specifically say one photon is really many photons.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM? You see, I don't actually believe your theory has the ability to make a single useful prediction that wasn't already in standard college textbooks years ago. (The definition of an AD HOC theory is one designed to fit the existing facts while predicting nothing new in the process.) I freely acknowledge that a future breakthrough might show one of your lines of thinking to have great merit or promise, although nothing concrete has yet been provided.

-------

I am open to persuasion, again no sarcasm intended. I disagree with your thinking, but I am fascinated as to why an intelligent person such as yourself would belittle QM and orthodox scientific views.

-DrC
 Quote by nightlight Using raw counts in eq (14), they could only get g2>=1.
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??

I can easily provide you with such a series !
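As an illustration of how easy such a series is to construct (a toy example, not data from any experiment): let G fire in every time window while T and R alternate, so they never fire together. The AJP eq. (14) estimator then comes out at its minimum:

```python
# A classical click series with N_gtr * N_g / (N_gt * N_gr) < 1.
# T and R are deliberately anticorrelated by construction.
N = 10_000
g_clicks = set(range(N))                        # gate fires in every window
t_clicks = {i for i in range(N) if i % 2 == 0}  # T fires on even windows
r_clicks = {i for i in range(N) if i % 2 == 1}  # R fires on odd windows

N_g = len(g_clicks)
N_gt = len(g_clicks & t_clicks)
N_gr = len(g_clicks & r_clicks)
N_gtr = len(g_clicks & t_clicks & r_clicks)     # empty: never together

g2 = N_gtr * N_g / (N_gt * N_gr)
print(g2)   # 0.0
```

(The interesting question, pursued in the rest of the thread, is whether such a series is also compatible with interference between T and R.)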

cheers,
Patrick.
 Quote by vanesch You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ??
In fact, without a lot of "quantum optics", the very fact of having "intensity threshold detectors" which give me a small P_gtr/(P_gt * P_gr) is already an indication that these intensities are not given by the Maxwell equations.
The reason is the following: in order to have a stable interference pattern from the waves arriving at T and R, T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they were anticorrelated (when there is intensity at T, there isn't any at R, and vice versa), there would never be a moment when there is sufficient intensity from both to interfere. If the modulation depth is about 100%, this indicates that the intensities are essentially identical at T and R, on the time scale of the intensity detector (here, a few ns).
Identical intensities on this time scale is necessary to obtain extinction of intensity in the interference pattern (the fields have to cancel at any moment, otherwise some intensity is left).
So no matter how your detector operates, if it gives you a positive logical signal ABOVE an intensity threshold, and a negative logical signal BELOW the threshold, these logical signals have to be correlated at about 100%.

Taking an arbitrary moment in time, looking at the probabilities to have T AND R = 1, T = 1, and R = 1, and calculating P_TR / (P_T P_R) is nothing else but an expression of this correlation of intensities, needed to obtain an interference pattern.
In fact, the probability expressions have the advantage of taking into account "finite efficiencies", meaning that to each intensity over the threshold corresponds only a finite probability of giving a positive logical signal. That's easily verified.

Subsampling these logical signals with just ANY arbitrary sampling sequence G gives you, of course, a good approximation of these probabilities. It doesn't matter whether G is correlated with T or R or not, because the logical signals from T and R are identical. If I now sample only at times given by a time series G, then I simply find the formula given in AJP (14): P_gtr / (P_gt P_gr).
This is still close to 100% if I have interfering intensities from T and R.
No matter what time series. So I can just as well use the idler signal as G.

Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator.
The reason is that not the entire intensity function of R and T is usable. Most intensities are NOT correlated.
But that doesn't change the story: the only thing I now have to reproduce, is that I have an interference pattern from R and T when I have G-clicks. Then the same explanation holds, but only for the time windows defined by G.

If I have a time series G, defining time windows, in which I can have an interference pattern from T and R with high modulation, the intensities during T and R need to be highly correlated. This means that the expression P_GTR/(P_GT P_GR) needs to be close to 1.

So if I succeed, somehow, to have the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimation of P_GTR / (P_GR P_GT),

then I should find a number close to 1 if this maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.
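This Maxwellian expectation can be checked with a toy Monte Carlo (a sketch; the efficiency and the fraction of above-threshold windows are made-up parameters). If the intensities at T and R are identical and each detector clicks independently with some efficiency whenever the intensity is above threshold, the estimator can never drop below 1; it comes out at 1/p, where p is the fraction of windows above threshold:

```python
import random

# Toy model of identical (Maxwellian) intensities at T and R seen by
# threshold detectors with finite efficiency. Parameters are assumptions.
random.seed(0)
eta = 0.3     # detection efficiency per window (assumed)
p_hi = 0.2    # fraction of windows with intensity above threshold (assumed)
N = 200_000   # number of sampling windows

n_t = n_r = n_tr = 0
for _ in range(N):
    above = random.random() < p_hi         # same intensity at T and R
    t = above and random.random() < eta    # independent detection losses
    r = above and random.random() < eta
    n_t += t; n_r += r; n_tr += (t and r)

g2 = (n_tr / N) / ((n_t / N) * (n_r / N))
print(round(g2, 1))   # about 1/p_hi = 5, and in any case never below 1
```

Finite efficiency only pushes the estimator further above 1, which is why a measured value close to 0 cannot come from identical classical intensities.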

cheers,
Patrick.
 Quote by nightlight I wasn't writing a scientific paper here but merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings here are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise. ... --- Ref [5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment", Phys. Rev. Lett. 61(1), pp 50-53 (1988). [6] P.L. Kelly and W.H. Kleiner, "Theory of electromagnetic field measurement and photoelectron counting", Phys. Rev. 136, A316-A334 (1964). [7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations", Proc. Phys. Soc. 84, 435-444 (1964).
Thank you for the courtesy of your response. I do not agree with your conclusions, but definitely I want to study your thinking further.

And I definitely AGREE with what you say about "sharpening" above... I get a lot out of these discussions in the same manner. It forces me to consider my views in a critical light, which is good.
 Quote by vanesch You mean that it is impossible to have a series of time clicks for which cannot give a number lower than 1 for the expression N_gtr N_g / (N_gt * N_gr) ?? I can easily provide you with such a series !
It is trivial to have a beam splitter give you nearly perfectly anticorrelated data (e.g. vary the polarization of the optical photons randomly and then set the photodetectors for very low noise, so each picks up only the photons with polarization nearly parallel to its PBS port). Clauser's test [2] had produced one such.

It can't be done if you also require that T and R are superposed (as opposed to a mixture) and that they carry equal energy in each instance. That part is normally verified by interfering the two beams T and R.

The authors [1] have such an experiment, but they didn't test the T and R EM field samples used for the anticorrelation part. Instead they stuck an interferometer in front of the beam splitter and showed that its T' and R' interfere, but that is a different beam splitter, and a different coincidence setup, with entirely unrelated EM field samples being correlated. It is essential to use the same samples of the EM field from T and R (or at most propagated by r/t adjustments for the extended paths), and the same detector and time window settings, to detect the interference. The "same" samples means: extracted the same way from the G events, without subselecting and rejecting based on data available away from G (such as the content of detections on T and R, e.g. to reject unpaired singles among the G events).

The QO magic show usually treats the time windows (defined via the coincidence circuit settings) and the detector settings as free parameters they can tweak on each try till it all "works" and the magic happens as prescribed. It is easy to make magic when you can change these between runs or polarizer angles. Following their normal engineering signal-filtering reporting conventions, they do not report in their papers the bits of info which are critical for this kind of test (although routine in their engineering signal processing and "signal" filtering) -- whether the same samples of fields were used. (This was a rare paper with a few such details, and the trick which does the magic immediately jumps out.) Combine that with the Glauberized jargon, where "correlation" isn't correlation, and you have all the tools for an impressive Quantum Optics magic show.

They've been pulling the leg of the physics community since the Bell tests started with this kind of phony magic, and have caused vast quantities of nonsense to be written by otherwise good and even great physicists on this topic. A non-Glauberized physicist uses the word correlation in the normal way, so they invariably get taken in by Glauber's "correlation", which doesn't correlate anything but is just an engineering-style filtered "signal" function extracted out of the actual correlations via an inherently non-local filtering procedure (the standard QO subtractions). I worked on this topic for my masters and read all the experimental material available, yet I had no clue how truly flaky and nonexistent those violation "facts" were.
 Quote by vanesch T and R need to be modulated by intensities which have a strong correlation on the timescale of the detection window. Indeed, if they are anticorrelated, when there is an intensity at T, there isn't any at R, and vice versa, there is never a moment when there is sufficient intensity from both to interfere.
Keep in mind that the 2.5 ns time windows are defined on the detectors' output fields; they're the electrical-current counterpart of the optical pulses. The PDC optical pulses are thousands of times shorter, though.

 Quote by vanesch Now, you can say, if that's so simple, why do we go through the pain of generating idlers at all: why not use a random pulse generator. ... So if I succeed, somehow, to have the following: I have a time series generator G ; I can show interference patterns, synchronized with this time series generator, of two light beams T and R ; and I calculate an estimation of P_GTR / (P_GR P_GT), then I should find a number close to 1 if this maxwellian picture holds. Well, the results of the paper show that it is CLOSE TO 0.
The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian statistics for the coherent source.

The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:

                 ----
    signals     /....\
    ----------------------------------------------> time
              |--------|          Gate beam timing window
          |------|                Transmitted beam window
                   |------|       Reflected beam window

That obviously will give you a perfect anticorrelation while still allowing you to show perfect interference if you change the T and R sampling windows for the interference test (and align them properly).
Even if you keep the windows the "same" for the interference test as for the anticorrelations, you can still get both, provided you partially overlap the T and R windows. Then by tweaking the detector thresholds, you can still create the appearance of violating the classical visibility vs anticorrelation tradeoff.

The basic validity test should be to feed a laser split 50:50, instead of G and TR, into the very same setup (optics, detectors, coincidence circuit) and show that g2=1. If this gives you <1, the setup is cheating. Note that Grangier et al. in [3] had used chaotic light for this test and said they got g2>1, but for chaotic light g2 needs to be >= 2, so this is inconclusive. A stable laser should put you on the very boundary of "fair sampling", and it should be much easier to see cheating. Also, the gratuitous separate sampling via the 3rd circuit for the triple coincidence is cheating all by itself. The triple coincidences should be derived (via software or an AND circuit) from the obtained GT and GR samples, not sampled and tuned on their own. Grangier et al. also used this third coincidence unit, but wisely chose not to give any detail about it.

"My" prediction (or rather, it is just the QED/QO prediction, but with the operational mapping properly assigned to the right experiment) is that the correctly gated, G-conditioned sample will also give you at best a Poissonian. As explained, Glauber's g2=0 for the "single photon" state |T> + |R> is operationally completely misinterpreted by some Quantum Opticians who did the test, and dutifully parroted by the rest (although Glauber, or Chiao & Kwiat, or Mandel use much more cautious language, recognizing that you need the standard QO subtractions to drop into the non-classical g2). The distribution should be the same as if you separated a single cathode (large compared to the wavelengths but capturing the incident beam equally on both partitions) into two regions and tried to correlate the photoelectron counts from the two sides.
The photoelectron count will be Poissonian at best (g2=1).

Note that if you were to do the experiment and show that the unsubtracted correlations (redundant, since correlations as normally understood should be unsubtracted, but it has to be said here in view of the QO Glauberisms) are perfectly classical, it wouldn't be long before everyone in QO declares they knew it all along and pretends they never thought or wrote exactly the opposite. Namely, suddenly they will discover that a coherent-light pump with its Poissonian superposition of Fock states will simply generate Poissonian PDC pulses (actually they'll probably be Gaussian) and the problem is solved. The g2=0 for single-photon states remains OK, it just doesn't apply to this source (they will still continue searching for the magic source in other effects, so they'll blame the source, even though g2=0 for a Glauber detector of "one photon" |T>+|R>, as explained, is a trivial anticorrelation; but they'll still maintain it is just a matter of finding the "right source" and continue with the old misinterpretation of g2=0; the QO "priesthood" is known for these kinds of delusions, e.g. look up the Hanbury Brown and Twiss comedy of errors, or the similar one with the "impossible" interference of independent laser beams, which also caused grand mal convulsions).

If you follow up another step from here: once you establish that the anticorrelation doesn't work as claimed but always gives g2>=1 on actual correlations, you'll discover that no optical Bell test will work either, since you can't get rid of the double +/- Poissonian hits for the same photon without reducing the detection efficiency, which then invalidates the test from the other "loophole". This is more transparent in the QO/QED derivations of Bell's QM prediction, where they use only the 2-point Glauber correlation (cf. [5] Ou, Mandel), thus discarding explicitly all triple and quadruple hits, making it clear they're using a sub-sample.
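The Poissonian claim above can be illustrated with a toy beam-splitter model (a sketch; the mean photon number mu is an assumed parameter). A Poisson(mu) pulse split 50:50 yields independent Poisson(mu/2) photon numbers at T and R, so for threshold detectors P_TR = P_T * P_R and hence g2 = 1:

```python
import math
import random

# Toy model: Poissonian pulses on a 50:50 splitter with threshold detectors.
rng = random.Random(1)

def poisson(lam):
    # Knuth's multiplicative method; adequate for small lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

mu, N = 0.2, 400_000          # mean photons per gated pulse (assumed); trials
n_t = n_r = n_tr = 0
for _ in range(N):
    photons = poisson(mu)
    at_t = sum(rng.random() < 0.5 for _ in range(photons))  # 50:50 split
    t, r = at_t >= 1, (photons - at_t) >= 1                 # threshold clicks
    n_t += t; n_r += r; n_tr += (t and r)

g2 = (n_tr / N) / ((n_t / N) * (n_r / N))
print(round(g2, 2))           # close to 1: Poissonian, not sub-Poissonian
```

The binomial split of a Poisson variable gives statistically independent counts in the two arms, which is why no tweaking of mu in this classical picture ever produces g2 < 1.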
The QO derivation is sharper than the abstract 2x2 QM toy derivation. In particular, the QO/QED derivation doesn't use the remote-subsystem projection postulate but derives the effective "collapse" as a dynamical process of photon absorption, thus a purely local and uncontroversial dynamical collapse. The additional explicit subtractions included in the definition of G2() make it clear that the cos^2() "correlation" isn't a correlation at all but an extracted "signal" function (a la the Wigner distribution reconstructed via quantum tomography); thus one can't plug it into the Bell inequalities as is, without estimating the terms discarded by Glauber's particular convention for the signal vs noise dividing line. With the generic projection-based 2x2 abstract QM derivation, all of that is invisible. Also invisible is the constraint (part of Glauber's derivation [4]) that the "collapse" is due to local field dynamics; it is just plain photon absorption through the local EM-atom interaction. The QO/QED derivation also shows explicitly how the non-locality enters the von Neumann generalized projection postulate (which projects remote non-interacting subsystems) -- it is the result of the manifestly non-local data filtering procedures used in Glauber's Gn() subtraction conventions. That alone disqualifies any use of such non-locally filtered "signal" functions as a proof of non-locality by plugging them into Bell's inequalities. Earlier I cited Haus (who was one of the few wise Quantum Opticians; he lived in my town but passed away recently, before I knew the QO grandmaster was just a few blocks away), who plainly said that "collapse" (von Neumann's projection) is a "shortcut" for a genuine dynamical analysis of the detection process and needs to be taken with a good dose of salt.
 Quote by nightlight Keep in mind that the 2.5 ns time windows are defined on the detectors' output fields; they're the electrical-current counterpart of the optical pulses. The PDC optical pulses are thousands of times shorter, though.
The point was that if the intensities have to be strongly correlated (not to say identical) on the fast timescale (in order to produce interferences), then they will de facto be correlated on longer timescales (which are just sums of the smaller timescales).

 The random time window (e.g. if you ignore the G beam) won't even be Poissonian, since there will be a large vacuum superposition, which has a Gaussian distribution. So the unconditional sample (or a random sample) will give you super-Poissonian statistics for the PDC source and Poissonian statistics for the coherent source.
But it doesn't MATTER how the time windows are distributed. They could be REGULARLY distributed with a quartz clock, or placed whenever I decide to push a button, or anything. They are just a subsample of the entire intensity curve, and don't need to satisfy any statistical property.

 The main item to watch regarding the G-detection conditioned samples is to pick the "same" sample on T and R. For example, consider a GTR coincidence window setup shown with the PDC pulse alignments:
Of course you need to apply the SAME time window to G, T and R alike.
Or you have to make the windows such that, for instance, the GTR window is LARGER than the GT and GR windows, so that you get an overestimation of the quantity which you want to show is low.

But that is the only requirement. If you obtain interference effects within these time windows (defined by G) and you get a low value for the quantity N_gtr N_g / (N_gt N_gr), then this cannot be generated by beams in the Maxwellian way.
And you don't need any statistical property of the G intervals, except for a kind of repeatability: namely that they behave in a similar way during the interference test (when detectors r and t are not present) and during the coincidence measurement. The G intervals can be distributed in any way.

cheers,
Patrick.