Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

nightlight

There was a recent paper claiming to demonstrate the indivisibility of
photons in a beam splitter experiment (the remote "wave collapse"
upon "measurement" of "photon" in one of the detectors).

1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).
http://marcus.whitman.edu/~beckmk/QM/

The authors claim to violate "classicality" by 377 standard deviations,
which is by far the largest violation ever for this type of experiment.
The setup is an archetype of quantum mystery: A single photon arrives at
a 50:50 beam splitter. One could verify that the two photon wave packet
branches (after the beam splitter) interfere nearly perfectly, yet if
one places a photodetector in each path, only one of the two detectors
will trigger in each try. As Feynman put it - "In reality, it contains
the only mystery." How does "it" do it? The answer is -- "it" doesn't
do "it" and the mysterious appearance is no more than a magic trick.

Unlike the earlier well known variants of this experiment ([2],[3]),
the present one describes the setup in sufficient detail that the
sleight of hand can be spotted. The setup is sketched below, but
you should get the paper since I will refer to figure and formula
numbers there.

Code:
     G photon Source   TR photon    PBS  T 
DG <---------- [PDC] ----------------\----------> DT
                                     |
                                     | R
                                     V
                                     DR
The PDC source generates two photons, G and TR. The G photon is used
as a "gate" photon, meaning that a trigger of its detector DG defines
the time windows (of 2.5 ns, centered on the DG trigger) in which
to count the events on the detectors DT and DR, which detect the
photons in the Transmitted and Reflected beams (wave packets). The
"quantum" effect they wish to show is that after a detector, say,
DT triggers, the other detector DR will not trigger. That would
demonstrate the "indivisibility" of the photon and a "collapse"
of the remote wave packet at DR location as soon as the photon
was "found" at the location DT.

In order to quantify the violation of classicality, [1] defines
a coefficient g2, which is a normalized probability of a joint
trigger of DT and DR (within the windows defined by the DG trigger)
and is given via:

... g2 = P(GTR) / [P(GT)*P(GR)] ... (1)

or in terms of the (ideal) counts as:

... g2 = N(GTR)*N(G) / [N(GT)*N(GR)] ... (2)

where N(GTR) is the count of triple triggers, N(GT) the count of
double triggers of DG and DT, etc. The classical prediction is that
g2>=1 (the equality g2=1 would hold for a perfectly steady laser
source, the "coherent light"). This inequality is eq (AJP.3). The
quantum "prediction" (eq AJP.8,13) is that for a single-photon state
TR, g2=0. The paper claims they obtained g2=0.0177 +/- 0.0026.
The accidental (background) coincidences alone would yield
g2a=0.0164, so that the g2-g2a is just 0.0013, well within the
std deviation 0.0026 from the quantum prediction. Perfection.
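
To make eq (2) concrete, here is a toy calculation. The paper does not
publish the raw counts behind its Table I, so the per-second counts
below are invented, chosen only to roughly match the quoted rates:

Code:
def g2(n_g, n_gt, n_gr, n_gtr):
    """g2 = N(GTR)*N(G) / (N(GT)*N(GR)), i.e. eq. (2)."""
    return n_gtr * n_g / (n_gt * n_gr)

N_G  = 100_000    # DG (gate) triggers per second, roughly as quoted
N_GT = 4_000      # DG&DT coincidences per second (hypothetical)
N_GR = 4_000      # DG&DR coincidences per second (hypothetical)

# Independent DT/DR firing (steady classical field) gives g2 = 1:
n_gtr_indep = N_GT * N_GR / N_G
print(g2(N_G, N_GT, N_GR, n_gtr_indep))   # -> 1.0

# A triple rate at the accidental-background level pushes g2 toward 0:
print(g2(N_G, N_GT, N_GR, 2.6))           # -> ~0.016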

The two tiny, little clouds in this paradise of experimental
and theoretical perfection are:

a) while there is a QED prediction of g2=0, it is not for this
kind of detection setup (that's a separate story which we could
pursue later), and

b) the experiment doesn't show that (2) yields g2=0, since they
didn't measure the actual triple coincidence count N(GTR) at all,
but just a small part of it.

Let's see what the sleight of hand in (b) was. We can look at the
coincidence detection scheme as a sampling of the EM fields T and R,
where the sampling time windows are defined by the triggers of the
gate DG. Here the sampling window was 2.5 ns, and they measured
around 100,000 counts/sec on DG (and 8000 c/s on DT+DR). Thus
the sampled EM field represents just 0.25 ms out of each second.

The classical prediction g2>=1 applies to either continuous or
sampled measurements, provided the samples of GT and GR are
taken from the same position in the EM stream. For the coincidences
GT and GR, [1] does seem to select properly matching sampling
windows, since they tuned the GT and GR coincidence units (TAC/SCA,
see AJP p.1215-1216, sect C, Fig 5) to maximize each rate (unfortunately
they don't give any actual counts used for computing their final
results in AJP Table I via (2), but we'll grant them this).

Now, one would expect that, having obtained the sequence of properly
aligned GT and GR samples (say, a sequence of length N(G) of
0's and 1's for each arm), one would extract the triple coincidence
count N(GTR) by simply adding 1 to N(GTR) whenever both the GT and
the GR arrays contain a 1 at the same position, as sketched below.
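
In code, that 'obvious' extraction is nothing more than an AND over
the aligned sample arrays. A minimal sketch (the bits below are made
up; this is just the bookkeeping, not the experiment's data):

Code:
gt = [1, 0, 0, 1, 0, 1, 0, 0]   # 1 = DT fired inside the gate window
gr = [0, 0, 1, 1, 0, 0, 0, 1]   # 1 = DR fired inside the same window

n_g   = len(gt)                              # N(G): number of gate windows
n_gt  = sum(gt)                              # N(GT)
n_gr  = sum(gr)                              # N(GR)
n_gtr = sum(t & r for t, r in zip(gt, gr))   # N(GTR): both fired

print(n_g, n_gt, n_gr, n_gtr)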

But no, that's not what [1] does. For "mysterious" reasons
they add a third, separate coincidence unit (AJP.Fig 5; GTR)
which they tune on its own to extract its own separate sample
of EM fields. That alone is a gigantic loophole, a secret pocket
in the magician's sleeve. If the sampling windows GT and GR for
the new GTR unit are different enough from the earlier GT/GR
windows (e.g. shifted by just 1.25 ns in opposite directions),
the classical prediction via (2) will also be g2=0 (just background).
And as they're about to finish, they pause at the door with an
'and by the way' look and say "There is one last trick used in setting
up this threefold coincidence unit," where they explain how they
switch optical fibers, from DR to DG, then tune to G+T to stand
in for R+T coincidences, because they say "we expect an absence
of coincidences between T and R" (p 1216). Well, funny they
should mention it, since after all this, somehow, I am also
beginning to expect the absence of any GTR coincidences.

Also, unlike the GT and GR units, which operated in START/STOP
TAC mode, the GTR unit operated in START GATE mode, where a
separate pulse from DG is required to enable the acceptance
of the DT and DR signals (here DT was used as START and DR as
STOP input to the TAC/SCA, see Fig 5, while in the other two units
G was used for START and T or R for STOP). It surely is getting
curiouser and curiouser, all these seeming redundancies with all
their little differences.


The experiment [3] also had a 3rd GTR unit with its own tuning,
but they didn't give any details at all. The AJP authors [1] give
only a qualitative sketch, and no figures on the T and R sampling
window positions for the GTR unit (e.g. relative to those of the GT
and GR units) were available from the chief author. Since the
classical prediction is sensitive to the sampling window positions,
and can easily produce via (2) anything from g2=0 to g2>1 just by
changing the GTR windows, this is a critical bit of data the
experimenters should provide, or at least they should mention how
they checked it and what the windows were. A toy illustration of
this sensitivity is sketched below.
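
The following is a minimal toy model of that sensitivity, not a
reconstruction of the actual apparatus: every rate and probability
in it is invented. A steady classical pulse makes DT and DR fire
independently within properly aligned windows, which pins g2 at the
classical floor of 1 (fluctuating classical light would only push it
above 1), while a GTR unit whose own windows miss the pulse sees
nothing but dark counts and drives the eq (2) ratio toward 0.

Code:
import random

def run(n_gates, p_t, p_r, p_dark, gtr_windows_aligned):
    # Count GT, GR and GTR coincidences over n_gates gate windows.
    n_gt = n_gr = n_gtr = 0
    for _ in range(n_gates):
        dt = random.random() < p_t      # DT fires on the real pulse
        dr = random.random() < p_r      # DR fires on the real pulse
        n_gt += dt
        n_gr += dr
        if gtr_windows_aligned:
            # GTR unit samples the same windows as the GT/GR units
            n_gtr += dt and dr
        else:
            # GTR windows shifted off the pulse: only dark counts remain
            n_gtr += (random.random() < p_dark) and (random.random() < p_dark)
    return n_gtr * n_gates / (n_gt * n_gr)      # g2 via eq (2)

random.seed(0)
print(run(1_000_000, 0.04, 0.04, 0.001, True))    # aligned GTR windows -> g2 ~ 1
print(run(1_000_000, 0.04, 0.04, 0.001, False))   # shifted GTR windows -> g2 ~ 0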

Of course, after that section, I was about ready to drop it
as yet another phony 'quantum magic show'. Then I noticed that
at the end they give the part numbers for their TAC/SCA units
(p1218, Appendix C): http://www.ortec-online.com/electronics/tac/567.htm . The data sheet for
the model 567 lists a required delay of 10ns for the START
(which was here the DT signal, see AJP.Fig 5) from the START GATE
signal (which was here the DG signal) in order for the START to get
accepted. But AJP Fig 5, and several places in the
text, give their delay line between DG and DT as 6 ns. That
means that when DG triggers at t0, 6 ns later (+/- 1.25 ns)
DT will trigger (if at all), but the TAC will ignore it
since it won't be ready yet, and won't be for another 4 ns. Then,
at t0+10ns the TAC is finally enabled, but without a START
no event will be registered. The "GTR" coincidence rate
will be close to the accidental background (slightly above it,
since if the real T doesn't trip DT and a subsequent
background DT hit arrives after t0+10, then the DR trigger, which
is now more likely than background, at around t0+12ns will
allow the registration). And that is exactly the g2 they
claim (other papers claim much smaller violations, and
only on subtracted data, not the raw data, which is how
it should be: "nonclassicality" as the Quantum Optics
term of art, not anything nonclassical for real).
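
To spell out that timing argument, here is a minimal sketch of the
gate-to-START acceptance rule as I read it from the 567 data sheet
(the 10 ns setup figure and the 6 ns / 15 ns delays are the ones
discussed above; the helper function itself is purely illustrative,
not any real API):

Code:
GATE_SETUP_NS = 10.0   # START accepted only this long after the gate opens

def start_accepted(t_gate_ns, t_start_ns):
    """True if the TAC/SCA accepts the START pulse (simplified rule)."""
    return (t_start_ns - t_gate_ns) >= GATE_SETUP_NS

t0 = 0.0                               # DG fires (START GATE)
print(start_accepted(t0, t0 + 6.0))    # DT 6 ns later  -> False, genuine start rejected
print(start_accepted(t0, t0 + 15.0))   # DT 15 ns later -> True, start accepted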

So, as described in [1], the GTR unit settings will cut off
almost all genuine GTR coincidences, yielding the "perfect"
g2. I did ask the chief experimenter about the 10ns delay
and the inconsistency. Oh, he knew it all along, and they have
actually used the proper delay (and not the 6ns as stated in the
paper), but the paper was too long for such details. Lemme see,
they wrote the paper and the AJP told them to shorten it to such
and such length. Now, say, the six of them sweated for a day
editing the text, and just had it at about the right length,
except for 5 extra characters. Now, having cut out all they could
think of, they sit at the table wondering, vot to do? Then suddenly
lightning strikes and they notice that if they were to replace
the delay they actually used (say 15ns, or anything above 11.5,
thus 2+ digits anyway) with 6 ns they could shorten the text by
10 characters. Great, let's do it, and so they edit the paper,
replacing the "real" delay of say 15ns with the fake delay of 6ns.
Yep, that's how the real delay which was actually and truly greater
than 10ns must have become the 6ns reported in the paper in 10 places.
Very plausible - it was the paper length that did it.

And my correspondent ended his reply with: "If I say I measured
triple coincidences (or lack thereof) then I did. Period. End of
discussion." Yes, Sir. Jahwol, Herr Professor.



--- Additional References:

2. J. F. Clauser, "Experimental distinction between the quantum and
classical field-theoretic predictions for the photoelectric effect,"
Phys. Rev. D 9, 853-860 (1974).

3. P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for
a photon anticorrelation effect on a beam splitter: A new light on
single-photon interferences," Europhys. Lett. 1, 173-179 (1986).

4. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
 
If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion.

Zz.
 
ZapperZ said:
If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion. Zz.

You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make the time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.
 
nightlight said:
You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had wrong delays. As to writing to AJP, I am not subscribed any more to AJP (I was getting it for few years after grad school). I will be generous and let the authors issue their own errata since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.

I have been aware of the paper since the week it appeared online and have been highlighting it ever since. If it is true that they either messed up the timing, or simply didn't publish the whole picture, then the referee didn't do as good a job as he/she should, and AJP should be made aware via a rebuttal. Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Furthermore, the section that this paper appeared in has no page limit as far as I can tell, if one is willing to pay the publication cost. So them saying the paper was getting too long is a weak excuse.

However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is a frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

Zz.
 
ZapperZ said:
However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is a frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply ; that doesn't mean you should complain that he forgot to power his devices.
You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

cheers,
Patrick.
 
Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Well, if I write one, I would like to see it, and see the replies, etc.

However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.

It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim nearly perfect "quantum" g2, without any subtractions (of accidental coincidences and of unpaired DG triggers, which substantially lowers g2 via eq (2), where N(G) would drop from 100,000 c/s to 8000 c/s). If true, it would imply genuine physical collapse in a spacelike region. If they had it for real, you would be looking at Nobel prize work. But it just happens to be contrary to the facts, experimental or theoretical.

Check, for example, a recent preprint by Chiao & Kwiat, where they do acknowledge in their version that no real remote collapse was shown, although they still like the collapse imagery (for its heuristic and mnemonic value, I suppose). Note also that they acknowledge that a classical model can account for any g2 >= eta (the setup efficiency, when accounting for the unpaired DG singles in the classical model), which is much smaller than 1. Additional subtraction of background accidentals can lower the classical g2 still further (if one accounts for it in the classical model of the experiment). The experimental g2 would have to go below both effects to show anything nonclassical -- or, put simply, the experiment would have to show the violation on the raw data, and that has never happened. The PDC source, though, cannot show anything nonclassical, since it can be perfectly modeled by a local semiclassical theory, the conventional Stochastic Electrodynamics (see also the last chapter in Yariv's Quantum Optics book on this). A rough numeric illustration of how the unpaired-singles adjustment alone rescales g2 is sketched below.
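
This is only a back-of-the-envelope illustration with invented counts (the paper does not publish the raw counts behind its Table I): with everything else held fixed, g2 computed via eq (2) is simply proportional to N(G), so keeping all DG singles versus only the paired ones rescales it by the ratio of the two rates.

Code:
def g2(n_g, n_gt, n_gr, n_gtr):
    return n_gtr * n_g / (n_gt * n_gr)   # eq (2)

n_gt, n_gr, n_gtr = 4_000, 4_000, 160    # hypothetical per-second counts

print(g2(100_000, n_gt, n_gr, n_gtr))    # all DG singles in N(G):  1.0
print(g2(8_000,   n_gt, n_gr, n_gtr))    # only paired DG in N(G):  0.08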
 
vanesch said:
Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply ; that doesn't mean you should complain that he forgot to power his devices.
You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

cheers,
Patrick.

If those ARE the missing delays that are the subject here, then yes, I'd agree with you. *I* personally do not consider those in my electronics, since we have calibrated them to make sure we measure only the time parameters of the beam dynamics rather than our electronics delay.

Again, the important point here is the central point of the result. Would that significantly change if we consider such things? From what I have understood, I don't see how that would happen. I still consider this as a good experiment for undergrads to do.

Zz.
 
nightlight said:
Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Well, if I write I would like to see it, and see the replies, etc.

OK, so now YOU are the one offering a very weak excuse. I'll make a deal with you. If you write it, and it gets published, *I* will personally make sure you get a copy. Deal?

However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.

It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim nearly perfect "quantum" g2, without any subtractions (of accidental coincidences and of unpaired DG triggers, which lowers substantially g2, via eq (2) where N(G) would drop from 100,000 c/s to 8000 c/s) . If true, it would imply genuine physical collapse at spacelike region. If they had it for real, you would be looking at a Nobel prize work. But, it just happens to be contrary to the facts, experimental or theorietical.

There appears to be something self-contradictory here, caroline. You first do not believe their results, and then you said look, it also works well when explained using semi classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

Zz.
 
What nightlight is complaining about is that the authors didn't take into account an extra delay in a component.

Not quite so. If the delay between the DG and DT pulses was more than 6ns, say it was 15ns, why was 6ns written in the paper at all? I see no good reason to change the reported delay from 15ns to 6ns. There is no economy in stating it was 6ns.

I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ;

There is no effective vs real delay here. It is a plain delay between the DG pulse and the DT pulse. It is not as if stating it would have added any length to the article to speak of.

You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?

And the detector has latency and dead time, etc. That's not relevant. Talking about needless elements here: adding a third unit and going to the trouble of tuning its independent sampling of the EM fields on T and R, and describing it all, is just fine. But saying that the delay was 15ns instead of 6ns is way too much trouble. Yeah, sure, that sounds very plausible.

If I wanted to do the collapse magic trick, I would add that third unit, too. There are so many more ways to reach the "perfection" with that unit in there than to simply reuse the already obtained samples from the GT and GR units (as AND-ed signal).

In any case, are you saying you can show violation (g2<1) without any subtractions, on raw data?
 
  • #10
nightlight said:
There appears to be something self-contradictory here, caroline.

I am not Caroline. For one, my English (which I learned when I was over twenty) is much worse than her English. Our educational backgrounds are quite different, too.

1. It's interesting that you would even know WHO I was referring to.

2. You also ignored the self-contradiction that you made.

3. And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

Zz.
 
  • #11
There appears to be something self-contradictory here... You first do not believe their results, and then you said look, it also works well when explained using semi classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (the accidentals and unpaired singles; the two can be traded by settings on the detectors, window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1, then we define such a phenomenon as "nonclassical"). But a classical model which also does the same subtraction on its predictions will work fine.


If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which, from their g2 of nearly 0, I don't think it is, unless they had a few backup "last tricks" already active in the setup). But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).
 
  • #12
nightlight said:
There appears to be something self-contradictory here... You first do not believe their results, and then you said look, it also works well when explained using semi classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

No, that's not what I said. You can't get what they claim, the violation of g2>=0 without subtractions (the accidentals and unpaired singles; the two can be traded by settings on the detectors, window sizes, etc). But if you do the subtractions and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2' is now much smaller than 1, but that is not a violation of "classicality" in any real sense but merely a convention of speech (when the adjusted g2' is below 1 then we define such phenomenone as "nonclassical"). But a classical model, which also does the same subtraction on its predictions will work fine.

Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made. Have you ever performed a dark-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which from their g2 nearly 0, I don't think it is, unless they had few backup "last tricks" already active in the setup) . But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).

No, you are not being "nice" at all. In fact, all I see is a cop out. Being "nice" means taking the responsibility to correct something that one sees as a possible error or misleading information and informing the community in question about it. You refused to do that. Instead, you whined about it here, where it will possibly make ZERO impact on anything. So why even bother in the first place? The authors may not make any corrections at all based on what you have described as their response, so most likely the paper will stand AS IS. They will continue to get the recognition, you get NOTHING.

Zz.
 
  • #13
nightlight said:
1. It's interesting that you would even know WHO I was referring to.

Why is it interesting? Haven't you argued with her here quite a bit, not long ago? And she has been all over the internet on this topic for several years. Despite her handicaps in this field (being neither a physicist nor a male), she is quite a warrior, and at the bottom of it she is right on the experimental claims of QM/QO.

Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She also has repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics.

So the person you championed is ignorant of the very subject she's criticizing, by her own admission.

And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

Nothing goes to oblivion, not even a butterfly flapping its wings somewhere in Hawaii. Writing to AJP, where they will censor anything they disagree with ideologically, is not the kind of pursuit I care about. I left academia and work in industry to stay away from their kind of self-important politician-scientists. I might post it to usenet, though.

I agree. The kind of info you are advertising fits very nicely in the bastion of the Usenet.

It also appears that this conspiracy theory about physics journals has finally reared its ugly head again. Have you ever TRIED submitting to AJP? Or did you already make up your mind and use that prejudice to decide what they will do? It is ironic that you are criticizing a paper in which you claim they possibly made a biased decision on what to select, and yet you practice the very same thing here. I hate to think you do this often in your profession.

Zz.
 
  • #14
nightlight said:
You refused to do that. Instead, you whinned about it here where it possibly will make ZERO impact on anything.

Well, let's see if this paper (which I first heard of here) gets trotted out again as a definite proof of collapse or photon indivisibility, or some such. It was a magic trick for kids, and now you know it, too.

As to the impact of an AJP letter, it would have zero impact there, if they were to publish it at all, since the authors of [1] would dismiss it as they did in email, and however phony their excuse sounds ("article was already too long" - give me a break), that would be the end of it. No reply allowed. And the orthodoxy-vs-heretics score is once again 1:0. When, years back, Marshall and Santos challenged Aspect's experiments, after Aspect et al replied, their subsequent submissions (letters and papers) were turned down by the editors as of no further interest. That left the general impression, false as it was (since Aspect's side merely had the last word), that they had won against the critics, thus adding an extra feather to their triumph. Marshall and Santos have been battling it this way and it just doesn't work. The way the "priesthood" (Marshall's label) works, you just help them look better, and you end up burned out if you play these games they have set up so that half of the time they win and the other half you lose.

That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason. They were wrong, although many (including Caroline and yourself) can't see it. It is clear the direction things are going in: more efficiency means greater violation of the local realistic position. So naturally you don't want the Thorn paper to be good; it completely undermines your theoretical objections to Bell tests - i.e. that photons are really waves, not particles.

But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory. When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow. Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers. When he added them, the results did not change. A scientist would conclude that a signal is not being sent from one measuring apparatus to the other at a speed of c or less. Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look at it as hard as the LR crew has.

I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work? Hmmm.
 
  • #15
nightlight said:
Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She also has repeatedly claim that QM is dead" despite admitting she knows very little of it and of classical mechanics.

That has no relation to what I said.

Yes it does, because you were the one who made her out to be what she's not - someone who possesses the knowledge to know what she's talking about. Unless you believe that one can learn physics in bits and pieces, and that physics is not interconnected, knowing one aspect of it does not necessarily mean one has understood it. This is what she does, and this is who you are touting.

I agree. The kind of info you are advertizing fits very nicely in the bastion of the Usenet... It also appears that this conspiracy theory about physics journals has finally reared its ugly head again...

Why don't you challenge the substance of the critique, instead of drifting into irrelevant ad hominem tangents. This is, you know, the physics section here. If you want to discuss politics there are plenty of other places you can have fun.

But it IS physics - it is the PRACTICE of physics, which I do every single working day. Unlike you, I cannot challenge something based on what you PERCEIVE as the whole raw data. While you have no qualms about already making a definitive statement that the raw data would agree with the "semi-classical" model, I can't! I haven't seen this particular raw data and simply refuse to make conjectures on what it SHOULD agree with.

Again, I will ask you if you have made ANY dark current or dark count measurements, and have looked at the dark count energy spectrum, AND, based on this, if we CAN make a definitive differentiation between dark counts and actual measurements triggered by the photons we're detecting. And don't tell me this is irrelevant, because it boils down to the justification of making cuts in the detection count not only in this type of experiment, but in high energy physics detectors, in neutrino background measurements, in photoemission spectra, etc., etc.

Zz.
 
  • #16
Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction that has to be made.

All the PDC source (as well as laser & thermal light) can produce is semi-classical phenomena. You need to read Glauber's derivation (see [4]) of his G functions (called "correlation" functions) and his detection model, and see the assumptions being made. In particular, any detection is modeled as a quantized EM field-atomic dipole interaction (which is OK for his purpose) and expanded perturbatively (to 1st order for a single detector, n-th order for n detectors). The essential points for Glauber's n-point G coincidences are:

a) All terms with vacuum produced photons are dropped (that results in the normally ordered products of creation and annihilation EM field operators). This formal procedure corresponds to the operational procedure of subtracting the accidentals and unpaired singles. For a single detector that subtraction is a non-controversial local procedure and it is built into the design, i.e. the detector doesn't trigger if no external light is incident on its cathode. {This of course is only approximate, since the vacuum and the signal fields are superposed when they interact with electrons, so you can't subtract the vacuum part accurately from just knowing the average square amplitude of the vacuum alone (which is 1/2 hv per EM mode on average) and the square amplitude of the superposition (the vector sum), i.e. knowing only V^2 and (V+S)^2 (V and S are vectors), you can't deduce what S^2, the pure signal intensity, is. Hence, the detector by its design effectively subtracts some average over all possible vectorial additions, which it gets slightly wrong on both sides -- the existence of subtractions which are too small results in (vacuum) shot noise, and of subtractions which are too large results in missing some of the signal (absolute efficiency less than 1; note that the conventional QE definition already includes background subtractions, so QE could be 1, but with a very large dark rate, see e.g. the QE=83% detector, with noise comparable to the signal photon counts).}

But for multiple detectors, the vacuum removal procedure built into Glauber's "correlations" is non-local -- you cannot design a "Glauber" detector, not even in principle, which can subtract locally the accidental coincidences or unpaired singles. And these are the "coincidences" predicted by Glauber's Gn() functions -- they predict, by definition, the coincidences modified by the QO subtractions. All their nonlocality comes from a non-local formal operation (dropping of terms with absorptions of spacelike vacuum photons), or operationally, from inherently non-local subtractions (you need to gather the data from all detectors and only then can you subtract accidentals and unpaired singles). Of course, when you add this same nonlocal subtraction procedure to the semi-classical correlation predictions, they have no problem replicating Glauber's Gn() functions. The non-locality of the Gn() "correlations" comes exclusively from non-local subtractions, and it has nothing to do with the particle-like indivisible photon (that assumption never entered Glauber's derivation; it is a metaphysical add-on, with no empirical consequences or a formal counterpart with such properties, grafted on top of the theory for its mnemonic and heuristic value).

b) Glauber's detector (the physical model behind the Gn() or g2=0, eq.8 of the AJP paper) produces 1 count if and only if it absorbs the "whole photon" (which is a shorthand for the quantized EM field mode; e.g. an approximately localized single photon |Psi.1> is a superposition of vectors Ck(t)|1k>, where Ck(t) are complex functions of time and of the 4-vector k=(w,kx,ky,kz), and |1k> are Fock states in some basis, which depend on k as well; note that the hv quantum for photon energy is an idealization applicable to an infinite plane wave). This absorption process in Glauber's perturbative derivation of Gn() is a purely dynamical process, i.e. the |Psi.1> is absorbed as an interaction of the EM field with an atomic dipole, all being purely local EM field-matter field dynamics treated perturbatively (in the 2nd quantized formalism). Thus, to absorb the "full photon" the Glauber detector has to interact with all of the photon's field. There is no magic collapse of anything in this dynamics (the von Neumann classical-quantum boundary is moved one layer above the detector) -- the fields merely follow their local dynamics (as captured in the 2nd quantized perturbative approximation).

Now, what does g2=0 (where g2 is eq AJP.8) mean for the single-photon incident field? It means that T and R are fields of that photon, and the Glauber detector which absorbs and counts this photon, and to which g2() of eq (AJP.8) applies, must interact with the entire mode, which is |T> + |R>, which means the Glauber detector which counts 1 for this single photon is spread out to interact with/capture both the T and R beams. By its definition this Glauber detector leaves the EM vacuum as the result of the absorption, thus any second Glauber detector (which would have to be somewhere else, e.g. behind it in space, so no signal EM field would reach it) would absorb and count 0, just the vacuum. That of course is the trivial kind of "anticorrelation" predicted by the so-called "quantum" g2=0 (the g2 of eq AJP.8). There is no great mystery about it; it is simply a way to label the T and R detectors as one Glauber detector for the single photon |Psi.1>=|T>+|R>, and then declare it has count 1 when one or both of DT or DR trigger, and declare it has count 0 if neither DT nor DR triggers. It is a trivial prediction. You could do the same (as is often done) with photo-electron counts on a single detector: declare 1 when 1 or more photo-electrons are emitted, 0 if none is emitted. The only difference is that here you would have this G detector spread out to capture the distant T and R packets (and you don't differentiate their photoelectrons regarding the declared counts of the G detector).

The actual setup for the AJP experiment has two separate detectors. Neither of them is the Glauber detector for the single mode superposed of the T and R vectors, since they don't interact with the "whole mode", but only with a part of it. The QED model of the detector (which is not Glauber's detector any more) for this situation doesn't predict g2=0 but g2>=1, the same as the semiclassical model does: each detector triggers on average half the time, independently of the other (within the gate G window, when both T and R carry the same intensity PDC pulse). Since each detector here gets just half of the "signal photon" field, the "missing" energy needed for its trigger is just the vacuum field which is superposed onto the signal photon (onto its field) at the beam splitter (cf eq. AJP.10, the a_v operators). If you were to split each of T and R further into halves, and then these 1/4 beams into further halves, and so on for L levels, with N=2^L final beams and N detectors, the number of triggers for each G event would follow a Binomial distribution (provided the detection time is short enough that the signal field is roughly constant; otherwise you get a compound Binomial, which is super-Poissonian), i.e. the probability of exactly k detectors triggering is p(k,N)=C(N,k)*p^k*(1-p)^(N-k), where p=p0/N, and p0 is the probability of trigger of a single detector capturing the whole incident photon (which may be defined as 1 for an ideal detector and an ideal 1-photon state). If N is large enough, you can approximate the Binomial distribution p(k,N) with the Poissonian distribution p(k)=a^k exp(-a)/k!, where a=N*p=p0, i.e. you get the same result as the Poissonian distribution of the photoelectrons (i.e. these N detectors behave as the N electrons of the cathode of a single detector).
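
A quick numeric check of that last approximation, with arbitrary illustrative values of p0 and N (nothing here comes from the experiment):

Code:
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, a):
    return a**k * exp(-a) / factorial(k)

p0 = 0.8    # trigger probability for one detector capturing the whole field
N  = 64     # number of final beams/detectors after L = 6 splittings

for k in range(4):
    # Binomial with p = p0/N is already close to Poissonian with mean p0
    print(k, round(binom_pmf(k, N, p0 / N), 5), round(poisson_pmf(k, p0), 5))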

In conclusion, there is no anticorrelation effect "discovered" by the AJP authors for the setup they had. They imagined it and faked the experiment to prove it (compare the AJP claim to the much more cautious claim of the Chiao & Kwiat preprint cited earlier). The g2 of their eq (8) which does have g2=0 applies to the trivial case of a single Glauber detector absorbing the whole EM field of the "single photon" |T>+|R>. No theory predicts a nonclassical anticorrelation, much less anything non-local, for their setup with the two independent detectors. It is a pure fiction resulting from an operational misinterpretation of QED for optical phenomena in some circles of Quantum Opticians and QM popularizers.

Have you ever performed a dead-count measurement on a photodetector and look at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and actual count rate.

Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of an enumerative kind, like a pigeonhole principle. Namely, if you can't violate the classical inequalities on raw data, you can't possibly claim a violation on subtracted data, since any such subtraction can be added to the classical model; it is a perfectly classical extra operation.

The only way the "violation" is claimed is to adjust the the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).

-- Ref

4. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.
 
  • #17
nightlight said:
Have you ever performed a dead-count measurement on a photodetector and look at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and actual count rate.

Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of enumerative kind, like a pigeonhole principle. Namely, if you can't violate classical inequalities on raw data, you can't possibly claim a violation on subtracted data since any such subtraction can be added to classical model, it is a perfectly classical extra operation.

The only way the "violation" is claimed is to adjust the the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).

Wait... so how did what you wrote answer the question I asked?

This isn't just a S/N ratio issue! It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

I will ask for the third time: Have you performed dark count measurements on a photodetector and measured the dark current spectrum from it?

Zz.
 
  • #18
That is correct, they were rebuffed in their attempts to rebutt Aspect. And for good reason. They were wrong,

How about substantiating your assertion? Can you give me a specific paper and result, and tell me what was "wrong" (maybe you meant "unpopular" or "unfashionable" instead of "wrong") about it?

It is clear the direction things are going in: more efficiency means greater violation of the local realistic position.

Yep, better, like this AJP 377 std deviation claim, after they short-circuit the triple coincidences out of the loop altogether. There are a handful of fraudsters selling quantum computers to investors. Let me know when any of it actually works. After 30+ years of claims, your claim sounds just like the perpetuum mobile excuses from centuries ago. It's just around the corner, as soon as this friction can be taken out and this reservoir temperature dropped a few thousand degrees lower (before thermodynamics).

So naturally you don't want the Thorn paper to be good, it completely undermines your theoretical objections to Bell tests - i.e. photons are really waves, not particles.

This AJP paper's claim is plain wrong. Their timings don't add up; even the author acknowledges that much. You can check the paper and the data sheet and 'splain to me how that works. Their result contradicts the theory as well -- the g2<1 occurs only for subtracted data, not the raw counts. There is nothing nonclassical about that kind of g2<1, since the semi-classical model can be extended to subtract the same way as is done in the experiment, or as is done in the QO formalism via Glauber's normal-order prescription for his correlation functions.

But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory.

That's what worries me. A bunch of deluded kids trying to become physicists, imagining an instant vanishing of EM fields at spacelike distances, just because some far detector somewhere triggered. That effect doesn't exist. The actual collapse of EM fields does occur, as shown in the semi-classical and 2nd quantized theories of the photoeffect -- it occurs through purely local interactions -- the photons do vanish. Check, for example, what a highly respected Quantum Optician from MIT, Hermann Haus, wrote on this matter:
F. X. Kärtner and H. A. Haus http://prola.aps.org/abstract/PRA/v47/i6/p4585_1
Phys. Rev. A 47, 4585–4592 (1993)

This paper intends to clarify some issues in the theory of quantum measurement by taking advantage of the self-consistent quantum formulation of nonlinear optics. A quantum-nondemolition measurement of the photon number of an optical pulse can be performed with a nonlinear Mach-Zehnder interferometer followed by a balanced detector. The full quantum-mechanical treatment shows that the shortcut in the description of the quantum-mechanical measurement, the so-called "collapse of the wave function," is not needed for a self-consistent interpretation of the measurement process. Coherence in the density matrix of the signal to be measured is progressively reduced with increasing accuracy of the photon-number determination. The quantum-nondemolition measurement is incorporated in the double-slit experiment and the contrast ratio of the fringes is found to decrease systematically with increasing information on the photon number in one of the two paths. The "gain" in the measurement can be made arbitrarily large so that postprocessing of the information can proceed classically.

When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

You can tell me when it happens. As it stands in print, it can't work. The g2~0 is due to an experimental error. You can't wave away the 6ns delay figure, given in 10 places in the paper, by blaming the article length restriction. If it was 15ns or 20ns, and not 6ns, the article length difference is negligible. It's the most ridiculous excuse I have heard, at least since the age when dogs ate homework.

I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow.

Anything to back up your faith in the AJP paper's claim?

Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers.

No, that was a red herring, a pretense of fixing something that wasn't a problem. Nobody seriously thought, much less proposed a model or theory, that the far-apart detectors or polarizers are somehow communicating which position they are in so that they could all conspire to replicate the alleged correlations (which were never obtained in the first place, at least not on the measured data; the "nonclassical correlations" exist only on vastly extrapolated "fair sample" data, which is, if you will pardon the term, the imagined data). It was a self-created strawman. Why don't they tackle and test the fair sampling conjecture? There are specific proposals for how to test it, you know (even though the "quantum magic" apologists often claim that "fair sampling" is untestable); e.g. check Khrennikov's papers.

Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

Yes, the change is that 30+ years of promises have passed. And oddly, no one wants to touch the "fair sampling" tests. All the fixes fixed what wasn't really challenged. Adding a few std deviations, or the aperture depolarization (which PDC fixed relative to the cascades), wasn't a genuine objection. The semiclassical models weren't even speculating in those areas (one would need some strange conspiratorial models to make those work). I suppose it is easier to answer the questions you or your friends pose to yourselves than to answer with some substance the challenges from the opponents.


Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look it as hard as the LR crew has.

Term "loophole" is a verbal smokescreen. It either works or doesn't. The data either violates inequalities or it doesn't (what some imagined data does, doesn't matter, no matter how you call it).

I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work?

This experiment is contrary to the QED prediction (see the reply to Zapper, items a and b on Glauber's detectors and the g2=0 setup). You can't obtain g2<1 on raw data. Find someone who does it and you will be looking at a Nobel prize experiment, since it would overturn present QED -- it would imply the discovery of a mechanism to perform spacelike, non-interacting absorption of quantized EM fields, which according to existing QED evolve only locally and can be absorbed only through local dynamics.
 
  • #19
nightlight said:
It is clear the direction things are going in: more efficiency means greater violation of the local realistic position.

Yep, better, like these AJP 377 std deviation, after they short-circuit triple concidences out of the loop all together. There are a handful fraudsters selling quantum computers to investors. Let me know when any of it actually works. After 30+ years of claims, your claim sounds just like perpetuum mobile excuses from centuries ago. It's just around the corner, as soon as this friction can be taken out and this reservoir temperature dropped few thousand degrees lower (before thermodynamics).

etc.

You have already indicated that no evidence will satisfy you; you deny all relevant published experimental results! I don't. Yet, time and time again, the LR position must be advocated by way of apology because you have no actual experimental evidence in your favor. Instead, you explain why all counter-evidence is wrong.

Please tell me, what part of the Local Realist position explains why one would expect all experiments to show false correlations? (On the other hand, please note that all experiments just happen to support the QM predictions out of all of the "wrong" answers possible.)

The fact is, the Thorn et al experiment is great. I am pleased you have highlighted it with this thread, because it is the simplest way to see that photons are quantum particles. Combine this with the Marcella paper (ZapperZ's ref on the double slit interference) and the picture is quite clear: Wave behavior is a result of quantum particles and the HUP.
 
  • #20
Wait... so how did what you wrote answered the question I asked? This isn't just a S/N ratio issue!

Subtracting background accidentals and unpaired singles is irrelevant for the question of the classicality of the experiment, as explained. If a semiclassical model M of the phenomenon explains the raw data RD(M), and you then perform some adjustment operation A to create adjusted data A(RD(M)), then for a classical model to follow it simply needs to model the adjustment procedure itself on its original prediction, which is always possible (subtracting experimentally obtained numbers from the predicted correlations and removing others which the experiment showed to be unpaired singles -- it will still match, because the unadjusted data matched and the adjustment counts come from the experiment, just as you do to compare to the QO prediction via Glauber's Gn(). The only difference is that Glauber's QO predictions refer directly to subtracted data while the semiclassical ones refer to either, depending on how you treat the noise vs signal separation, but both have to use experimentally obtained corrections and the raw experimental correlations, and both can match filtered or unfiltered data).

It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

These are interesting and deep topics, but they are just not related to the question of "nonclassicality".

I will ask for the third time: Have you performed dark counts measurement on a photodetector and measure the dark current spectrum from it?

Yes, I had a whole lot of lab work, especially in graduate school at Brown (the undergraduate work in Belgrade had less advanced technology, a more classical kind of experiments, although it did have a better nuclear physics lab than Brown, which I avoided as much as I could), including the 'photon counting' experiments with the full data processing, and I hated it at the time. It was only years later that I realized that it wasn't all that bad and that I should have gotten more out of it. I guess that wisdom will have to wait for the next reincarnation.
 
Last edited:
  • #21
You have already indicated that no evidence will satisfy you; you deny all relevant published experimental results!

I do? Tell me what measured data violates Bell inequalities or shows what this AJP "experiment" claims (violation of g2>=1 on nonsubtracted data; that's not even Glauber's QED prediction, since his G2(x1,x2) does assume subtraction of all vacuum photon effects, both dark rates and unpaired singles)? I ignore the overly optimistic claims based on imagined data under a conjecture ("fair sampling") which they refuse to test.

I don't.

Well, you prefer imagined data, even the non-data that results from cutting the triple coincidences out via the wrong delay setting. I don't care about any of those kinds, that's true. Sorry. When I am in the mood to read fiction, there is better stuff out there than that.

Yet, time and time again, the LR position must be advocated by way of apology because you have no actual experimental evidence in your favor.

You tell me which Bell test or anticorrelation test violates classicality constraints. None so far. Adjusting data is a perfectly classical operation, even Bohr would go for that, so that can't make classically compatible raw data non-classical.

Instead, you explain why all counter-evidence is wrong.

What evidence? Which experiment claims to be loophole free and thus excludes any local model?

Please tell me, what part of the Local Realist position explains why one would expect all experiments to show false correlations?

You are confusing imagined correlations (the adjusted data) with measured data (the raw counts). There is nothing non-classical in the procedure of subtracting experimentally obtained background rates and unpaired singles. You simply subtract the same thing from the classical prediction, and if it was compatible with raw data, it will stay compatible with the subtracted data.

(On the other hand, please note that all experiments just happen to support the QM predictions out of all of the "wrong" answers possible.)

Irrelevant for the question of experimental proof of non-classicality. The absence of a semiclassical computation modeling some experimental data doesn't imply that the experiment proves 'nonclassicality' (the exclusion of any semiclassical model even in principle), just as not using QED to compute telescope parameters doesn't mean QED is excluded for the kind of light going through a telescope.

Regarding the Bell inequality tests, the only difference is that classical models don't necessarily separate 'noise' from 'signal' along the same line as the QM prediction does. What QM calls accidental, the semiclassical model sees as data. There can be no fundamental disagreement between two theories which predict the same raw data. What they do with the data later, what each piece is called, or where the lines are drawn between the labels ('noise', 'accidental', 'unpaired singles', etc.), is a matter of verbal convention. It's not the physics of the phenomenon. You keep persistently confusing the two, despite your tag line.

The fact is, the Thorn et al experiment is great. I am pleased you have highlighted it with this thread, because it is the simplest way to see that photons are quantum particles.

Yes, especially if you use the 6ns delay on the TAC START, as claimed, in combination with the 10ns TAC GATE-to-START latency. You get just the background rate of triple coincidences. Imagine what you can get if you just cut the wire from the GTR unit to the counter altogether: a perfect zero GTR rate, no need to estimate backgrounds to subtract or unpaired singles. Absolute "quantum magic" perfection.
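
A rough Monte Carlo sketch of that timing issue (the 6 ns delay and the 10 ns GATE-to-START requirement are the figures under discussion here; every rate below is invented):
Code:
# Toy Monte Carlo: if the TAC ignores any START arriving less than 10 ns after
# the GATE opens, a correlated T pulse delayed by only 6 ns is never accepted,
# and the accepted STARTs fall to the accidental level.  All rates are invented.
import random

N_GATES  = 200_000    # gate (DG) triggers simulated
P_PAIR   = 0.04       # prob. that the correlated T photon fires DT in the window (toy value)
P_ACC    = 1e-4       # prob. of an uncorrelated DT pulse landing somewhere in the gate window
GATE_LEN = 20.0       # ns, toy gate width

def accepted_starts(delay_ns, start_holdoff_ns=10.0):
    """Count STARTs accepted by the TAC for a given DG->DT cable delay."""
    accepted = 0
    for _ in range(N_GATES):
        starts = []
        if random.random() < P_PAIR:
            starts.append(delay_ns)                        # correlated T pulse
        if random.random() < P_ACC:
            starts.append(random.uniform(0.0, GATE_LEN))   # uncorrelated background pulse
        if any(t >= start_holdoff_ns for t in starts):
            accepted += 1
    return accepted

print("6 ns delay :", accepted_starts(6.0))    # ~ accidental level only
print("12 ns delay:", accepted_starts(12.0))   # correlated pulses accepted as well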
 
Last edited:
  • #22
nightlight said:
I will ask for the third time: Have you performed a dark counts measurement on a photodetector and measured the dark current spectrum from it?

Yes, I had a whole lot of lab work, especially in graduate school at Brown (the undergraduate work in Belgrade had less advanced technology, more classical kinds of experiments, although it did have a better nuclear physics lab than Brown, and I avoided that one as much as I could), including the 'photon counting' experiments with the full data processing, and I hated it at the time. It was only years later that I realized that it wasn't all that bad and that I should have gotten more out of it. I guess that wisdom will have to wait for the next reincarnation.

"photon counting experiments"? Did you, or did you not perform dark counts measurement AND dark current energy spectrum? [How many times have I asked that already? Four?]

Zz.
 
  • #23
nightlight said:
Yes, I measured and subtracted dark rates. Not the spectrum, though. What's that got to do with the subject matter anyway? If you think I misstated the facts being discussed, ask about that. You could as well ask whether I played football or how I did with girls, too. When I get old enough to want to write my biography, I will do it myself, thanks.

But this is crucial because the dark count spectrum is VERY DIFFERENT from the real count spectrum. The difference is like night and day (no pun intended). So if one is unsure whether simply doing dark counts and then subtracting them out of the actual count is kosher, one can confirm this by looking at the energy spectrum of the raw signal. One can even see signals resembling "false coincidence counts" due to random environmental events! These often occur at a different energy range than the calibrated signal for a particular photodetector!

So while they don't teach you, or maybe they don't tell you, why we can simply cut off the raw data, or maybe you thought this is such a trivial exercise, there is ample physics behind this methodology. These cuts are NOT swept under the rug and simply dismissed. Spend some time doing this and you will know that this is not voodoo science based on some unproven idea.

Zz.
 
  • #24
ZapperZ said:
But this is crucial because the dark count spectrum is VERY DIFFERENT from the real count spectrum. The difference is like night and day (no pun intended). So if one is unsure whether simply doing dark counts and then subtracting them out of the actual count is kosher, one can confirm this by looking at the energy spectrum of the raw signal. One can even see signals resembling "false coincidence counts" due to random environmental events! These often occur at a different energy range than the calibrated signal for a particular photodetector!

So while they don't teach you, or maybe they don't tell you, why we can simply cut off the raw data, or maybe you thought this is such a trivial exercise, there is ample physics behind this methodology. These cuts are NOT swept under the rug and simply dismissed. Spend some time doing this and you will know that this is not voodoo science based on some unproven idea. Zz.

I know they get subtracted by the detector. That doesn't change the fact that this subtraction is still approximate, since you simply don't have enough information to extract the pure signal intensity out of the intensity of the vectorial sum (S+V) of the total field, knowing just V^2. Thus you still get dark rates and some failures to detect low intensity signals, especially for short time windows. The sum you have is V^2 + 2VS + S^2, and you can't know the mixed term well enough to subtract that effect of V (you can remove only the V^2 term). For slow detections the mixed term will average out, but not for fast detections, which can be viewed as a manifestation of the energy-time uncertainty relations: if you want to know the signal energy S^2 accurately enough, you have to wait longer. So you won't ever have an accurate S^2 to send out (or, in the discrete counting mode, you will have wrong decisions, dark rates and missed counts).
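
A small numerical sketch of that last point (toy amplitudes, nothing from the paper): the detector integrates (S+V)^2 and subtracts the known <V^2>; the cross term 2VS averages away only for long windows, so short-window estimates of S^2 scatter widely.
Code:
# Toy numbers only: signal amplitude S plus a zero-mean "vacuum" field V.
# The detector integrates (S+V)^2 over a window and subtracts the known <V^2>;
# the cross term 2*S*V averages out for long windows but not for short ones.
import random, statistics

S       = 1.0          # signal amplitude, so S^2 = 1 is the quantity we want
SIGMA_V = 1.0          # rms of the fluctuating vacuum-like field
MEAN_V2 = SIGMA_V**2   # the only vacuum statistic assumed known

def estimate_S2(window_samples, trials=500):
    """Estimate S^2 from windows of a given length, subtracting only <V^2>."""
    estimates = []
    for _ in range(trials):
        total = 0.0
        for _ in range(window_samples):
            v = random.gauss(0.0, SIGMA_V)
            total += (S + v) ** 2            # what the square-law detector sees
        estimates.append(total / window_samples - MEAN_V2)
    return statistics.mean(estimates), statistics.stdev(estimates)

for n in (5, 50, 1000):                      # short vs long integration windows
    mean, spread = estimate_S2(n)
    print(f"window={n:4d} samples: S^2 estimate = {mean:5.2f} +/- {spread:4.2f}")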
 
  • #25
nightlight said:
I know they get subtracted by the detector. That doesn't change the fact that this subtraction is still approximate, since you simply don't have enough information to extract the pure signal intensity out of the intensity of the vectorial sum (S+V) of the total field, knowing just V^2. Thus you still get dark rates and some failures to detect low intensity signals, especially for short time windows. The sum you have is V^2 + 2VS + S^2, and you can't know the mixed term well enough to subtract that effect of V (you can remove only the V^2 term). For slow detections the mixed term will average out, but not for fast detections, which can be viewed as a manifestation of the energy-time uncertainty relations: if you want to know the signal energy S^2 accurately enough, you have to wait longer. So you won't ever have an accurate S^2 to send out (or, in the discrete counting mode, you will have wrong decisions, dark rates and missed counts).

No, I'm not referring to ANY experiment, and certainly not THIS experiment. I was focusing on the physics of photodetection, in which I've invested considerable effort. What you have described would be a problem even when trying to prove a classical model of ANY kind. If the signal is weak, it is weak, no matter what you are trying to detect. However, that is why we do multiple measurements, and when you look at the energy spectrum over such a period, you CAN distinguish between what you are looking for and what came in at random. Go home, come back, do the spectrum again, and you will see that what was supposed to be there is still there, and what wasn't supposed to be there is now at different locations! Then do this again next week at midnight to make sure you weren't facing some astronomical event causing anomalous systematic counts in your detector - and yes, I did work with THAT sensitive an instrument.

I've yet to see anyone who has done THIS systematic determination of the dark count spectrum, compared against the real signal, dispute the methodology used in these experiments. And there ARE people whose sole profession is instrumentation physics. If anyone would know what part of the detected signal to trust and what part not to, they certainly would.

Zz.
 
  • #26
nightlight said:
What evidence? Which experiment claims to be loophole free and thus excludes any local model?

Q.E.D. :)

OK, how about Bell Test Experiments which even mentions some of the loopholes commonly asserted.

And what experiment is loophole free anyway? Focus enough energy on anything, and you may plant a seed of doubt. But to what purpose? QM says the entangled photon correlation will be cos^2 \theta. If that formula is not sufficiently accurate, what better formula do you have? Let's test that!
 
  • #27
nightlight said:
And there ARE people whose sole profession is instrumentation physics.

Yes, I know, I am married to one of these "people". My wife, who is an experimental physicist, worked (until our third kid was born; from there to our fifth she advanced to full time mother and kids' sports taxi driver) in a couple of instrumentation companies as a physicist and later as a VP of engineering (one for air pollution measurements, another for silicon clean rooms and disk defect detection). They were all heavy users of QO correlation techniques, and it was while visiting there and chatting with their physicists and engineers that it dawned on me that I had no clue what I was talking about regarding quantum Bell tests and the measurement problem (which were the topic of my master's thesis while in Belgrade; I switched later to nonperturbative field theory methods, with A. Jevicki and G. Guralnick, when I came to the USA, to Brown grad school). What I knew before was all a fiction, math toy models detached completely from reality.

When I realized how real world optical measurements are done (unlike the fake stuff done at university labs, such as this AJP "demonstration" of photon magic), it occurred to me to re-examine the Bell inequality, and within days I "discovered" that it would be easy to violate the inequalities with the right amount of data subtraction, and even to completely replicate the QM prediction with the right kind of losses. I wrote a little computer simulation which indeed could reproduce the QM cos^2 correlations on subtracted data. Not long after that I found out it was all old hat: P. Pearle had come up with roughly the same idea a couple of decades earlier, and a bunch of people have rediscovered it since. Then I ran into Marshall and Santos (and the rest of the "heretics"), none of whom I had ever heard of before, even though this problem was my MS thesis material. That's how shielded students are from the "politically incorrect" science. I was really POed (and still am, I guess) at the whole sterile, conformist and politicized mentality which (mis)guided me through my academic life. It was exactly the same kind of stalinist mindset (or, as Marshall calls them, the "priesthood") I found in charge on both sides of the "iron curtain." And it's still there, scheming and catfighting as noisily and as vigorously as ever in their little castle in the air.
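
That little simulation is not reproduced here, but the general Pearle-style idea is easy to sketch (the particular lambda-dependent rejection rule below is a standard one from the detection-loophole literature, not necessarily the one used): on the post-selected coincident subsample the correlation comes out as cos 2(a-b), i.e. the cos^2 coincidence law, from a purely local model.
Code:
# Minimal sketch of a Pearle-style detection/post-selection local model.
# Each pair carries a hidden angle lam; outcomes are deterministic signs, but
# one wing "fails to fire" with a lam-dependent probability.  On the detected
# (coincident) subsample the correlation comes out ~ cos 2(a-b).
import math, random

def run(a, b, n=200_000):
    num = den = 0
    for _ in range(n):
        lam = random.uniform(0.0, math.pi)        # shared hidden polarization
        A = 1 if math.cos(2 * (a - lam)) >= 0 else -1
        cb = math.cos(2 * (b - lam))
        if random.random() > abs(cb):             # missed detection on B's wing
            continue                              # -> discarded as an "unpaired single"
        B = 1 if cb >= 0 else -1
        num += A * B
        den += 1
    return num / den                              # correlation on detected pairs only

for deg in (0, 22.5, 45, 67.5, 90):
    diff = math.radians(deg)
    print(f"a-b={deg:5.1f} deg: E={run(0.0, diff):+.3f}  cos2(a-b)={math.cos(2*diff):+.3f}")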


Then explain to me how these actually occurred:

1. 1986. People were CONVINCED that superconductivity would NEVER occur above 30K (33K to be exact, which was the phonon coupling limit at that time). In fact, the field of superconductivity was thought to be dead, that we knew all there was to know, and that, for the first time in physics, a field had actually reached maturity. Yet, within just ONE year, the whole community embraced the high-Tc superconductors.

2. The standard model had ZERO ability to include the neutrino oscillation/mass. Yet, after the Super-K discovery, it is now the hottest area of neutrino physics.

3. Photoelectric effect. The Einstein model clearly notes that for photons below the work function of the metal, NO electrons can be emitted. This is, after all, what you are taught in school, even in college! Yet I violate this almost every time I run the photoinjector! Not only that, various multiphoton photoemission work violates this ALL the time AND, get this, it even gets to appear in physics journals! Horrors!

I can go on and on and on. All of the above completely destroy what we teach kids in college. If we were to buy your story, NONE of these would have happened, and we would do nothing but regurgitate the identical stuff we were taught in school. Such revolutionary discoveries as the ones above would have been suppressed the same way as these so-called "heretics". They haven't been! These alone are enough to destroy your paranoia about any "priesthoods" wanting to maintain any kind of status quo on any part of physics.

Considering that the whole history of physics is filled with the continuing search for where existing theories would FAIL, it is absurd to suggest that all we care about is preserving them! Why would any physicist want to stop the natural evolutionary process of physics where the essential reason why it continues to exist is to reinvent itself?

Again, you haven't addressed my claim that you CAN make a differentiation between random and dark counts, and actual counts that are of interest. Look at the energy spectrum of dark currents in a photodetector, and then look at it again when you have a real count. Till then, you are missing a HUGE part of the credibility of your argument.

Zz.
 
  • #28
OK, how about Bell Test Experiments which even mentions some of the loopholes commonly asserted.

Whoa, that place is getting worse every time I visit. Now the opposing view is already a tiny, euphemism- and cheap-ridicule-laced, three-paragraph sectionlette under the funny title "Hypothetical Loopholes." Which idiot wrote that "new and improved" stuff? By my next visit I expect it will have evolved into "Hypothetical Conspiracy Wingnut Alleged Loopholes" or something like that, with one paragraph fully exposing the JFK-Waco-911 linkage of the opposition. Shame on whoever did that crap. It's not worth the mouse click to get there.

And what experiment is loophole free anyway?

As explained before, the engineering S/N filtering and data-adjustment procedures are perfectly classically reproducible add-ons. They have no useful place in 'nonclassicality' tests. All of the nonclassicality experimental claims so far are based on a gimmick: make the semiclassical prediction for non-adjusted outcomes, then adjust the outcomes (subtraction of accidentals and unpaired singles), since that is the current form of the QO prediction, Glauber's "correlations", which have built in, simply by definition, the non-local subtractions of all vacuum originated photons and their effects; and then, upon "discovering" that the two don't match (as they ought not to), crow that 'nonclassicality' has been reproduced yet again, by 377 std deviations.

Again, to remind you (since your tag line seems to have lost its power on you) -- that is a convention, a map, for data post-processing and labeling, which has nothing to do with the physics of the phenomenon manifesting in the raw data. QO defines the G "correlations" in such a way that they have these subtractions built in, while the semi-classical models don't normally (unless contrived for the purpose of imitating Gn()) distinguish and label data as "accidental" or "signal vs noise", or apply any other form of engineering filtering of 'rejectables' up front, mixed up with the physical model. What is, for the Gn() post-processing convention, an "accidental coincidence" and discardable, is for a semiclassical theory just as good a coincidence as "the Good" coincidence of the Gn() scheme (the purified, enhanced, signal-bearing coincidence residue).


QM says the entangled photon correlation will be cos^2 \theta.

And so does the semiclassical theory, when you include the same subtraction conventions as those already built into the Quantum Optics engineering-style signal filtering, which predicts the cos^2 pseudo-correlations (they are pseudo since they are not bare correlations of anything in the proper sense of the word; by a man-made verbal definition, a terminological convention, the Gn() "correlations" mean data correlation followed by the standard QO signal filtering). I have nothing against data filtering. It is perfectly fine if you're designing TVs, cell phones, and such, but it has no usefulness for 'non-classicality' experiments, unless one is trying to make a living publishing "quantum mysteries" books for the dupes or swindling venture and military bucks for the "quantum computing, quantum cryptography and teleportation" magic trio (and there are a handful of these already, the dregs from the dotcom swindle crash).

These kinds of arguments for 'nonclassicality' are based on mixing up and misapplying different data post-processing conventions, and then "discovering" that if different conventions are used in two different theories, they don't match each other. Yep, so what. Three and a half inches are not three and a half centimeters either. Nothing to crow from the rooftops about that kind of "discovery".
 
  • #29
nightlight said:
(BTW, I am not interested in discussing these politics-of-science topics, so I'll ignore any further diverging tangents thrown in. I'll respond only to the subject matter of the thread proper.)

Then don't START your rant on it!

It was YOU who were whining about the fact that things which counter the prevailing ideology are not accepted, or did you have a memory lapse about the fact that at every opportunity you got, you never failed to refer to the "priesthood"? And you complain about going off on a tangent? PUHLEEZE!

I just showed you SEVERAL examples (there are more, even from WITHIN academia). In fact, the fact that these came from outside the academic area SHOULD make them even MORE resistant to acceptance. Yet, in one clear example, within just ONE year it was universally accepted. So there!

You want to go back to physics? Fine! Go do the energy spectrum of dark counts in a photodetector. Better yet, get the ones used to detect the Cerenkov light from passing neutrinos since these are MORE susceptible to dark counts all the time!

Zz.
 
  • #30
ZapperZ said:
Again, you haven't addressed my claim that you CAN make a differentiation between random and dark counts, and actual counts that are of interest. Look at the energy spectrum of dark currents in a photodetector, and then look at it again when you have a real count.

Of course I did address it, already twice. I explained why there are limits to such differentiation and consequently why the dark counts and missed detections (combined in a tradeoff of the experimenter's choice) are unavoidable. That's as far as the mildly on-topic aspect of that line of questions goes. { And no, I am not going to rush out to find a lab to look at the "power spectra." Can't you get off it? At least in this thread (just start another one on "power spectra" if you have to). }
 
  • #31
nightlight said:
Of course I did address it, already twice. I explained why there are limits to such differentiation and consequently why the dark counts and missed detections (combined in a tradeoff of the experimenter's choice) are unavoidable. That's as far as the mildly on-topic aspect of that line of questions goes. { And no, I am not going to rush out to find a lab to look at the "power spectra." Can't you get off it? At least in this thread (just start another one on "power spectra" if you have to). }

And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction the data looks very agreeable to the QM predictions, whereas before, you claim, all the garbage looks like the "classical" description. Did you or did you not make such a claim?

If you can question the validity of making such cuts, then why isn't it valid for me to question whether you actually know the physics of a photodetector and WHY such cuts are valid in the first place? So why would this be off-topic in this thread?

Zz.
 
  • #32
nightlight said:
The photoeffect was modeled by a patent office clerk, who was a reject from academia.

I guess the conspirators silenced him too. Ditto Bell, whose seminal paper received scant attention for years after release.

Anyone who can come up with a better idea will be met with open arms. But one should expect to do their homework on the idea first. Half a good idea is still half an idea. I don't see the good idea here.

The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact. How can you really expect to convince folks that the experiment is generating a false positive? The results should not even be close if the idea is wrong. And yet a peer-reviewed paper has appeared in a reputable journal.

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criteria are applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong). That is why you have half a good idea, and not a good idea.
 
  • #33
ZapperZ said:
And why would this be "off topic" in this thread? Wasn't it you who made the claim that such selection of the raw data CAN CHANGE the result? This was WAAAAAY back towards the beginning of this thread. Thus, this is the whole crux of your argument: that AFTER such subtraction the data looks very agreeable to the QM predictions, whereas before, you claim, all the garbage looks like the "classical" description. Did you or did you not make such a claim? Zz.

You have tangled yourself up in the talmudic logic trap of the Quantum Optics and QM magic non-classicality experimental claims. It is a pure word game, comparing three inches to three centimeters and discovering the two are different, as explained in several recent messages to DrChinese.

You can see the same gimmick in the verbal setup of nearly every such claim -- they define "classical correlations" conventionally, which means the prediction is made assuming no subtractions. Then they do the experiment, do the standard subtractions and show that the result matches the prediction of Glauber's "correlations" Gn(..). But the Gn()'s are defined differently than the quantities used for the "classical" prediction -- the Gn()'s include the subtractions in their definition, so of course they'll match the subtracted data, while the "classical prediction", which, again by the common convention, doesn't include the subtractions, won't match it. Big surprise.

This AJP paper and experiment followed precisely this same common recipe for the good old Quantum Optics magic show. Their eqs (1)-(3) are "classical", and by definition no subtractions are included in the model there. Then they go to the "quantum" expression for g2, eq (8), label it the same as the classical one, but it is an entirely different thing -- it is Glauber's normal-ordered expectation value, and that is the prediction for the correlation plus the vacuum-effects subtractions (which operationally are the usual QO subtractions). The authors, following the established QO magic show playbook, use the same label g2(0) for two different things, one which models the data post-processing recipe and one that doesn't. The two g2's of (1) vs (8) are entirely different quantities.
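
The classical side of that distinction is easy to check numerically. Here is a toy semiclassical counting run (every parameter invented) in which both detectors merely sample a shared, possibly fluctuating intensity; the raw g2 of eqs (1)-(2) comes out at 1 or above, up to statistical noise.
Code:
# Toy semiclassical counting model (parameters invented): in each gate window
# both detectors see the same split intensity and fire independently given it.
# Plugging the raw counts into g2 = N(GTR)*N(G) / (N(GT)*N(GR)) gives ~1 for a
# steady intensity and >1 for a fluctuating one, never significantly below 1.
import random

def g2_raw(n_gates=500_000, fluctuating=True):
    n_g = n_gt = n_gr = n_gtr = 0
    for _ in range(n_gates):
        I = random.expovariate(1.0) if fluctuating else 1.0   # shared intensity in this window
        p = min(0.05 * I, 1.0)                                # trigger prob. per detector (toy)
        t = random.random() < p
        r = random.random() < p
        n_g += 1
        n_gt += t
        n_gr += r
        n_gtr += t and r
    return n_gtr * n_g / (n_gt * n_gr)

print("steady intensity          : g2 =", round(g2_raw(fluctuating=False), 3))  # ~1
print("fluctuating (thermal-like): g2 =", round(g2_raw(fluctuating=True), 3))   # ~2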

But these authors then went one step beyond the tried and true QO show playbook by adding in the redundant third unit and then, by misconfiguring the delays, achieving the Nobel prize result (if it were real) -- they get a nearly maximum violation before they even subtract the accidentals or the unpaired singles (on the DG detector, when neither DT nor DR triggers). It's a completely made up effect, which doesn't exist in QED theory ([post=529314]as already explained[/post]) or in any experiment (other than the [post=529069]rigged "data"[/post] they had put out).

I also already explained what g2(0)=0 on a single photon state means operationally. That prediction of full anticorrelation has nothing to do with the AJP paper setup and the computations via eq (14). It is an operationally entirely different procedure to which g2=0 applies (see my [post=529314]earlier reply[/post] for details).

If you can question the validity of making such cuts,

I am not questioning some generic "validity" (say for some other purpose) of the conventional QO subtractions.

I am questioning their comparison of apples and oranges -- defining the "classical" g2() in eq (AJP.1) so that it doesn't include subtractions, and then labeling the g2() in eq (AJP.8) the same way, implicitly suggesting they are the same definition regarding the convention for subtractions. (8) includes subtractions by definition, (1) doesn't. The inequality (3), modeling by convention the non-subtracted data, is not violated by any non-subtracted data. If you want to model the subtracted data via a modified "classical" g2, you can easily do that, but then the inequality (3) isn't g2>=1 any more but g2>=eta (the overall setup efficiency), which is much smaller than one (see the Chiao & Kwiat preprint, page 10, eq (11), corresponding to AJP.14, and page 11 giving g2>=eta, corresponding to AJP.3). Chiao and Kwiat stayed within the limits of the regular QO magic show rules, thus their "violations" are of the term-of-art kind, a matter of defining the custom term "violation" in this context, without really claiming any genuine nonlocal collapse, as they acknowledge (page 11):
And in fact, such a local realistic model can account for the
results of this experiment with no need to invoke a nonlocal collapse.
Then, after that recognition of the inadequacy of PDC + beam splitter as a proof of collapse, following the tried and true QO magic show recipe, they do invoke Bell inequality violations, taking the violations for granted, as having been shown. But that's another story, not their experiment, and it is also well known that no Bell test data has actually violated the inequality; only the imagined "data" (reconstructed under the "fair sampling" assumption, which no one wishes to put to test) did violate the inequalities. As with the collapse experiments, the semiclassical model violates Bell inequalities, too, once you allow it the same subtractions that are assumed in the QO prediction (which includes the non-local vacuum-effects subtractions in its definition).
 
Last edited:
  • #34
The Thorn experiment is simple, copied of course from earlier experiments (I think P. Grangier). If the photons are not quantum particles, I would expect this experiment to clearly highlight that fact.

The AJP experiment shows that if you misconfigure the triple coincidence unit timings so that it doesn't get anything but the accidentals, the g2 via eq (14) will come out nearly the same as what the accidentals alone would produce (their "best" g2 is within half a std deviation of the g2 for accidentals alone).

How can you really expect to convince folks that the experiment is generating a false positive?

Just read the data sheet and note their 6ns delay, brought up in 10 places in the text plus once in the figure. Show me how it works with the delays given. Or explain why they would put the wrong delay in the paper in so many places while somehow having used the correct delay in the experiment. See the Chiao-Kwiat paper mentioned in my previous message for what this experiment can prove.

To make sense of your argument essentially requires throwing out any published experimental result anywhere, if the same criteria is applied consistently. Or does this level of criticism apply only to QM? Because the result which you claim to be wrong is part of a very concise theory which displays incredible utility.

You're arguing as if we're discussing literary critique. No substance, no pertinent point, just meta-discussion and psychologizing, euphemizing, conspiracy ad hominem labeling, etc.

I noticed that you don't bother offering an alternative hypothesis to either the photon correlation formula or something which explains the actual Thorn results (other than to say they are wrong).

Did you read this thread? I have [post=529314]explained what g2=0 [/post]means operationally and why it is irrelevant for their setup and their use of it via eq (AJP.14). The classical prediction is perfectly fine; the g2 of unsubtracted data used via (AJP.14) will be >=1. See also the [post=530058]Chiao-Kwiat acknowledgment of that plain fact I already quoted[/post].
 
Last edited:
  • #35
vanesch said:
It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

It wouldn't be unusual for pedagogical materials to cheat, for what is in their mind a justifiable cause. After all, think how many phony derivations are done in regular textbooks. It doesn't mean the authors don't know better, but they have to make it simple enough, or at least mnemonic enough, for students to absorb. If they believe the final result is right, they don't hesitate to take cheap shortcuts to get there.

The same probably holds for the experimenters. Except that here, there is no such effect to be observed, [post=529314]in theory[/post] or in [post=530058]practice[/post] (see the section on the Chiao-Kwiat paper).

Another bit that raises eyebrows -- the chief author would not provide even a single actual count figure for anything in the experiment. He just stuck with the final results for g2, Table I, and neither the paper nor the email requests provided a single actual count figure used to compute them. All the answers I got were in the style I quoted in the initial post -- 'we did it right, don't ask, period.' That doesn't sound like a scientific experiment at all.

There is no way they can claim that their formula (14), in which there are neither accidental subtractions nor subtractions of unpaired singles (since they use N_G for the singles rates; the Chiao-Kwiat experiment subtracts unpaired singles, which would correspond to (N_T+N_R) in the numerator of (AJP.14)), could yield g2<1.

The only hypothesis that fits this kind of 'too good to be true' perfection (contrary to other similar experiments and to the theory), and their absolutely unshakable determination not to expose any count figures used to compute g2, is what I stated at the top of this thread -- they did use the wrong delay of 6ns in the experiment, not just in its AJP description.

Their article-length excuse for the timing inconsistency is completely lame -- that explanation would work only if they hadn't given the 6ns figure at all. Then one could indeed say -- well, it's one of those experimental details we didn't provide, along with many others. But they did put the 6ns figure in the article, not once but in 10 places plus in figure 5. Why over-emphasize a completely bogus figure so much, especially if they knew it was bogus? So, clearly, the repetition of 6ns was meant as an instruction to other student labs, a sure-fire magic ingredient which makes it always "work" to absolute "perfection", no matter what the accidentals and the efficiencies are.

-- comment added:

Note that in our earlier discussion, you were claiming, too, that this kind of remote nondynamical collapse does occur. I gather from some of your messages in other threads that you have now accepted my locality argument that the local QED field dynamics cannot yield such a prediction without cornering itself into self-contradiction (since, among other things, that would imply that the actual observable phenomena depend on the position of von Neumann's boundary). If you follow that new understanding one more step, you will realize that this implies that von Neumann's general projection postulate for composite systems, such as that used for deducing Bell's QM prediction (the noninteracting projection of a spacelike remote subsystem), fails for the same reason -- it contradicts von Neumann's very own deduction of the independence of phenomena from the position of his boundary. In other words, von Neumann's general projection postulate is invalid since it contradicts the QED dynamics. The only projection which remains valid is a projection which has a consistent QED dynamical counterpart, such as photon absorption or fermion localization (e.g. a free electron gets captured into a bound state by an ion).
 
Last edited:
  • #36
nightlight said:
Note that in our earlier discussion, you were claiming, too, that this kind of remote nondynamical collapse does occur.

I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool. All my arguments here, about delayed choice quantum erasers, EPR etc... were oriented to show that there is no remote collapse necessary, although you can use it.

In fact, the "remote collapse" poses problems from the moment you assign any ontology to the wave function. These problems are not insurmountable (as is shown in Bohmian mechanics), but I prefer not to consider such solutions for the moment, based on a kind of esthetics which says that if you stick at all costs to certain symmetries of the wave function formalism, you should not spit on them when introducing another rule (such as the guiding equation).
The "remote collapse" poses no problem if you don't assign any ontology to the wave function, and just see it as a calculational tool.

So if I ever talked about "remote collapse" it was because of 2 possible reasons: I was talking about a calculation OR I was drunk.

cheers,
Patrick.
 
  • #37
vanesch said:
I never did. I cannot remember ever having taken the position of "remote collapse", except as a calculational tool.

Well, the collapse in the sense of the beam splitter anticorrelations discussed in this thread. You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR. You argued that there was a drop in the DT trigger probability in this case and that this drop is genuine (in the sense of the anticorrelation not being an artifact of subtractions of accidentals and unpaired singles). As I understand it, you don't believe any more in that kind of non-interacting spacelike anticorrelation (the reduction of the remote subsystem state). You certainly did argue consistently that it was a genuine anticorrelation and not an artifact of subtractions.
 
  • #38
nightlight said:
You definitely, in our detector discussion, took the position that there will be a genuinely reduced detection probability on detector DT whenever there was a trigger on the matching detector DR.

Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR. There will never be a branch where DR and DT trigger together (apart from double events).

I just happen to be an observer in one of the two branches. But the other one can happily exist. Collapse means that suddenly, that other branch "disappears". It doesn't have to. I will simply not observe it, because I don't happen to be in that branch.

Observationally this is of course indistinguishable from the claim that the branch "I am not in" somehow "doesn't exist anymore". If you do that, you have a collapse. But I find it more pleasing to say that that branch still exists, but is not open to my observation, because I happen to have made a bifurcation in another branch.
It all depends what you want to stick to: do you stick to the postulate that things can only exist when I can observe them, or do you stick to esthetics in the mathematical formalism ? I prefer the latter. You don't have to.

cheers,
Patrick.
 
  • #39
nightlight said:
http://www.ortec-online.com/electronics/tac/567.htm . The data sheet for
the model 567 lists the required delay of 10ns for the START
(which was here DT signal, see AJP.Fig 5) from the START GATE
signal (which was here DG signal) in order for START to get
accepted. But the AJP.Fig 5, and the several places in the
text give their delay line between DG and DT as 6 ns. That
means when DG triggers at t0, 6ns later (+/- 1.25ns) the
DT will trigger (if at all), but the TAC will ignore it
since it won't be ready yet, and for another 4ns. Then,
at t0+10ns the TAC is finally enabled, but without START
no event will be registered. The "GTR" coincidence rate
will be close to accidental background (slightly above
since if the real T doesn't trip DT and the subsequent
background DT hits at t0+10, then the DG trigger, which
is now more likely than background, at t0+12ns will
allow the registration).

I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ? I wouldn't think so ! Doing quite some electronics development myself, I know that when I specify some limits on utilisation, I'm usually sure of a much better performance ! So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.
So I'm pretty sure that the 6ns is sufficient to actually make it trigger.
A point is of course that in this particular example, the remedy is extremely simple: use longer delay lines !
I've already been in such a situation, you know: you do an experiment, everything works OK, you write up your stuff and you submit it. Then there is a guy who points out that you didn't respect a certain specification. So you get pale, and you wonder if you have to retract your submission. You think about it, and then you say: ah, but simply using longer delay lines will do the trick ! So you rush to the lab, you correct for the error, and you find more or less equivalent results ; it didn't matter in the end. What do you do ? Do you write to the editor saying that you made a mistake in the paper, but that it actually doesn't matter, if only you're allowed to change a few things ??
OF COURSE NOT. You only do that if the problem DOES matter.
So that's in my opinion what happened, and that's why the author told you that everything is all right, and that these 6ns delays are not correct but that everything is all right.
There's also another very simple test to check the coincidence counting: connect the three inputs G, T and R to one single pulse generator. In that case, you should find identical counts in GT, GR and GTR.

After all, they 1) WORK (even if they don't strictly respect the specs of the counter) 2) they are in principle correct if we use fast enough electronics, 3) they are a DETAIL.
Honestly, if it WOULDN'T WORK, then the author has ALL POSSIBLE REASONS to publish it:
1) he would have discovered a major thing, a discrepancy with quantum optics
2) people will soon find out ; so he would stand out as the FOOL THAT MISSED AN IMPORTANT DISCOVERY BECAUSE HE DIDN'T PAY ATTENTION TO HIS APPARATUS.

After all, he suggests, himself, to make true counting electronics. Hey, if somebody pays me enough for it, I'll design it myself ! I'd think that for $20000,- I make you a circuit without any problem. That's what I do part of my job time.

I'll see if we have such an ORTEC 567 and try to gate it at 6ns.

Now, if you are convinced that it IS a major issue, just try to find out a lab that DID build the set up, and promise them a BIG SURPRISE when they change the cables into longer ones, on condition that you will share the Nobel prize with them.

cheers,
Patrick
 
Last edited by a moderator:
  • #40
vanesch said:
So if I make a device of which I'm relatively sure that, say, a delay of 2 ns is sufficient in most circumstances, I'd rather specify 5 or 10 ns, so that I'm absolutely sure that I will keep the specs, except if I'm pushed to the limits.

This is not babbling in the air! For instance, together with a colleague, we developed a card for charge division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

cheers,
Patrick.
 
  • #41
This is not babbling in the air! For instance, together with a colleague, we developed a card for charge division readout. After reception of the analogue pulses, the position address on 8 bits, taking into account all calibration curves, is ready at the output in 370 ns. Well, we specified 1 microsecond, because the design specifications were 2 microseconds...

Your example shows an additional reason why it is not plausible in this case. In your case there were no ten other vendors with competing designs and specs fighting for the same customer. You had a monopoly and what you say goes, so you're better off being conservative in stating limitations. In a competitive situation, the manufacturers will push the specs as far as they can get away with (i.e. in a cost/benefit analysis they estimate how much they will lose from returned units vs loss of sales, prestige and customers to the competition). Just think of what limits you would have stated had that job been a competition -- ten candidates all designing a unit to given minimum requirements (but allowed to improve on them), with the employer picking the one they like best.
 
  • #42
I read in detail your analysis here, and it is correct in a certain sense. Indeed, in the data sheet is a requirement for 10 ns for the gate to be "active" before the start comes in. Now, does that mean that if you give 6ns, you're guaranteed NOT to have a count ?

No, it didn't work by luck. The author acknowledges that the figure of 6ns (and thus the 12ns) is wrong and that they used the correct, longer timing. You can ask him at his web page (just be tactful, he got very angry after I asked him about it).

The problem with "leave it alone, don't issue an erratum, since it worked" doesn't wash either. It didn't work. The g2=0 of eq (AJP.8) applies to the normalized Glauber correlation function, which means you have to subtract the accidentals and remove the unpaired singles (the counts on DG for which there were no T or R events). Otherwise you haven't removed the vacuum photon effects the way G2() does (see his derivation in [4]). Both adjustments lower the g2 in (AJP.14). But they already got a nearly "perfect" result with raw counts. Note that they used N(G), which is 100,000 c/s, for their singles count in eq (AJP.14), while they should have used N(T+R), which is 8000 c/s.

Using raw counts in eq (14), they could only get g2>=1. See http://arxiv.org/abs/quant-ph/?0201036, page 11, where they say that the classical g2>=eta, where eta is the "coincidence-detection efficiency." They also acknowledge that the experiment does have a semi-classical model (after all, Marshall & Santos had shown that already for the Grangier et al. 1986 case, way back then). Thus they had to call upon the already established Bell inequality violations to discount the semi-classical model for their experiment (p 11). They understand that this kind of experiment can't rule out a semi-classical model on its own, since there is [post=529314]no such QED prediction[/post]. With Thorn et al, it is so "perfect" it does it all by itself, and even without any subtractions at all.

Keep in mind also that this is AJP and the experiment was meant to be a template for other undergraduate labs (as the title also says) to show their students, inexpensively and reliably, the anticorrelation. That's why the 6ns figure is all over the article (in 11 places). That's the magic ingredient that makes it "work". Without it, on raw data you will have g2>=1.
 
  • #43
Ah, yes, and I still do. But that has nothing to do with true collapse or not !
There will be a branch where DR triggers and not DT, and there will be a branch where DT triggers, and not DR.


Oh good ol' MWI. We went this route before and you were backed into solipsism. Anything goes there.

There will never be a branch where DR and DT trigger together (apart from double events).

If you have an optical lab, try it out. That won't happen. DT and DR will trigger no differently than classically, i.e. your raw GTR coincidence data (nothing removed) plugged into (AJP.14) will give you g2>=1 (you'll probably get at least 1.5). The usual QO S/N enhancements via subtractions should not be done here.

Namely, if your raw counts (which are what the classical model with g2>=1 applies to) don't violate g2>=1, there is no point in subtracting and checking whether the adjusted g2a goes below 1, since the same subtraction can be added to the classical model (so that it, too, corresponds to what was done with the data in the experiment) and it will follow the adjusted data as well and show g2a<1 (as acknowledged in the Chiao-Kwiat paper).

Note also that the conventional QO "correlations", Glauber's Gn(), already include in their definition the subtraction of accidentals and unpaired singles (the vacuum photon effects). All of the so-called nonclassicality of Quantum Optics is the result of this difference in data post-processing convention -- they compute the classical model for raw counts, while their own QO post-processing convention in Gn() is to count, correlate and then subtract. So the two predictions will be "different", since by their definitions they refer to different quantities. The "classical" g2 of (AJP.1-2) is operationally a different quantity than the "quantum" g2 of eq (AJP.8), but in QO nonclassicality claims they use the same notation and imply they refer to the same thing (the plain raw correlation), then proclaim non-classicality when the experiment using their QO convention (with all subtractions) matches the Gn() version of g2 (by which convention, after all, the data was post-processed), not the classical one (since it didn't use its post-processing convention).

You can follow this QO magic show recipe right in this AJP article; that's exactly how they set it up {but then, unlike most other authors, they got too greedy and wanted to show complete, reproducible "perfection", the real magic, and for that you really need to cheat in a more reckless way than just misleading by omission}.

The same goes for most of the others where QO non-classicality (anticorrelations, sub-Poissonian distributions, etc) is claimed as something genuine (as opposed to being a mere terminological convention for the QO use of the word 'non-classical', since that is what it is and there is no more to it).

Note also that the "single photon" case, where the "quantum" prediction is g2=0, has operationally nothing to do with this experiment ([post=529314]see my previous explanation[/post]). The "single photon" Glauber detector is a detector which absorbs (via the atomic dipole and EM interaction, cf [4]) both beams, the whole photon, which is the state |T>+|R>. Thus the Glauber detector for the single photon |T>+|R> with g2=0 is either a single large detector covering both paths, or an external circuit attached to DT+DR which treats the two real detectors DT and DR as a single detector and gives 1 when one or both trigger, i.e. as if they were a single cathode. The double trigger T & R case is simply equivalent to having two photoelectrons emitted on this large cathode (the photoelectron distribution is at best Poissonian for a perfectly steady incident field, and compound/super-Poissonian for variable incident fields).
 
Last edited:
  • #44
Nightlight,

You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

Also, is it your opinion that photons are not quantum particles, but are instead waves?

-DrC
 
  • #45
You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

The phenomenological PDC hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics (e.g. see the papers from the Marshall & Santos group; recent editions of the well respected Yariv QO textbook have an extra last chapter which for all practical purposes recognizes this equivalence).

Also, is it your opinion that photons are not quantum particles, but are instead waves?

Photons in QED are quantized modes of the EM field. For a free field you can construct these in any basis in Hilbert space, so the "photon number" operator [n] depends on the basis convention. Consequently the answer to the question "how many photons are here" depends on the convention for the basis. (No different than asking me what the speed number of your car is, and my saying 2300; this is obviously meaningless, since you need to know what convention I use for my speed units.)

For example, if you have a plane wave as a single mode, in its 1st excited state (as a harmonic oscillator), then in that particular basis you have a single photon; the state is an eigenstate of this [n]. But if you pick other bases, then you'll have a superposition of generally infinitely many of their "photons", and the plane wave is not an eigenstate of their [n].

The QO convention then calls "single photon" any superposition of the type |Psi.1> = Sum(k) Ck(t) |1_k>, where the sum goes over wave vectors k (a 4-vector, k=(w,kx,ky,kz)) and the |1_k> are eigenstates of some [n] with eigenvalue 1. This kind of photon is quasi-localized (with a spread stretching across many wavelengths). Obviously, here you no longer have the E=hv relation, since there is no single frequency v superposed into the "single photon" state |Psi.1>. If the localization is very rough (many wavelengths superposed) then you could say that |Psi.1> approximately has some dominant and average v0, and one could say approximately E=hv0.

But there is no position operator for a point-like photon (and it can't be constructed in QED), and no QED process generates a QED "single photon", the Fock state |1> for some basis and its [n] (except as an approximation in the lowest order of perturbation theory). Thus there is no formal counterpart in QED for a point-like entity hiding somewhere inside the EM field operators, much less of some such point being exclusive. A "photon" of laser light stretches out for many miles.

The equations these QED and QO "photons" follow in the Heisenberg picture are plain old Maxwell equations for free fields or for any linear optical elements (mirrors, beam splitters, polarizers etc). For the EM interactions, the semiclassical and QED formalisms agree to at least alpha^4 order effects (as shown by Barut's version of semiclassical fields, which includes self-interaction). That is 8-9 digits of precision (it could well be more if one were to carry out the calculations). Barut unfortunately died in 1994, so that work has stalled, but their results up to 1987 are described in http://library.ictp.trieste.it/DOCS/P/87/248.pdf . ICTP has scanned 149 of his preprints; you can get the pdfs there (type Barut in "author"; also interesting is his paper http://library.ictp.trieste.it/DOCS/P/87/157.pdf ; his semiclassical approach starts in the papers from 1980 and on).

In summary, you can't count them except by convention; they appear and disappear in interactions; there is no point they can be said to be at; they have no position, just approximate regions of space defined by a mere convention of "non-zero values" for the field operators (one can call these wave packets as well, since they move by the plain Maxwell wave equations anyway, and they are detected by the same square-law detection as semiclassical EM wave packets).

One can think, I suppose, of point photons as a heuristic, but one has to watch not to take it too far and start imagining, as these AJP authors apparently did, that you have some genuine kind of exclusivity of the sort one would have for a particle. That exclusivity doesn't exist either in theory (QED) or in experiments (other than via misleading presentation or outright errors, as in this case).

The theoretical non-existence was [post=529314]already explained[/post]. In brief, the "quantum" g2 of (AJP.8-11 for n=1) corresponds to a single photon in the incident field. This "single photon" is |Psi.1> = |T> + |R>, where |T> and |R> correspond to the regions of the "single photon" field in the T and R beams. The detector which (AJP.8) models is Glauber's ideal detector, which counts 1 if and only if it absorbs the whole single photon, leaving the vacuum EM field. But this "absorption" is (as derived by Glauber in [4]) a purely dynamical process, a local interaction of the quantized EM field of the "photon" with the atomic dipole, and for the "whole photon" to be absorbed, the "whole EM field" of the "single photon" has to be absorbed (via resonance, a la antenna) by the dipole. (Note that the dipole can be much smaller than the incident EM wavelength, since the resonant absorption will absorb from the surrounding area of the order of a wavelength.)

So, to absorb the "single photon" |Psi.1> = |T> + |R>, the Glauber detector has to capture both branches of this single field, T and R, interact with them and resonantly absorb them, leaving the EM vacuum as the result and counting 1. But to do this, the detector has to be spread out to capture both the T and R beams. Any second detector will get nothing, and you indeed have perfect anticorrelation, g2=0, but it is an entirely trivial effect, with nothing non-classical or puzzling about it (a semi-classical detector will do the same if defined to capture the full photon |T>+|R>).

You could simulate this Glauber detector capturing the "single photon" |T>+|R> by adding an OR circuit to the outputs of two regular detectors DT and DR, so that the combined detector is Glauber_D = DT | DR and it reports 1 if either one or both of DT and DR trigger. This, of course, doesn't add anything non-trivial, since this Glauber_D is one possible implementation of the Glauber detector described in the previous paragraph -- its triggers are exclusive relative to a second Glauber detector (e.g. made of another pair of regular detectors placed somewhere, say, behind the first pair).
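
A toy rendering of that OR-circuit wiring (all trigger probabilities invented) makes the triviality explicit: DT and DR still double-fire at chance level, while the OR-ed output and a second combined detector sitting behind the pair, which sees essentially only vacuum, almost never click together.
Code:
# Toy rendering of the OR-circuit "Glauber detector" (probabilities invented):
# DT and DR fire at chance level on the split field; the OR-ed output counts 1
# if either (or both) fired; a second combined detector behind the pair sees
# only a dark/vacuum-level rate, so joint clicks with it are correspondingly rare.
import random

N      = 500_000
pT, pR = 0.04, 0.04     # invented per-window trigger probabilities for DT, DR
p_dark = 0.0005         # invented dark-level rate of the downstream combined detector

n_both_TR = n_D1 = n_D2 = n_D1D2 = 0
for _ in range(N):
    t = random.random() < pT
    r = random.random() < pR
    d1 = t or r                        # first combined detector: DT OR DR
    d2 = random.random() < p_dark      # second combined detector, behind the pair
    n_both_TR += (t and r)
    n_D1 += d1
    n_D2 += d2
    n_D1D2 += (d1 and d2)

print("DT & DR firing together (chance level):", n_both_TR)
print("first combined detector clicks        :", n_D1)
print("second combined detector clicks       :", n_D2)
print("joint clicks of the two combined ones :", n_D1D2)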

So the "quantum" g2=0 of eq (8) (it is also the semiclassical value, provided one models the Glauber detector semiclassically) is valid but trivial, and it doesn't correspond to the separate detection and counting used in this AJP experiment, or to what the misguided authors (as their students will be after they "learn" it from this kind of fake experiment) had in mind.

You can get g2<1, of course, if you subtract accidentals and unpaired singles (the DG triggers for which neither DT nor DR triggered). This is in fact what Glauber's g2 of eq. (8) already includes in its definition -- it is defined to predict the subtracted correlation, and the matching operational procedure in Quantum Optics is to compare it to the subtracted measured correlations. That's the QO convention. The classical g2 of (AJP.2) is defined and derived to model the non-subtracted correlation, so let's call it g2c. The inequality (AJP.3) is g2c>=1 for the non-subtracted correlation.

Now, nothing stops you from defining another kind of classical "correlation", g2cq, which includes the subtraction in its definition, to match the QO convention. Then this g2cq will violate g2cq>=1, but there is nothing surprising in that. Say your subtractions are defined to discard the unpaired singles. Then in your new eq (14) you put N(DR)+N(DT) (which was about 8,000 c/s) instead of N(G) (which was 100,000 c/s) in the numerator of (14), and you now have a g2cq which is 12.5 times smaller than g2c, and well below 1. But no magic. (The Chiao and Kwiat paper recognizes this and doesn't claim any magic from their experiment.) Note that these subtracted g2's, "quantum" or "classical", are not the g2=0 of the single-photon case (eq AJP.11 for n=1), since that was a different way of counting, where the perfect anticorrelation is entirely trivial.
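To make the arithmetic explicit (the singles rates are the ones quoted above; the coincidence counts are placeholders I put in just to exhibit the 12.5x rescaling):

Code:
# Arithmetic of the subtraction convention. Only N_G ~ 100,000 c/s and
# N(DT)+N(DR) ~ 8,000 c/s are from the discussion above; the coincidence
# rates below are placeholder numbers.
def g2(n_gtr, n_gt, n_gr, n_norm):
    """AJP eq. (14)-style estimator with a chosen normalization count."""
    return n_gtr * n_norm / (n_gt * n_gr)

N_G  = 100_000       # gate singles per second (quoted above)
N_TR = 8_000         # N(DT) + N(DR), paired-arm singles per second (quoted above)
N_GT, N_GR, N_GTR = 4_000, 4_000, 160   # placeholder coincidence rates

g2c  = g2(N_GTR, N_GT, N_GR, N_G)    # non-subtracted classical convention -> 1.0
g2cq = g2(N_GTR, N_GT, N_GR, N_TR)   # normalization with unpaired singles dropped -> 0.08
print(g2c, g2cq, g2c / g2cq)         # g2c / g2cq = N_G / N_TR = 12.5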

Therefore, the "nonclassicality" of Quantum Optics is a term-of-art, a verbal convention for that term (which somehow just happens to make their work sound more ground-breaking). Any well bred Quantum Optician is thus expected to declare a QO effect as "nonclassical" whenever its subtracted correlations (predicted via Gn or measured and subtracted) violate inequalities for correlations computed classically for the same setup, but without subtractions. But there is nothing genuinely nonclassical about any such "violations".

These verbal-gimmick "violations" have nothing to do with theoretically conceivable genuine violations (where QED might still disagree with semiclassical theory). The genuine violations would have to be in perturbative effects of order alpha^5 or beyond, some tiny difference beyond the 8th-9th decimal place, if there is any at all (unknown at present). QO operates mostly with 1st order effects; all its phenomena are plain semiclassical. All the "Bell inequality violations" with "photons" are just creatively worded magic tricks of the kind described -- they compare subtracted measured correlations with the unsubtracted classical predictions, all wrapped into a whole lot of song and dance about "fair sampling" or the "momentary technological detection loophole" or the "non-enhancement hypothesis"... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for photon correlations is a special case of Gn()) and violate the nonsubtracted classical prediction. Duh.
 
  • #46
nightlight said:
You appear to not accept the PDC technology as being any more persuasive to you than Aspect's when it comes to pair production. Is that accurate?

The phenomenological PDC Hamiltonian used in Quantum Optics computations has been reproduced perfectly within Stochastic Electrodynamics...

... And after all the song and dance quiets down, lo and behold, the "results" match the subtracted prediction of Glauber's correlation function Gn (Bell's QM result cos^2() for photon correlations is a special case of Gn()) and violate the nonsubtracted classical prediction. Duh.

Very impressive analysis, seriously, and no sarcasm intended.

But as science, I consider it nearly useless. No amount of "semi-classical" explanation can cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort. It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way. Reminds me of the complex theories of how the sun and planets actually rotate around the Earth. (And all the while you criticize a theory which since 1927 has been accepted as one of the greatest scientific achievements in all history. The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

-------

1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different? (After all, the Bell Inequality and QM are a lot farther apart than the 7th or 8th decimal place.)

Your value for 0 degrees?
Your value for 22.5 degrees?
Your value for 45 degrees?
Your value for 67.5 degrees?
Your value for 90 degrees?

I ask because I would like to determine whether you agree or disagree with the predictions of QM.

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM? Yes, because QM makes specific predictions which allow it to be falsified. So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed -- flawed exactly such that a false positive will be registered 100% of the time! And yet a reasonable person might ask why, if your concept is correct, 2300 photons aren't occasionally seen on one side when only one is seen on the other. After all, you specifically say one photon is really many photons.

3. In fact, can you provide any useful/testable prediction which is different from orthodox QM? You see, I don't actually believe your theory has the ability to make a single useful prediction that wasn't already in standard college textbooks years ago. (The definition of an AD HOC theory is one designed to fit the existing facts while predicting nothing new in the process.) I freely acknowledge that a future breakthrough might show one of your lines of thinking to have great merit or promise, although nothing concrete has yet been provided.

-------

I am open to persuasion, again no sarcasm intended. I disagree with your thinking, but I am fascinated as to why an intelligent person such as yourself would belittle QM and orthodox scientific views.

-DrC
 
  • #47
But as science, I consider it nearly useless. No amount of "semi-classical" explanation can cover for the fact that it adds NOT ONE IOTA to our present-day knowledge, which is the purpose of true scientific effort.

I wasn't writing a scientific paper here, merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

It is strictly an elaborate catch-up to QM/CI by telling us that you can get the same answers a different way.

Again, I didn't create any theory, much less make claims about "my" theory. I was referring you to the results that exist, in particular Barut's and Jaynes's work in QED and Marshall and Santos's in Quantum Optics. I cited papers and links so you can look them up, learn some more and, if you're doubtful, verify that I haven't made anything up.

The most Einstein ever could muster against it was that it was not complete; I doubt he ever thought it wrong per se.)

That's its basic defect, the incompleteness. On the other hand, whether any local field theory must necessarily contradict it depends on how you interpret "it". As you know, there have been impossibility proofs since von Neumann. Their basic problem was, and is, excessive generalization of the interpretation of the formalism, requiring any future theory to satisfy requirements not implied by any known experiment.

Among such generalizations, the remote non-interacting collapse (the projection postulate applied to non-interacting subsystems at spacelike separation) is the primary source of the nonlocality in nonrelativistic QM. If one ignores the trivial kinds of nonlocality arising from nonrelativistic approximations to EM interactions (such as the instantaneous action-at-a-distance Coulomb potential), the generalized projection postulate is the sole source of nonlocality in nonrelativistic QM. Bell's QM prediction (which assumes that the remote subsystem "collapses" into a pure state, with no interaction and at spacelike separation) doesn't follow without this remote projection postulate. The only test of that generalization of the projection postulate is the Bell inequality test.

When considering optical experiments, the proper formalism is Quantum Optics (although nonrelativistic QM is often used as a heuristic tool here). The photon-pair Bell QM prediction is derived here in a more rigorous way (as Glauber's two-point correlations, cf [5] for the PDC pair), which makes it clearer that no genuine nonlocality is taking place, in theory or in experiments. The aspect made obvious is that the cos^2(a) (or sin^2(a) in [5]) correlation is computed via Glauber's 2-point "correlation" of normal-ordered (E-) and (E+) operators. What that means is that one is predicting prefiltered relations (between angles and coincidence rates) which filter out any 3- or 4-point events (there are 4 detectors). The use of the normally ordered Glauber G2() further implies that the prediction is made for additionally filtered data, from which the unpaired singles and any accidental coincidences will be subtracted.

Thus, what was only implicit in the nonrelativistic QM toy derivation (and what required a generalized projection postulate there, while no such additional postulate is needed here) becomes explicit here -- the types of filtering needed to extract the "signal" function cos^2(a) or sin^2(a) ("explicit" provided you understand what Glauber's correlation and detection theory is and what it assumes, cf [4] and the points I made earlier about it).

Thus Quantum Optics (which is QED applied to optical wavelengths, plus the detection theory for square-law detectors, plus Glauber's filtering conventions) doesn't really predict the cos^2(a) correlation; it merely predicts the existence of a cos^2(a) "signal" buried within the actual measured correlation. It doesn't say how much is going to be discarded, since that depends on the specific detectors, lenses, polarizers... and all that real-world stuff, but it does say what kind of things must be discarded from the data to extract the general cos^2() signal function.

So, unlike the QM derivation, the QED derivation doesn't predict a violation of Bell inequalities for the actual data, but only the existence of a particular signal function. While some of the discarded data can be estimated for a specific setup and technology, no one knows how to estimate well enough all the data which has to be discarded by the theory in order to extract the "signal", so as to be able to say whether the actual correlation can violate Bell's inequalities. And on the experimental side, no one has so far obtained an experimental violation either.

The violation by the filtered "correlations" doesn't imply anything in particular regarding the non-locality of correlations, since the Quantum Optics filtering procedure is by definition non-local -- to subtract "accidentals" you need to measure the coincidence rate with the source turned off, but that requires data collection from distant locations. Similarly, to discard 3- or 4-detection events, you need to collect data from distant locations to know it was a 3rd or 4th click. Or, to discard unpaired singles, you need the remote fact that none of the other detectors had triggered. Formally, this nonlocality is built into Glauber's correlation functions by virtue of the disposal of all vacuum-photon terms (from the full perturbative expression for multiple detections), where these terms refer to photon absorptions at different, spacelike separated locations (e.g. an "accidental" coincidence corresponds to a term which has one vacuum photon absorbed on the A+ detectors and any photon, signal or vacuum, on the B+/- detectors; any such term is dropped in the construction of Glauber's filtering functions Gn()).
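As a concrete illustration of why the subtraction is non-local bookkeeping, here is the standard textbook accidentals estimate (a sketch, not the AJP procedure; the numbers at the bottom are placeholders): the estimated accidental rate r1*r2*tau is computed from the singles rates of both, spatially separated, detectors.

Code:
# Standard accidental-coincidence subtraction (textbook estimate): for two
# independent click streams with singles rates r1, r2 and coincidence window
# tau, the accidental rate is about r1 * r2 * tau. Note that computing the
# correction already requires count data from both (distant) detectors.
def subtract_accidentals(n_coinc, n_1, n_2, tau, t_total):
    """Return coincidences with the estimated accidentals removed."""
    r1, r2 = n_1 / t_total, n_2 / t_total
    n_acc = r1 * r2 * tau * t_total      # expected accidental coincidences
    return n_coinc - n_acc

# placeholder numbers, just to exercise the formula
print(subtract_accidentals(n_coinc=500, n_1=100_000, n_2=8_000,
                           tau=2.5e-9, t_total=1.0))   # -> 498.0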


1. I would love to hear you answer this question, previously asked and not answered: what is the true & correct correlation formula for entangled photon pair spin observations? Is it the same as QM - and thus subject to the issues of Bell's Theorem - or would you care to provide a specific formula which is different?

The correct formulas for the time evolution of n detectors interacting with the quantized EM field are only given as a generalized interaction in a perturbative expansion, such as ref [6] eq's (4.29)-(4.31). They are of no use in practice since too much is unknown to do anything with them. The simplified, but equally impractical, versions are Glauber's [4], Lect. V, as computations of scattering amplitudes (eq's 5.2-5.6); he then handwaves his way into the filtered version, starting at eq 5.7. The more rigorous ref [6] (sec IV, pp 327), after using the same approximations, acknowledges regarding the Glauber-Sudarshan correlation functions: "A question which remains unanswered is under what circumstances and how this simplification can be justified."

The semiclassical results, which compute the regular, non-adjusted correlations, are derived in [7]. The partially adjusted correlations (with only the local type of subtractions made) are the same as the corresponding QED ones (cf. eq. 4.7, which only excludes single-detector vacuum effects, not the combined nonlocal terms; one could do such removals by introducing some fancy algorithm-like notation, I suppose). Again, as with the QED formulas, these are too general to be useful.

What is useful, though, is that these are semiclassical results, thus completely non-mysterious and transparently local, no matter what the specifics of detector design or materials are. The fields themselves (EM and matter fields) are the "hidden" (in plain sight) variables. Any further non-local subtractions made on the data from there on are the only source of non-locality, and that is entirely non-mysterious and non-magical.

In principle one could say the same for 2nd-quantized fields (separate the local from the non-local terms and discard only the local ones as a design property of a single detector), except that now there is an infinitely redundant overkill in the number of such variables compared to the semiclassical fields (1st quantized: the matter field + EM field). But as a matter of principle, such equations do evolve the system fully locally, so no non-local effect can be deduced from them, except as an artifact of terminological conventions, say, of calling some later non-locally adjusted correlations still "correlations", which then become non-local "correlations".


Your value for 0 degrees?
Your value for 22.5 degrees?
Your value for 45 degrees?
Your value for 67.5 degrees?
Your value for 90 degrees?


Get your calculator, set it to DEG mode, enter your numbers and for each press cos, then x^2. That's what the filtered "signal" functions would be. But, as explained, that implies nothing regarding the Bell inequality violations (which refer to plain data correlations, not to some fancily termed "correlations" which include non-local adjustments). As to what any real, non-adjusted data ought to look like, I would be curious to see some. (The experimenters in this AJP paper would not give out a single specific count used for their g2 if their life depended on it.)
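If you want the numbers, a few lines do it (this evaluates only the filtered "signal" function, not any raw-count correlation):

Code:
# The "signal" function values for the listed angles (just cos^2, in degrees).
from math import cos, radians

for a in (0, 22.5, 45, 67.5, 90):
    print(f"{a:5.1f} deg -> cos^2 = {cos(radians(a))**2:.4f}")
# 1.0000, 0.8536, 0.5000, 0.1464, 0.0000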

All I know is that, from the theory of multiple detections, QED or semiclassical, there is nothing non-local about them.


I ask because I would like to determine whether you agree or disagree with the predictions of QM.

I agree that the filtered "signal" functions will look like the QM or Quantum Optics prediction (QM is too abstract to make fine distinctions, but the QO "prediction" is explicit in that this is only a "signal" function extracted from correlations, and surely not the plain correlation between counts). But they will also look like the semiclassical prediction, when you apply the same non-local filtering to the semiclassical predictions. Marshall and Santos have SED models that show this equivalence for atomic cascade and for PDC sources (see their numerous preprints on arXiv; I cited several in our earlier discussions).

2. Or in terms of the Thorn experiment: do you think that they suspected they might get results in accordance with the predictions of QM?

This is a quantum optics experiment. Nonrelativistic QM can't tell you precisely enough (other than by postulating nonlocal collapse). QO derives what happens, and as explained, nothing nonlocal happens with the plain correlations. No one has such an experiment, or a QED/QO derivation of a nonlocal prediction (assuming you understand what the Gn() "correlations" represent and don't get misled by the wishful terminology of QO).

I also explained in the previous msg why the "quantum" g2=0 for the 1-photon state (AJP.8) is a trivial kind of "anticorrelation". That the AJP authors (and a few others in QO) misinterpreted it, that's their problem. Ask them why, if you're curious. Quantum Opticians have been known to suffer delusions of magic, as shown by the Hanbury Brown and Twiss affair, when HBT, using semiclassical theory, predicted the HBT correlations in the 1950s. In no time the "priesthood" jumped in, papers "proving" it was impossible came out, and experiments "proving" HBT wrong were published in a hurry. It can't be so, since photons can't do that... Well, all the mighty "proofs", experimental and theoretical, turned out to be fake, but not before Purcell published a paper explaining how photons could do it. Well, then, sorry HBT, you were right. But... and then in 1963 came Harvard's Roy Glauber with his wishful terminology of "correlation" (for a filtered "signal" function) to confuse students and the shallower grownups for decades to come.


Yes, because QM makes specific predictions which allow it to be falsified.

The prediction of the form of the filtered "signal" function has no implication for Bell's inequalities. The cos^2() isn't uniquely a QM or QED prediction for the signal function; semiclassical theory predicts the same signal function. The only difference is that QO calls its "signal" function a "correlation", while still defining it as a non-locally post-processed correlation function. The nonlocality is built into the definition of the Gn() "correlation".

So you argue that an experiment specifically designed to see if 2 photons can be detected on one side when only one is detected on the other is flawed

It is not flawed. It merely doesn't correspond operationally to eq (8) & (11) with the "single photon" input, |Psi.1>=|T>+|R>, which yields g2=0. To understand that, read again the previous msg (and the earlier one referred to there) and check Glauber's [4] to see what (8) and (11) mean operationally. As I said, |Psi.1> has to be absorbed as a whole by one detector. But to do that, the detector has to interact with all of its field. In [4] there is no magic instant collapse; it is QED, a relativistic interaction, and the "collapse" here turns out to be plain photon absorption, with all steps governed by the EM field interacting with the atom. It just happens that the nonzero part of the field of |Psi.1> is spread out across the two non-contiguous regions T and R. But it is still "one mode" (by definition, since you plug a 1-mode state for the given basis operator into (AJP.11)). A detector which by definition has to capture the whole mode and leave the vacuum state as the result (the Glauber detector to which eq. 8-11 apply) has, in a relativistic theory (QED), to interact with the whole mode, all of its field. In QED dynamics there is no collapse, and AJP.8-11 are the result of a dynamical derivation (cf. [4]), not of a postulate which you might twist and turn, so it is clear what they mean -- they mean absorption of the whole photon |Psi.1> via the pure dynamics of the quantized EM field and a matter-field detector (the matter field is not 2nd-quantized in [4]).

After all, you specifically say one photon is really many photons.

I said that the photon number operator [n] is basis dependent; what is "one photon" in one basis need not be "one photon" in another basis. Again, the car speed example -- is it meaningful to argue whether my car had a "speed number" of 2300 yesterday at 8AM?

In practice, the basis is selected to best match the problem geometry and physics (such as the eigenstates of the non-interacting Hamiltonian), in order to simplify the computations. There is much convention a student learns over the years, so that one doesn't need to spell out at every turn what is meant (which can be a trap for novices or shallow people of any age). The "Glauber detector" which absorbs "one whole photon" and counts 1 (to which his Gn() apply, thus eq's AJP.8-11) is therefore ambiguous in the same sense. You need to define the modes before you can say what that detector is going to absorb. If you say you have the state |Psi>=|T>+|R> and this state is your "mode" (you can always pick it as one of the basis vectors, since it is normalized), your "one photon", then you need to use that basis for the photon number operator [n] in AJP.11, and then it gives you g2=0. But all these choices also define how your Glauber detector for this "photon" |Psi> has to operate here -- it has to be spread out so as to interact with and absorb (via the EM field-matter QED dynamics) this |Psi>.
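A small toy illustration of that basis dependence (my own sketch, two modes truncated to at most one photon each): the state (|1,0>+|0,1>)/sqrt(2) is exactly "one photon" of the symmetric mode c=(a+b)/sqrt(2), but it is not an eigenstate of the mode-a number operator, whose expectation value is only 1/2.

Code:
# Toy illustration (numpy, two modes truncated to 0 or 1 photon each) of the
# basis dependence of "one photon": |psi> = (|1,0> + |0,1>)/sqrt(2) is an
# eigenstate, with eigenvalue 1, of the number operator of the symmetric mode
# c = (a + b)/sqrt(2), but not of the mode-a number operator.
import numpy as np

a1 = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation op, single mode, basis {|0>, |1>}
I2 = np.eye(2)
a = np.kron(a1, I2)                       # mode a on the 4-dim |n_a, n_b> space
b = np.kron(I2, a1)                       # mode b
c = (a + b) / np.sqrt(2)                  # the mode containing the whole |T> + |R> packet

ket10 = np.kron([0, 1], [1, 0])           # |1,0>
ket01 = np.kron([1, 0], [0, 1])           # |0,1>
psi = (ket10 + ket01) / np.sqrt(2)        # the "single photon" |T> + |R>

n_a = a.conj().T @ a
n_c = c.conj().T @ c
print("<n_a> on |psi>       =", psi @ n_a @ psi)                   # 0.5, not "one photon in a"
print("||n_c|psi> - |psi>|| =", np.linalg.norm(n_c @ psi - psi))   # ~0, exactly one c-photon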


3. In fact, can you provide any useful/testable prediction which is different from orthodox QM?

I am not offering "theory", just explaining the misleading QO terminology and confusion it could and does cause. I don't have "my theory predictions" but what I am saying is that, when QO terminological smoke and mirrors are removed, there is nothing non-local predicted by their own formalism. The non-adjusted correlations will always be local, i.e for them it will be g2>=1. And they will be the same value in semiclassical and the QED computation, at least to the alpha^4 perturbative QED expansion (if Barut's semiclassical theory is used), thus 8+ digits of precision will be same, possibly more (unknown at present).

What the adjusted g2 will be depends on the adjustments. If you make non-local adjustments (requiring data from multiple locations to compute the amounts to subtract), then yes, you get some g2' which can't be obtained by plugging only locally collected counts into (AJP.14). So what? That has nothing to do with non-locality. It is just the way you defined your g2'.

Only if you do what the AJP paper does (along with the other "nonclassicality" claimants in QO), and use the same symbol g2 both for the "classical" (and also non-adjusted) correlation and for the "quantum" (and also adjusted) "correlation" or "signal" function Gn, then a few paragraphs later manage to forget (or, more likely, never knew) the "and also" parts and decide that the sole difference was "classical" vs "quantum", will you succeed in the self-deception that you have shown "nonclassicality". Otherwise, if you label apples 'a' and oranges 'o', you won't have to marvel at the end why 'a' is different from 'o', as you do when you ask why the g2 from the classical case differs from the g2 from the quantum case and then "conclude" that something "nonclassical" in the quantum case must have made the difference.


--- Ref

[5] Z.Y. Ou, L. Mandel, "Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment", Phys. Rev. Lett. 61(1), 50-53 (1988).

[6] P.L. Kelley, W.H. Kleiner, "Theory of Electromagnetic Field Measurement and Photoelectron Counting", Phys. Rev. 136, A316-A334 (1964).

[7] L. Mandel, E.C.G. Sudarshan, E. Wolf, "Theory of Photoelectric Detection of Light Fluctuations", Proc. Phys. Soc. 84, 435-444 (1964).
 
  • #48
nightlight said:
Using raw counts in eq (14), they could only get g2>=1.

You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr * N_g / (N_gt * N_gr)??

I can easily provide you with such a series!
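For instance, a minimal sketch of such a series (toy data, not measured counts): gate windows in which exactly one of T or R clicks, and no triples at all, give N_GTR = 0 and hence an estimator well below 1.

Code:
# Toy click series: each entry is one gate window (T clicked?, R clicked?).
windows = [(True, False), (False, True)] * 50   # 100 gates, alternating single clicks

N_G   = len(windows)
N_GT  = sum(t for t, r in windows)
N_GR  = sum(r for t, r in windows)
N_GTR = sum(t and r for t, r in windows)

g2 = N_GTR * N_G / (N_GT * N_GR)
print(N_G, N_GT, N_GR, N_GTR, g2)    # 100 50 50 0 0.0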

cheers,
Patrick.
 
  • #49
vanesch said:
You mean that it is impossible to have a series of time clicks which gives a number lower than 1 for the expression N_gtr * N_g / (N_gt * N_gr)??

In fact, without a lot of "quantum optics", the very fact of having "intensity threshold detectors" which give me a small P_gtr/(P_gt * P_gr) is already an indication that these intensities are not given by the Maxwell equations.
The reason is the following: in order to have a stable interference pattern from the waves arriving at T and R, T and R need to be modulated by intensities which are strongly correlated on the timescale of the detection window. Indeed, if they are anticorrelated, then when there is intensity at T there isn't any at R, and vice versa; there is never a moment when there is sufficient intensity from both to interfere. If the modulation depth is about 100%, this indicates that the intensities are essentially identical at T and R on the time scale of the intensity detector (here, a few ns).
Identical intensities on this time scale is necessary to obtain extinction of intensity in the interference pattern (the fields have to cancel at any moment, otherwise some intensity is left).
So no matter how your detector operates, if it gives you a positive logical signal ABOVE an intensity threshold, and a negative logical signal BELOW that threshold, these logical signals have to be correlated at about 100%.

Taking an arbitrary moment in time, looking at the probabilities to have T AND R = 1, T = 1, and R = 1, and calculating P_TR / (P_T * P_R) is nothing else but an expression of this correlation of intensities, needed to obtain an interference pattern.
In fact, the probability expressions have the advantage of taking into account "finite efficiencies", meaning that each intensity over the threshold corresponds only to a finite probability of giving a positive logical signal. That's easily verified.

Subsampling these logical signals with just ANY arbitrary sampling sequence G gives you, of course, a good approximation of these probabilities. It doesn't matter whether G is correlated with T or R or not, because the logical signals from T and R are identical. If I now sample only at the times given by a time series G, then I simply find the formula given in AJP (14): P_gtr / (P_gt * P_gr).
This is still close to 100% if I have interfering intensities from T and R.
No matter what time series. So I can just as well use the idler signal as G.
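A toy Monte Carlo sketch of this Maxwellian picture (with assumed parameters, not the paper's numbers): the same fluctuating intensity reaches T and R, each threshold detector clicks with finite efficiency when the intensity is above threshold, and the gate series G is arbitrary. The AJP (14) estimator then stays at or above 1.

Code:
# Toy classical model: common intensity at T and R, threshold detectors with
# finite efficiency, arbitrary independent gate series G. All parameters are
# assumed, just to exercise the estimator.
import random

random.seed(2)
N_WIN  = 200_000   # time windows
P_HIGH = 0.2       # fraction of windows with intensity above threshold
ETA    = 0.6       # detector efficiency given intensity above threshold
P_GATE = 0.1       # arbitrary, independent gate (G) series

n_g = n_gt = n_gr = n_gtr = 0
for _ in range(N_WIN):
    g = random.random() < P_GATE
    high = random.random() < P_HIGH              # common intensity at T and R
    t = high and (random.random() < ETA)
    r = high and (random.random() < ETA)
    n_g   += g
    n_gt  += g and t
    n_gr  += g and r
    n_gtr += g and t and r

g2 = n_gtr * n_g / (n_gt * n_gr)
print(f"g2 = {g2:.3f}   (classical toy model: expected about 1/P_HIGH = {1/P_HIGH})")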

Now, you could say: if it's that simple, why do we go through the pain of generating idlers at all? Why not use a random pulse generator?
The reason is that not the entire intensity function of R and T is usable. Most intensities are NOT correlated.
But that doesn't change the story: the only thing I now have to reproduce, is that I have an interference pattern from R and T when I have G-clicks. Then the same explanation holds, but only for the time windows defined by G.

If I have a time series G, defining time windows, in which I can have an interference pattern from T and R with high modulation, then the intensities at T and R during those windows need to be highly correlated. This means that the expression P_GTR/(P_GT * P_GR) needs to be close to 1.

So if I succeed, somehow, in having the following:

I have a time series generator G ;
I can show interference patterns, synchronized with this time series generator, of two light beams T and R ;

and I calculate an estimate of P_GTR / (P_GT * P_GR),

then I should find a number close to 1 if this Maxwellian picture holds.

Well, the results of the paper show that it is CLOSE TO 0.

cheers,
Patrick.
 
  • #50
nightlight said:
I wasn't writing a scientific paper here, merely discussing some common misconceptions. If/when I have something of importance to announce, I'll put it in a paper (I don't earn a living by "publish or perish"). Although these postings are not of much "importance" or seriousness, I still find them useful in sharpening my own understanding and making me follow interesting thought patterns I wouldn't have otherwise.

...



Thank you for the courtesy of your response. I do not agree with your conclusions, but definitely I want to study your thinking further.

And I definitely AGREE with what you say about "sharpening" above... I get a lot out of these discussions in the same manner. Forces me to consider my views in a critical light, which is good.
 
