Photon "Wave Collapse" Experiment (Yeah sure; AJP Sep 2004, Thorn...)

There was a recent paper claiming to demonstrate the indivisibility of
photons in a beam splitter experiment (the remote "wave collapse"
upon "measurement" of "photon" in one of the detectors).

1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
"Observing the quantum behavior of light in an undergraduate laboratory"
Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).

The authors claim to violate "classicality" by 377 standard deviations,
which is by far the largest violation ever for this type of experiment.
The setup is an archetype of quantum mystery: A single photon arrives at
a 50:50 beam splitter. One could verify that the two photon wave packet
branches (after the beam splitter) interfere nearly perfectly, yet if
one places a photodetector in each path, only one of the two detectors
will trigger in each try. As Feynman put it - "In reality, it contains
the only mystery." How does "it" do it? The answer is -- "it" doesn't
do "it" and the mysterious appearance is no more than a magic trick.

Unlike the earlier well-known variants of this experiment ([2],[3]),
the present one describes the setup in sufficient detail that the
sleight of hand can be spotted. The setup is sketched below, but
you should get the paper since I will refer to figure and formula
numbers there.

Code:
      G photon       Source        TR photon     PBS     T
DG <---------- [PDC] ----------------\----------> DT
                                     |
                                     | R
                                     v
                                     DR
The PDC Source generates two photons, G and TR. The G photon is used
as a "gate" photon, meaning that the trigger of its detector DG defines
the time windows (of 2.5ns centered around the DG trigger) in which
to count the events on the detectors DT and DR, which detect the
photons in the Transmitted and Reflected beams (wave packets). The
"quantum" effect they wish to show is that after a detector, say,
DT triggers, the other detector DR will not trigger. That would
demonstrate the "indivisibility" of the photon and a "collapse"
of the remote wave packet at DR location as soon as the photon
was "found" at the location DT.

In order to quantify the violation of classicality, [1] defines
a coefficient g2 which is a normalized probability of joint
trigger of DT and DR (within the windows defined by DG trigger)
and is given via:

    g2 = P(GTR) / [P(GT)*P(GR)]                (1)

or in terms of the (ideal) counts as:

    g2 = N(GTR)*N(G) / [N(GT)*N(GR)]           (2)

where N(GTR) is the count of triple triggers, N(GT) of double
triggers on DG and DT, etc. The classical prediction is that g2>=1
(the equality g2=1 would hold for a perfectly steady laser source,
the "coherent light"). This inequality is eq (AJP.3). The quantum
"prediction" (eq AJP.8,13) is that for a single photon state TR,
the g2=0. The paper claims they obtained g2=0.0177 +/- 0.0026.
The accidental (background) coincidences alone would yield
g2a=0.0164, so g2-g2a is just 0.0013, well within the
standard deviation 0.0026 of the quantum prediction. Perfection.
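For concreteness, eq (2) can be evaluated directly. The counts below are hypothetical round numbers of my own choosing, picked only to land near the published g2; they are not the paper's (unpublished) raw counts:

```python
# Evaluate eq (2): g2 = N(GTR)*N(G) / [N(GT)*N(GR)].
# All counts here are hypothetical, for illustration only.
def g2_from_counts(n_g, n_gt, n_gr, n_gtr):
    """Normalized joint-trigger coefficient per eq (2)."""
    return n_gtr * n_g / (n_gt * n_gr)

# ~100,000 c/s gate rate, ~4,000 c/s each on the GT and GR coincidences,
# and a triple rate of a few counts/s, near the accidental background:
print(g2_from_counts(n_g=100_000, n_gt=4_000, n_gr=4_000, n_gtr=2.8))  # 0.0175
```

Note how a triple rate of only a couple of counts per second, i.e. essentially the accidental background, already reproduces a g2 of the claimed order.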

The two tiny, little clouds in this paradise of experimental
and theoretical perfection are:

a) while there is a QED prediction of g2=0, it is not for this
kind of detection setup (that's a separate story which we could
pursue later), and

b) the experiment doesn't show that (2) yields g2=0, since they
didn't measure at all the actual triple coincidence N(GTR) but
just a small part of it.

Let's see what was the sleight of hand in (b). We can look at the
coincident detections scheme as sampling of the EM fields T and R
where the sampling time windows are defined by the triggers of
gate DG. Here the sampling window was 2.5 ns, and they measured
around 100,000 counts/sec on DG (and 8,000 c/s on DT+DR). Thus
the sampled EM field represents just 0.25 ms out of each second.
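The duty-cycle arithmetic is worth making explicit; a few lines confirm the 0.25 ms figure from the rates quoted above:

```python
# Fraction of each second actually sampled = gate rate x window width.
gate_rate_hz = 100_000     # DG triggers per second
window_s = 2.5e-9          # 2.5 ns sampling window per trigger
sampled_per_second = gate_rate_hz * window_s
print(sampled_per_second)  # 0.00025 s, i.e. 0.25 ms out of each second
```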

The classical prediction g2>=1 applies for either continuous or
sampled measurements, provided the samples of GT and GR are
taken from the same position in the EM stream. For the coincidences
GT and GR, [1] does seem to select the properly matching sampling
windows since they tuned the GT and GR coincidence units (TAC/SCA,
see AJP p.1215-1216, sect C, Fig 5) to maximize each rate (they
unfortunately don't give any of the actual counts used to compute
their final results in AJP Table I via (2), but we'll grant them this).

Now, one would expect that, having obtained the sequence of properly
aligned GT and GR samples (say, two bit arrays of length N(G) of
0's and 1's), one would extract the triple coincidence count
N(GTR) by simply adding 1 to N(GTR) whenever both GT and
GR contain a 1 at the same position in the bit arrays.
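That straightforward extraction would look like this (toy bit arrays, one position per DG gate window; the values are made up for illustration):

```python
# Count triple coincidences by AND-ing the aligned GT and GR sample
# streams: one bit per gate window, 1 = that detector fired in it.
def triple_count(gt_bits, gr_bits):
    return sum(t & r for t, r in zip(gt_bits, gr_bits))

gt = [1, 0, 1, 0, 0, 1]   # toy GT samples
gr = [0, 0, 1, 1, 0, 0]   # toy GR samples
print(triple_count(gt, gr))  # 1: only one gate saw both detectors fire
```

The point is that no third, separately tuned unit is needed: N(GTR) is fully determined by the samples the GT and GR units already produced.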

But no, that's not what [1] does. For "mysterious" reasons
they add a third, separate coincidence unit (AJP.Fig 5; GTR)
which they tune on its own to extract its own separate sample
of EM fields. That alone is a gigantic loophole, a secret pocket
in the magician's sleeve. If the sampling windows GT and GR for
the new GTR unit are different enough from the earlier GT/GR
windows (e.g. shifted by just 1.25 ns in opposite directions),
the classical prediction via (2) will also be g2=0 (just background).
And just as they're about to finish, they pause at the door with an
'and by the way' look and say: "There is one last trick used in setting
up this threefold coincidence unit," where they explain how they
switch optical fibers, from DR to DG, then tune to G+T to stand
in for R+T coincidences, because they say "we expect an absence
of coincidences between T and R" (p 1216). Well, funny they
should mention it, since after all this, somehow, I am also
beginning to expect the absence of any GTR coincidences.
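A toy Monte Carlo illustrates the window-shift loophole described above. Classically, one gated down-conversion event puts simultaneous pulses into T and R at roughly the same delay after DG; every timing value below (pulse delay, jitter, shift) is my own illustrative assumption, not the actual, unavailable window positions of the GTR unit:

```python
import random

random.seed(0)

def gtr_count(t_center, r_center, half_width=1.25, n=100_000):
    """Count gates whose T and R detections both land inside the GTR
    unit's windows. T and R share one arrival time, since classically
    the two beam-splitter outputs of a single pulse are simultaneous."""
    hits = 0
    for _ in range(n):
        arrival = random.gauss(6.0, 0.3)       # shared pulse delay, ns
        t = arrival + random.gauss(0.0, 0.05)  # small independent jitter
        r = arrival + random.gauss(0.0, 0.05)
        if abs(t - t_center) < half_width and abs(r - r_center) < half_width:
            hits += 1
    return hits

matched = gtr_count(6.0, 6.0)      # windows centered on the real pulses
shifted = gtr_count(4.75, 7.25)    # shifted 1.25 ns in opposite directions
print(matched, shifted)            # shifted rate drops by over an order of magnitude
```

Because the simultaneous T and R pulses cannot satisfy two disjoint windows at once, the shifted GTR unit counts almost nothing even though every gate carries a perfectly classical, correlated pulse pair.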

Also, unlike the GT and GR units which operated in START/STOP
TAC mode the GTR unit operated in START GATE mode, where a
separate pulse from DG is required to enable the acceptance
of DT and DR signals (here the DT was used as START and DR as
STOP input to TAC/SCA, see fig 5, while in the other two units
G was used for START and T or R for STOP). It surely is getting
curiouser and curiouser, all these seeming redundancies with all
their little differences.

The experiment [3] also had a 3rd GTR unit with its own tuning,
but they didn't give any details at all. The AJP authors [1] give
only the qualitative sketch, but no figures on the T and R sampling
window positions for GTR unit (e.g. relative to those from GT and
GR units) were available from the chief author. Since the classical
prediction is sensitive to the sampling window positions, and can
easily produce via (2) anything from g2=0 to g2>1, just by changing
the GTR windows, this is a critical bit of data the experimenters
should provide, or at least they should mention how it was checked
and what the windows were.

Of course, after that section, I was about ready to drop it
as yet another phony 'quantum magic show'. Then I noticed
at the end they give part numbers for their TAC/SCA units,
(p1218, Appendix C): ORTEC TAC/SCA 567. The data sheet for
the model 567 lists the required delay of 10ns for the START
(which was here DT signal, see AJP.Fig 5) from the START GATE
signal (which was here DG signal) in order for START to get
accepted. But the AJP.Fig 5, and the several places in the
text give their delay line between DG and DT as 6 ns. That
means that when DG triggers at t0, 6 ns later (+/- 1.25 ns)
DT will trigger (if at all), but the TAC will ignore it,
since it won't be ready yet, nor for another 4 ns. Then,
at t0+10 ns the TAC is finally enabled, but without a START
no event will be registered. The "GTR" coincidence rate
will be close to the accidental background (slightly above it:
if the real T photon doesn't trip DT and a subsequent background
hit trips DT at t0+10 ns, then a DR trigger at t0+12 ns, which
is now more likely than pure background, will allow the
registration). And that is exactly the g2 they claim (other
papers claim much smaller violations, and only on subtracted
data, not the raw data, which is how it should be understood:
"nonclassicality" as a Quantum Optics term of art, not anything
nonclassical for real).
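The timing argument can be captured in a few lines. The 10 ns enable figure is the ORTEC 567 data-sheet requirement described above; the pulse times are the paper's stated values:

```python
# ORTEC 567 in START GATE mode: a START pulse is accepted only if it
# arrives at least ~10 ns after the START GATE signal (data-sheet figure).
TAC_ENABLE_DELAY_NS = 10.0

def start_accepted(gate_t_ns, start_t_ns):
    """True if the START pulse (here DT) falls after the TAC has been
    enabled by the START GATE pulse (here DG)."""
    return start_t_ns - gate_t_ns >= TAC_ENABLE_DELAY_NS

# With the paper's stated 6 ns DG->DT delay line, the genuine DT pulse
# arrives while the TAC is still disabled and is dropped:
print(start_accepted(0.0, 6.0))    # False
# A late background hit at t0+12 ns would be accepted instead:
print(start_accepted(0.0, 12.0))   # True
```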

So, as described in [1], the GTR unit settings would cut off
almost all genuine GTR coincidences, yielding the "perfect"
g2 they report. When I raised this inconsistency with the chief
author, it turned out that, oh, he knew it all along, and they had
actually used the proper delay (not the 6 ns stated in the
paper), but the paper was too long for such details. Lemme see,
they wrote the paper and the AJP told them to shorten it to such
and such length. Now, say, the six of them sweated for a day
editing the text, and just had it at about the right length,
except for 5 extra characters. Now, having cut out all they could
think of, they sit at the table wondering, vot to do? Then suddenly
lightning strikes and they notice that if they were to replace
the delay they actually used (say 15ns, or anything above 11.5,
thus 2+ digit anyway) with 6 ns they could shorten the text by
10 characters. Great, let's do it, and so they edit the paper,
replacing the "real" delay of say 15ns with the fake delay of 6ns.
Yep, that's how the real delay which was actually and truly greater
than 10ns must have become the 6ns reported in the paper in 10 places.
Very plausible - it was the paper length that did it.

And my correspondent ended his reply with: "If I say I measured
triple coincidences (or lack thereof) then I did. Period. End of
discussion." Yes, Sir. Jawohl, Herr Professor.

2. J. F. Clauser, "Experimental distinction between the quantum and
classical field-theoretic predictions for the photoelectric effect,"
Phys. Rev. D 9, 853-860 (1974).

3. P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for
a photon anticorrelation effect on a beam splitter: A new light on
single-photon interferences," Europhys. Lett. 1, 173-179 (1986).

4. R. J. Glauber, "Optical coherence and photon statistics" in Quantum Optics and Electronics, ed. C. de Witt-Morett, A. Blandin, and C. Cohen-Tannoudji (Gordon and Breach, New York, 1965), pp. 63–185.

Mentor
Blog Entries: 27

If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion. Zz.

 Quote by ZapperZ If this is true, then you should write a REBUTTAL to AJP and see if the authors will respond formally. If not, your complaint here will simply disappear into oblivion. Zz.
You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had the wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make a time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.

Mentor
Blog Entries: 27


 Quote by nightlight You can check the paper and the ORTEC data sheet at the links provided. It is, of course, true. Even the chief author acknowledges the paper had the wrong delays. As to writing to AJP, I am not subscribed to AJP any more (I was getting it for a few years after grad school). I will be generous and let the authors issue their own errata, since I did correspond with them first. Frankly, I don't buy their "article was too long" explanation -- it makes no sense that one would make a time delay shorter because the "article was already too long". Plain ridiculous. And after the replies I got, I don't believe they will do the honest experiment either. Why confuse the kids with messy facts, when the story they tell them is so much more exciting.
I have been aware of the paper since the week it appeared online and have been highlighting it ever since. If it is true that they either messed up the timing, or simply didn't publish the whole picture, then the referee didn't do as good a job as he/she should, and AJP should be made aware via a rebuttal. Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.

Furthermore, the section that this paper appeared in has no page limit as far as I can tell, if one is willing to pay the publication cost. So them saying the paper was getting too long is a weak excuse.

However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is a frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.

Zz.

Recognitions:
Gold Member
Staff Emeritus
 Quote by ZapperZ However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. Accounting for dark counts has ALWAYS been a problem with single photon detectors such as this - this is a frequent ammo for some people to point at the so-called detection loophole in EPR-type experiments. But this doesn't mean such detectors are completely useless and cannot produce reliable results either. This is where the careful use of statistical analysis comes in. If this is where they messed up, then it needs to be clearly explained.
Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply; that doesn't mean you should complain that he forgot to power his devices.
You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you?

It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus.

cheers,
Patrick.

 Quote by ZapperZ Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal.
Well, if I write it, I would like to see it, and see the replies, etc.

 Quote by ZapperZ However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained.
It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim a nearly perfect "quantum" g2, without any subtractions (of accidental coincidences and of unpaired DG triggers, which lowers g2 substantially via eq (2), where N(G) would drop from 100,000 c/s to 8,000 c/s). If true, it would imply a genuine physical collapse in a spacelike region. If they had it for real, you would be looking at Nobel prize work. But it just happens to be contrary to the facts, experimental or theoretical. Check for example a recent preprint by Chiao & Kwiat, where they do acknowledge in their version that no real remote collapse was shown, although they still like the collapse imagery (for its heuristic and mnemonic value, I suppose). Note also that they acknowledge that a classical model can account for any g2 >= eta (the setup efficiency, when accounting for the unpaired DG singles in the classical model), which is much smaller than 1. Additional subtraction of background accidentals can lower the classical g2 still further (if one accounts for it in the classical model of the experiment). The experimental g2 would have to go below both effects to show anything nonclassical -- or, put simply, the experiment would have to show the violation on the raw data, and that has never happened.
The PDC source, though, cannot show anything nonclassical since it can be perfectly modelled by local semiclassical theory, the conventional Stochastic Electrodynamics (see also last chapter in Yariv's Quantum Optics book on this).
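The quantitative effect of the unpaired DG triggers in eq (2) is easy to see: hold the coincidence counts fixed and swap the raw gate rate for the paired-only rate. Both gate figures mirror the 100,000 vs 8,000 c/s numbers above; the coincidence counts are hypothetical:

```python
# Eq (2) with hypothetical counts: g2 scales linearly with N(G), so
# counting unpaired DG triggers in N(G) inflates g2 by 100000/8000.
def g2(n_g, n_gt, n_gr, n_gtr):
    return n_gtr * n_g / (n_gt * n_gr)

n_gt = n_gr = 4_000   # hypothetical GT and GR coincidence rates (c/s)
n_gtr = 2.8           # hypothetical triple rate near background (c/s)
print(g2(100_000, n_gt, n_gr, n_gtr))  # 0.0175, all DG triggers in N(G)
print(g2(8_000, n_gt, n_gr, n_gtr))    # 0.0014, paired DG triggers only
```

This is why, once the unpaired singles are subtracted, the same coincidence data yields a far smaller g2; the subtraction itself is a perfectly classical operation.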

Mentor
Blog Entries: 27
 Quote by vanesch Honestly, I think there is no point. In such a paper you're not going to write down evident details. What nightlight is complaining about is that the authors didn't take into account an extra delay in a component. I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ; at least that is what I gather from the response of the original author. So he didn't bother writing down these experimentally evident details. He also didn't bother to write down that he put in the right power supply ; that doesn't mean you should complain that he forgot to power his devices. You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ? It can happen, but most of the time, people writing up an experimental publication are not completely incompetent nutcases who don't know how to use their apparatus. cheers, Patrick.
If those ARE the missing delays that are the subject here, then yes, I'd agree with you. *I* personally do not consider those in my electronics, since we have calibrated them to make sure we measure the time parameters of the beam dynamics rather than our electronics delays.

Again, the important point here is the central point of the result. Would that significantly change if we consider such things? From what I have understood, I don't see how that would happen. I still consider this as a good experiment for undergrads to do.

Zz.

Mentor
Blog Entries: 27
 Quote by nightlight Being a "subscriber" is irrelevant. ANYONE can write a rebuttal to ANY paper in any journal. Well, if I write I would like to see it, and see the replies, etc.
OK, so now YOU are the one offering a very weak excuse. I'll make a deal with you. If you write it, and it gets published, *I* will personally make sure you get a copy. Deal?

 However, what I can gather from what you wrote, and what have been published, is that including everything doesn't change the results but may change the degree of the standard deviation that they obtained. It changes quite a bit. The semi-classical model works perfectly here, when one models the subtractions as well (Marshall & Santos wrote about it way back when Grangier et al published their version in 1986). The problem with this AJP paper is that they claim nearly perfect "quantum" g2, without any subtractions (of accidental coincidences and of unpaired DG triggers, which lowers substantially g2, via eq (2) where N(G) would drop from 100,000 c/s to 8000 c/s) . If true, it would imply genuine physical collapse at spacelike region. If they had it for real, you would be looking at a Nobel prize work. But, it just happens to be contrary to the facts, experimental or theorietical.
There appears to be something self-contradictory here, Caroline. You first do not believe their results, and then you say: look, it also works well when explained using a semi-classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.

If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.

Zz.

 Quote by vanesch What nightlight is complaining about is that the authors didn't take into account an extra delay in a component.
Not quite so. If the delay between the DG and DT pulses was more than 6 ns, say it was 15 ns, why was 6 ns written in the paper at all? I see no good reason to change the reported delay from 15 ns to 6 ns. There is no economy in stating it was 6 ns.

 Quote by vanesch I'm pretty sure they did, and published only the EFFECTIVE delays after having taken these details into account ;
There is no effective vs real delay. It is a plain delay between a DG pulse and DT pulses. It is not like it would have added any length to the article to speak of.

 Quote by vanesch You know, the amplifiers and discriminators also have small time delays. You're not going to write all those technicalities down in a paper, are you ?
And the detector has latency and dead time, etc. That's not relevant. Talking about needless elements here: adding the third unit, going to the trouble of tuning its independent sampling of the EM fields on T and R, and describing it all, is just fine. But saying that the delay was 15 ns instead of 6 ns is way too much trouble. Yeah, sure, that sounds very plausible. If I wanted to do the collapse magic trick, I would add that third unit, too. There are so many more ways to reach "perfection" with that unit in there than by simply reusing the already obtained samples from the GT and GR units (as an AND-ed signal). In any case, are you saying you can show a violation (g2<1) without any subtractions, on raw data?

Mentor
Blog Entries: 27
 Quote by nightlight There appears to be something self-contradictory here, Caroline. I am not Caroline. For one, my English (which I learned when I was over twenty) is much worse than her English. Our educational backgrounds are quite different, too.
1. It's interesting that you would even know WHO I was referring to.

3. And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!

Zz.

 Quote by ZapperZ There appears to be something self-contradictory here.... You first do not believe their results, and then you said look, it also works well when explained using semi classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it.
No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (of the accidentals and the unpaired singles; the two can be traded off by the settings on the detectors, the window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1, we define such a phenomenon as "nonclassical"). A classical model which also performs the same subtraction on its predictions will work fine.

 Quote by ZapperZ If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion.
As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6 ns and give the correct value, if that is true (which, from their g2 of nearly 0, I don't think it is, unless they had a few backup "last tricks" already active in the setup). But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).

Mentor
Blog Entries: 27
 Quote by nightlight There appears to be something self-contradictory here.... You first do not believe their results, and then you said look, it also works well when explained using semi classical model. Now either you discard it completely and don't use it, or you buy it and deal with the fact that two different descriptions, at best, can explain it. No, that's not what I said. You can't get what they claim, the violation of g2>=1, without subtractions (of the accidentals and the unpaired singles; the two can be traded off by the settings on the detectors, the window sizes, etc). But if you do the subtractions, and if you model semi-classically not just the raw data but also the subtractions themselves, then the semiclassical model is still fine. The subtraction-adjusted g2, say g2', is now much smaller than 1, but that is not a violation of "classicality" in any real sense, merely a convention of speech (when the adjusted g2' is below 1, we define such a phenomenon as "nonclassical"). A classical model which also performs the same subtraction on its predictions will work fine.
Sorry, but HOW would you know it "will work fine" with the "classical model"? You haven't seen the full data, nor have you seen what kind of subtraction has to be made. Have you ever performed a dead-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate.

 If you claim that the result DOES change, then put your money where your mouth is and send a rebuttal. If not, this goes into oblivion. As I said, being a nice guy, I will let them issue their own errata. They can simply say that the delays were not 6ns and give the correct value, if that is true (which from their g2 nearly 0, I don't think it is, unless they had few backup "last tricks" already active in the setup) . But if they do the experiment with the longer delay they won't have g2<1 on raw data (unless they fudge it in any of the 'million' other ways that their very 'flexible' setup offers).
No, you are not being "nice" at all. In fact, all I see is a cop-out. Being "nice" means taking the responsibility to correct something that one sees as a possible error or misleading information and informing the community in question about it. You refused to do that. Instead, you whined about it here, where it will possibly make ZERO impact on anything. So why even bother in the first place? The authors may not make any corrections at all based on what you have described as their response, so most likely the paper will stand AS IS. They will continue to get the recognition; you get NOTHING.

Zz.

Mentor
Blog Entries: 27
 Quote by nightlight 1. It's interesting that you would even know WHO I was referring to. Why is it interesting? Haven't you argued with her here quite a bit, not long ago? And she has been all over the internet on this topic for several years. Despite her handicaps in this field (of being neither a physicist nor a male), she is quite a warrior, and at the bottom of it she is right on the experimental claims of QM/QO.
Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She also has repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics.

So the person you championed is ignorant of the very subject she's criticizing, by her own admission.

 Quote by ZapperZ And it appears that you will not put your money where your mouth is and send a rebuttal to AJP. Oblivion land, here we come!
Nothing goes into oblivion, not even a butterfly flapping its wings somewhere in Hawaii. Writing to AJP, where they will censor anything they disagree with ideologically, is not the kind of pursuit I care about. I left academia and work in industry to stay away from their kind of self-important politician-scientists. I might post it to usenet, though.
I agree. The kind of info you are advertising fits very nicely in the bastion of the Usenet.

It also appears that this conspiracy theory about physics journals has finally reared its ugly head again. Have you ever TRIED submitting to AJP, ever? Or had you already made up your mind and used that prejudice in deciding what they will do? It is ironic that you are criticizing a paper in which you claim they have possibly made a biased decision on what to select, and yet you practice the very same thing here. I hate to think you do this often in your profession.

Zz.

Recognitions:
Gold Member
 Quote by nightlight You refused to do that. Instead, you whined about it here, where it will possibly make ZERO impact on anything. Well, let's see if this paper (of which I first heard here) gets trotted out again as a definite proof of collapse or photon indivisibility, or some such. It was a magic trick for kids and now you know it, too. As to the AJP letter's impact, it would have zero impact there, if they were to publish it at all, since the authors of [1] would dismiss it as they did in email, and however phony their excuse sounds ("article was already too long" - give me a break), that would be the end of it. No reply allowed. And the orthodoxy vs heretics score reads once again 1:0. When, years back, Marshall and Santos challenged Aspect's experiments, after Aspect et al replied, their subsequent submissions (letters and papers) were turned down by the editors as of no further interest. That left the general impression, false as it was (since the other side had the last word), that Aspect's side won against the critics, thus adding an extra feather to their triumph. Marshall and Santos have been battling it this way and it just doesn't work. The way the "priesthood" (Marshall's label) works, you just help them look better and you end up burned out if you play these games, which they have set up so that half of the time they win and the other half you lose.
That is correct, they were rebuffed in their attempts to rebut Aspect. And for good reason: they were wrong, although many (including Caroline and yourself) can't see it. It is clear which direction things are going in: more efficiency means greater violation of the local realistic position. So naturally you don't want the Thorn paper to be good; it completely undermines your theoretical objections to Bell tests - i.e. that photons are really waves, not particles.

But you have missed THE most important point of the Thorn paper in the process. It is a cost-effective experiment which can be repeated in any undergraduate laboratory. When it is repeated your objections will eventually be accounted for and the results will either support or refute your hypothesis.

I do not agree that your objections are valid anyway, but there is nothing wrong with checking it out if resources allow. Recall that many felt that Aspect had left a loophole before he added the time-varying analyzers. When he added them, the results did not change. A scientist would conclude that a signal is not being sent from one measuring apparatus to the other at a speed of c or less. Other "loopholes" have been closed over the years as well. Now there is little left. New experiments using PDC sources don't have a lot of the issues the older cascade type did. But I note that certain members of the LR camp operate as if nothing has changed.

Make no mistake about it: loopholes do not invalidate all experimental results automatically. Sometimes you have to work with what you get until something better comes along. After all, almost any experiment has some loophole if you look at it as hard as the LR crew has.

I wonder why - if the LR position is correct - the Thorn experiment supports the quantum view? If it is a "magic trick", why does it work? Hmmm.

Mentor
Blog Entries: 27
 Quote by nightlight Then you must have missed her claims on Yahoo groups that "Maxwell equations are just math" when I asked her to derive the wave equation from them (she keeps claiming that light is ONLY a wave, yet she doesn't know what a "wave" is). She also has repeatedly claimed that "QM is dead" despite admitting she knows very little of it and of classical mechanics. That has no relation to what I said.
Yes it does, because you were the one who made her out to be what she's not - someone who possesses the knowledge to know what she's talking about. Unless you believe that one can learn physics in bits and pieces, and that its areas are not interconnected, knowing one aspect of it does not necessarily mean one has understood it. This is what she does, and this is whom you are touting.

 I agree. The kind of info you are advertizing fits very nicely in the bastion of the Usenet.... It also appears that this conspiracy theory about physics journals has finally reared its ugly head again.... Why don't you challenge the substance of the critique, instead of drifting into irrelevant ad hominem tangents. This is, you know, the physics section here. If you want to discuss politics there are plenty of other places you can have fun.
But it IS physics - it is the PRACTICE of physics, which I do every single working day. Unlike you, I cannot challenge something based on what you PERCEIVE as the whole raw data. While you have no qualms in already making a definitive statement that the raw data would agree with "semi-classical" model, I can't! I haven't seen this particular raw data and simply refuse to make conjecture on what it SHOULD agree with.

Again, I will ask you if you have made ANY dark current or dark counts measurement, and have looked at the dark count energy spectrum, AND based on this, if we CAN make a definitive differentiation between dark counts and actual measurements triggered by the photons we're detecting. And don't tell me this is irrelevant, because it boils down to the justification of making cuts in the detection count not only in this type of experiments, but in high energy physics detectors, in neutrino background measurements, in photoemission spectra, etc.. etc.

Zz.

Mentor
Blog Entries: 27
 Quote by nightlight Have you ever performed a dead-count measurement on a photodetector and looked at the energy spectrum from a dark count? This is a well-studied area and we do know how to distinguish between these and the actual count rate. Engineering S/N enhancements, while certainly important, e.g. when dealing with TV signals, have no place in these types of experiments. In these experiments (be it this kind of beam splitter "collapse" or Bell inequality tests) the "impossibility" is essentially of an enumerative kind, like a pigeonhole principle. Namely, if you can't violate the classical inequalities on raw data, you can't possibly claim a violation on subtracted data, since any such subtraction can be added to the classical model; it is a perfectly classical extra operation. The only way the "violation" is claimed is to adjust the data and then compare it to the original classical prediction that didn't perform the same subtractions on its predictions. That kind of term-of-art "violation" is the only kind that exists so far (the "loophole" euphemisms aside).
Wait... so how did what you wrote answer the question I asked?

This isn't just a S/N ratio issue! It goes even deeper than that in the sense of how much is known about dark current/dark counts in a photodetector. It was something I had to deal with when I did photoemission spectroscopy, and something I deal with NOW in a photoinjector accelerator. And considering that I put in as much as 90 MV/m of E-field in the RF cavity, I'd better know damn well which ones are the dark currents and which are the real counts.

I will ask for the third time: Have you performed dark counts measurement on a photodetector and measure the dark current spectrum from it?

Zz.