Young's Experiment: Exploring Wave-Particle Duality

  • Thread starter Cruithne
In summary, this thread discusses a phenomenon observed in Young's experiment that seems to contradict what is said in other accounts of the experiment. The mystery is that even when light is treated as individual particles (photons), it still produces behaviour implying that it is acting as a wave. Additionally, the interference patterns produced were not the result of any observations.
  • #71
vanesch See, I didn't need any projection as such...

I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR= should suffice to convince you that you're using the non-dynamical collapse (Dirac's quantum jump). He explains it by picking apart the common obfuscatory verbiage, using the Landau-Lifshitz and Gottfried QM textbooks as the examples. You'll also find that his view of the collapse, measurement and the teaching of the two is not very different from what I was saying here. Among other points I agree with, he argues that the collapse should be taught as a consequence of the dynamics, not in addition to it (as a postulate). He returns to and discusses Schroedinger's original interpretation (the view that |Psi|^2 is a density of "stuff"). Then he looks at the ways to remedy the packet spread problem that Schroedinger found (schemes such as the de Broglie-Bohm theory and the Ghirardi-Rimini-Weber dynamical, thus non-linear, collapse). Note that this is an open problem in Barut's approach as well, even though he had some toy models for the matter field localizations (from which he managed to pull out a rough approximation for the fine structure constant).



 
  • #72
nightlight said:
this is normally accomplished by a combination of adjustments to the detector's sensitivity and the post-detection subtraction of the background rate).

Well, you're now in my field of expertise (which is particle detectors) and this is NOT how it works, you know. When you have gas or vacuum amplification, this is essentially noiseless; the "detector sensitivity" is completely determined by the electronic noise of the amplifier and the (passive) capacitive load of the detector, and can be calculated using simple electronics simulations which do not take into account any detector properties except for its capacitance (C).
A good photomultiplier combined with a good amplifier has a single-photon signal which stands out VERY CLEARLY (tens of sigma) from the noise. So the threshold of detection is NOT adjusted "just above the vacuum noise" as you seem to imply (or I misunderstood you).

cheers,
Patrick.


PS: you quote a lot of articles to read ; I think I'll look at (some) of them. But all this takes a lot of time, and I'm not very well aware of all these writings. I'm still convinced those approaches are misguided, but I agree that they are interesting, up to a point. Hey, maybe I'll change sides :-)) Problem is, there's SO MUCH to read.
 
  • #73
Nightlight ... I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR=

It is a good paper for a first introduction to the decoherence program.
Note that "collapse" does not mean that the quantum state |psi> has really collapsed into a new state |An>. It is always a conditional state: the quantum state |An> given that a measurement apparatus has yielded the value gn (as in probability theory).

Seratend.
 
  • #74
seratend said:
Nightlight ... I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR=

It is a good paper for a first introduction to the decoherence program.
Note that "collapse" does not mean that the quantum state |psi> has really collapsed into a new state |An>.

Yes, exactly. In fact, the funny thing is that the decoherence program seems to say that people who insist on the nonlinearity of the measurement process didn't take the linearity seriously enough :tongue:
However, it is true that the decoherence program, in itself, still doesn't completely solve the measurement problem, and any work that can shed more light on it can be interesting.
What disturbs me a bit in the approach suggested by nightlight and the people he cites is not so much that "standard theory is offended", but that they obstinately seem to reject the EPR experimental results EVEN if, by fiddling around a lot, there are still many ways to explain the results by semiclassical approaches (or at least POTENTIALLY explain them). I'm maybe completely missing the point, but I think that there are 2 ways of comparing experimental results to theoretical predictions. One is to try to "correct the measurements and extract the ideal quantities". Of course that looks like fudging the data. But the other is: taking the ideal predictions, applying the experimentally expected transformations (like efficiencies and so on), and comparing that with the raw data. Both procedures are of course equivalent, but the second one seems to be much more "acceptable". As far as I know, all EPR type experiments agree with the predictions of QM EVEN IF ONE COULD THINK OF SPECIFIC SEMICLASSICAL THEORIES almost made up for the purpose of obtaining the same results - potentially. This, to me, is for the moment sufficient NOT TO REJECT the standard theory, which, at least, with simple "toy models" and "toy experimental coefficients", obtains correct results.

cheers,
Patrick.
 
  • #75
vanesch A good photomultiplier combined with a good amplifier has a single-photon signal which stands out VERY CLEARLY (tens of sigma) from the noise. So the threshold of detection is NOT adjusted "just above the vacuum noise" as you seem to imply (or I misunderstood you).

You can change the PM's bias voltage and thus shift along its sensitivity curve (a sigmoid type function). That simply changes your dark current rate/vacuum fluctuations noise. The more sensitive you make it (to shrink the detection loophole), the larger the dark current, thus the larger the explicit background subtraction. If you wish to have lower background subtractions you select a lower PM sensitivity. It is a tradeoff between the detection and the subtraction "loopholes".

In Aspect's original experiment, they found the setup to be most efficient (most accepted data points per unit time) when they tuned the background subtraction rate to be exactly equivalent to the background DC offset on the cos^2() curve that the natural classical model predicts as necessary.

This was pointed out by Marshall, who demonstrated a simple classical, non-ZPF EM model with exactly the same predictions as the pre-subtraction coincidence rates Aspect reported. After some public back and forth via letters and papers, Aspect repeated the experiments with the subtractions reduced below what the simple classical model required (by lowering the sensitivity of the detectors) and still "violated" the Bell inequality (while the subtraction loophole shrank, the detection loophole grew). Then Marshall & Santos worked out the initial versions of their ZPF based model (without a good detection model at the time) which matched the new data, but by this time the journals had somehow lost interest in the subject, leaving Aspect with the last word in the debate.
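
For concreteness, here is a minimal numeric sketch of that DC-offset point (my own toy illustration, not Marshall's actual model): the simplest classical picture, a shared random polarization with a Malus-law response at each arm, gives a coincidence curve 1/4 + (1/8)cos(2*theta), i.e. 50% visibility, and subtracting a flat background of 1/8 from it yields (1/4)cos^2(theta), exactly the shape of the "ideal" QM prediction (1/2)cos^2(theta) up to overall scale:

import numpy as np

theta = np.linspace(0.0, np.pi, 181)      # analyzer angle difference
lam = np.linspace(0.0, np.pi, 20001)      # shared hidden polarization, uniform over [0, pi)

# simplest classical model: Malus-law intensities at both arms, averaged over lambda
classical = np.array([np.mean(np.cos(lam)**2 * np.cos(lam - t)**2) for t in theta])
qm_ideal = 0.5 * np.cos(theta)**2         # "ideal" QM coincidence curve
subtracted = classical - 0.125            # remove the flat 1/8 "background"

# classical  ~ 1/4 + (1/8)cos(2*theta)  -> 50% visibility
# subtracted ~ (1/4)cos^2(theta)        -> 100% visibility, the QM shape up to scale
print(np.max(np.abs(classical - (0.25 + 0.125 * np.cos(2 * theta)))))  # ~ 0 (numerical)
print(np.max(np.abs(subtracted - 0.5 * qm_ideal)))                     # ~ 0 (numerical)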

In addition to tuning a particular PM, changing the type of detector as well as the photon wavelength (this change affects the analyzing efficiency of the polarizer, basically offsetting any efficiency gains you made on the detection) by itself selects different tradeoff points between the dark rate and the quantum efficiency.

Since the classical predictions are within a 1/2 photon equivalent of the "ideal" QM prediction, and since more realistic QED models do use the "normal ordering" of a+, a- operators (which amounts to subtracting the divergent vacuum energy of 1/2 photon per mode; different ordering rules change the type of joint distribution, e.g. from Husimi to Wigner), it is highly doubtful that anything decisive can ever come out of the photon experiments. Separately, the popular source of recent years, PDC, is 100% equivalent to the classical ZPF based EM predictions for any coincidence measurements, with any number of detectors and any number of linear components (polarizers, beam splitters, etc). Just as one didn't have to bother debunking blow by blow every non-classicality claim based on linear optical components and thermal sources from 1956 (Hanbury Brown & Twiss effect) or laser sources from 1963 (Sudarshan, Glauber), since the late 1990s one doesn't need to do it with PDC sources either. The single atomic sources are still usable (if one is a believer in the remote, non-dynamical, instant collapse), but the three body problem has kept these setups even farther from the "ideal".
 
  • #76
vanesch but let's do a very simple calculation. ...

Simple, indeed. I don't wish to excessively pile on references, but there is an enormous amount of detailed analysis on this setup.

Take detector 1 as the "master" detector. ... Each time that detector 1 has seen a photon, we ASSUME - granted - that there has been a photon in branch 2 of the experiment,

The master trigger may be a background event as well. Also, you have 2 detectors for A and 2 for B, and thus you have 2^4-1 combinations of events to deal with (ignoring the 0,0,0,0).

and that two things can happen: or that the second photon was in the wrong polarization state, or that it was in the right one.

Or that there is none, since the "master" trigger was a vacuum fluctuation, amplification noise, or any other noise. Or that there is more than one photon (note that the "master" has a dead time after a trigger). Keep in mind that even if you included the orbital degrees of freedom and followed the amplitudes in space and time, you are still putting in the non-relativistic QM approximation of a sharp, conserved particle number, which isn't true for QED photons.

If you compute QED correlations for the coincidences using "normal operator ordering" (Glauber's prescription, which is the canonical Quantum Optics way), you're modelling a detection process which is calibrated to subtract vacuum fluctuations, which yields Wigner joint distributions (in phase space variables). The "non-classicality" of these distributions (the negative probability regions) is always equivalent, for any number of correlations, to classical EM with ZPF subtractions (see the Marshall & Santos papers for these results).

If your correlation computation doesn't use normal operator ordering to remove the divergent vacuum fluctuations from the Hamiltonian and instead uses a high frequency cutoff, you get the Husimi joint distribution in phase space, which has only positive probabilities, meaning it is a perfectly classical stochastic model for all photon correlations (it is mathematically equivalent to the Wigner distribution with its negative probability regions smoothed out by a Gaussian; the smoothing is physically due to the indeterminacy of the infrared/soft photons). Daniele Tommasini has several papers on the QED "loophole" topic (it surely takes some chutzpah to belittle the more accurate QED result as a "loophole" for the approximate QM result).
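
For reference, the smoothing relation mentioned here is (in the usual normalization conventions) the Gaussian convolution

Q(\alpha) = \frac{2}{\pi} \int W(\beta)\, e^{-2|\alpha-\beta|^{2}}\, d^{2}\beta ,

i.e. the Husimi function Q is the Wigner function W smeared over a phase-space cell of the size set by the vacuum fluctuations, which is what washes out the negative regions.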

So I'd guess that the quantum prediction to have coincidences in channel 2, if channel 1 triggered, to be equal to e2xC_ideal...

What you're trying to do is rationalize the fair sampling assumption. The efficiencies are features of averages of detections over Gaussian (for high rates) or Poisson (for low rates) detection event distributions, neither of which is usable for Bell violation tests, since the classical models' trigger rates are equivalent to the "ideal" QM prediction smoothed out/smeared by precisely these types of distributions.

The QED "loophole" is not a problem in the routine Quantum Optics applications. But if you're trying to distinguish QM "ideal" model from a classical model, you need much sharper parameters than the averages over Poisson/Gaussian distributions.

In addition to the QED "loophole", the assumption that the average ensemble properties (such as the coincidence detection efficiency, which is a function of the individual detectors' quantum efficiencies) must also be properties of the individual pairs has another, even more specific problem in this context.

Namely, in a classical model you could have a hidden variable shared by the two photons (which was set at the pair creation time and which allows the photons to correlate nearly perfectly for parallel polarizers). In the natural classical models this variable is a common and specific polarization orientation. For the ensemble of pairs this polarization is distributed equally in each direction. But for any individual pair it has a specific value (thus the rotational symmetry of the state is an ensemble property which need not be a property of an individual pair in LHV theories; and it is not rotationally symmetric for individual pairs even in the most natural classical models, i.e. you don't need some contrived toy model to have this rotational asymmetry at the individual pair level while retaining the symmetry at the ensemble level).

Now, the splitting on the polarizer breaks the amplitudes into sin() and cos() projections relative to the polarizer axis for the (+) and (-) results. But since the detection probability is sensitive to the square of the amplitude incident on each detector, this split automatically induces a different total probability of coincident detection for each individual pair (for general polarizer orientations). The quantum efficiency figure is an average, insensitive to this kind of systematic bias which correlates the coincidence detection probability of an individual pair with the hidden polarization of this pair, and thus with the result for the pair.

This classically perfectly natural trait is a possibility that the "fair sampling" assumption excludes upfront. The individual pair detection (as a coincidence) may be sensitive to the orientation of the hidden polarization relative to the two polarizer orientations even though the ensemble average may lack this sensitivity. See Santos's paper on the absurdity of "fair sampling" (due to this very problem) and also a couple of papers by Khrennikov which propose a test of "fair sampling" for the natural classical models (which yield a prediction for detecting this bias).
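
As a toy numeric illustration of this bias mechanism (my own sketch with made-up detection laws, not the Marshall & Santos or Khrennikov models): give each pair a shared hidden polarization and let the single-arm detection probability depend nonlinearly on the Malus-law intensity reaching that detector. The coincidence curve computed on the detected subsample then has a visibility well above the 50% of the full classical ensemble, i.e. the detected pairs are not a fair sample:

import numpy as np

def coincidence_visibility(power, n=200001):
    # per-arm detection probability proportional to (Malus intensity)**power
    lam = np.linspace(0.0, np.pi, n)            # shared hidden polarization, uniform
    theta = np.linspace(0.0, np.pi / 2, 91)     # relative analyzer angle
    rate = np.array([np.mean((np.cos(lam)**2)**power * (np.cos(lam - t)**2)**power)
                     for t in theta])           # detected-coincidence rate vs theta
    return (rate.max() - rate.min()) / (rate.max() + rate.min())

print(coincidence_visibility(1))   # ~0.50 : detection linear in intensity, no bias
print(coincidence_visibility(2))   # ~0.84 : detection biased toward well-aligned pairs

(For comparison, a sinusoidal coincidence curve needs a visibility above about 71% before a CHSH-type analysis of the detected counts starts reading it as "non-classical".)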
 
  • #77
nightlight said:
vanesch but let's do a very simple calculation. ...

Simple, indeed. I don't wish to excessively pile on references, but there is an enormous amount of detailed analysis on this setup.

May I ask you something ? Could you indicate a "pedagogical reading list" in the right order for me to look at ? I'm having the impression that in these papers I'm being sent from reference to reference, with no end to it (all papers have this trait of course, it is just that normally, at a certain point, this is absorbed by "common knowledge in the field").
I think this work is interesting because it illustrates what exactly is "purely quantum" and what not.

cheers,
Patrick
 
  • #78
vanesch May I ask you something ? Could you indicate a "pedagogical reading list" in the right order for me to look at ?

I am not sure which specific topic you want the list for. In any case, I haven't used or kept any such list. Marshall & Santos have perhaps a couple hundred papers, most of which I have in paper preprint form. I would say that one can find their more recent equivalents/developments of most of what was worth pursuing (http://homepages.tesco.net/~trevor.marshall/antiqm.html has a few basic reference lists with informal intros to several branches of their work). The phase space distributions are a well developed field (I think the Marshall & Santos work here was the first rational interpretation of the negative probabilities appearing in these distributions).

The http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR= servers have most of Barut's material (I did have to get some papers which are missing there as paper copies via mail from ICTP, and some hard to find ones from his former students).

There is also E. T. Jaynes's publication archive (I found especially interesting #71, #74, #68, #67, and the unpublished paper on Dyson). His classic http://www-laplace.imag.fr/Jaynes/prob.html is also online.

Of other people you may not have heard of, an applied math professor, Garnet Ord, has found an interesting combinatorial origin of all the core equations of physics (Maxwell, Schroedinger, Dirac) - they're all continuum limits of the combinatorial/enumerative properties of the fluctuations of plain Brownian motion (obtained without analytic continuation, but purely combinatorially; Gerard 't Hooft has been playing with similar models lately, apparently having lost faith in the QM orthodoxy and the physical relevance of the no-go "theorems").

Another collection of interesting and often rare papers and preprints is at http://kh.bu.edu/qcl/, plus many of Toffoli's papers.
 
  • #79
nightlight said:
Now, the splitting on the polarizer breaks the amplitudes into sin() and cos() projections relative to the polarizer axis for the (+) and (-) results. But since the detection probability is sensitive to the square of the amplitude incident on each detector, this split automatically induces a different total probability of coincident detection for each individual pair (for general polarizer orientations). The quantum efficiency figure is an average, insensitive to this kind of systematic bias which correlates the coincidence detection probability of an individual pair with the hidden polarization of this pair, and thus with the result for the pair.

This classically perfectly natural trait is a possibility that the "fair sampling" assumption excludes upfront.

I think I understand what you mean. To you, a "photon" is a small, classical wavetrain of EM radiation, and the probability of detection depends on the square of its amplitude. I tried to find quickly a counter example, and indeed, in most if not all "single photon events" the behaviour seems identical to the "QED photon". So you claim that if we send a polarized EM beam under 45 degrees onto a detector, each little wavetrain arrives "fully" at the photocathode and hence has a probability epsilon to be detected. However, if we now put an X polarizer into the beam, contrary to the QED view, we do not block half of the "photons" (wave trains), but we let through ALL of them, only with their wave train amplitudes diminished (by sqrt(2)), namely purely their X component. This results, you claim, in half of the detection efficiency, and so it is only the detector that sees half of them, us naive physicists thinking that there physically are only half of them present. It is just that they are 'half photons'. By turning the polarizer, we can make "tiny tiny photons" which still encode the wavelength or frequency, but have a very very small probability to be detected.
Right. So you will agree with me that you are essentially refuting the photon as a particle, treating it just as a detection phenomenon of classical EM radiation; in that case I also understand why you refuse the fair sampling idea.
How do you explain partial absorption in a homogeneous material ? Loss of a number of wavetrains, or diminishing the amplitude of each one, keeping the total number equal ?

cheers,
Patrick.
 
  • #80
vanesch To you, a "photon" is a small, classical wavetrain of EM radiation, and the probability of detection depends on the square of its amplitude.

The QM (or Quantum Optics) amplitude does exactly the same here. Check any computation for the free space propagation or the propagation through a polarizer or other linear elements -- it follows precisely Maxwell's equations. The only difference is that I don't imagine a marble floating somehow inside this amplitude packet. It is not necessary, it makes no empirical difference (and one can't even define a consistent theoretical position operator for the photon). The a+, a- operators don't give you any marble-like localization hint; they're creators and destructors of field modes (which are spatially extended).

Of course, it is fine to use any visual mnemonic one finds helpful for some context, but one has to be aware that that's all it is and not create "paradoxes" by attributing ontology to a private visual mnemonic device.

So you claim that if we send a polarized EM beam under 45 degrees onto a detector, each little wavetrain arrives "fully" at the photocathode and hence has a probability epsilon to be detected.

The Quantum Optics amplitude does exactly the same split, since it follows the same equations for orbital propagation. For any thermal, laser or PDC source you cannot set up any configuration of linear optical elements and square law detectors that will predict any difference in coincidence counts.

Of course, I am not including in the term "prediction" here the predictions of some QM toy-model, since that is not a model of an actual system and it doesn't predict what is actually measured (the full raw counts, the existence, much less the rate, of background counts, or the need for and the model of the background subtractions). It predicts the behavior of some kind of "ideal" system (which has no empirical counterpart) that has no vacuum fluctuations (which are in QO/QED formally subtracted by the normal ordering convention, and this subtraction is mirrored by the detector design, threshold calibration and background subtractions to exclude the vacuum events) and that has a sharp, conserved photon number.

(-- interrupted here--)
 
  • #81
vanesch ... us naive physicists thinking that there physically are only half of them present. It is just that they are 'half photons'. ...

What's exactly the empirical difference? The only sharp test that can exclude the natural classical field models (as well as any local model) is the Bell inequality test, and so far it hasn't done so.

Dropping the "loophole" euphemism, the plain factual situation is that these tests have excluded all local theories which satisfy the "fair sampling" property (which neither the classical EM fields nor the QED/QO amplitudes satisfy). So, the tests have excluded theories that never existed in the first place. Big deal.

So you will agree with me that you are essentially refuting the photon as a particle, treating it just as a detection phenomenon of classical EM radiation; in that case I also understand why you refuse the fair sampling idea.

In which sense is the photon a particle in QED? I mean, other than jargon (or a student taking the Feynman diagrams a bit too literally). You prefer to visualise counting marbles, while I prefer to imagine counting modes, since a mode has neither position nor individuality.

The discreteness of the detector response is a design decision (the photo-ionisation in detectors is normally treated via a semi-classical model; even the few purely QED treatments of the detection don't invoke any point-like photons and show nothing different or new). Detectors could also perfectly well measure and correlate continuous photo-currents. (Of course, even in the continuous detection mode a short pulse might appear as a sharp spike, but that is a result of the amplitude modulation, which is not related to a pointlike "photon.")

One also needs to keep in mind that the engineering jargon of Quantum Optics uses the label "non-classical" in a much weaker sense than the fundamental non-classicality we're discussing (for which the only sharp test is some future Bell type test which could violate the Bell inequalities).

Basically, for them anything that a classical EM model with the most simple-minded boundary & initial conditions cannot replicate is "non-classical" (such as negative regions of Wigner distributions). Thus their term "classical" excludes by definition the classical EM models which account for the vacuum fluctuations (via the ZPF initial & boundary conditions).

How do you explain partial absorption in a homogeneous material ? Loss of a number of wavetrains, or diminishing the amplitude of each one, keeping the total number equal ?

What is this, a battle of visual mnemonic devices? Check the papers by Jaynes (esp. #71 and #74, the QED section) about the need for quantization of the EM field. With Jaynes' and Barut's results included, you need to go beyond the first order of radiative corrections to even get to the area where they haven't carried out the computations and where the differences may appear (they've both passed away, unfortunately). Quantum Optics, or the statistical physics of bulk material properties, is far away from the fundamental (as opposed to computationally practical) necessity to quantize the EM field. Quantum Optics doesn't have a decisive test on this question (other than those for the weak non-classicality, or some new design of the Bell test). The potentially distinguishing effects are a couple of QED perturbative orders below the phenomena of Quantum Optics or the bulk material properties.

What you're doing above is a battle of visual mnemonic devices. For someone doing day to day work in QO it is perfectly fine to visualize photons as marbles somehow floating inside the amplitudes or collapsing out of amplitudes, if that helps them think about it or remember the equations better. That has nothing to do with any absolute empirical necessity for a marble-like photon (of which there is none; not even in the QED formalism is it point-like, although the jargon is particle-like).

As a computational recipe for routine problems the standard scheme is far ahead of any existing non-linear models for QED. As Jaynes put it (in #71):

Today, Quantum Mechanics (QM) and Quantum Electrodynamics (QED) have great pragmatic success - small wonder, since they were created, like epicycles, by empirical trial-and-error guided by just that requirement. For example, when we advanced from the hydrogen atom to the helium atom, no theoretical principle told us whether we should represent the two electrons by two wave functions in ordinary 3-d space, or one wave function in a 6-d configuration space; only trial-and-error showed which choice leads to the right answers.

Then to account for the effects now called `electron spin', no theoretical principle told Goudsmit and Uhlenbeck how this should be incorporated into the mathematics. The expedient that finally gave the right answers depended on Pauli's knowing about the two-valued representations of the rotation group, discovered by Cartan in 1913.

In advancing to QED, no theoretical principle told Dirac that electromagnetic field modes should be quantized like material harmonic oscillators; and for reasons to be explained here by Asim Barut, we think it still an open question whether the right choice was made. It leads to many right answers but also to some horrendously wrong ones that theorists simply ignore; but it is now known that virtually all the right answers could have been found without, while some of the wrong ones were caused by, field quantization.

Because of their empirical origins, QM and QED are not physical theories at all. In contrast, Newtonian celestial mechanics, Relativity, and Mendelian genetics are physical theories, because their mathematics was developed by reasoning out the consequences of clearly stated physical principles which constrained the possibilities. To this day we have no constraining principle from which one can deduce the mathematics of QM and QED; in every new situation we must appeal once again to empirical evidence to tell us how we must choose our mathematics in order to get the right answers.

In other words, the mathematical system of present quantum theory is, like that of epicycles, unconstrained by any physical principles. Those who have not perceived this have pointed to its empirical success to justify a claim that all phenomena must be described in terms of Hilbert spaces, energy levels, etc. This claim (and the gratuitous addition that it must be interpreted physically in a particular manner) have captured the minds of physicists for over sixty years. And for those same sixty years, all efforts to get at the nonlinear `chromosomes and DNA' underlying that linear mathematics have been deprecated and opposed by those practical men who, being concerned only with phenomenology, find in the present formalism all they need.

But is not this system of mathematics also flexible enough to accommodate any phenomenology, whatever it might be? Others have raised this question seriously in connection with the BCS theory of superconductivity. We have all been taught that it is a marvelous success of quantum theory, accounting for persistent currents, Meissner effect, isotope effect, Josephson effect, etc. Yet on examination one realizes that the model Hamiltonian is phenomenological, chosen not from first principles but by trial-and-error so as to agree with just those experiments.

Then in what sense can one claim that the BCS theory gives a physical explanation of superconductivity? Surely, if the Meissner effect did not exist, a different phenomenological model would have been invented, that does not predict it; one could have claimed just as great a success for quantum theory whatever the phenomenology to be explained.

This situation is not limited to superconductivity; in magnetic resonance, whatever the observed spectrum, one has always been able to invent a phenomenological spin-Hamiltonian that ``accounts'' for it. In high-energy physics one observes a few facts and considers it a big advance - and great new triumph for quantum theory - when it is always found possible to invent a model conforming to QM, that ``accounts'' for them. The `technology' of QM, like that of epicycles, has run far ahead of real understanding.

This is the grounds for our suggestion (Jaynes, 1989) that present QM is only an empty mathematical shell in which a future physical theory may, perhaps, be built. But however that may be, the point we want to stress is that the success - however great - of an empirically developed set of rules gives us no reason to believe in any particular physical interpretation of them. No physical principles went into them.

Contrast this with the logical status of a real physical theory; the success of Newtonian celestial mechanics does give us a valid reason for believing in the restricting inverse-square law, from which it was deduced; the success of relativity theory gives us an excellent reason for believing in the principle of relativity, from which it was deduced.
 
  • #82
nightlight said:
The Quantum Optics amplitude does exactly the same split, since it follows the same equations for orbital propagation. For any thermal, laser or PDC source you cannot set up any configuration of linear optical elements and square law detectors that will predict any difference in coincidence counts.

How about the following setup:
take a thermal source of light, with low intensity. After collimation into a narrow beam, send it onto a beam splitter (50-50%). Look at both split beams with a PM. In the "marble photon" picture, we have a Poisson stream of marbles, which go left or right at the beam splitter, and then have a probability of being seen by the PM (which is called its quantum efficiency). This means that only 3 cases can occur: a hit "left", a hit "right", or no hit. The only possibility of a "hit left and a hit right" is a spurious coincidence in the Poisson stream, and lowering the intensity (so that the dead time is tiny compared to the average spacing between marbles) lowers this coincidence. So in the limit of low intensities (as defined above), the coincidence rate of hits can be made as low as desired.
However, in the wavetrain picture, this is not the case. The wavetrain splits equally into two "half wavetrains" going left and right. There they each have half the probability to be detected. This leads to a coincidence rate which, if I'm not mistaken, is e/2, where e is the quantum efficiency of the PM. Indeed, at exactly the same moment (or within a small lapse of time) that the left half wavetrain arrives at PM1, the right half wavetrain arrives at PM2. This cannot be lowered by lowering the intensity.
Now it is an elementary setup to show that we find anticoincidence, so how is this explained away in the classical picture ?


EDIT:
I add this because I already see a simple objection: you could say: hey, these are continuous beams, not little wavetrains, and it is both PMs which randomly (Poisson-like) produce clicks as a function of intensity. But then the opposite bites you: how do you explain that, in double cascades, or PDC, or whatever, we DO get clicks which are more correlated than a Poisson coincidence can account for ?
Now if you then object that "normal" sources have continuous wave outputs, but PDC or other "double photon emitters" do make little wavetrains, then do my proposed experiment again with such a source (but only using one of the two wavetrains coming out of it, which is not difficult given the fact that they usually are emitted in opposite directions, and that we single out one single narrow beam).
What I want to say is that any classical mechanism that explains away NON-coincidence bites you when you need coincidence, and vice versa, while the "marble photon" model gives you naturally each time the right result.
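
Just to put rough numbers on it, here is a toy Monte Carlo of the two pictures exactly as I stated them above (naive assumptions only: one wavetrain per time window, detections independent, accidental overlaps of different wavetrains neglected):

import numpy as np
rng = np.random.default_rng(0)

n_trains, e = 200_000, 0.2          # incident wavetrains (or marbles), PM quantum efficiency

# "marble" picture: each photon goes left OR right, then fires that PM with probability e
goes_left = rng.random(n_trains) < 0.5
fires = rng.random(n_trains) < e
left_m, right_m = goes_left & fires, (~goes_left) & fires

# "split wavetrain" picture: both halves arrive, each detected independently with probability e/2
left_w = rng.random(n_trains) < e / 2
right_w = rng.random(n_trains) < e / 2

print((left_m & right_m).sum())     # 0    : a marble can never trigger both PMs in the same window
print((left_w & right_w).sum())     # ~2000: about n_trains*(e/2)^2 coincidences, regardless of intensity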


cheers,
Patrick.
 
  • #83
nightlight said:
You can change the PM's bias voltage and thus shift along its sensitivity curve (a sigmoid type function). That simply changes your dark current rate/vacuum fluctuations noise. The more sensitive you make it (to shrink the detection loophole), the larger the dark current, thus the larger the explicit background subtraction.

Have a look at:
http://www.hpk.co.jp/eng/products/ETD/pdf/PMT_construction.pdf
p 12, figure 28. You see a clear separation in the pulse height spectrum between the dark current pulses, which have a rather exponential behaviour, and the bulk of the "non-dark single photon" pulses, and although a small tradeoff can be made, it should be obvious that a lower cutoff in the amplitude spectrum clearly cuts away most of the noise while not cutting away many "non-dark" pulses. The very fact that these histograms have completely different signatures should indicate that the origins are quite different, no ?
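
Here is a toy version of that amplitude cut, just to illustrate the point (idealized shapes, not the actual Hamamatsu histograms: exponentially distributed dark-pulse amplitudes versus a Gaussian single-photon peak):

import numpy as np
rng = np.random.default_rng(1)

dark = rng.exponential(scale=0.15, size=100_000)        # dark pulses: mostly small amplitudes
photon = rng.normal(loc=1.0, scale=0.25, size=100_000)  # single-photon pulses: peak near 1 (arb. units)

threshold = 0.5                                         # lower amplitude cutoff
print((dark > threshold).mean())      # ~0.04 : only a few percent of the dark pulses survive the cut
print((photon > threshold).mean())    # ~0.98 : almost all single-photon pulses survive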

cheers,
Patrick.

EDIT: but you might be interested also in other technologies, such as:

http://www.hpk.co.jp/eng/products/ETD/pdf/H8236-07_TPMO1011E03.pdf

where quantum efficiencies of ~40% are reached, in better noise conditions (look at the sharpness of the peak for single photon events !)
 
  • #84
vanesch How about the following setup:
take a thermal source of light, with low intensity. After collimation into a narrow beam, send it onto a beam splitter (50-50%). Look at both split beams with a PM. In the "marble photon" picture, we have a Poisson stream of marbles, which go left or right at the beam splitter, and then have a probability of being seen by the PM (which is called its quantum efficiency). This means that only 3 cases can occur: a hit "left", a hit "right", or no hit. The only possibility of a "hit left and a hit right" is a spurious coincidence in the Poisson stream, and lowering the intensity (so that the dead time is tiny compared to the average spacing between marbles) lowers this coincidence. So in the limit of low intensities (as defined above), the coincidence rate of hits can be made as low as desired.
However, in the wavetrain picture, this is not the case. The wavetrain splits equally into two "half wavetrains" going left and right. There they each have half the probability to be detected. This leads to a coincidence rate which, if I'm not mistaken, is e/2, where e is the quantum efficiency of the PM. Indeed, at exactly the same moment (or within a small lapse of time) that the left half wavetrain arrives at PM1, the right half wavetrain arrives at PM2. This cannot be lowered by lowering the intensity.
Now it is an elementary setup to show that we find anticoincidence, so how is this explained away in the classical picture ?


The semiclassical and the QED theory of square law detectors both predict a Poisson distribution of counts P(k,n), where n is the average of k (the average combines the efficiency and the incident intensity effects).

That means that the classical case will have (A,B) triggers of the type: (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ... (with the Poisson distribution applied independently to each detector). Thus, provided the average of your incident marbles' Poissonian has this same n, you will have indistinguishable correlations for any combination of events, since your marbles will be splitting into exactly the same combinations at the same rates.

If you wish to divide the QM cases into "spurious" and "non-spurious" and start discarding events (instead of just using the incident Poissonian and computing the probabilities), and based on that predict some other kind of correlation, note that the same "refinement" of the Poissonian via the spurious/non-spurious division can be included in any other model. Of course, any such data tweaking takes both models away from the Poissonian prediction (and the actual coincidence data).

It seems you're again confusing the average ensemble property (such as the efficiency e) with the individual events for the classical case, i.e. assuming that the classical model implies a prediction of exactly 1/2 photocurrent in each try. That's not the case. The QED and the semiclassical theory of detectors yield exactly the same Poisson distribution of trigger events. The QED proper effects (which distinguish it empirically from the semiclassical models with ZPF) start at higher QED perturbative orders than those affecting the photo-ionization in detector modelling.
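
If it helps, here is a quick numeric check of that statement (a sketch with made-up rates, nothing more): splitting one Poisson stream 50:50 and letting each branch trigger with efficiency e is statistically the same thing as generating independent Poisson clicks at rate R*e/2 directly at each detector, so the coincidence counts come out the same in both descriptions.

import numpy as np
rng = np.random.default_rng(2)

R, e, tau, n_windows = 1e5, 0.1, 1e-5, 100_000   # source rate [1/s], efficiency, window [s], windows

# picture 1: Poisson "marbles", each independently routed 50:50 and detected with probability e
n = rng.poisson(R * tau, n_windows)                        # photons arriving in each window
left1 = rng.binomial(n, e / 2)                             # detected on the left
right1 = rng.binomial(n - left1, (e / 2) / (1 - e / 2))    # detected on the right (of the remainder)

# picture 2: two independent Poisson click streams at rate R*e/2 at each detector
left2 = rng.poisson(R * e / 2 * tau, n_windows)
right2 = rng.poisson(R * e / 2 * tau, n_windows)

coinc = lambda a, b: int(np.sum((a > 0) & (b > 0)))
print(coinc(left1, right1), coinc(left2, right2))
# both land at the same "accidental" level, ~ n_windows * (1 - exp(-R*e*tau/2))**2, about 240 here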
 
  • #85
nightlight said:
The semiclassical and the QED theory of square law detectors both predict a Poisson distribution of counts P(k,n), where n is the average of k (the average combines the efficiency and the incident intensity effects).

I was talking in cases where we have on average a hit every 200 seconds, while the hit itself takes 50 ns (so low intensity), so this means that (in the QM picture) double events are such a rarity that we can exclude them. Now you seem to say that - as I expected in my EDIT - you would now consider these beams as CONTINUOUS and not as little wavetrains ("photons") ; indeed, then you have independent Poisson streams on both sides.

It seems you're again confusing the average ensemble property (such as the efficiency e) with the individual events for the classical case, i.e. assuming that the classical model implies a prediction of exactly 1/2 photocurrent in each try. That's not the case. The QED and the semiclassical theory of detectors yield exactly the same Poisson distribution of trigger events. The QED proper effects (which distinguish it empirically from the semiclassical models with ZPF) start at higher QED perturbative orders than those affecting the photo-ionization in detector modelling.

Indeed, I expected this remark (see my EDIT). However, I repeat my inverse difficulty then:
If light beams of low intensity are to be considered as "continuous" (so no little wave trains which are more or less synchronized in time), then how do you explain ANY coincidence which surpasses independent Poisson hits, such as is clearly observed in PDC coincidences ?? Given the low intensity, the probability of coincidence based on individual Poisson streams is essentially negligible, so it is extremely improbable that both detectors would trigger simultaneously, no ? So how do you explain ANY form of simultaneity of detection ?

cheers,
Patrick.
 
  • #86
vanesch Have a look at:
http://www.hpk.co.jp/eng/products/ETD/pdf/PMT_construction.pdf
p 12, figure 28. You see a clear spectral (pulse height spectrum) separation between the dark current pulses, which have a rather exponential behaviour, and the bulk of "non-dark single photon" pulses, and allthough a small tradeoff can be made, it should be obvious that a lower cutoff in the amplitude spectrum clearly cuts away most of the noise, while not cutting away so much "non-dark" pulses. The very fact that these histograms have completely different signatures should indicate that the origins are quite different, no ?


You can get low noise for low QE. The PM tubes have a max QE of about 40%, which is not adequate for any non-classicality test. Also, note that the marketing brochures sent to engineers are not exactly the same quality of information source as the scientific reports. This kind of engineering literature also uses its own jargon and conventions and needs a grain of salt to interpret.

To see the problems better, take a look at an ultra-high efficiency detector (in a scientific report), with 85% QE, which with some engineering refinements might go to 90-95% QE. To reduce the noise, it is cooled down to 6K (not your average commercial unit). You might think that this solves the Bell test problem, since exceeding the ~83% efficiency threshold ought to remove the need for the "fair sampling" assumption.

Now you read on, and it says the optimum QE is obtained for signal rates of 20,000 photons/sec. And what is the dark rate? You look around, and way back you find it is also 20,000 events/sec. The diagram for QE vs bias voltage shows an increase in QE with voltage, so it seems one could just increase the bias. Well, it doesn't work, since this also increases the dark rate much faster, and at a voltage of 7.4V it achieves the 85% QE and the 20,000 cps dark rate for a flux of 20,000 photons/sec. Beyond 7.4V the detector breaks down.

So why not increase the incident flux? Well, because the detector's dead time then rises, decreasing the efficiency. The best QE they could get was after tuning the flux to 20,000. Also, decreasing the temperature does lower the noise, but requires a higher voltage to get the same QE, so that doesn't help them either.


Their Fig. 5 then combines all the effects relating QE to dark rate, and you see that as the QE rises to 85% the dark rate grows exponentially to 20,000 (QE shows the same dependence as for the voltage increase). As they put it:

But if we plot the quantum efficiency as a function of dark counts, as is done in Figure 5, the data for different temperatures all lie along the same curve. This suggests that the quantum efficiency and dark counts both depend on a single parameter, the electric field intensity in the gain region. The temperature and bias voltage dependence of this parameter result in the behavior shown in Figure 4. From Figure 5 we see that the maximum quantum efficiency of 85% is achieved at a dark count rate of roughly 20,000.

So, with this kind of dark rate, this top of the line detector with maxed-out QE is basically useless for Bell tests. Anything it measures will be perfectly well within the semiclassical (with ZPE) models. No matter how you tweak it, you can't get rid of the stubborn side-effects of an equivalent of 1/2 photon per mode of vacuum fluctuations (since they are predicted by the theory). And that 1/2 photon equivalent noise is exactly what makes the semiclassical model with ZPF indistinguishable at the Quantum Optics level (for the higher orders of perturbative QED, the semiclassical models which don't include self-interaction break down; Barut's self-field ED, which models self-interaction, still matches QED to all orders to which it was computed).
 
  • #87
vanesch said:
Now you seem to say that - as I expected in my EDIT - you would now consider these beams as CONTINUOUS and not as little wavetrains ("photons") ; indeed, then you have independent Poisson streams on both sides.

I should maybe add that you do not have that much liberty in the detector performance if you consider wavetrains of a duration of the order of a few ns, with NOTHING happening most of the time. If you need, on average, a certain efficiency, and you assume that it is only during these small time windows of arrival of the wavetrains that a detector can decide to click or not, then, with split wavetrains, you HAVE to have a certain coincidence rate if no "flag" is carried by the wavetrain to say whether it should click or not (and if you assume the existence of such a flag, then just do away with those wavetrains which have a "don't detect me" flag, and call those with the "detect me" flag photons :-), because you assume these detection events to be independent in each branch.
So your only way out is to assume that the low level light is a continuous beam, right ? No wavetrains, then.

And with such a model, you can NEVER generate non-Poisson like coincidences, as far as I can see.

cheers,
Patrick.
 
  • #88
nightlight said:
And that 1/2 photon equivalent noise is exactly what makes the semiclassical model with ZPF indistinguishable at the Quantum Optics level

Is this then also true for 1MeV gamma rays ?

cheers,
Patrick.
 
  • #89
vanesch I was talking in cases where we have on average a hit every 200 seconds, while the hit itself takes 50 ns (so low intensity), so this means that (in the QM picture) double events are such a rarity that we can exclude them.

It doesn't matter what the Poisson average rate is. The semiclassical photodetector model predicts the trigger on average once per 200 s. There is no difference there.

However, I repeat my inverse difficulty then:
If light beams of low intensity are to be considered as "continuous" (so no little wave trains which are more or less synchronized in time), then how do you explain ANY coincidence which surpasses independent Poisson hits, such as is clearly observed in PDC coincidences ??


The previous example of a thermal source was super-Poissonian (or at best Poissonian for a laser beam).

On the other hand, you're correct that the semiclassical theory which does not model vacuum fluctuations cannot predict sub-Poissonian correlations of PDC or other similar sub-Poissonian experiments.

But, as explained at length earlier, if the classical theory uses ZPF (boundary & initial conditions) and the correlation functions are computed to subtract the ZPF (to match what is done in Glauber's prescription for Quantum Optics and what the detectors do by background subtractions and tuning to have null triggers when there is no signal), then it produces the same coincidence counts. The sub-Poissonian trait of the semiclassical model is the result of the contribution of sub-ZPF superpositions (the "dark wave"). See the earlier message and the references there.

Given the low intensity, the probability of coincidence based on individual Poisson streams is essentially negligible, so it is extremely improbable that both detectors would trigger simultaneously, no ? So how do you explain ANY form of simultaneity of detection ?

There is nothing to explain about vague statements such as "extremely improbable". Show me an expression that demonstrates the difference. Note that two independent Poissonians of the classical model yield p^2 for the trigger probability. Now if your QM marble count is Poissonian with any average, small or large (it only has to match the singles rate of the classical model), it also yields a probability quadratic in p for the two marbles arriving at the splitter.

What's exactly the difference for the Poissonian source if you assume that the QM photon number Poissonian has the same average singles rate as the classical model (same intensity calibration)? There is none. The complete classicality of Poissonian sources for any number of detectors, polarizers, splitters (any linear elements) is old hat (1963, the Glauber-Sudarshan classical equivalence for coherent states).

NOTE: I assume here, when you insist on a Poissonian classical distribution, that your marble distribution is also Poissonian (otherwise why would you mention it). For the sub-Poissonian case, classical models with ZPF can do that too (using the same kind of subtractions of vacuum fluctuation effects as QO), as explained at the top.
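
To make the quadratic dependence explicit (just the standard Poisson algebra, with mu the mean photon number per coincidence window and eta the detector efficiency): splitting a Poissonian photon number 50:50 and detecting with efficiency eta leaves independent Poissonian counts with mean eta*mu/2 in each arm, exactly as in the classical picture of two independent click streams with the same singles rate, so in both descriptions

P_{\mathrm{coinc}} = \left(1 - e^{-\eta\mu/2}\right)^{2} \approx \left(\tfrac{\eta\mu}{2}\right)^{2} \qquad (\eta\mu \ll 1),

which is quadratic in the intensity, while the singles probability per window is 1 - e^{-\eta\mu/2} \approx \eta\mu/2 in each arm.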
 
  • #90
vanesch Is this then also true for 1MeV gamma rays ?

You still have vacuum fluctuations. But the gamma ray photons are obviously in a much better relation to any frequency cutoff of the fluctuations than optical photons, so they indeed do have much better signal to noise at the detectors. But this same energy advantage turns into a disadvantage for non-classicality tests, since the polarization coupling to the atomic lattice is proportionately smaller, so the regular polarizers (or half-silvered mirrors with preserved coherent splitting) won't work for the gamma rays.

The oldest EPR tests were in fact done with gamma rays, and they used Compton scattering to analyse polarization (for beam splitting), which is much less accurate than regular optical polarizers. The net effect was a much lower overall efficiency than for optical photons. That's why no one uses gamma or X-ray photons for Bell tests.
 
  • #91
nightlight said:
vanesch Is this then also true for 1MeV gamma rays ?

You still have vacuum fluctuations. But the gamma ray photons are obviously in a much better relation to any frequency cutoff of the fluctuations than optical photons, so they indeed do have much better signal to noise at the detectors. But this same energy advantage turns into a disadvantage for non-classicality tests, since the polarization coupling to the atomic lattice is proportionately smaller, so the regular polarizers (or half-silvered mirrors with preserved coherent splitting) won't work for the gamma rays.

I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays, there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't give a problem for the explanations used to explain away photons in the visible range.

cheers,
Patrick.
 
  • #92
nightlight said:
But, as explained at length earlier, if the classical theory uses ZPF (boundary & initial conditions) and the correlation functions are computed to subtract the ZPF (to match what is done in Glauber's prescription for Quantum Optics and what the detectors do by background subtractions and tuning to have null triggers when there is no signal), then it produces the same coincidence counts. The sub-Poissonian trait of the semiclassical model is the result of the contribution of sub-ZPF superpositions (the "dark wave"). See the earlier message and the references there.

Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.


cheers,
Patrick.
 
  • #93
vanesch Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.

The term "dark wave" is just a picturesque depiction (a visual aid), a la the Dirac hole, except this would be like a half-photon equivalent hole in the ZPF. That of course does get averaged over the entire ZPF distribution, and without further adjustments of the data nothing non-classical happens.

But, if you model a setup calibrated to "ignore" vacuum fluctuations (via a combination of background subtractions and detector sensitivity adjustments), the way Quantum Optics setups are (since Glauber's correlation functions subtract the vacuum contributions as well) -- then the subtraction of the average ZPF contributions makes the sub-ZPF contribution appear as having negative probability (which mirrors the negative regions in the Wigner distribution) and able to replicate the sub-Poissonian effects in the normalized counts. The raw counts don't have sub-Poissonian or negative probability traits, just as they don't in Quantum Optics. Only the adjusted counts (background subtractions and/or the extrapolations of the losses via the "fair sampling" assumption) have such traits.

See the references for PDC classicality via stochastic electrodynamics that I gave earlier.
 
  • #94
I think neither of you should argue about the possibility of a detector triggering on a single photon, as that is impossible (in classical or quantum physics).
Single-photon experiments are only simplifications of what really occurs (e.g. the problem of the double slit experiment, cavities, etc.). We just roughly say: I have an electromagnetic energy of hbar.ω.

Remember that a "pure photon" has a single energy and thus requires an infinite measurement time: reducing this measurement time (the interaction is changed) modifies the field.

It is like trying to build a super filter that would extract a pure sine wave from an electrical signal or a sound wave.
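
(The filter analogy is just the Fourier bandwidth theorem: a field observed over a finite time \Delta t has a spectral width obeying

\Delta\omega \, \Delta t \gtrsim \tfrac{1}{2},

so a strictly monochromatic mode of energy \hbar\omega, a "pure photon", would require \Delta t \to \infty.)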

Don't forget either that the quantization of the electromagnetic field gives the well known energy eigenvalues (a number of photons) only in the case of a free field (no sources), where we have a free electromagnetic Hamiltonian (Hem).
Once you have interaction, the eigenvalues of the electromagnetic field are changed and are given by Hem+Hint (i.e. you modify the photon number (energy) basis <=> you change the number of photons).

Seratend.
 
  • #95
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays, there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't give a problem for the explanations used to explain away photons in the visible range.

We were talking about a coherent beam splitter, which splits the QM amplitude (or classical wave) into two equal coherent sub-packets. If you try the same with gamma rays, you have a different type of split (non-equal amplitudes), i.e. it would be merely a classical mix (a multidimensional rho instead of a single-dimensional Psi, in Hilbert space terms) of the left-right sub-packets. This gives it a particle-like appearance, i.e. in each try it goes one way or the other but not both ways. But that type of scattering is perfectly within semi-classical EM modelling.

Imagine a toy classical setup with a mirror containing large holes in the coating (covering half the area), and quickly run a "thin" light beam over it in a random fashion. Now a pair of detectors behind it (one in the transmitted beam, one in the reflected beam) will show perfect anti-correlation in their triggers. There is nothing contrary to classical EM about it, even though it appears as if a particle went through on each trigger, and on average half the light went each way. The difference is that you can't make the two halves interfere any more, since they are not coherent. If you can detect which way "it" went, you lose coherence and you won't have interference effects.

Some early (1970s) "anti-correlation" experimental claims with optical photons made this kind of false leap. Namely, they used randomly or circularly polarized light and used a polarizer to split it in a 50:50 ratio, then claimed perfect anti-correlation. That's the same kind of trivially classical effect as the mirror with large holes. They can't make the two beams interfere, though (and they didn't try that, of course).
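
A two-line Monte Carlo of that toy mirror (my own sketch, with a made-up 50% hole coverage and detector efficiency) shows the point: on each pass the thin beam hits either a hole or the coating, so only one of the two detectors can possibly fire, and the coincidence count is strictly zero without any field quantization:

import numpy as np
rng = np.random.default_rng(3)

n_passes, eta = 100_000, 0.3
hits_hole = rng.random(n_passes) < 0.5      # beam lands on a hole (transmitted) or on the coating (reflected)
fires = rng.random(n_passes) < eta          # whichever detector receives the light fires with probability eta

transmitted_click = hits_hole & fires
reflected_click = (~hits_hole) & fires

print(transmitted_click.sum(), reflected_click.sum())   # ~15000 each: a 50:50 "which-way" split
print((transmitted_click & reflected_click).sum())      # 0: perfect anti-correlation, no photons needed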
 
  • #96
nightlight said:
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays, there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't give a problem for the explanations used to explain away photons in the visible range.

We were talking about a coherent beam splitter, which splits the QM amplitude (or classical wave) into two equal coherent sub-packets.

No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely the "dark current noise", then why is this the limiting factor in visible light detectors, but doesn't occur in gamma ray detectors, which are close to 100% efficient with negligible dark currents (take MWPC for instance)? I don't see why something that is "a fundamental problem" at, say, 2eV is suddenly no issue anymore at 2MeV.

cheers,
Patrick.
 
  • #97
vanesch said:
No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely "dark current noise", then why is this the limiting factor in visible-light detectors, yet doesn't occur in gamma-ray detectors, which are close to 100% efficient with negligible dark currents (take MWPCs, for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

cheers,
Patrick.

I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

You are certainly justified in your puzzlement. Dark current, at least the kind we detect in photon detectors, has NOTHING to do with the "zero-point" field. I deal with dark current all the time in accelerators. It is the result of field emission, and over the range of gradients below 10 MV/m it is easily described by the Fowler-Nordheim theory of field emission. I do know that NO photodetector works with that kind of gradient, or anywhere even close.
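For orientation, a hedged sketch of the elementary Fowler-Nordheim expression, J = (a E^2/phi) exp(-b phi^(3/2)/E); the constants a and b are the commonly quoted approximate values, and the work function and field values are my own illustrative choices, not numbers from this thread:

```python
# Hedged sketch (not from the thread): the elementary Fowler-Nordheim
# expression for field-emission current density,
#   J = (a * E^2 / phi) * exp(-b * phi^(3/2) / E),
# with E the surface field in V/m and phi the work function in eV.
# The constants a, b are the commonly quoted approximate values; treat
# them and the chosen fields as assumptions of this sketch.
import math

a = 1.54e-6   # A * eV / V^2   (approximate)
b = 6.83e9    # V / (m * eV^(3/2))   (approximate)

def fn_current_density(E_field, phi=4.5):
    """Field-emission current density in A/m^2 (elementary F-N form)."""
    return (a * E_field**2 / phi) * math.exp(-b * phi**1.5 / E_field)

for E in (1e8, 1e9, 5e9):                 # assumed surface fields in V/m
    print(f"E = {E:.0e} V/m  ->  J ~ {fn_current_density(E):.3e} A/m^2")
```

The point of the sketch is only the extreme exponential dependence on the surface field, which is why field emission is associated with high-gradient structures rather than with photodetectors.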

The most sensitive gamma-ray detector is the Gammasphere, now firmly anchored here at Argonne (http://www.anl.gov/Media_Center/News/2004/040928gammasphere.html ). In its operating mode when cooled to LHe, it is essentially 100% efficient in detecting gamma photons.

Zz.
 
Last edited by a moderator:
  • #98
ZapperZ said:
I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

No, please jump in! I try to keep an open mind in this discussion, and I have to say that nightlight isn't the usual crackpot one encounters in this kind of discussion, and is very well informed. I only regret a bit the tone of certain statements, like what we are doing to those poor students and so on, but I've seen worse. What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photoelectric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical. Also, if it is true that semiclassical models can correctly predict higher-order radiative corrections in QED, I have to say that I'm impressed. Tree diagrams, however, are not so impressive.
I can very well accept that EPR-like optical experiments do not close all loopholes, and it can be fun to see how people still find reasonable-looking semiclassical theories that explain the results.
However, I'm having most of my difficulties in this discussion with 2 things. The first one is that photon detectors seem to be very flexible devices which acquire exactly those properties that are needed in each case to save the semiclassical explanation; while I can accept each refutation in each individual case, I'm trying to find a contradiction between how a detector behaves in one case and how it behaves in another.
The second difficulty I have is that I'm not aware of most of the literature that is referred to, and I can only spend a certain amount of time on it. Also, I'm not very familiar (even if I've heard of the concepts) with these Wigner functions and so on. So any help from anybody here is welcome. Up to now I enjoyed the cat-and-mouse game :smile: :smile:

cheers,
Patrick.
 
  • #99
Maybe the principal problem is to know whether the total-spin-0 EPR state is possible in a classical picture.
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Seratend
 
  • #100
seratend said:
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Yes, that is exactly what he does, and it took me some time to realize this at the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, we afterwards agreed that he denies the existence of a product Hilbert space, and only considers classical matter fields (a kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. We didn't delve into the issue of the particle-like nature of matter. Apparently, one should also add some noise terms (zero-point field or whatever) to the EM field, in such a way that they correspond to the QED half-photon contribution in each mode. He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors", which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that one can indeed find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.

cheers,
Patrick.
 
  • #101
vanesch said:
What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photoelectric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical.

I happened to work in angle-resolved photoemission (ARPES) for the 3 years that I spent as a postdoc. So here is what I know.

While it is true that the band structure of the material being looked at can dictate the nature of the result, this is more of a handshaking process between the target material and the probe, which is the light. While the generic photoelectric effect admits a plausible argument that light is still a wave (and not photons), this strictly requires that the target material (i) be polycrystalline (meaning no specific or preferred orientation of the crystal structure) and (ii) have a continuous energy band, as in the metallic conduction band. If you have ALL of that, then yes, the classical wave picture cannot be ruled out in explaining the photoelectric effect.

The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target materials being studied, ranging from single-crystal surface states all the way to Mott insulators, have made a classical description of light quite puzzling. For example, multi-photon photoemission, which I worked on until about 3 months ago, would be CRAZY if we had no photons. The fact that we can adjust the work function to get 1-photon, 2-photon, 3-photon, etc. photoemission, AND can angularly map this, is damn convincing as far as the experiment goes.
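As a back-of-the-envelope illustration of the photon-picture bookkeeping behind this (the photon energy and work-function values below are my own assumed numbers, not ZapperZ's data):

```python
# Illustrative sketch (assumed numbers): in the photon picture, n-photon
# photoemission requires n*hbar*omega >= phi, and the emitted electron's
# maximum kinetic energy is E_k = n*hbar*omega - phi.  Tuning the
# effective work function phi therefore selects the photon order n.
import math

photon_energy = 1.55            # eV, e.g. an 800 nm laser (assumed)

for phi in (1.5, 3.0, 4.5):     # assumed effective work functions in eV
    n = math.ceil(phi / photon_energy)          # minimum photon order
    e_kin = n * photon_energy - phi             # max photoelectron kinetic energy
    print(f"phi = {phi:.1f} eV  ->  {n}-photon process, E_k(max) = {e_kin:.2f} eV")
```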

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least none that explains the observations from such experiments IF we do not accept photons. I know, I've looked. It is no surprise if I say that my "loyalty" is towards the experimental observations, not some dogma or someone's pet cause. So far, there have been no viable alternatives consistent with the experimental observations that I have made.

Zz.
 
  • #102
Vanesch
Yes, that is exactly what he does, and it took me some time to realize this at the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, we afterwards agreed that he denies the existence of a product Hilbert space, and only considers classical matter fields (a kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. …

… He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors", which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that one can indeed find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.


Similarly, classical mechanics can explain many things, even light as particles (Newton's assumption). It took a long time before the scientific community accepted that light is a wave. Even with the initial results of Young's slit experiment, they preferred to wait for another experiment, one supposedly showing an impossibility: the diffraction pattern of an illuminated sphere, with a bright spot in the middle of the shadow. This diffraction pattern was predicted, if I remember correctly, by Poisson, in a demonstration meant to show that if light were not made of particles, an incredible bright spot would have to appear :eek: .


ZapperZ
The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target materials being studied, ranging from single-crystal surface states all the way to Mott insulators, have made a classical description of light quite puzzling. For example, multi-photon photoemission, which I worked on until about 3 months ago, would be CRAZY if we had no photons. The fact that we can adjust the work function to get 1-photon, 2-photon, 3-photon, etc. photoemission, AND can angularly map this, is damn convincing as far as the experiment goes.

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least none that explains the observations from such experiments IF we do not accept photons. I know, I've looked. It is no surprise if I say that my "loyalty" is towards the experimental observations, not some dogma or someone's pet cause. So far, there have been no viable alternatives consistent with the experimental observations that I have made.


I really think that arguing about the minimum error of the photodetectors will not solve the problem: the existence of the EPR-like state and its associated results.

I think the current problem of the "simple alternative classical" theories (with local interactions, local variables) lies in their failure to provide a reasonable account of EPR-like states (photons, electrons): a pair of particles with total spin zero.

This state is so special: a measurement along any axis always gives 50/50 spin up/down for each particle of the pair. When we instead take a single, separately prepared particle, the measurement results show that 50/50 is achievable only for a unique orientation!
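A numerical check of the first half of this statement, using standard QM only as a reference point (the random axes and the construction below are my own illustration):

```python
# Numerical check (standard QM, offered only as a reference point for the
# discussion): for the spin-0 singlet state, the single-particle marginal
# is 50/50 along *every* axis, while the joint correlation is -a.b.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(axis):
    ax = np.asarray(axis, dtype=float)
    ax = ax / np.linalg.norm(ax)
    return ax[0]*sx + ax[1]*sy + ax[2]*sz

# singlet |psi> = (|01> - |10>)/sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)
psi = psi.astype(complex)

rng = np.random.default_rng(0)
for _ in range(3):
    a, b = rng.normal(size=3), rng.normal(size=3)
    Pa_up = psi.conj() @ np.kron((I2 + spin(a)) / 2, I2) @ psi   # marginal P(up along a)
    corr = psi.conj() @ np.kron(spin(a), spin(b)) @ psi          # <(a.s)(b.s)>
    cosab = np.dot(a/np.linalg.norm(a), b/np.linalg.norm(b))
    print(f"P(up along a) = {Pa_up.real:.3f}   correlation = {corr.real:+.3f}   -a.b = {-cosab:+.3f}")
```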

Seratend.
 
  • #103
vanesch No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely "dark current noise", then why is this the limiting factor in visible-light detectors, yet doesn't occur in gamma-ray detectors, which are close to 100% efficient with negligible dark currents (take MWPCs, for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters). It was in that specific context that the detection noise due to vacuum fluctuations precludes any effect from helping to decide EM non-classicality -- it is a problem for these specific non-classicality claims, not "a problem" for all conceivable non-classicality claims, much less some kind of general technological or scientific "problem".

Low-energy photons interact differently with the optical components than gamma rays do (the ratio of their energy/momenta to their interaction with atoms is hugely different), so the phenomena you used for the earlier non-classicality claims have lost their key features (such as the sharp polarization splitting or the sharp interference on the beam splitter).

As with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of excess energy.

Now you have left the visible photons and moved to gamma rays. Present a phenomenon which makes them absolutely non-classical. You haven't even made a case for how the sharper detectability (better S/N) of these photons leads to anything that prohibits a non-quantized EM field from modelling the phenomenon. I explained why the original setups don't help you with these photons.

The second confusion you may have is that between the Old Quantum Theory (before Schroedinger/Heisenberg) particle-photon claims and the conventional jargon and visual mnemonics of QED or Quantum Optics. The OQT didn't have a correct dynamical theory of ionisation or of photon photo-electron scattering cross sections, so they imagined needle radiation or point photons. After Schroedinger and Heisenberg created the QM dynamics, these phenomena became computable using purely semi-classical models (with only the matter particles being quantized). The OQT arguments became irrelevant, even though you'll still see them parroted in textbooks. The point-like jargon for photons survived into QED (and it is indeed heuristically/mnemonically useful in many cases, as long as one doesn't make an ontology out of a personal mnemonic device and then claim paradoxes).

I asked you where the point photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter. As demonstrated, that doesn't work (and has been known not to for 4-5 decades; it was first tried in the 1950s and resolved during the Hanbury Brown and Twiss effect debates). The photon count is Poissonian, which is the same as the detector model's response distribution. So the anti-correlation here isn't as sharp as is usually claimed.
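To make the Poissonian point concrete, here is a hedged toy model (my own, with an assumed mean count per pulse): independent Poisson counters on the two arms of a 50:50 split of a steady classical pulse give a normalized coincidence rate of about 1, i.e. no suppression of coincidences below the accidental rate; classical intensity fluctuations can only push it above 1.

```python
# Sketch (my own illustration of the Poissonian point above): split a
# steady classical pulse 50:50 and model each detector as a Poisson
# counter on the intensity it receives.  The normalized coincidence rate
#   alpha = P(T and R) / (P(T) * P(R))
# comes out ~1 here; classical intensity fluctuations would only raise it.
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000
mean_counts = 0.2                      # assumed mean photo-counts per pulse per arm

nT = rng.poisson(mean_counts, trials)  # counts in transmitted arm
nR = rng.poisson(mean_counts, trials)  # counts in reflected arm

pT = np.mean(nT > 0)
pR = np.mean(nR > 0)
pC = np.mean((nT > 0) & (nR > 0))
print(f"alpha = P(T&R)/(P(T)P(R)) = {pC/(pT*pR):.3f}")
```

Values of this ratio well below 1 are what a genuinely sub-Poissonian source is supposed to show; the toy model above never produces them.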

Then (after a brief detour into PDC) you brought up gamma rays as not having the detection problem. True, they don't. But they lose the coherence and the visibility of the interference as they gain in anti-correlation (due to sharper detection). That is a perfectly classical EM tradeoff. The less equal and less synchronized the packets after the split, the more exclusive they appear to the detectors, while their interference fringes become less sharp (lower visibility).

That is the same "complementarity" phenomenon that the QM amplitudes (which follow Maxwell's equations in free space and through linear optical elements) describe. And that is identical to the semiclassical EM description, since the same equations describe the propagation.
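The classical tradeoff can be written down directly; in this sketch (my notation and numbers) V is the fringe visibility for two split packets of intensities I1, I2 and mutual coherence |gamma|:

```python
# Sketch of the classical tradeoff described above (my illustration):
# two split wave packets with intensities I1, I2 and mutual coherence
# |gamma| produce fringes of visibility V = 2*sqrt(I1*I2)*|gamma|/(I1+I2).
# Making the split less equal / less synchronized (smaller |gamma|) makes
# the arms look more "exclusive" while the fringe visibility drops --
# purely classical EM for this part.
import numpy as np

def visibility(I1, I2, gamma):
    return 2*np.sqrt(I1*I2)*abs(gamma) / (I1 + I2)

for I1, I2, gamma in [(0.5, 0.5, 1.0),    # equal coherent split: V = 1
                      (0.9, 0.1, 1.0),    # unequal split
                      (0.5, 0.5, 0.3),    # equal but poorly synchronized
                      (0.99, 0.01, 0.1)]: # "which-way"-like split
    print(f"I1={I1:.2f} I2={I2:.2f} |gamma|={gamma:.2f} -> V = {visibility(I1, I2, gamma):.2f}")
```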

So what is the non-classicality claim about the gamma rays? The anti-correlation on the beam splitter is not relevant for that purpose for these photons.

You may wonder why there is always something else that blocks the non-classicality from manifesting. My guess is that this is so because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or another, just as a perpetuum mobile has to. The reason the failure causes shift is simply that the new contraptions which fix the previous failure shift to some other trick, confusion, or obfuscation... The explanations of the flaws have to shift if the flaws shift. That is where the "shiftiness" originates.


The origin of these differences over the need for the 2nd quantization (that of the EM field) lies in different views of what the 1st quantization was all about. To those who view it as the introduction of Hilbert space and observable non-commutativity into a classical system, it is natural to try repeating the trick on the remaining classical systems.

To those who view it as a solution to the classical-physics dichotomy between fields and particles, since it replaced the particles with matter fields (Barut has shown that the 3N configuration space vs 3D space issue is irrelevant here), it is senseless to try to "quantize" the EM field, since it is already "quantized" (i.e. it is a field).

In this perspective the Hilbert space formulation is a linear approximation which, due to the inherent non-linearity of the coupled matter and EM fields, is limited; thus the collapse is needed, patching the inadequacy of the approximation via piecewise linear evolution. The non-commutativity of the observables is a general artifact of these kinds of piecewise linearizations, not a fundamental principle (Feynman noted, and was particularly intrigued by, this emergence of noncommutativity from the approximation in his checkerboard/lattice toy models of the EM and Dirac equations; see Garnet Ord's extended discussions of this topic, including Feynman's puzzlement; note that Ord is a mathematician, so the physics in those papers is a bit thin).
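For readers unfamiliar with the checkerboard idea, here is a rough 1+1D toy iteration (my own minimal version, not Feynman's or Ord's actual construction; lattice size, mass and step are arbitrary): amplitudes hop one site per step at "light speed" and pick up a factor ~ i*m*eps at each direction reversal, and in the continuum limit this update reproduces the 1+1D Dirac equation.

```python
# Rough sketch of the checkerboard idea (my own minimal 1+1D toy):
# right/left-moving amplitudes hop one site per step; a direction
# reversal carries weight i*sin(m*eps) (the exactly unitary variant of
# i*m*eps), continuing carries cos(m*eps).
import numpy as np

L, steps = 401, 150          # lattice sites, time steps (assumed)
m, eps = 0.5, 0.1            # mass and lattice spacing, hbar = c = 1 (assumed)
c1, s1 = np.cos(m*eps), np.sin(m*eps)

psi_R = np.zeros(L, dtype=complex)      # right-moving component
psi_L = np.zeros(L, dtype=complex)      # left-moving component
psi_R[L//2] = 1.0                       # start localized, moving right

for _ in range(steps):
    R_new = np.zeros_like(psi_R)
    L_new = np.zeros_like(psi_L)
    # arriving at site j moving right: came from j-1, either kept going
    # right or reversed there; mirror rule for left-movers.
    R_new[1:] = c1*psi_R[:-1] + 1j*s1*psi_L[:-1]
    L_new[:-1] = c1*psi_L[1:] + 1j*s1*psi_R[1:]
    psi_R, psi_L = R_new, L_new

prob = np.abs(psi_R)**2 + np.abs(psi_L)**2
print(f"total probability (should stay ~1): {prob.sum():.4f}")
print(f"mean position drift: {(prob * np.arange(L)).sum() - L//2:.1f} sites")
```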
 
Last edited:
  • #104
nightlight said:
ZapperZ You are certainly justified in your puzzlement. Dark current, at least the kind we detect in photon detectors, has NOTHING to do with the "zero-point" field.

"Dark current" is a somewhat fuzzy label. Call it the irreducible part of the noise due to quantum vacuum fluctuations, since that is the part of the noise that was relevant in the detector discussion. The temperature can be lowered to 0 K (the detector discussed was at 6 K) and you will still have that noise.

No, what YOU think of as "dark current" is what is fuzzy. The dark current that *I* detect isn't. I don't just engage in endless yapping about dark current. I actually measure it, do spectral analysis, and perform other characterizations of it. The field-emission origin of dark current is well established and well tested.[1]

Zz.

1. R.H. Fowler and L. Nordheim, Proc. Roy. Soc. Lond., A119, 173 (1928).
 
  • #105
vanesch he denies the existence of a product Hilbert space,

That's a bit of an over-simplification. Yes, in non-relativistic QM you take each electron into its own factor space. But then you anti-symmetrize this huge product space and shrink it back to almost where it was (the 1-dimensional subspace of the symmetric-group representation), the Fock space.

So the original electron product space was merely pedagogical scaffolding for constructing the Fock space. After all the smoke and hoopla has cleared, you're back where you started before constructing the 3N-dimensional configuration space -- one set of PDEs in 3-D space (the Dirac equation), now describing the propagation of a classical Dirac field (or the modes of the quantized Dirac field represented by the Fock space), which represents all the classical electrons with one 3D matter field (similar to Maxwell's EM field) instead of the QM amplitudes of a single Dirac electron. Plus you get around 3*(N-1) tonnes of obfuscation and vacuous verbiage on the meaning of identity (which will be laughed at a few generations from now). Note that Schroedinger believed from day one that these 1-particle QM amplitudes should have been interpreted as a single classical matter field of all the electrons.
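A quick count makes the "shrink it back" point concrete (the single-particle truncation d below is an arbitrary choice of mine):

```python
# Illustration (my own numbers): the fully antisymmetric subspace of the
# N-electron product space has dimension C(d, N), a tiny corner of the
# full d^N-dimensional tensor product one started from.
from math import comb

d = 20                         # single-particle modes kept (assumed truncation)
for N in (2, 3, 5, 10):
    print(f"N = {N:2d}: product space d^N = {d**N:.3e}, "
          f"antisymmetric (Fock) sector C(d,N) = {comb(d, N)}")
```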

That's not my "theory" -- that's just what it is, for you or for anyone who cares to look it up.

The remaining original factor spaces belong to different types of particles; they're simply different fields. Barut has shown that, depending on which functions one picks for the variation of the action, one gets an equivalent dynamics represented either as 3N-dimensional configuration-space fields or as regular 3-dimensional space fields.

Apparently, one should also add some noise terms (zero-point field or whatever) to the EM field, in such a way that they correspond to the QED half-photon contribution in each mode.

That's not something one can fudge very much, since the distribution follows uniquely from Lorentz invariance. It's not something you put in by hand and tweak as you go to fit this or that.
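For reference, the standard statement behind this (conventions below are mine): with half a quantum per mode the spectral energy density goes as omega^3, and that omega^3 form is the unique Lorentz-invariant spectrum, so only the overall scale is fixed by the hbar/2-per-mode choice.

```latex
% Spectral energy density of a field carrying hbar*omega/2 per mode
% (mode density omega^2/(pi^2 c^3) per unit volume, both polarizations):
\[
  \rho_0(\omega)\,d\omega
  \;=\; \frac{\omega^{2}}{\pi^{2}c^{3}}\cdot\frac{\hbar\omega}{2}\,d\omega
  \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega .
\]
```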

He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors", which react in such a way to continuous radiation... it is there that things seem to escape me.

You never explained how QED predicts anything more marble-like about photons. Photons don't have a position observable, they are not conserved, and they don't have identity. They can be redefined by a change of basis. They propagate in space following Maxwell's equations. What is left of the marbles? That you can count them? You can count anything, e.g. the QED EM field modes (which is what they are). They make clickediclacks? I'll grant you that one; they sound just like marbles dropped into a box.

You must still be under the spell of the Old Quantum Theory (pre-1926) ideas of point photons (which is how this gets taught). They're not the same model as QED photons.
 
Last edited:
