What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment

Summary
The discussion addresses confusion surrounding Young's experiment on wave-particle duality, particularly regarding the behavior of light as both waves and particles. Participants note that when photons are sent through a double-slit apparatus, they create an interference pattern, even when sent one at a time, contradicting claims that observation alters this behavior. The conversation highlights the misconception that observing light forces it to behave like particles, while in reality, it continues to exhibit wave-like properties unless specific measurements are taken. Additionally, the role of detectors and their influence on the results is debated, with references to classical interpretations of quantum phenomena. Overall, the discussion emphasizes the complexities and misunderstandings inherent in quantum mechanics and the interpretation of experimental outcomes.
  • #121
vanesch: Yes if we take into account atoms, but the very fact that we take atoms to make up our detection apparatus might 1) very well be at the origin of being unable to produce EPR raw data - I don't know (I even think so), but 2) this is nothing fundamental, is it ? It is just because we earthlings are supposed to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't be dealing with atoms, should it ?

It is merely related to the specific choice of the setup which claims a (potential) violation. Obviously, the question of why the constants are what they are is fundamental.

But why the constants (in combination with the laws) happen to block this or that claim in just that particular way is a function of the setup, i.e. of the physics the designers overlooked when they picked that design. That isn't fundamental.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW output, while we put 5000 V x 2000 A = 10 MW input into the thing, and that hence we haven't shown any perpetuum mobile.

Not quite an analogous situation. The "fair sampling" assumption is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split) as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities. See also Khrennikov's paper, cited earlier, on a proposed test of "fair sampling."

So the situation is as if you picked some imagined behavior that is contrary to the existing laws and claimed that you will assume in your setup that the property holds.

Additionally, there is no universal "fair sampling" law which you can just invoke without examining whether it applies to your setup. And in the EPR-Bell setups it should at least be tested, since the claim is so fundamental.
 
Last edited:
  • #122
nightlight: You change the rate of tunneling since the ionization energies are different in those cases.

ZapperZ: Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum fluctuation energy).

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics and read up on photo-detector noise and find out for yourself.
 
  • #123
seratend: And if I have understood the paper, Kowalski has demonstrated a fact well known to Hilbert space users: that with a simple gauge transformation (selection of other p,q-type variables), you can change a non-Hermitian operator into a Hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of construction of M, which is a linear operator. That step occurs in the transition between 4.44 and 4.46, where M is obtained. The stuff you quote is the comparatively trivial part about making 4.45 look like a Schroedinger equation with a Hermitian operator (instead of M).

So I still conclude that even a unitary evolution of the Schroedinger equation may represent a non-linear ODE.
Therefore, I still do not understand what you call the “linear approximation”, as in all cases I get exact results (approximation means, for me, the existence of small errors).


The point of his procedure is to take as input any set of non-linear (evolution) PDEs and create a set of linear equations which approximate the non-linear set (an iterative procedure which arrives at an infinite set of linear approximations). The other direction, from linear to non-linear, is relatively trivial.

So, if we interpret your possible physical interpretation, stochastic electrodynamics, as giving the true results, all we need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?

If you want to try, check first Barut's work which should save you lots of time. It's not that simple, though.
 
Last edited:
  • #124
nightlight said:
nightlight: You change the rate of tunneling since the ionization energies are different in those cases.

ZapperZ: Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum fluctuation energy).

Eh? Aren't you actually strengthening my point? It is EXACTLY what I was trying to indicate: changing the material SHOULD NOT change the nature of the vacuum beyond the material - and I'm talking about the order of 1 meter from the material here! If you argue that the dark currents are due to "QED vacuum fluctuations", then the dark current should NOT change simply because I switch the photocathode since, by your own account, the vacuum fluctuations haven't changed!

But the reality is, IT DOES! I detect this all the time! So how could you argue the dark currents are due to QED vacuum fluctuations? Just because a book tells you? And you're the one who is whining about other physicists being stuck with some textbook dogma?

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics and read up on photo-detector noise and find out for yourself.

... and why don't you go visit an experimental site that you are criticizing and actually DO some of these things? It appears that your confinement within theoretical boundaries is cutting you off from physical reality. Grab a photodiode and make a measurement for yourself! And don't tell me that you don't need to know how it works to be able to comment on it. You are explicitly questioning its methodology and what it can and cannot measure. Without actually using it, all you have is a superficial knowledge of what it can and cannot do.

Comments such as the one above come from exactly the kind of theorists that Harry Lipkin criticized in his "Who Ordered Theorists?" essay. When faced with an experimental result, such theorists typically say "Oh, that can't happen because my theory says it can't". It's the same thing here. When I tell you that I SEE dark currents, and LOTS of them, and they have nothing to do with QED vacuum fluctuations, all you can tell me is to go look up some textbook? Oh, it can't happen because that text says it can't!

If all the dark currents I observe in the beamline were due to QED vacuum fluctuations, then our universe would be OPAQUE!

Somehow, there is a complete disconnect between what's on paper and what is observed. Houston, we have a problem...

Zz.
 
  • #125
nightlight said:
Not quite an analogous situation. The "fair sampling" assumption is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split) as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities.

Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I've been reading up a bit on a few of Barut's articles which are quite interesting, and although I don't have the time and courage to go through all of the algebra, it is impressive indeed. I have a way of understanding his results: in the QED language, when it is expressed in the path integral formalism, he's working with the full classical solution, and not working out the "nearby paths in configuration space". However, this technique has a power over the usual perturbative expansion, in that the non-linearity of the classical solution is also expanded in that approach. So on one hand he neglects "second quantization effects", but on the other hand he includes non-linear effects fully, which are usually only dealt with in a perturbative expansion. There might even be cancellations of "second quantization effects", I don't know. In the usual path integral approach, these are entangled (the classical non-linearities and the quantum effects).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

cheers,
Patrick.
 
  • #126
vanesch said:
However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no ? Aren't we cheating here ?
I still cannot see naively how you can have synchronized clicks with classical fields...

cheers,
Patrick.

EDIT: PS: I just ordered Mandel's quantum optics book to delve a bit deeper in these issues, I'll get it next week... (this is a positive aspect for me of this discussion)
 
Last edited:
  • #127
nightlight: ... Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of construction of M, which is a linear operator. That step occurs in the transition between 4.44 and 4.46, where M is obtained. The stuff you quote is the comparatively trivial part about making 4.45 look like a Schroedinger equation with a Hermitian operator (instead of M).

OK, for me the redefinition of the M of (3.2) into the “M’” of (4.46) was trivial :). In fact I considered §D and §E as just a method for a well-known problem of vector spaces: how to diagonalize certain non-Hermitian operators (i.e. the case where solving for the time evolution operator is easy).
Starting from a non-Hermitian operator M = M1 + iM2, where M1 and M2 are Hermitian, we may search for a linear transformation of the Hilbert space under which M becomes Hermitian (and may thus be diagonalized by a further transformation).

So the important point for me is that I think you accept that a PDE/ODE may be reformulated in the Hilbert space formalism, but without the obligation to get a unitary time evolution (i.e. id/dt|psi> = M|psi>, where M may or may not be Hermitian).

nightlight: The point of his procedure is to take as input any set of non-linear (evolution) PDEs and create a set of linear equations which approximate the non-linear set (an iterative procedure which arrives at an infinite set of linear approximations).

So “approximation” for you means an infinite set of equations to solve (like computing the inverse of an infinite-dimensional matrix). But theoretically, you know that you have an exact result. OK, let's take this definition. Well, you may accept more possibilities than I do: that a non-unitary time evolution (id/dt|psi> = M|psi>) may be changed, with an adequate transformation, into a unitary one.

So, the use of stochastic electrodynamics may theoretically be transposed into an exact id/dt|psi> = H|psi> (maybe in a 2nd-quantization view) where H (the hypothetical Hamiltonian) may not be Hermitian (and I think you assume it is not).

So if we assume (very quickly) that H is non-Hermitian (H = H1 + iH2, with H1 and H2 Hermitian and H1 >> H2 for all practical purposes), we thus have a Schroedinger equation with a kind of diffusion coefficient (H2) that may approach some of the modifications proposed in the literature to justify the “collapse” of the wave function (the decoherence program; see for example Joos, quant-ph/9908008, eq. 22, and many other papers).
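A minimal numerical sketch in Python of this decomposition, with made-up 2x2 matrices (nothing derived from SED or any specific model): write H = H1 + iH2 with both parts Hermitian, and check that evolution under the full H does not conserve the norm, while evolution under H1 alone does:

[code]
import numpy as np
from scipy.linalg import expm

# Toy 2x2 version of H = H1 + i*H2 (made-up numbers, nothing physical).
H1 = np.array([[1.0, 0.3],
               [0.3, 0.5]])                    # Hermitian part
H2 = np.array([[-0.10, 0.0],
               [0.0, -0.02]])                  # also Hermitian; i*H2 acts as a weak damping term
H  = H1 + 1j * H2

# The decomposition can always be recovered from H alone (cf. M = M1 + i*M2 above):
assert np.allclose((H + H.conj().T) / 2, H1)
assert np.allclose((H - H.conj().T) / (2j), H2)

psi0 = np.array([1.0, 0.0], dtype=complex)
for t in (0.0, 1.0, 3.0):
    n_full = np.linalg.norm(expm(-1j * H  * t) @ psi0)   # non-unitary: norm not conserved
    n_herm = np.linalg.norm(expm(-1j * H1 * t) @ psi0)   # unitary: norm stays exactly 1
    print(f"t = {t}:  |psi| under H = {n_full:.4f},  under H1 alone = {n_herm:.4f}")
[/code]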

Thus stochastic electrodynamics may (?) be checked more easily with experiments that measure the decoherence time of quantum systems rather than with the EPR experiment, where you have demonstrated that it is not possible to tell the difference.


Thus, I think, the promoters of this theory should work on a Hilbert space formulation (at least an approximate one, but with enough precision to get the non-Hermitian Hamiltonian). Then they may apply it to other experiments to demonstrate the difference (i.e. different predictions between standard QM and stochastic electrodynamics through, maybe, decoherence experiments) rather than working on experiments that cannot separate the theories and verifying that the two theories agree with the experiment:
It is their job, as their claim seems to be that standard QM is not OK.

Seratend.
 
  • #128
vanesch: Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I don't see how the "natural" part helps you restate the plain facts: the experiments refuted the "fair sampling" theories. It seems the plain fact still stands as is.

Secondly, I don't see it as natural for particles either. Barut has some EPR-Bell papers for spin-1/2 point particles in a classical SG apparatus, and he gets unfair sampling. In that case it was due to the dependence of the beam spread in the SG on the hidden classical magnetic moment. This yields particle paths in the ambiguous, even wrong, region, depending on the orientation of the hidden spin relative to the SG axis. For the coincidences it makes the coincidence accuracy sensitive to the angle between the two SGs, i.e. the "loophole" was in the analyzer violating fair sampling (the same analyzer problem occurs for high-energy photons).

It seems what you mean by "particle" is strict unitary evolution with sharp particle number, and in the Bell setup with the sharp value 1 on each side. That is not true of the QED photon number (you have neither a sharp nor a conserved photon number; and specifically for the proposed setups you either can't detect the photons reliably enough or you can't analyze them on a polarizer reliably enough, depending on the photon energy; these limitations are facts arising from the values of the relevant atomic interaction constants and cross sections, which are what they are, i.e. you can't assume you can change those, you can only come up with another design which works around any existing flaw that was due to the relevant constants).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

It helps in the sense that if you can show that Barut's self-field ED agrees with QED beyond the perturbative orders relevant for Quantum Optics measurements (which is actually the case), you can immediately conclude, without having to work out a detailed Quantum Optics/QED prediction (one accurate enough to predict exactly what would actually be obtained for correlations with the best possible detectors and polarizers consistent with the QED vacuum fluctuations and all the relevant physical constants of the detectors and polarizers), that a local model of the experiment exists for any such prediction -- the model would be the self-field ED itself.

Thus you would immediately know that however you vary the experimental setup/parameters, within the perturbative orders to which QED and the self-fields agree, something will be there to block it from violating locality. You don't need to argue about detector QE or polarizer efficiency; you know that anything they cobble together within the given perturbative orders will subject any conjecture about future technology being able to improve the efficiency of a detector (or other components) beyond threshold X to reductio ad absurdum. Thus the experimenter would have to come up with a design which relies in an essential way on predictions of QED perturbative orders going beyond the orders of agreement with the self-field ED. Note also that the existing experiments are well within the orders of agreement of the two theories, thus for this type of test there will always be a "loophole" (i.e. the actual data won't violate the inequality).

This is the same kind of conclusion that you would make about a perpetuum mobile claim which asserts that some particular design will yield 110% efficiency if the technology gets improved enough to have some sub-component of the apparatus work with accuracy X or better. The Bell tests make exactly this kind of assertion (without the fair sampling conjecture, which merely helps them claim that the experiments exclude all local "fair sampling" theories, and that is too weak to exclude anything that exists, much less all future local theories) -- we could violate the inequality if we could get an overall setup efficiency of X = 83% or better (which accounts for all types of losses, such as detection, polarizer, aperture efficiency, photon number spread,...).

Now, since in the perpetuum mobile case you can invoke the general energy conservation law, you can immediately tell the inventor that (provided the rest of his logic is valid) his critical component will not be able to achieve accuracy X, since that would violate energy conservation. You don't need to go into a detailed survey of all conceivable relations between the relevant interaction constants in order to compute the limits on the accuracy X for all conceivable technological implementations of that component. You would know it can't work since assuming otherwise would lead to reductio ad absurdum.
 
Last edited:
  • #129
vanesch: I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no ? Aren't we cheating here ?


The whole of section 2 is a derivation of the standard Quantum Optics prediction (in contrast to a toy QM prediction) expressed in terms of Wigner functions, which are a perfectly standard tool for QED or QM. So, of course, it is all standard Quantum Optics, just using Wigner's joint distribution functions instead of the more conventional Glauber correlation functions. The two are fully equivalent, thus that is all standard, entirely uncontroversial. They are simply looking for a more precise expression of the standard Quantum Optics prediction, showing the nature of the non-classicality in a form suitable for their analysis.

The gist of their argument is that if for a given setup you can derive a strictly positive Wigner function for the joint probabilities, there is nothing else to discuss: all the statistics is perfectly classical. The problem is how to interpret the other case, when there are negative probability regions of W (which show up in PDC or any sub-Poissonian setup). To do that they need to examine what kind of detection and counting the Wigner distribution corresponds to operationally (to make a prediction with any theoretical model you need operational mapping rules of that kind). With the correspondence established, their objective is to show that the effects of the background subtractions and the above-vacuum threshold adjustments on individual detectors (the detector adjustments can only set the detector's Poisson average; there is no sharp decision in each try, thus there is always some loss and some false positives) are in combination always sufficient to explain the negative "probability" of the Wigner measuring setups as artifacts of the combined subtraction and losses (the two can be traded off by the experimenter, but he will always have to choose between increased losses and increased dark currents, thus subtracting the background and unpaired singles in proportions of his choice but with the total subtraction not reducible below the vacuum fluctuation limit).

That is what the subsequent sections 3-5 deal with. I recall discussing the problems of that paper with them at the time, since I thought that the presentation (especially in section 5) wasn't sharp and detailed enough for readers who weren't up to speed on their whole previous work. They have since sharpened their detector analysis with more concrete detection models and detailed computations; you can check their later preprints on quant-ph if sections 4 & 5 don't seem sufficient for you.

Note that they're not pushing the SED model explicitly here, but rather working in the standard QO scheme (even though they know from their SED treatment where to look for the weak spots in the QO treatment). This is to make the paper more publishable, easier to get through the hostile gatekeepers (who might just say "SED who?" and toss it all out). After all, it is not a question of whether the experiments have produced any violation -- they didn't. In a fully objective system, that would be it; no one would have to refute anything since nothing was shown that needs refuting.

But in the present system, the non-locality side has an advantage, so that their handwave about "ideal" future technology is taken as a "proof" that it will work some day, which the popularizers and the educators then translate into "it worked" (which is mostly the accepted wisdom you'll find in these kinds of student forums). Unfortunately, with this kind of bias, the only remaining way for critics of the prevailing dogma is to show, setup by setup, why a given setup, even when granted the best conceivable future technology, can't produce a violation.

I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them, and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).
 
Last edited:
  • #130
nightlight said:
The gist of their argument is that if for a given setup you can derive a strictly positive Wigner function for the joint probabilities, there is nothing else to discuss: all the statistics is perfectly classical.

This is what I don't understand (I'm waiting for my book on quantum optics..., hence my silence). After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves. However, the statistics derived from these Fock states for photon detection assume a second quantization, no ? So how can you 1) use the statistics derived from second quantization (admittedly, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization ??

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.
-Find the Fock space description corresponding to it (with the Wigner function).
-From that Fock space description, calculate coincidence rates and other statistical properties.
-Associate with each classical EM wave, such statistics. (*)
-Take some correlation experiment with some data.
-Show that you can find, for the cases at hand, a classical EM wave such that when we apply the above rules, we find a correct prediction for the data.
-Now claim that these data are explainable purely classically.

To me, the last statement is wrong, because of step (*)
You needed second quantization in the first place to derive (*).

cheers,
Patrick.

EDIT: I think that I've been too fast here. It seems indeed that two-photon correlations cannot be described that way, and that you always end up with the equivalence of a single-photon repeated experiment, the coherent state experiment, and the classical <E^-(r1,t1) E^+(r2,t2)> intensity cross-correlation.
 
Last edited:
  • #131
nightlight said:
I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them, and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).

How about this:

http://scotty.quantum.physik.uni-muenchen.de/publ/pra_64_023802.pdf

look at figure 4

I don't see how you can explain these (raw, I think) coincidence rates unless you assume (and I'm beginning to understand what you're pointing at) that the classical intensities are "bunched" in synchronized intensity peaks.
However, if that is the case, a SECOND split of one of the beams towards 2 detectors should show a similar correlation, while the quantum prediction will of course be that they are anti-correlated.


2-photon beam === Pol. Beam splitter === beam splitter -- D3
                          |                    |
                          D1                   D2

Quantum prediction: D1 is correlated with (D2+D3) in the same way as in the paper ; D2 and D3 are Poisson distributed, so at low intensities anti-correlated.

Your prediction: D1 is correlated with D2 and with D3 in the same way, and D2 is correlated with D3 in the same way.

Is that correct ?

cheers,
Patrick.
 
Last edited by a moderator:
  • #132
vanesch: After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves.

No, the Wigner (or Husimi) joint distributions and their differential equations are a formalism fully equivalent to the Fock space formulation. The so-called "deformation quantization" (an alternative form of quantization) is based on this equivalence.

It is Glauber's "coherent states" which correspond to classical EM theory. Laser light is a template for coherent states. For them the Wigner functions are always positive, thus there is no non-classical prediction for these states.
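As a concrete illustration of that statement, using only the textbook closed forms (hbar = 1, dimensionless quadratures; nothing specific to the Marshall-Santos analysis): in Python, the Wigner function of a coherent state is a displaced Gaussian and never goes negative, while the one-photon Fock state already has a negative dip of depth 1/pi at the origin:

[code]
import numpy as np
from scipy.special import eval_laguerre

# Textbook closed forms (hbar = 1, dimensionless quadratures q, p):
#   coherent state:  W(q,p) = (1/pi) exp(-((q-q0)^2 + (p-p0)^2)),  (q0,p0) = displacement
#   Fock state |n>:  W(q,p) = ((-1)^n/pi) exp(-(q^2+p^2)) L_n(2(q^2+p^2))

def wigner_coherent(q, p, q0=1.0, p0=0.5):
    return np.exp(-((q - q0) ** 2 + (p - p0) ** 2)) / np.pi

def wigner_fock(q, p, n=1):
    r2 = q ** 2 + p ** 2
    return ((-1) ** n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2 * r2)

grid = np.linspace(-4, 4, 401)
Q, P = np.meshgrid(grid, grid)

print("min W, coherent state  :", wigner_coherent(Q, P).min())   # tiny but never negative
print("min W, |n=1> Fock state:", wigner_fock(Q, P, n=1).min())   # = -1/pi at the origin
[/code]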

So how can you 1) use the statistics derived from second quantization (admittedly, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization ??

The Marshall-Santos PDC paper you cited uses standard Quantum Optics (in the Wigner formalism) and a detection model to show that there is no non-classical correlation predicted by Quantum Optics for the PDC sources. They're not adding to or modifying the PDC treatment, merely rederiving it in the Wigner function formalism.

The key is in analyzing with more precision and finesse (than the usual engineering-style QO) the operational mapping rules between the theoretical distributions (computed under the normal operator ordering convention, which corresponds to Wigner's functions) and the detection and counting procedures. (You may need to check a few of their other preprints on detection for more details and specific models and calculations.) Their point is that the conventional detection & counting procedures (with the background subtractions and the tuning to [almost] no vacuum detection) amount to the full subtraction needed to produce the negative probability regions (conventionally claimed as non-classicality) of the Wigner distributions, and thus the standard QO predictions, for the PDC correlations.

The point of these papers is to show that, at least for the cases analyzed, Quantum Optics doesn't predict anything non-classical, even though PDC, sub-Poissonian distributions, anti-bunching,... are a soft non-classicality (they're only suggestive, e.g. at the superficial engineering or pedagogical levels of analysis, but not decisive like a violation of Bell's inequality, which absolutely no classical theory, deterministic or stochastic, can produce).

The classical-Quantum Optics equivalence for thermal (or chaotic) light has been known since the 1950s (this was clarified during the Hanbury Brown and Twiss effect controversy). A similar equivalence was established in 1963 for coherent states, making all laser light effects (plus linear optical elements and any number of detectors) fully equivalent to a classical description. Marshall and Santos (and their students) have extended this equivalence to PDC sources.

Note also that in these approaches (Marshall-Santos SED, Barut self-field ED, Jaynes neoclassical ED) 2nd quantization is viewed as a mathematical linearization procedure for the underlying non-linear system, and not something that adds any new physics. After all, the same 2nd quantization techniques are used in solid state physics and other areas for entirely different underlying physics. The 1st quantization is seen as a replacement of point particles by matter fields, thus there is no point in "quantizing" the EM field at all (it is a field already), or the Dirac matter field (again).

As background to this point, I mentioned (a few messages back) some quite interesting results by the mathematician Krzysztof Kowalski, which show explicitly how classical non-linear ODE/PDE systems can be linearized in a form that looks just like a bosonic Fock space formalism (with creation/annihilation operators, a vacuum state, particle number states, coherent states, a bosonic Hamiltonian and the standard quantum state evolution). In that case it is perfectly transparent that there is no new physics brought in by the 2nd quantization; it is merely a linear approximation of a non-linear system (it yields iteratively an infinite number of linear equations from a finite number of non-linear equations). While Kowalski's particular linearization scheme doesn't show that QED is a linearized form of non-linear equations such as Barut's self-field, it provides an example of this type of relation between the Fock space formalism and non-linear classical equations.
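To see the flavor of this kind of linearization without Kowalski's machinery, here is a minimal Python sketch of the same general idea (a Carleman-style embedding, not Kowalski's actual construction): the single nonlinear ODE dx/dt = -x^2 becomes the infinite linear chain dy_k/dt = -k*y_(k+1) for the monomials y_k = x^k, which is then truncated:

[code]
import numpy as np
from scipy.linalg import expm

# Nonlinear ODE: dx/dt = -x^2, exact solution x(t) = x0 / (1 + x0*t).
# Carleman-style linearization: with y_k = x^k,  dy_k/dt = k x^(k-1)(-x^2) = -k y_(k+1),
# an infinite *linear* chain, truncated here at order N by dropping the y_(N+1) coupling.
# (For this particular equation the truncation error behaves roughly like (x0*t)^N.)

x0, N = 0.5, 20

A = np.zeros((N, N))
for k in range(1, N):          # row k-1 encodes dy_k/dt = -k * y_{k+1}
    A[k - 1, k] = -k           # last row stays zero: that is the truncation

y0 = x0 ** np.arange(1, N + 1)

for t in (0.5, 1.0, 1.5):
    y = expm(A * t) @ y0       # evolve the truncated linear system
    exact = x0 / (1.0 + x0 * t)
    print(f"t = {t}:  linearized y_1 = {y[0]:.6f}   exact nonlinear x = {exact:.6f}")
[/code]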

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.


No, they don't do that (here). They're using the standard PDC model and are treating it fully within the QO, just in the Wigner function formalism.
 
Last edited:
  • #133
vanesch: I don't see how you can explain these (raw, I think) coincidence rates

Those are not even close to raw counts. Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy, and to reduce the background they used a short coincidence window, resulting in a quite low (for any non-classicality claim) pairs/singles ratio. Thus the combination of the background subtractions and the unpaired singles is much larger than what the Marshall-Santos PDC classicality requires (which assumes only the vacuum fluctuation noise subtraction, with everything else granted as optimal). Note that M-S SED (which is largely equivalent in predictions to Wigner distributions with positive probabilities) is not some contrived model made to get around a loophole, but a perfectly natural classical model, a worked-out and refined version of Planck's second quantum theory (of 1911, where he added an equivalent of (1/2)hν noise per mode).
 
  • #134
Brief question about the Aharonov-Bohm effect

Hello Nightlight,

I follow your exceptional discussion with Vanesch and a couple of other contributors. It is maybe the best dispute that I have read on this forum so far. In particular, it looks as if you and Vanesch are really getting 'somewhere'. I am looking forward to the final verdict whether or not the reported proof of spooky action at a distance is in fact valid.

Anyway, I consider the Aharonov-Bohm effect to be a similarly fundamental non-local manifestation of QM as the (here strongly questioned) Bell violation.

To put it a bit more provocatively, it also looks like a 'quantum mystery', which you seem to despise.

My question is: Are you familiar with this effect and, if yes, do you believe in it (whatever that means) or do you similarly think that there is a sophisticated semi-classical explanation?

Roberth
 
  • #135
Roberth: Anyway, I consider the Aharonov-Bohm effect to be a similarly fundamental non-local manifestation of QM as the (here strongly questioned) Bell violation.

Unfortunately, I haven't studied it in any depth beyond the shallow coverage in an undergraduate QM class. I always classified it among the "soft" non-locality phenomena, those which are superficially suggestive of non-locality (after all, both Psi and A still evolve fully locally) but lack decisive criteria (unlike Bell's inequalities).
 
  • #136
nightlight said:
Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy, and to reduce the background they used a short coincidence window, resulting in a quite low (for any non-classicality claim) pairs/singles ratio.

Aaah, stop with your QE requirements! I'm NOT talking about EPR kinds of experiments here, because I have to fight the battle upstream with you :smile: because normally people accept second quantization, and you don't. So I'm looking at possible differences between what first quantization and second quantization can do. And in the meantime I'm learning a lot of quantum optics :approve: which is one of the reasons why I continue with this debate. You have 10 lengths of advantage over me there (but hey, I run fast :devil:)

All single-photon situations are indeed fully compatible with classical EM, so I won't be able to find any difference in prediction there. Also, I have to live with low-efficiency photon detectors, because otherwise you object about fair sampling, so I'm looking at the possibility of a feasible experiment with today's technology that proves (or refutes?) second quantization. I'm probably a big naive guy, and this must have been done before, but as nobody is helping here, I have to do all the work myself :bugeye:

What the paper that I showed proves to me is that we can get correlated clicks in detectors way beyond simple Poisson coincidences. I now understand that you picture this as correlated fluctuations in intensity in the two classical beams.
But if the photon picture is correct, I think that after the first split (with the polarizing beam splitter) one photon of the pair goes one way, and the other one goes the other way, giving correlated clicks, superposed on about 3 times more uncorrelated clicks, and with a detection probability of about 20% or so, while you picture that as two intensity peaks in the two beams, giving rise to enhanced detection probabilities (and hence coincidences) at the two detectors (is that right ?).
Ok, up to now, the pictures are indistinguishable.
But I proposed the following extension of the experiment:

In the photon picture, each of the two branches just contains "single photon beams", right ? So if we put a universal beam splitter in one such branch (not polarizing), we should get uncorrelated Poisson streams in each one, so the coincidences between these two detectors (D2 and D3) should be those of two independent Poisson streams. However, D2 PLUS D3 should give a signal which is close to what the old detector gave before we split the branch again. So we will have a similar correlation between (D2+D3) and D1 as we had in the paper, but D2 and D3 shouldn't be particularly correlated.

In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

Is this a correct view ? Can this distinguish between the photon picture and the classical wave picture ? I think so, but as I'm not an expert in quantum optics, nor an expert in your views, I need an agreement here.
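For concreteness, here is a minimal Monte Carlo sketch in Python of the two pictures as stated here, with made-up numbers and with no detector background or ZPF/sub-threshold term included (so it contrasts the bare photon picture with the naive correlated-intensity-peak picture only):

[code]
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers; no background and no ZPF term is simulated.
n_windows = 2_000_000
p_pair    = 1e-3    # probability that a time window contains a pair / intensity peak
eta       = 0.2     # detection probability for the full "photon 2" intensity

peak = rng.random(n_windows) < p_pair

# Photon picture: photon 2 exits the 50/50 splitter toward D2 OR D3, never both.
to_d2 = rng.random(n_windows) < 0.5
d2_q = peak & to_d2  & (rng.random(n_windows) < eta)
d3_q = peak & ~to_d2 & (rng.random(n_windows) < eta)

# Naive classical picture: each detector sees half the intensity peak and fires
# independently (square-law detector) with probability eta/2.
d2_c = peak & (rng.random(n_windows) < eta / 2)
d3_c = peak & (rng.random(n_windows) < eta / 2)

# Accidental level: what independent clicks at the same singles rate would give.
accidental = (p_pair * eta / 2) ** 2 * n_windows

print("D2&D3 coincidences, photon picture :", int(np.sum(d2_q & d3_q)))   # ~0 here
print("D2&D3 coincidences, classical peaks:", int(np.sum(d2_c & d3_c)))   # far above chance
print("accidental level for these singles :", round(accidental, 2))
[/code]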

cheers,
Patrick.
 
  • #137
In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state but a superposition with at least a 1/2-photon equivalent spread.

Finally, the "classical" picture with ZPF allows a limited form of sub-Poissonian statistics for the adjusted counts[/color] (or the extrapolated counts, e.g. if you tune your detectors to a higher trigger threshold to reduce the explicit background subtractions in which case you raise the unpaired singles counts and have to extrapolate). This is due to the sub-ZPF superpositions (which enter the ZPF averaging for the ensemble statistics) of the signal and the ZPF in one branch after the splitter. Unless you're working in a high-noise, high sensitivity detector mode (which would show you, if you're specifically looking for it, a drop in the noise coinciding with the detection in the other branch), all you would see is an appearance of the sub-Poissonian behavior on the subtracted/extrapolated counts. But this is exactly the level of anticorrelation that the classical ZPF model predicts for the adjusted counts.

The variation you're describing was done for exactly that purpose in 1986 by P. Grangier, G. Roger and A. Aspect ("Experimental evidence for a photon anticorrelation effect on a beam splitter: a new light on single photon interference", Europhys. Lett. Vol 1 (4), pp 173-179, 1986). For the pair source they used their original Bell-test atomic cascade. Of course, the classical model they tested against, to declare the non-classicality, was the non-ZPF classical field, which can't reproduce the observed level of anticorrelation in the adjusted data. { I recall seeing another preprint of that experiment (dealing with the setup properties prior to the final experiments) which had more detailed noise data indicating a slight dip in the noise in the 2nd branch, for which they had some aperture/alignment type of explanation. }

Marshall & Santos had several papers following that experiment where their final Stochastic Optics[/color] (the SED applied for Quantum Optics) had crystallized, including their idea of "subthreshold" superposition[/color], which was the key for solving the anticorrelation puzzle. A longer very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall, Emilio Santos "Stochastic Optics: A Reaffirmation of the Wave Nature of Light" Found. Phys., Vol 18, No 2. 1988, pp 185-223, where they show that a perfectly natural "subthreshold" model is in full quantitative agreeement with the anticorrelation data (they also relate their "subthreshold"[/color] idea to its precursors, such as the "empty-wave"[/color] by Selleri 1984 and the "latent order"[/color] by Greenberger 1987; I tried to convince Trevor to adopt a less accurate but catchier term "antiphoton" but he didn't like it). Some day, when this present QM non-locality spell is broken, these two will be seen as Galileo and Bruno of our own dark ages.
 
Last edited:
  • #138
nightlight said:
You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state but a superposition with at least a 1/2-photon equivalent spread.

You repeated that already a few times, but I don't understand this. After all, I can just as well work in the Hamiltonian eigenstates. In quantum theory, if you lower the intensity of the beam enough, you have 0 or 1 photon, and there is no such thing to my knowledge as half a photon spread, because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit). After all, gamma photons are nothing else but Lorentz-transformed visible photons, so what is true for visible photons is also true for gamma photons, it is sufficient to have the detector speeding at you (ok, there are some practical problems to do that in the lab :-)
Also, if the beam intensity is low enough (but unfortunately the background scales with the beam intensity), it is a fair assumption that there is only one photon at a time in the field. So I'm pretty sure about what I said about the quantum predictions:

Initial pair -> probability e1 to have a click in D1
probability e2/2 to have a click in D2 and no click in D3
probability e3/2 to have a click in D3 and no click in D2

This is superposed on an independent probability b1 to have a click in D1, a probability b2 to have a click in D2 and a probability b3 to have a click in D3, all independent, but all proportional to some beam power I.

It is possible to have a negligible background by putting the thresholds high enough and isolating well enough from any light source other than the original power source. This can cost some efficiency, but not much.

If I is low enough, we consider that, due to the Poisson nature of the events, no independent events occur together (that is, we neglect probabilities that go as I^2). After all, this is just a matter of spreading the statistics over longer times.

So: rate of coincidences as predicted by quantum theory:
a) D1 D2 D3: none (order I^3)
b) D1 D2 no D3: I x e1 x e2/2 (+ order I^2)
c) D1 no D2 D3: I x e1 x e3/2 (+ order I^2)
d) D1 no D2 no D3: I x (b1 + e1 x (1- e2/2 - e3/2))
e) no D1 D2 D3: none (order I^2)
f) no D1 D2 no D3: I x (b2 + (1-e1) x e2/2)
g) no D1 no D2 D3: I x (b3 + (1-e1) x e3/2)
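A quick Monte Carlo sanity check in Python of the leading-order bookkeeping above, with hypothetical values for I, e1..e3, b1..b3 (the notation follows the list; none of the numbers come from any paper):

[code]
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical numbers, chosen only to test the leading-order formulas above.
I          = 0.01               # pair-production probability per time window
e1, e2, e3 = 0.20, 0.20, 0.20   # detection efficiencies
b1, b2, b3 = 0.05, 0.05, 0.05   # background click probability per window is b_i * I

N = 2_000_000
pair  = rng.random(N) < I
to_d2 = rng.random(N) < 0.5     # photon 2 goes to D2 or D3 at the 50/50 splitter

D1 = (pair & (rng.random(N) < e1))          | (rng.random(N) < b1 * I)
D2 = (pair & to_d2  & (rng.random(N) < e2)) | (rng.random(N) < b2 * I)
D3 = (pair & ~to_d2 & (rng.random(N) < e3)) | (rng.random(N) < b3 * I)

def rate(mask):                 # empirical rate per window, in units of I
    return mask.sum() / (N * I)

print("D1, D2, no D3 :", round(rate(D1 & D2 & ~D3), 4), " vs e1*e2/2 =", e1 * e2 / 2)
print("D1, no D2, D3 :", round(rate(D1 & ~D2 & D3), 4), " vs e1*e3/2 =", e1 * e3 / 2)
print("D2 and D3 both:", round(rate(D2 & D3), 5), " (higher order in I)")
[/code]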


A longer, very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall, Emilio Santos, "Stochastic Optics: A Reaffirmation of the Wave Nature of Light", Found. Phys., Vol 18, No 2, 1988, pp 185-223,

If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

cheers,
Patrick.
 
Last edited:
  • #139
vanesch: After all, I can just as well work in the Hamiltonian eigenstates.

The output of a PDC source is not the same as picking a state in Fock space freely. That is why they restricted their analysis to PDC sources, where they can show that the resulting states will not have a Wigner distribution more negative than what detection & counting calibrated to a null result for the 'vacuum fluctuations alone' would produce. That source doesn't produce eigenstates of the free Hamiltonian (consider also the time resolution of such modes with sharp energy). It also doesn't produce gamma photons.

because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit).

You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect in the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters). The interaction constants, cross sections, tunneling rates,... don't scale with the photon energy. You can have a virtually perfect detector for gamma photons. But you won't have a perfect analyzer or beam splitter. Thus, for gammas you can get nearly perfect particle-like behavior (and very weak wave-like behavior), which is no more puzzling or non-classical than a mirror with holes in the coating scanned by a thin light beam, mentioned earlier.

To preempt loose argument shifts of this kind, I will recall the essence of the contention here. We're looking at a setup where a wave packet splits into two equal, coherent parts A and B (packet fragments in orbital space). If brought together in a common area, A and B will produce perfect interference. If any phase shifts are inserted in the paths of A or B, the interference pattern will shift depending on the relative phase shift of the two paths, implying that in each try the two packet fragments propagate on both paths (this is also the propagation that the dynamical/Maxwell equations describe for the amplitude).

The point of contention is what happens if you insert two detectors DA and DB in the paths of A and B. I am saying that the two fragments propagate to their respective detectors, interact with the detectors, and each detector triggers or doesn't trigger, regardless of what happened at the other detector. The dynamical evolution is never suspended, and the triggering is solely a result of the interaction between the local fragment and its detector.

You're saying that, at some undefined stage of the triggering process of detector DA, the dynamical evolution of fragment B will stop, and fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.

The alleged empirical consequence of this conjecture is the "exclusion" of trigger B whenever trigger A occurs. The "exclusion" is such that it cannot be explained by the local mechanism of independent detection under the uninterrupted dynamical evolution of each fragment and its detector.

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusion, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at Poissonian square-law detectors (which apply to the photon energies relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try", so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2". Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states, which superpose all photon number states using for coefficient magnitudes the square roots of the Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use a sharp number state as the input, and thus will show a sharp number state in the output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter to split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (the Poissonian, square-law detectors relevant here) triggering in a given try is P(n,k) = n^k exp(-n)/k!, where n is the average number of triggers. A single multiphoton-capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or the sensitivity of the detectors) to get an average number of "photon 2" detector triggers of n=1. Then the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1) = 1/e = 37%. The probability of no ND trigger is P(n=1,k=0) = 1/e = 37%. Thus the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of the detector thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So let's look at n=0.1, i.e. on average we get 0.1 ND triggers for each trigger of the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1) = 9% and of no trigger P(n=0.1,k=0) = 90%.

Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26 = 1.4 times more single triggers than multiple triggers. It appears we have greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, thus the counts will appear as "exclusive" as we wish. But does this kind of low-intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments on all ND-1 detectors as soon as the 1 detector triggers?

Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m = n/ND is the average trigger rate of each of the ND detectors. Let's denote by p0 = P(m,k=0) the probability that one (specific) detector will not trigger. Thus p0 = exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1 = 1 - p0 = 1 - exp(-m). In your high-"exclusivity" (i.e. low-intensity) limit n -> 0, we will have m << 1 and p0 ~ 1 - m, p1 ~ m.

The probability that none of the ND detectors will trigger, call it D(0), is thus D(0) = p0^ND = exp(-m*ND) = exp(-n), which is, as expected, the same as the no-trigger probability of a single perfect multiphoton (square-law Poissonian) detector capturing all of "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is a binomial coefficient), the probability of exactly k detectors triggering is D(k) = p1^k * p0^(ND-k) * C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low-intensity limit (n -> 0) and for large ND (corresponding to perfect multiphoton resolution), D(k) becomes (using the Stirling approximation and p1*ND ~ m*ND = n) precisely the Poisson distribution P(n,k). Therefore, this low-intensity exclusivity which you keep bringing up is trivial, since it is precisely what independent triggers of each detector predict, no matter how you divide and combine the detectors (it is, after all, the basic property of the Poisson distribution).
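A short numerical check of this counting argument in Python: with p1 = 1 - exp(-n/ND) per detector and independent triggers, the distribution D(k) of the number of triggered detectors is binomial and, for large ND, numerically indistinguishable from the single-detector Poissonian P(n,k):

[code]
from math import comb, exp, factorial

def poisson(n, k):
    """P(n,k) = n^k e^-n / k!  -- one perfect square-law Poissonian detector."""
    return n ** k * exp(-n) / factorial(k)

def split_detectors(n, ND, kmax=4):
    """D(k): exactly k of ND independent detectors fire, each with its own
    Poissonian of mean m = n/ND, i.e. per-detector trigger probability
    p1 = 1 - exp(-m)."""
    p1 = 1.0 - exp(-n / ND)
    return [comb(ND, k) * p1 ** k * (1.0 - p1) ** (ND - k) for k in range(kmax + 1)]

for n in (1.0, 0.1):
    D = split_detectors(n, ND=2 ** 10)   # L = 10 levels of splitting
    P = [poisson(n, k) for k in range(5)]
    print(f"n = {n}")
    print("  D(k), 1024 independent detectors:", " ".join(f"{x:.4f}" for x in D))
    print("  P(n,k), one perfect detector    :", " ".join(f"{x:.4f}" for x in P))
    print("  multiple-trigger probability    :",
          f"{1 - D[0] - D[1]:.4f}", "vs", f"{1 - P[0] - P[1]:.4f}")
[/code]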

The real question is how to deal with the apparently sub-Poissonian cases, as in PDC. That is where these kinds of trivial arguments don't help. One has to, as Marshall & Santos do, look at the specific output states and find the precise degree of non-classicality (which they express for convenience in the Wigner function formalism). Their ZPF-based ("vacuum fluctuations" in the conventional treatment) detection and coincidence counting model allows for a limited degree of non-classicality in the adjusted counts. Their PDC series of papers shows that for PDC sources all the non-classicality is of this apparent type (the same holds for laser/coherent/Poissonian sources and chaotic/super-Poissonian sources).

Without a universal locality principle, you can only refute the specific overlooked effects of a particular claimed non-classicality setup. This does not mean that nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect, and that it is more likely that the experiment designer will overlook the more obscure effects.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments, since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.
 
Last edited:
  • #140
If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

I have only a paper preprint and no scanner handy which could make a usable electronic copy of it. The Los Alamos archive has their more recent preprints. Their preprint "The myth of the Photon" also reviews the basic ideas and contains a citation to the Phys. Rev. version of that Found. Phys. paper. For an intro to Wigner functions (and the related pseudo-distributions, the Husimi and Glauber-Sudarshan functions) you can check these lecture notes http://web.utk.edu/~pasi/davidovich.pdf and a longer paper with more on their operational aspects.
 
Last edited:
  • #141
nightlight said:
You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect in the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters).

I'm looking more into Santos and Co's articles. It's a slow read, but I'm working my way up... so patience :-) BTW, thanks for the lecture notes, they look great!


You're saying that, at some undefined stage of the triggering process of detector DA, the dynamical evolution of fragment B will stop, and fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.

Not at all. My view (which I have expressed here already a few times in other threads) is quite different, and I don't really think you need a "collapse at a distance" at all - I'm in fact quite a fan of the decoherence program. You just get interference of measurement results when they are compared by the single observer who gets hold of both measurements in order to calculate the correlation. This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program). But the point was not MY view :-)

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions[/color], which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is Poissonian source (produces coherent states which superpose all photon number states using for coefficient magnutudes the square-roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use as the input the sharp number state, thus they'll show a sharp number state in output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers. A single multiphoton capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Thus the probability of multiple ND triggers is now only about 0.5%, i.e. we have roughly 19 times more single triggers than multiple triggers, while before, for n=1, we had only 37/26=1.4 times more single triggers than multiple triggers. It appears we had greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, thus the counts will appear as "exclusive" as we wish. But does this kind of low intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments on all ND-1 detectors as soon as the 1 detector triggers?

Of course not. Let's look at what happens under the assumption that each of the ND detectors triggers via its own Poissonian, entirely independently of the others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of the ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each of the ND detectors. Let's denote p0=P(m,k=0) the probability that one (specific) detector will not trigger. Thus p0=exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high "exclusivity" (i.e. low intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.

The probability that none of the ND's will trigger, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as the no-trigger probability of a single perfect multiphoton (square law Poissonian) detector capturing all of the "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is a binomial coefficient), the probability of exactly k detectors triggering is D(k)=p1^k*p0^(ND-k)*C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low intensity limit (n->0) and for large ND (corresponding to a perfect multiphoton resolution), D(k) becomes (using the Stirling approximation and p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore, this low intensity exclusivity which you keep bringing up is trivial since it is precisely what the independent triggers of each detector predict no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).
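As a quick numerical check of this claim (a minimal sketch, assuming the ideal Poissonian square-law detectors described above; the function names and the choice ND=16 are just illustrative), one can compare the single-detector Poisson P(n,k) with the splitter-tree binomial D(k):

Code:
from math import comb, exp, factorial

def poisson(n, k):
    # P(n,k) = n^k exp(-n) / k! : one ideal square-law detector seeing the whole beam
    return n**k * exp(-n) / factorial(k)

def splitter_tree(n, ND, k):
    # D(k): exactly k of ND independent detectors click, each seeing the mean rate m = n/ND
    m = n / ND
    p0 = exp(-m)      # a given detector stays silent
    p1 = 1 - p0       # a given detector clicks (1 or more "photons")
    return comb(ND, k) * p1**k * p0**(ND - k)

for n in (1.0, 0.1):
    for k in range(4):
        print(f"n={n} k={k}:  P(n,k)={poisson(n, k):.4f}   D(k), ND=16: {splitter_tree(n, 16, k):.4f}")

Already at ND=16 the two distributions nearly coincide, and they converge as ND grows and n shrinks, which is the sense in which the low-intensity "exclusivity" is just a property of independent Poissonian triggers.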

You're perfectly right, and I acknowledged that already a while ago when I said that there's indeed no way to distinguish "single photon" events that way. What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time, and with perfect detectors WILL be in one place at a time. It would correspond to the two 511 keV photons in positron annihilation, for example. I admit that my views are maybe a bit naive for opticians: my background is in particle physics, and currently I work with thermal neutrons, which come nicely in low-intensity Poissonian streams after interference all the way down to the detection spot. So clicks are marbles :-)) There are of course differences with optics: First of all, out of a reactor rarely come correlated neutron pairs :-), but on the other hand, I have all the interference stuff (you can have amazing correlation lengths with neutrons!), and the one-click-one-particle detection (with 98% efficiency or more if you want), background ~ 1 click per hour.


This does not mean that nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect, and that the more obscure the effect, the more likely the experiment designer is to overlook it.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.

I agree with you here concerning the scientific attitude to adopt, and apart from a stimulus for learning more quantum optics, it is the main motivation to continue this discussion :-) To me, these experiments don't exclude anything, but they confirm the quantum predictions beautifully. So it is very well possible that completely different theories will have similar predictions; it is "sufficient" to work them out. However, if I were to advise a student (but I won't because it is not my job) on whether to take that path or not, I'd strongly advise against it, because there's so much work to do first: you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great. On the other hand, we have a beautifully working theory which explains most if not all of it. So it is probably more fruitful to go further along the successful path than to err "where no man has gone before". On the other hand, for a retired professor, why not play with these things :-) I myself wouldn't dare, for the moment: I hope to make more "standard" contributions and I'm perfectly happy with quantum theory as it stands now - even though I think it isn't the last word, and we will have another theory, 500 years from now. But I can make sense of it, it works great, and that's what matters. Which doesn't mean that I don't like challenges like you're proposing :-)


cheers,
Patrick.
 
Last edited:
  • #142
I might have misunderstood an argument you gave. After reading your text twice, I think we're not agreeing on something.

nightlight said:
Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Nope, I am assuming 100% efficient detectors. I don't really know what you mean by "Poissonian square law detectors" (I guess you mean some kind of bolometer which gives a Poissonian click rate as a function of incident energy). I'm within the framework of standard quantum theory and I assume "quantum theory" detectors. You can claim they don't exist, but that doesn't matter, I'm talking about a QUANTUM THEORY prediction. This prediction can be adapted for finite quantum efficiency, and assumes fair sampling. Again, I'm not talking about what really happens or not, I'm talking about standard quantum theory predictions, whether correct or not.

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2."

Well, the quantum prediction with 100% efficient detectors is 100% correlation, because there are EXACTLY as many photons, at the same moment (at least on the time scale of the window), in both beams. The photons can really be seen as marbles, in the same way as the two 511 keV photons from a positron annihilation can be seen, pairwise, in a tracking detector, or the tritium and proton disintegration components can be seen when a He3 nucleus interacts with a neutron.

Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (produces coherent states which superpose all photon number states using for coefficient magnitudes the square-roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use as the input the sharp number state, thus they'll show a sharp number state in output).

Yes, but this is the only Poissonian source, and we can turn down the production rate of pairs (which form a Poissonian source with a much lower rate than the incident beam, which has to be rather intense). So the 2-photon states indeed also come in a Poissonian superposition (namely the state |0,0>, the state |1,1>, the state |2,2> ...), where |n,m> indicates n blue and m red photons, but with coefficients drawn from a much-lower-rate Poissonian distribution, which means that essentially only |0,0> and |1,1> contribute. So one can always take the low intensity limit and work with a single state.

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers.

This is correct, for a single-photon coherent beam, if we take A RANDOMLY SELECTED TIME INTERVAL. It is just a dead time calculation, in fact.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%.

No, this is not correct, because there is a 100% correlation between the photon-1 trigger on D1 and the sum of all the photon-2 clicks. THE TIME INTERVAL IS NOT RANDOM ! You will have AT LEAST 1 click in one of the detectors (and maybe more, if we hit a |2,2> state).
So you have to scale up the above Poissonian probabilities by conditioning on at least one pair being present, i.e. by a factor 1/(1 - exp(-1)), about 1.6.

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Again, because of the trigger on detector D1, we do not have a random time interval, and we have to scale up the probabilities by a factor 10. So the probability of seeing a single ND trigger is 90%, and the probability of having more than 1 is 10%. The case of no triggers is excluded by the perfect correlation.
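For what it's worth, here is a small sketch of this conditioning argument under the same assumptions (Poissonian pair production with mean n per window, perfect pair correlation, 100% efficient detectors): the exact rescaling factor is 1/(1 - exp(-n)), which is about 10.5 for n = 0.1, close to the rough factor 10 above.

Code:
from math import exp, factorial

def poisson(n, k):
    return n**k * exp(-n) / factorial(k)

def given_trigger(n, k):
    # P(k photon-2 clicks | a reference trigger on D1), i.e. conditioned on >= 1 pair
    return poisson(n, k) / (1 - poisson(n, 0))

for n in (1.0, 0.1):
    single = given_trigger(n, 1)
    multiple = 1 - single          # k = 0 is excluded by the D1 trigger
    print(f"n={n}: rescaling factor {1 / (1 - exp(-n)):.2f}, "
          f"single click {single:.1%}, more than one {multiple:.1%}")

For n = 0.1 this gives roughly 95% single clicks and 5% multiple clicks, with the multiple-click fraction shrinking further as n is lowered.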


cheers,
Patrick.
 
  • #143
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure so they can interfere their results? I don't think the universe is being run by some kind of magnified Stalin, lording over creation and every now and then erasing fallen comrades from the photos to make a different, more consistent history.

This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program).

Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

The formal decoherence schemes have been around since at least the early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition. And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

It is just another try at coming up with new and improved mind-numbing verbiage, more mesmerising and slippery than the old one which got worn out, to uphold the illusion of being in possession of a coherent theory for just a bit longer, until there is something truly coherent to take its place.

I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and "Friedrichs model" ... to any poor soul I could corner, gave a seminar, then a lecture to undergraduates as their QM TA (I hope it didn't take),...

What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time,

You seem to be talking about time modulation of the Poissonian P(n,k), where n=n(t). That does correlate the 1 and 2 trigger rates, but that kind of exclusivity is equally representative of fields and particles. In the context of QM/QED, where you already have a complete dynamics for the fields, such informal duality violates Occam's razor (classical particles can be simulated in all regards by classical fields, and vice versa; it is the dual QM kind that lacks coherence).
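A tiny simulation of that point (a sketch only; the exponential modulation chosen for n(t) is an arbitrary illustration): two detectors driven by a shared, fluctuating classical intensity produce correlated counts even though each detector fires independently given n(t).

Code:
import numpy as np

rng = np.random.default_rng(0)
windows = 100_000
n_t = rng.exponential(scale=0.1, size=windows)   # shared fluctuating mean rate n(t) per window
clicks1 = rng.poisson(n_t)                       # detector 1: its own Poissonian, given n(t)
clicks2 = rng.poisson(n_t)                       # detector 2: its own Poissonian, given n(t)
print("count correlation:", np.corrcoef(clicks1, clicks2)[0, 1])   # positive, purely classical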

you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great.

The explorers don't have to build roads, bridges and cities, they just discover new lands and if these are worthy, the rest will happen without any of their doing.

On the other hand, we have a beautifully working theory which explains most if not all of it.

If you read the Jaynes passage quoted a few messages back (or his other papers on the theme, or Barut's views, or Einstein's and Schroedinger's, even Dirac's and some of contemporary greats as well), "beautiful" isn't the attribute that goes anywhere in the vicinity of QED, in any role and under any excuse. Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument). It is not a kind of power directed forward to the new unseen phenomena, the way Newton's or Maxwell's theories were. Rather, it is more like a scheme for post hoc rationalizations of whatever came along from the experimenters (as Jaynes put it -- the Ptolemaic epicycles of our age).

On the other hand, for a retired professor, why not play with these things :-)

I've met a few like that, too. There are other ways, though. Many physicists in the USA ended up, after graduate school or maybe one postdoc, on Wall Street or in the computer industry, or created their own companies (especially in software). They don't live by the publish-or-perish dictum and don't have to compromise any ideas or research paths for academic fashions and politicking. While they have less time, they have more creative freedom. If I were to bet, I'd say that the future physics will come precisely from these folks (e.g. Wolfram).

even though I think it isn't the last word, and we will have another theory, 500 years from now.

I'd say it's around the corner. Who would be going into physics if he believed otherwise. (Isn't there a little Einstein hiding in each of us?)
 
  • #144
nightlight said:
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure so they can interfere their results?

It just continued in unitary evolution. It started collapsing when I was born, it is collapsing all the time now that I'm living, and it will continue to run unitarily after I die. If ever I reincarnate it will start collapsing again. Nobody else can do the collapse but me, and nobody else is observing but me. How about that ? It is a view of the universe which is completely in sync with my egocentric attitudes. I never even dreamed of physics giving me a reason to be that way :smile:

You should look a bit more at recent decoherence work by Zeh and Joos for instance. Their work is quite impressive. I think you're mixing up the relative state view (which dates mostly from the sixties) with their work which dates from the nineties.

cheers,
Patrick.
 
  • #145
nightlight said:
You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

This is not true. If you take the probability of a series of 1000 flips together, as one observation in a decoherence-like way, then this probability is exactly equal to the probability you would get classically by considering each flip at a time and making up series of 1000; meaning the series with about 30% heads in them will have a relatively high probability compared to series in which, say, you'll find 45% heads. It depends on whether I personally observe each flip or whether I just look at the record of the result of 1000 flips. But the results are indistinguishable.
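A quick numerical illustration of that equivalence-of-bookkeeping point, assuming for concreteness a biased coin with P(heads) = 0.3: a 1000-flip record with about 30% heads is overwhelmingly more probable than one with 45% heads, whichever way the flips are grouped into observations.

Code:
from math import comb

def record_probability(nflips, nheads, p=0.3):
    # probability of a record with exactly nheads heads in nflips flips
    return comb(nflips, nheads) * p**nheads * (1 - p)**(nflips - nheads)

print("P(300 heads in 1000):", record_probability(1000, 300))   # about 0.03
print("P(450 heads in 1000):", record_probability(1000, 450))   # about 5e-24, negligible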

cheers,
Patrick.
 
  • #146
nightlight said:
I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and "Friedrichs model" ... to any poor soul I could corner, gave a seminar, then a lecture to undergraduates as their QM TA (I hope it didn't take),...
...
I'd say it's around the corner. Who would be going into physics if he believed otherwise. (Isn't there a little Einstein hiding in each of us?)

Like you're now babbling senselessly about Santos and Barut's views ? :-p :-p :-p
Ok, that one was easy, I admit. No, these discussions are really fun, so I should refrain from provoking namecalling games :rolleyes:
As I said, it makes me learn a lot of quantum optics, and you seem to know quite well what you're talking about.

cheers,
Patrick.
 
  • #147
nightlight said:
Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument).

Yes, I read that, but I can't agree with it. When you read Weinberg, QFT is derived from 3 principles: special relativity, the superposition principle and the cluster decomposition principle. This completely fixes the QFT framework. The only thing you plug in by hand is the gauge group itself (U(1)xSU(2)xSU(3)) and its representation, plus a Higgs potential, and out pops the standard model, complete with all its fields and particles, from classical EM through beta decay to nuclear structure (true, the QCD calculations in the low energy range are still messy ; but lattice QCD is starting to give results).
I have to say that I find this impressive, that from a handful of parameters you can build up all of known physics (except gravity of course). That doesn't exclude the possibility that other ways exist, but you are quickly in awe of the monumental work such a task would involve.

cheers,
patrick.
 
  • #148
vanesch said:
Again, because of the trigger on detector D1, we do not have a random time interval, and we have to scale up the probabilities by a factor 10. So the probability of seeing a single ND trigger is 90%, and the probability of having more than 1 is 10%. The case of no triggers is excluded by the perfect correlation.

I'd like to point out that a similar reasoning (different from the "Poissonian square law" radiation detectors) holds even for rather low photon detection efficiencies. If the efficiencies are, say, 20% (we take them all equal), then in our above scheme, we will have a probability of exactly one detector triggering equal to 0.2 x 0.9 = 18%, a probability of exactly two detectors triggering equal to 0.2 x 0.2 x (0.09...), which is a bit less than 0.4%, etc...
So indeed, there is some "Poisson-like" distribution due to the finite efficiencies, but it is a FIXED suppression of coincidences by factors of 0.2, 0.04...
At very low intensities, the statistical Poisson coincidences should be much lower than these fixed suppressions (which are the quantum theory way of saying "fair sampling"), so we'd still be able to discriminate between the "anticoincidences" due to the fact that each time there's only one marble in the pipe, and the anticoincidences due to lack of efficiency, if each detector were generating its Poisson series on its own.

A way to picture this is by using, instead of beam splitters, a setup which causes diffraction of the second photon, and a position-sensitive photomultiplier (which is just an array of independent photomultipliers) looking at the diffraction picture.
You will slowly build up the diffraction picture with the synchronized clicks from detector 1 (which looks at the first photon of the pair) ; of course, each time detector one clicks, you will only have a chance of 0.2 of finding a click on the position-sensitive PM. If the beam intensity is low enough, you will NOT find a second click on that PM in anything like 0.04 of the cases. This is something that can only be achieved with a particle and is a prediction of QM.
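To put rough numbers on this picture, here is a sketch under the same assumptions (Poissonian pair production with mean n per window, perfect pair correlation, 20% detection efficiency on the photon-2 side, detectors otherwise ideal; ETA and the truncation KMAX are just illustrative parameters):

Code:
from math import comb, exp, factorial

ETA = 0.2     # assumed per-photon detection efficiency
KMAX = 25     # truncation of the pair-number sum (ample for n <= 1)

def pairs_given_trigger(n, k):
    # P(k pairs | at least one pair), conditional Poisson
    return (n**k * exp(-n) / factorial(k)) / (1 - exp(-n))

def clicks_given_trigger(n, c):
    # P(c clicks on the photon-2 array | reference trigger on detector 1)
    return sum(pairs_given_trigger(n, k) * comb(k, c) * ETA**c * (1 - ETA)**(k - c)
               for k in range(max(c, 1), KMAX))

for n in (1.0, 0.1, 0.01):
    one = clicks_given_trigger(n, 1)
    multi = 1 - clicks_given_trigger(n, 0) - one
    print(f"n={n}: exactly one click {one:.3f}, two or more {multi:.5f}")

The single-click probability stays pinned near the fixed 0.2 efficiency factor, while the multiple-click probability falls off with the intensity, which is the contrast with detectors generating their own independent Poisson series.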

cheers,
Patrick.
 
  • #149
vanesch You should look a bit more at recent decoherence work by Zeh and Joos for instance. Their work is quite impressive. I think you're mixing up the relative state view (which dates mostly from the sixties) with their work which dates from the nineties.

Zeh was already a QM guru pontificating on QM measurement when my QM professor was a student, and he still appears to be in a superposition of views. I looked up some of his & Joos' recent preprints on the theme. Irreversible macroscopic apparatus, 'coherence destroyed very rapidly',... basically the same old stuff. Still the same problem as with the original Daneri, Loinger and Prosperi macroscopic decoherence scheme of 1962.

It is a simple question. You have |Psi> = a1 |A1> + a2 |A2> = b1 |B1> + b2 |B2>. These are two equivalent orthogonal expansions of the state Psi, for two observables [A] and [B] of some system (where the system may be a single particle, an apparatus with a particle, the rest of the building with the apparatus and the particle,...). On what basis does one declare that we have value A1 of [A] for a given individual instance (you need this to be able even to talk about statistics of a sequence of such values)?
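As a toy instance of that question (illustrative only, with [A] and [B] taken as sigma_z and sigma_x for a single qubit): the same state has a sharp expansion in one basis and an even-weight expansion in the other, and nothing in the state by itself says which expansion supplies the individual outcomes.

Code:
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # [A] eigenbasis |A1>, |A2>
plus = (up + down) / np.sqrt(2)                          # [B] eigenbasis |B1>
minus = (up - down) / np.sqrt(2)                         # [B] eigenbasis |B2>

psi = plus                                               # one and the same state Psi

a1, a2 = up @ psi, down @ psi      # coefficients in the [A] expansion
b1, b2 = plus @ psi, minus @ psi   # coefficients in the [B] expansion
print("[A] expansion:", round(float(a1), 3), round(float(a2), 3))   # 0.707 0.707
print("[B] expansion:", round(float(b1), 3), round(float(b2), 3))   # 1.0   0.0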

At some strategically placed point within their mind-numbing verbiage these "decoherence" folks will start pretending that it was already established that a1|A1>+a2|A2> is the "true" expansion and A1 is its "true" result, in the sense that allows them to talk about statistical properties of a sequence of outcomes at all, i.e. in exactly the same sense that in order to say: the word "baloney" has 7 letters, you have (tacitly at least) assumed that the word has a first letter, a second letter,... Yet, these slippery folks will fight tooth and nail against such a conclusion, or even against the assumption that there is an individual word at all, yet this word-non-word still somehow manages to have exactly seven letters-non-letters. (The only innovation worthy of note in the "new and improved" version is that they start by telling you right upfront that they're not going to do, no sir, not ever, absolutely never, that kind of slippery maneuver.)

No thanks. Can't buy any of it. The only rationale precluding one from plainly saying that an individual system has the definite properties all along is Bell's "QM prediction" (which in turn cannot be deduced without assuming the non-dynamical collapse/projection postulate, the collapse postulate which is needed to solve the "measurement" problem, the problem of the absence of definite properties in a superposition, thus the problem which still exists solely because of Bell's "QM prediction").

If you drop the non-dynamical collapse postulate, you don't have Bell's "prediction" (you would still have the genuine "predictions" of the kind that actually predict the data obtained, warts and all). There is no other result at that point preventing you from interpreting the wave function as a real matter field, evolving purely, without any interruptions, according to the dynamical equations (which happen to be nonlinear in the general coupled case) and representing thus the local "hidden" variables of the system. The Born rule would be reshuffled from its pedestal of a postulate to a footnote in scattering theory, the way and the place it got into QM. It is an approximate rule of thumb, its precise operational meaning depending ultimately on the apparatus design and the measuring and counting rules (as it actually happens in any application, e.g. with the operational interpretations of Glauber P vs Wigner vs Husimi phase space functions), just as it would have been if someone had introduced it into the classical EM theory of light scattering in the 19th century. The 2nd quantization is then only an approximation scheme for these coupled matter-EM fields, a linearization algorithm (similar to Kowalski's and virtually identical to the QFT algorithms used in solid state and other branches of physics), adding no more new physics to the coupled nonlinear fields than, say, the Runge-Kutta numeric algorithm adds to the fluid dynamics Navier-Stokes equations.

Interestingly, in one of his superposed eigenviews, master guru Zeh himself insists that the wave function is a regular matter field and definitely not a probability "amplitude" -- see his paper "There is no "first" quantization", where he characterizes the "1st quantization" as merely a transition from a particle model to a field model (the way I did several times in this thread; which is of course how Schroedinger, Barut, Jaynes, Marshall & Santos, and others have viewed it). Unfortunately, somewhere down the article, his |Everett> state superposes in, nudging very gently at first, but eventually overtaking his |Schroedinger> state. I hope he makes up his mind in the next forty years.
 
  • #150
nightlight said:
Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

But that is exactly how locality is preserved in my way of viewing things ! I pretend that there is NO collapse at a distance, but that the record of remote measurements remains in a superposition until I look at the result and compare it with the other record (also in a superposition). It is only the power of my mind that forces a LOCAL collapse (beware: it is sufficient that I look at you and you collapse :devil:)

The formal decoherence schemes have been around since at least the early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition.

I know, but that is absolutely not the issue (and often people who do not know exactly what decoherence means make this statement). Zeh himself is very keen on pointing out that decoherence by itself doesn't solve the measurement problem ! It only explains why - after a measurement - everything looks as if it were classical, by showing which is the PREFERRED BASIS to work in. Pure relative-state fans (Many Worlds fans, of which I was one until a few months ago) think that somehow they will, one day, get around this issue. I think it won't happen if you do not add in something else, and the something else I add in is that it is my consciousness that applies the Born rule. In fact this comes very close to the "many minds" interpretation, except that I prefer the single mind interpretation :biggrin: exactly in order to be able to preserve locality. After all, I'm not aware of any other awareness except the one I'm aware of, namely mine :redface:.



And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

I think you're misreading the decoherence program, which you seem to confuse with hardline many-worlders. The decoherence program tells you that whether you consider the collapse at each individual flip, or only at the level of the 1000-flip series, the result will be the same, because the non-diagonal terms in the density matrix vanish at a monstrously fast rate for any macroscopic system (UNLESS, of course, WE ARE DEALING WITH EPR LIKE SITUATIONS!). But in order to be able to even talk about a density matrix, you need to assume the Born rule (in its modern version). So the knowledgeable proponents of decoherence are well aware that they'll never DERIVE the Born rule that way, because they USE it. They just show equivalence between two different ways of using it.
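A toy version of that off-diagonal suppression (a sketch, not a derivation: one system qubit a|0> + b|1> entangled with N environment qubits whose two branch states overlap by a factor c each; the numbers a, b, c below are arbitrary choices for illustration):

Code:
import numpy as np

a = b = 1 / np.sqrt(2)    # system amplitudes (chosen for illustration)
c = 0.9                   # per-environment-qubit branch overlap <e0|e1>

for N in (1, 10, 100, 1000):
    off_diag = a * np.conj(b) * c**N   # off-diagonal element of the reduced density matrix
    print(f"N = {N:4d} environment qubits:  |rho_01| = {abs(off_diag):.3e}")

The off-diagonal term dies exponentially in the number of environment degrees of freedom, which is the "monstrously fast" vanishing referred to above; the diagonal weights |a|^2 and |b|^2, i.e. the Born-rule ingredients, are untouched.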


cheers,
Patrick.
 
