What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment

Summary
The discussion addresses confusion surrounding Young's experiment on wave-particle duality, particularly regarding the behavior of light as both waves and particles. Participants note that when photons are sent through a double-slit apparatus, they create an interference pattern, even when sent one at a time, contradicting claims that observation alters this behavior. The conversation highlights the misconception that observing light forces it to behave like particles, while in reality, it continues to exhibit wave-like properties unless specific measurements are taken. Additionally, the role of detectors and their influence on the results is debated, with references to classical interpretations of quantum phenomena. Overall, the discussion emphasizes the complexities and misunderstandings inherent in quantum mechanics and the interpretation of experimental outcomes.
  • #91
nightlight said:
vanesch Is this then also true for 1 MeV gamma rays?

You still have vacuum fluctuations. But the gamma-ray photons obviously stand in a much better relation to any frequency cutoff of the fluctuations than optical photons, thus they indeed have much better signal-to-noise on the detectors. But this same energy advantage turns into a disadvantage for non-classicality tests, since the polarization coupling to the atomic lattice is proportionately smaller, thus the regular polarizers (or half-silvered mirrors with preserved coherent splitting) won't work for gamma rays.

I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't pose a problem for the explanations used to explain away photons in the visible range.

cheers,
Patrick.
 
  • #92
nightlight said:
But, as explained at length earlier, if the classical theory uses the ZPF (boundary & initial conditions) and the correlation functions are computed to subtract the ZPF (to match what is done in Glauber's prescription for Quantum Optics and what the detectors do by background subtractions and tuning to have null triggers when there is no signal), then it produces the same coincidence counts. The sub-Poissonian trait of the semiclassical model is the result of the contribution of sub-ZPF superpositions (the "dark wave"). See the earlier message and the references there.

Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.


cheers,
Patrick.
 
  • #93
vanesch Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.

The term "dark wave" is just picturesque depiction (a visual aid), a la Dirac hole, except this would be like a half-photon equivalent hole in the ZPF. That of course does get averaged over the entire ZPF distribution and without further adjustments of data nothing non-classical happens.

But, if you model a setup calibrated to "ignore" vacuum fluctuations (via a combination of background subtractions and detector sensitivity adjustments), the way Quantum Optics setups are (since Glauber's correlation functions subtract the vacuum contributions as well) -- then the subtraction of the average ZPF contributions makes the sub-ZPF contribution appear as having negative probability (which mirrors the negative regions of the Wigner distribution) and able to replicate the sub-Poissonian effects on the normalized counts. The raw counts don't have sub-Poissonian or negative-probability traits, just as they don't in Quantum Optics. Only the adjusted counts (background subtractions and/or the extrapolations of the losses via the "fair sampling" assumption) have such traits.

See those references for PDC classicality via Stochastic electrodynamics I gave earlier.
 
  • #94
I think neither of you should argue about the possibility of getting a detector triggered by a single photon, as that is impossible (in classical or quantum physics).
Experiments with single photons are only a simplification of what really occurs (i.e., the problem of the double-slit experiment, cavities, etc.). We just roughly say: I have an electromagnetic energy of ħω.

Remember that a “pure photon” is a single energy (a single frequency) and thus requires an infinite measurement time: reducing this measurement time (the interaction is changed) modifies the field.

It is like trying to build a super filter that would extract a pure sine wave from an electrical signal or a sound wave.
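A minimal numerical sketch of this point (purely illustrative; the sampling rate and frequencies are arbitrary choices, not tied to any experiment discussed here): the shorter the observation window, the wider the spectral line, so a truly monochromatic "pure photon" would need an infinite measurement time.

```python
import numpy as np

fs = 1000.0                      # sampling rate (arbitrary units)
f0 = 50.0                        # "carrier" frequency of the wave
nfft = 1 << 16                   # zero-pad so the line shape is well resolved
for T in (0.1, 1.0, 10.0):       # observation window lengths
    t = np.arange(0.0, T, 1.0 / fs)
    x = np.sin(2 * np.pi * f0 * t)
    spec = np.abs(np.fft.rfft(x, n=nfft)) ** 2
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    main = freqs[spec >= spec.max() / 2.0]          # main lobe above half maximum
    print(f"window T = {T:5.1f} -> spectral FWHM ~ {main[-1] - main[0]:.3f} (~0.89/T = {0.886 / T:.3f})")
```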

Also, don't forget that the quantization of the electromagnetic field gives the well-known energy eigenvalues (a photon number) only in the case of a free field (no sources), where we have a free electromagnetic Hamiltonian (Hem).
Once there is interaction, the eigenvalues of the electromagnetic field change and are given by Hem+Hint (i.e., you modify the photon-number (energy) basis <=> you change the number of photons).

Seratend.
 
  • #95
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't pose a problem for the explanations used to explain away photons in the visible range.

We were talking about a coherent beam splitter which splits the QM amplitude (or classical wave) into two equal coherent sub-packets. If you try the same with gamma rays, you have a different type of split (non-equal amplitudes), i.e. you would merely have a classical mixture (a multidimensional rho instead of a one-dimensional Psi, in Hilbert space terms) of the left-right sub-packets. This gives it a particle-like appearance, i.e. in each try it goes one way or the other but not both ways. But that type of scattering is perfectly within semi-classical EM modelling.

Imagine a toy classical setup with a mirror containing large holes in the coating (covering half the area), and quickly scan a "thin" light beam over it in a random fashion. Now a pair of detectors behind it will show perfect anti-correlation in their triggers. There is nothing contrary to classical EM about it, even though it appears as if a particle went through on each trigger and on average half the light went each way. The difference is that you can't make the two halves interfere any more since they are not coherent. If you can detect which way "it" went, you lose coherence and you won't have interference effects.
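A rough Monte Carlo sketch of this toy mirror (illustrative only; the 50% hole coverage and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
# Each trial: the thin beam lands at a random spot; a hole transmits it to
# detector A, coated mirror reflects it to detector B.
hits_hole = rng.random(n_trials) < 0.5       # holes cover half the area
clicks_a = hits_hole
clicks_b = ~hits_hole

print("P(A) =", clicks_a.mean(), " P(B) =", clicks_b.mean())
print("coincidences:", int(np.sum(clicks_a & clicks_b)),
      "-> perfect anti-correlation from a purely classical model")
```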

Some early (1970s) "anti-correlation" experimental claims with optical photons have made this kind of false leap. Namely they used randomly or circularly polarized light and used polarizer to split it in 50:50 ratio, then claimed the perfect anti-correlation. That's the same kind of trivially classical effect as the mirror with large holes. They can't make the two beams interfere, though (and they didn't try that, of course).
 
  • #96
nightlight said:
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory can do the job). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering if this doesn't pose a problem for the explanations used to explain away photons in the visible range.

We were talking about coherent beam splitter which splits QM amplitude (or classical wave) into two equal coherent sub-packets.

No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered as being a technical difficulty, namely the "dark current noise", then why is this the limiting factor in visible light detectors, but doesn't it occur in gamma ray detectors, which are close to 100% efficient with negligible dark currents (take MWPC for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

cheers,
Patrick.
 
  • #97
vanesch said:
No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered as being a technical difficulty, namely the "dark current noise", then why is this the limiting factor in visible light detectors, but doesn't it occur in gamma ray detectors, which are close to 100% efficient with negligible dark currents (take MWPC for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

cheers,
Patrick.

I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

You are certainly justified in your puzzlement. Dark current, at least the ones we detect in photon detectors, has NOTHING to do with "zero-point" field. I deal with dark current all the time in accelerators. They are the result of field emission, and at gradients below 10 MV/m they are easily described by the Fowler-Nordheim theory of field emission. I do know that NO photodetector works with that kind of gradient, or anywhere even close.

The most sensitive gamma-ray detector is the Gammasphere, now firmly anchored here at Argonne (http://www.anl.gov/Media_Center/News/2004/040928gammasphere.html ). In its operating mode when cooled to LHe, it is essentially 100% efficient in detecting gamma photons.

Zz.
 
  • #98
ZapperZ said:
I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

No, please jump in! I try to keep an open mind in this discussion, and I have to say that nightlight isn't the usual crackpot one encounters in this kind of discussion and is very well informed. I only regret a bit the tone of certain statements, like what we are doing to those poor students and so on, but I've seen worse. What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photo-electric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical. Also, if it is true that semiclassical models can correctly predict higher-order radiative corrections in QED, I have to say that I'm impressed. Tree diagrams, however, are not so impressive.
I can very well accept that EPR-like optical experiments do not close all loopholes, and it can be fun to see how people still find reasonable-looking semiclassical theories that explain the results.
However, I'm having most of the difficulties in this discussion with 2 things. The first one is that photon detectors seem to be very flexible devices which acquire exactly those properties that are needed in each case to save the semiclassical explanation; while I can accept each refutation in each individual case, I'm trying to find a contradiction between how a detector behaves in one case and how it behaves in another.
The second difficulty I have is that I'm not aware of most of the literature that is referred to, and I can only spend a certain amount of time on it. Also, I'm not very familiar (even if I heard about the concepts) with these Wigner functions and so on. So any help from anybody here is welcome. Up to now I enjoyed the cat-and-mouse game :smile: :smile:

cheers,
Patrick.
 
  • #99
Maybe the principal problem is to know whether the total-spin-0 EPR state is possible in a classical vision.
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Seratend
 
  • #100
seratend said:
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Yes, that is exactly what he does, and it took me some time to realize this in the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, afterwards we agreed upon the fact that he disagreed on the existence of a product Hilbert space, and only considers classical matter fields (a kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. We didn't delve into the issue of the particle-like nature of matter. Apparently, one should also add some noise terms (zero point field or whatever) to the EM field in such a way that it corresponds to the QED half photon contribution in each mode. He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors" which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that indeed, one can find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.

cheers,
Patrick.
 
  • #101
vanesch said:
What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photo-electric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical.

I happened to work in angle-resolved photoemission (ARPES) for the 3 years that I spent as a postdoc. So here is what I know.

While it is true that the band structure of the material being looked at can dictate the nature of the result, this is more of a handshaking process between both the target material and the probe, which is the light. While the generic photoelectric effect can find a plausible argument that light is still a wave (and not photons), this requires strictly that the target material (i) be polycrystalline (meaning no specific or preferred orientation of the crystal structure) and (ii) have a continuous energy band, as in the metallic conduction band. If you have ALL that, then yes, the classical wave picture cannot be ruled out in explaining the photoelectric effect.

The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target material being studied, ranging from single-crystal surface states all the way to Mott insulators, have made classical description of light quite puzzling. For example, the multi-photon photoemission, which I worked on till about 3 months ago, would be CRAZY if we have no photons. The fact that we can adjust the work function to have either 1-photon, 2-photon, 3-photon, etc. photoemission AND to be able to angularly map this, is damn convincing as far as the experiment goes.
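For readers following along, a small back-of-the-envelope sketch of the "adjust the work function to select the photon order" point (the wavelength and work-function values below are illustrative assumptions, not the actual samples discussed):

```python
import math

H_EV_S = 4.135667e-15        # Planck constant, eV*s
C = 2.998e8                  # speed of light, m/s

def min_photon_order(work_function_ev: float, wavelength_nm: float):
    """Smallest n such that n*h*nu exceeds the work function."""
    photon_ev = H_EV_S * C / (wavelength_nm * 1e-9)
    return math.ceil(work_function_ev / photon_ev), photon_ev

# e.g. an 800 nm photon is ~1.55 eV, so a 4.5 eV work function forces a
# 3-photon process, while lowering it to 3.0 eV makes a 2-photon process enough.
for phi in (4.5, 3.0, 1.5):
    n, e = min_photon_order(phi, 800.0)
    print(f"work function {phi} eV, photon {e:.2f} eV -> {n}-photon photoemission")
```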

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least to explain the observation from such experiments IF we do not accept photons. I know, I've looked. It is of no surprise if I say that my "loyalty" is towards the experimental observation, not some dogma or someone's pet cause. So far, there have been no viable alternatives that are consistent with the experimental observations that I have made.

Zz.
 
  • #102
Vanesch
Yes, that is exactly what he does, and it took me some time to realize this in the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, afterwards we agreed upon the fact that he disagreed on the existence of a product Hilbert space, and only considers classical matter fields (a kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. …

… He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors" which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that indeed, one can find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.

Classical mechanics, likewise, could explain many things, even light as particles (Newton's assumption). It took a long time before the scientific community accepted that light was a wave. Even with the initial results of Young's slit experiment, they preferred to wait for another experiment, the one showing the "impossibility": the diffraction figure of an illuminated sphere, with a bright spot in the middle of the shadow. This diffraction figure was predicted, if I remember correctly, by Poisson in a demonstration meant to show that if light were not made of particles an incredible bright spot should appear :eek: .


ZapperZ
The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target material being studied, ranging from single-crystal surface states all the way to Mott insulators, have made classical description of light quite puzzling. For example, the multi-photon photoemission, which I worked on till about 3 months ago, would be CRAZY if we have no photons. The fact that we can adjust the work function to have either 1-photon, 2-photon, 3-photon, etc. photoemission AND to be able to angularly map this, is damn convincing as far as the experiment goes.

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least to explain the observation from such experiments IF we do not accept photons. I know, I've looked. It is of no surprise if I say that my "loyalty" is towards the experimental observation, not some dogma or someone's pet cause. So far, there have been no viable alternatives that are consistent with the experimental observations that I have made.

I really think that arguing about the minimum error in the photodetectors will not solve the problem: the existence of the EPR-like state and its associated results.

I think the current problem of “simple alternate classical” theories (with local interactions, local variables) lies in their failure to provide a reasonable account of EPR-like states (photons, electrons): a pair of particles with total spin zero.

This state is very special: a measurement along any axis always gives 50/50 spin up/down for each particle of the pair. For a single, separated particle in a definite spin state, by contrast, a 50/50 result is obtained only for particular orientations!
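A quick numerical check of these statements for the standard two-spin singlet (textbook expressions only; the measurement angles below are arbitrary examples):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def spin_op(theta):
    """Spin measurement along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# |singlet> = (|01> - |10>)/sqrt(2): total spin zero
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

for a, b in [(0.0, 0.0), (0.0, np.pi / 3), (np.pi / 4, np.pi / 2)]:
    A, B = spin_op(a), spin_op(b)
    single = (psi.conj() @ np.kron(A, I2) @ psi).real   # 0 on every axis -> 50/50 outcomes
    joint = (psi.conj() @ np.kron(A, B) @ psi).real     # equals -cos(a - b)
    print(f"a={a:.2f} b={b:.2f}  <A>={single:+.3f}  <A B>={joint:+.3f}  -cos(a-b)={-np.cos(a - b):+.3f}")
```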

Seratend.
 
  • #103
vanesch No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered as being a technical difficulty, namely the "dark current noise", then why is this the limiting factor in visible light detectors, but doesn't it occur in gamma ray detectors, which are close to 100% efficient with negligible dark currents (take MWPC for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of the non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters). It was in that specific context that the detection noise due to vacuum fluctuations precludes any effect that would help decide the EM non-classicality -- it is a problem for these specific non-classicality claims, not "a problem" for all conceivable non-classicality claims, much less some kind of general technological or scientific "problem".

The low energy photons interact differently with the optical components than gamma rays do (the ratio of their energy/momenta to their interaction with atoms is hugely different), so the phenomena you used for the earlier non-classicality claims have lost their key features (such as the sharp polarization splitting or the sharp interference on the beam splitter).

Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of an energy excess.

Now you have left the visible photons and moved to gamma rays. Present a phenomenon which makes them absolutely non-classical. You didn't even make a case for how this sharper detectability (better S/N) for these photons leads to anything that prohibits a non-quantized EM field from modelling the phenomenon. I explained why the original setups don't help you with these photons.

The second confusion you may have is that between the Old Quantum Theory (before Schrödinger/Heisenberg) particle-photon claims and the conventional jargon and visual mnemonics of QED or Quantum Optics. The OQT didn't have a correct dynamical theory of ionization or of photon photo-electron scattering cross sections. So they imagined needle radiation or point-photons. After Schrödinger and Heisenberg created the QM dynamics, these phenomena became computable using purely semi-classical models (with only matter particles being quantized). The OQT arguments became irrelevant, even though you'll still see them parroted in the textbooks. The point-like jargon for photons survived into QED (and it is indeed heuristically/mnemonically useful in many cases, as long as one doesn't make an ontology out of the personal mnemonic device and then claim paradoxes).

I asked you where the point-photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter. As demonstrated, that doesn't work (and has been known not to for 4-5 decades; it was first tried in the 1950s and resolved during the Hanbury Brown and Twiss effect debates). The photon count is Poissonian, which is the same as the detector model's response distribution. So the anti-correlation here isn't as sharp as is usually claimed.

Then (after a brief detour into PDC) you brought up gamma rays as not having the detection problem. True, they don't. But they lose the coherence and the visibility of the interference as they gain on the anti-correlation (due to sharper detection). That is a perfectly classical EM tradeoff. The less equal and less synchronized the packets after the split, the more exclusive they appear to the detectors, while their interference fringes become less sharp (lower visibility).

That is the same "complementarity" phenomenon that the QM amplitudes (which follow Maxwell's equations for free space & through linear optical elements) describe. And that is identical to the semiclassical EM description, since the same equations describe the propagation.
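A small numerical sketch of this visibility/which-way tradeoff (using the textbook predictability and visibility definitions for a pure state split into two paths; the split ratios and coherence values are arbitrary illustrations):

```python
import numpy as np

def tradeoff(p_left, gamma):
    """Fringe visibility and which-way predictability for path amplitudes
    sqrt(p_left), sqrt(1 - p_left) with mutual coherence gamma in [0, 1]."""
    a, b = np.sqrt(p_left), np.sqrt(1.0 - p_left)
    visibility = 2.0 * a * b * gamma        # fringe visibility
    predictability = abs(a**2 - b**2)       # which-way bias from the unequal split
    return visibility, predictability

for p_left, gamma in [(0.5, 1.0), (0.5, 0.3), (0.9, 1.0), (0.9, 0.3)]:
    V, P = tradeoff(p_left, gamma)
    print(f"split {p_left:.1f}/{1 - p_left:.1f}, coherence {gamma:.1f}: "
          f"V={V:.2f}  P={P:.2f}  V^2+P^2={V*V + P*P:.2f} (<= 1)")
```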

So what is the non-classicality claim about the gamma rays? The anti-correlation on the beam splitter is not relevant for that purpose for these photons.

You may wonder why it is always the case that something else blocks the non-classicality from manifesting. My guess is that it is so because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or the other, just as a perpetuum mobile has to. The reason the failure causes shift is simply that the new contraptions which fix the previous failure shift to some other trick, confusion, obfuscation... The explanations of the flaws have to shift if the flaws shift. That is where the "shiftiness" originated.


The origin of these differences over the need for the 2nd quantization (that of the EM field) is in the different views on what the 1st quantization was all about. To those who view it as the introduction of the Hilbert space and observable non-commutativity into the classical system, it is natural to try repeating the trick with the remaining classical systems.

To those who view it as a solution to the classical physics dichotomy between the fields and the particles, since it replaced the particles with matter fields (Barut has shown that the 3N configuration space vs 3D space issue is irrelevant here), it is senseless to try to "quantize" the EM field, since it is already "quantized" (i.e. it is a field).

In this perspective the Hilbert space formulation is a linear approximation which, due to the inherent non-linearity of the coupled matter and EM fields, is limited; thus the collapse is needed, which patches the approximation's inadequacy via a piecewise linear evolution. The non-commutativity of the observables is a general artifact of these kinds of piecewise linearizations, not a fundamental principle (Feynman noted and was particularly intrigued by this emergence of noncommutativity from the approximation in his checkerboard/lattice toy models of the EM and Dirac equations; see Garnet Ord's extended discussions on this topic, including Feynman's puzzlement; note that Ord is a mathematician, thus the physics in those papers is a bit thin).
 
  • #104
nightlight said:
ZapperZ You are certainly justified in your puzzlement. Dark current, at least the ones we detect in photon detectors, has NOTHING to do with "zero-point" field.

The "dark current" is somewhat fuzzy. Label it the irreducible part of the noise due to quantum vacuum fluctuations since that the part of the noise that was relevant in the detector discussion. The temperature can be lowered to 0K (the detector discussed was on 6K) and you will still have that noise.

No, what YOU think of "dark current" is the one that is fuzzy. The dark current that *I* detect isn't. I don't just engage in endless yapping about dark current. I actually measure it, make spectral analyses, and do other characterization of it. The field emission origin of dark current is well-established and well-tested.[1]

Zz.

1. R.H. Fowler and L. Nordheim, Proc. Roy. Soc. Lond., A119, 173 (1928).
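For the curious, a hedged sketch of the elementary Fowler-Nordheim form behind reference [1] (constants are the standard approximate values; the fields and work functions below are illustrative only, and real surfaces have large local field-enhancement factors at tips):

```python
import math

A_FN = 1.54e-6     # A*eV/V^2, approximate first Fowler-Nordheim constant
B_FN = 6.83e9      # (V/m) per eV^(3/2), approximate second constant

def fn_current_density(field_v_per_m: float, work_function_ev: float) -> float:
    """Elementary Fowler-Nordheim current density J ~ (A/phi)*E^2*exp(-B*phi^1.5/E)."""
    return (A_FN / work_function_ev) * field_v_per_m**2 * math.exp(
        -B_FN * work_function_ev**1.5 / field_v_per_m)

for phi in (4.5, 3.0):                 # illustrative work functions, eV
    for field in (1e9, 3e9, 5e9):      # illustrative local surface fields, V/m
        print(f"phi={phi} eV, E={field:.0e} V/m -> J ~ {fn_current_density(field, phi):.2e} A/m^2")
```

The steep exponential dependence on field and work function is the point being made above about field emission.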
 
  • #105
vanesch he disagreed on the existence of a product Hilbert space,

That's a bit of an over-simplification. Yes, in non-relativistic QM you take each electron into its own factor space. But then you anti-symmetrize this huge product space and shrink it back to almost where it was (the one-dimensional subspace of the symmetric group representation), the Fock space.

So the original electron product space was merely pedagogical scaffolding to construct the Fock space. After all the smoke and hoopla has cleared, you're back where you started before constructing the 3N-dimensional configuration space -- one 3D-space PDE set (the Dirac equation) now describing propagation of a classical Dirac field (or the modes of the quantized Dirac field represented by the Fock space), which represents all classical electrons with one 3D matter field (similar to Maxwell's EM field) instead of the QM amplitudes of a single Dirac electron. Plus you get around 3*(N-1) tonnes of obfuscation and vacuous verbiage on the meaning of identity (that will be laughed at a few generations from now). Note that Schrödinger believed from day one that these 1-particle QM amplitudes should have been interpreted as a single classical matter field of all electrons.

That's not my "theory" that's what it is for you or for anyone who cares to look it up.

The remaining original factors belong to different types of particles; they're simply different fields. Barut has shown that depending on which functions one picks for variation of the action, one gets an equivalent dynamics represented either as 3N-dimensional configuration-space fields or as regular 3-dimensional space fields.

Apparently, one should also add some noise terms (zero point field or whatever) to the EM field in such a way that it corresponds to the QED half photon contribution in each mode.

That's not something one can fudge very much, since the distribution uniquely follows from Lorentz invariance. It's not something you put in by hand and tweak as you go to fit this or that.
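(For reference, the standard SED expression being alluded to here -- an energy $\hbar\omega/2$ per mode -- corresponds to the spectral energy density
$$\rho_{\mathrm{ZPF}}(\omega) = \frac{\hbar\omega^{3}}{2\pi^{2}c^{3}},$$
and the $\omega^{3}$ shape is the only one invariant under Lorentz transformations, so only the overall constant remains to be fixed by the $\hbar\omega/2$-per-mode matching.)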

He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors" which react in such a way to continuous radiation... it is there that things seem to escape me.

You never explained how QED predicts anything more marble-like about photons. Photons don't have a position observable, they are not conserved, they don't have identity. They can be redefined by a change of basis. They propagate in space following Maxwell's equations. What is left of the marbles? That you can count them? You can count anything, e.g. the QED EM field modes (which is what they are). They make clickediclacks? I'll grant you this one, they sound just like marbles dropped into a box.

You must still be under the spell of the Old Quantum Theory (pre-1926) ideas of point photons (which is how this gets taught). They're not the same model as QED photons.
 
  • #106
nightlight said:
ZapperZ No, what YOU think of "dark current" is the one that is fuzzy. The dark current that *I* detect isn't.

I was talking about the usage of the term. Namely, you claimed that in your field of work the contribution of the QED vacuum fluctuation doesn't count as "dark current." Yet the authors of the detector preprint I cited (as well as the other Quantum Optics literature) count the vacuum fluctuation contributions under the "dark current" term. Therefore, that disagreement in usage alone demonstrates that the usage of the term is fuzzy. QED.

The usage of the term "dark current" as used in PHOTODETECTORS (that is, after all, what we are talking about, aren't we?) has NOTHING to do with "QED vacuum fluctuation". Spew all the theories and ideology that you want. The experimental observations as applied to photodetectors and photocathodes do NOT make such a connection.

This is getting way too funny, because as I'm typing this, I am seeing dark currents from a photocathode sloshing in an RF cavity. Not only that, I can control how much dark current is in there. There's so much of it, I can detect it on a phosphor screen! QED vacuum fluctuations? It is THIS easy?! C'mon now! Let's do a reality check here!

Zz.
 
  • #107
vanesch What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena.

If you recall that Stochastic Quantization starts by adding Gaussian noise to the classical field dynamics (in functional form and with the added imaginary time variable) to construct the quantized field (in path-integral form), you will realize that the Marshall-Santos model of Maxwell equations + ZPF (Stochastic Electrodynamics) can go at least as far as QED, provided you drop the external field and external current approximations, thus turning it into a ZPF-enhanced version of Barut's nonlinear self-field ED (which even without the ZPF reproduces the leading order of QED radiative corrections; see the Barut group's KEK preprints; they had published much of it in Phys Rev as well).

Ord's results provide a much nicer, more transparent, combinatorial interpretation of the role of the analytic continuation (of the time variable) in transforming the ordinary diffusion process into the QT wave equations -- they extract the working part of the analytic continuation step in pure form -- all of its trick, the go of it, is contained in the simple cyclic nature of the powers of i, which serves to separate the object's path sections between the collisions into 4 sets (this is the same type of role that powers of x perform in combinatorial generating-function techniques -- they collect and separate the terms for the same powers of x).

The Marshall-Santos-Barut SED model (in its full self-interacting form) is the physics behind Stochastic Quantization, the stuff that really makes it work (as a model of natural phenomena). The field quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it). It is merely fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PDEs. For a bit of background on this relation check some recent papers by the mathematician Krzysztof Kowalski, which show how sets of non-linear PDEs (the general non-linear evolution equations, such as those occurring in chemical kinetics or in population dynamics) can be linearized in the form of a regular Hilbert space linear evolution with realistic (e.g. bosonic) Hamiltonians. Kowalski's results extend the 1930s Carleman and Koopman PDE linearization techniques. See for example his preprints: solv-int/9801018, solv-int/9801020, chao-dyn/9801022, math-ph/0002044, hep-th/9212031. He also has a textbook http://www.worldscibooks.com/chaos/2345.html with much more on this technique.
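To make the linearization idea concrete, here is not Kowalski's construction itself but the classic Carleman-style toy it generalizes: the nonlinear ODE dx/dt = -x^2 becomes an infinite *linear* system for the moments y_n = x^n, since dy_n/dt = -n*y_{n+1}; truncating at order N gives a finite linear evolution that approaches the exact solution x(t) = x0/(1 + x0*t) as N grows.

```python
import numpy as np

def carleman_solve(x0, t, N, steps=20000):
    """Evolve the truncated linear moment system y_n = x^n, n = 1..N."""
    y = np.array([x0**n for n in range(1, N + 1)], dtype=float)
    A = np.zeros((N, N))
    for n in range(1, N):
        A[n - 1, n] = -n          # dy_n/dt = -n*y_{n+1}; the last row is truncated
    dt = t / steps
    for _ in range(steps):        # simple Euler integration of the *linear* system
        y = y + dt * (A @ y)
    return y[0]                    # y_1 approximates x(t)

x0, t = 1.0, 0.5
exact = x0 / (1.0 + x0 * t)
for N in (2, 4, 8, 16):
    print(f"N={N:2d}: x(t) ~ {carleman_solve(x0, t, N):.5f}   exact = {exact:.5f}")
```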
 
  • #108
nightlight said:
You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of the non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters).

No, the logic in my approach is the following.
You claim that we will never have raw EPR data with photons, or with anything for that matter, and you claim that to be something fundamental. While I can easily accept the fact that maybe we'll never have raw EPR data; after all, there might indeed be limits to all kinds of experiments, at least in the foreseeable future, I have difficulties with its fundamental character. After all, I think we both agree on the fact that if there are individual technical reasons for this inability to have EPR data, this is not a sufficient reason to conclude that the QM model is wrong. If it is something fundamental, it should be operative on a fundamental level, and not depend on technological issues we understand. After all (maybe that's where we differ in opinion), if you need a patchwork of different technical reasons for each different setup, I don't consider that as something fundamental, but more like the technical difficulties people once had to make an airplane go faster than the speed of sound.

You see, what basically bothers me in your approach, is that you seem to have one goal in mind: explaining semiclassically the EPR-like data. But in order to be a plausible explanation, it has to fit into ALL THE REST of physics, and so I try to play the devil's advocate by taking each individual explanation you need, and try to find counterexamples when it is moved outside of the EPR context (but should, as a valid physical principle, still be applicable).

To refute the "fair sampling" hypothesis used to "upconvert" raw data with low efficiencies into EPR data, you needed to show that visible photon detectors are apparently plagued by a FUNDAMENTAL problem of tradeoff between quantum efficiency and dark current. If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range. So if it is something that is fundamental, and not related to a specific technology, you should understand my wondering why this happens in the case that interests you, namely the eV photons, and not in the case that doesn't interest you, namely gamma rays. If after all, this phenomenon doesn't appear for gamma rays, you might not bother because there's another reason why we cannot do well EPR experiments with gamma rays, but to me, you should still explain why a fundamental property of EM radiation at eV suddenly disappears in the MeV range when you don't need it for your specific need of refuting EPR experiments.

Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of an energy excess.

Let's say that where we agree, is that current EPR data do not exclude classical explanations. However, they conform completely with quantum predictions, including the functioning of the detectors. It seems that it is rather your point of view which needs to do strange gymnastics to explain the EPR data, together with all the rest of physics.
It is probably correct to claim that no experiment can exclude ALL local realistic models. However, they all AGREE with the quantum predictions - quantum predictions that you do have difficulties with explaining in a fundamental way without giving trouble somewhere else, like the visible photon versus gamma photon detection.

I asked you where is the point-photon in QED. You first tried via the apparent coexistence of anti-correlation and the coherence on a beam splitter.

No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.
In the first case (which I thought was your point of view, but apparently not), you should find extra correlations beyond the Poisson prediction. But of course in the continuous beam case, you find the same anti-correlations as in the billiard ball photon picture.
However, in this case, I saw another problem: namely, if all light beams are continuous beams, then how can we obtain extra correlations when we have a 2-photon process (which, I suppose, you deny and just consider as 2 continuous beams)? This is still opaque to me.
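For concreteness, the "Poisson coincidence" baseline referred to here is the accidental rate that two independent click streams would produce; a crude discretized sketch (rates, window and duration below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
r1, r2 = 1e5, 1e5          # singles rates of the two detectors, counts/s
tau = 1e-7                 # coincidence window, s
T = 0.1                    # measurement time, s

# Crude discretization: one time bin per coincidence window.
nbins = int(T / tau)
clicks1 = rng.random(nbins) < r1 * tau
clicks2 = rng.random(nbins) < r2 * tau
measured = int(np.sum(clicks1 & clicks2))
print("accidental coincidences:", measured,
      " expected r1*r2*tau*T =", r1 * r2 * tau * T)
# Correlated ("2-photon") sources show coincidences well above this level.
```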

You may wonder why is it always so that there is something else that blocks the non-classicality from manifesting. My guess is that it is so because it doesn't exist and all attempted contraptions claiming to achieve it are bound to fail one way or the other, just as perpetuum mobile has to.

The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time the "another reason" should be fundamental (meaning, not depending on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, it should be due to something fundamental - and you say it is due to the ZPF.
However, then my legitimate question was: where is this problem in gamma photon detectors?

cheers,
Patrick.
 
  • #109
vanesch The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological"

Before energy conservation laws were declared fundamental laws, you could only refute perpetuum mobile claims on a case-by-case basis. You would have to analyze and measure the forces, frictions, etc. and show that the inventor's claims don't add up to a net excess of energy.

The conventional QM doesn't have any such general principle. But in the Stochastic ED, the falsity of the violation immediately follows from the locality of the theory itself (it is an LHV theory).

In nonrelativistic QM you not only lack a general locality principle, but you have a non-local state collapse as a postulate, which is the key to deducing the QM prediction that violates locality. Thus you can't have a fundamental refutation in QM -- it is an approximate theory which not only lacks but explicitly violates the locality principle (via the collapse and via the non-local potentials in Hamiltonians).

So, you are asking too much, at least with the current explicitly non-local QM.

QED (the Glauber Quantum Optics correlations) itself doesn't predict a sharp cos(theta) correlation. Namely, it predicts that for the detectors and the data normalized (via background subtractions and trigger-level tuning) to the above-vacuum-fluctuations baseline counting, one will have perfect correlation for these kinds of counts.

But these are not the same counts as the "ideal" Bell QM counts (which assume a sharp and conserved photon number, neither of which is true in a QED model of the setups), since both calibration operations, the subtractions and the above-vacuum-fluctuation detector threshold, remove data from the set of pair events; thus the violation "prediction" doesn't follow without much more work. Or, if you're in a hurry, you can just declare the "fair sampling" principle true (in the face of the fact that the semiclassical ED and QED don't satisfy such a "principle") and spare yourself all the trouble. After all, the Phys. Rev. referees will be on your side, so why bother.

On the other hand, the correlations computed without normal ordering, thus corresponding to the setup which doesn't remove vacuum fluctuations, yield Husimi joint distributions which are always positive; hence their correlations are perfectly classical.
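A quick numerical illustration of the contrast just drawn, using the standard textbook expressions for a one-photon Fock state (nothing here is specific to the experiments under discussion): its Wigner function dips negative, while its Husimi Q function is non-negative everywhere.

```python
import numpy as np

def wigner_fock1(x, p):
    """Wigner function of the n = 1 Fock state (hbar = 1 convention)."""
    r2 = x**2 + p**2
    return (2.0 * r2 - 1.0) * np.exp(-r2) / np.pi

def husimi_fock1(x, p):
    """Husimi Q function of the n = 1 Fock state, alpha = (x + i p)/sqrt(2)."""
    a2 = (x**2 + p**2) / 2.0
    return a2 * np.exp(-a2) / np.pi

xs = np.linspace(-3.0, 3.0, 121)
X, P = np.meshgrid(xs, xs)
print("min Wigner W(x,p):", wigner_fock1(X, P).min())   # negative (-1/pi at the origin)
print("min Husimi Q(x,p):", husimi_fock1(X, P).min())   # >= 0 everywhere
```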
 
  • #110
Let's say that where we agree, is that current EPR data do not exclude classical explanations.

We don't need to agree about any conjectures, only about the plain facts. Euphemisms aside, the plain fact is that the experiments refute all local "fair sampling" theories (even though there never were, and are not now, any such theories in the first place).
 
  • #111
No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.

There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

Meanwhile you still avoid answering where you got the point-photon idea from QED (other than through a mixup with the "photons" of the Old Quantum Theory of the pre-1920s; you're using the kind of arguments used in that era, e.g. "only one grain of silver blackened", modernized to "only one detector trigger", etc.). It doesn't follow from QED any more than point phonons follow from exactly the same QFT methods used in solid state theory.
 
  • #112
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along. The design of the detectors and the analyzers depends on the interaction constants. For example, for visible photons there is too little space left between 1/2 hv1 and 1 hv1 compared to atomic ionization energies. At the MeV level, there is plenty of room in terms of atomic ionization energies to insert between 1/2 hv2 and 1 hv2. The tunneling rates for these two kinds of gaps are hugely different since the interaction constants don't scale. Similarly, the polarization interaction of MeV photons is too negligible relative to their energy-momentum to be used for analyzer design... etc.
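Rough orders of magnitude behind this point (illustrative numbers only; the "a few eV" ionization scale is a generic atomic value, not a specific detector material):

```python
ionization_scale_ev = 5.0        # generic atomic ionization energy scale, eV
for label, photon_ev in [("optical photon (~2 eV)", 2.0), ("gamma photon (~2 MeV)", 2.0e6)]:
    half = photon_ev / 2.0       # the vacuum-fluctuation scale ~ h*nu/2
    print(f"{label}: h*nu/2 = {half:.3g} eV, "
          f"room relative to the ionization scale ~ {half / ionization_scale_ev:.3g}x")
```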


In current QM you don't have a general locality principle to refute the Bell locality-violation claims outright. So the only thing one can do is refute particular designs, case by case. The detection problem plagues the visible-photon experiments. Analyzer problems plague the MeV photons.

To refute the perpetuum mobile claims before the energy conservation principles, one had to find the friction, gravity, temperature, current, or whatever other design-specific mechanism ended up balancing the energy.

The situation with the EPR Bell tests is only different in the sense that the believers in non-locality are on top, as it were, so even though they never showed anything that violates locality, they insist that the opponents show why that whole class of experiments can't be improved in the future and made to work. That is quite a bit larger a burden on the opponents. If they just showed anything that explicitly appeared to violate locality, one could just look at their data and find the error there (if locality holds). But there is no such data. So one has to look at the underlying physics of the design and show how that particular design can't yield anything decisive.
 
  • #113
nightlight said:
There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's where I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

cheers,
Patrick.
 
  • #114
vanesch said:
The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time the "another reason" should be fundamental (meaning, not depending on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, it should be due to something fundamental - and you say it is due to the ZPF.
However, then my legitimate question was: where is this problem in gamma photon detectors?

cheers,
Patrick.

Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of media near to it! So forget about going from visible light detectors to gamma ray detectors. Even just for a visible light detector, you can already manipulate the dark current level. There is no QED theory on this!

I suggest we write something and publish it in Crank Dot Net. :)

Zz.
 
  • #115
ZapperZ Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of media near to it!

You change the rate of tunneling, since the ionization energies are different in those cases.
 
  • #116
nightlight said:
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays ; after all, the number of modes (and its "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along.

Yes, if we take into account atoms; but the very fact that we take atoms to make up our detection apparatus might 1) very well be at the origin of being unable to produce EPR raw data - I don't know (I even think so), but 2) this is nothing fundamental, is it? It is just because we earthlings are supposed to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't be dealing with atoms, should it?

To refute the perpetuum mobile claims before the energy conservation principles, one had to find friction or gravity or temperature or current of whatever other design specific mechanism ended up balancing the energy.

Well, let us take the following example. Imagine that perpetuum mobiles DO exist when working, say, with dilithium crystals. Imagine that I make such a perpetuum mobile, where on one side I put in 5000 V and 2000 A, and on the other side I have a huge power output on a power line, 500,000 V and 20,000 A.

Now I cannot make a voltmeter that directly measures 500,000 V, so I use a voltage divider with a 1 GOhm resistor and a 1 MOhm resistor, and I measure 500 V over my 1 MOhm resistor. Also, I cannot pump 20,000 A through my ammeter, so I use a shunt resistance of 1 milliOhm in parallel with 1 Ohm, and I measure 20 A in the 1 Ohm branch.
Using basic stuff, I calculate from my measurement that the 500 V on my voltmeter times my divider factor (1000) gives me 500,000 V, and that my 20 A times my shunt factor (1000) gives me 20,000 A.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW output while we put 5000 V x 2000 A = 10 MW input into the thing, and that hence we haven't shown any perpetuum mobile.
I personally would be quite convinced that these 500 V and 20 A, with the shunt and divider factors, are correct measurements.
Of course, it is always possible to claim that shunts don't work beyond 200 A, and dividers don't work beyond 8000 V, in those specific cases where we apply them to such kinds of perpetuum mobile. But I find that far-fetched, if you accept shunts and dividers in all other cases.
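Spelling out the arithmetic of this hypothetical measurement (component values exactly as quoted; the ~1000 factors are rounded from 1001):

```python
# Voltage divider: 1 GOhm over 1 MOhm, reading taken across the 1 MOhm resistor.
R_top, R_meas = 1e9, 1e6
V_meter = 500.0
V_line = V_meter * (R_top + R_meas) / R_meas          # ~500 kV

# Current divider: 1 mOhm shunt in parallel with a 1 Ohm metered branch.
R_shunt, R_branch = 1e-3, 1.0
I_meter = 20.0
I_line = I_meter * (R_shunt + R_branch) / R_shunt     # ~20 kA

print(f"inferred line values: {V_line / 1e3:.0f} kV, {I_line / 1e3:.1f} kA")
print(f"inferred output ~ {V_line * I_line / 1e6:.0f} MW vs 10 MW input; "
      f"raw meter readings alone: {V_meter * I_meter / 1e3:.0f} kW")
```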

cheers,
Patrick.
 
  • #117
nightlight said:
ZapperZ Here's a bit more ammunition for you, vanesch. If I substitute the photocathode in a detector from, oh, let's say, tungsten to diamond, and keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of media near to it!

You change the rate of tunneling, since the ionization energies are different in those cases.

Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Zz.
 
  • #118
vanesch No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's where I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with the anti-correlation. This is purely the effect of the vacuum-fluctuation subtraction (done for the correlation functions by using normal operator ordering), yielding negative-probability regions in the Wigner joint distribution functions. The detection and the setups corresponding to correlations computed this way adjust the data and tune the detectors so that in the absence of signal all counts are 0.

The semiclassical theory which models this data normalization for coincidence counting also predicts the sub-Poissonian distribution, in the same way. (We already went over this at some length earlier...)
 
  • #119
Nightlight


The Marshall-Santos-Barut SED model (in its full self-interacting form) is the physics behind Stochastic Quantization, the stuff that really makes it work (as a model of natural phenomena). The field quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it).
It is merely fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PDEs.
For a bit of background on this relation check some recent papers by the mathematician Krzysztof Kowalski, which show how sets of non-linear PDEs ... can be linearized in the form of a regular Hilbert space linear evolution with realistic (e.g. bosonic) Hamiltonians.



Finally, I understand (maybe not :) that you agree that PDEs, ODEs and their non-linearities may be rewritten in the Hilbert space formalism. See your example, Kowalski's paper (ODEs) solv-int/9801018.

So it is the words “linear approximation” that seem not to be correct, as the reformulations of ODEs and PDEs with non-linearities in Hilbert space are equivalent (exact).
In fact, I understand that what you may reject in the Hilbert space formulation is the following:

-- H is a Hermitian operator in i d/dt|psi> = H|psi> <=> the time evolution is unitary (what you may call the linear approximation?).

And if I have understood the paper, Kowalski has demonstrated a fact well known to Hilbert space users: with a simple gauge transformation (selection of other p,q-type variables), you can change a non-Hermitian operator into a Hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.
So I still conclude that even a unitary evolution under the Schrödinger equation may represent a non-linear ODE.
Therefore, I still do not understand what you call the “linear approximation”, as in all cases I get exact results (approximation means, for me, the existence of small errors).

-- All the possible solutions given by the Schrödinger-type equation i d/dt|psi> = H|psi> (and its derivatives in the second quantization formalism).
At least the solutions that are incompatible with local hidden variable theories.

So, if we take your proposed physical interpretation, stochastic electrodynamics, as giving the true results, what we just need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?


Seratend.

P.S. For example, with this comparison, we can see effectively if the projection postulate is really critical or only a way of selecting an initial state.

P.P.S. We may thus also check with rather good confidence whether the stochastic electrodynamics formulation does not itself contain a hidden “hidden variable”.

P.P.P.S. And then, if there really is a fundamental difference that we identify, we can update the Schrödinger equation : ) as well as the other physical equations.
 
  • #120
nightlight said:
We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with the anti-correlation. This is purely the effect of the vacuum-fluctuation subtraction (done for the correlation functions by using normal operator ordering), yielding negative-probability regions in the Wigner joint distribution functions.

Yes, I know, but I didn't understand it.

cheers,
Patrick.
 
