What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment
  • #101
vanesch said:
What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photoelectric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical.

I happened to work in angle-resolved photoemission (ARPES) for the 3 years that I spent as a postdoc. So here is what I know.

While it is true that the band structure of the material being looked at can dictate the nature of the result, this is more of a handshaking process between the target material and the probe, which is the light. While the generic photoelectric effect can find a plausible argument that light is still a wave (and not photons), this strictly requires that the target material (i) be polycrystalline (meaning no specific or preferred orientation of the crystal structure) and (ii) have a continuous energy band, as in the metallic conduction band. If you have ALL that, then yes, the classical wave picture cannot be ruled out in explaining the photoelectric effect.

The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target materials being studied, ranging from single-crystal surface states all the way to Mott insulators, have made a classical description of light quite puzzling. For example, multi-photon photoemission, which I worked on till about 3 months ago, would be CRAZY if we had no photons. The fact that we can adjust the work function to have either 1-photon, 2-photon, 3-photon, etc. photoemission AND be able to angularly map this is damn convincing as far as the experiment goes.

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least none that explains the observations from such experiments IF we do not accept photons. I know, I've looked. It is no surprise if I say that my "loyalty" is towards the experimental observations, not some dogma or someone's pet cause. So far, there have been no viable alternatives that are consistent with the experimental observations that I have made.

Zz.
 
  • #102
Vanesch
Yes, that is exactly what he does, and it took me some time to realize this at the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, we afterwards agreed on the fact that he disagreed on the existence of a product Hilbert space, and only considers classical matter fields (kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. …

… He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors" which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that indeed, one can find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.

Similarly, classical mechanics could explain many things, even treating light as particles (Newton's assumption). It took a long time before the scientific community accepted that light was a wave. Even with the initial results of Young's slit experiment, they preferred to wait for another experiment, the decisive experiment, showing the supposed impossibility: the diffraction pattern of an illuminated sphere, with a bright spot in the middle of the shadow. This diffraction pattern was predicted, if I remember correctly, by Poisson in a demonstration meant to show that if light were not made of particles, an incredible bright spot should appear :eek: .


ZapperZ
The problem is, people who still hang on to those ideas have NOT followed the recent developments and advancements in photoemission. ARPES, inverse photoemission (IPES), resonant photoemission (RPES), spin-polarized photoemission, etc., and the expansion of the target materials being studied, ranging from single-crystal surface states all the way to Mott insulators, have made a classical description of light quite puzzling. For example, multi-photon photoemission, which I worked on till about 3 months ago, would be CRAZY if we had no photons. The fact that we can adjust the work function to have either 1-photon, 2-photon, 3-photon, etc. photoemission AND be able to angularly map this is damn convincing as far as the experiment goes.

There is NO quantitative semiclassical theory for ARPES, RPES, etc. There is also NO quantitative semiclassical theory for multi-photon photoemission, or at least none that explains the observations from such experiments IF we do not accept photons. I know, I've looked. It is no surprise if I say that my "loyalty" is towards the experimental observations, not some dogma or someone's pet cause. So far, there have been no viable alternatives that are consistent with the experimental observations that I have made.

I really think that arguing about the minimum error in the photodetectors will not solve the problem: the existence of the EPR-like state and its associated results.

I think the current problem with "simple alternative classical" theories (with local interactions and local variables) lies in their failure to provide a reasonable account of EPR-like states (photons, electrons): a pair of particles with total spin zero.

This state is special: a measurement along any axis always gives a 50/50 split of spin up/down for each particle of the pair. When we perform the measurement on a single, separate particle, the results show that a 50/50 split is only achievable along one particular orientation!

Seratend.
 
  • #103
vanesch No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely the "dark current noise", then why is this the limiting factor in visible light detectors, yet does not occur in gamma ray detectors, which are close to 100% efficient with negligible dark currents (take MWPC for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue anymore at 2 MeV.

You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of the non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters). It was in that specific context that the detection noise due to vacuum fluctuations precludes any effect that could help decide the EM non-classicality -- it is a problem for these specific non-classicality claims, not "a problem" for all conceivable non-classicality claims, much less some kind of general technological or scientific "problem".

The low-energy photons interact differently with the optical components than the gamma rays do (the ratio of their energy/momenta to their interaction with atoms is hugely different), so the phenomena you used for the earlier non-classicality claims have lost their key features (such as the sharp polarization splitting or the sharp interference on the beam splitter).

Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of an energy excess.

Now you have left the visible photons and moved to gamma rays. Present a phenomenon which makes them absolutely non-classical. You didn't even make a case for how this sharper detectability (better S/N) for these photons yields anything that prohibits a non-quantized EM field from modelling the phenomenon. I explained why the original setups don't help you with these photons.

The second confusion you may have is that between the Old Quantum Theory (before Schroedinger/Heisenberg) particle-photon claims and the conventional jargon and visual mnemonics of QED or Quantum Optics. The OQT didn't have a correct dynamical theory of ionization or of photon/photo-electron scattering cross sections. So they imagined needle radiation or point photons. After Schroedinger and Heisenberg created QM dynamics, these phenomena became computable using purely semiclassical models (with only the matter particles being quantized). The OQT arguments became irrelevant, even though you'll still see them parroted in the textbooks. The point-like jargon for photons survived into QED (and it is indeed heuristically/mnemonically useful in many cases, as long as one doesn't make an ontology out of the personal mnemonic device and then claim paradoxes).

I asked you where the point-photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter. As demonstrated, that doesn't work (and has been known not to for 4-5 decades; it was first tried in the 1950s and resolved during the Hanbury Brown and Twiss effect debates). The photon count is Poissonian, which is the same as the detector-model response distribution. So the anti-correlation here isn't as sharp as is usually claimed.

Then (after a brief detour into PDC) you brought up gamma rays as not having the detection problem. True, they don't. But they lose the coherence and the visibility of the interference as they gain on the anti-correlation (due to sharper detection). That is a perfectly classical EM tradeoff. The less equal and less synchronized the packets after the split, the more exclusive they appear to the detectors, while their interference fringes become less sharp (lower visibility).

That is the same "complementarity" phenomenon that the QM amplitudes (which follow Maxwell's equations in free space and through linear optical elements) describe. And that is identical to the semiclassical EM description, since the same equations describe the propagation.

So what is the non-classicality claim about the gamma rays? The anti-correlation on the beam splitter is not relevant for that purpose for these photons.

You may wonder why it is always the case that something else blocks the non-classicality from manifesting. My guess is that it is so because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or the other, just as a perpetuum mobile has to. The reason the failure causes shift is simply that the new contraptions which fix the previous failure shift to some other trick, confusion, obfuscation... The explanations of the flaws have to shift if the flaws shift. That is where the "shiftiness" originated.


The origin of these differences over the need for the 2nd quantization (that of the EM field) lies in different views of what the 1st quantization was all about. To those who view it as the introduction of the Hilbert space and observable non-commutativity into the classical system, it is natural to try repeating the trick with the remaining classical systems.

To those who view it as a solution to the classical-physics dichotomy between the fields and the particles, since it replaced the particles with matter fields (Barut has shown that the 3N-dimensional configuration space vs 3D space issue is irrelevant here), it is senseless to try to "quantize" the EM field, since it is already "quantized" (i.e. it is a field).

In this perspective the Hilbert space formulation is a linear approximation, which, due to the inherent non-linearity of the coupled matter and EM fields, is limited; thus the collapse is needed, which patches the inadequacy of the approximation via a piecewise linear evolution. The non-commutativity of the observables is a general artifact of these kinds of piecewise linearizations, not a fundamental principle (Feynman noted and was particularly intrigued by this emergence of noncommutativity from the approximation in his checkerboard/lattice toy models of the EM and Dirac equations; see Garnet Ord's extended discussions on this topic, including Feynman's puzzlement; note that Ord is a mathematician, so the physics in those papers is a bit thin).
 
Last edited:
  • #104
nightlight said:
ZapperZ You are certainly justified in your puzzlement. Dark current, at least the kind we detect in photon detectors, has NOTHING to do with the "zero-point" field.

The term "dark current" is somewhat fuzzy. Label it the irreducible part of the noise due to quantum vacuum fluctuations, since that is the part of the noise that was relevant in the detector discussion. The temperature can be lowered to 0 K (the detector discussed was at 6 K) and you will still have that noise.

No, what YOU think of "dark current" is the one that is fuzzy. The dark current that *I* detect isn't. I don't just engage in endless yapping about dark current. I actually measure it, do spectral analysis, and perform other characterization of it. The field-emission origin of dark current is well established and well tested.[1]

Zz.

1. R.H. Fowler and L. Nordheim, Proc. Roy. Soc. Lond., A119, 173 (1928).
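As a rough illustration of why the field-emission (Fowler-Nordheim) picture makes the dark current so sensitive to the cathode material, here is a minimal sketch using the elementary Fowler-Nordheim expression J ≈ (A·F²/φ)·exp(−B·φ^(3/2)/F). The field value and the work functions below are illustrative assumptions, not measurements from any particular gun or detector.

```python
import math

# Elementary Fowler-Nordheim constants (no image-charge correction)
A = 1.54e-6   # A * eV / V^2
B = 6.83e9    # eV^(-3/2) * V / m

def fn_current_density(field_V_per_m, work_function_eV):
    """Elementary Fowler-Nordheim current density in A/m^2."""
    F, phi = field_V_per_m, work_function_eV
    return (A * F ** 2 / phi) * math.exp(-B * phi ** 1.5 / F)

F = 5e9  # illustrative effective surface field (V/m), including local field enhancement
for label, phi in [("work function 4.5 eV", 4.5), ("work function 3.5 eV", 3.5)]:
    print(label, "->", fn_current_density(F, phi), "A/m^2")
```

The exponential dependence on the work function to the 3/2 power is why swapping the photocathode material changes the field-emission dark current by orders of magnitude while the applied field and the surrounding vacuum stay exactly the same.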
 
  • #105
vanesch he disagreed on the existence of a product Hilbert space,

That's a bit of an over-simplification. Yes, in non-relativistic QM you put each electron into its own factor space. But then you antisymmetrize this huge product space and shrink it back to almost where it was (the 1-dimensional subspace of the symmetric-group representation), the Fock space.

So the original electron product space was merely pedagogical scaffolding to construct the Fock space. After all the smoke and hoopla has cleared, you're back where you started before constructing the 3N-dimensional configuration space -- one set of PDEs in 3-D space (the Dirac equation), now describing the propagation of a classical Dirac field (or the modes of the quantized Dirac field represented by the Fock space), which represents all the electrons with one 3D matter field (similar to Maxwell's EM field) instead of the QM amplitudes of a single Dirac electron. Plus you get around 3*(N-1) tonnes of obfuscation and vacuous verbiage on the meaning of identity (which will be laughed at a few generations from now). Note that Schroedinger believed from day one that these 1-particle QM amplitudes should have been interpreted as a single classical matter field of all the electrons.

That's not my "theory"; that's what it is, for you or for anyone who cares to look it up.

The remaining original factors belong to different types of particles; they're simply different fields. Barut has shown that, depending on which functions one picks for the variation of the action, one gets an equivalent dynamics represented either as 3N-dimensional configuration-space fields or as regular 3-dimensional space fields.

Apparently, one should also add some noise terms (zero-point field or whatever) to the EM field in such a way that it corresponds to the QED half-photon contribution in each mode.

That's not something one can fudge very much, since the distribution follows uniquely from Lorentz invariance. It's not something you put in by hand and tweak as you go to fit this or that.

He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors" which react in such a way to continuous radiation... it is there that things seem to escape me.

You never explained how QED predicts anything more marble-like about photons. Photons don't have a position observable, they are not conserved, they don't have identity. They can be redefined by a change of basis. They propagate in space following Maxwell's equations. What is left of the marbles? That you can count them? You can count anything, e.g. the QED EM field modes (which is what they are). They make clickediclacks? I'll grant you this one; they sound just like marbles dropped into a box.

You must still be under the spell of the Old Quantum Theory (pre-1926) ideas of point photons (which is how this gets taught). They are not the same model as QED photons.
 
Last edited:
  • #106
nightlight said:
ZapperZ No, what YOU think of "dark current" is the one that is fuzzy. The dark current that *I* detect isn't.

I was talking about the usage of the term. Namely, you claimed that in your field of work the contribution of QED vacuum fluctuations doesn't count as "dark current." Yet the authors of the detector preprint I cited (as well as the rest of the Quantum Optics literature) count the vacuum-fluctuation contributions under the "dark current" term. Therefore, that disagreement in usage alone demonstrates that the term's usage is fuzzy. QED.

The term "dark current" as used for PHOTODETECTORS (that is, after all, what we are talking about, aren't we?) has NOTHING to do with "QED vacuum fluctuation". Spew all the theories and ideology that you want. The experimental observations as applied to photodetectors and photocathodes do NOT make such a connection.

This is getting way too funny, because as I'm typing this, I am seeing dark currents from a photocathode sloshing in an RF cavity. Not only that, I can control how much dark current is in there. There's so much of it, I can detect it on a phosphor screen! QED vacuum fluctuations? It is THIS easy?! C'mon now! Let's do a reality check here!

Zz.
 
  • #107
vanesch What is potentially interesting in this discussion is how far one can push semiclassical explanations for optical phenomena.

If you recall that Stochastic Quantization starts by adding Gaussian noise to the classical field dynamics (in functional form and with an added imaginary time variable) to construct the quantized field (in path-integral form), you will realize that the Marshall-Santos model of Maxwell equations + ZPF (Stochastic Electrodynamics) can go at least as far as QED, provided you drop the external-field and external-current approximations, thus turning it into a ZPF-enhanced version of Barut's nonlinear self-field ED (which even without the ZPF reproduces the leading order of QED radiative corrections; see the Barut group's KEK preprints; they published much of it in Phys. Rev. as well).

Ord's results provide a much nicer, more transparent, combinatorial interpretation of the role of the analytic continuation (of the time variable) in transforming the ordinary diffusion process into the QT wave equations -- they extract the working part of the analytic-continuation step in pure form -- all of its trick, the "go" of it, is contained in the simple cyclic nature of the powers of "i", which serves to separate an object's path sections between collisions into 4 sets (this is the same type of role that powers of x play in combinatorial generating-function techniques - they collect and separate the terms for the same power of x).

The Marshall-Santos-Barut SED model (in the full self-interacting form) is the physics behind Stochastic Quantization, the stuff that really makes it work (as a model of natural phenomena). The field-quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it). It is merely fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PD equations. For a bit of background on this relation, check some recent papers by the mathematician Krzysztof Kowalski, which show how sets of non-linear PDEs (general non-linear evolution equations, such as those occurring in chemical kinetics or in population dynamics) can be linearized in the form of a regular Hilbert-space linear evolution with realistic (e.g. bosonic) Hamiltonians. Kowalski's results extend the 1930s Carleman and Koopman PDE linearization techniques. See for example his preprints: solv-int/9801018, solv-int/9801020, chao-dyn/9801022, math-ph/0002044, hep-th/9212031. He also has a textbook http://www.worldscibooks.com/chaos/2345.html with much more on this technique.
 
Last edited by a moderator:
  • #108
nightlight said:
You may have lost the context of the arguments that brought us to the detection question -- the alleged insufficiency of the non-quantized EM models to account for various phenomena (Bell tests, PDC, coherent beam splitters).

No, the logic in my approach is the following.
You claim that we will never have raw EPR data with photons, or with anything for that matter, and you claim that to be something fundamental. While I can easily accept the fact that maybe we'll never have raw EPR data (after all, there might indeed be limits to all kinds of experiments, at least in the foreseeable future), I have difficulties with its fundamental character. After all, I think we both agree on the fact that if there are individual technical reasons for this inability to have EPR data, this is not a sufficient reason to conclude that the QM model is wrong. If it is something fundamental, it should be operative on a fundamental level, and not depend on technological issues we understand. After all (maybe that's where we differ in opinion), if you need a patchwork of different technical reasons for each different setup, I don't consider that as something fundamental, but more like the technical difficulties people once had making an airplane go faster than the speed of sound.

You see, what basically bothers me in your approach is that you seem to have one goal in mind: explaining the EPR-like data semiclassically. But in order to be a plausible explanation, it has to fit in with ALL THE REST of physics, and so I try to play the devil's advocate by taking each individual explanation you need and trying to find counterexamples when it is moved outside of the EPR context (where, as a valid physical principle, it should still be applicable).

To refute the "fair sampling" hypothesis used to "upconvert" raw data with low efficiencies into EPR data, you needed to show that visible-light photon detectors are apparently plagued by a FUNDAMENTAL tradeoff between quantum efficiency and dark current. If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays; after all, the number of modes (and their "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range. So if it is something fundamental, and not related to a specific technology, you should understand my wondering why this happens in the case that interests you, namely the eV photons, and not in the case that doesn't interest you, namely gamma rays. If, after all, this phenomenon doesn't appear for gamma rays, you might not bother, because there's another reason why we cannot do EPR experiments well with gamma rays; but to me, you should still explain why a fundamental property of EM radiation at eV suddenly disappears in the MeV range, where you don't need it for your specific purpose of refuting EPR experiments.

Like with perpetuum mobile claims (which these absolute non-classicality claims increasingly resemble, after three, or even five, decades of excuses and ever more creative euphemisms for the failure), each device may have its own loophole or its own way of creating the illusion of an energy excess.

Let's say that where we agree is that current EPR data do not exclude classical explanations. However, they conform completely with the quantum predictions, including the functioning of the detectors. It seems that it is rather your point of view which needs to do strange gymnastics to explain the EPR data together with all the rest of physics.
It is probably correct to claim that no experiment can exclude ALL local realistic models. However, they all AGREE with the quantum predictions - quantum predictions that you have difficulties explaining in a fundamental way without causing trouble somewhere else, like the visible-photon versus gamma-photon detection.

I asked you where the point-photon is in QED. You first tried via the apparent coexistence of anti-correlation and coherence on a beam splitter.

No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.
In the first case (which I thought was your point of view, but apparently not), you should find extra correlations beyond the Poisson prediction. But of course in the continuous beam case, you find the same anti-correlations as in the billiard ball photon picture.
However, in this case, I saw another problem: namely, if all light beams are continuous beams, then how can we obtain extra correlations when we have a 2-photon process (which, I suppose, you deny, and just consider as 2 continuous beams)? This is still opaque to me.

You may wonder why it is always the case that something else blocks the non-classicality from manifesting. My guess is that it is so because it doesn't exist, and all attempted contraptions claiming to achieve it are bound to fail one way or the other, just as a perpetuum mobile has to.

The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time, that "another reason" should be fundamental (meaning not dependent on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, it should be due to something fundamental - and you say it is due to the ZPF.
However, then my legitimate question was: where is this problem in gamma photon detectors?

cheers,
Patrick.
 
  • #109
vanesch The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological"

Before energy conservation laws were declared fundamental laws, you could only refute perpetuum mobile claims on a case-by-case basis. You would have to analyze and measure the forces, frictions, etc., and show that the inventor's claims don't add up to a net excess of energy.

Conventional QM doesn't have any such general principle. But in Stochastic ED, the falsity of the violation claim follows immediately from the locality of the theory itself (it is an LHV theory).

In nonrelativistic QM you not only lack a general locality principle, but you have a non-local state collapse as a postulate, which is the key to deducing the QM prediction that violates locality. Thus you can't have a fundamental refutation in QM -- it is an approximate theory which not only lacks but explicitly violates the locality principle (via the collapse and via the non-local potentials in Hamiltonians).

So, you are asking too much, at least with the current explicitly non-local QM.

QED itself (the Glauber Quantum Optics correlations) doesn't predict a sharp cos(theta) correlation. Namely, it predicts that for detectors and data normalized (via background subtractions and trigger-level tuning) to an above-vacuum-fluctuations baseline of counting, one will have perfect correlation for these kinds of counts.

But these are not the same counts as the "ideal" Bell QM counts (which assume a sharp and conserved photon number, neither of which is true in a QED model of the setups), since both calibration operations, the subtractions and the above-vacuum-fluctuation detector threshold, remove data from the set of pair events; thus the violation "prediction" doesn't follow without much more work. Or, if you're in a hurry, you can just declare the "fair sampling" principle to be true (in the face of the fact that neither semiclassical ED nor QED satisfies such a "principle") and spare yourself all the trouble. After all, the Phys. Rev. referees will be on your side, so why bother.

On the other hand, the correlations computed without normal ordering, thus corresponding to a setup which doesn't remove the vacuum fluctuations, yield Husimi joint distributions, which are always positive; hence their correlations are perfectly classical.
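To illustrate the distinction being invoked here, the sketch below evaluates the textbook phase-space distributions of a one-photon Fock state: the Wigner function dips negative near the origin, while the Husimi Q function is non-negative everywhere. The closed forms used (with hbar = 1) are the standard ones for Fock states; they are background for the argument, not a model of any specific setup in this thread.

```python
import numpy as np

# Closed-form phase-space distributions for the n = 1 Fock state (hbar = 1):
#   Wigner:  W_1(x, p) = (1/pi) * exp(-(x^2 + p^2)) * (2*(x^2 + p^2) - 1)
#   Husimi:  Q_1(alpha) = |alpha|^2 * exp(-|alpha|^2) / pi,  with alpha = (x + i p)/sqrt(2)

def wigner_n1(x, p):
    r2 = x ** 2 + p ** 2
    return np.exp(-r2) * (2.0 * r2 - 1.0) / np.pi

def husimi_n1(x, p):
    a2 = (x ** 2 + p ** 2) / 2.0   # |alpha|^2
    return a2 * np.exp(-a2) / np.pi

xs = np.linspace(-3.0, 3.0, 201)
X, P = np.meshgrid(xs, xs)
print("min of Wigner:", wigner_n1(X, P).min())   # negative (about -1/pi at the origin)
print("min of Husimi:", husimi_n1(X, P).min())   # never below zero
```

Whether the vacuum-subtracted (normally ordered) correlations or the unsubtracted (Husimi-type) ones are the right model of what the detectors actually record is exactly the point being argued in this exchange.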
 
  • #110
Let's say that where we agree, is that current EPR data do not exclude classical explanations.

We don't need to agree about any conjectures, only about the plain facts. Euphemisms aside, the plain fact is that the experiments refute all local "fair sampling" theories (even though there never were, and are not now, any such theories in the first place).
 
  • #111
No, I asked you whether the "classical photon" beam is to be considered as short wavetrains, or as a continuous beam.

There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

Meanwhile, you still avoid answering where you got the point-photon idea from QED (other than through a mixup with the "photons" of the Old Quantum Theory of the pre-1920s, since you're using the kind of arguments used in that era, e.g. "only one grain of silver blackened" modernized to "only one detector trigger", etc.). It doesn't follow from QED any more than point phonons follow from the exact same QFT methods used in solid-state theory.
 
  • #112
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays; after all, the number of modes (and their "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along with it. The design of the detectors and the analyzers depends on the interaction constants. For example, for visible photons there is too little room left between (1/2)hv1 and hv1 compared to atomic ionization energies. At the MeV level, there is plenty of room, in terms of atomic ionization energies, to insert between (1/2)hv2 and hv2. The tunneling rates for these two kinds of gaps are hugely different, since the interaction constants don't scale. Similarly, the polarization interaction for MeV photons is negligible compared to their energy-momentum, which matters for analyzer design... etc.


In the current QM you don't have a general locality principle with which to refute the Bell locality-violation claims outright. So the only thing one can do is refute particular designs, case by case. The detection problem plagues the visible-photon experiments. Analyzer problems plague the MeV photons.

To refute perpetuum mobile claims before the energy conservation principle, one had to find whatever friction, gravity, temperature, current, or other design-specific mechanism ended up balancing the energy.

The situation with the EPR Bell tests is only different in the sense that the believers in non-locality are on top, as it were, so even though they have never shown anything that actually violates locality, they insist that the opponents show why that whole class of experiments can't be improved in the future and made to work. That is a considerably larger burden on the opponents. If they just showed any data that explicitly appeared to violate locality, one could simply look at their data and find the error there (if locality holds). But there is no such data. So one has to look at the underlying physics of the design and show how that particular design can't yield anything decisive.
 
  • #113
nightlight said:
There is nothing different about "classical photon" or QED photon regarding the amplitude modulations. They both propagate via the same equations (Maxwell) throughout the EPR setup. Whatever amplitude modulation the QED photon has there, the same modulation is assumed for the "classical" one. This is a complete non-starter for anything.

No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's where I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

cheers,
Patrick.
 
  • #114
vanesch said:
The reason I have difficulties with this explanation is that the shifting reasons should be fundamental, and not "technological". I can understand your viewpoint and the comparison to perpetuum mobile, in that for each different construction, it is yet ANOTHER reason. That by itself is no problem. However, each time, that "another reason" should be fundamental (meaning not dependent on a specific technology, but a property common to all technologies that try to achieve the same functionality). If photon detectors in the visible range are limited by a QE/dark-current tradeoff, it should be due to something fundamental - and you say it is due to the ZPF.
However, then my legitimate question was: where is this problem in gamma photon detectors?

cheers,
Patrick.

Here's a bit more ammunition for you, vanesch. If I swap the photocathode in a detector from, oh, let's say, tungsten to diamond, keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of medium near it! So forget about going from visible-light detectors to gamma-ray detectors. Even just for a visible-light detector, you can already manipulate the dark current level. There is no QED theory of this!

I suggest we write something and publish it in Crank Dot Net. :)

Zz.
 
  • #115
ZapperZ Here's a bit more ammunition for you, vanesch. If I swap the photocathode in a detector from, oh, let's say, tungsten to diamond, keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of medium near it!

You change the rate of tunneling, since the ionization energies are different in those cases.
 
  • #116
nightlight said:
vanesch If this is something FUNDAMENTAL, I don't see why this tradeoff should occur for visible light and not for gamma rays; after all, the number of modes (and their "half photon energy") in the zero-point stuff scales up from the eV range to the MeV range.

But the fundamental interaction constants don't scale along with it.

Yes, if we take into account atoms; but the very fact that we use atoms to make up our detection apparatus might 1) very well be at the origin of our inability to produce raw EPR data - I don't know (I even think so) - but 2) this is nothing fundamental, is it? It is just because we earthlings are supposed to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't have to deal with atoms, should it?

To refute perpetuum mobile claims before the energy conservation principle, one had to find whatever friction, gravity, temperature, current, or other design-specific mechanism ended up balancing the energy.

Well, let us take the following example. Imagine that perpetuum mobiles DO exist when working, say, with dilithium crystals. Imagine that I make such a perpetuum mobile, where on one side I put in 5000 V and 2000 A, and on the other side I have a huge power output on a power line, 500,000 V and 20,000 A.

Now, I cannot make a voltmeter that directly measures 500,000 V, so I use a voltage divider with a 1 GOhm resistor and a 1 MOhm resistor, and I measure 500 V over my 1 MOhm resistor. Also, I cannot pump 20,000 A through my ammeter, so I use a shunt resistance of 1 milliOhm in parallel with 1 Ohm, and I measure 20 Amps in the shunt.
Using basic circuit theory, I calculate from my measurements that the 500 V on my voltmeter times my divider factor (1000) gives me 500,000 V, and that my 20 Amps times my shunt factor (1000) gives me 20,000 A.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW of output while we put 5000 V x 2000 A = 10 MW of input into the thing, and that hence we haven't shown any perpetuum mobile.
I personally would be quite convinced that these 500 V and 20 A, with the shunt and divider factors, are correct measurements.
Of course, it is always possible to claim that shunts don't work beyond 200 A, and dividers don't work beyond 8000 V, in those specific cases when we apply them to such kinds of perpetuum mobile. But I find that far-fetched, if you accept shunts and dividers in all other cases.
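A quick sanity check of the arithmetic in the thought experiment above; the voltages, currents, and nominal x1000 scale factors are the ones quoted in the post, and the code simply multiplies them out.

```python
# Divider/shunt bookkeeping for the hypothetical "dilithium" perpetuum mobile above.
v_meter, divider_factor = 500.0, 1000.0   # 1 GOhm : 1 MOhm divider -> roughly x1000
i_meter, shunt_factor = 20.0, 1000.0      # 1 milliOhm shunt vs 1 Ohm -> roughly x1000

raw_output = v_meter * i_meter                                   # what the meters literally read
inferred_output = (v_meter * divider_factor) * (i_meter * shunt_factor)
input_power = 5000.0 * 2000.0

print(f"raw meter product : {raw_output / 1e3:.0f} kW")          # 10 kW
print(f"inferred output   : {inferred_output / 1e9:.0f} GW")     # 10 GW
print(f"power put in      : {input_power / 1e6:.0f} MW")         # 10 MW
```

Taken at face value, the raw readings look like a 10 kW output against a 10 MW input, while the scaled readings claim a 10 GW output; whether the x1000 calibration factors may be trusted in this exotic regime is exactly the point of the analogy.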

cheers,
Patrick.
 
  • #117
nightlight said:
ZapperZ Here's a bit more ammunition for you, vanesch. If I swap the photocathode in a detector from, oh, let's say, tungsten to diamond, keeping everything else the same, the dark current INCREASES! Imagine that! I can change the rate of vacuum fluctuation simply by changing the type of medium near it!

You change the rate of tunneling, since the ionization energies are different in those cases.

Bingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Zz.
 
  • #118
vanesch No, this is only true for single-photon situations. There is no classical way to describe "multi-photon" situations, and that's where I was aiming at. If you consider these "multi-photon" situations as just the classical superposition of modes, then there is no way to get synchronized hits beyond the Poisson coincidence. Nevertheless, that HAS been found experimentally. Or I misunderstand your picture.

We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with the anti-correlation. It is purely an effect of the vacuum-fluctuation subtraction (applied to the correlation functions by using normal operator ordering), which yields negative-probability regions in the Wigner joint distribution functions. The detection schemes and setups corresponding to correlations computed this way adjust the data and tune the detectors so that in the absence of signal all counts are 0.

The semiclassical theory which models this data normalization for coincidence counting also predicts the sub-Poissonian distribution, in the same way. (We already went over this at some length earlier...)
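For reference, the classical bound that frames this whole exchange: with square-law (intensity-proportional) click probabilities and no subtraction of any kind, a fluctuating classical intensity split 50/50 gives a normalized coincidence rate g2 = <I^2>/<I>^2 >= 1, i.e. coincidences at or above the accidental level, never the sub-Poissonian anti-correlation. Whether the reported sub-Poissonian values survive once the subtraction and threshold conventions are modelled is what is being disputed above. A minimal Monte Carlo of the unsubtracted classical case (the intensity statistics are generic choices, not a model of any particular source):

```python
import numpy as np

rng = np.random.default_rng(1)
n_windows = 1_000_000
eta = 0.05          # click probability per unit intensity (kept modest so the counts converge)

def g2_estimate(intensity):
    """Split a classical intensity 50/50; each detector clicks with probability ~ its intensity."""
    p = eta * intensity / 2.0
    clicks1 = rng.random(n_windows) < p
    clicks2 = rng.random(n_windows) < p
    return (clicks1 & clicks2).mean() / (clicks1.mean() * clicks2.mean())

constant = np.full(n_windows, 1.0)               # steady classical beam
fluctuating = rng.exponential(1.0, n_windows)    # strongly fluctuating ("thermal-like") intensity

print("g2, constant intensity   :", g2_estimate(constant))     # ~1 (accidental level)
print("g2, fluctuating intensity:", g2_estimate(fluctuating))  # ~2 (above accidentals, never below 1)
```

In this picture, going below the accidental level requires either genuinely quantized detection statistics or the data-normalization conventions discussed in the posts above.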
 
  • #119
Nightlight


The Marshall-Santos-Barut SED model (in the full self-interacting form) is the physics behind Stochastic Quantization, the stuff that really makes it work (as a model of natural phenomena). The field-quantization step doesn't add new physics to "the non-linear DNA" (as Jaynes put it).
It is merely fancy jargon for a linearization scheme (a linear approximation) of the starting non-linear PD equations.
For a bit of background on this relation, check some recent papers by the mathematician Krzysztof Kowalski, which show how sets of non-linear PDEs ... can be linearized in the form of a regular Hilbert-space linear evolution with realistic (e.g. bosonic) Hamiltonians.



Finally, I understand (maybe not :)) that you agree that PDEs, ODEs, and their non-linearities may be rewritten in the Hilbert space formalism. See your example, Kowalski's paper (ODE) solv-int/9801018.

So it is the words "linear approximation" that seem not to be correct, as the reformulations of ODEs/PDEs with non-linearities in Hilbert space are equivalent.
In fact, I understand that what you may reject in the Hilbert space formulation is the following:

-- H is a hermitian operator in i d/dt|psi> = H|psi> <=> the time evolution is unitary (is this what you call the linear approximation?).

And if I have understood the paper, Kowalski has demonstrated - a fact well known to Hilbert space users - that with a simple gauge transformation (a selection of other p,q-type variables), you can change a non-hermitian operator into a hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.
So I still conclude that even a unitary evolution of the Schroedinger equation may represent a non-linear ODE.
Therefore, I still do not understand what you call the "linear approximation", as in all cases I get exact results (approximation means, for me, the existence of small errors).

-- All the possible solutions given by the Schrödinger-type equation i d/dt|psi> = H|psi> (and its derivatives in the second-quantization formalism).
At least the solutions that are incompatible with local hidden variable theories.

So, if we take your proposed physical interpretation, stochastic electrodynamics, as giving the true results, what we just need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?


Seratend.

P.S. For example, with this comparison, we can effectively see whether the projection postulate is really critical or only a way of selecting an initial state.

P.P.S. We may thus also check with rather good confidence whether the stochastic electrodynamics formulation does not itself contain a hidden "hidden variable".

P.P.P.S. And then, if we identify a really fundamental difference, we can update the Schroedinger equation :) as well as the other physical equations.
 
  • #120
nightlight said:
We already went over "sub-Poissonian" distributions a few messages back -- it is the same problem as with the anti-correlation. It is purely an effect of the vacuum-fluctuation subtraction (applied to the correlation functions by using normal operator ordering), which yields negative-probability regions in the Wigner joint distribution functions.

Yes, I know, but I didn't understand it.

cheers,
Patrick.
 
  • #121
vanesch Yes, if we take into account atoms; but the very fact that we use atoms to make up our detection apparatus might 1) very well be at the origin of our inability to produce raw EPR data - I don't know (I even think so) - but 2) this is nothing fundamental, is it? It is just because we earthlings are supposed to work with atomic matter. The description of the fundamental behaviour of EM fields shouldn't have to deal with atoms, should it?

It is merely related to the specific choice of setup which claims a (potential) violation. Obviously, why the constants are what they are is fundamental.

But why the constants (in combination with the laws) happen to block this or that claim in just that particular way is a function of the setup - of what physics was overlooked in picking that design. That isn't fundamental.

Now you come along and say that these shunt corrections are fudging the data, that the raw data give us 500 V x 20 A = 10 kW of output while we put 5000 V x 2000 A = 10 MW of input into the thing, and that hence we haven't shown any perpetuum mobile.

Not quite an analogous situation. The "fair sampling" assumption is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split), as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities. See also Khrennikov's paper, cited earlier, on a proposed test of "fair sampling."

So the situation is as if you picked some imagined behavior that is contrary to the existing laws and claimed that you will simply assume in your setup that the property holds.

Additionally, there is no universal "fair sampling" law that you can just invoke without examining whether it applies to your setup. And in the EPR-Bell setups it should at least be tested, since the claim is so fundamental.
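As a toy illustration of what a violation of "fair sampling" looks like in a square-law detection model (a generic sketch of my own, not the specific sin/cos model referred to above): each pair carries a hidden polarization angle lambda, the intensity reaching a detector behind a polarizer at angle a follows Malus's law cos^2(a - lambda), and the detector fires only when that intensity exceeds a threshold. The detected subensemble then depends on the analyzer setting, which is precisely what "fair sampling" forbids.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, np.pi, n)   # hidden polarization angle carried by each pair
threshold = 0.7                    # detector fires when the Malus-law intensity exceeds this

def angular_distance(analyzer_angle):
    d = np.abs(lam - analyzer_angle) % np.pi
    return np.minimum(d, np.pi - d)

for a in (0.0, np.pi / 4, np.pi / 2):
    fired = np.cos(a - lam) ** 2 > threshold          # square-law detection with a threshold
    near_axis_all = (angular_distance(a) < 0.3).mean()         # full ensemble
    near_axis_det = (angular_distance(a)[fired] < 0.3).mean()  # detected subensemble
    print(f"analyzer={a:.2f}: detected {fired.mean():.2f} of pairs; "
          f"lambda within 0.3 rad of axis: all={near_axis_all:.2f}, detected={near_axis_det:.2f}")
```

The detected fraction stays the same at every setting, but the detected subensemble is strongly enriched in hidden angles aligned with whichever analyzer happens to be used - the sample is selected by the very variable being measured, so it is not "fair".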
 
Last edited:
  • #122
nightlight You change the rate of tunneling, since the ionization energies are different in those cases.

ZapperZBingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum-fluctuation energy).

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics, read up on photodetector noise, and find out for yourself.
 
  • #123
seratend And if I have understood the paper, Kowalski has demonstrated - a fact well known to Hilbert space users - that with a simple gauge transformation (a selection of other p,q-type variables), you can change a non-hermitian operator into a hermitian one (his relations (4.55) to (4.57)). Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of constructing M, a linear operator. That step occurs in the transition between (4.44) and (4.46), where M is obtained. The stuff you quote is the comparatively trivial part about making (4.45) look like a Schroedinger equation with a hermitian operator (instead of M).

So I still conclude that even a unitary evolution of the Schroedinger equation may represent a non-linear ODE.
Therefore, I still do not understand what you call the "linear approximation", as in all cases I get exact results (approximation means, for me, the existence of small errors).


The point of his procedure is to take as input any set of non-linear (evolution) PD equations and create a set of linear equations which approximates the nonlinear set (an iterative procedure which arrives at an infinite set of linear approximations). The other direction, from linear to non-linear, is relatively trivial.
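To make the linearization idea concrete, here is a minimal sketch (a toy example of my own, not taken from Kowalski's papers) of a Carleman-type linearization of a single nonlinear ODE: the nonlinear term is traded for an infinite linear chain of moment variables, which is then truncated at a finite order.

```python
import numpy as np
from scipy.linalg import expm

# Carleman-style linearization of the logistic equation dx/dt = x - x^2.
# Introduce y_k = x^k; then dy_k/dt = k*(y_k - y_{k+1}), an infinite *linear* chain.
# Truncating at order N gives a finite linear system dy/dt = A y approximating the
# nonlinear flow; the approximation improves as N grows.

N = 8
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k
    if k < N:
        A[k - 1, k] = -k      # coupling to the next moment, dropped at the truncation order

x0, t = 0.1, 1.0
y0 = np.array([x0 ** k for k in range(1, N + 1)])

y_t = expm(A * t) @ y0
exact = x0 / (x0 + (1.0 - x0) * np.exp(-t))   # closed-form solution of the logistic equation

print("truncated linear system, x(t) ~", y_t[0])
print("exact nonlinear solution, x(t) =", exact)
```

The truncated linear evolution reproduces the nonlinear solution only approximately (and the error grows with time and with the size of the nonlinearity), which is the sense of "linear approximation" being argued about in this exchange.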

So, if we take your proposed physical interpretation, stochastic electrodynamics, as giving the true results, what we just need to do is rewrite it formally in the Hilbert space formalism and look at the difference with QED?

If you want to try, first check Barut's work, which should save you a lot of time. It's not that simple, though.
 
Last edited:
  • #124
nightlight said:
nightlight You change the rate of tunneling, since the ionization energies are different in those cases.

ZapperZBingo! What did you think I meant by "field emission" and "Fowler-Nordheim theory" all along? You disputed these and kept on insisting it has something to do with "QED vacuum fluctuation"! Obviously, you felt nothing wrong with dismissing things without finding out what they are.

Complete non sequitur. You stated that if you change detector materials, you will get different dark currents and that this would imply (if vacuum fluctuations had a role in dark currents) that vacuum fluctuations were changed.

It doesn't mean that the vacuum fluctuations changed. If you change the material and thus change the gap energies, the tunneling rates will change even though the vacuum stayed the same, since the tunneling rates depend on the gap size, which is a material property (in addition to depending on the vacuum-fluctuation energy).

Eh? Aren't you actually strengthening my point? It is EXACTLY what I was trying to indicate: changing the material SHOULD NOT change the nature of the vacuum beyond the material - and I'm talking about on the order of 1 meter away from the material here! If you argue that the dark currents are due to "QED vacuum fluctuations", then the dark current should NOT change simply because I switch the photocathode, since, by your own account, the vacuum fluctuations haven't changed!

But the reality is, IT DOES! I detect this all the time! So how can you argue that the dark currents are due to QED vacuum fluctuations? Simply because a book tells you so? And you're the one whining about other physicists being stuck with some textbook dogma?

For the last time - dark currents as detected in photodetectors have NOTHING to do with "QED vacuum fluctuation". Their properties and behavior are VERY well-known and well-characterized.

Take any textbook on Quantum Optics, read up on photodetector noise, and find out for yourself.

... and why don't you go visit an experimental site that you are criticizing and actually DO some of these things? It appears that your confinement within theoretical boundaries is cutting you off from physical reality. Grab a photodiode and make a measurement for yourself! And don't tell me that you don't need to know how it works to be able to comment on it. You are explicitly questioning its methodology and what it can and cannot measure. Without actually using it, all you have is a superficial knowledge of what it can and cannot do.

Comments such as the one above are exactly the kind of thing from theorists that Harry Lipkin criticized in his "Who Ordered Theorists?" essay. When faced with an experimental result, such theorists typically say "Oh, that can't happen because my theory says it can't". It's the same thing here. When I tell you that I SEE dark currents, and LOTS of them, and that they have nothing to do with QED vacuum fluctuations, all you can tell me is to go look up some textbook? Oh, it can't happen because that text says it can't!

If all the dark current I observe in the beamline were due to QED vacuum fluctuations, then our universe would be OPAQUE!

Somehow, there is a complete disconnect between what's on paper and what is observed. Houston, we have a problem...

Zz.
 
  • #125
nightlight said:
Not quite an analogous situation. The "fair sampling" assumption is plainly violated by the natural semiclassical model (see the earlier message on the sin/cos split), as well as in Quantum Optics, which has exactly the same amplitude propagation through the polarizers to the detectors and follows the same square-law detector trigger probabilities.

Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I've been reading up a bit on a few of Barut's articles which are quite interesting, and although I don't have the time and courage to go through all of the algebra, it is impressive indeed. I have a way of understanding his results: in the QED language, when it is expressed in the path integral formalism, he's working with the full classical solution, and not working out the "nearby paths in configuration space". However, this technique has a power over the usual perturbative expansion, in that the non-linearity of the classical solution is also expanded in that approach. So on one hand he neglects "second quantization effects", but on the other hand he includes non-linear effects fully, which are usually only dealt with in a perturbative expansion. There might even be cancellations of "second quantization effects", I don't know. In the usual path integral approach, these are entangled (the classical non-linearities and the quantum effects).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

cheers,
Patrick.
 
  • #126
vanesch said:
However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon-detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no? Aren't we cheating here?
I still cannot see naively how you can have synchronized clicks with classical fields...

cheers,
Patrick.

EDIT: PS: I just ordered Mandel's quantum optics book to delve a bit deeper into these issues; I'll get it next week... (this is a positive aspect of this discussion for me)
 
Last edited:
  • #127
nightlight ... Therefore, many non-unitary evolutions may be expressed exactly as functions of unitary evolutions.

That was not it. The point is the linearization procedure, which is the step of constructing M, a linear operator. That step occurs in the transition between (4.44) and (4.46), where M is obtained. The stuff you quote is the comparatively trivial part about making (4.45) look like a Schroedinger equation with a hermitian operator (instead of M).

OK, for me the redefinition of the M of (3.2) into the "M'" of (4.46) was trivial :). In fact, I considered §D and §E as just a method for a well-known problem of vector spaces: how to diagonalize certain non-hermitian operators (i.e., in our case, how to make solving for the time evolution operator easy).
Starting from a non-hermitian operator M = M1 + iM2, where M1 and M2 are hermitian, we may search for a linear transformation on the Hilbert space under which M becomes hermitian (and may thus be diagonalized with a further transformation).

So the important point for me is that I think you accept that a PDE/ODE may be reformulated in the Hilbert space formalism, but without the obligation of getting a unitary time evolution (i.e., i d/dt|psi> = M|psi>, where M may or may not be hermitian).

nightlight The point of his procedure is to take as input any set of non-linear (evolution) PD equations and create a set of linear equations which approximates the nonlinear set (an iterative procedure which arrives at an infinite set of linear approximations).

So "approximation" for you means an infinite set of equations to solve (like computing the inverse of an infinite-dimensional matrix). But theoretically, you know that you have an exact result. OK, let's take this definition. Well, you may accept more possibilities than I do: that a non-unitary time evolution (i d/dt|psi> = M|psi>) may be changed, with an adequate transformation, into a unitary one.

So the use of stochastic electrodynamics may theoretically be transposed into an exact i d/dt|psi> = H|psi> (maybe in a 2nd-quantization view), where H (the hypothetical Hamiltonian) may not be hermitian (and I think you assume it is not).

So if we assume (very quickly) that H is non-hermitian (= H1 + iH2, with H1 and H2 hermitian and H1 >> H2 for all practical purposes), we then have a Schroedinger equation with a kind of diffusion term (H2) that may approach some of the proposed modifications found in the literature to justify the "collapse" of the wave function. Decoherence program - see for example Joos, quant-ph/9908008, eq. 22, and many other papers.
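A minimal numerical sketch of the kind of evolution being described (the 2x2 matrices are generic choices, purely for illustration): with i d/dt|psi> = (H1 + iH2)|psi> and both H1 and H2 hermitian, the evolution is unitary only when H2 = 0; a nonzero H2 makes the norm drift, which is the "diffusion-like" behaviour referred to above.

```python
import numpy as np
from scipy.linalg import expm

# Toy non-hermitian evolution: i d/dt |psi> = (H1 + i*H2) |psi>, H1 and H2 hermitian.
H1 = np.array([[1.0, 0.3], [0.3, -1.0]])        # hermitian "Hamiltonian" part
H2 = 0.1 * np.array([[1.0, 0.0], [0.0, 2.0]])   # small hermitian part playing the diffusion role

psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
for t in (0.0, 1.0, 2.0):
    U = expm(-1j * (H1 + 1j * H2) * t)          # formal solution exp(-i M t) with M = H1 + i H2
    psi = U @ psi0
    print(f"t={t}: norm = {np.linalg.norm(psi):.4f}")   # stays 1 only if H2 = 0
```

With this sign convention the norm grows; flipping the sign of H2 gives decay. Either way, the departure from unitarity is controlled by the size of H2, matching the "H1 >> H2 for all practical purposes" regime described above.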

Thus stochastic electrodynamics might (?) be checked more easily with experiments that measure the decoherence time of quantum systems, rather than with the EPR experiments, where you have argued that it is not possible to tell the difference.


Thus, I think that the promoters of this theory should work on a Hilbert space formulation (at least an approximate one, but precise enough to get the non-hermitian Hamiltonian). Then they could apply it to other experiments to demonstrate the difference (i.e., different predictions between classical QM and stochastic electrodynamics through, maybe, decoherence experiments), rather than working on experiments that cannot separate the theories and verifying that the two theories agree with experiment:
It is their job, as their claim seems to be that classical QM is not OK.

Seratend.
 
  • #128
vanesch Well, the fair sampling is absolutely natural if you accept photons as particles, and you only worry about the spin part. It is only in this context that EPR excludes semiclassical models. The concept of "photon" is supposed to be accepted.

I don't see how does "natural" part help you restate the plain facts: Experiments refuted the "fair sampling" theories. It seems the plain fact still stands as is.

Secondly, I don't see it as natural for particles either. Barut has some EPR-Bell papers for spin-1/2 point particles in a classical SG, and he gets unfair sampling. In that case it was due to the dependence of the beam spread in the SG on the hidden classical magnetic moment. This yields particle paths in the ambiguous, even wrong, region, depending on the orientation of the hidden spin relative to the SG axis. For coincidences, this makes the coincidence accuracy sensitive to the angle between the two SGs, i.e. the "loophole" was in the analyzer violating the fair sampling (the same analyzer problem occurs for high-energy photons).

It seems what you mean by "particle" is strict unitary evolution with a sharp particle number, and in the Bell setup with sharp value 1 on each side. That is not true of the QED photon number (you have neither a sharp nor a conserved photon number; and specifically for the proposed setups you either can't detect the photons reliably enough or you can't analyze them on a polarizer reliably enough, depending on the photon energy; these limitations are facts arising from the values of the relevant atomic interaction constants and cross sections, which are what they are, i.e. you can't assume you can change them, you can only come up with another design which works around any existing flaw due to those constants).

However, I still fail to see the link to what we are discussing here, and how this leads to correlations in 2-photon detection.

It helps in the sense that if you can show that Barut's self-field ED agrees with QED beyond the perturbative orders relevant for Quantum Optics measurements (which is actually the case), you can immediately conclude, without having to work out a detailed Quantum Optics/QED prediction (which is accurate enough to predict exactly what would actually be obtained for correlations with the best possible detectors and polarizers consistent with the QED vacuum fluctuations and all the relevant physical constants of the detectors and the polarizers) that a local model of the experiment exists for any such prediction -- the model would be the self-field ED itself.

Thus you would immediately know that however you vary the experimental setup/parameters, within the perturbative orders to which QED & self-fields agree, something will be there to block it from violating locality. You don't need to argue about detector QE or polarizer efficiency; you know that anything they cobble together within the given perturbative orders will subject any conjecture about future technology being able to improve the efficiency of a detector (or other components) beyond threshold X to reductio ad absurdum. Thus the experimenter would have to come up with a design which relies in an essential way on predictions of QED perturbative orders going beyond the orders of agreement with the self-field ED. Note also that the existing experiments are well within the orders of agreement of the two theories, thus for this type of test there will always be a "loophole" (i.e. the actual data won't violate the inequality).

This is the same kind of conclusion you would make about a perpetuum mobile claim which asserts that some particular design will yield 110% efficiency if the technology gets improved enough to have some sub-component of the apparatus work with accuracy X or better. The Bell tests make exactly this kind of assertion (without the fair sampling conjecture, which merely helps them claim that experiments exclude all local "fair sampling" theories, and that is too weak to exclude anything that exists, much less all future local theories) -- we could violate the inequality if we could get X=83% or better overall setup efficiency (which accounts for all types of losses, such as detection, polarizer, aperture efficiency, photon number spread,...).

Now, since in the perpetuum mobile case you can invoke the general energy conservation law, you can immediately tell the inventor that (provided the rest of his logic is valid) his critical component will not be able to achieve accuracy X, since that would violate energy conservation. You don't need to go into a detailed survey of all conceivable relations between the relevant interaction constants in order to compute the limits on the accuracy X for all conceivable technological implementations of that component. You would know it can't work, since assuming otherwise would lead to reductio ad absurdum.
 
Last edited:
  • #129
vanesch I guess the crux is given by equation 27 in quant-ph/9711042,
which gives us the coincidence probability for 2 photon detector clicks in 2 coherent beams. However, this is borrowed from quantum physics, no ? Aren't we cheating here ?


The whole section 2 is a derivation of the standard Quantum Optics prediction (in contrast to a toy QM prediction) expressed in terms of Wigner functions, which are a perfectly standard tool for QED or QM. So, of course, it is all standard Quantum Optics, just using Wigner's joint distribution functions instead of the more conventional Glauber correlation functions. The two are fully equivalent, thus that is all a standard, entirely uncontroversial part. They are simply looking for a more precise expression of the standard Quantum Optics prediction, showing the nature of the non-classicality in a form suitable for their analysis.

The gist of their argument is that if for a given setup you can derive a strictly positive Wigner function for the joint probabilities, there is nothing else to discuss: all the statistics are perfectly classical. The problem is how to interpret the other case, when there are negative-probability regions of W (which show up in PDC or any sub-Poissonian setup). To do that they need to examine what kind of detection and counting the Wigner distribution corresponds to operationally (to make a prediction with any theoretical model you need operational mapping rules of that kind). With the correspondence established, their objective is to show that the effects of the background subtractions and the above-vacuum threshold adjustments on individual detectors (the detector adjustments can only set the detector's Poisson average; there is no sharp decision in each try, thus there is always some loss and some false positives) are in combination always sufficient to explain the negative "probability" of the Wigner measuring setups as artifacts of the combined subtraction and losses (the two can be traded off by the experimenter, but he will always have to choose between increased losses and increased dark currents, and thus subtract the background and unpaired singles in proportions of his choice, with the total subtraction not reducible below the vacuum fluctuation limit).
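
For readers not used to Wigner functions, a minimal single-mode illustration (standard textbook formulas, not the joint PDC distributions of the paper) of what "negative regions" means: the Wigner function of a one-photon Fock state dips below zero at the phase-space origin, while that of any coherent state is a positive Gaussian.

```python
import numpy as np
from scipy.special import eval_laguerre

# Conventions: hbar = 1, quadratures with vacuum variance 1/2, W normalized to 1.
def wigner_fock(n, x, p):
    r2 = x**2 + p**2
    return ((-1)**n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2 * r2)

def wigner_coherent(alpha, x, p):
    x0, p0 = np.sqrt(2) * alpha.real, np.sqrt(2) * alpha.imag
    return (1 / np.pi) * np.exp(-((x - x0)**2 + (p - p0)**2))

print("Fock |1> at the origin:  ", wigner_fock(1, 0.0, 0.0))            # -1/pi < 0: negative region
print("coherent |1+0.5j> there: ", wigner_coherent(1 + 0.5j, 0.0, 0.0)) # > 0 everywhere
```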

That is what the subsequent sections 3-5 deal with. I recall discussing the problems of that paper with them at the time, since I thought the presentation (especially in sect. 5) wasn't sharp and detailed enough for readers who weren't up to speed on their whole previous work. They have since sharpened their detector analysis with more concrete detection models and detailed computations; you can check their later preprints in quant-ph if sections 4 & 5 don't seem sufficient to you.

Note that they're not pushing the SED model explicitly here, but rather working in the standard QO scheme (even though they know from their SED treatment where to look for the weak spots in the QO treatment). This is to make the paper more publishable, easier to get past the hostile gatekeepers (who might just say, SED who? and toss it all out). After all, it is not a question of whether experiments have produced any violation -- they didn't. In a fully objective system that would be it; no one would have to refute anything, since nothing was shown that needs refuting.

But in the present system the non-locality side has an advantage, so that their handwave about "ideal" future technology is taken as a "proof" that it will work some day, which the popularizers and educators then translate into "it worked" (which is mostly the accepted wisdom you'll find in these kinds of student forums). Unfortunately, with this kind of bias, the only remaining way for critics of the prevailing dogma is to show, setup by setup, why a given setup, even when granted the best conceivable future technology, can't produce a violation.

I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).
 
Last edited:
  • #130
nightlight said:
The gist of their argument is that if for a given setup you can derive strictly positive Wigner function for joint probabilities, there is nothing else to discuss, all statistics is perfectly classical.

This is what I don't understand (I'm waiting for my book on quantum optics..., hence my silence). After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves. However, the statistics derived from these Fock states for photon detection assume a second quantization, no ? So how can you 1) use the statistics derived from second quantization (admitted, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization ??

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.
-Find the Fock space description corresponding to it (with the Wigner function).
-From that Fock space description, calculate coincidence rates and other statistical properties.
-Associate with each classical EM wave, such statistics. (*)
-Take some correlation experiment with some data.
-Show that you can find, for the cases at hand, a classical EM wave such that when we apply the above rules, we find a correct prediction for the data.
-Now claim that these data are explainable purely classically.

To me, the last statement is wrong, because of step (*)
You needed second quantization in the first place to derive (*).

cheers,
Patrick.

EDIT: I think that I've been too fast here. It seems indeed that two-photon correlations cannot be described that way, and that you always end up with the equivalence of a single-photon repeated experiment, the coherent state experiment, and the classical <E^-(r1,t1) E^+(r2,t2)> intensity cross-correlation.
 
Last edited:
  • #131
nightlight said:
I still cannot see naively how you can have synchronized clicks with classical fields...

You can't get them and you don't get them with the raw data, which will always have either too much uncorrelated background or too many pair losses. You can always lower the sensitivity to reduce the background noise in order to have smaller explicit subtractions, but then you will be discarding too many unpaired singles (events which don't have a matching event on the other side).

How about this:

http://scotty.quantum.physik.uni-muenchen.de/publ/pra_64_023802.pdf

look at figure 4

I don't see how you can explain these (raw, I think) coincidence rates unless you assume (and I'm beginning to understand what you're pointing at) that the classical intensities are "bunched" in synchronized intensity peaks.
However, if that is the case, a SECOND split of one of the beams towards 2 detectors should show a similar correlation, while the quantum prediction will of course be that they are anti-correlated.


2-photon beam === Pol. Beam splitter === beam splitter -- D3
.......|........|
....... D1......D2

Quantum prediction: D1 is correlated with (D2+D3) in the same way as in the paper ; D2 and D3 are Poisson distributed, so at low intensities anti-correlated.

Your prediction: D1 is correlated with D2 and with D3 in the same way, and D2 is correlated with D3 in the same way.

Is that correct ?

cheers,
Patrick.
 
Last edited by a moderator:
  • #132
vanesch After all, these Wigner functions describe - if I understand well - the Fock states that correspond closely to classical EM waves.

No, the Wigner (or Husimi) joint distributions and their differential equations are a formalism fully equivalent to the Fock space formulation. The so-called "deformation quantization" (an alternative form of quantization) is based on this equivalence.

It is the (Glauber's) "coherent states" which correspond to the classical EM theory. Laser light is a template for coherent states. For them the Wigner functions are always positive, thus there is no non-classical prediction for these states.

So how can you 1) use the statistics derived from second quantization (admitted, for cases that correspond to classical waves) to 2) argue that you don't need a second quantization ??

The Marshall-Santos PDC paper you cited uses standard Quantum Optics (in the Wigner formalism) and a detection model to show that there is no non-classical correlation predicted by Quantum Optics for the PDC sources. They're not adding to or modifying the PDC treatment, merely rederiving it in the Wigner function formalism.

The key is in analyzing with more precision and finesse (than the usual engineering-style QO) the operational mapping rules between the theoretical distributions (computed under the normal operator ordering convention, which corresponds to the Wigner functions) and the detection and counting procedures. (You may also need to check a few of their other preprints on detection for more details and specific models and calculations.) Their point is that the conventional detection & counting procedures (with the background subtractions and tuning to [almost] no vacuum detection) amount to the full subtraction needed to produce the negative probability regions (conventionally claimed as non-classicality) of the Wigner distributions, and thus the standard QO predictions for the PDC correlations.

The point of these papers is to show that, at least for the cases analyzed, Quantum Optics doesn't predict anything non-classical, even though PDC, sub-Poissonian distributions, anti-bunching,... are a soft non-classicality (they're only suggestive, e.g. at the superficial engineering or pedagogical level of analysis, but not decisive like the violation of Bell's inequality, which absolutely no classical theory, deterministic or stochastic, can violate).

The classical-Quantum Optics equivalence for thermal (or chaotic) light has been known since the 1950s (this was clarified during the Hanbury Brown and Twiss effect controversy). A similar equivalence was established in 1963 for the coherent states, making all laser-light effects (plus linear optical elements and any number of detectors) fully equivalent to a classical description. Marshall and Santos (and their students) have extended this equivalence to the PDC sources.

Note also that 2nd quantization is in these approaches (Marshall-Santos SED, Barut self-field ED, Jaynes neoclassical ED) viewed as a mathematical linearization procedure for the underlying non-linear system, not something that adds any new physics. After all, the same 2nd quantization techniques are used in solid state physics and other areas for entirely different underlying physics. The 1st quantization is seen as a replacement of point particles by matter fields, thus there is no point in "quantizing" the EM field at all (it is a field already), or the Dirac matter field (again).

As background to this point, I mentioned (a few messages back) some quite interesting results by the mathematician Krzysztof Kowalski, which show explicitly how classical non-linear ODE/PDE systems can be linearized in a form that looks just like a bosonic Fock space formalism (with creation/annihilation operators, vacuum state, particle number states, coherent states, a bosonic Hamiltonian and the standard quantum state evolution). In that case it is perfectly transparent that no new physics is brought in by the 2nd quantization; it is merely a linear approximation of a non-linear system (it iteratively yields an infinite number of linear equations from a finite number of non-linear equations). While Kowalski's particular linearization scheme doesn't show that QED is a linearized form of non-linear equations such as Barut's self-field, it provides an example of this type of relation between the Fock space formalism and non-linear classical equations.
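
For concreteness, here is a toy Carleman-style linearization (a generic sketch of the idea only, not Kowalski's specific Fock-space construction): the single nonlinear ODE dx/dt = -x^2 turns into an infinite linear hierarchy for the moments y_k = x^k, which is then truncated at some finite order.

```python
import numpy as np
from scipy.linalg import expm

# dx/dt = -x^2 has the exact solution x(t) = x0/(1 + x0*t).
# The moments y_k = x^k obey dy_k/dt = -k*y_{k+1}: linear but infinite; truncate at order N.
N, x0, t = 12, 0.5, 1.0

A = np.zeros((N, N))
for k in range(1, N):
    A[k - 1, k] = -k                          # row for y_k couples only to y_{k+1}
y0 = np.array([x0**k for k in range(1, N + 1)])

x_lin = (expm(A * t) @ y0)[0]                 # y_1(t) from the truncated linear system
print("truncated linear hierarchy:", x_lin)
print("exact nonlinear solution:  ", x0 / (1 + x0 * t))
```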

You see, the way I understand the message (and maybe I'm wrong) is the following procedure:
-Take a classical EM wave.


No, they don't do that (here). They're using the standard PDC model and are treating it fully within the QO, just in the Wigner function formalism.
 
Last edited:
  • #133
vanesch I don't see how you can explain these (raw, I think) coincidence rates

Those are not even close to raw counts. Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy, and to reduce background they used a short coincidence window, resulting in a quite low (for any non-classicality claim) pairs/singles ratio. Thus the combination of the background subtractions and the unpaired singles is much larger than what the Marshall-Santos PDC classicality requires (which assumes only the vacuum fluctuation noise subtraction, with everything else granted as optimal). Note that M-S SED (which is largely equivalent in predictions to Wigner distributions with positive probabilities) is not some contrived model made to get around some loophole, but a perfectly natural classical model, a worked-out and refined version of Planck's 2nd Quantum Theory (of 1911, where he added an equivalent of 1/2 hν of noise per mode).
 
  • #134
Brief question about the Aharonov-Bohm effect

Hello Nightlight,

I follow your exceptional discussion with Vanesch and a couple of other contributors. It is maybe the best dispute that I have read on this forum so far. In particular, it looks as if you and Vanesch are really getting 'somewhere'. I am looking forward to the final verdict whether or not the reported proof of spooky action at a distance is in fact valid.

Anyway, I consider the Aharonov-Bohm effect to be as fundamental a non-local manifestation of QM as the (here strongly questioned) Bell violation.

To put it a bit more provocatively, it also looks like a 'quantum mystery' of the kind you seem to despise.

My question is: Are you familiar with this effect and, if yes, do you believe in it (whatever that means) or do you similarly think that there is a sophisticated semi-classical explanation?

Roberth
 
  • #135
Roberth Anyway, I consider the Aharonov-Bohm effect to be as fundamental a non-local manifestation of QM as the (here strongly questioned) Bell violation.

Unfortunately, I haven't studied it in any depth beyond the shallow coverage in an undergraduate QM class. I have always classified it among the "soft" non-locality phenomena: those which are superficially suggestive of non-locality (after all, both Psi and A still evolve fully locally) but lack decisive criteria (unlike Bell's inequalities).
 
  • #136
nightlight said:
Their pairs/singles ratio is 0.286 and their QE is 0.214. Avalanche diodes are very noisy and to reduce background they have used short coincidence window, resulting in quite low (for any non-classicality claim) pair/singles ratio.

Aaah, stop with your QE requirements. I'm NOT talking about EPR kinds of experiments here, because I have to fight the battle upstream with you :smile: -- normally, people accept second quantization, and you don't. So I'm looking at possible differences between what first quantization and second quantization can do. And in the meantime I'm learning a lot of quantum optics :approve: which is one of the reasons why I continue this debate here. You have 10 lengths of advantage over me there (but hey, I run fast :devil:)

All single-photon situations are indeed fully compatible with classical EM, so I won't be able to find any difference in prediction there. Also, I have to live with low efficiency photon detectors, because otherwise you object about fair sampling, so I'm looking at a possibility of a feasible experiment with today's technology that proves (or refutes ?) second quantization. I'm probably a big naive guy, and this must have been done before, but as nobody is helping here, I have to do all the work myself :bugeye:

What the paper that I showed proves to me is that we can get correlated clicks in detectors way beyond simple Poisson coincidences. I now understand that you picture this as correlated fluctuations in intensity in the two classical beams.
But if the photon picture is correct, I think that after the first split (with the polarizing beam splitter) one photon of the pair goes one way and the other goes the other way, giving correlated clicks superposed on about 3 times more uncorrelated clicks, with a detection probability of about 20% or so; while you picture that as two intensity peaks in the two beams, giving rise to enhanced detection probabilities (and hence coincidences) at the two detectors (is that right ?).
Ok, up to now, the pictures are indistinguishable.
But I proposed the following extension of the experiment:

In the photon picture, each of the two branches just contains "single photon beams", right ? So if we put a universal beam splitter in one such branch (not polarizing), we should get uncorrelated Poisson streams in each one, so the coincidences between these two detectors (D2 and D3) should be those of two independent Poisson streams. However, D2 PLUS D3 should give a signal which is close to what the old detector gave before we split the branch again. So we will have a similar correlation between (D2+D3) and D1 as we had in the paper, but D2 and D3 shouldn't be particularly correlated.

In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

Is this a correct view ? Can this distinguish between the photon picture and the classical wave picture ? I think so, but as I'm not an expert in quantum optics, nor an expert in your views, I need an agreement here.
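
To make the two naive pictures concrete, here is a toy Monte Carlo sketch (an idealization only: one |1,1> pair per heralded try, perfect splitters, no background, and an assumed flat detection probability eta; the ZPF picture discussed in the following posts is more subtle than the bare "classical pulse" model used here):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta = 200_000, 0.2        # heralded tries (D1 fired) and an assumed detector efficiency

# Photon picture: the heralded photon takes exactly ONE output of the D2/D3 splitter.
to_D2 = rng.random(N) < 0.5
d2_ph = to_D2 & (rng.random(N) < eta)
d3_ph = ~to_D2 & (rng.random(N) < eta)

# Naive classical-pulse picture: the heralded intensity peak splits in half and
# D2, D3 fire independently, each with probability eta/2.
d2_cl = rng.random(N) < eta / 2
d3_cl = rng.random(N) < eta / 2

for name, d2, d3 in [("photon picture ", d2_ph, d3_ph), ("classical pulse", d2_cl, d3_cl)]:
    print(f"{name}: D2&D3 coincidences per herald = {np.mean(d2 & d3):.5f}")
```

In this idealization the photon picture gives strictly zero D2&D3 coincidences, while the naive classical-pulse picture gives of order (eta/2)^2.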

cheers,
Patrick.
 
  • #137
In your "classical wave" picture, the intensity peak in our split branch splits in two half intensity peaks, which should give rise to a correlation of D2 and D3 which is comparable to half the correlation between D2 and D1 and D3 and D1, no ?

You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state, but a superposition with at least a 1/2-photon equivalent spread.

Finally, the "classical" picture with ZPF allows a limited form of sub-Poissonian statistics for the adjusted counts[/color] (or the extrapolated counts, e.g. if you tune your detectors to a higher trigger threshold to reduce the explicit background subtractions in which case you raise the unpaired singles counts and have to extrapolate). This is due to the sub-ZPF superpositions (which enter the ZPF averaging for the ensemble statistics) of the signal and the ZPF in one branch after the splitter. Unless you're working in a high-noise, high sensitivity detector mode (which would show you, if you're specifically looking for it, a drop in the noise coinciding with the detection in the other branch), all you would see is an appearance of the sub-Poissonian behavior on the subtracted/extrapolated counts. But this is exactly the level of anticorrelation that the classical ZPF model predicts for the adjusted counts.

The variation you're describing was done for exactly that purpose in 1986 by P. Grangier, G. Roger and A. Aspect ("Experimental evidence for a photon anticorrelation effect on a beam splitter: a new light on single photon interference," Europhys. Lett. Vol. 1 (4), pp. 173-179, 1986). For the pair source they used their original Bell-test atomic cascade. Of course, the classical model they tested against, to declare the non-classicality, was the non-ZPF classical field, which can't reproduce the observed level of anticorrelation in the adjusted data. { I recall seeing another preprint of that experiment (dealing with the setup properties prior to the final experiments) which had more detailed noise data indicating a slight dip in the noise in the 2nd branch, for which they had some aperture/alignment type of explanation. }

Marshall & Santos had several papers following that experiment in which their final Stochastic Optics (the SED applied to Quantum Optics) crystallized, including their idea of the "subthreshold" superposition, which was the key to solving the anticorrelation puzzle. A longer, very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall, Emilio Santos, "Stochastic Optics: A Reaffirmation of the Wave Nature of Light," Found. Phys., Vol. 18, No. 2, 1988, pp. 185-223, where they show that a perfectly natural "subthreshold" model is in full quantitative agreement with the anticorrelation data (they also relate their "subthreshold" idea to its precursors, such as the "empty wave" of Selleri 1984 and the "latent order" of Greenberger 1987; I tried to convince Trevor to adopt the less accurate but catchier term "antiphoton" but he didn't like it). Some day, when this present QM non-locality spell is broken, these two will be seen as the Galileo and Bruno of our own dark ages.
 
Last edited:
  • #138
nightlight said:
You're ignoring a few additional effects here. One is that the detector counts are Poissonian (which is significant for visible-range photons). Another is that you don't have a sharp photon number state, but a superposition with at least a 1/2-photon equivalent spread.

You repeated that already a few times, but I don't understand this. After all, I can just as well work in the Hamiltonian eigenstates. In quantum theory, if you lower the intensity of the beam enough, you have 0 or 1 photon, and there is no such thing to my knowledge as half a photon spread, because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit). After all, gamma photons are nothing else but Lorentz-transformed visible photons, so what is true for visible photons is also true for gamma photons, it is sufficient to have the detector speeding at you (ok, there are some practical problems to do that in the lab :-)
Also, if the beam intensity is low enough, (but unfortunately the background scales with the beam intensity), it is a fair assumption that there is only one photon at a time in the field. So I'm pretty sure about what I said about the quantum predictions:

Initial pair -> probability e1 to have a click in D1
probability e2/2 to have a click in D2 and no click in D3
probability e3/2 to have a click in D3 and no click in D2

This is superposed on an independent probability b1 to have a click in D1, a probability b2 to have a click in D2, and a probability b3 to have a click in D3, all independent but proportional to some beam power I.

It is possible to have a negligible background by putting the thresholds high enough and isolating well enough from any light source other than the original power source. This can cost some efficiency, but not much.

If I is low enough, we consider that, due to the Poisson nature of the events, no independent events occur together (that is, we neglect probabilities that go in I^2). After all, this is just a matter of spreading the statistics over longer times.

So: rates of coincidences as predicted by quantum theory (also transcribed as a small code sketch right after this list):
a) D1 D2 D3: none (order I^3)
b) D1 D2 no D3: I x e1 x e2/2 (+ order I^2)
c) D1 no D2 D3: I x e1 x e3/2 (+ order I^2)
d) D1 no D2 no D3: I x (b1 + e1 x (1- e2/2 - e3/2))
e) no D1 D2 D3: none (order I^2)
f) no D1 D2 no D3: I x (b2 + (1-e1)x e2/2)
g) no D1 no D2 D3: I x (b3 + (1-e1)xe3/2)
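
(A direct transcription of the list above as a small sketch, just to keep the bookkeeping straight; the parameter values in the call are made-up placeholders.)

```python
# "!D" means "no click on D".  e1,e2,e3 are detector efficiencies, b1,b2,b3 the
# background click probabilities per unit beam power, I the pair-production rate.
def coincidence_rates(I, e1, e2, e3, b1, b2, b3):
    return {
        "D1 D2 D3":    0.0,                                 # order I^3 (neglected)
        "D1 D2 !D3":   I * e1 * e2 / 2,
        "D1 !D2 D3":   I * e1 * e3 / 2,
        "D1 !D2 !D3":  I * (b1 + e1 * (1 - e2 / 2 - e3 / 2)),
        "!D1 D2 D3":   0.0,                                 # order I^2 (neglected)
        "!D1 D2 !D3":  I * (b2 + (1 - e1) * e2 / 2),
        "!D1 !D2 D3":  I * (b3 + (1 - e1) * e3 / 2),
    }

print(coincidence_rates(I=1e-3, e1=0.2, e2=0.2, e3=0.2, b1=1e-4, b2=1e-4, b3=1e-4))
```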


A longer very readable overview of their ideas at the time, especially regarding the anticorrelations, was published in Trevor Marshall, Emilio Santos "Stochastic Optics: A Reaffirmation of the Wave Nature of Light" Found. Phys., Vol 18, No 2. 1988, pp 185-223,

If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

cheers,
Patrick.
 
Last edited:
  • #139
vanesch After all, I can just as well work in the Hamiltonian eigenstates.

The output of a PDC source is not the same as picking a state in Fock space freely. That is why they restricted their analysis to PDC sources, where they can show that the resulting states will not have the Wigner distribution negative beyond what detection & counting calibrated to a null result for the 'vacuum fluctuations alone' would produce. That source doesn't produce eigenstates of the free Hamiltonian (consider also the time resolution of such modes with sharp energy). It also doesn't produce gamma photons.

because you should then find the same with gammas, which isn't the case (and is closer to my experience, I admit).

You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect for the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters). The interaction constants, cross sections, tunneling rates,... don't scale with the photon energy. You can have a virtually perfect detector for gamma photons, but you won't have a perfect analyzer or beam splitter. Thus for gammas you can get nearly perfect particle-like behavior (and very weak wave-like behavior), which is no more puzzling or non-classical than the mirror with holes in the coating scanned by a thin light beam mentioned earlier.

To preempt loose argument shifts of this kind, I will recall the essence of the contention here. We're looking at a setup where a wave packet splits into two equal, coherent parts A and B (packet fragments in orbital space). If brought together in a common area, A and B will produce perfect interference. If any phase shifts are inserted in the paths of A or B, the interference pattern will shift depending on the relative phase shift of the two paths, implying that in each try the two packet fragments propagate along both paths (this is also the propagation that the dynamical/Maxwell equations describe for the amplitude).

The point of contention is what happens if you insert two detectors DA and DB in the paths of A and B. I am saying that the two fragments propagate to their respective detectors, interact with them, and each detector triggers or doesn't trigger, regardless of what happened at the other detector. The dynamical evolution is never suspended, and the triggering is solely a result of the interaction between the local fragment and its detector.

You're saying that, at some undefined stage of the triggering process of the detector DA, the dynamical evolution of the fragment B will stop, the fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.

The alleged empirical consequence of this conjecture will be the "exclusion" of the trigger B whenever trigger A occurs. The "exclusion" is such that it cannot be explained by the local mechanism of independent detection under the uninterrupted dynamical evolution of each fragment and its detector.

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states which superpose all photon number states using for coefficient magnitudes the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use as the input the sharp number state, thus they'll show a sharp number state in output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers. A single multiphoton capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Thus the probability of multiple ND triggers is now only 1%, i.e. we have 9 times more single triggers than the multiple triggers, while before, for n=1, we had only 37/26=1.4 times more single triggers than multiple triggers. It appears we had greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, thus the counts will appear as "exclusive" as we wish. But does this kind of low intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments on all ND-1 detectors as soon as the 1 detector triggers?

Of course not. Let's look what happens under assumption that each of ND detectors triggers via its own Poissonian entirely independently of others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each of ND detectors. Let's denote p0=P(m,k=0) the probability that one (specific) detector will not trigger. Thus p0=exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high "exclusivity" (i.e. low intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.

The probability that none of ND's will trigger, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as no-trigger probability of the single perfect multiphoton (square law Poissonian) detector capturing all of the "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is a binomial coefficient), the probability of exactly k detectors triggering is D(k)=p1^k*p0^(ND-k)*C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low intensity limit (n->0) and for large ND (corresponding to a perfect multiphoton resolution), D(k) becomes (using Stirling approximation and using p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore, this low intensity exclusivity which you keep bringing up is trivial since it is precisely what the independent triggers of each detector predict no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).
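
A quick numerical check of that counting argument (a sketch with arbitrary n and ND): the binomial D(k) built from ND independent detectors, each Poissonian with mean m = n/ND, reproduces the single-detector Poissonian P(n,k).

```python
from math import comb, exp, factorial

def P_poisson(n, k):
    # single perfect multiphoton detector: P(n,k) = n^k exp(-n)/k!
    return n**k * exp(-n) / factorial(k)

def D_binomial(n, ND, k):
    # ND independent detectors, each with no-trigger probability p0 = exp(-n/ND)
    p1 = 1 - exp(-n / ND)
    return comb(ND, k) * p1**k * (1 - p1)**(ND - k)

n, ND = 0.1, 256
for k in range(4):
    print(k, round(D_binomial(n, ND, k), 6), round(P_poisson(n, k), 6))
```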

The real question is how to deal with the apparent sub-Poissonian cases as in PDC. That is where these kinds of trivial arguments don't help. One has to, as Marshall & Santos do, look at the specific output states and find the precise degree of the non-classicality (which they express for convenience in the Wigner function formalism). Their ZPF ("vacuum fluctuations" in conventional treatment) based detection and coincidence counting model allows for a limited degree of non-classicality in the adjusted counts. Their PDC series of papers shows that for PDC sources all non-classicality is of this apparent type (the same holds for laser/coherent/Poissonian sources and chaotic/super-Poissonian sources).

Without the universal locality principle, you can only refute specific overlooked effects of a particular claimed non-classicality setup. This does not mean that the nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect and that it is more likely that the experiment designer will overlook more obscure effects.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.
 
Last edited:
  • #140
If you have it, maybe you can make it available here ; I don't have access to Foundations of Physics.

I have only a paper preprint and no scanner handy which could make a usable electronic copy of it. The Los Alamos archive has their more recent preprints. Their preprint "The myth of the Photon" also reviews the basic ideas and contains a citation to a Phys. Rev. version of that Found. Phys. paper. For an intro to Wigner functions (and the related pseudo-distributions, the Husimi and Glauber-Sudarshan functions) you can check these lecture notes http://web.utk.edu/~pasi/davidovich.pdf and a longer paper with more on their operational aspects.
 
Last edited:
  • #141
nightlight said:
You're trying to make the argument universal, which it is not. It is merely addressing an overlooked effect for the particular non-classicality claim setup (which also includes a particular type of source and nearly perfectly efficient polarizers and beam splitters).

I'm looking more into Santos and Co.'s articles. It's a slow read, but I'm working my way up... so patience :-) BTW, thanks for the lecture notes, they look great !


You're saying that, at some undefined stage of the triggering process of the detector DA, the dynamical evolution of the fragment B will stop, the fragment B will somehow shrink/vanish even if it is light years away from A and DA. Then, again at some undefined later time, the dynamical evolution of B/DB will be resumed.

Not at all. My view (which I have expressed here already a few times in other threads) is quite different and I don't really think you need a "collapse at a distance" at all - I'm in fact quite a fan of the decoherence program. You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation. This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program). But the point was not MY view :-)

Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions, which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2." Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states which superpose all photon number states using for coefficient magnitudes the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use as the input the sharp number state, thus they'll show a sharp number state in output).

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers. A single multiphoton capable detector with no dead time would show this same distribution of k for a given average rate n.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%. Thus, the probability of more than 1 detector triggering is 26%, which doesn't look very "exclusive".

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Thus the probability of multiple ND triggers is now only 1%, i.e. we have 9 times more single triggers than the multiple triggers, while before, for n=1, we had only 37/26=1.4 times more single triggers than multiple triggers. It appears we had greatly improved the "exclusivity". By lowering n further we can make this ratio as large as we wish, thus the counts will appear as "exclusive" as we wish. But does this kind of low intensity exclusivity, which is what your argument keeps returning to, indicate in any way a collapse of the wave packet fragments on all ND-1 detectors as soon as the 1 detector triggers?

Of course not. Let's look what happens under assumption that each of ND detectors triggers via its own Poissonian entirely independently of others. Since the "photon 2" beam splits its intensity into ND equal parts, the Poissonian for each of ND detectors will be P(m,k), where m=n/ND is the average trigger rate of each of ND detectors. Let's denote p0=P(m,k=0) the probability that one (specific) detector will not trigger. Thus p0=exp(-m). The probability that this particular detector will trigger at all (indicating 1 or more "photons") is then p1=1-p0=1-exp(-m). In your high "exclusivity" (i.e. low intensity) limit n->0, we will have m<<1 and p0~1-m, p1~m.

The probability that none of ND's will trigger, call it D(0), is thus D(0)=p0^ND=exp(-m*ND)=exp(-n), which is, as expected, the same as no-trigger probability of the single perfect multiphoton (square law Poissonian) detector capturing all of the "photon 2". Since we can select k detectors in C[ND,k] ways (C[] is a binomial coefficient), the probability of exactly k detectors triggering is D(k)=p1^k*p0^(ND-k)*C[ND,k], which is a binomial distribution with average number of triggers p1*ND. In the low intensity limit (n->0) and for large ND (corresponding to a perfect multiphoton resolution), D(k) becomes (using Stirling approximation and using p1*ND~m*ND=n) precisely the Poisson distribution P(n,k). Therefore, this low intensity exclusivity which you keep bringing up is trivial since it is precisely what the independent triggers of each detector predict no matter how you divide and combine the detectors (it is, after all, the basic property of the Poissonian distribution).

You're perfectly right, and I acknowledged that already a while ago when I said that there's indeed no way to distinguish "single photon" events that way. What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time, and with perfect detectors WILL be in one place at a time. It would correspond to the 2 511KeV photons in positron annihilation, for example. I admit that my views are maybe a bit naive for opticians: my background is in particle physics, and currently I work with thermal neutrons, which come nicely in low-intensity Poissonian streams after interference all the way down the detection spot. So clicks are marbles :-)) There are of course differences with optics: First of all, out of a reactor rarely come correlated neutron pairs :-), but on the other hand, I have all the interference stuff (you can have amazing correlation lengths with neutrons!), and the one-click-one-particle detection (with 98% efficiency or more if you want), background ~ 1 click per hour.


This does not mean that the nature somehow conspires to thwart non-locality through some obscure loopholes. It simply means that a particular experimental design has overlooked some effect and that it is more likely that the experiment designer will overlook more obscure effects.

In a fully objective scientific system one wouldn't have to bother refuting anything about any of these flawed experiments since their data hasn't objectively shown anything non-local. But in the present nature-is-non-local zeitgeist, a mere wishful excuse by an experimenter that the failure is a minor technical glitch which will be remedied by future technology, becomes, by the time it trickles down to the popular and pedagogical levels, an experimentally established non-locality.

I agree with you here concerning the scientific attitude to adopt, and apart from a stimulus for learning more quantum optics, it is the main motivation to continue this discussion :-) To me, these experiments don't exclude anything, but they confirm beautifully the quantum predictions. So it is very well possible that completely other theories will have similar predictions, it is "sufficient" to work them out. However, if I were to advise a student (but I won't because it is not my job) on whether to take that path or not, I'd strongly advise against it, because there's so much work to do first: you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great. On the other hand, we have a beautifully working theory which explains most if not all of it. So it is probably more fruitful to go further on the successful path than to err "where no man has gone before". On the other hand, for a retired professor, why not play with these things :-) I myself wouldn't dare, for the moment: I hope to make more "standard" contributions and I'm perfectly happy with quantum theory as it stands now - even though I think it isn't the last word, and we will have another theory, 500 years from now. But I can make sense of it, it works great, and that's what matters. Which doesn't mean that I don't like challenges like you're proposing :-)


cheers,
Patrick.
 
Last edited:
  • #142
I might have misunderstood an argument you gave. After reading your text twice, I think we're not agreeing on something.

nightlight said:
Your subsequent attempt to illustrate this "exclusion" unfortunately mixes up the entirely trivial forms of exclusions[/color], which are perfectly consistent with the model of uninterrupted local dynamics. To clarify the main mixup (and assuming no mixups regarding the entirely classical correlation aspect due to any amplitude modulation), let's look at the Poissonian square law detectors (which apply to the energies of photons relevant here, i.e. those for which there are nearly perfect coherent splitters).

Nope, I am assuming 100% efficient detectors. I don't really know what you mean with "Poissonian square law detectors" (I guess you mean some kind of Bolometers which give a Poissonian click rate as a function of incident energy). I'm within the framework of standard quantum theory and I assume "quantum theory" detectors. You can claim they don't exist, but that doesn't matter, I'm talking about a QUANTUM THEORY prediction. This prediction can be adapted with finite quantum efficiency, and assumes fair sampling. Again, I'm not talking about what really happens or not, I'm talking about standard quantum theory predictions, whether correct or not.

Suppose we have a PDC source and we use "photon 1" of the pair as a reference to define our "try" so that whenever detector D1 triggers we have a time window T in which we enable detection of "photon 2."

Well, the quantum prediction with 100% efficient detectors is 100% correlation, because there are EXACTLY as many photons, at the same moment (at least on the time scale of the window), in both beams. The photons can really be seen as marbles, in the same way as the two 511 keV photons from a positron disintegration can be seen, pairwise, in a tracking detector, or the tritium and proton disintegration components can be seen when a He3 nucleus interacts with a neutron.

Keep also in mind that the laser pump which feeds the non-linear crystal is a Poissonian source (it produces coherent states which superpose all photon number states using for coefficient magnitudes the square roots of Poissonian probabilities), thus neither the input nor the output states are sharp photon number states (pedagogical toy derivations might use as the input the sharp number state, thus they'll show a sharp number state in output).

Yes, but this is the only Poissonian source. If we turn down the production rate of pairs (which is itself a Poissonian source with a much lower rate than the incident beam, which has to be rather intense), the 2-photon states also come in a Poissonian superposition (namely the states |0,0>, |1,1>, |2,2>, ..., where |n,m> indicates n blue and m red photons) but with coefficients drawn from a much lower-rate Poissonian distribution, which means that essentially only |0,0> and |1,1> contribute. So one can always take the low-intensity limit and work with a single state.
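
A back-of-the-envelope check of that statement (lam is just an assumed low mean pair number per coincidence window): the |2,2> component is suppressed relative to |1,1> by a factor lam/2.

```python
from math import exp, factorial

lam = 0.01   # assumed low mean pair number per window (illustrative only)
P = lambda n: lam**n * exp(-lam) / factorial(n)   # Poissonian weight of the |n,n> term
print("P(|1,1>):", P(1), " P(|2,2>):", P(2), " ratio:", P(2) / P(1))
```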

To avoid the issues of detector dead time or multiphoton capabilities, imagine we use a perfect coherent splitter, split the beam, then we add in each path another perfect splitter, and so on, for L levels of splitting, and place ND=2^L detectors in the final layer of subpaths. The probability of k detectors (Poissonian, square law detectors relevant here) triggering in a given try is P(n,k)=n^k exp(-n)/k! where n is the average number of triggers.

This is correct, for a single-photon coherent beam, if we take A RANDOMLY SELECTED TIME INTERVAL. It is just a dead time calculation, in fact.

Let's say we tune down the input power (or sensitivity of the detectors) to get an average number of "photon 2" detectors triggering as n=1. Thus the probability of exactly 1 among the ND detectors triggering is P(n=1,k=1)=1/e=37%. Probability of no ND trigger is P(n=1,k=0)=1/e=37%.

No, this is not correct, because there is a 100% correlation between the photon-2 trigger and the sum of all the photon-1 clicks. THE TIME INTERVAL IS NOT RANDOM ! You will have AT LEAST 1 click in one of the detectors (and maybe more, if we hit a |2,2> state).
So you have to scale up the above Poissonian probabilities with a factor e.

Your suggestion was to lower (e.g. via adjustments of detectors thresholds or by lowering the input intensity) the average n to a value much smaller than 1. So, let's look at n=0.1, i.e. on average we get .1 ND triggers for each trigger on the reference "photon 1" detector. The probability of a single ND trigger is P(n=0.1,k=1)=9% and of no trigger P(n=0.1,k=0)=90%.

Again, because of the trigger on detector 2, we do not have a random time interval, and we have to scale up the probabilities by a factor 10. So the probability of seeing a single ND trigger is 90%, and the probability of having more than 1 is 10%. The case of no triggers is excluded by the perfect correlation.
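
One way to make that renormalization explicit (a sketch under the weakest reading of the herald, namely that it only guarantees at least one photon in beam 2, so the unconditional Poissonian is renormalized on k >= 1; in the sharp |1,1> reading with perfect detectors the heralded count is exactly one, so the exact figures depend on how the conditioning is modeled):

```python
from math import exp, factorial

def P(n, k):
    # unconditional Poissonian P(n,k) = n^k exp(-n)/k!
    return n**k * exp(-n) / factorial(k)

for n in (1.0, 0.1):
    norm = 1 - P(n, 0)                      # probability of at least one photon
    p1_given = P(n, 1) / norm               # P(k=1 | k>=1)
    print(f"n={n}:  P(k=1|k>=1) = {p1_given:.3f},  P(k>=2|k>=1) = {1 - p1_given:.3f}")
```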


cheers,
Patrick.
 
  • #143
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure so they can interfere their results? I don't think the universe is being run by some kind of magnified Stalin, lording over creation and every now and then erasing fallen comrades from the photos to make a different, more consistent history.

This means that macroscopic systems can be in a superposition, but that's no problem, just continuing the unitary evolution (this is the essence of the decoherence program).

Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

The formal decoherence schemes have been around since at least early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition. And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

It is just another try to come up with a new and an improved mind-numbing verbiage, more mesmerising and slippery than the old one which got worn out, to uphold the illusion of being in possession of a coherent theory, for just a bit longer until there is something truly coherent to take its place.

I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and "Friedrichs model" ... to any poor soul I could corner, gave a seminar, then a lecture to undergraduates as their QM TA (I hope it didn't take),...

What I said was that such a single-photon event (which is one of a pair of photons), GIVEN A TRIGGER WITH ITS CORRELATED TWIN, will give you an indication of such an exclusivity in the limit of low intensities. It doesn't indicate any non-locality or whatever, but indicates the particle-like nature of photons, which is a first step, in that the marble can only be in one place at a time,

You seem to be talking about a time modulation of the Poissonian P(n,k), where n=n(t). That does correlate the 1 and 2 trigger rates, but that kind of exclusivity is equally representative of fields and particles. In the context of QM/QED, where you already have a complete dynamics for the fields, such informal duality violates Occam's razor { classical particles can be simulated in all regards by classical fields (and vice versa); it is the dual QM kind that lacks coherence }.

you have to show agreement on SUCH A HUGE AMOUNT OF DATA that the work is enormous, and the probability of failure rather great.

The explorers don't have to build roads, bridges and cities, they just discover new lands and if these are worthy, the rest will happen without any of their doing.

On the other hand, we have a beautifully working theory which explains most if not all of it.

If you read the Jaynes passage quoted a few messages back (or his other papers on the theme, or Barut's views, or Einstein's and Schroedinger's, even Dirac's, and those of some contemporary greats as well), "beautiful" isn't an attribute that goes anywhere in the vicinity of QED, in any role and under any excuse. Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument). It is not a kind of power directed forward, at new and unseen phenomena, the way Newton's or Maxwell's theories were. Rather, it is more like a scheme for post hoc rationalizations of whatever came along from the experimenters (as Jaynes put it -- the Ptolemaic epicycles of our age).

On the other hand, for a retired professor, why not play with these things :-)

I've met a few like that, too. There are other ways, though. Many physicists in the USA ended up, after graduate school or maybe one postdoc, on Wall Street or in the computer industry, or created their own companies (especially in software). They don't live by the publish-or-perish dictum and don't have to compromise any ideas or research paths for academic fashions and politicking. While they have less time, they have more creative freedom. If I were to bet, I'd say that the physics of the future will come precisely from these folks (e.g. Wolfram).

even though I think it isn't the last word, and we will have another theory, 500 years from now.

I'd say it's around the corner. Who would go into physics if he believed otherwise? (Isn't there a little Einstein hiding in each of us?)
 
  • #144
nightlight said:
vanesch You just get interference of measurement results when they are compared by the single observer who gets a hold on both measurements in order to calculate the correlation.

And how did the world run and pick what to do before there was anyone to measure so they can interfere their results?

It just continued in unitary evolution. It started collapsing when I was born, it is collapsing all the time now that I'm living, and it will continue to run unitarily after I die. If ever I reincarnate it will start collapsing again. Nobody else can do the collapse but me, and nobody else is observing but me. How about that ? It is a view of the universe which is completely in sync with my egocentric attitudes. I never even dreamed of physics giving me a reason to be that way :smile:

You should look a bit more at the recent decoherence work by Zeh and Joos, for instance. Their work is quite impressive. I think you're mixing up the relative-state view (which dates mostly from the sixties) with their work, which dates from the nineties.

cheers,
Patrick.
 
  • #145
nightlight said:
You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

This is not true. If you take the probability of a series of 1000 flips together, as one observation, in a decoherence-like way, then this probability is exactly equal to the probability you would get classically by considering one flip at a time and building up series of 1000: the series with about 30% heads in them will have a relatively high probability compared to series in which, say, you find 45% heads. It makes no difference whether I personally observe each flip or whether I just look at the record of the result of 1000 flips; the results are indistinguishable.
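
To put trivial numbers on it (a toy calculation of my own, using the 30% per-flip "heads" probability from your example): the probability of any particular 1000-flip record is the same product of per-flip factors whether you assign it flip by flip or to the whole record at once, and records with around 300 heads carry overwhelmingly more weight than records with 450 heads.

[code]
from math import comb

n, p = 1000, 0.3   # 1000 flips, 30% chance of "heads" per flip (illustrative)

def p_record(k):
    """Probability of one particular record containing k heads: the same
    product of per-flip factors, whether applied flip by flip or to the
    record as a whole."""
    return p**k * (1 - p)**(n - k)

def p_k_heads(k):
    """Probability of k heads in total, in any order."""
    return comb(n, k) * p_record(k)

print(p_k_heads(300))   # ~0.028 : records near 30% heads dominate
print(p_k_heads(450))   # ~6e-24 : 45% heads is astronomically unlikely
[/code]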

cheers,
Patrick.
 
  • #146
nightlight said:
I am ashamed to admit it, but I was once taken in, for a couple of years or so, by one of these "decoherence" verbal shell games, Prigogine's version, and was blabbing senselessly about "superoperators" and "subdynamics" and "dissipative systems" and the "Friedrichs model" ... to any poor soul I could corner, gave a seminar on it, then a lecture to undergraduates as their QM TA (I hope it didn't take),...
...
I'd say it's around the corner. Who would go into physics if he believed otherwise? (Isn't there a little Einstein hiding in each of us?)

Like you're now babbling senselessly about Santos' and Barut's views? :-p :-p :-p
OK, that one was easy, I admit. No, these discussions are really fun, so I should refrain from provoking name-calling games :rolleyes:
As I said, it makes me learn a lot of quantum optics, and you seem to know quite well what you're talking about.

cheers,
Patrick.
 
  • #147
nightlight said:
Its chief power is in being able to wrap tightly around any experimental numbers which come along, thanks to a rubbery scheme which can as happily "explain" a phenomenon today as it will its exact opposite tomorrow (see Jaynes for the full argument).

Yes, I read that, but I can't agree with it. When you read Weinberg, QFT is derived from three principles: special relativity, the superposition principle and the cluster decomposition principle. This completely fixes the QFT framework. The only things you plug in by hand are the gauge group and its representation (U(1)xSU(2)xSU(3)) and a Higgs potential, and out pops the standard model, complete with all its fields and particles, from classical EM through beta decay to nuclear structure (true, the QCD calculations in the low-energy range are still messy; but lattice QCD is starting to give results).
I have to say that I find it impressive that from a handful of parameters you can build up all of known physics (except gravity, of course). That doesn't exclude the possibility that other ways exist, but you are quickly in awe of the monumental work such a task would entail.

cheers,
patrick.
 
  • #148
vanesch said:
Again, because of the trigger on detector 2, we do not have a random time interval, and we have to scale up the probabilities by a factor 10. So the probability of seeing a single ND trigger is 90%, and the probability of having more than 1 is 10%. The case of no triggers is excluded by the perfect correlation.

I'd like to point out that a similar reasoning (different from the "Poissonian square law" radiation detectors) holds even for rather low photon detection efficiencies. If the efficiencies are, say, 20% (we take them all equal), then in our above scheme we will have a probability of exactly one detector triggering equal to 0.2 x 0.9 = 18%, and a probability of exactly two detectors triggering equal to 0.2 x 0.2 x 0.1 = 0.4%, etc...
So indeed, there is some "Poisson-like" distribution due to the finite efficiencies, but it is a FIXED suppression of coincidences by factors of 0.2, 0.04...
At very low intensities, the accidental Poisson coincidences should be much lower than these fixed suppressions (which are the quantum-theory way of saying "fair sampling"), so we'd still be able to discriminate between the "anticoincidences" due to the fact that each time there is only one marble in the pipe, and the anticoincidences due to lack of efficiency that would arise if each detector were generating its Poisson series on its own.
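
If you want to check that arithmetic, here is a little Monte Carlo toy of my own, under the simplifying assumptions that the heralded pulse contains one photon with probability 0.9 and two with probability 0.1, that the two photons of a pair always reach different detectors, and that each detector fires with efficiency 0.2:

[code]
import random
random.seed(1)

P_TWO = 0.1    # probability that a heralded pulse contains two photons
ETA   = 0.2    # detection efficiency of each detector

def clicks_per_heralded_event():
    n_photons = 2 if random.random() < P_TWO else 1
    # simplifying assumption: the two photons of a pair land on different
    # detectors, each of which fires independently with efficiency ETA
    return sum(1 for _ in range(n_photons) if random.random() < ETA)

N = 1_000_000
tally = [0, 0, 0]
for _ in range(N):
    tally[clicks_per_heralded_event()] += 1

print("P(exactly 1 click):", tally[1] / N)   # ~0.21
print("P(2 clicks)       :", tally[2] / N)   # ~0.2 * 0.2 * 0.1 = 0.004
[/code]
(The single-click rate comes out a bit above 18% because pairs in which only one of the two photons is detected also contribute; the two-click rate sits at the fixed 0.4% level, far above the accidental coincidences at low enough intensity.)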

A way to picture this is by using, instead of beam splitters, a setup which causes diffraction of the second photon, and a position-sensitive photomultiplier (which is just an array of independent photomultipliers) looking at the diffraction picture.
You will slowly build up the diffraction picture from the clicks synchronized with detector 1 (which looks at the first photon of the pair); of course, each time detector 1 clicks, you will only have a 0.2 chance of finding a click on the position-sensitive PM. If the beam intensity is low enough, you will NOT find a second click on that PM of the order of 0.04 of the time. This is something that can only be achieved with particles, and it is a prediction of QM.

cheers,
Patrick.
 
  • #149
vanesch You should look a bit more at the recent decoherence work by Zeh and Joos, for instance. Their work is quite impressive. I think you're mixing up the relative-state view (which dates mostly from the sixties) with their work, which dates from the nineties.

Zeh was already a QM guru pontificating on QM measurement when my QM professor was a student, and he still appears to be in a superposition of views. I looked up some of his & Joos' recent preprints on the theme. Irreversible macroscopic apparatus, 'coherence destroyed very rapidly', ... basically the same old stuff. Still the same problem as with the original Daneri, Loinger and Prosperi macroscopic decoherence scheme of 1962.

It is a simple question. You have |Psi> = a1 |A1> + a2 |A2> = b1 |B1> + b2 |B2>. These are two equivalent orthogonal expansions of the state Psi, for two observables [A] and [B], of some system (where the system may be a single particle, an apparatus with a particle, the rest of the building with the apparatus and the particle, ...). On what basis does one declare that we have the value A1 of [A] for a given individual instance (which you need in order to even talk about the statistics of a sequence of such values)?
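
Here is the question in the smallest possible example (just a numerical restatement of the line above, nothing more): the very same two-component state admits two equally valid orthogonal expansions, with different "outcome" labels and different weights in each.

[code]
import numpy as np

# one and the same |Psi>, expanded in two different orthonormal bases
A1, A2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])        # eigenbasis of [A]
B1, B2 = (A1 + A2) / np.sqrt(2), (A1 - A2) / np.sqrt(2)    # eigenbasis of [B]

psi = 0.8 * A1 + 0.6 * A2          # a1 = 0.8, a2 = 0.6  (|a1|^2 + |a2|^2 = 1)

b1, b2 = B1 @ psi, B2 @ psi        # coefficients of the same state in the B basis
print(b1, b2)                                 # ~0.9899 and ~0.1414
print(np.allclose(psi, b1 * B1 + b2 * B2))    # True: |Psi> = b1|B1> + b2|B2> as well

# The state by itself does not say whether "the result" is A1 with weight 0.64
# or B1 with weight ~0.98 -- that is precisely the preferred-basis question.
print(abs(0.8)**2, abs(b1)**2)
[/code]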

At some strategically placed point within their mind-numbing verbiage, these "decoherence" folks will start pretending that it was already established that a1|A1>+a2|A2> is the "true" expansion and A1 is its "true" result, in the sense that allows them to talk about statistical properties of a sequence of outcomes at all, i.e. in exactly the same sense in which, in order to say that the word "baloney" has 7 letters, you have (tacitly at least) assumed that the word has a first letter, a second letter, ... Yet these slippery folks will fight tooth and nail against such a conclusion, or even against the assumption that there is an individual word at all; yet this word-non-word still somehow manages to have exactly seven letters-non-letters. (The only innovation worth noting in the "new and improved" version is that they start by telling you right up front that they're not going to do, no sir, not ever, absolutely never, that kind of slippery maneuver.)

No thanks. Can't buy any of it. Especially considering that the only rationale precluding one from plainly saying that an individual system has definite properties all along is Bell's "QM prediction" (which in turn cannot be deduced without assuming the non-dynamical collapse/projection postulate -- the very collapse postulate which is needed to solve the "measurement" problem, i.e. the problem of the absence of definite properties in a superposition, thus a problem which exists solely because of Bell's "QM prediction").

If you drop the non-dynamical collapse postulate, you don't have Bell's "prediction" (you would still have the genuine predictions, of the kind that actually predict the data obtained, warts and all). There is no other result at that point preventing you from interpreting the wave function as a real matter field, evolving purely, without any interruptions, according to the dynamical equations (which happen to be nonlinear in the general coupled case), and thus representing the local "hidden" variables of the system.

The Born rule would be reshuffled from its pedestal as a postulate to a footnote in scattering theory, which is the way and the place it got into QM. It is an approximate rule of thumb, its precise operational meaning depending ultimately on the apparatus design and the measuring and counting rules (as actually happens in any application, e.g. with the operational interpretations of the Glauber P vs Wigner vs Husimi phase-space functions), just as it would have been if someone had introduced it into the classical EM theory of light scattering in the 19th century.

The 2nd quantization is then only an approximation scheme for these coupled matter-EM fields, a linearization algorithm (similar to Kowalski's and virtually identical to the QFT algorithms used in solid state and other branches of physics), adding no more new physics to the coupled nonlinear fields than, say, the Runge-Kutta numerical algorithm adds to the Navier-Stokes equations of fluid dynamics.

Interestingly, in one of his superposed eigenviews, master guru Zeh himself insists that the wave function is a regular matter field and definitely not a probability "amplitude" -- see his paper "There is no "first" quantization", where he characterizes the "1st quantization" as merely a transition from a particle model to a field model (the way I did several times in this thread; which is of course how Schroedinger, Barut, Jaynes, Marshall & Santos, and others have viewed it). Unfortunately, somewhere further down the article, his |Everett> state superposes in, nudging very gently at first, but eventually overtaking his |Schroedinger> state. I hope he makes up his mind in the next forty years.
 
  • #150
nightlight said:
Unless you suspend the fully local dynamical evolution (ignoring the non-relativistic approximate non-locality of Coulomb potential and such), you can't reach, at least not coherently, a conclusion of non-locality (a no-go for purely local dynamics).

But that is exactly how locality is preserved in my way of viewing things! I pretend that there is NO collapse at a distance, but that the record of remote measurements remains in a superposition until I look at the result and compare it with the other record (also in a superposition). It is only the power of my mind that forces a LOCAL collapse (beware: it is sufficient that I look at you and you collapse :devil:)

The formal decoherence schemes have been around since at least the early 1960s. Without adding ad hoc, vague (no less so than the collapse) super-selection rules to pick a preferred basis, they still have no way of making a unitary evolution pick a particular result out of a superposition.

I know, but that is absolutely not the issue (and often people who do not know exactly what decoherence means make this statement). Zeh himself is very keen on pointing out that decoherence by itself doesn't solve the measurement problem! It only explains why - after a measurement - everything looks as if it were classical, by showing which is the PREFERRED BASIS to work in. Pure relative-state fans (Many Worlds fans, of which I was one until a few months ago) think that somehow they will, one day, get around this issue. I think it won't happen if you do not add in something else, and the something else I add in is that it is my consciousness that applies the Born rule. In fact this comes very close to the "many minds" interpretation, except that I prefer the single-mind interpretation :biggrin: exactly in order to be able to preserve locality. After all, I'm not aware of any other awareness except the one I'm aware of, namely mine :redface:.



And without the pick for each instance, it makes no sense to talk of statistics of many such picks. You can't say you will get 30% heads in 1000 flips, while insisting you don't have to get any specific result in the individual flips (which is what these "decoherence" schemes, old and new, claim to be able somehow to achieve).

I think you're misreading the decoherence program, which you seem to confuse with hardline many-worlders. The decoherence program tells you that whether you consider the collapse at each individual flip, or only at the level of the 1000-flip series, the result will be the same, because the non-diagonal terms in the density matrix vanish at a monstrously fast rate for any macroscopic system (UNLESS, of course, WE ARE DEALING WITH EPR-LIKE SITUATIONS!). But in order to even be able to talk about a density matrix, you need to assume the Born rule (in its modern version). So the knowledgeable proponents of decoherence are well aware that they'll never DERIVE the Born rule that way, because they USE it. They just show the equivalence between two different ways of using it.
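
A toy illustration of that equivalence, with made-up numbers of my own: take a "flip" with Born weights 0.3/0.7, wipe out the off-diagonal terms of the density matrix to mimic the decoherence, and the diagonal -- the statistics you would collect flip by flip -- is untouched; only interference-type observables change.

[code]
import numpy as np

# density matrix of a single "flip": |psi> = sqrt(0.3)|heads> + sqrt(0.7)|tails>
psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])
rho = np.outer(psi, psi)                 # pure state: off-diagonal terms present

# decoherence: coupling to the environment suppresses the off-diagonal terms
# (here simply set to zero, the extreme of "monstrously fast" damping)
rho_dec = np.diag(np.diag(rho))

print(rho)       # [[0.30, 0.458], [0.458, 0.70]]
print(rho_dec)   # [[0.30, 0.0  ], [0.0,   0.70]]

# Born probabilities in the pointer basis are identical before and after:
print(np.diag(rho), np.diag(rho_dec))    # both give 0.3 and 0.7

# what changes is an interference-type observable (off-diagonal), e.g. sigma_x:
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.trace(rho @ sx), np.trace(rho_dec @ sx))   # ~0.917 vs 0.0
[/code]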


cheers,
Patrick.
 