What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment
  • #151
nightlight said:
It is a simple question. You have |Psi> = a1 |A1> + a2 |A2> = b1 |B1> + b2 |B2>. These are two equivalent orthogonal expansions of the state Psi, for two observables [A] and [B], of some system (where the system may be a single particle, an apparatus with a particle, the rest of the building with the apparatus and the particle, ...). On what basis does one declare that we have value A1 of [A] for a given individual instance (you need this to be able even to talk about statistics of the sequence of such values)?


What the decoherence program indicates is that once you're macroscopic enough, certain *coherent* states survive (by factorization) the coupling with their environment, while others get hopelessly mixed up and cannot factorize out. It is the interaction Hamiltonian of the system with the environment that determines this set of preferred (sometimes called coherent) states. This is the preferred-basis problem, which is thereby solved, and this is the essential result of decoherence. But again, a common misconception is that decoherence deduces the Born rule and the projection, which is not the case.

A simple example is the position of a charged particle at macroscopic distances. A superposition of macroscopic position states will entangle very quickly (through the Coulomb interaction) with its environment, so states with macroscopically distinguishable positions for a charged particle will not be able to get factorized out. However, a localized position state (even though it doesn't have to be a Dirac pulse) will not be affected by this interaction.
So the "position" basis is preferred because it factors out.

There is no other result at that point preventing you from interpreting the wave function as a real matter field, evolving purely, without any interruptions, according to the dynamical equations (which happen to be nonlinear in the general coupled case) and representing thus the local "hidden" variables of the system.

Well, you still have the small problem of how a real matter field (take neutrons) always gives point-like observations. How do you explain the 100% efficient (or close) detection of spot-like neutron interactions from a neutron diffraction pattern that can measure 4 meters across (I'm working right now on such a project) ? And here, the flux is REALLY LOW, we're often at a count rate of a few counts per second, with a time resolution of 1 microsecond, a background of 1 per day, and a detection efficiency of 95%.

The 2nd quantization is then only an approximation scheme for these coupled matter-EM fields, a linearization algorithm (similar to the Kowalski's and virtually identical to the QFT algorithms used in solid state and other branches of physics), adding no more new physics to the coupled nonlinear fields than, say, the Runge-Kutta numeric algorithm adds to the fluid dynamics Navier-Stokes equations.

I already said this a few times, but you ignored it. There is a big difference between 2nd quantization and not. It is given by the Feynman path integral. If you do not consider second quantization, you take the integral only over the *classical* solution (which is the solution of the non-linear field equations you are always talking about). If you do take into account second quantization, you INTEGRATE OVER ALL THE POSSIBLE NON-SOLUTIONS, with a weight factor which is given by exp(i (S-S0)/h-bar), with S the action calculated for a particular non-solution, and S0 the action of your solution (the action from the Lagrangian that gives your non-linear coupled EM and Dirac field equations). So this must make a difference.

Interestingly, in one of his superposed eigenviews, master guru Zeh himself insists that the wave function is a regular matter field and definitely not a probability "amplitude" -- see his paper "There is no "first" quantization", where he characterizes the "1st quantization" as merely a transition from a particle model to a field model (the way I did several times in this thread; which is of course how Schroedinger, Barut, Jaynes, Marshall & Santos, and others have viewed it).

But OF COURSE. This is the way quantum field theory is done! The old quantum fields are replaced by REAL MATTER FIELDS, and then we apply quantization (which is called second quantization, but is in fact the first time we introduce quantum theory). So there's nothing exceptional in Zeh's statements. Any modern quantum field theory book treats the solutions of the Dirac equation on the same footing as the classical EM field. What is called "classical" in a quantum field book, is what you are proposing: namely the solution to the nonlinearly coupled Dirac and EM field equations

But you then need to QUANTIZE those fields in order to extract the appearance of particles. And yes, if you take this in the non-relativistic limit, you find back the Schroedinger picture (also with multiple particle superpositions and all that)... AFTER you introduced "second" quantization.

cheers,
Patrick.
 
  • #152
vanesch This is not true. If you take the probability of a series of 1000 flips together as one observation

And what if you don't take it just so? What happens in one flip? What dynamical equation is valid for that one instance? You can't hold that there is an instance of a thousand flips with definite statistics, or anything definite at all, while insisting there is no single flip within that instance with a definite result.

What if I define a Kiloflip to consist of a sequence of a thousand regular flips (doing exactly as before), which I call one instance of measurement of a variable that can take values 0 to 1000. And then I do 1000 Kiloflips to obtain statistics of values. Does each Kiloflip have a definite result? Even an approximate value, say around 300? Or was that good only before I called what I was doing a Kiloflip, and now, under the name Kiloflip, you have to look at "a series of 1000 Kiloflips" as one observation to be able to say...

It is plain nonsense.

I was simply asking what is the dynamics (on the splitter setup) of wave fragment B in one instance, as it reaches the detector (viewed as a physical system) and interacts with it. If the joint dynamics proceeds uninterrupted, it yields either a trigger or no trigger based solely on the precise local state of all the fields, regardless of what goes on in the interaction between the physical system Detector-A and the packet fragment A.

To put it even more clearly, imagine we are not "measuring" but simply want to compute the dynamical evolution of a combined system, A, B, Splitter, DA, DB, in the manner of a fluid dynamics simulation. We have a system of PDEs, we put in some initial and boundary conditions and run a program to compute what is going on in one instance, under the assumed I&B conditions. In each run, the program follows the combined packet (with whatever time-slicing we set it to do), as it splits into A and B, follows the fragments as they enter the detectors, gets the first ionization, then the cascade, if these happen to occur in this run for the given I&B conditions (of the full system). As it makes multiple runs, the program also accumulates the statistics of coincidences.

Clearly, the A-B statistics computed this way will always be classical. The sharpest it can be, for any given intensity with expectation value n of photo-electrons emitted per try (on DA or DB), is the Poissonian distribution, which has variance (sigma squared) equal to n (if n varies from try to try, you get a compound Poissonian). This is also precisely the prediction of both the semiclassical and the QED model of this process.

The point I am making is that if you claim that you are not going to suspend the program at any stage, but let it go through all the steps including counting for any desired number of runs (to collect statistics), you cannot get anything but the classical correlation in the counts.
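A minimal numerical sketch of the statistics such a program would accumulate (this is my own illustration, not the PDE solver itself; the per-window means n_A, n_B below are arbitrary illustrative values, and the Poisson draws stand in for the full field dynamics of each run):

Code:
import numpy as np

rng = np.random.default_rng(0)
runs = 200_000                 # number of simulated tries
n_A, n_B = 0.5, 0.5            # assumed mean photo-electrons per window on DA and DB

k_A = rng.poisson(n_A, runs)   # photo-electron count on DA in each run
k_B = rng.poisson(n_B, runs)   # photo-electron count on DB in each run

p_A = np.mean(k_A > 0)                  # singles probability on DA
p_B = np.mean(k_B > 0)                  # singles probability on DB
p_AB = np.mean((k_A > 0) & (k_B > 0))   # coincidence probability
print(f"P(A)={p_A:.4f}  P(B)={p_B:.4f}  P(A)P(B)={p_A*p_B:.4f}  P(AB)={p_AB:.4f}")
# P(AB) ~ P(A)P(B): the accumulated coincidences are purely classical, no anticorrelation.

# If the per-try mean fluctuates, the counts become compound Poissonian: variance > mean.
n_fluct = rng.exponential(n_A, runs)
k_fluct = rng.poisson(n_fluct)
print("fixed n :", k_A.mean(), k_A.var())          # variance ~ mean (Poissonian)
print("mixed n :", k_fluct.mean(), k_fluct.var())  # variance > mean (super-Poissonian)

The coincidence rate simply tracks the product of the singles rates, which is the classical Poissonian statistics described above.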

Explain how you can claim that you will let the program run uninterrupted and that it will show other than classical Poissonian (at best) statistics. We're not talking "measurement" or "postulates" but simply about the computation of the PDE problem, so don't introduce non sequiturs such as "perfect detector", or that to understand it I now need to imagine 1000 computers together as one computer, ...

in a decoherence-like way

Oh, yeah, that's it. Silly me, how come I didn't think of so simple a solution. I see.

It depends whether I personally observe each flip or whether I just look at the record of the result of 1000 flips. But the results are indistinguishable.

I see, as a fallback, if the "decoherence-like way" fails to mesmerize, then we're all constructs of your mind, and your mind has constructed all these constructs to contain a belief construct of every other construct as just a construct of the last construct's construct of mind.
 
  • #153
vanesch What the decoherence program indicates is that once you're macroscopic enough, certain *coherent* states survive (by factorization) the coupling with their environment,

The "enviroment" is assumed in |Psi>. A subsystem can trivially follow a non-unitary evolution in the subsystem's factor during the interaction. If our |Psi> includes all that you ever plan to add in, including yourself (whose mind has apparently constructed all of us anyway; do always argue as much with the other constructs of your mind?), you're back exactly where you started -- two different decompositions of |Psi> and you need to specify which is the "true" one and how would a precise postulate be formulated out of such criteria (saying which variables/observables, when and how, gain or lose definitive values, so that you can make sense when talking about a sequence of such values, statistics and such).

It is the interaction hamiltonian of the system with the environment that determines this set of preferred (sometimes called coherent) states. This is the preferred basis problem which is then solved, and is the essential result of decoherence.

The combined hamiltonian basis was claimed as preferred in older approaches (as well as other variants, such as integrals of motion, or Prigogine's "subdynamics" which represents a closed classical-like reduced/effective dynamics of the macroscopic apparatus whose variables can have definite values, a kind of an emergent property). Any basis you claim as preferred will have the same problems with the non-commuting observables if you decide to allow definite values for the preferred basis. It would be as if you declared the Sz observable of particle A and/or B in EPR-Bell model as the "preferred" one that has a definite value and claimed that somehow solves the problem. The problems always re-emerge once you include the stuff you kept outside to help decohere the subsystem. The outer layer never decoheres.

Well, you still have the small problem of how a real matter field (take neutrons) always gives point-like observations. How do you explain the 100% efficient (or close) detection of spot-like neutron interactions from a neutron diffraction pattern that can measure 4 meters across (I'm working right now on such a project) ? And here, the flux is REALLY LOW, we're often at a count rate of a few counts per second, with a time resolution of 1 microsecond, a background of 1 per day, and a detection efficiency of 95%.

The localization problem for matter fields hasn't been solved (even though there are heuristic models indicating some plausible mechanisms, e.g. Jaynes and Barut had toy models of this kind). If your counts are Poissonian (or super-Poissonian) for the buildup of the high visibility 4m large diffraction pattern, there should be no conceptual problem in conceiving a purely local self-focusing or some kind of topological/structural unravelling mechanism which could at least superficially replicate such point-like detections alongside the diffraction. After all, the standard theory doesn't have an explanation here other than saying that is how it is.

There is a big difference between 2nd quantization and not. It is given by the Feynman path integral. If you do not consider second quantization, you take the integral only over the *classical* solution (which is the solution of the non-linear field equations you are always talking about). If you do take into account second quantization, you INTEGRATE OVER ALL THE POSSIBLE NON-SOLUTIONS, with a weight factor which is given by exp(i (S-S0)/h-bar), with S the action calculated for a particular non-solution, and S0 the action of your solution (the action from the Lagrangian that gives your non-linear coupled EM and Dirac field equations). So this must make a difference.

You're not using the full system nonlinear dynamics (a la Barut's self-field) for the QED in the path integral representation, i.e. the S0 used is not computed for Barut's full nonlinear solution but for the (iteratively) linearized approximations. The difference is even more transparent in the canonical quantization via Fock space, where it is obvious that you are forming the Fock space from the linear approximations (external fields/current approximation) of the nonlinear fields.

But OF COURSE. This is the way quantum field theory is done! The old quantum fields are replaced by REAL MATTER FIELDS, and then we apply quantization (which is called second quantization, but is in fact the first time we introduce quantum theory). So there's nothing exceptional in Zeh's statements.

Well, it is not quite so simple to weasel out. For the QM reasoning, such as Bell's QM prediction, you had used the same matter fields as the probability "amplitudes" (be it for the Dirac or for its approximation, the Schroedinger-Pauli particle) and insisted they are not local matter fields since they can non-dynamically collapse. How do you transition your logic from the "amplitude", and all that goes with it, to just a plain matter field right before you go on to quantize it now as a plain classical system?

If it were a classical system all along, like the classical EM field, then the issue of collapse when the observer learns the result "in a decoherence-like way", or any other way, is plain nonsense. We never did the same kind of probability "amplitude" talk with the Maxwell field before quantizing it. It was just a plain classical field with no mystery, no collapse in a "decoherence-like way" or a jump-like way... There was no question of being able to deduce the no-go for LHVs by just using the dynamics of that field. Yet you somehow claim you can do that for the Dirac field in Stern-Gerlach, without having to stop or suspend its purely local dynamics (the Dirac equation). Does it again come back to your mind in a decoherence-like way to have it do the collapse of the superposed amplitudes?

Then suddenly, both fields are declared just plain classical fields (essentially equivalent except for slightly different equations), that we proceed to quantize. There is a dichotomy here (and it has nothing to do with the switch from Schroedinger to Dirac equation).

That is precisely the dichotomy I discussed a few messages back when responding to your charge of heresy for failing to show the officially required level of veneration and befuddlement with the Hilbert product space.

Any modern quantum field theory book treats the solutions of the Dirac equation on the same footing as the classical EM field.

Yes, but my Jackson ED textbook doesn't treat EM fields as my Messiah QM textbook treats Dirac or Schroedinger matter field. The difference in treatment is hugely disproportionate to just the difference implied by the different form of equations.

Note also that, so far there is no experimental data showing that the local dynamics of these fields has to be replaced by a non-local one. There is such data for the "fair sampling" type of local theories, but neither Dirac nor Maxwell fields are of that "fair sampling" type.

What is called "classical" in a quantum field book, is what you are proposing: namely the solution to the nonlinearly coupled Dirac and EM field equations

Not at all. What is called classical in a QED book is the Dirac field in the external EM field approximation and the EM field in the external current approximation. These are linear approximations of the non-linear fields. It is these linear approximations which are being (second) quantized (be it canonically or via path integrals), not the full non-linear equations. Only then, with Fock space defined, the interaction is iteratively phased in via perturbative expansion which is defined in terms of the quantized linear fields. The whole perturbative expansion phasing in the interaction, with all the rest of the (empirically tuned ad hoc) rules on how to do everything just so in order to come out right, is what really defines the QED, not the initial creation of the Fock space from linearized "classical" fields.

That is exactly how Kowalski's explicit linearization via the Fock space formalism appears. In that case the Fock space formalism was a mere linear approximation (an infinite set of linear equations) to the original nonlinear system, so no new effect could exist in the quantized formalism beyond what was already present in the classical nonlinear system. In particular, one would have no non-locality even for sharp boson number states of the quantized formalism (the usual trump card for the QM/QED non-locality advocates). Any contradiction or a Bell-like no-go theorem for the classical model, should one somehow deduce any such for the Kowalski's Fock space bosons, would simply mean a faulty approximation. (Not that anyone has ever deduced Bell's QM "prediction" without instantaneous and interaction-free collapse of the remote state, but just using the QED dynamics and any polarizer and detector properties & interactions with the quantized EM field.)

And yes, if you take this in the non-relativistic limit, you find back the Schroedinger picture (also with multiple particle superpositions and all that)... AFTER you introduced "second" quantization.

The non-relativistic limit, i.e. the Dirac equation to Schroedinger-Pauli equation limit, is irrelevant. The Bell EPR setup and conclusion for a spin 1/2 particle are exactly the same for a Dirac particle or for its Schroedinger-Pauli approximation.
 
  • #154
nightlight said:
The "enviroment" is assumed in |Psi>. A subsystem can trivially follow a non-unitary evolution in the subsystem's factor during the interaction. If our |Psi> includes all that you ever plan to add in, including yourself

It is the "including yourself" part that distinguishes the decoherence part from hardline Many Worlds and at that point, in the decoherence program, the projection postulate has to be applied. At that point, you have included so many many degrees of freedom that you WILL NOT OBSERVE, that you restrict your attention to the reduced density matrix, which has become essentially diagonal in the "preferred basis of the environment".

It would be as if you declared the Sz observable of particle A and/or B in EPR-Bell model as the "preferred" one that has a definite value and claimed that somehow solves the problem. The problems always re-emerge once you include the stuff you kept outside to help decohere the subsystem. The outer layer never decoheres.

Exactly. That's why the outer layer has to apply the projection postulate. And that outer layer is the conscious observer. Again, the decoherence program doesn't solve the measurement problem by replacing the projection postulate. It only indicates which observables of the subsystem are going to remain observable for a while (factor out from all the rest, which is not observed).

The localization problem for matter fields hasn't been solved (even though there are heuristic models indicating some plausible mechanisms, e.g. Jaynes and Barut had toy models of this kind). If your counts are Poissonian (or super-Poissonian) for the buildup of the high visibility 4m large diffraction pattern, there should be no conceptual problem in conceiving a purely local self-focusing or some kind of topological/structural unravelling mechanism which could at least superficially replicate such point-like detections alongside the diffraction. After all, the standard theory doesn't have an explanation here other than saying that is how it is.

I would be VERY SURPRISED if you can construct such a thing, because it is the main reason for the existence of quantum theory. Of course, once this is done, I'd be glad to consider it, but let us not forget that it is the principal reason to quantize fields in the first place!

You're not using the full system nonlinear dynamics (a la Barut's self-field) for the QED in the path integral representation i.e. the S0 used is not computed for the Barut's full nonlinear solution but for the (iteratively) linearized approximations.

No, in PERTURBATIVE quantum field theory, both the nonlinear "classical dynamics" and the quantum effects (from the path integral) are approximated in a series development, simply because we don't know how to do otherwise in all generality, except for some toy models. But in non-perturbative approaches, the full non-linear dynamics is included (for example, solitons are a non-linear solution to the classical field problem).

The difference is even more transparent in the canonical quantization via Fock space, where it is obvious that you are forming the Fock space from the linear approximations (external fields/current approximation) of the nonlinear fields.

No, it is a series development. If you consider external fields, that's a semiclassical approach, and not full QFT.

Well, it is not quite so simple to weasel out. For the QM reasoning, such as Bell's QM prediction, you had used the same matter fields as the probability "amplitudes" (be it for the Dirac or for its approximation, the Schroedinger-Pauli particle) and insisted they are not local matter fields since they can non-dynamically collapse. How do you transition your logic from the "amplitude", and all that goes with it, to just a plain matter field right before you go on to quantize it now as a plain classical system?

Because there is an equivalence between the linear, non-relativistic part of the classical field equation, read as a quantum wave equation of a single particle, and the single-particle states of the second-quantized field. It only works if you are sure you work with one, or a fixed number, of particles. This is a subtle issue of which others know more than I do here. I will have to look it up again in more detail.

If it were a classical system all along, like the classical EM field, then the issue of collapse when the observer learns the result "in a decoherence-like way" or any other way, is plain nonsense. We never did the same kind of probability "amplitude" talk with the Maxwell field before quantizing it. It was just plain classical field with no mystery, no collapse "decoherence-like way" or jump-like way... There was no question of being able to deduce the no-go for LHVs by just using the dynamics of that field.

No, you need a multiparticle wave function (because we're working with a 2-particle system), which is essentially non-local, OR you need a classical field which is second-quantized. The second approach is more general, but the first one was sufficient for the case at hand. If there's only ONE particle in the game, there is an equivalence between the classical field and the one-particle wave function: the Maxwell equations describe the one-photon situation (but not the two-photon situation).

Then suddenly, both fields are declared just plain classical fields (essentially equivalent except for slightly different equations), that we proceed to quantize.

It is because of the above-mentioned equivalence.


Yes, but my Jackson ED textbook doesn't treat EM fields as my Messiah QM textbook treats Dirac or Schroedinger matter field.

Messiah still works with "relativistic quantum mechanics" which is a confusing issue at best. In fact there exists no such thing. You can work with non-relativistic fixed-particle situations, non-relativistic quantum field situations or relativistic quantum field situations, but there's no such thing as relativistic fixed-particle quantum mechanics.

The difference in treatment is hugely disproportionate to just the difference implied by the different form of equations.

No, Jackson treats the classical field situation, and QFT (the modern approach) quantizes that classical field. The difference comes from the quantization, not from the field equation itself.

Not at all. What is called classical in a QED book is the Dirac field in the external EM field approximation and the EM field in the external current approximation. These are linear approximations of the non-linear fields.
It is these linear approximations which are being (second) quantized (be it canonically or via path integrals), not the full non-linear equations. Only then, with Fock space defined, the interaction is iteratively phased in via perturbative expansion which is defined in terms of the quantized linear fields. The whole perturbative expansion phasing in the interaction, with all the rest of the (empirically tuned ad hoc) rules on how to do everything just so in order to come out right, is what really defines the QED, not the initial creation of the Fock space from linearized "classical" fields.

We must be reading different books on quantum field theory :-)
Look at the general formulation of the path integral (a recent exposition is by Zee, but any modern book such as Peskin and Schroeder, or Hatfield will do).
The integral clearly contains the FULL nonlinear dynamics of the classical fields.


cheers,
patrick.
 
  • #155
I would like to add something here, because the discussion is taking a turn that is in my opinion wrongheaded. You're trying to attack "my view" of quantum theory (which is 90% standard and 10% personal: the "standard" part being the decoherence program, and the personal part the "single mind" part). But honestly that's not very interesting, because I don't take that view very seriously myself. To me it is a way, at the moment, with the current theories, to have an unambiguous view on things. Although it is very strange because no ontological view is presented - everything is epistemology (and you qualify it as nonsense) - it is a consistent view that gives me "peace of mind" with the actual state of things. If one day another theory takes over, I'll think up something else. At the time of Poisson, there was a lot of philosophy on the mechanistic workings of the universe, an effort that was futile. In the same way, our current theories shouldn't give us such a fundamental view that gives a ground to do philosophy with, because they aren't the final word. After all, what counts is the agreement with experiment, and that's all there is to it. If we have two different theories with the same experimental success, of course the temptation is to go for the one that fits nicest with our mental preferences. In fact, the best choice is dictated by what allows us to expand the theory more easily (and correctly).
That's why I was interested in what you are telling, in that I wasn't aware that you could build a classical field theory that gives you equivalent results with quantum field theory for all experimentally verified results. Honestly I have a hard time believing it, but I can accept that you can go way further than what is usually said with such a model.

What is called second quantization (but which is in fact a first quantization of classical fields) has as its main aim to explain the wave-particle duality when you can have particle creation and annihilation. You are right that the Fock space is built up on the linear part of the classical field equations; however, that's not a linearization of the dynamics, but almost the DEFINITION of what the associated particles of the matter field are. Indeed, particles only have a well-defined meaning when they are propagating freely through space, without interaction (or where the interaction has been renormalized away). The fact that these states are used to span the Fock space should simply be seen as the classical analogue of doing, say, a Fourier transform on the fields you're working with. You simply assume that your solutions are going to be written in terms of sines and cosines, not that the solutions ARE sines and cosines. So that, by itself, is not a "linearization of the dynamics", it is just a choice of basis, in a way. But we're not that free in the choice: at the end of the day we observe particles, so we observe in that basis.

The only way that is known to find particles associated with fields (at least known to me, which doesn't exclude others, of course) is the trick with the creation and annihilation operators of a harmonic oscillator. That's why I said that I would be VERY SURPRISED INDEED if you can crank out a non-linear classical field theory (even including ZPF noise or whatever) that gives us nice particle-like solutions with the correct mass and energy-momentum relationship, which then propagate nicely throughout space, but which otherwise behave as quantum theory predicts (at least in those cases where it has been tested).

That's why I talked about gammas and then about neutrons. Indeed, once you consider that in one way or another you're only going to observe lumps of fields when you've integrated enough "field" locally, and that you're then going to generate a Poisson series of clicks, you can do a lot with a "classical field", and it will be indistinguishable from any quantum prediction as long as you have independent single-particle situations. The classical field then looks a lot like a single-particle wave function. But to explain WHY you integrate "neutron-ness" until you have a full neutron, and then click Poisson-like, looks to me like quite a challenging task if no mechanism is built into the theory that makes you have steps of neutrons in the first place. And as I said, the only way I know how to do that is through the method of "second quantization"; and that supposes linear superposition of particle states (such as EPR-like states).

cheers,
patrick.
 
  • #156
vanesch Nope, I am assuming 100% efficient detectors.

If you recall that 85% QE detector cooled to 6K, check the QE calculation -- it subtracts the dark current, which was at exactly 20,000 triggers/sec, the same as the incident photon count. Thus your 100% efficient detector is no good for anticorrelations if half the triggers might be vacuum noise. You can look at their curves and see if you get a more suitable QE-to-noise tradeoff (they were looking only to max out the QE).

Unfortunately, your comments in this message show you are off again on an unrelated tangent, ignoring the context you're disputing, and I expect gamma photons to be trotted out any moment now.

The context was -- I am trying to pinpoint exactly the place where you depart from the dynamical evolution in the PDC beam splitter case in order to arrive at the non-classical anticorrelation. Especially since you're claiming that the dynamical evolution is not being suspended at any point. My aim was to show that this position is self-contradictory -- you cannot obtain a different result from the classical one without suspending the dynamics and declaring collapse.

In order to analyse this, the monotonous mantras of "strict QM", "perfect 100% detectors"... are much too shallow and vacuous to yield anything. We need to look at the "detectors" DA and DB as physical systems, subject to QM/QED (as anything else), in order to see whether they could do what you say they could without violating logic or any agreed upon facts (theoretical or empirical) and without bringing your mind/consciousness into the equations, in the decoherence-like way or otherwise.

To this end we are following a wave packet (in coordinate representation) of PDC "photon 2" and we are using "photon 1" as an indicator that there is a "photon 2" heading to our splitter. This part of the correlation between 1 and 2 is a purely classical amplitude based correlation, i.e. a trigger of D1 at time t1 indicates that the incident intensity of "photon 2", I2(t), will be sharply increased within some time window T starting at t1 (with an offset from t1, depending on path lengths).

The only implication of this amplitude correlation used here is that we can have a time window, defined via the "photon 1" trigger as [t1, t1+T], for DA and DB. During this window the incident intensity of "photon 2" can be considered some constant I, or, in terms of the average photo-electron counts on DA/DB, denoted as n. The constancy assumption within the window simplifies the discussion and it is favorable to the stronger anticorrelation anyway (since a variable rate would result in a compound Poissonian, which has greater variance for any given expectation value). This is again just a minor point.

I don't really know what you mean with "Poissonian square law detectors".

I mean the photon detectors for the type of photons we're discussing (PDC, atomic cascades, laser,...). The "square law" refers to the trigger probability in a given time window [t, t+T] being proportional to the incident EM energy in that time window. The Poissonian means that the ejected photo-electron count has a Poissonian distribution, i.e. the probability of k electrons being ejected in a given time interval is P(n,k) = n^k exp(-n)/k!, where n = average number of photoelectrons ejected in that time interval. In our case, the time window is the short coincidence window (defined via the PDC "photon 1" trigger) and we assume n = constant in this window. As indicated earlier, n(t) varies with time t; it is generally low except for the sharp peaks within the windows defined by the triggers of the "photon 1" detector.

Note that we might assume an idealized multiphoton detector which optimally amplifies all photo-electrons, so that for the case of k ejected electrons we would have exactly k triggers. Alternatively, as suggested earlier, we can divide the "photon 2" beam via an L-level binary tree of beam splitters, obtaining ND=2^L reduced beams on which we place simpler "1-photon" detectors (which only indicate yes/no). These simple 1-photon detectors would receive intensity I1=I/ND and thus have the photo-electron Poissonian with n1=n/ND. But since they don't differentiate in their output k=1 from k=2,3,..., we would have to count 1 when they trigger at all, 0 if they don't trigger. For "large" enough ND (see the earlier message for discussion) the single ideal multiphoton detector is equivalent to the array of ND 1-photon detectors.
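A quick numerical check of this equivalence (a sketch with assumed numbers; the mean n and the tree depths L are arbitrary illustrative choices): each of the ND = 2^L one-photon detectors triggers when it ejects at least one photo-electron from its Poissonian with mean n/ND, and the number of triggered detectors approaches the ideal multiphoton detector's Poisson(n) count as ND grows.

Code:
import numpy as np

rng = np.random.default_rng(1)
n = 2.0              # mean photo-electrons per window for the undivided "photon 2" beam
trials = 100_000

ideal = rng.poisson(n, trials)    # ideal multiphoton detector: k triggers for k electrons
print(f"ideal multiphoton detector: mean={ideal.mean():.3f}  var={ideal.var():.3f}")

for L in (2, 4, 8):
    ND = 2 ** L
    p_trig = 1.0 - np.exp(-n / ND)            # P(>=1 photo-electron) from Poisson(n/ND)
    tree = rng.binomial(ND, p_trig, trials)   # number of 1-photon detectors that fired
    print(f"tree ND={ND:4d}:              mean={tree.mean():.3f}  var={tree.var():.3f}")
# For large ND the count of fired detectors converges to Poisson(n),
# which is why the splitter tree of yes/no detectors stands in for the ideal detector.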

For brevity, I'll assume the multiphoton detector (optimal amplification and photon number resolution). The rest of your comments indicate some confusion on what precisely this P(n,k) means, what it depends on and how it applies to our case. Let's try clearing that up a bit.

There are two primary references which analyze photodetection from the ground up, [1] is semiclassical and [2] is a full QED derivation. Mandel & Wolf's textbook [3] has one chapter for each approach, with more readable coverage and nearly full detail of the derivations. Papers [4] and [5] discuss in more depth the relation between the two approaches, especially the operational meaning of the normal ordering convention and of the resulting Glauber's correlation functions (the usual Quantum Optics correlations).

Both approaches yield exactly the same conclusion, the square law property of photo-ionization and the Poisson distribution (super-Poisson for mixed or varying fields within the detection window) of the ejected photo-electrons. They all also derive detector counts (proportional to electron counts/currents) for single and multiple detectors in arbitrary fields and any positions and time windows (for correlations).

The essential aspect of the derivations relevant here is that the general time dependent photo-electron count distribution (photo-current) of each detector depends exclusively on the local field incident on that detector. There is absolutely no interruption of the purely local continuous dynamics, and the photo-ejections depend solely on the local fields/QED amplitudes, which also never collapse or deviate in any way from the local interaction dynamics. (Note: the count variance is due to averaging over the detector states => the identical incident field in multiple tries repeats at best up to a Poissonian distribution.)

The QED derivations also show explicitly how the vacuum contribution to the count yields 0 for the normal operator ordering convention. That immediately provides operational meaning of the quantum correlations as vacuum-normalized counts, i.e. to match the Glauber "correlation" function, all background effects need to be subtracted from the obtained counts.

The complete content of the multi-point correlations is entirely contained in the purely classical correlations of the instantaneous local intensities, which throughout follow purely local evolution. There are two main "quantum mysteries" often brought up here:

a) The "mysterious" quantum amplitude superposition[/color] (which the popular/pedagogical accounts make a huge ballyhoo out of) for the multiple sources is a simple local EM field superposition (even in QED treatment) which yields local intensity (which is naturally, not a sum of non-superposed intensities). The "mystery" here is simply due to "explaining" to student that he must imagine particles (for which the counts would add up) instead of fields which superpose into the net amplitude which then gets squared to get the count (resulting in the "mysterious" interference, the non-additivity of counts for separate fields, duh).

b) Negative probabilities, anti-correlations, sub-Poissonian light -- The multi-detector coincidence counts are computed (in the QED and semiclassical derivations) by constructing a product of individual instantaneous counts -- a perfectly classical expression (same kind as Bell's LHV). These positive counts are then expressed via local intensities (operators in QED), which are still fully positive definite. It is at this point that the QED treatment introduces the normal ordering convention (which simplifies the integrals by canceling out the vacuum sourced terms in each detector's count integrals; see [4],[5]), redefining thus the observable whose expectation value is being computed, while retaining the classical coincidence terminology they started with, resulting in much confusion and bewilderment (in popular and "pedagogical" retelling, where, to harmonize their "understanding" with the facts, they had to invent "ideal" detectors endowed with the magical power of reproducing G via plain count correlations).

The resulting Glauber "correlation" function G(X1,X2,...) (the standard QO "correlation" function) is not the correlation of the counts at X1,X2,... but a correlation-like expression extracted (through the vacuum terms removal via [a+],[a] operator reordering) from the expectation value of the observable corresponding to the counts correlations (which, naturally, shows no negative counts or non-classical statistics).
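To make the ordering distinction concrete, here is a single-mode toy computation (my own sketch; this is not the multi-detector G(X1,X2,...), only the simplest case where the normal ordering visibly changes a second-order moment): for a one-photon Fock state the normally ordered moment <[a+][a+][a][a]> vanishes, while the plain number moment <[n]^2> does not.

Code:
import numpy as np
from math import exp, factorial, sqrt

dim = 30                                      # truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), 1)    # annihilation operator, a|k> = sqrt(k)|k-1>
ad = a.conj().T                               # creation operator
n_op = ad @ a                                 # number operator

def expval(op, psi):
    return float(np.real(psi.conj() @ op @ psi))

fock1 = np.zeros(dim); fock1[1] = 1.0         # one-photon Fock state |1>
alpha = 1.0                                   # coherent state amplitude (illustrative)
coh = np.array([exp(-alpha**2 / 2) * alpha**k / sqrt(factorial(k)) for k in range(dim)])

for name, psi in (("|1>", fock1), ("|alpha=1>", coh)):
    normal = expval(ad @ ad @ a @ a, psi)     # normally ordered (Glauber-style) moment
    plain = expval(n_op @ n_op, psi)          # plain <n^2> moment
    print(f"{name:10s}  <a+ a+ a a> = {normal:.4f}    <n^2> = {plain:.4f}")
# |1>       : 0.0000 vs 1.0000 -- the reordering terms make all the difference
# |alpha=1> : 1.0000 vs 2.0000 -- for a coherent state the two differ by <n>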

-----
1. L. Mandel, E.C.G. Sudarshan, E. Wolf "Theory of Photoelectric Detection of Light Fluctuations" Proc. Phys. Soc. V84, 1964, pp. 435-444 (reproduced also in P.L. Knight's 'Concepts of Quantum Optics' )

2. P.L. Kelly, W.H. Kleiner "Theory of Electromagnetic Field Measurement and Photoelectron Counting" Phys. Rev. 136 (1964) pp. A316-A334.

3. L. Mandel, E. Wolf "Optical Coherence and Quantum Optics" Cambridge Univ. Press 1995.

4. L. Mandel "Physical Significance of Operators in Quantum Optics" Phys. Rev. 136 (1964), pp B1221-B1224.

5. C.L. Mehta, E.C.G. Sudarshan "Relation between Quantum and Semiclassical Description of Optical Coherence" Phys. Rev. 138 (1965) pp. B274-B280.
 
  • #157
vanesch I would like to add something here, because the discussion is taking a turn that is in my opinion wrongheaded. You're trying to attack "my view" of quantum theory (which is 90% standard and 10% personal: the "standard" part being the decoherence program, and the personal part the "single mind" part). But honestly that's not very interesting because I don't take that view very seriously myself.

Agree on this, that's a waste of time. I would also hate to have to defend the standard QM interpretation. Even arguing from this side, against its slippery language is like mud-wrestling a lawyer. It never goes anywhere.
 
  • #158
nightlight said:
To this end we are following a wave packet (in coordinate representation) of PDC "photon 2" and we are using "photon 1" as an indicator that there is a "photon 2" heading to our splitter. This part of the correlation between 1 and 2 is a purely classical amplitude based correlation, i.e. a trigger of D1 at time t1 indicates that the incident intensity of "photon 2", I2(t), will be sharply increased within some time window T starting at t1 (with an offset from t1, depending on path lengths).

I know that that is how YOU picture things. But it is not the case of the quantum description, where the two photons are genuine particles.

I don't really know what you mean with "Poissonian square law detectors".

I mean the photon detectors for the type of photons we're discussing (PDC, atomic cascades, laser,...). The "square law" refers to the trigger probability in a given time window [t,t+T] being proportional to the incident EM energy in that time window.

Again, that's your view of a photon detector, and it is not the view of quantum theory proper. So I used that distinction to point out a potential difference in predictions. A quantum photon detector detects a marble, or not. A finite quantum efficiency photon detector has QE chance of seeing the marble when it hits, and (1-QE) chance of not seeing it. But if the marble isn't there, it doesn't see it. This was exactly what I tried to point out in the anti-coincidence series. If the detectors are "square law" then they have a different behaviour than if they are "marble or no marble", so such an experiment can discriminate between both.

The Poissonian means that the ejected photo-electron count has a Poissonian distribution, i.e. the probability of k electrons being ejected in a given time interval is P(n,k) = n^k exp(-n)/k!, where n = average number of photoelectrons ejected in that time interval. In our case, the time window is the short coincidence window (defined via the PDC "photon 1" trigger) and we assume n = constant in this window. As indicated earlier, n(t) varies with time t; it is generally low except for the sharp peaks within the windows defined by the triggers of the "photon 1" detector. For simplicity you can assume n(t)=0 outside the window and n(t)=n within the window (we're not putting numbers in yet, so hold on to your |1,1>, |2,2>...).

Exactly, that's a "square law" detector. And not a quantum photon detector. But statistically you cannot distinguish both if you have no trigger, so the semiclassical description works in that case for both. However, this is NOT the case for a 2-photon state, because then, BEFORE TAKING INTO ACCOUNT THE PHOTOELECTRON distribution, you have somehow to decide whether the first photon was there or not. If it is not there, nothing will happen, and if it is there, you draw from the photoelectron distribution, which will generate you a finite probability of having detected the photon or not.

Both approaches yield exactly the same conclusion, the square law property of photo-ionization and the Poisson distribution (super-Poisson for mixed or varying fields within the detection window) of the ejected photo-electrons. They all also derive detector counts (proportional to electron counts/currents) for single and multiple detectors in arbitrary fields and any positions and time windows (for correlations).

I don't disagree with the Poissonian distribution of the photoelectrons of course, IN THE CASE A PHOTON DID HIT. The big difference is that you consider streams of 1/N photons (intensity peaks which are 1/N of the original, pre-split peak) which give rise to INDEPENDENT detection probabilities at each detector, while I claim that pure quantum theory predicts you individually the same rates, with the same distributions, but MOREOVER ANTICORRELATED, in that the click (the indivisible photon) can only be at one place at a time. This discriminates both approaches, and is perfectly realizable with finite-efficiency detectors.
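A rough toy sketch of the discrimination I have in mind (assumed toy models and arbitrary numbers, not a full QED calculation): per heralded window, compare a "marble" photon that goes to only one output and is seen with probability QE against independent square-law Poissonian responses tuned to the same singles rate.

Code:
import numpy as np

rng = np.random.default_rng(2)
windows = 500_000    # heralded coincidence windows
QE = 0.3             # assumed detector quantum efficiency

# (i) "marble" model: the photon takes one arm only, detected with probability QE
arm = rng.integers(0, 2, windows)            # 0 -> DA, 1 -> DB
seen = rng.random(windows) < QE
A_marble = (arm == 0) & seen
B_marble = (arm == 1) & seen

# (ii) square-law model: independent Poissonian response on each arm,
#      with the per-arm mean chosen so the singles rate matches QE/2
n_arm = -np.log(1.0 - QE / 2.0)
A_sq = rng.poisson(n_arm, windows) > 0
B_sq = rng.poisson(n_arm, windows) > 0

def alpha_param(A, B):
    # normalized coincidence parameter P(A and B) / (P(A) P(B))
    return (A & B).mean() / (A.mean() * B.mean())

print("marble model    :", round(alpha_param(A_marble, B_marble), 3))  # ~ 0 (anticorrelated)
print("square-law model:", round(alpha_param(A_sq, B_sq), 3))          # ~ 1 (uncorrelated)

Finite efficiency only scales the singles rates; the normalized parameter still separates the two models, which is why I say the test is realizable with finite-efficiency detectors.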

The essential aspect of the derivations relevant here is that the general time dependent photo-electron count (photo-current) of each detector depends exclusively on the local field incident on that detector.

Not in 2-photon states. They are not independent, in quantum theory. I'm pretty sure about that, but I'll try to back up my statements with other people's opinions on how QUANTUM THEORY works (how nature works is a different issue which can only be decided by experiment).

cheers,
Patrick.
 
  • #159
vanesch I know that that is how YOU picture things. But it is not the case of the quantum description, where the two photons are genuine particles.

A one-photon state spans infinite space and time and would require an infinite detector and infinite time to detect. Any finite time, finite space field is necessarily a superposition of multiple photon number states (also called Fock states). The "photon" is not what photo detectors count, despite the suggestive name and the pedagogical/popular literature mixups. That's just jargon, a shorthand. They count photo-electrons. The theory of detectors shows what these counts are and how the QED (or classical) field amplitudes relate to the photodetector counts. Of course, the integrated absolute squares of the field amplitudes are proportional to the expectation value of the "photon number operator" [n], so one can loosely relate the average "photon number" <[n]> to the photo detector counts, but this is a relation between an average of an observable [n] and a (Poisson) distribution P(ne,k), which is a much too soft connection for the kind of tight enumerative reasoning required in the analysis of the anti-correlations or of the Bell's inequality tests.

There is nothing corresponding operationally to the "photon detector". And to speak of coincidence correlations of |1,1> at places X1,X2 at times t1 and t2 is vacuous to the square.

What photodetector counts are, is quite relevant if you wish to interpret what the Quantum Optics experiments are showing or even to state what the QED predictions are, at all.

Ignoring the toy "prediction" based on simple reasoning with |1,1>, |2,2> and such, the QED prediction for coincidences is given precisely by the Glauber's correlation functions (which includes their operational interpretation, the usual QO vacuum effects subtractions).

You can't predict any coincidences with |1,1> since |1,1> represents plane waves of infinite duration and infinite extent (they are expansion basis sensitive, too). All you can "predict" with such mnemonic devices is what the real coincidence prediction given via G() might roughly look like. Coincidence count is meaningless for |1,1>, it is operationally empty.

You did manage to miss the main point (which in retrospect appears a bit too fine a point), though, which is that what is shown in the papers cited (and explicitly stated in [4]) is that the "Quantum Optics correlation" G(X1,X2,..) is not the (expectation value of the) observable representing the correlation of coincidence counts. The observable which does correspond to the correlation of coincidence counts (the pre-normal-ordered correlation function) yields perfectly local (Bell's LHV type) predictions, no negative counts, no sub-Poissonian statistics, no anti-correlations. That is the result of both the QED and the semiclassical treatment of multi-detector coincidences.

Debating whether there is a "photon" and whether it is hiding inside the amplitude somehow (since the formalism doesn't have a counterpart for it) is like debating the count of dancing angels on the head of a pin. All you can say is that the fields have such and such amplitudes (which are measurable for any given setup, e.g. via tomographic methods based on Wigner functions). The theory of detection then tells you how such amplitudes relate to the counts (of photo-electrons) produced by the photo-detector.

While it is perfectly fine to imagine "photon" somewhere inside the amplitude (if such mnemonics helps you) you can't call the counts produced by the photodetectors the counts of your mnemonic devices (without risking a major confusion). That simply is not what those counts are, as the references cited show clearly and in great detail. After you clear up this bit, you're not done since there is the next layer of confusion to get through, which is the common mixup between the Glauber's G() "correlation" function and the detector counts correlations, for which [4] and [5] should help clear it up. And after that, you will come to still finer layers of mixups which Marshall and Santos address in their PDC papers.
 
  • #160
nightlight said:
The observable which does correspond to the correlation of coincidence counts (the pre-normal-ordered correlation function) yields perfectly local (Bell's LHV type) predictions, no negative counts, no sub-Poissonian statistics, no anti-correlations. That is the result of both the QED and the semiclassical treatment of multi-detector coincidences.

My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.
On p 1079, section 22.4.3, it is indicated that PDC, using one photon of the pair as a trigger, gives you a state which is an extremely close approximation to a 1-photon state. So stop saying that what I told you are not the predictions of standard quantum theory.

I know YOUR predictions are different, but standard quantum mechanical predictions indicate a perfect anticorrelation in the hits. You should be happy about that, because it indicates an experimentally verifiable claim of your approach, and not an unverifiable claim such as "you will never be able to build a photon detector with 92% efficiency and negligible dark current".

cheers,
Patrick.

PS: I can see the confusion if you apply equation 14.3-5 in all generality. It has been deduced in the case of COHERENT states only, which have a classical description.
 
  • #161
while I claim that pure quantum theory predicts you individually the same rates, with the same distributions, but MOREOVER ANTICORRELATED, in that the click (the indivisible photon) can only be at one place at a time. This discriminates both approaches, and is perfectly realizable with finite-efficiency detectors.

That is not a QED prediction, but a shorthand for the prediction which needs a grain of salt before use. If you check the detector foundations in the references given, you will find out that even when preparing absolutely identical field amplitudes in each try, you will still get (at best) the Poisson distribution of photoelectrons in each try, thus the Poissonian for the detector's count (equivalent to the tree-split count of activated detectors). The reason for the non-repeatability of the exact count is the averaging over the states of the detector. The only thing which remains the same for the absolutely identical preparation in each try is the Poissonian average n=<k>.

Therefore the beam splitter, which can only approximate the absolutely identical packets in the two paths, will also yield in each try the independent Poissonian counts on each side, with only the Poissonian average n=<k> being the same on the two sides.

The finer effect that PDC (and some anticorrelation experiments) specifically add in this context is the apparent excess of the anti-correlation, the apparent sub-Poissonian behavior on the coincidence data processed by the usual Quantum Optics prescription (the Glauber's prescription of vacuum effects removal, corresponding to the normal ordering). Here you need to recheck Marshall & Santos PDC and detector papers for full details and derivations, I will only sketch the answer here.

First one needs to recall that the Glauber's correlation (mapped to the experimental counts via the usual vacuum-effects subtractions) is not the (expected value of the) observable corresponding to the counts correlation. It is only obtained by modifying the observable for the count correlations through normal ordering of the operators (to, in effect, remove the vacuum generated photons, the vacuum fluctuations).

As pointed out in [4], these vacuum terms subtractions from the true count correlation observable do not merely make G(X1,X2,..) depart from the true correlation observable, but if one were to rewrite G() in the form of a regular correlation function of some abstract "G counts", one would need to use negative numbers for these abstract "G counts" for some conceivable density operators [rho] (G is the expectation value of Glauber's observable [G] over the density operator [rho], i.e. G=Tr([rho] [G])). There is never a question, of course, of the regular correlation observable [C] requiring negative counts for any [rho] -- it is defined as a proper correlation function observable of the photo-electron counts (as predicted in the detection theory), which are always non-negative.

These abstract "G counts" lead to the exactly same kind "paradoxes" (if confused with the counts) that the superposition mystery[/color] presents (item (a) couple messages back) . Namely in the "superposition mystery", your counts are always (proportional to) C=|A|^2, where A is the total EM field amplitude. When you have two sources yielding in separate experiments amplitudes A1 and A2 and the corresponding counts C1=|A1|^2 and C2=|A2|^2, then if you have both sources turned on, the net amplitude is: A=A1+A2 and the net count is C=|A1+A2|^2, which is generally different from C1+C2=|A1|^2+|A2|^2.

If one wants to rewrite formally C as a sum of two abstract counts G1 and G2, one can say C=G1+G2, but one should not confuse C1 and C2 with the abstract counts G1 and G2. In the case of negative interference you could have A1=-A2, so that C=0. If one knows C1 and also misattributes G1=C1, G2=C2, then one would need to set G2=-G1, a negative abstract count. Indeed, the negative interference is precisely the aspect the students will be the most puzzled by.
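In plain numbers (illustrative amplitudes only):

Code:
A1 = complex(0.6, 0.3)      # amplitude from source 1 alone (illustrative value)
A2 = -A1                    # source 2 tuned for complete destructive interference

C1 = abs(A1) ** 2           # count with only source 1 on:  ~0.45
C2 = abs(A2) ** 2           # count with only source 2 on:  ~0.45
C = abs(A1 + A2) ** 2       # count with both sources on:    0.0

G1 = C1                     # the misattribution G1 = C1 ...
G2 = C - G1                 # ... then forces G2 = -0.45, a negative "abstract count"
print(C1, C2, C, G1, G2)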

It turns out the PDC generates [rho] states which require <[G]> to use negative "abstract G counts" if one were to write it as a proper correlation function correlating these abstract "G counts". The physical reason (in Marshall-Santos jargon) it does so is that the PDC generation uses two vacuum modes (to turn them into down-converted photons on phase matching conditions), thus the vacuum fluctuation noise alone (without these down-converted photons) is actually smaller in the PDC photons' detection region. Therefore the conventional QO vacuum effects subtractions from the observed counts, aiming to reconstruct Glauber's <[G]> and implicitly its abstract "G counts", oversubtract here, since the remaining vacuum fluctuation effects are smaller in the space region traveling with the PDC photons (they are modified vacuum modes, where the modification is done by the nonlinear crystal via absorption and re-emission of the matching vacuum modes).
 
  • #162
vanesch My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.

The textbook doesn't dwell on the fine points of distinction between the Glauber's pseudo-correlation function <[G]> and the true count coincidence correlation observable <[C]>, but uses the conventional Quantum Optics shorthand (which tacitly includes vacuum effects subtractions by standard QO procedures, to reconstruct <[G]> from the obtained counts correlation <[C]>).

This same Mandel, of course, wrote about these finer distinctions in ref [4].

Although the Mandel-Wolf textbook is better than most, it is still a textbook, for students just learning the material, and what it uses there is a didactic toy "prediction", not a scientific prediction of the kind one would find in a paper as something experimenters ought to test against. Find me a real paper in QO which predicts (seriously) such anticorrelation for the plane wave, then goes on to measure coincidences on it with an infinite detector over infinite time. It is a toy derivation, for students, not a scientific prediction that you can go out and measure and hope to get what you "predicted".

On p 1079, section 22.4.3, it is indicated that PDC, using one photon of the pair as a trigger, gives you a state which is an extremely close approximation to a 1-photon state. So stop saying that what I told you are not the predictions of standard quantum theory.

Well, the "extremely close" could be, say, about half a photon short of a single photon (generally it will be at best as close as the Poissonian distribution allows, which has a variance <[n]>).

Again, you're taking didactic toy models without a sufficient grain of salt. I cited Mandel & Wolf as a more readable overview of the photodetection theory than the master references in the original papers. It is not a scientific paper, though (not that those are The Word from Above, either). You can't take all it says at face value and in the most literal way. It is a very nice and useful book (much nicer for a physicist than Yariv's QO textbook), if one reads it maturely and factors out the unavoidable didactic presentation effects.

I know YOUR predictions are different, but standard quantum mechanical predictions indicate a perfect anticorrelation in the hits. You should be happy about that because it indicates an experimentally verifiable claim of your approach, and not an unverifiable claim such as "you will never be able to build a photon detector with 92% efficiency and negligible dark current".

I was describing what the standard detection theory says, what the counts are, and what the difference is between <[G]> and <[C]>. You keep bringing up the same didactic toy models and claims, taken in the most literal way.

The shorthand "correlation" function for <[G]> in Quantum Optics is just that. You need a grain of salt to translate what it means in terms of detector counts (it is standard QO reconstruction of <[G]> from the measurement of the count correlation observable <[C]>). It is not a correlation function of any counts -- [G] is not an observable which measures correlations in photo-electron counts. It is an observable which is measured by reconstruction of such observable [C], the count correlation observable (see [4] for distinctions).

The [G] is defined by taking [C] and commuting all field amplitude operators A+ to the left and all A to the right. The two, [C] and [G], are different observables. <[C]> is measured directly as the correlation among the counts (the measured photocurrents approximate the "ideal" photo-electron counts assumed in [C]). Then <[G]> is reconstructed from <[C]> through the standard QO subtraction procedures.

As a rough shorthand one can think of <[G]> as a correlation function. But [G] is not a correlation observable of anything that can be counted (that observable is [C]). So one has to watch how far one goes with such a rough shorthand, otherwise one may end up wondering why the "counts" which correlate as <[G]> says they ought to come out negative. They don't. There are no such "G counts", since [G] isn't a genuine correlation observable and thus <[G]> is not a genuine correlation function. You can, of course, rewrite it purely formally as a correlation of some abstract G_counts, but there is no direct operational mapping of these abstract G_counts to anything that can be counted. In contrast, for the observable [C], which is defined as a proper correlation observable of the photo-electron counts -- the kind of counts you can assign operational meaning to (i.e. map to something which can be counted: the photo-electrons, or approximately their amplified currents) -- the correlation function <[C]> has no such problems as puzzling non-classical correlations or mysterious negative counts.
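
As a side note on the ordering itself, here is a single-mode toy computation (plain Python, truncated Fock space) showing how the normal-ordered second moment differs from the un-ordered one by the commutator ("vacuum") term. It is only an illustration of operator ordering, not a model of the detectors or of the PDC states discussed here:

```python
import numpy as np
from math import factorial

N = 30                                     # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator, a|n> = sqrt(n)|n-1>
ad = a.conj().T                            # creation operator
n_op = ad @ a                              # photon number operator

def expval(op, psi):
    return float(np.real(psi.conj() @ op @ psi))

# One-photon Fock state |1>
fock1 = np.zeros(N)
fock1[1] = 1.0

# Truncated coherent state |alpha>, renormalized
alpha = 1.5
coh = np.array([alpha**k / np.sqrt(factorial(k)) for k in range(N)])
coh /= np.linalg.norm(coh)

for name, psi in [("Fock |1>", fock1), ("coherent", coh)]:
    unordered = expval(n_op @ n_op, psi)       # <n^2>
    normal    = expval(ad @ ad @ a @ a, psi)   # <a+ a+ a a> = <n(n-1)>
    print(name, unordered, normal)

# Fock |1> : <n^2> = 1, normal-ordered value = 0
# coherent : <n^2> = |alpha|^4 + |alpha|^2, normal-ordered value = |alpha|^4
# Normal ordering removes the commutator ("vacuum") contribution <n>.
```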
 
  • #163
vanesch Look at the general formulation of the path integral (a recent exposition is by Zee, but any modern book such as Peskin and Schroeder, or Hatfield will do).
The integral clearly contains the FULL nonlinear dynamics of the classical fields.


You may be overinterpreting the formal path integral "solutions" here. The path integral computes the Green functions of the regular QFT (the linear evolution model in Fock space with its basis from the linearized classical theory), and not (explicitly) the nonlinear classical fields solution. The multipoint Green function is a solution only in the sense of approximating the nonlinear solution via the multipoint collisions (of quasiparticles; or what you call particles such as "photons", "phonons" etc) while the propagation between the collisions is still via the linear approximations of the nonlinear fields, the free field propagators. The collisions merely re-attach the approximate free (linearized) propagators back to the nonlinear template, or as I labeled it earlier, it is a piecewise linear approximation (for a funny tidbit on this, check L.S. Schulman "Techniques and applications of Path Integration" Wiley 1981; pp 39-41, the Appendix for the WKB chapter, where a bit too explicit re-attachment to the classical solution is labeled "an embarrassment to the purist"). If you already have the exact solutions of the classical nonlinear fields problem, you don't need the QFT propagators of any order at all.

For example, Barut's replication of the QED radiative corrections in his nonlinear fields model, purely as classical solution effects, makes redundant the computation of these same effects via the QED expansion and propagators (to say nothing of making a whole lot more sense). You can compute the QFT propagators if you absolutely want to do so (e.g. to check the error behavior of the perturbative linearized approximation). But you don't need them to find any new physics that is missing in the exact solutions of the nonlinear classical fields that you already have.

Note also that path integrals and the Feynman diagrams, with their 'quasiparticle' heuristic/mnemonic imagery, are a pretty standard approximation tool for many nonlinear PDE systems outside of QFT, and even outside of physics (cf. R.D. Mattuck "A Guide to Feynman Diagrams in the Many-Body Problem" Dover 1976; or various math books on nonlinear PDE systems; Kowalski's is just one general treatment of that kind, where he emphasized the linearization angle and the connection to the Backlund transformation and inverse scattering methods, which is why I brought him up earlier).
 
  • #164
nightlight said:
As a rough shorthand one can think of <[G]> as a correlation function. But [G] is not a correlation observable of anything that can be counted (that observable is [C]). So one has to watch how far one goes with such a rough shorthand, otherwise one may end up wondering why the "counts" which correlate as <[G]> says they ought to come out negative.

My opinion is that you make an intrinsically simple situation hopelessly complicated. BTW, |<psi2| O | psi1>|^2 and |<psi2| :O: |psi1>|^2 are both absolute values squared of complex numbers, so both are non-negative. Hence the normal ordering cannot render G negative.

Nevertheless, this stimulated me to have another look at quantum field theory, which I thought was locked up in particle physics. But I think I'll bail out of this discussion for a while until I'm better armed to tear your arguments to pieces :devil: :devil:

cheers,
Patrick.
 
  • #165
nightlight said:
The multipoint Green function is a solution only in the sense of approximating the nonlinear solution via the multipoint collisions (of quasiparticles; or what you call particles such as "photons", "phonons" etc) while the propagation between the collisions is still via the linear approximations of the nonlinear fields, the free field propagators.

No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

But I will tell you a little secret which will make you famous if you listen carefully. You know, in QCD, the difficulty is that the series development in the coupling constant doesn't work well at "low" energies (however, it works quite well in the high energy limit). Now, the stupid lot of second quantization physicists are doing extremely complicated things in order to try to get a glimpse of what might happen at low energies. They don't realize that it is sufficient to solve a classical non-linear problem. If they realized this, they would be able to simplify the calculations enormously, because you could then apply finite-element techniques. That would then allow them to calculate nuclear structure without any difficulty; compared to what they try to do right away, it would be easy. So, hint hint, solve the classical non-linear problem of the fields, say, for 3 up quarks and 3 down quarks and you'll find... deuterium ! :smile:

cheers,
Patrick.
 
  • #166
vanesch No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

Formally, the path integral with the full S implicitly contains the full solution of the nonlinear PDE system, just as the symbolic sum from 0 to infinity of a Taylor series is formally the exact function. But each specific multipoint G is only a scattering-matrix-type piecewise linearization of the exact nonlinear problem. The multipoint Green function approximates the propagation between the collisions using the free field propagators (which are only the solutions of the linearized approximation of the nonlinear equation, not solutions of the exact nonlinear equation), while the interaction part, the full nonlinear system, is "turned on" only at a finite number of points (and in their infinitesimal vicinity), as in scattering theory (this is all explicit in the S-matrix formulations of QED).

That is roughly analogous to each x^n term of a Taylor series approximating some function f(x) with the next higher-order polynomial, where the full function f(x) is "turned on" only at the point x=0 and its "infinitesimal" vicinity, to obtain the needed derivatives.
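
A tiny numerical illustration of that analogy (plain Python, exp(x) chosen just as an example function):

```python
import math

def taylor_exp(x, order):
    """Partial sum of the Taylor series of exp(x) around x = 0."""
    return sum(x**k / math.factorial(k) for k in range(order + 1))

x = 1.0
for order in (1, 2, 4, 8):
    approx = taylor_exp(x, order)
    print(order, approx, math.exp(x) - approx)

# Each finite order is just a polynomial, not exp(x); the error shrinks order
# by order, but the full function only ever enters through its derivatives
# at the single point x = 0.
```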

So, hint hint, solve the classical non-linear problem of the fields, say, for 3 up quarks and 3 down quarks and you'll find... deuterium !

Before deciding that exactly solving the general equations of interacting nonlinear QCD fields is a trivial matter, why don't you try the toy problem of 3 simple point particles with a simple Newtonian gravitational interaction and given masses m1, m2, m3 and positions r1, r2, r3. It is an infinitely simpler problem than the nonlinear QCD fields problem, so maybe, say in ten minutes, you could come back and give us the exact three-body solution in closed form in the parameters as given? Try it, it ought to be trivial. I'll check back to see the solution.

Note also that even the solutions to the much simpler nonlinear coupled Maxwell-Dirac equations in 3+1 or even reduced dimensions haven't budged an inch (in terms of getting exact solutions) despite decades of pretty good mathematicians banging their heads against them. That's why physicists invented the QED expansion, to at least get some numbers out of it. Compare Barut's QED calculations to the conventional QED calculations of the same numbers and see which is easier and why it wasn't done via the nonlinear equations. The nonlinear field approach is only a conceptual simplification, not a computational one. The computational simplification was the QED.
 
  • #167
vanesch My opinion is that you make an intrinsically simple situation hopelessly complicated. BTW, |<psi2| O | psi1>|^2 and |<psi2| :O: |psi1>|^2 are both absolute values squared of complex numbers, so both are non-negative. Hence the normal ordering cannot render G negative.

It is not <[G]> that is being made negative (just as in the superposition example (a), it wasn't C that was made negative). What I said is that if you take <[G]> and re-express it formally as if it were an actual correlation function of some fictitious/abstract "G_counts" (i.e. re-express it in the form of a sum or an integral of products GN1*GN2... over a sequence of time intervals/coincidence windows), it is these fictitious G_counts, the GN's, that may have to become negative if they are to reproduce <[G]> (which in turn is reconstructed from the measured <[C]>, the real correlation function) for some density operators (such as PDC). This is not an inherent problem of the definition of <[G]> or its practical use (since nothing is counted as GN's, they're merely formal variables without any operational meaning), but it is a common mixup in the operational interpretation of Glauber/QO correlations (which confuses the genuine correlation observable [C] with Glauber's pseudo-correlation observable [G]).

This is the same type of negative probability for abstract counts as in the interference example (a) where, if you were to express the combined-source count formally as a sum of fictitious counts, i.e. C=G1+G2, then some of these fictitious counts may have to be negative (see the earlier message) to reproduce the measured C. The C itself is not negative, though.

But I think I'll bail out of this discussion for a while ...

I am hitting a busy patch in my day job, too. It was fun and it helped me clear up a few bits for myself. Thanks for the lively discussion, and to all the other participants as well.
 
  • #168
vanesch No, the multipoint Green function IS the path integral with the full nonlinear solution and all the neighboring non-solutions. It is its series development in the interaction constants (the Feynman graphs) that you seem to be talking about, not the path integral itself.

But I will tell you a little secret which will make you famous if you listen carefully...


Your mixup here may be between the "full nonlinear solution" of the Lagrangian in the S of the path integral (which is a single classical particle dynamics problem, a nonlinear ODE) and the full nonlinear solution of the classical fields (a nonlinear PDE). These are quite different kinds of "classical" nonlinear equations (ODE vs PDE), and they're only formally related via the substitution of the particle's momentum p (a plain function in the particle ODE) by the partial derivative iD/Dx for the field PDEs. That's the only way I can see any kind of rationale (erroneous as it may be) behind your comments above (and the related earlier comments).
 
  • #169
How can we be sure that we send only one electron, and that the diffraction then makes the interference pattern?
 
  • #170
nightlight said:
vanesch
Your mixup here may be between the "full nonlinear solution" of the Lagrangian in the S of the path integral (which is a single classical particle dynamics problem, a nonlinear ODE) and the full nonlinear solution of the classical fields (a nonlinear PDE).

And I was not going to respond... But I can't let this one pass :-p

The QED path integral is given by:
(there might be small errors, I'm typing this without any reference, from the top of my head)

Lagrangian density:

L = -1/4 F_uv F^uv + psi-bar (i gamma^mu D_mu - m) psi

(for the Dirac field the mass term comes in as m, not m^2; the m^2 form belongs to scalar fields)

with D_mu the covariant derivative: D_mu = d_mu - i q A_mu (sign and factor-of-i conventions vary between texts)

Clearly, this is the Lagrangian density which gives you the coupled Maxwell-Dirac equations if you work out the Euler-Lagrange equations. Note that these are the non-linear PDEs you are talking about, no ?
(indeed, the coupling is present through the term A in D_mu)

the action is defined:

S[A_mu,psi] = Integral over spacetime (4 coordinates) of L, given a solution (or non-solution) of fields A_mu and psi.

S is extremal if A_mu and psi are the solutions to your non-linear PDE.

Path integral for an n-point correlation function (I take an example: a 4-point function, which is taken to be one psi-bar and one psi factor (an electron-positron pair) and two photon fields, for instance corresponding to pair annihilation; but also Compton scattering - however, it doesn't matter what it represents in QED; it is just the quantum amplitude corresponding to a product of 4 fields)

<0| psibar[x1] psi[x2] A[x3] A[x4] |0> = Path integral over all possible field configurations of A_mu and psi of {Exp[ i / hbar S] psibar[x1] psi[x2] A[x3] A[x4]}
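
For readability, here is a compact LaTeX restatement of the Lagrangian and the four-point functional integral above. The sign and factor-of-i conventions are one common choice and may differ from other texts, and the denominator is just the normalization mentioned in the EDIT at the end of this post:

```latex
% One common convention; signs and factors of i vary between texts.
\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
              + \bar\psi\,(i\gamma^\mu D_\mu - m)\,\psi ,
\qquad
D_\mu = \partial_\mu - iqA_\mu ,
\qquad
S[A,\psi,\bar\psi] = \int d^4x\,\mathcal{L} .

% Time-ordered four-point function as a normalized functional integral:
\langle 0|\,T\,\bar\psi(x_1)\,\psi(x_2)\,A(x_3)\,A(x_4)\,|0\rangle
= \frac{\int\!\mathcal{D}A\,\mathcal{D}\psi\,\mathcal{D}\bar\psi\;
        e^{iS/\hbar}\;\bar\psi(x_1)\,\psi(x_2)\,A(x_3)\,A(x_4)}
       {\int\!\mathcal{D}A\,\mathcal{D}\psi\,\mathcal{D}\bar\psi\;
        e^{iS/\hbar}} .
```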

For the classical solution, the path integral reduces to one single field configuration, namely the classical solution psi_cl[x] and A_cl[x], and we find of course that the 4-point correlation function is then nothing else but
psibar_cl[x1] psi_cl[x2] A_cl[x3] A_cl[x4] (times a phase factor exp(iS0), with S0 the action when you fill in the classical solution). This classical solution is the one that makes S extremal, and hence the one for which psi_cl and A_cl obey the non-linear PDEs which I think you propose.
But the path integral also includes all the non-solutions to the classical problem, with their phase weight, and it is THAT PROBLEM which is so terribly hard to solve and for which not much is known except the series development given by Feynman diagrams. If it were only a matter of finding the classical solution to the non-linear PDEs, it would be peanuts to solve :biggrin:.

Now, there is one thing I didn't mention, and that is that because of the fermionic nature of the Dirac field, its field values aren't taken to be 4-tuples of complex numbers at each spacetime point, but are taken to be anticommuting Grassmann variables; as such, indeed, the PDE is different from the PDE where the Dirac equation takes its solutions as complex fields. You can do away with that, and then you'd have some kind of bosonic QED (for which the quantum case falls on its face due to the spin-statistics theorem, but for which you can always find classical solutions).

But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

cheers,
patrick.

PS: I could also include nasty remarks such as not to confuse pedagogical introductions to the path integral with the professional use by particle physicists, but I won't :devil: :smile:


EDIT: there were some errors in my formulas above. The most important one is that the correlation is not given by the path integral alone, but that we also have to divide by the path integral of exp(iS) without extra factors (that takes out the factor exp(iS0) I talked about).
The second one is that one has a path integral over psi, and an independent one over psi-bar as if it were an independent variable. Of course, for the classical solution it doesn't change anything.
Finally, x1, x2, x3 and x4 have to be in time order. Otherwise, we can say that we are taking the correlation of the time-ordered product.
 
  • #171
Wouldn't it be easier to think of the electron in Young's experiment as a 'speedboat' passing through the vacuum, but that also generates a wave in the same way as a boat does on water.
What would the result be if you used a ripple tank and water to recreate Young's experiment? And how would we interpret it?
 
  • #172
Ian said:
Wouldn't it be easier to think of the electron in Young's experiment as a 'speedboat' passing through the vacuum, but that also generates a wave in the same way as a boat does on water.
What would the result be if you used a ripple tank and water to recreate Young's experiment? And how would we interpret it?

We know what happens when an electron (or any charged particle) generates a wake. The double-slit result is NOT due to a wake.

Zz.
 
  • #173
vanesch said:
But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

I would like to add that the classical solution often gives the main contribution to the path integral, because the classical solution makes S stationary, which means that to first order all the neighboring field configurations (which are non-solutions, but are "almost" solutions) have almost the same S value and hence the same phase factor exp(iS). As such, they add up constructively in the path integral. If the fields are far from the classical solution, their neighbours will have S values which change at first order, hence different phase factors, and they tend to cancel out.
So for certain cases, it can be that the classical solution gives you a very good approximation (or even the exact result, I don't know) to the full quantum problem, especially if you limit yourself to a series development in the coupling constant. It can then be that the first few terms give you identical results. It is in that light that I see Barut's results.
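A one-dimensional caricature of that stationary-phase argument (nothing to do with QED itself; just a numerical check with a toy quadratic "action" of my own choosing):

```python
import numpy as np

# Toy 1-D "path integral": I(hbar) = integral of exp(i*S(x)/hbar) dx,
# with the toy action S(x) = (x - 1)^2, stationary at x = 1.
def S(x):
    return (x - 1.0) ** 2

x = np.linspace(-20.0, 22.0, 2_000_001)    # wide range around the stationary point
dx = x[1] - x[0]

for hbar in (1.0, 0.1, 0.01):
    full = np.sum(np.exp(1j * S(x) / hbar)) * dx
    # Stationary-phase (Fresnel) value for this quadratic action:
    # sqrt(pi * hbar) * exp(i*pi/4), i.e. only the neighborhood of x = 1 matters.
    sp = np.sqrt(np.pi * hbar) * np.exp(1j * np.pi / 4)
    print(hbar, abs(full), abs(sp))

# Contributions far from the stationary ("classical") point oscillate and
# cancel, so the integral is dominated by the vicinity of x = 1, more and
# more sharply as hbar decreases.
```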

cheers,
Patrick.
 
  • #174
nightlight said:
vanesch My Mandel and Wolf is in the mail, but I checked the copy in our library. You are quite wrong concerning the standard quantum predictions. On p706, section 14.5, it is indicated that the anticoincidence in a 1-photon state is perfect, as calculated from the full quantum formalism.

The textbook doesn't dwell on the fine points of the distinction between Glauber's pseudo-correlation function <[G]> and the true count coincidence correlation observable <[C]>, but uses the conventional Quantum Optics shorthand (which tacitly includes the vacuum-effect subtractions of standard QO procedures, reconstructing <[G]> from the obtained count correlation <[C]>).

Well, it seems that pedagogical textbook student knowledge is closer to experimental reality than true professional scientific publications then :smile: :smile:

Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

cheers,
Patrick.
 
  • #175
vanesch
<0| psibar[x1] psi[x2] A[x3] A[x4] |0> = Path integral over all possible field configurations of A_mu and psi of {Exp[ i / hbar S] psibar[x1] psi[x2] A[x3] A[x4]}

Thanks for clearing up what you meant. I use the term "path integral" when the sum is over paths and "functional integral" when it is over field configurations. Feynman's original QED formulation was in terms of the "path integrals", and no new physics is added to it by re-expressing it in an alternative formalism, such as the "functional integrals" (just as the path integral formulation doesn't add any new physics to the canonically quantized QED or to the S-matrix formalism).

Therefore the state evolution in Fock space generated by the obtained multipoint Green functions is still a piecewise linearized evolution (the linear sections are generated by the Fock space H) with the full interaction being turned on only within the infinitesimal scattering regions. If you have a nice physical picture, though, of what goes on here in terms of your favorite formalism, I wouldn't mind learning something new.

It is important to distinguish here that even though the 2nd quantization by itself doesn't add any new physics (being a general purpose linearization algorithm used in various forms in many other areas, an empty mathematical shell like Kowalski's Hilbert space linearization or, as Jaynes put it, like Ptolemaic epicycles) that wasn't already present in the nonlinear fields, new physics can be (and is likely being) added, physics that wasn't present in the initial nonlinear fields model, within the techniques working out the details of the scattering (in the S-matrix imagery), since in QED these techniques were tweaked over the years to fit the experimental data. Obviously, this new physics, distilled from the experiments and absorbed into the computational rules of QED, is by necessity in the form given to it by the 2nd quantization formalism it was fitted into. The mathematical unsoundness and logical incoherence of the overall scheme as it evolved only aided its flexibility, to better wrap around whatever experimental data turned up.

That is why it is likely that Barut's self-fields are not the whole story, the most notable missing pieces being the charge quantization and the electron localization (not that QED has an answer other than 'that's how it is, and the infinities hocus-pocus go away'). In principle, had the nonlinear classical fields been the starting formalism instead of the 2nd quantization (in fact they were the starting formalism and the most natural way to look at the Maxwell-Schroedinger/Dirac equations, and that's how Schroedinger understood it from day one), all the empirical facts and new physics accumulated over the decades would have been incorporated into it and would have taken a form appropriate to that formalism, e.g. some additional interaction terms or new fields in the nonlinear equations. But this kind of minor tweaking is not where the physics will go; it's been done and that way can go only so far. My view is that Wolfram's NKS points in the direction of the next physics, recalling especially the http://pm1.bu.edu/~tt/publ.html cellular automata modelling of physics (note that his http://kh.bu.edu/qcl/ is wrong on his home page; see for example http://kh.bu.edu/qcl/pdf/toffolit199064697e1d.pdf ) and Garnet Ord's interesting models (which reproduce the key Maxwell, Dirac and Schroedinger equations as purely enumerative and combinatorial properties of plain random walks, without using an imaginary time/diffusion constant; these can also be re-expressed in the richer modelling medium of cellular automata and predator-prey eco networks, where they're much more interesting and capable). Or another little curiosity, again from a mathematician, Kevin Brown, showing the relativistic velocity addition formula as a simple enumerative property of set unions and intersections.

But it should now be clear that the non-linear PDE that you are always talking about IS FULLY TAKEN INTO ACCOUNT as a special case in the path integral.

The "fully taken into account" is in the same sense that Taylor or Fourier series cofficients "fully take into account" the function the series is trying to approximate. In either situation, you have to take fully into account, explictly or implicitly, that which you are trying to approximate[/color]. How else could the algorithm know it isn't approximating something else. In the functional integrals formulation of QED, this tautological trait[/color] is simply more manifest[/color] than in the path integrals or the canonical quantization. There is thus nothing nontrivial about this "fully taking into account" tautological phenomenon you keep bringing up.

If there is anything it does for the argument, this more explicit manifestation of the full nonlinear dynamics only emphasizes my view, by pointing more clearly at what it is that the algorithm is ultimately trying to get at, what the go of it is. And there it is.

But the path integral also includes all the non-solutions to the classical problem, with their phase weight,

If you approximate a parabola with a Fourier series, you will be including (adding) a lot of functions which are not even close to a parabola. It doesn't mean that all this inclusion of non-parabolas amounts, after all is said and done, to anything more than what was already contained in the parabola. In other words, the Green functions do not generate in the Fock space the classical nonlinear PDE evolution, but only a series of piecewise linearized approximations of it, none of which is the same as the nonlinear evolution (note also that obtaining a Green function could generally be more useful in various ways than just having one classical solution of the nonlinear system). This kind of overshoot/undershoot busywork is a common trait of approximating.
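
The parabola remark can be checked in a few lines (a toy sketch; the choice of f(x) = x^2 on [-pi, pi] and the cosine expansion are just for illustration):

```python
import numpy as np

# Fourier series of f(x) = x^2 on [-pi, pi]:
#   x^2 = pi^2/3 + 4 * sum_{n>=1} (-1)^n cos(n x) / n^2
x = np.linspace(-np.pi, np.pi, 1001)

def partial_sum(x, n_max):
    s = np.full_like(x, np.pi**2 / 3.0)
    for n in range(1, n_max + 1):
        s += 4.0 * (-1) ** n * np.cos(n * x) / n**2
    return s

for n_max in (1, 3, 10, 50):
    err = np.max(np.abs(partial_sum(x, n_max) - x**2))
    print(n_max, err)

# Every term added is a cosine -- nothing like a parabola -- yet the sum
# converges to the parabola; the "non-parabolas" add no content beyond it.
```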
 
  • #176
vanesch Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

Now there is an authority. I was lucky to be fighting just the little points and thoughts of von Neumann, Bohr, Heisenberg, Feynman, Bell, Zeh, Glauber,... What do I do now, sci.physics.research is coming.

Should we go look for a few other selected pearls of wisdom from over there? Just to give it some more weight. If you asked here, you would get the same answer, too. If you had asked for the shape of the Earth at some point in the past, all would have agreed it was flat.

Physics (thankfully) isn't literary criticism, where you can just declare "Derrida said..." and you win. In physics and mathematics the facts and logic have to stand or fail on their own merits. Give them a link here if you need help, let them read the thread, let them check and pick apart the references, and then 'splain it to me how it really works.
 
  • #177
nightlight said:
Therefore the state evolution in Fock space generated by the obtained multipoint Green functions is still a piecewise linearized evolution (the linear sections are generated by the Fock space H) with the full interaction being turned on only within the infinitesimal scattering regions. If you have a nice physical picture, though, of what goes on here in terms of your favorite formalism, I wouldn't mind learning something new.

You seem to have missed what I pointed out. The "piecewise linearised evolution" is correct when you consider the PERTURBATIVE APPROXIMATION of the functional integral (you know, in particle physics everybody calls it the path integral) using Feynman diagrams. Feynman diagrams (or, for that matter, Wick's theorem) are a technique to express each term in the series expansion of the FULL CORRELATION FUNCTION as combinations of the FREE CORRELATION FUNCTIONS, which are indeed the exact solutions to the linear quantum field parts. But I wasn't talking about that. I was talking about the full correlation function itself. That one doesn't have any "piecewise linear approximations" in it, and contains the FULL DYNAMICS. Only, except in special circumstances, nobody knows how to solve that problem directly; but the quantity itself is no approximation and contains no linearisation at all, and contains, as a very special case, the full nonlinear classical solution - I thought that my previous message made that clear.
There are some attempts (which work out better and better) to tackle the problem differently than by series development, such as lattice field theory, but it still requires enormous amounts of CPU power, as compared to nonlinear classical problems such as fluid dynamics. I don't know much about these techniques myself, but there are some specialists around here. But in all cases, we try to solve the same problem, which is a much more involved problem than the classical non-linear field equations, namely to calculate the full correlation functions as I wrote them out.

If you take your opinions and information from the sixties, indeed, you can have the impression that QFT is a shaky enterprise for which you change the rules as data come in. At a certain point, it looked like that. However, by now, it is on much firmer grounds (although problems remain). First of all, the amount of data which is explained by it has exploded ; 30 years of experimental particle physics confirm the techniques. If it were a fitting procedure, that would mean that by now we'd have thousands of different rules to apply to fit the data. It isn't the case. There are serious mathematical problems too, if QFT is taken as a fundamental theory. But not if you suppose that there is a real high energy cutoff determined by what will come next. And all this thanks to second quantization :-p


cheers,
Patrick.
 
  • #178
nightlight said:
In physics and mathematics the facts and logic have to stand or fail on their own merits. Give them a link here if you need help, let them read the thread, let them check and pick apart the references, and then 'splain it to me how it really works.

Well, you could start by reading the article of the experiment by Thorn
:-p

cheers,
Patrick.

EDIT: Am. J. Phys. Vol 72, No 9, September 2004.
They did EXACTLY the experiment I proposed - don't forget that it is my mind that can collapse any wavefunction :smile: :smile:

The experiment is described in painstaking detail, because it is meant to be a guide for an undergrad lab. There is no data tweaking at all.

They use photodetectors with QE 50% and 250 counts per second dark current.

They find about 100,000 cps triggers of the first photon, and about 8800 cps of coincidences between the first and one of the two others, within a time coincidence window of 2.5 nanoseconds.

The coincidences are hardwired logic gates which count.
If the first-photon clicks are given by N_G, the two individual coincidences between trigger and second photon (left or right) are N_GT and N_GR, and the triple coincidence is N_GTR, then they calculate (no subtractions, no efficiency corrections, nothing):

g(2)(0) = N_GTR N_G / (N_GT N_GR)

In your model, g(2) has to be bigger than 1.

They find: 0.0177 +/- 0.0026 for a 40 minute run.

Now given the opening window of 2.5 ns and the rates of the different detectors, they also calculate what they expect as "spurious coincidences". They find 0.0164.
Now if that doesn't nicely demonstrate the full anticorrelation I told you about, I don't know what ever will.
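
A quick back-of-the-envelope check of those numbers (plain Python; the 50/50 split of the 8800 cps pair rate into N_GT and N_GR, and the 3.5 cps triple rate, are my rough readings of figures quoted in this thread, not values taken from the paper itself):

```python
# Rates quoted in the thread (counts per second)
N_G   = 100_000.0   # trigger ("first photon") singles rate
N_GT  = 4_400.0     # trigger & transmitted-arm coincidences (assumed even split of ~8800 cps)
N_GR  = 4_400.0     # trigger & reflected-arm coincidences (same assumption)
N_GTR = 3.5         # triple coincidences ("3 or 4 per second" mentioned later in the thread)

g2 = N_GTR * N_G / (N_GT * N_GR)
print(g2)                        # ~0.018, close to the reported 0.0177 +/- 0.0026

# A classical intensity-splitting (square-law) model requires g2 >= 1, i.e.
# at least N_GT * N_GR / N_G triple coincidences per second:
print(N_GT * N_GR / N_G)         # ~194 cps, versus the few per second observed
```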
 
  • #179
vanesch Ok, I asked the question on sci.physics.research and the answer was clear, not only about the quantum prediction (there IS STRICT ANTICORRELATION), but also about its experimental verification. Look up the thread "photodetector coincidence or anticoincidence" on s.p.r.

I just checked; look at the kind of argument he gives:

But for this, look at Davis and Mandel, in _Coherence and Quantum Optics_ (ed. Mandel and Wolf, 1973). This is a very careful observation of ``prompt'' electrons in the photoelectric effect, that is, electrons emitted before there would be time, in a wave model, for enough energy to build up (even over the entire cathode!) to overcome the potential barrier. The experiment shows that the energy is *not* delivered continuously to a photodetector, and that this cannot be explained solely by detector properties unless one is prepared to give up energy conservation.

This is the Old Quantum Theory (Bohr atom era) argument for Einstein's 'light quantum'. The entire photoeffect is fully within the semiclassical model -- Schroedinger atoms interacting with the classical EM waves. Any number you can pull out of QED/Quantum Optics on this, you can get from the semiclassical model (usually much more easily; check refs [1] & [2] on detector theory to see the enormous difference in the effort to reach the same result). The only thing you won't get is a lively story about a "particle" (but after all the handwaving, there won't be any number coming out of all that 'photon' particle ballyhoo). And, of course, you won't get to see the real photons, shown clear as day in the Feynman diagrams.

What a non-starter. Then he quotes the Grangier et al. 1986 paper that I myself told you to check out.
 
  • #180
vanesch Well, you could start by reading the article of the experiment by Thorn

Abstract: "We observe a near absence of coincidence counts between the two detectors—a result inconsistent with a classical wave model of light[/color], "

I wonder if they tested against the Marshall-Santos classical model (with ZPF) for the PDC, which covers any multipoint coincidence experiment you can do with a PDC source, plus any number of mirrors, lenses, splitters, polarizers,... (and any other linear optical elements) with any number of detectors. Or was it just the usual strawman "classical model"?

Even if I had AJP access, I doubt it would have been worth the trouble, considering the existence of general results for the thermal, laser and PDC sources, unless they explicitly faced head-on the no-go result of Marshall-Santos and showed how their experiment gets around it. It is easy to fight and "win" if you get to pick the opponent (the usual QO/QM "classical" caricature), while knowing that no one will challenge your pick since you are the gatekeeper.

You can't, not even in principle, get any measured count correlation (which approximates the expectation value <[C]>, where [C] is the observable for the correlations in the numbers of photo-electrons ejected at multiple detectors; see refs [1] & [4] in the detector theory message) in Quantum Optics which violates classical statistics. That observable (the photo-electron count correlation) has all counts strictly non-negative (they're proportional to the EM field intensity in the photo-detection area) and the counts are in the standard correlation form, Sum(N1*N2), the same form as Bell's LHVs. Pure classical correlation.
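
To make the Sum(N1*N2) remark concrete, here is a small Monte Carlo sketch in the semiclassical picture (a classical intensity shared by two detectors behind a balanced splitter, photo-electron counts Poissonian in the local intensity). All numbers are invented for illustration; this is not a model of the AJP experiment, just of the kind of correlation such a semiclassical description can produce:

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 1_000_000
eta = 0.05                      # illustrative detection efficiency factor

def g2_cross(intensity):
    """Photo-electron cross-correlation for two square-law detectors
    behind a balanced splitter, semiclassical model."""
    n1 = rng.poisson(eta * intensity / 2.0)
    n2 = rng.poisson(eta * intensity / 2.0)
    return np.mean(n1 * n2) / (np.mean(n1) * np.mean(n2))

steady  = np.full(shots, 100.0)            # constant classical intensity
thermal = rng.exponential(100.0, shots)    # strongly fluctuating intensity

print(g2_cross(steady))    # ~1.0
print(g2_cross(thermal))   # ~2.0

# For any non-negative classical intensity distribution this ratio equals
# <I^2>/<I>^2 >= 1 (Cauchy-Schwarz); it never drops below 1, which is the
# sense in which these photo-electron count correlations stay "classical".
```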

You can only reconstruct Glauber's pseudo-correlation function from the measured count correlation observable (via Glauber's prescription for subtraction of vacuum-generated modes, the standard QO way of coincidence "counting") -- and it is that reconstructed function <[G]>, if you were to express it formally as a correlation function of some imaginary G_counts, i.e. write it formally as <[G]> = Sum(GN1*GN2), which would show that some of these G_counts (the GN's) would need to be negative (because for the PDC the standard Glauber QO "counting" prescription, equivalent to subtracting the effects of vacuum-generated modes, over-subtracts here, since the PDC pair actually uses up the phase-matching vacuum modes). The G_counts have no direct operational meaning; there is no G_count detector which counts them. There is not even a design for one (Glauber's 1963 QO founding papers suggest a single atom might do it, although he stopped short of calling for the experimenters to hurry up and make one like that).

Note that for a single hypothetical "photon" counter device these G_counts would be the values of the "pure incident" photon number operator, and as luck would have it, the full quantum field photon number operator, which happens to be a close relative of the one the design calls for, is none other than a Hermitean operator in the Fock space, thus it is an "observable", which means it is observable, which, teacher adds, means, kids, that this photo-detector we have right here, in our very own lab, gives us counts which are the values of this 'pure incident photon number observable' { the "pure incident" photons, interacting with the detector's atoms, are imagined in this "photon counter" concept as somehow separated out from all the vacuum modes, even from those vacuum modes which are superposed with the incident EM field, not by any physical law or known data, but by the sheer raw willpower of the device designer, Glauber: "they [vacuum modes] have nothing to do with the detection of photons" -- and presto! the vacuum modes are all gone, even the superposed ones, miracle, Hallelujah, Praise the Almighty.} The G_counts are a theoretical construct, not something any device actually counts. And it certainly is not what the conventional detectors (of the AJP article experiment) did -- those counted as always, or at least since 1964, just the plain old boring classical [C], the photo-electron count correlation observable.

The photo-electron count correlation observable [C], the one their detectors actually report via the count correlation <[C]> (and which is approximately measured as the correlation of the amplified photo-currents; see [1],[2],[4]), is fully 100% classical: no negative counts required, no non-classical statistics required to explain anything they can ever show.
 
  • #181
vanesch But I wasn't talking about that. I was talking about the full correlation function itself. That one doesn't have any "piecewise linear approximations" in it, and contains the FULL DYNAMICS. Only, except in special circumstances, nobody knows how to solve that problem directly; but the quantity itself is no approximation and contains no linearisation at all, and contains, as a very special case, the full nonlinear classical solution - I thought that my previous message made that clear.

I know what you were saying. The question is, for these few exactly solvable toy models, where you can get the exact propagator (which would have to be a non-linear operator in the Fock space) and compute the exact classical fields evolution -- do the two evolutions differ at all?

This is just a curiosity, though, not a decisive matter regarding QED or QCD. Namely, there could conceivably be a toy model where the two evolutions differ, since it is not given from above that any particular formalism (which is always a huge generalization of the empirical data) has to work correctly in all of its farthest reaches. More decisive would be to see whether a Barut-type model could replicate the QED predictions to at least another order or two, and if not, whether experiment can pick one or the other.

If you take your opinions and information from the sixties,

Wait a minute, that's the lowliest of the lowest lows:)

indeed, you can have the impression that QFT is a shaky enterprise for which you change the rules as data come in.

Jaynes had Eugene Wigner for his PhD advisor and for decades was in the circle of, and a close friend of, the people whose names the theorems and models and theories in the QED textbooks are named after. Read what he was saying well into the 1990s. In comparison, I would be classified as a QED fan.

And all this thanks to second quantization

Or, maybe, despite it. I have nothing against it as an effective algorithm, unsightly as it is. Unfortunately too much physics has been ensnared in its evolving computational rules, largely implicitly, for anyone to be able to disentangle it and teach the cleaned-up leftover algorithm in an applied math class. But it shouldn't be taken as a sacred oracle, a genuine foundation, or the way to go.
 
  • #182
vanesch The "piecewise linearised evolution" is correct when you consider the PERTURBATIVE APPROXIMATION[/color] of the functional integral (you know, in particle physics everybody calls it the path integral) using Feynman diagrams. Feynman diagrams (or, for that matter, Wick's theorem) are a technique to express each term in the series expansion of the FULL CORRELATION FUNCTION as combinations of the FREE CORRELATION FUNCTIONS, which are indeed the exact solutions to the linear quantum field parts.

Re-reading that comment, I think you too ought to pause for a moment, step back a bit, and connect the dots you already have in front of you. I will just list them, since there are several longer messages in this thread with an in-depth discussion of each one, a hard-fought battle back and forth where each point took your and the other contributors' best shots. You can read them back, in the light of the totality listed here (and probably as many lesser dots I didn't list), and see what the outcome was.

a) You recognize here that the QED perturbative expansion is a 'piecewise linear approximation'. The QED field amplitude propagation can't be approximating another piecewise linear approximation, or linear fields. And each new order of the perturbation invalidates the previous order's approximation as a possible candidate for the final, exact evolution. Therefore, there must be a limiting nonlinear evolution, not equal to any of the orders of QED, thus an "underlying" (at the bottom) nonlinear evolution (of QED amplitudes in the coordinate basis) being approximated by the QED perturbative expansion, which is (since nothing else is left once every finite order of the QED expansion is ruled out as the last word) "some" local classical nonlinear field theory (we're not interpreting what this limit-evolution means, just establishing its plausible existence).

b) You also read within the last ten days at least some of Barut's results on radiative corrections, including the Lamb shift, the crown jewel of QED. There ought to be no great mystery, then, as to what kind of nonlinear field evolution it is that the QED amplitudes are actually piecewise linearizing.

c) What is the most natural thing that, say, a classical mathematician would have considered if someone had handed him the Dirac and Maxwell equations and told him: here are our basic equations, what ought to be done? Would he consider the EM field potential A occurring in the Dirac equation external, or the Dirac currents in the Maxwell equations external? Or would he do exactly what Schroedinger, Einstein, Jaynes, Barut... thought needed to be done and tried to do -- see them as a set of coupled nonlinear PDEs to be solved?

d) Oddly, this most natural and conceptually simplest way to look at these classical field equations, nonlinear PDEs fully formed in the 1920s, already had the high-precision result of the Lamb shift in it, with nothing that had to be added or changed (all it needed was someone to carry out the calculations for a problem already posed along with the equations, the problem of the H atom) -- it contained an experimental fact which would come twenty years later.

e) In contrast, the Dirac-Jordan-Heisenberg-Pauli quantum theory of EM fields at the time of the Lamb shift discovery (1947), a vastly more complex edifice, failed the prediction, and had to be reworked thoroughly in the years after the result, until Dyson's QED finally managed to put all the pieces together.

f) Quantum Optics may not have the magic it shows off with. Even if you haven't yet got to the detector papers, or Glauber's QO foundation papers, you should at least have grounds for rational doubt and more questions: what is really being shown by these folks? Especially in view of dots (a)-(e), which imply the Quantum Opticians may be wrong, overly enthusiastic with their claims.

g) Add to point (f) that the plain classical nonlinear fields, the Maxwell-Dirac equations, already had the correct QED radiative corrections right from the start, and these are much closer to the core of QED, its crown jewels, a couple of orders beyond the orders at which Quantum Optics operates (which is barely at the level of the Old QED).

h) In fact, the toppled Old QED of (e) already has all that Quantum Optics needs, including the key piece of Quantum Magic, the remote state collapse. Yet that Old QED flopped and the Maxwell-Dirac worked. Would it be plausible that for the key pride of QED predictions, the Lamb shift, the one which toppled the Old QED, the Maxwell-Dirac just got lucky? What about (b) & (c): lucky again that QED appears to be approximating it, in the formalism, through several orders, well beyond the Quantum Optics level as well? All just luck? And that the Maxwell-Dirac nonlinear fields are the simplest and most natural approach (d)?

i) Glauber's correlations <[G]>, the standard QO "correlations", may not be what they are claimed to be (correlating something literally and actually counted). The flat-out vacuum removal procedures over-subtract whenever there is a vacuum-generated mode. And this is the way the Bell test results are processed.

j) The photo-detectors are not counting "photons" but photo-electrons, and the established, non-controversial detector theory neither shows nor claims any non-classical photo-electron count results (it is defined as a standard correlation function with non-negative p-e counts). The entire nonclassicality in QO comes from the reconstructed <[G]>, which doesn't correlate anything literally and actually being counted (<[C]> is what is being counted). Keep also in mind dots (a)-(i).

k) The Bell theorem tests seem to have been stalled for over three decades. So far they have managed to exclude only the "fair sampling" type of local theories (no such theory ever existed). The Maxwell-Dirac theory, the little guy from (a)-(j) which does happen to exist, is not a "fair sampling" theory, and, more embarrassingly, it even predicts perfectly well what the actual counting data show and agrees perfectly with the <[C]> correlations, the photo-electron count correlations, which are what the detectors actually and literally produce (within the photo-current amplification error limits).

l) Bell's QM prediction is not a genuine prediction, in the sense of giving the range of its own accuracy. It is a toy derivation, a hint for someone to go and do the full QO/QED calculation. Indeed such calculations exist, and if one drops <[G]> and simply leaves it at the bare quantitative prediction of detector counts (which would match fairly well the photo-electron counts obtained), the QO/QED predicts the obtained raw counts well and does not predict a violation either. There is no prediction of detector counts (the photo-electron counts, the stuff that detectors actually count) which would violate Bell's inequality, not even in principle and not with the most ideal photo-electron counter conceivable, even with 100% QE. No such prediction exists, and it cannot be deduced even in principle for anything that can actually be counted.

m)[/color] The Bell's QM "prediction" does require remote non-local projection/collapse. Without it can't be derived.

n) The only reason we need collapse at all is the allegedly verified Bell QM prediction saying we can't have LHVs. Otherwise the variables could have values (such as the Maxwell-Dirac fields), merely unknown, but local and classical. The same lucky Maxwell-Dirac of the other dots.

o) Without the general 'global non-local collapse' postulate, Bell could not get the state of particle (2) to become |+> for the sub-ensemble of particle (2) instances for which particle (1) gave the (-1) result (and he does assume he can get that state, via Gleason's theorem by which the statistics determines the state; he assumes the statistics of the |+> state on the particle (2) sub-ensemble for which particle (1) gave -1). Isn't it a bit odd that to deduce the non-local QM prediction one needs to use non-local collapse as a premise? How could any other conclusion be reached, but the exclusion of locality, with the starting premise of non-locality?

p) Without the collapse postulate there is no Bell QM prediction, thus no measurement problem (Bell's no-go for LHVs), thus no reason to keep the collapse at all. The Born rule as an approximate operational rule would suffice (e.g. the way a 19th-century physicist might have defined it for light measurements: the incident energy is proportional to the photocurrent, which is correct empirically and theoretically, the square-law detection).

q) The Poissonian counts of photo-electrons in all Quantum Optics experiments preclude Bell inequality violation in photon experiments, ever. The simple classical model, the same Maxwell-Dirac from points (a)...(p), the lucky one, somehow predicts exactly what you actually measure, the detector counts, with no need for untested conjectures or handwaving or euphemisms; all it uses is the established detector theory (QED-based or Maxwell-Dirac-based) and a model of an unknown but perfectly existent polarization. And it gets lucky again. While QO needs to apologize and promise yet again that it will get it just as soon as the detectors which count Glauber's <[G]> get constructed, soon, no question about it.

r) Could it all be luck for Maxwell-Dirac? All the points above, just dumb luck? The non-linear fields don't actually contradict any quantitative QED prediction; in fact they agree with QED to an astonishing precision. The QED amplitude evolution even converges to Maxwell-Dirac, as far as anyone can see and as precisely as anything that gets measured in this area. The Maxwell-Dirac disagrees only with the collapse postulate (the general non-local projection postulate), for which there is no empirical evidence, and for which there is no theoretical need of any sort, other than the conclusions it creates by itself (such as Bell's QM prediction or the various QO non-classicalities based on <[G]> and collapse).

s) Decades of physicists have banged their heads to figure out how QM can work like that. And there is no good way out but multiple universes or the observer's mind, once all shots have been fired and all other QM "measurement theory" defense lines have fallen. It can't be defended with a serious face. That means there is no way to solve it as long as the collapse postulate is there, otherwise someone would have thought it up. And the only thing that is holding up the whole puzzle is the general non-local collapse postulate. Why do we need it? As an approximate operational rule (as Bell himself advocated in his final QM paper) with only local validity it would be perfectly fine, no puzzle. What does it do, other than uphold the puzzle of its own making, to earn its pricey keep? What is that secret invaluable function it serves, the purpose so secret and invaluable that no one knows for sure what it is, so as to be able to explain it as plainly and directly as 2+2, yet everyone believes that someone else knows exactly what it is? Perhaps, just maybe, if I dare to wildly conjecture here, there is none?

t) No single dot above, or even a few, may be decisive. But all of them? What are the odds?

PS: I will have to take at least a few weeks' break from this forum. Thanks again to you 'vanesch' and all the folks who contributed their challenges to make this a very stimulating discussion.
 
  • #183
Ok, you posted your summary, so I will post mine, and I think that after that I'll stop with this discussion too; it has been interesting, but the subjects are getting worn out in that we now seem to be camping on our positions. Also, it is starting to take a lot of time.

a) Concerning the Lamb shift: it is true that you get it out of the Dirac-Maxwell equations, because now I remember that that was how my old professor did it (I can't find my notes from long ago, they must be at my parents' home 1000 km from here, but I'm pretty sure he must have done something like Barut). The name of my old professor is Jean Reignier; he's long retired now (but I think he's still alive).

b) The fundamental reason for "second quantization" (meaning, quantizing fields) was not the Lamb shift of course. Its principal prediction is the existence of electrons (and positrons) as particles. As I said, I don't know of any other technique to get particles out of fields. As a bonus, you also get the photon (which you don't like, not my fault). I think that people tried and tried and didn't manage to get multi-particle-like behaviour out of classical fields. There are of course soliton solutions to things like the Korteweg-de Vries equation. But to my knowledge there has never been a satisfying solution to multiparticle situations in the case of the classical Dirac-Maxwell equations. Second quantization brilliantly solved that problem, but once that was done, with the huge mathematical difficulties of the complex machinery set up, how do you get nice predictions beyond the obvious "tree level"? It is here that the Lamb shift came in: the fact that you ALSO get it out of QFT! (Historically of course, they were first, but that doesn't matter.)

c) I could be wrong, but I don't think that the classical approach you propose can even solve something like the absorption lines of helium (a 2-electron system) or another low-count multi-electron atom, EVEN if we allow for the existence of a particle-like nucleus (which is another problem you'll have to solve: there's more to this world than electrons and EM radiation, and if everything is classical fields, you'd have to show me where the nucleus comes from and why it doesn't leak out).
Remember that you do not reduce to 2-particle non-relativistic QM if you take the classical Dirac equation (but you do in the case of second quantization). You have ONE field in which you'd have to produce the two-particle state in a Coulomb potential, I suppose with two bumps in the field whirling around each other, or I don't know what. I'd be pretty sure you cannot get anything reasonable out of it.
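(To make the counting explicit, a standard observation rather than anything new from this thread: the two descriptions don't even involve the same number of variables.)

```latex
% After second quantization, the two-electron (helium) sector reduces in the
% non-relativistic limit to a wavefunction on configuration space R^6,
\[
  \left[ -\frac{\hbar^2}{2m}\left(\nabla_1^2 + \nabla_2^2\right)
         - \frac{2e^2}{4\pi\epsilon_0 r_1}
         - \frac{2e^2}{4\pi\epsilon_0 r_2}
         + \frac{e^2}{4\pi\epsilon_0\,|\mathbf r_1 - \mathbf r_2|} \right]
  \psi(\mathbf r_1, \mathbf r_2) \;=\; E\, \psi(\mathbf r_1, \mathbf r_2),
\]
% whereas a single classical Dirac field supplies only one amplitude psi(x)
% on ordinary space R^3.
```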

d) Concerning the anticoincidence: come on, you said there wouldn't be anticoincidence of clicks, and you even claimed that quantum theory (as used by professionals, not by those who write pedagogical stuff for students) also predicted that. You went off explaining that I confuse lots of quantities whose meaning in QO I supposedly don't even know, and asked me to show you a single paper where 1) the quantum prediction of anticoincidence was made and 2) the experiment was performed. I showed you (thanks to Professor Carlip, from s.p.r.) an article of very recent research (September 2004) where the RAW coincidence correlation g(2)(0) (which classical Maxwell theory requires to be at least 1) is something of the order of 0.017 (much, much better than Grangier, and this time with no detector coefficients, background subtraction, etc.).
Essentially, they find about 100,000 "first photon" triggers per second, about 8800 coincidences per second between the first and one of the two others (so these are the identified pairs), and, hold your breath, about 3 or 4 triple coincidences per second (of which I said there wouldn't be any), which, moreover, are explained as Poissonian double pairs within the coincidence window of about 2.5 ns, but that doesn't matter.
If we assume, in classical electromagnetism, that the intensity profile is split evenly by the beam splitter and that the detectors are square-law devices responding to these intensities, you'd expect about 200 of these triple coincidences per second (see the quick check below).
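(A back-of-the-envelope check of those numbers, assuming the 8800 identified pairs per second split roughly evenly between the two detectors behind the beam splitter; the script below is only illustrative arithmetic based on the counts quoted above, not anything taken from the paper itself:)

```python
# Rough check of the anticoincidence numbers quoted above.
# Assumption: the 8800 gated coincidences/s split evenly, 4400/s per detector.

N_G   = 100_000   # gate ("first photon") triggers per second
N_GT  = 4_400     # gate AND transmitted-detector coincidences per second (assumed)
N_GR  = 4_400     # gate AND reflected-detector coincidences per second (assumed)
N_GTR = 3.5       # triple coincidences per second (the "3 or 4" quoted)

# Standard gated second-order correlation used in this kind of experiment
g2 = N_GTR * N_G / (N_GT * N_GR)
print(f"g2(0) ~ {g2:.3f}")   # ~0.018, consistent with the quoted 0.017

# A classical field hitting square-law detectors must give g2 >= 1, so the
# smallest triple-coincidence rate it allows with these singles/pair rates is:
N_GTR_classical_min = N_GT * N_GR / N_G
print(f"classical minimum triple rate ~ {N_GTR_classical_min:.0f} per second")
# ~194 per second, i.e. the "about 200 hits per second" mentioned above.
```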
Remember: no background subtractions, no efficiency corrections. These are raw clicks. No need for Glauber or whatever functions. Marbles do the trick!
Two marbles in; one triggers, the other one chooses its way at the beam splitter. If it goes left, a click left (or not) but no click right. If it goes right, a click right (or not) but no click left. Never a click left and right. Well, we almost never see a click left and right. And these few clicks can even be explained by the probability of having 2 pairs of marbles in the pipe.
According to you, first we would never find this anticoincidence as a prediction of "professional" quantum theory, and of course never measure it. Now that we have measured it, it suddenly doesn't mean anything, because, so you now say, you expected it all along.

e) Once you acknowledge the existence of particles from fields, as proposed by second quantization, the fair-sampling hypothesis becomes much, much more natural, because you realize (or not! it depends on one's openness of mind) that photodetectors are particle detectors with a finite probability of seeing each particle or not. I acknowledge, of course, that the EPR-like experiments today do not exclude local realistic theories. They are only a strong indication that entanglement over spacelike intervals is real.

f) Nobody believes that QFT is ultimately true, and everybody thinks that something else will come after it. Some think that the strict linearity of QM will remain (string theorists, for instance), while others (like Penrose) think that one will have to do something about it, and hope that gravity will do it. But I think that simplistic proposals such as classical field theory (with added noises borrowed from QFT or whatever you want to add) are deluded approaches that will fail before they achieve anything. Nevertheless, why don't you guys continue? You still have a lot of work to do, after all, before it turns into a working theory and gets you Nobel prizes (or goes down the drain of lost ink and sweat...). Really, try the helium atom and the hydrogen molecule. If that works, try something slightly more ambitious, such as a benzene molecule. If that also works, you are really on the way to replacing quantum theory by classical field physics, so then tackle solid-state physics, starting simply with metals. Then try to work out the photoelectric effect, to substantiate all the would-be's in your descriptions of how photodetectors work.

g) QFT is not just a numerical technique for finding classical field solutions in a disguised way; it solves quite a different problem (which is much more involved). In certain circumstances, however, it can produce results which are close to the classical field solution (after all, there is the correspondence principle, which states that QFT goes over into the classical theory as h -> 0). I know that in QCD, for instance, the classical results don't work out. People have tried it.
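(A one-line sketch of that h -> 0 statement, the standard stationary-phase argument, included only as a reminder and not as anything specific to this thread:)

```latex
% As hbar -> 0 the path-integral phase oscillates ever faster, so only
% configurations near a stationary point of the action contribute:
\[
  Z \;=\; \int \mathcal{D}\phi \; e^{\, i S[\phi]/\hbar}
  \;\xrightarrow{\;\hbar \to 0\;}\;
  \text{dominated by fields with } \frac{\delta S}{\delta \phi} = 0 ,
\]
% i.e. by solutions of the classical (possibly nonlinear, coupled) field
% equations; contributions from non-solutions interfere destructively.
```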

cheers,
Patrick.
 