Photon Wave Collapse Experiment (Yeah sure; AJP Sep 2004, Thorn)

Summary:
The discussion centers on a 2004 paper by Thorn et al. claiming to demonstrate the indivisibility of photons through a beam splitter experiment, asserting a significant violation of classicality. The authors report a g2 value of 0.0177, suggesting a collapse of the wave function upon measurement, but critics argue that the experimental setup contains flaws, particularly in how coincidences were measured. Key issues include the use of different sampling windows for the GTR unit, which could skew results to falsely support the quantum prediction of g2=0. Additionally, discrepancies in reported delay times raise concerns about the validity of the findings. Overall, the experiment is viewed skeptically, with calls for a formal rebuttal to clarify these issues.
  • #91
I would like to come back to a very fundamental reason why
the detector correlation in the case of single-photon states
must be zero, and this even INDEPENDENTLY of any consideration
of photon number operators, finite-size detectors, hamiltonians
etc...
It is in fact a refinement of the original argument I put here,
before I was drawn into detailed considerations of the photon
detection process.

It is the following: we have a 1-photon state impinging on a
beam splitter which, if we replace it by a full mirror, gives
rise to the state |R>, and if we remove it, gives
rise to the state |T>; with the beam splitter in place, the
resulting state is 1/sqrt(2) {|R> + |T>}

Up to now, there is no approximation, no coarse graining.
You can replace |T> and |R> with very complicated expressions
describing explicitly the beams. It is just their symbolic expression.

Next, we consider the entire setup, with the two detectors and the
coincidence counter, as ONE SINGLE MEASUREMENT SYSTEM. Out of it can
come 2 results: 0 or 1. 0 means that there were no coincident clicks. 1
means that a coincident click was detected. It doesn't really
matter exactly how everything is wired up.
As this is an observable measurement, general quantum theory (of which,
as I repeated often, QED is only a specific application) dictates that
there is a hermitean operator that corresponds to this measurement, and
that it is an operator with 2 eigenvalues: 0 and 1.
THIS is the actual content which is modeled by the normal ordering of
the 2 number operators, but for the argument here, there is no need
to make that link.
You can, if you want, analyse in detail the operator that gives us
the correlation; we will call it C.
In Mandel's quantum optics this is :n1 n2:, but it doesn't matter: if you want to
construct it yourself, based upon interaction hamiltonians, finite size
detectors, finite size beams etc...be my guest. Write down an operator
expression 250 pages long if you want.
At the end of the day, you will have to come up with an operator that
corresponds to the correlation measurement, and that measurement, for each
individual run in a single time frame,
has 2 possible answers: 0 and 1 (no coincidence, or coincidence).

What we want to calculate, is 1/2 (<R|+<T|) C (|R> + |T> )

What do we know about C ?

If we remove the beamsplitter, we have a pure |T> state.
And for |T> we know FOR SURE that we will not see any coincidence.
Indeed, nothing is incident on the R detector.
This means, by general quantum theory (if we know an outcome for sure),
that |T> is an eigenstate of C with eigenvalue 0.

If we put in place a full mirror, we have a pure |R> state.
Again, we know for sure that we will not see any coincidence.
This time, nothing is incident on the T detector.
So |R> is an eigenstate of C, also with eigenvalue 0.

From this it follows that 1/sqrt(2) (|R>+|T>) is also an eigenstate with
eigenvalue 0 of C (a linear combination of eigenvectors with same
eigenvalue).

1/2 (<R|+<T|) C (|R> + |T> ) = 0

This follows purely from general quantum theory, its principal
point being the superposition principle and the definition, in
quantum theory, of what is an operator related to a measurement.
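The eigenvalue argument above can be checked numerically. Here is a minimal sketch (my own illustration, not from the thread), using the normal-ordered product :nT nR: as a stand-in for C in a truncated two-mode Fock space; restricted to states with at most one photon per mode, its eigenvalues are exactly 0 and 1.

```python
import numpy as np

def destroy(dim):
    """Annihilation operator in a Fock basis truncated at dim-1 photons."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 3                      # Fock space per mode: |0>, |1>, |2>
a = destroy(dim)
n = a.conj().T @ a           # photon-number operator for one mode
I = np.eye(dim)

# Two-mode coincidence observable: the normal-ordered :n_T n_R:,
# which for distinct modes is just the product n_T * n_R.
C = np.kron(n, I) @ np.kron(I, n)

def fock(i, j):
    """Two-mode Fock state |i, j> as a real vector."""
    v = np.zeros(dim * dim)
    v[i * dim + j] = 1.0
    return v

T = fock(1, 0)               # |T>: photon in the transmitted mode only
R = fock(0, 1)               # |R>: photon in the reflected mode only
psi = (T + R) / np.sqrt(2)   # state with the beam splitter in place

print(T @ C @ T, R @ C @ R)  # both 0: eigenstates of C with eigenvalue 0
print(psi @ C @ psi)         # 0: the superposition inherits the eigenvalue
print(fock(1, 1) @ C @ fock(1, 1))  # 1: a photon in each arm does coincide
```

The exercise at the end of the post can be explored with the same matrices: replace psi by a two-photon state and the expectation value is no longer forced to zero.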

Now, exercise:
If you misunderstood the above explanation, you could think that
this is an absurd result, meaning that if you have two intense
beams on 2 detectors, you could never have a coincidence, which is
clearly not true. The answer is that the above reasoning
(put in a mirror, remove the splitter) only works for 1-photon
states. So here is the exercise:
Why does this only work if the incoming states on the beam splitter
are 1-photon states ?

If you understood this, and found the answer, you will have gained
a great insight in quantum theory in general, and in quantum optics
in particular :-)

cheers,
Patrick.
 
  • #92
hi all, I got zapped by esteemed moderator TM
on a post criticizing vanesch's style here,
so let me backpedal and just say the following.

I am disappointed in this dialogue which is
breaking down to a deathmatch between
vanesch vs nite. from "observation" vanesch & nite are
clearly both world class physicists at least on a theoretical
level. from his profile page & web page,
vanesch is a phd, and nite I am guessing
is probably "almost phd".

but here we have a supposed break between theory &
prediction & experiment, and the dialogue just seems to keep going in
circles. it seems to me both of you guys
are going in the wrong direction.

it seems to me, when _really_ world class physicists get
into a disagreement, they work on coming up with _new_
experiments that can attempt to discriminate/isolate the problem
or phenomenon, rather
than endlessly disagree on something that was done in a
lab, say, more than 25 years ago.

ie, proactive vs reactive. constructive vs reactionary.

example: einstein, with the EPR paper; bohm, who made
a key switch in it to polarized light but is rarely credited for
this; and bell, who sharpened the knife further with a mathematical
analysis that escaped bohm, designing an experiment to
force the issue to light.

physicists of the above calibre are rare, I know. as this thread attests.
but, I was hoping to find one in cyberspace SOMEDAY. maybe not
now, but someday.

my other disappointment is that one shouldn't have to use
extremely abstruse theory to make simple predictions about
experiments. a good theory should generally not lead reasonably
intelligent people to completely opposite conclusions
in analyzing an experiment. why is it that, the better
informed the crowd is on qm, sometimes the greater the disagreement
over simple setups? as illustrated on this thread.

in this sense I would say both qm & semiclassical
theories often fail.

I challenge vanesch/nite to stop trying to whack each other (& me briefly
too :p) over the head with theory & together come up
with a new experiment that would tend to settle the disagreement.

also, nite says that the von neumann projection postulate is
incompatible with the locality of QED, but shouldn't it be possible
to PROVE this mathematically?

"none of us is as smart as all of us"

vzn
http://groups.yahoo.com/group/theory-edge/
 
  • #93
a question. vanesch just described a simple conceptual
experiment of a 1-photon state going thru a beamsplitter.

as I understand it, an experimental realization of this
would be a laser that emits a very brief pulse, "one photon width"
so to speak.

semiclassical theory (ala nite) will tend to predict that you will get
coincident clicks in the two detectors at each branch of
the beamsplitter. QM, in contrast, via mainly the projection postulate,
predicts zero coincidences (within experimental error.. I know
that's a can of worms wrt semiclassical).

the question: suppose I used a small sample of a radioactive
isotope that emitted gamma rays. the emissions are spaced far
enough apart that we can be fairly sure it's a 1-photon state, ie
no overlap of emissions due to more atoms in a larger sample.
does this constitute a 1-photon
state in the above experiment?

the idea is, this setup could conceivably be done much more cheaply
than using a laser.
 
  • #94
vzn said:
suppose I used a small sample of a radioactive isotope that emitted gamma rays.

What would you use for a beamsplitter? :rolleyes:
 
  • #95
vzn said:
...

the question: suppose I used a small sample of a radioactive
isotope that emitted gamma rays. the emissions are spaced far
enough apart that we can be fairly sure it's a 1-photon state, ie
no overlap of emissions due to more atoms in a larger sample.
does this constitute a 1-photon
state in the above experiment?

the idea is, this setup could conceivably be done much more cheaply
than using a laser.

The idea was to use the entangled pair so that one is the "gate" and demonstrates a purely quantum effect. I guess you could say that the splitting of a single atom is evidence of the quantum nature of particles in general. But I don't think it could be adapted to demonstrate the quantum nature of light specifically.
 
  • #96
Let's talk some more about the Thorn et al. experiment itself. The results were approximately as follows (g2 being the second-order coincidence rate, relative to intensities, per the Maxwell equations):

g2(1986 Grangier actual)=0.18
g2(2003 Thorn actual)=0.018
g2(classical prediction)>=1
g2(quantum)=0

As you can see, 17 years of technological improvements yielded results significantly closer to the quantum predictions. Considering the dark count rates - which can cause 3-fold coincidences resulting in experimental values slightly on the high side - the results would have to be considered as solidly in the QM camp.

The above actual values DO NOT subtract accidentals. You can't really, because you wouldn't truly know whether a given count is an accidental; that is what you are trying to prove in the first place.

There is really no classical explanation for these results. Vanesch has explained it well for those who are interested. Of course, there is no substitute for reading the actual paper itself and following its references. In case you lost the reference he provided originally:

J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
Am. J. Phys., Vol. 72, No. 9, 1210-1219 (2004).


To break it down further, in terms of how g2 was calculated: g2(2003 Thorn actual) is based on the following values (approximate count rate per second):

3 fold coincidences (G, T and R detected)=3
2 fold coincidences (G and T)=4,000
2 fold coincidences (G and R)=4,000
Singles (G only)=100,000
Dark count rate=250

G=Gate (idler)
T=Transmitted (signal)
R=Reflected (signal)

To achieve the classical results, the 3 fold coincidences would need to have been 160 per second, instead of the 3 actually seen.
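These rates can be plugged into the three-detector formula used in the paper, g2 = N_GTR * N_G / (N_GT * N_GR). A quick sketch with the approximate rates quoted above reproduces both the ~0.018 result and the 160/s classical requirement:

```python
# Approximate count rates per second, as quoted above
N_G   = 100_000   # gate (idler) singles
N_GT  = 4_000     # gate & transmitted coincidences
N_GR  = 4_000     # gate & reflected coincidences
N_GTR = 3         # triple coincidences

# Three-detector second-order correlation (probabilities per gate window):
# g2 = P(GTR) / (P(GT) * P(GR)) with P(X) = N_X / N_G
g2 = N_GTR * N_G / (N_GT * N_GR)
print(g2)   # 0.01875, matching the ~0.018 quoted above

# Triple rate that the classical lower bound g2 >= 1 would demand:
classical_min_triples = N_GT * N_GR / N_G
print(classical_min_triples)   # 160.0 per second, vs. the 3 actually seen
```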
 
  • #97
vzn said:
I am disappointed in this dialogue which is
breaking down to a deathmatch between
vanesch vs nite. from "observation" vanesch & nite are
clearly both world class physicists

Hahahaha :-)))

I don't think I'm a world class physicist, but I think I "know my stuff" in certain areas. I also don't think nite is a world class physicist :-).

but here we have a supposed break between theory &
prediction & experiment,

No, not at all. We're arguing about what standard QED predicts, not whether this is confirmed or not by experiment. Now, I think I know enough about standard QED to understand what ALL "world class physicists" claim it predicts: namely that for one-photon field states, you find anti-correlation in detector clicks.
We're not arguing whether this is how nature behaves, should behave, behaved or whatever. Nightlight just claims that all those people studying QED don't understand their proper theory and miscalculate what it is supposed to predict.
And to do so, he tries to underline the importance of several "secondary" effects, such as finite detector size, beam size, the interaction of EM fields with detectors and so on. These issues can quickly become so complicated that nobody (including nightlight) can work out exactly what is going on, and that's a good technique to break down any argument: "you are forgetting this complicated effect, so you oversimplify things; I did it (but I'm not going to show you; expert as you are, you should be able to do it yourself, and then you'll find the same answers as I did) and found <fill in anything you like>".
However, nightlight is in a difficult position here, because he's contradicting some BASIC POSTULATES of quantum theory. So I don't need to go into all the detail. It is like proponents of perpetuum mobile systems. They can become hopelessly complicated... but you know from thermodynamics that IF YOU USE THERMODYNAMICS TO SHOW THAT IT MUST WORK, then clearly the claim is wrong, because it is a basic postulate in thermodynamics that perpetuum mobile don't exist.
See the difference: that doesn't mean that one cannot exist in nature (then thermodynamics is wrong) ; but the claim that THERMODYNAMICS PREDICTS that this particular system will be a perpetuum mobile is FALSE FOR SURE.

And that's what nightlight tries to do here: he tries to show that QED PREDICTS coincidence counts for one-photon states, and that if all quantum opticists in the world think it is not, that's because they don't understand their own theory and they over-simplify their calculations.

However, from previous discussions with nightlight, I'm now convinced that nightlight doesn't understand the basic postulates of quantum theory, especially the meaning of the superposition principle.
He confuses it with linear or non-linear dynamics of the interactions. But there is a fundamental difference: non-linear dynamics of the interaction is given by non-linear relationships between the observables or field operators, and the hamiltonian. But the superposition principle is about the LINEARITY OF THE OPERATORS ON THE HILBERT SPACE OF STATES. That has NOTHING to do with linear, or nonlinear, field equations.

Look at the hydrogen atom: the interaction term of the proton and the electron goes in 1/r. Clearly that's a non-linear relationship ! But that doesn't mean that the hamiltonian is not a linear operator on the state space !
The linearity of operators on state space is a fundamental postulate of quantum theory (of which QED is a specific application).
Also, the association of a linear, hermitean operator with every measurement is such a basic postulate.
So if his predictions are in disagreement with these postulates, it is SURE that there's a mistake on his part somewhere.

it seems to me, when _really_ world class physicists get
into a disagreement, they work on coming up with _new_
experiments that can attempt to discriminate/isolate the problem
or phenomenon, rather
than endlessly disagree on something that was done in a
lab, say, more than 25 years ago.

You cannot do an experiment to verify whether a certain theory predicts a certain outcome or not.

example: einstein, with the EPR paper, bohm, who made
a key switch in it with polarized light, but is rarely credited for
this, and bell, who sharpened the knife further with a mathematical
analysis that escaped bohm, designing an experiment to
force the issue to light.

The problem is that the experiment of Thorn is about the best one can do ; he did it, and others will repeat it.
The reasoning is this:
1)Out of the Thorn experiment comes a quantity which is close to 0.
2)Now, classical theory predicts that it should be bigger than 1.
3)QED, according to all experts, predicts that it should be about 0.
4)Nightlight CLAIMS that they have all applied their own theory badly, and that
if you do it right, QED also predicts 1 (that's our argument here).
5)Then nightlight claims Thorn is cheating in his experiment, and that he should find 1.

I'm arguing with point 4).
I have been addressing point 5). There are indeed a few minor problems in the paper; I think they are details, in that if they were truly a problem then Thorn has made a big fool of himself; the rest is just an argument about "priesthood or not".

cheers,
Patrick.
 
  • #98
vzn said:
as I understand it, an experimental realization of this
would be a laser that emits a very brief pulse, "one photon width"
so to speak.

No, that's impossible to do, for fundamental reasons. Every "classical" beam,
such as a laser beam, is a coherent state, which contains a superposition
of vacuum, 1-photon, 2-photon, 3-photon ... states in a special relationship.
So you cannot make "pulses of 1 photon" this way.

The trick is to use a non-linear optical element that "reshuffles" these states; for instance, one that converts 1-photon states into 2-photon states. That's what a PDC crystal does. When you do that, out comes a beam which ALSO contains all n-photon states, but WHICH CONTAINS A BIG SURPLUS of 2-photon states, as compared to a "classical" beam.

And now the trick is to trigger on "one arm of these 2-photon states". If you then limit yourself to time intervals around these triggers, you know that the "other arm" is essentially an almost pure 1-photon state, with which you can then do experiments as you like (as long as your detections are synchronized with the trigger).
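A toy Monte Carlo (my own sketch; the pair rate mu and gate efficiency eta are assumed numbers, not from the thread) illustrates why conditioning on the idler trigger leaves a nearly pure 1-photon state in the other arm: for a weak pump, almost every heralded window holds exactly one signal photon.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.05        # assumed mean pair number per gate window (weak pump, << 1)
eta = 0.6        # assumed idler (gate) detector efficiency
windows = 500_000

pairs = rng.poisson(mu, size=windows)   # down-converted pairs per window
gate_hits = rng.binomial(pairs, eta)    # idler photons actually detected
heralded = gate_hits > 0                # windows with a gate trigger

# Among heralded windows, how often does the signal arm hold >1 photon?
multi_fraction = ((pairs > 1) & heralded).sum() / heralded.sum()
print(heralded.mean())    # herald rate, roughly mu * eta
print(multi_fraction)     # small (~mu): the heralded arm is almost pure |1>
```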

What I find funny is that nightlight doesn't attack THIS. It would be much easier for him :-) (hint, hint)


cheers,
patrick.
 
  • #99
If you understood this, and found the answer, you will have gained a great insight in quantum theory in general, and in quantum optics in particular :-)

Not that I expected very much, but would that be all the gratitude I get for escorting you out of the "QM measurement" darkness?

Why does this only work if the incoming states on the beam splitter are 1-photon states ?

There are two basic misconceptions built into this question, one betrayed by "this", another by "only". The two tie the knot of your tangle.

Your "this" blends together the results of actual observation (the actual counts and their correlation, call them O-results) with the "results" of the abstract observable C (C-results). To free you from the tangle, we'll need finer-resolution conceptual and logical lenses.

The C-results are not the same as O-results. There is nothing in the abstract QM postulates that tells you what kind of setup implements C or, for a given setup, what kind of post-processing of O-results yields C-results. The postulates just tell you C exists and it can be implemented. But to implement it, to perform the operational mapping between the formal C and experiment, you need a more detailed physical model of the setup, where at least part of the 'apparatus' interacting with the 'object' is treated as a physical interaction. In our case one needs QED applied to detectors, such as the treatments in [4] or [9].

The first observation of such dynamical analysis is that the "trigger of DT" involves making a decision how to define the "trigger" and "no-trigger" O-results (which we can then use to define the C-result). The number of photo-electrons ejected will have a Poissonian distribution, i.e. the (amplified) photo-current corresponds to r photo-electrons with probability p(r,n) = exp(-n) n^r/r!, where n=<r> is the average p-e count (and also the variance). This is the most ideal case, the sharpest p(r,n) you can get (provided you have perfectly reproducible source pulses and precise enough detection windows so that the incident field intensities I(t) are absolutely identical between the tries). Note that the EM pulse need not be constant in the window; only the integral of I(t) must be constant between windows to obtain the "ideal" p-e distribution sharpness p(r,n). If there is any EM amplitude variation between the tries, p(r,n) will be compound (smeared, or super-) Poissonian, which has variance larger than n.

A common sleight of hand in pedagogical QO treatments (initiated by Purcell during the HBT effect controversy, the QO birthing crisis, in the 1950s, later elaborated by Mandel and refined into a work of art by Glauber) is to point out one example which provides such nearly perfectly reproducible incident EM fields, the perfectly stable laser light (coherent light), and note that the (single mode) photon number observable [n]=[a+][a] of such a source also has a Poissonian distribution of photons. From that, the pedagogues leap to the "conclusion" that the O-results r are interchangeable with the [n]-results, the values of the observable [n] (the photon number), i.e. as if the measurement of r were an implementation of the observable [n]. From this "conclusion" they then "deduce" that if we can produce a Fock state as the incident EM field, thus have a sharp value for the observable [n], we will have a sharp value for r. Nothing of the sort follows from the QED model of detection. The association between the [n]-values and the measured O-values r is always statistical (the EM intensity I fixes <r>=<r(I)> and its moments), and the sharpest association between the [n]-values and the O-values one can have is Poissonian.

The average r (parameter n) is a function of the incident light intensity I and of the settings on the "detectors" (bias voltage, temperature, amplifier gain, pulse analyser, window sizes, etc.). Assuming we keep the detector & window parameters fixed, <r>=n will be a function of the incident light intensity I only, i.e. n=n(I).

The key observation about this function n(I) from the QED detection model [4] is that, for given detector settings, n(I), and thus p(r,n(I)), is determined solely by the EM fields reaching the detector within the detection window. In particular, [4] being a relativistic model, given the incident fields, there are no effects on p(r,n) from any interactions occurring at spacelike distances from the detector during the detection window.

Following the common convention, we can define O-result "no-trigger" to correspond to r=0 photo-electrons and "trigger" to correspond to r>0 photo-electrons (we're idealising here by assuming perfect amplification of the ejected photo-electrons into the measured currents). We'll define q = p(0) = p(0,n) = exp(-n) and p = p(1) = 1-q.
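The quantities just defined are easy to tabulate; a small sketch of my own, following the formulas above, shows the Poissonian p(r,n) and the trigger/no-trigger split for n=1 photo-electron on average:

```python
from math import exp, factorial

def p(r, n):
    """Poissonian photo-electron distribution: p(r, n) = exp(-n) n^r / r!"""
    return exp(-n) * n**r / factorial(r)

n = 1.0                  # one ejected photo-electron per window on average
q = p(0, n)              # "no-trigger" probability, q = exp(-n)
print(q)                 # ~0.3679: over a third of windows give no trigger
print(1 - q)             # "trigger" probability p
print(sum(p(r, n) for r in range(50)))   # the distribution sums to 1
```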

To obtain the operational interpretation of Glauber's [G1] observable (his single 'detection' rate observable [G1(x,t)] = [E-][E+], where [E]=[E+]+[E-] is the decomposition of the electric field operator [E] into the positive & negative frequency parts [E+] (annihilator) and [E-] (creator)), we need another result of the dynamical analysis (cf. [4] 78-84). The desired behavior of [G1] is that <0|[G1]|0>=0, i.e. the [G1]-value is 0 when no incident EM field interacts with the detector. Thus we want [G1] to count photo-absorptions of the incident field only. The dynamics of the detection, unfortunately, yields only, and at best, the Poissonian r-counts. That means we will have O-triggers with no incident light (corresponding to the vacuum rate n0=n(I0), with I0 from the hv/2 vacuum energy per mode) and absent O-triggers when the incident light is present (since p(0,n)>0).

Glauber's ideal [G1]=[E-][E+] operationally corresponds to filtering out both types of r-results 'we are not interested in'. While detector designs (including the pulse analyser & discriminator/PAD) perform this subtraction automatically, they cannot compensate for the 'failed triggers'. To account for the failed triggers, detectors have a parameter, Quantum Efficiency (QE), which is obtained (calibrated) as the ratio of the vacuum-filtered trigger rate to the average photon rate of the incident field. Thus, knowing the measured trigger rate R(I) and R0 (dark rate, the adjustable leftover from the built-in vacuum subtractions), one can compute the average 'photon number' rate PN(I)=<[G1]>=<G> of the incident field as PN(I) = (R(I)-R0)/QE (cf. eq. (2) of [10]).

This relation among averages does not get around the Poissonian spread p(r,n) of the r-counts, thus of the dark triggers p(r>0,n0) and the missed triggers p(r=0,n>n0). Namely, even if the EM field has a perfectly sharp incident photon number within the detection window (as we have approximately in PDC for the TR photon), the r-counts still have at best a Poissonian distribution, thus a variance of at least n. This implies a tradeoff between the TE (trigger efficiency, TE=R(I)/PN(I), which is different from QE=(R(I)-R0)/PN(I)) and the 'false' triggers for the r-counts, no matter what n we select or how we adjust the n0 of our detector (n0=<r> for no incident field). Defining the average r-count for the incident field 'alone' as nf=<r>-n0, for a given sharp [n] incident field we can maintain a fixed nf. We can still adjust detector sensitivity by tuning n0, thus adjusting n=nf+n0, which adjusts the loss rate LR=p(r=0,n)=exp(-(nf+n0)) and the false trigger rate FT=p(r>0,n0)=1-exp(-n0). If we reduce losses, LR->0, then we need n0->inf, which causes FT->1, thus making nearly all triggers false. If we reduce false triggers via n0->0, then we maximize the loss rate to exp(-nf).

In particular, for a single (on average) mode absorption per window, nf=1, and reducing the false triggers to 0 will yield a loss rate (per window) of at least LR=exp(-nf)=1/e=36.79%, which is well above the max loss rate for an absolutely loophole-free Bell test of (1-0.83)=17%. But, to rule out only the natural semiclassical models, the tests require a less demanding efficiency than the 83% limit (which applies to any conceivable local model): they require at least 2/Pi=63.66% trigger efficiency, i.e. the max loss allowed to eliminate natural classical models is LR=1-2/Pi=36.34%, which is almost there, yet it is 0.45% below the unavoidable (when false triggers are minimized) p-e Poissonian loss of 36.79%. Thus any optical Bell test will fail, falling short of eliminating the natural classical models by a mere 0.45%, precisely because of the dynamically deduced statistical association between the r-counts (the O-triggers) and the photon numbers of the incident field (which [G1] counts via photon absorption counts).
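The numerical claim in this paragraph is easy to check; this sketch (my own arithmetic, using only the bounds quoted above) reproduces the 36.79% Poissonian loss floor against the 36.34% maximum loss allowed to exclude natural classical models:

```python
from math import exp, pi

nf = 1.0                        # mean photo-electrons from the field alone
loss_floor = exp(-nf)           # unavoidable loss when false triggers -> 0
print(loss_floor)               # ~0.3679 (36.79%)

max_loss_loophole_free = 1 - 0.83   # 17%: limit for any conceivable local model
max_loss_natural = 1 - 2 / pi       # ~0.3634: limit for "natural" classical models
print(max_loss_natural)
print(loss_floor - max_loss_natural)  # ~0.0045: the 0.45% shortfall argued above
```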

As a mnemonic device, one can think of [G1] as corresponding to the r-counts on a 1-by-1 basis instead of as a relation among averages (and moments) of the two distributions. For coherent or chaotic states this causes no problem, since the averages and moments agree. But for the Fock state |1> (or similarly any Glauber "non-classical" state), [G1] has the sharp [G1]-value 1, while the r-counts remain Poissonian with average 1 (which requires the lost counts to be at least 1/e=36.79%). As cited earlier [11], one can introduce a different kind of (nonlinear) annihilation operator E_ which does maintain consistency between the distributions of the r-counts and these 'new-photo-counts', and consequently the result in [11] for the Fock state is also a Poissonian 'new-photo-count' distribution. The regular annihilator shows other strange properties as well, if one takes it literally as a 'photon absorption' operator [12], such as increasing the number of field quanta for super-Poissonian states (and even more so than the creation operator [a+] for some states!).

The operational meaning of Glauber's 2-point "correlation" G2(x1,x2)=<[G2]>, where [G2]=[E1-][E2-][E2+][E1+], and its "non-locality" (a convention) has been discussed at length already. Here I will only add that the same Poissonian r-count limitations and caveats apply when heuristically identifying, on a 1-by-1 basis, the r-count coincidences with the observable [G2]-values. The association is still statistical (in the sense of being able to map only the averages & moments between the two). An additional important caveat here is that the [G2] implementation requires non-local operations to subtract the rates of accidentals and unpaired singles (or losses from p(r=0,n>n0)), which now occur at different locations; thus any "non-locality" one deduces from it is just a matter of definitions, not anything genuinely non-local (since one can graft the same non-local subtraction conventions onto the semiclassical models and make them Glauber "non-local", too).

Before constructing operational rules for your C observable on |Psi.1>, we'll look at the actual observation results. The O-results of the superposed state will be (T,R) pairs: (0,0), (0,1), (1,0), (1,1). If the average r on DT for the single state |T> is <r>=n, then the <r> for the superposed state |Psi> = (|T>+|R>)/sqrt(2) will be n/2 for the individual DT and DR "trigger" probabilities (per window or per try).

Assuming DT and DR are at spacelike distance during the detection windows (defined via DG events), that the PDC pump is stable enough that within the sampling windows we get repeatably the 'same' EM fields (i.e. at least the same Integral{I(t)*dt}) on DT and on DR in each window (so we get the sharpest possible distribution of the p-e r-counts predicted by the QED model), and that the light intensity is low enough that we can ignore dead time on the detectors, the probabilities of the four kinds of O-triggers are simple products p00=q^2, p01=p10=pq and p11=p^2. In short, whatever interactions are going on at the spacelike location DR, they have no effect on the evolution of the fields and their interactions at the DT location, thus no effect on the probabilities of the r-counts on DT. This is, of course, the same result that the finite-detectors and finite R & T fields model predicts, as described earlier.
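Under these assumptions the four O-trigger probabilities are just products of the per-detector trigger probabilities. A short sketch of my own (n=1 is an assumed mean photo-electron count for the undivided beam) makes the point that p11 is not zero:

```python
from math import exp

n = 1.0            # assumed mean photo-electron count for the full beam
q = exp(-n / 2)    # per-detector "no trigger" probability: each arm sees n/2
p = 1 - q

# Independent spacelike-separated detectors (the semiclassical picture above):
p00, p01, p10, p11 = q * q, q * p, p * q, p * p
print(p11)                        # ~0.155: a nonzero coincidence probability
print(p00 + p01 + p10 + p11)      # sums to 1 (up to rounding)
```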

Now, finally, your observable C. Nothing in the QM axioms specifies or limits how C must be computed from the r-counts, and certainly nothing requires that the computed C-values be the same as the O-values (the observed r-counts) on a 1-by-1 basis. QM only says such a C exists and can be mapped to the experimental data. The fact that C indeed "predicts" the avg r-counts <r> for the setups with "mirror" or with "transparent" has no implication for the setup with the PBS. Similarly, the fact that via (AJP.9) one could write down your C in a concise form implies nothing regarding the operational mapping of C to our setup, and implies nothing about the O-values for the setup (since the AJP.9 plane-wave operators don't describe the DT and DR setup with the |Psi.1> incident fields). On the other hand, Glauber's model [4], augmented with the finite detectors & EM volumes, does predict the proper O-values p00...p11, and does provide a simple operational mapping for your C.

Note first that the r-counts for the actual finite DT and DR are not exclusive, whether |Psi> is a single-photon, multi-photon or partial-photon state (no sharp [n]). Thus your requirement of exclusivity "only for single photon states" is an additional ad hoc requirement, an extra control variable for the C-observable mapping algorithm, instructing it to handle the case of EM fields with <[n]>=N=1 differently from the case N != 1 (N need not be an integer). Nothing in the r-counts, though, is different in any drastic way (other than the difference in the n=<r> used in p(r,n)).

Note that your C is not sensitive to the PBS split ratio, i.e. since |T> and |R> are a basis of the 0-eigenspace of C, any Psi(a,b) = a|T> + b|R> will yield C=0, which simplifies the C mapping algorithm since it doesn't have to care about the values of a and b; at most it needs to take note that a beam splitter is there so it can enforce C=0. As luck would have it, though, from the r-count probabilities p00...p11, when either a->0 or b->0, the proportion of (1,1) cases automatically converges to 0 (or to background accidentals, globally discarded for C, same as for [G2]), which were the cases of the "mirror" or "transparent" setups; thus no special C-algorithm adjustments are needed for all 3 of your setups.

For N=1, one could thus simply compute the C-values by treating the (1,1) O-values as (0,0) O-values, discarding them (this discards double O-triggers the same way that the triple and quadruple O-triggers are discarded in Bell tests, i.e. by definition, cf. Ou, Mandel [5]). Note that the fact that |a|^2+|b|^2=1 for C has no operational-mapping implications for the variability of the number of 'C-values obtained' (and the total of C-values which gave no result, such as (0,0)), since any 'results obtained' are normalized to the "results obtained" total (for all eigenvalues); hence we get C=0 for 100% of the obtained results for any a,b, just as the observable C "predicts". For N!=1, the algorithm will report the result of (1,1) as C=1. The (0,0) cases, as in the [G2] observable, are always reported as no result (disposal of the unpaired DG trigger singles is built into Glauber's QO subtraction conventions).

The C-algorithm is non-local, by its convention of course, but that doesn't contradict any QM postulates, which only say that the observable C exists and can be computed (but not how). Even without your ad hoc exclusivity requirement for N=1, the finite-detector/EM-augmented [G2] algorithm is already non-local as well, due to the requirement for the non-local subtractions.


[10] Edo Waks et al. "High Efficiency Photon Number Detection for Quantum Information Processing" quant-ph/0308054

[11] M. C. de Oliveira, S. S. Mizrahi, V. V. Dodonov "A consistent quantum model for continuous photodetection processes" quant-ph/0307089

[12] S.S. Mizrahi, V.V. Dodonov "Creating quanta with 'annihilation' operator" quant-ph/0207035
 
Last edited:
  • #100
However, from previous discussions with nightlight, I'm now convinced that nightlight doesn't understand the basic postulates of quantum theory, especially the meaning of the superposition principle.
He confuses it with the linear or non-linear dynamics of the interactions. But there is a fundamental difference: non-linear dynamics of the interaction is given by non-linear relationships between the observables or field operators and the Hamiltonian. But the superposition principle is about the LINEARITY OF THE OPERATORS ON THE HILBERT SPACE OF STATES. That has NOTHING to do with linear or non-linear field equations.


The linearity (and thus the superposition of fields) is violated for the field evolution in Barut's self-field approach (these are non-linear integro-differential equations of Hartree type). One can approximate these via the piecewise-linear evolution of the QED formalism.

Note that Barut has demonstrated this linearization explicitly for a finite number of QM particles. He first writes the action S of, say, interacting Dirac fields F1(x) and F2(x) coupled via the EM field A(x). He eliminates A (expressing it as integrals over the currents), thus obtaining an action S(F1,F2,F1',F2') with current-current only interaction of F1 and F2.

Regular variation of S via dF1 and dF2 yields nonlinear Hartree equations (similar to those that Schrodinger already tried in 1926, except that Schr. used K-G fields F1 & F2). Barut now makes an interesting ansatz. He defines a function:

G(x1,x2)=F1(x1)*F2(x2) ... (1)

Then he shows that the action S(F1,F2..) can be rewritten as a function of G(x1,x2) and G' with no leftover F1 and F2. Then he varies the action S(G,G') via dG, and the stationarity of S yields equations for G(x1,x2), also non-linear. But unlike the non-linear equations for the fermion fields F1(x) and F2(x), the equations for G(x1,x2) become linear if he drops the self-interaction terms. They are also precisely the equations of standard 2-fermion QM in configuration space (x1,x2); thus he obtains the real reason for using Hilbert space products for multi-particle QM.

The price paid for the linearization is that the evolution of G(x1,x2) contains non-physical solutions. Namely, the variation in dG is weaker than the independent variations in dF1 and dF2. Consequently, the evolution of G(x1,x2) is less constrained by its dS=0, thus G(x1,x2) can take paths that the evolution of F1(x) and F2(x), under their dS=0 for independent dF1 and dF2, cannot.

In particular, the evolution of G(x1,x2) can produce states such as Ge(x1,x2)=F1a(x1)F2a(x2) + F1b(x1)F2b(x2), corresponding to the entangled two-particle states of QM. The mapping (1) can no longer reconstruct the exact solutions F1(x) and F2(x) uniquely from this Ge; thus the indeterminism and entanglement arise as artifacts of the linearization approximation, without adding any physical content which was not already present in the equations for F1(x) and F2(x). ("Quantum computing" enthusiasts will likely not welcome this fact, since the power of qc is the result of precisely this exponential explosion of paths used to evolve the qc "solutions" all in parallel; unfortunately almost all of these "solutions" are non-solutions for the exact evolution.)
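The non-uniqueness claimed here can be checked numerically: on discrete grids, the product ansatz (1) corresponds to a rank-1 matrix, while the entangled sum Ge has rank 2, so no single pair (F1, F2) reproduces it. A sketch with arbitrary sample functions standing in for the fields (the specific functions are illustrative assumptions):

```python
import numpy as np

x = np.linspace(0, 1, 50)                     # discretized configuration axis
F1a, F2a = np.sin(np.pi * x), np.cos(np.pi * x)
F1b, F2b = np.exp(-x), x**2                   # arbitrary second pair of fields

G_product = np.outer(F1a, F2a)                # G(x1,x2) = F1(x1) F2(x2): ansatz (1)
G_entangled = np.outer(F1a, F2a) + np.outer(F1b, F2b)

# Matrix rank = Schmidt rank of the two-argument function on the grid.
print(np.linalg.matrix_rank(G_product))    # 1: uniquely factorizable
print(np.linalg.matrix_rank(G_entangled))  # 2: no single (F1, F2) reproduces it
```

Rank 1 means the factorization into a single (F1, F2) pair exists; rank 2 means the reconstruction is irreducibly ambiguous, which is the indeterminism described above.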

The von Neumann projection postulate is thus needed here as an ad hoc fixup of the indeterministic evolution of F1(x) and F2(x) produced by the approximation. It probabilistically selects one particular physical solution (those that factorize Ge) of the actual fields F1(x), F2(x), which the linear evolution of Ge() cannot do. The exact evolution equations for F1(x) and F2(x) don't need such ad hoc fixups, since they always produce only the valid solutions (whenever one can solve them).

Thus, the exact evolution of F1(x), F2(x) is just like taking a single path in the MWI exponential tree of universes, except that this one makes sense and there is no need for outside intervention to pick the branches: the evolution of F1(x), F2(x) is deterministic, and the branches are an artifact of the Ge() approximation. Since Barut's self-fields predict no non-locality via Bell test violations, and since no test has ever violated the inequalities, there is no reason within self-fields to worry about encountering any contradiction in what, from the MWI viewpoint, would amount to picking just one path.


The same results hold for any finite number of particles, each particle adding 3 new dimensions to the configuration space and more indeterminism. The infinite-N cases (with the anti/symmetrization reduction of H(N), which Barut uses in the case of 'identical' particles for F1(x) and F2(x) as well) are exactly the fermion and boson Fock spaces of QED. For all values of N, though, the QM description in the 3N-dimensional configuration space (the product H^N, with anti/symmetrization reductions) remains precisely the linearized approximation (with the indeterminism, entanglement, and projection price paid) of the exact evolution equations, and absolutely nothing more. You can check the couple of his references (links to ICTP preprints) on this topic I posted a few messages back.
 
Last edited:
  • #101
As you can see, 17 years of technological improvements yielded results significantly closer to the quantum predictions.

Just wait till someone figures out an even more advanced technology: how to cut the triple-unit wire altogether, so it will give exactly the perfect 0 for g2. Why not get rid of the pesky accidentals and make it look just like rolling the marble, to DT or to DR.

After all, that's how they got their g2: by using the 6ns delay, they had cut off the START from DT completely, blocking the triple unit from counting almost any triple coincidences other than the accidentals. Without it the experiment won't "work", and that's why the 6ns is repeated 11 times in the paper; the 6ns is the single most mentioned numeric fact about the experiment in the paper. And it is wrong, as acknowledged by the chief experimenter (and as anyone can verify from the data sheet). Isn't that a little bit funny.

Go see if they've at least issued an erratum on their site, and say perhaps what was the secret "true" delay they used ... and what were any of the 'confidential' counts used to compute Table I. You couldn't pry any of it out of them with pliers the last time I tried. Let us know if you have any luck.
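For reference, the estimator at stake can be written down in a few lines. This is the standard three-detector form g2 = N_GTR*N_G/(N_GT*N_GR); the counts below are hypothetical, chosen only to be of the same order as the rates discussed in this thread:

```python
def g2(N_G, N_GT, N_GR, N_GTR):
    """Degree of second-order coherence estimated from raw counts:
    N_G gate triggers, N_GT and N_GR gated singles on the two detectors,
    N_GTR triple coincidences.  g2 = N_GTR * N_G / (N_GT * N_GR)."""
    return N_GTR * N_G / (N_GT * N_GR)

# hypothetical counts: 100,000 gate triggers, 4,000 gated singles per side
print(g2(N_G=100_000, N_GT=4_000, N_GR=4_000, N_GTR=3))    # 0.01875
print(g2(N_G=100_000, N_GT=4_000, N_GR=4_000, N_GTR=160))  # 1.0, the accidental level
```

The point of contention above is not this formula but which window the N_GTR counter is actually fed: shift the window so only accidentals survive and the numerator, hence g2, collapses toward 0.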
 
  • #102
ok guys I think I can see a way to reconcile this or at least
restate it in a way acceptable to both parties.

lets attempt to state this schism in terms of a "kuhnian paradigm shift".

let me postulate a new version of QM in a thought experiment.
call it "QM v 2". now vanesch, suppose I told you that
measuring simultaneous eigenstates is not forbidden in this
NEW theory. where QM predicts mutually exclusive eigenstates,
this new QM v 2 predicts that they are not mutually exclusive.

(in fact throw out my idea of measuring the binomial distribution
in coincidences--that means they are random, let's no longer imagine
that for the moment).

lets say that since QM v 1 denies they exist, we build our experiments
to throw them out if we detect them. either in the electronics or
data selection etc. now let's say QM v 2 argues that this procedure
is biasing the sample! QM v 2 uses QM v 1 as a starting point
and argues that QM v 1 is PERFECTLY VALID
for the specific samples it refers to (which is very broad, if it is given
that coincidences are rare).

however QM v 2 would assert QM v 1 is inherently referring only
to a BIASED SAMPLE by throwing out detected simultaneous eigenstates
(informally "coincidences"). ie as einstein argued, INCOMPLETE. and maybe
just maybe, the experimenters attempting to test QM v 1 vs QM v 2
are INADVERTENTLY doing things that bias the test in favor of QM v 1,
naturally being guided by QM v 1 theory.

now just putting aside all these experiments that have been done
so far, can we agree on the above? it seems to me this is the crux
of the disagreement between vanesch/nite.

further, nite argues
that the experiments so far are not really testing QM v 1 vs QM v 2,
but in fact are just testing QM v 1-- by "accidentally"
throwing out coincidences based on experimental & experimenter bias.

lets push it a little further. suppose QM v 1 is not merely "undefined"
in talking about simultaneously measured eigenstates, but goes further
and asserts they
are RANDOM. suppose QM v 2 actually can predict, in contrast,
a NONRANDOM co-occurrence.
then we have not merely a hole but a break/inconsistency
between these two theories, agreed?
one that can be tested in practice, right?

given all this, I think we can try to devise better experiments.

ps re doing a test using gamma emissions. my main question is this:
gamma rays are "photons" ie EM radiation.
has anyone ever figured out how to
make a beamsplitter using gamma rays? is that in the literature anywhere?
seems like it shouldn't be hard...? if you guys will bear with me just
a little on this, I have something up my sleeve that should be of great
interest to everyone on the thread..
 
  • #103
nightlight said:
The linearity (and thus the superposition of fields) is violated for the field evolution in Barut's self-field approach (these are non-linear integro-differential equations of Hartree type). One can approximate these via the piecewise-linear evolution of the QED formalism.

It was indeed this discussion that made me decide you didn't make the distinction between the linearity of the dynamics and the linearity of the operators over the quantum state space.


nightlight said:
Note that Barut has demonstrated this linearization explicitly for a finite number of QM particles. He first writes the action S of, say, interacting Dirac fields F1(x) and F2(x) coupled via the EM field A(x). He eliminates A (expressing it as integrals over the currents), thus obtaining an action S(F1,F2,F1',F2') with current-current only interaction of F1 and F2.

Regular variation of S via dF1 and dF2 yields nonlinear Hartree equations (similar to those that Schrodinger already tried in 1926, except that Schr. used K-G fields F1 & F2). Barut now makes an interesting ansatz. He defines a function:

G(x1,x2)=F1(x1)*F2(x2) ... (1)

Then he shows that the action S(F1,F2..) can be rewritten as a function of G(x1,x2) and G' with no leftover F1 and F2. Then he varies the action S(G,G') via dG, and the stationarity of S yields equations for G(x1,x2), also non-linear. But unlike the non-linear equations for the fermion fields F1(x) and F2(x), the equations for G(x1,x2) become linear if he drops the self-interaction terms. They are also precisely the equations of standard 2-fermion QM in configuration space (x1,x2); thus he obtains the real reason for using Hilbert space products for multi-particle QM.

The price paid for the linearization is that the evolution of G(x1,x2) contains non-physical solutions. Namely, the variation in dG is weaker than the independent variations in dF1 and dF2. Consequently, the evolution of G(x1,x2) is less constrained by its dS=0, thus G(x1,x2) can take paths that the evolution of F1(x) and F2(x), under their dS=0 for independent dF1 and dF2, cannot.

In particular, the evolution of G(x1,x2) can produce states such as Ge(x1,x2)=F1a(x1)F2a(x2) + F1b(x1)F2b(x2), corresponding to the entangled two-particle states of QM. The mapping (1) can no longer reconstruct the exact solutions F1(x) and F2(x) uniquely from this Ge; thus the indeterminism and entanglement arise as artifacts of the linearization approximation, without adding any physical content which was not already present in the equations for F1(x) and F2(x) ("quantum computing" enthusiasts will likely not welcome this fact, since the power of qc is the result of precisely this exponential explosion of paths used to evolve the qc "solutions" all in parallel; unfortunately almost all of these "solutions" are non-solutions for the exact evolution).

All this is not amazing, in fact. It only means that the true solution of the classical coupled-field problem gives different solutions than the quantum theory of finite particle number. That's not surprising at all, for the basic postulates are completely different: a quantum theory of a finite number of particles has a totally different setup than a classical field theory with non-linear interactions. If, by coincidence, both resemble each other in certain circumstances, that doesn't mean much.

It also means that you cannot conclude anything about a quantum theory of a finite number of particles by studying a classical field theory with non-linear terms. They are simply two totally different theories.



nightlight said:
The von Neumann projection postulate is thus needed here as an ad hoc fixup of the indeterministic evolution of F1(x) and F2(x) produced by the approximation. It probabilistically selects one particular physical solution (those that factorize Ge) of the actual fields F1(x), F2(x), which the linear evolution of Ge() cannot do. The exact evolution equations for F1(x) and F2(x) don't need such ad hoc fixups, since they always produce only the valid solutions (whenever one can solve them).

No, a quantum theory of a finite number of particles is just something different. It cannot be described by a linear classical field theory, nor by a non-linear classical field theory, except in the 1-particle case, where it is equivalent to a linear classical field theory.

A quantum theory of a finite number of particles CAN however be described by a linear "field theory" in CONFIGURATION SPACE. That's simply the wave function. So for 3 particles, we have an equivalent linear field theory in 9 dimensions. That's Schroedinger's equation.

However, von Neumann's postulate is an integral part of quantum theory. So if you have another theory that predicts other things, it is simply that: another theory. You cannot use that other theory to draw conclusions about quantum theory.

nightlight said:
The same results hold for any finite number of particles, each particle adding 3 new dimensions to the configuration space and more indeterminism. The infinite-N cases (with the anti/symmetrization reduction of H(N), which Barut uses in the case of 'identical' particles for F1(x) and F2(x) as well) are exactly the fermion and boson Fock spaces of QED. For all values of N, though, the QM description in the 3N-dimensional configuration space (the product H^N, with anti/symmetrization reductions) remains precisely the linearized approximation (with the indeterminism, entanglement, and projection price paid) of the exact evolution equations, and absolutely nothing more. You can check the couple of his references (links to ICTP preprints) on this topic I posted a few messages back.

It is in fact not amazing that the linear field theory in 3 dimensions is equivalent to the "non-interacting" quantum theory... up to a point you point out yourself: the existence, in quantum theory, of superpositions of states, which disappears, obviously (I take your word for it), in the non-linear field theory. In quantum theory, their existence is EXPLICITLY POSTULATED, so this already proves the difference between the two theories.

But all this is about "finite number of particles" quantum theory, which, we also know, can only be non-relativistic. Quantum field theory is the quantum theory of FIELDS. So this time the configuration space is the space of all possible field configurations, and each configuration is a BASIS VECTOR in the Hilbert space of states. This is a HUGE space, and it is in this HUGE SPACE that the superposition principle holds, not in the configuration space of fields.

For ANY non-linear field equation (such as Barut's, which simply sticks to the classical equations at the basis of QED) you can set up such a corresponding Hilbert space. If you leave the field equations linear, this corresponds to the free-field situation; to it corresponds a certain configuration space, and to that corresponds a quantum field Hilbert space called Fock space. If you now assume that the *configuration space* for the non-linear field equations is the same (not the solutions, of course), this Fock space will remain valid for the interacting quantum field theory.

There is, however, not necessarily a 1-1 relation between the solutions of the classical non-linear field equations and the evolution equations in the quantum theory, even if one starts from the quantum state that corresponds to a classical state to which the classical theory can be applied. Indeed, as an example: in the hydrogen atom, there is not necessarily an identity between the classically calculated Bohr orbits and the solutions of the quantum hydrogen atom. But of course there will be links, and the Feynman path integral formulation makes this rather clear, as is well explained in most QFT texts. Note that the quantum theory always has MANY MORE solutions than the corresponding classical field theory, because of the superposition principle.

However, all this is digression, though I've already been through this with you. At the end of the day, it is clear that classical (non-linear) field theory and its associated quantum field theory ARE DIFFERENT THEORIES. The quantum field theory is a theory which has, by postulate, a LINEAR behaviour in an INFINITELY MUCH BIGGER space than the non-linear classical theory. It allows (that's the superposition principle) many more states as physically distinct states than the classical theory does. The non-linearity of the interacting classical field theory is taken into account fully by the relationships between the LINEAR operators over the Hilbert space.

In the limit h->0, all the solutions of the non-linear field equations correspond to solutions of the quantum field theory. However, the quantum field theory has many MORE solutions, because of the superposition principle. Because of the hugely complicated problem (much more complicated than the non-linear classical field equations), one approach is Feynman diagrams. But there are other techniques, such as lattice QFT.

QED is such a theory, and it is WITHIN that theory that I've been giving my answers, which stand unchallenged (and given their simplicity it will be hard to challenge them :-). The linearity over state space (the superposition principle), together with the correspondence between any measurement and a hermitean operator, as set out by von Neumann, are an integral part of QED. So I'm allowed to use these postulates to say things about the predictions of QED.

We can have ANOTHER discussion over Barut's approach. But it is not THIS discussion. This discussion is about you denying that standard QED predicts anti-correlations in detector hits between two detectors when the incoming state is a 1-photon state. I think I have demonstrated that this cannot be right.

cheers,
Patrick.
 
  • #104
I said:

If you understood this, and found the answer, you will have gained a great insight in quantum theory in general, and in quantum optics in particular :-)

Why does this only work if the incoming states on the beam splitter are 1-photon states ?


Your "this" blends together the results of actual observation (the actual

counts and their correlation, call them O-results) with the "results" of

the abstract observable C (C-results). To free you from the tangle, we'll

need finer res conceptual and logical lenses.

The C-results are not same as O-results.

That would then simply mean that one made a mistake. To every "actual" observation corresponds, by postulate, AN OPERATOR, and that operator, taking into account ALL EXPERIMENTAL DETAILS, is:
- hermitean
- has as eigenvalues all possible experimental outcomes
- etc...

It can be very difficult to construct exactly that operator, so obviously one often makes approximations, idealisations etc... That's nothing new in physics. It takes some intuition (or a lot of work) to know what is essential and what is not. But that's just a calculational difficulty. In principle, to ALL observations (actual, real) corresponds a hermitean operator.

nightlight said:
There is nothing in the abstract QM postulates that tells you what kind of setup implements C or, for a given setup, what kind of post-processing of O-results yields C-results. The postulates just tell you C exists and it can be implemented.

Well, that's all I needed ! That, plus two facts:
- that the outcome of a single "correlation test" (one time slice) gives 0 or 1.
- that, when you have only one beam (by putting in a full mirror, or removing the splitter), your correlation test gives 0 with certainty.


nightlight said:
Now, finally, your observable C. Nothing in the QM axioms specifies or limits how C must be computed from the r-counts, and certainly nothing requires that the computed C-values are the same as the O-values (the observed r-counts) on a 1-by-1 basis. QM only says such a C exists and it can be mapped to the experimental data. The fact that C "predicts" indeed the avg r-counts <r> for the setups with "mirror" or with "transparent" has no implication for the setup with the PBS.

It does ! In the case of a PBS, the outgoing state, when the ingoing state is a 1-photon state, is a superposition of the 1-photon state "left" and the 1-photon state "right" (you seem to have accepted that). That's sufficient, because the "left" state and the "right" state are both eigenstates with eigenvalue 0.
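The step being invoked here is ordinary linear algebra: a superposition of eigenvalue-0 eigenstates is itself an eigenvalue-0 eigenstate. A toy numerical check; the basis {|T>, |R>, |TT>, |RR>, |TR>} and the diagonal form of C are illustrative assumptions, not the full quantum-optical construction:

```python
import numpy as np

# toy basis: |T>, |R>, |TT>, |RR>, |TR>; the coincidence observable C has
# eigenvalue 1 only on |TR>, the state where both detectors can click
C = np.diag([0.0, 0.0, 0.0, 0.0, 1.0])

s = 1 / np.sqrt(2)
one_photon = np.array([s, s, 0.0, 0.0, 0.0])   # a|T> + b|R>: superposed 0-eigenstates
two_photon = np.array([0.0, 0.0, 0.5, 0.5, s])  # a|TT> + c|RR> + b|TR>

# the 1-photon superposition is annihilated by C: coincidence outcome 0 with certainty
print(np.allclose(C @ one_photon, 0))   # True, for any a, b
# the 2-photon state has a |TR> component and escapes the 0-eigenspace
print(np.allclose(C @ two_photon, 0))   # False
```

The same two lines are the whole argument: linearity of the operator guarantees C=0 for any 1-photon superposition, while the |TR> component of the 2-photon state breaks it.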


nightlight said:
Thus your requirement of exclusivity "only for single photon state" is an additional ad hoc requirement, an extra control variable for the C-observable mapping algorithm, instructing it to handle the case of EM fields with <[n]>=N=1 differently than the case N != 1 (N need not be an integer).

Absolutely not. That was the exercise ! I didn't have to SPECIFY it; the exercise was to find WHY this was so. Apparently you didn't find the answer (which is not surprising, as you have a big confusion on the issue). If you had found it, you would have shown you understood quantum optics much better than I thought you did :-)

So here's the answer:

It is only for incoming 1-photon states that a PBS has as outgoing state a superposition of 1-photon states, which are the states one can obtain by replacing the PBS by a mirror, or by removing it.

However, if you send a 2-photon state onto a PBS, out comes a superposition which can be written as follows:

a |T,T> + b |T,R> + c |R,R>

Here, |T,T> is the state with "2 transmitted photons", ... If we replace the PBS by a mirror, we have only |R,R>, and if we remove it, we have |T,T>; it is only for those states that we know we have an eigenstate with eigenvalue 0.

So the outgoing state with the PBS is NOT a superposition of the case (full mirror) and the case (nothing). As such, the state coming out of the PBS doesn't stay within the 0-eigenspace. Indeed, the |T,R> state is another, orthogonal Fock state, and will not be in the 0-eigenvalue space (if the detectors are perfect, it will be an eigenvector with eigenvalue 1, but that's not necessary for the argument).

You can apply similar reasoning for n-photon states. It is only in the special case of the 1-photon state that the state after splitting is a superposition of the "exclusive" cases (left and right).
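The n-photon claim can be checked against the simple binomial-splitting picture (each of n photons independently goes T or R with probability 1/2). This model is an assumption of the sketch, not the full Fock-space calculation, but it reproduces the known Fock-state value g2 = 1 - 1/n:

```python
from math import comb

def coincidence_prob(n):
    """P(at least one photon on each side) when n photons split 50:50
    independently: 1 - P(all T) - P(all R)."""
    return 1 - 2 * (0.5 ** n)

def g2_fock(n):
    """Normalized correlation <n_T n_R> / (<n_T> <n_R>) in the same model;
    evaluates to 1 - 1/n, the known value for an n-photon Fock state."""
    mean_T = n / 2
    mean_TR = sum(comb(n, k) * 0.5**n * k * (n - k) for k in range(n + 1))
    return mean_TR / (mean_T * mean_T)

print(coincidence_prob(1))      # 0.0: a single photon never triggers both sides
print(coincidence_prob(2))      # 0.5: a 2-photon state triggers both half the time
print(g2_fock(2))               # 0.5 = 1 - 1/2
```

Only n=1 gives a vanishing coincidence probability, which is the exceptional "exclusive" case described above; for every n >= 2 the |T,R>-type terms contribute.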

cheers,
Patrick.

PS: that said, I have learned more quantum optics with you than with anybody else; just by you defying me, I go and read a lot of stuff :-) It is in fact a pity that you don't master the essentials, yet know a lot of publications about lots of "details". Your work on Barut and de Santos would benefit from "knowing the enemy better" ;-)
 
  • #105
vzn said:
however QM v 2 would assert QM v 1 is inherently referring only
to a BIASED SAMPLE by throwing out detected simultaneous eigenstates
(informally "coincidences"). ie as einstein argued, INCOMPLETE. and maybe
just maybe, the experimenters attempting to test QM v 1 vs QM v 2
are INADVERTENTLY doing things that bias the test in favor of QM v 1,
naturally being guided by QM v 1 theory.

now just putting aside all these experiments that have been done
so far, can we agree on the above? it seems to me this is the crux
of the disagreement between vanesch/nite.

1. There are no results being discarded; the only selection is the time window. I.e., there is no subtraction of accidentals. The time window begins BEFORE the gate fires and is easily wide enough to pick up triple coincidences. That this is true is seen from the fact that the T and R detectors separately (and equally) fire 4000 times per second within this same window. Given this rate for double coincidences, there should be about 160 triple coincidences if the classical theory held. The actual number was 3. Clearly there is anti-correlation of the T and R detections, and the reason has nothing to do with the window size.

2. The disagreement goes a lot deeper than this experiment. Nightlight denies the results of most any experiment based on entangled photon pairs, i.e. Bell tests such as Aspect's. He is a diehard local realist as far as I can determine, and such tests offend those sensibilities. (Nightlight, if you are not a local realist then please correct me.) Vanesch knows that IF there were a QM1 and a QM2 whose difference could be detected by this kind of test, then it would be. That is because he abides by the results of scientifically conducted experiments regardless of their outcome. I wouldn't expect much movement on the part of either of them.
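The arithmetic in point 1 can be restated as the classical bound g2 >= 1, i.e. N_GTR >= N_GT*N_GR/N_G. The 4000/s singles rates are from the post above; the gate rate of 100,000/s is an assumed value, chosen only because it reproduces the quoted figure of 160:

```python
def min_classical_triples(N_G, N_GT, N_GR):
    """Lower bound on triple coincidences implied by the classical
    inequality g2 >= 1:  N_GTR >= N_GT * N_GR / N_G."""
    return N_GT * N_GR / N_G

# 4000/s gated singles on each side (from the post); assumed 100,000/s gate rate
print(min_classical_triples(N_G=100_000, N_GT=4_000, N_GR=4_000))  # 160.0
```

Against that classical floor of ~160 per second, an observed rate of 3 is the anti-correlation being claimed.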
 
  • #106
hey guys. I suggest everyone calculate the following. its
a very sensitive calculation, not very easy to pull off,
but I did something very similar via empirical/computational experiments/simulations many years ago. the details are in a paper.

suppose that whenever a 1-photon goes thru a polarizing beamsplitter,
with 2 detectors, one on each branch, it has a very small probability of being
detected by both the H and V (horizontal and vertical) detectors.

question: how small would this probability have to be to preserve
the possibility of locality in bell experiments?

the answer is surprising & tends to support nite's general thesis.
the answer is apparently, "very little"
 
  • #107
another thought. I am not sure who 1st came up with
the idea of using photons for the EPRB experiment.
maybe bell or bohm? the original experiment imagined "particles"
eg an atom.

very, very few non-photon tests of bell's experiment have
been done. its a very tricky test. there is a recent one, but
thats another topic (or can of worms).

but let's consider the particle based version of the experiment.
it considers a particle, say an electron, going thru stern-gerlach
detectors.

so its interesting to ask, what is the analog of the GRA
(grangier roger aspect) experiment with particles? it would be something
like sending an electron through a stern-gerlach apparatus and
showing that you only measure "spin up" or "spin down" exclusively,
never both simultaneously.

nite's argument, translated, would tend
to suggest that you won't. what do you think nite, how would your
point be rephrased wrt stern gerlach measurements? are you arguing
you would get a small coincidence detection there also?

because what's interesting (and this is why photons were substituted
for mass-based particles in bell tests): the math is exactly the same!
the same math that predicts photon anticorrelation in the GRA experiment
would predict mutually exclusive measurements in spin up and spin
down stern-gerlach measurements of mass-based particles.

my general question,
has anyone done that experiment (recently)? ie attempt to measure
"anticorrelation" coefficient of spin up & spin down particles?
my guess is that old stern-gerlach
experiments from the early part of the last century
had low precision that probably has not been
improved on via more recent technology (after there was no
more theoretical interest in the phenomenon).
 
  • #108
vzn said:
hey guys. I suggest everyone calculate the following. its
a very sensitive calculation, not very easy to pull off,
but I did something very similar via empirical/computational experiments/simulations many years ago. the details are in a paper.

suppose that whenever a 1-photon goes thru a polarizing beamsplitter,
with 2 detectors, one on each branch, it has a very small probability of being
detected by both the H and V (horizontal and vertical) detectors.

question: how small would this probability have to be to preserve
the possibility of locality in bell experiments?

the answer is surprising & tends to support nite's general thesis.
the answer is apparently, "very little"

That is false. The correlation rate at 22.5 degrees is .8536, while Bell's Inequality yields a maximum value of .7500 at the same angle. (You don't need any time-varying changes for this to be true if you are a local realist.) Thus at least 10% of the sample must be skewed in exactly the direction needed to give the wrong answer.

But that is not all. The skewing must switch the other way when you measure at 67.5 degrees, because the relationship reverses and the QM value of .1464 is less than the LR prediction, which must be at least .2500. So the measurement bias must not only be large, it must be sensitive to the angle and even switch signs!
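The quoted rates follow from the cos^2 law for the matched-polarizer correlation; the .7500/.2500 local-realist bounds come from the Bell-type inequality, not from this check:

```python
from math import cos, radians

def qm_correlation(theta_deg):
    """Quantum prediction for the correlation rate at relative
    polarizer angle theta: cos^2(theta)."""
    return cos(radians(theta_deg)) ** 2

print(round(qm_correlation(22.5), 4))   # 0.8536, vs. local-realist max 0.7500
print(round(qm_correlation(67.5), 4))   # 0.1464, vs. local-realist min 0.2500
```

Note the symmetry: the two deviations are equal and opposite (0.8536 - 0.75 = 0.25 - 0.1464), which is why the hypothesized bias would have to flip sign with angle.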

All of which is TOTALLY beside the point, because the latest experiments don't subtract accidentals anyway. Per Weihs et al., 1998: "We want to stress again that the accidental coincidences have not been subtracted from the plotted data."

http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080.pdf
 
Last edited by a moderator:
  • #109
vzn said:
nite's argument, translated, would tend
to suggest that you won't. what do you think nite, how would your
point be rephrased wrt stern gerlach measurements? are you arguing
you would get a small coincidence detection there also?

because what's interesting (and this is why photons were substituted
for mass-based particles in bell tests): the math is exactly the same!
the same math that predicts photon anticorrelation in the GRA experiment
would predict mutually exclusive measurements in spin up and spin
down stern-gerlach measurements of mass-based particles.

This test might be OK as a Bell test, but would not serve as proof that electrons are quantum non-classical particles because the classical view is that electrons are individual particles anyway. On the other hand, a double slit experiment (with an electron) is a pretty good way to show that a classical particle such as an electron can be made to act like a wave. Classical theory has a problem modeling in this manner (wave particle duality). Although there are probably those who might try...
 
  • #110
This test might be OK as a Bell test, but would not serve as proof that electrons are quantum non-classical particles because the classical view is that electrons are individual particles anyway.

Another view is that electrons are Dirac-equation matter fields, as in Jaynes's and Barut's classical field theories. This melding of "classical" and "particle" is what various 'classical limits' of QM do: they take a particle limit and call it the classical limit.

On the other hand, a double slit experiment (with an electron) is a pretty good way to show that a classical particle such as an electron can be made to act like a wave.

Because it is a wave (check the Dirac equation), just configured so it won't fall apart. There are many ways that can be done, especially in nonlinear theories such as the coupled Maxwell-Dirac or Maxwell-Schrodinger equations, which are not linear (they become linear only in the approximations of "external fields" and "external currents", which is how they are normally approximated for regular QM and regular Maxwell ED; the QED scheme then reintroduces these omitted back-interactions by simulating them via the scattering matrix, so it represents a piecewise linearization of the original nonlinear Maxwell-Dirac equations). For linear equations Barut used wavelet solutions, which are nonspreading for free electrons.

Classical theory has a problem modeling in this manner (wave particle duality). Although there are probably those who might try...

Again a concept meld of "classical" with "particle" models. The two are different. Thus, for example, the classical models of QM are not the same as, and generally have nothing to do with, particle models (although they could be).
 
  • #111
Why does this only work if the incoming states on the beam splitter are 1-photon states?

Your whole reasoning in the "answer" and the "question" is based on the very premises being disputed. In your photon-speak, the finite DT can absorb the 1-photon state |Psi.1>, leaving vacuum as the EM state after the process. I say it can't (and have explained at length why).

And what is "this" above? If it is the C operator, then indeed it is obvious why: you can define it, if you wish, to turn to 0 for the superposed single-photon state. There is nothing in your reasoning about the PBS experiment that requires such a conclusion, unless one shares your particular misunderstanding of how DT absorbs the whole mode |Psi.1>.

Just because your C "predicts" the outcomes of the "mirror only" and "transparent only" setups has no bearing on whether it predicts the third setup with the 50:50 beam splitter. Note, as already pointed out, that the 'classical probabilities' p00..p11 given earlier also predict the correct results (the perfect anticorrelation) in the limits of the beam splitter, as the split ratio T:R varies continuously, thus including the cases of T only and R only.

There is no reason (that you have shown) why your C ought to be interpreted as the operator predicting anything for the beam splitter setup. This is a similar trap, of being carried away by the formalism, to the one von Neumann's impossibility proof fell into. Read Bell's critique so you can avoid that kind of leap in the future.

To every "actual" observation corresponds, by postulate, AN OPERATOR, and...

The trick is which operator corresponds to which setup and which detectors. The whole difference between the "naive" interpretation (such as that of the AJP paper, or yours) of the QO prediction g2=0 and the correct one is in what size of detector it applies to.

My point (see Q-b earlier) is that g2=0 corresponds operationally to a trivial and perfectly classical setup: a single large detector capturing the whole |Psi.1> while the aperture of the second detector receives no incident fields.

You need to note that (AJP.9) doesn't have any parameter specifying the size of the detector; it is an idealization for an infinite detector. One way to deal with that is to ask what kind of setup the idealization works correctly for, where the simplifications (of infinite detectors) matter least. That was my first answer --- if you take a large detector covering both the T and R paths, then you can indeed absorb the whole mode |Psi.1> = |T> + |R>, so the presumed full-mode annihilation is OK. Another angle is to look for a model that accounts better for the finite detectors and finite fields of the actual setup (which is, recall, the whole difference being debated: which setup does the g2=0 prediction model).

To ALL observations (actual, real) there corresponds a hermitean operator.

Yep, that still doesn't make your particular operator C a model for the DT & DR setup with the 50:50 beam splitter. A plain classical model predicts the same anticorrelation for the two limiting cases, too.
 
  • #112
QM only says that such a C exists and that it can be mapped to the experimental data. The fact that C indeed "predicts" the avg r-counts <r> for the setups with "mirror" or with "transparent" has no implication for the setup with the PBS.

It does ! In the case of a PBS, the outgoing state, when the ingoing state is a 1-photon state, is a superposition of the 1-photon state "left" and the 1-photon state "right". (you seem to have accepted that).

That's sufficient, because the "left" state and the "right" state are both eigenstates with eigenvalue 0.


That shows a complete conceptual meld of the operator C and its C-results with the setup and its O-results. The existence of a mapping does not say anything that ties your operator C to this setup with the PBS.

The trivial fact that the operator C acts in a particular way, so that in two setups its C-results match the O-results of those two setups, has no bearing or implications for the third setup, with the PBS in place, and its results. You also can't use the presumed results of the PBS case (such as applying the infinite-detector version of (AJP.9) to the DT & DR setup) to define your C, then come back and declare that C shows QED predicts anticorrelation for finite DT and DR with the 50:50 PBS.

After all, the probabilities p00..p11 I gave match (in the limits for T:R split ratio varying from 0:1 to 1:0) the "mirror" and "transparent" O-values, giving perfect anticorrelation for those cases as well.

You really need to go and study [4] (where the Gn() came from), its physics and its assumptions on the absorption processes involved. Then you can verify for yourself, as sketched earlier, that the prediction QED makes here (for finite DT and DR) is a regular classical correlation. The little circle between the observable C and the (AJP.9) predictions doesn't prove anything about the QED prediction in this setup.

In the process you will also discover that Glauber's "non-classicality" of the Gn()'s is merely a mislabeling of the "non-correlating" properties of the Gn(), i.e. his Gn() "correlations" are not correlations at all, and the cases where they manifest their "non-correlating" side, such as Fock states, are precisely those labeled "non-classical".


Absolutely not. That was the exercise ! I didn't have to SPECIFY it, the exercise was to find WHY this was so. Apparently you didn't find the answer (which is not surprising, as you have a big confusion on the issue).

If I shared your particular misunderstanding of the photo-detection of |Psi.1>, yep, I could have handwaved my way to your wrong "answer", too. The whole exercise is based on the same premises we're disputing. We don't agree on what the results with finite DT and DR will be in the first place. Your leaping back and forth between the imagined results on |Psi.1> with DT and DR and the properties of the "observable" C adds nothing in defense of the wrong interpretation and the wrong prediction for this setup. It is a circular "proof" (if one can call it that at all).

You can construct the operator C any way you wish and "assign" it to the 3 setups. That doesn't mean it has anything to do with the results in the PBS setup just because, in the other two, the C-values match the observed O-values.

The behavior in the T-only and R-only cases also doesn't mathematically imply any special relation to |Psi.1> as opposed to |Psi.2>... The only way you introduce such requirements later is by including the operational mapping assumptions about the photon absorptions of |Psi.1> on DT and DR as given via (AJP.9), thus its "predictions" for this setup to which it doesn't apply, in which case |Psi.1> behaves differently than, say, |Psi.2>.

Your complete conceptual meld of the C "observable" and C-values with the PBS experiment and its O-values doesn't allow you to realize you're even making any operational mapping (since it is all the same thing under low-res glasses), and then borrowing the presumed (and disputed) O-results from the PBS experiment to amend the mathematical properties of your C, so it can have the special 1-photon behavior. Just because you can write down (<R|+<T|) C (|R> + |T>), it doesn't mean you have any grounds for using the imagined (and incorrect at that) results of the PBS experiments to refine the specification of C.

I was talking about the operational mapping of C to this setup, a process you don't even acknowledge exists -- what kind of computational algorithm Alg_C(O-values) you need so that C behaves as a model of the Alg_C(O-values) output for each of the setups. Since I don't think the PBS case will behave differently for |Psi.1> or |Psi.2> (other than an intensity change), it is an arbitrary requirement. Your later "deduction" of it assumes the same DT and DR behavior we already disagree about.

Your work on Barut and de Santos would benefit from "knowing the enemy better" ;-)

I was in the "enemy" camp for a few years after grad school. I stayed puzzled and confused about QM (the way you're now) for several years after leaving academia. It was several years later, when I visited a QO instrumentation lab (where my wife was VP of engineering) and chatted with the engineers and experimental physicists about their QO stuff, that within days I realized I had had no clue, back at school, what I was talking about when playing with formalism spiced with photon-speak and the rest of the QM "measurement" handwaving (as you and many others are doing now). The same weekend I "invented" what turned out to be Pearle's 1970s variable detection model (which I had never heard of at school)...

I know the "enemy" and it is barely clinging to the edge of a cliff. When journals have to censor perfectly legitimate and valid work by folks like Marshall & Santos (even Barut had to publish much in ICTP preprints), so that the "heretical" view won't challenge the party-line "truth", you know whose days are numbered. The whole present QM nonsense (kept propped up in substance by just a few well-networked zealots in the right places; most other physicists don't care or know much either way, and at most pay lip service to the party line), with its 'nonlocality' and 'nonclassicality', will be laughed at before too long. 't Hooft already thinks the whole Bell inequality argument has no relevance for physics, and he is happily playing with local deterministic models.
 
  • #113
nightlight said:
I know the "enemy" and it is barely clinging to the edge of a cliff. When journals have to censor perfectly legitimate and valid work by folks like Marshall & Santos (even Barut had to publish much in ICTP preprints), so that the "heretical" view won't challenge the party-line "truth", you know whose days are numbered. The whole present QM nonsense (kept propped up in substance by just a few well-networked zealots in the right places; most other physicists don't care or know much either way, and at most pay lip service to the party line), with its 'nonlocality' and 'nonclassicality', will be laughed at before too long. 't Hooft already thinks the whole Bell inequality argument has no relevance for physics, and he is happily playing with local deterministic models.

The key flaw in your meltdown prediction is the fact that QM is moving forward, not backward. Even a "wrong" theory can be very useful.

When 't Hooft has something specific to talk about, I'll be listening for sure. But his opinions alone are still opinions. Hey, I don't care whether he is Republican or Democrat either (or even a US citizen for that matter) !

In the meantime: g2(actual) = 0.018, which is not >= 1; and S(CHSH actual) = 2.73, which is not <= 2. Photons display quantum behavior, and local hidden variables do not exist. That remains true until something more useful comes along.
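For reference, the two statistics quoted here can be sketched as follows. The g2 formula is the gated coincidence ratio used in the Thorn et al. AJP paper this thread discusses, and S is the standard CHSH combination; the count values below are invented placeholders for illustration, not data from any experiment.

```python
# Sketch of the two statistics quoted above.  g2 is the gated coincidence
# ratio (gate G, transmitted T, reflected R counts) from Thorn et al.
# (AJP 2004); chsh_s is the standard four-correlator CHSH combination.
# All numeric inputs below are invented placeholders, not measured data.
from math import sqrt

def g2(n_g, n_gt, n_gr, n_gtr):
    """Gated second-order coherence.  Classical fields require g2 >= 1."""
    return (n_gtr * n_g) / (n_gt * n_gr)

def chsh_s(e_ab, e_abp, e_apb, e_apbp):
    """CHSH combination; local hidden variables require |S| <= 2."""
    return e_ab - e_abp + e_apb + e_apbp

# Placeholder counts giving a strongly sub-classical g2:
print(g2(100000, 5000, 5000, 5))                          # 0.02, well below 1
# QM correlators at the standard CHSH angles give S = 2*sqrt(2) > 2:
print(chsh_s(1/sqrt(2), -1/sqrt(2), 1/sqrt(2), 1/sqrt(2)))
```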
 
  • #114
It was indeed this discussion that made me decide you didn't make the distinction between the linearity of the dynamics and the linearity of the operators over the quantum state space...

Because you have a single conceptual spot for "all those fields". I can imagine getting confused under such a constraint.

All this is not amazing in fact. It only means that the true solution of the classical coupled field problem gives different solutions than the quantum theory of finite particle number.

You missed the key point. They are different, of course. But one of them is an explicit linearization approximation of the solutions of the other. They're not just two different, nearly equivalent formalisms sitting side by side.

One of them, the Hilbert product space of QM, is a linearized approximation of the other -- its indeterminism and entanglement are solely mathematical consequences of the weaker form of the variation of the same action. You vary the action in only a subset of the ways that the exact dynamics does -- thus you're finding false minima, since your variation is only in dG, not in dF1 and dF2 (via (1) you can reproduce any dF as dF1 F2 + F1 dF2, but not the reverse). With dG you're examining S values at fewer nearby points before declaring a stationary point. So you will declare as "solutions" functions G(x1,x2) that don't truly minimize the action, but only appear to if you check not all the nearby paths but only a subset.

That is a form of roughening-up or coarse-graining of the solutions, like calculating integrals with trapezoids or a function with the first two terms of its Taylor series. You wouldn't, after approximating an integral with trapezoids, claim that the coarse-grained value you got gives you some new physics that the exact integral doesn't already have, and can't have even in principle.

Another point you missed (esp. if you check his papers) is that his nonlinear equations replicate the QED "radiative corrections" to order alpha^4, at least. The keyword is "corrections" -- corrections to the original Dirac Fock-space QED, the same one that is an approximation (for finite N) of Barut's nonlinear (coupled Maxwell-Dirac) fields.

So the original Dirac QED a) was inaccurate and b) is a linearized approximation (which dropped self-interaction terms) of Barut's nonlinear fields. Now, the QED of 1940-50 discovers radiative corrections, which reintroduce the dropped self-interaction terms into Dirac's QED, and suddenly these new corrections make the new QED a') more accurate and b') closer to Barut's self-field results.

So you can't say that multiparticle QM is in any way at the same level as, much less superior as a fundamental theory to, Barut's nonlinear fields. The a->a' and b->b' show you their true relation. Also, had Schrodinger not abandoned the same theory (after the K-G equation version didn't work there; he also used only the H ground state in the iteration, instead of the full set of energy eigenstates), he could have had the 9 digits of radiative corrections, which came to regular QED only in the late 1940s, back in 1927 with his wave mechanics (after replacing the K-G equation with Dirac's). The Hilbert space products of multiparticle QM are clearly inferior as a fundamental theory to the Schrodinger-Fermi-Jaynes-Barut approach (to say nothing of QM's interpretative and conceptual tangles, propagated into QFT, which are entirely nonexistent in the coupled Maxwell-Dirac field theory).

This discussion is about you denying that standard QED
predicts anti-correlations in detector hits between two detectors, when the incoming state is a 1-photon state.
I think I have demonstrated that this cannot be right.


You haven't demonstrated anything of the sort. There is no basis for your operator C to map operationally, as an observable, to the 50:50 PBS setup at all (much less to start using imagined "results" of the PBS setup). At least the numerator of (AJP.9), the G2, has some grounds to expect a mapping here, being derived dynamically in [4]. But that one doesn't map to the finite DT as an absorber of |Psi.1> either, since that kind of finite |Psi.1> doesn't work as a mode which a finite DT can absorb under the dynamical model of [4], as discussed at length.

Note that for the finite-EM & finite-detector limited version of the annihilators, I left them as formally defined for all k (with the understanding that you don't plug out-of-range fields, such as any frequency you wish, into the resulting correlation functions and still expect such terms to map into coincidence rates, or to be valid at all).

In principle, one can put such restrictions (which were already made and are there, from [4]) explicitly and formally into the equations, e.g. by attaching factors of the type 1/theta(w_max - w) to the given a_k, where theta(x) = 0 for x <= 0 and 1 for x > 0.

That automatically produces a 0 in the denominator when you plug a frequency w >= w_max into your expansions, thus making the correlation functions formally undefined for such w's (which are already assumed out of range in the derivation [4]) and precluding arbitrary expansions (the actual detectors already include high- and low-frequency cutoffs too, so arbitrary plane-wave annihilators a_w don't model their absorptions either).
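As a toy illustration of this bookkeeping device (my own sketch; `w_max` and the function names are invented for the example, not notation from [4]):

```python
# Toy sketch of the formal cutoff device described above: attaching a factor
# 1/theta(w_max - w) makes any expression evaluated at an out-of-range
# frequency formally undefined (a division by zero) rather than silently
# wrong.  `w_max` and all names here are invented for illustration only.

def theta(x):
    """Step function: theta(x) = 0 for x <= 0, 1 for x > 0."""
    return 0.0 if x <= 0 else 1.0

def cutoff_factor(w, w_max):
    """1.0 for in-range frequencies; raises for w >= w_max, mirroring the
    formally undefined correlation functions out of range."""
    t = theta(w_max - w)
    if t == 0.0:
        raise ValueError("frequency outside the assumed validity range")
    return 1.0 / t

print(cutoff_factor(0.5, 1.0))   # in range: factor is 1.0
```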

Other restrictions of the derivation [4] can be imposed as well, making the Gn()'s formally undefined expressions whenever their assumed restrictions are violated.

Note that similar formal expansions often ignore any such limits, and the results may still be fine. In general, though, such formal expansions can make the result invalid if the restrictions assumed in deriving Gn() are violated.
 
  • #115
nightlight said:
There is no reason (that you have shown) why your C ought to be interpreted as the operator predicting anything for the beam splitter setup.

There is: it is the same measurement apparatus: same detectors, same electronics, same everything. The only difference is the incoming field state to the apparatus, which has two holes: one for the T beam and one for the R beam; and one outgoing result: a binary logic signal, 1 or 0 (two simultaneous hits, or not).
To that apparatus there corresponds a hermitean operator.
If the incoming state is a superposition of two others, then quantum theory, from a very elementary and fundamental point of view, fixes the results for the superposition when we know the results for the individual component states.
This is so elementary, that if you dispute that, you haven't understood the fundamentals of quantum theory at all. So I will stop repeating that.
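Spelled out, the step uses nothing beyond the linearity of the operator and the premise (stated above) that |R> and |T> are eigenstates of C with eigenvalue 0:

```latex
C\,|\psi_1\rangle
  \;=\; C\,\tfrac{1}{\sqrt{2}}\bigl(|R\rangle + |T\rangle\bigr)
  \;=\; \tfrac{1}{\sqrt{2}}\bigl(C|R\rangle + C|T\rangle\bigr)
  \;=\; 0,
\qquad\text{hence}\qquad
\langle\psi_1|\,C\,|\psi_1\rangle = 0 .
```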

But we knew that already: you confuse superposition of classical fields with superposition of quantum states. I'm pretty sure you are convinced that if E1(r) and E2(r) are classical fields, with quantum descriptions |E1> and |E2> respectively, then in all generality to the classical field E = a E1 + b E2 corresponds the quantum state a |E1> + b |E2>.
THIS IS NOT TRUE AT ALL.
The quantum state that corresponds to E is |E> and is usually orthogonal both to |E1> and |E2>.
It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields, and the Hilbert space of 1-photon states ; this is the reason why we can associate to a beam splitter a superposition of 2 quantum states which corresponds to a superposition of 2 classical E-fields (although, I repeat, 1-photon states are NOT the quantum description of classical fields ; there is only a bijective relationship between the two spaces).

It is that very confusion (between superposition of classical fields and superposition of quantum states) that makes you draw the ridiculous conclusion that quantum field theory is the piecewise linearized version of nonlinear classical field theory. As I outlined before, the space in which quantum field theory acts is IMMENSELY MUCH BIGGER (Hilbert space) than the space in which the nonlinear classical field theory acts (configuration space of classical fields). With each POINT in configuration space CORRESPONDS A WHOLE DIMENSION in Hilbert space. With each superposition in configuration space correspond ORTHOGONAL STATES in Hilbert space.
So in no way one is an "approximation" of the other. QFT is immensely more complex than non-linear CFT.
The "piecewise" linearisation of Feynman diagrams has not much to do with a piecewise linear approximation of a solution in CFT. But this is impossible to explain to someone who doesn't even understand the difference between superposition in configuration space (something related to specific dynamics) and the superposition in Hilbert space (which is a fundamental postulate of quantum theory).

The trick is which operator corresponds to which setup and which detectors.

No, to an unaltered measurement setup corresponds an operator. The beamsplitter is not part of the measurement setup, but changes the incoming states to an unaltered measurement setup (two detectors and some electronics). So I am allowed to use the operator which corresponds to the measurement setup, and it is the same in the 3 cases.
The very fact that you deny this means, again, that you haven't understood the basic premises of quantum theory.

You need to note that (AJP.9) doesn't have any parameter specifying the size of the detector; it is an idealization for an infinite detector.

This is again a fundamental misunderstanding on your part. For instance, the C operator I've been talking about DOES take into account the sizes, efficiencies and everything else of a specific detector setup. Even the errors in the electronics and all. The fact that I write it abstractly as "C" doesn't mean it cannot stand for a complicated expression. In the same way, the eigenspaces of the D1 and D2 operators (in one of my earlier posts) depend strongly on the exact sizes, efficiencies and physical construction of the detectors. I just don't write it all down explicitly; it is abstract notation.

One way to deal with that is to ask what kind of setup the idealization works correctly for, where the simplifications (of infinite detectors) matter least. That was my first answer --- if you take a large detector covering both the T and R paths, then you can indeed absorb the whole mode |Psi.1> = |T> + |R>, so the presumed full-mode annihilation is OK. Another angle is to look for a model that accounts better for the finite detectors and finite fields of the actual setup (which is, recall, the whole difference being debated: which setup does the g2=0 prediction model).

Again, I don't need such an idealisation. Finite detectors, as long as I use the same setup for the 3 cases, R, T and PBS, are sufficient.

That still doesn't make your particular operator C a model for the DT & DR setup with the 50:50 beam splitter. A plain classical model predicts the same anticorrelation for the two limiting cases, too.

It does. Because the operator describes the measurement apparatus: the two detectors and the electronics. It is the same in the 3 cases, so I can use the same operator.

You know, the very fact that you do not attack the |psi1> = 1/sqrt(2)(|R> + |T>), but that you try to attack the operator corresponding to the measurement setup "C" means that you didn't understand:
1) the essence of the argument about the superposition of states and how it is fundamentally related to the basic premises of quantum theory.
2) the confusion you have between superposition of fields and of quantum states
3) the misunderstanding you have about the postulates of a measurement in quantum theory (namely that the operator corresponds to the measurement apparatus and not to the entire setup, part of which prepares the INCOMING STATE, and part of which measures).

It is at the basis of your erroneous conclusion that quantum field theory is an approximative scheme of nonlinear classical field theory.
Now, because you are convinced that nonlinear classical field theory is the "correct" QED, you also claim that the predictions of NL CFT are necessarily the "less naive" predictions of QED, and as such you draw the conclusion that:

- people calculating QED predictions use "naive models"
- that "true QED" makes other predictions, namely no anti-correlations.

As such, you make some serious mistakes, which are summarized as follows:

- classical field theory predicts no anti-correlation (this is correct)
- your classical non-linear field theory also predicts no anti-correlation (I take your word for it).
- QED, as it is a linearised approximation of the above, must also predict no anti-correlation (that's your fundamental misunderstanding)
- As you now think that as well non-linear field theory, as classical EM, as QED (the way you misunderstand it) predicts no anticorrelation it must mean
that:

- those experiments showing anticorrelation MUST be wrong (it just CAN'T be, right)
- people doing so are priests trying to mislead youngsters
- they have in fact no argument to stand on, except naively misused QED, which also predicts anticorrelation, but only in a naive approach ; otherwise nothing distinguishes their glorified QED from non-linear CFT (which MUST be right :-)
- the "priesthood" keeps these naive calculations and their cheated experiments to maintain their status.

However, the situation is different:
- classical EM and NL CFT predict no anticorrelation
- QED (when one understands the superposition principle of quantum theory) predicts anticorrelation
- experiments find anti-correlation.

That's slightly less motivating for NL CFT...

I can understand, from your point of view, why you cling on your view :-))

cheers,
Patrick.
 
  • #116
There is, however, not necessarily a 1-1 relation between the solutions of the classical nonlinear field equations and the evolution equations of the quantum theory, even when starting from the quantum state that corresponds to a classical state to which the classical theory can be applied.

Indeed, as an example: in the hydrogen atom, there is not necessarily an identity between the classically calculated Bohr orbits and the solutions of the quantum hydrogen atom.


It seems you're confusing "classical" with "particle" theories. The "classical" fields I was talking about already include the "first quantized" matter fields.
 
  • #117
nightlight said:
It seems you're confusing "classical" with "particle" theories. The "classical" fields I was talking about already include the "first quantized" matter fields.

Duh ! I was simply taking an example in NR quantum mechanics to illustrate that nonlinear dynamics in the classical (Newtonian) model doesn't mean non-linearity in the corresponding quantum theory. But of course I know that we're not talking about that particular model (a few particles in Newtonian mechanics). We're talking about a classical model consisting of fields in 3D, and its associated quantum theory (QFT).

cheers,
Patrick.
 
  • #118
... The quantum state that corresponds to E is |E> and is usually orthogonal both to |E1> and |E2>.
It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields, and the Hilbert space of 1-photon states ;...


Duh. That's the point of Barut's ansatz: to show exactly how the multiparticle QM configuration space is obtainable as a linearization approximation of the nonlinear fields of his/Schrodinger's approach.

... It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields, and the Hilbert space of 1-photon states ; this is the reason why we can associate to a beam splitter a superposition of 2 quantum states which corresponds to a superposition of 2 classical E-fields (although, I repeat, 1-photon states are NOT the quantum description of classical fields ; there is only a bijective relationship between the two spaces).

It is that very confusion (between superposition of classical fields and superposition of quantum states) that makes you draw the ridiculous conclusion that quantum field theory is the piecewise linearized version of nonlinear classical field theory. As I outlined before, the space in which quantum field theory acts is IMMENSELY MUCH BIGGER (Hilbert space) than the space in which the nonlinear classical field theory acts (configuration space of classical fields). With each POINT in configuration space CORRESPONDS A WHOLE DIMENSION in Hilbert space. With each superposition in configuration space correspond ORTHOGONAL STATES in Hilbert space...


You have missed entirely the importance and implications of Barut's ansatz, and any of the points of the subsequent comments, and appear completely lost as to which "fields" or which "spaces" were being talked about at any given point. (You may need to separate some of your conceptual eigenspaces into a few subspaces with different eigenvalues.)

It is typical for differential-equation linearization procedures to introduce vast quantities of redundant functions which evolve linearly in place of a single function which evolves nonlinearly. For example, in Carleman linearization you take a nonlinear equation for a 'field' A(t) (it works the same way for regular nonlinear PDEs of very general type):

A' = F(A) ... (1)

where F is some nonlinear analytic function of the field A. You then define an infinite set of 'fields' Bn = A^n, and by differentiating the Bn and Taylor-expanding F(), you get an infinite set of first-order linear differential equations for the Bn of the type:

Bn' = Sum{ k; Mnk Bk } ... (2)

where Mnk is a numeric matrix. The infinite set of fields {Bn} evolves linearly and approximates A, which evolves nonlinearly. The whole "IMMENSELY MUCH BIGGER" set of fields {Bn} is in fact a mere reparametrization of the single field A. You would not attribute any new physics to the system described via this "IMMENSELY MUCH BIGGER" set {Bn} and its formalism (2) beyond what was already given by the single field A and its formalism (1). Any "new" effect in {Bn} that doesn't exist in A is an artifact of the approximation.
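The scheme (1)-(2) can be made concrete with a minimal toy example (my own sketch, not taken from the cited papers): for F(A) = A^2 the 'fields' Bn = A^n obey the linear system Bn' = n*B(n+1), and truncating at order N gives a finite linear system whose first component approximates the exact nonlinear solution A(t) = a0/(1 - a0*t).

```python
# Toy Carleman linearization (illustrative sketch only).  Nonlinear ODE:
# A' = A^2, exact solution A(t) = a0/(1 - a0*t).  With B_n = A^n the
# infinite *linear* system is B_n' = n * B_{n+1}; we truncate it at order N
# (setting B_{N+1} := 0) and integrate the linear system with RK4.

def carleman_a(a0, t, N=12, steps=2000):
    """First component B_1 of the truncated linear system, approximating A(t)."""
    def deriv(B):
        # (dB/dt)_n = n * B_{n+1}; index i holds B_{i+1}
        d = [(i + 1) * B[i + 1] for i in range(N - 1)]
        d.append(0.0)                 # truncation: B_{N+1} := 0
        return d

    B = [a0 ** n for n in range(1, N + 1)]   # B_n(0) = a0^n
    dt = t / steps
    for _ in range(steps):                   # classic RK4 on the linear system
        k1 = deriv(B)
        k2 = deriv([b + dt / 2 * k for b, k in zip(B, k1)])
        k3 = deriv([b + dt / 2 * k for b, k in zip(B, k2)])
        k4 = deriv([b + dt * k for b, k in zip(B, k3)])
        B = [b + dt / 6 * (a + 2 * c + 2 * d_ + e)
             for b, a, c, d_, e in zip(B, k1, k2, k3, k4)]
    return B[0]

a0, t = 0.5, 1.0
exact = a0 / (1 - a0 * t)                    # = 1.0
print(exact, carleman_a(a0, t))              # truncated linear system tracks A(t)
```

Increasing the truncation order N shrinks the gap between the linear system's B_1 and the exact nonlinear A(t), which is the point being made: the "bigger" linear system adds parameters, not physics.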

That is precisely the relation between the Maxwell-Dirac equations and the multiparticle QM formalism. The latter is a linearized approximation of the former. The entangled states are simply a result of the coarse-graining of the nonlinear evolution, which introduces an artificial indeterminism into the approximate linear evolution.

As cited before, Kowalski has shown how to convert this kind of linearization (1), (2) (for general nonlinear PDEs) to boson Fock space formalism. All the "IMMENSELY MUCH BIGGER" Fock space formalism is still just the rewritten {Bn} set, still an approximation to the single field A and its evolution (1).

Barut's ansatz does nearly the same for the regular coupled Maxwell-Dirac equations. The "IMMENSELY MUCH BIGGER" appearance is still an artifact of the approximation, the price paid for the mathematical convenience of linearity, and it brings in absolutely no new physical phenomenon that wasn't already in the original system. The apparent "immensity" of the multiparticle QM/QED parametrization of the system is the same kind of "immensity" that the infinite number of Taylor expansion terms produces for the sin(x) function.

Any deviation between the two, within finite orders N or due to interactions omitted for the sake of linearization, is just a side effect of a truncated or incomplete approximation. This was clearly exemplified with the radiative corrections, where the fix to Dirac's QED brought its predictions closer to those of the nonlinear Maxwell-Dirac field equations.
 
  • #119
I am afraid you've had a bit of an overload here, as manifested in the volume and pitch of the personal tones in your most recent long post. Frankly, that kind of exchange is a waste of everyone's time, to read or to respond to. So I'll leave you the last word.
 
  • #120
nightlight said:
Your basic problem is obvious here. The three setups are different. Which pieces of equipment you call the "apparatus" and which the "preparation" is a matter of your definitions.

Again an illustration of your misunderstanding of the basic postulates of QM: what I call the "apparatus" and what I call the "system under study" defines what goes in the Hilbert space and what goes in the Hermitian operator. Here, the system under study is the incoming EM field; the apparatus is the detector setup. I could have drawn the line somewhere else, that's true; it would have implied another separation between Hilbert space and Hermitian measurement operator. This is the famous Heisenberg cut, and we have a large liberty in placing it where we want; the predictions are the same (that's a fundamental result of decoherence theory).
But in this particular case the cut is placed in the most obvious place: the detecting system goes in the "measurement" and the EM quantum field (the system under study) goes in the "system Hilbert space".

You shouldn't take my remarks that you do not understand quantum theory as a personal insult: it is just an objective observation. If someone tells you that in standard natural number arithmetic, the operation + is not commutative, it is an obvious observation that that person doesn't understand natural number arithmetic. It is not an insult, and it can even be remedied.

The three are also incompatible setups. That the three setups produce different states in the T & R regions is true, too, but that is in addition to all the other differences. You seem to assume that only the state changes between the three setups, because the state does change, among all the other things.

Only the incoming EM field state changes, yes. The detector system and its associated electronics didn't change, and you seemed not to have any difficulty accepting that the incoming state is something of the kind |R> + |T>.

You're making the same kind of assumption that von Neumann's faulty no-HV proof used. When you have three incompatible setups, there are no grounds for claiming that your C, which predicts S1 and S2, ought to predict anything for the third incompatible setup S3.

I never studied von Neumann's faulty proof, so I cannot comment on it. But the problem here is much, much simpler than EPR situations, where the MEASUREMENT SETUP changes (Alice decides which angle to measure, for instance). That complication is not present here.
You have a FIXED measurement setup, with different incoming states, according to different preparations. There's nothing more usual in quantum theory.
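For concreteness, the standard QM prediction for this fixed measurement setup with different incoming states can be sketched numerically. This is a toy two-mode Fock-space model with idealized on/off detectors; the names `split` and `g2` and the truncation `D` are my own illustrative choices, not the actual counting electronics of the experiment.

```python
# Fixed measurement setup (50/50 beam splitter + two on/off detectors),
# different prepared incoming states: a single photon vs. a weak coherent beam.
import numpy as np
from math import comb, factorial, exp

D = 12  # Fock-space truncation per mode (ample for the weak fields used here)

def split(input_amps):
    """Single-mode state (amplitudes over |n>) through a 50/50 beam splitter
    with vacuum in the other port; returns the two-mode output amplitudes,
    using |n,0> -> sum_k 2^(-n/2) sqrt(C(n,k)) |k, n-k>."""
    psi = np.zeros((D, D), dtype=complex)
    for n, c in enumerate(input_amps):
        for k in range(n + 1):
            psi[k, n - k] += c * np.sqrt(comb(n, k)) / 2 ** (n / 2)
    return psi

def g2(psi):
    # Idealized on/off detectors: a click means "at least one photon".
    p = np.abs(psi) ** 2
    p1 = p[1:, :].sum()        # detector T clicks
    p2 = p[:, 1:].sum()        # detector R clicks
    pc = p[1:, 1:].sum()       # coincidence
    return pc / (p1 * p2)

# Single-photon input |1>: output (|1,0>+|0,1>)/sqrt(2), so no coincidences
one_photon = np.zeros(D)
one_photon[1] = 1.0
print(g2(split(one_photon)))   # prints 0.0

# Weak coherent input: factorizes into two coherent beams, the classical bound
alpha = 0.3
coh = np.array([exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(factorial(n))
                for n in range(D)])
print(g2(split(coh)))          # approximately 1.0
```

The point of the sketch: the detector operators (`g2`'s projectors) are held fixed, and only the prepared input state varies; g2 = 0 for the one-photon state and g2 = 1 for the coherent state fall out of exactly the same measurement model.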

cheers,
Patrick.
 
