Photon Wave Collapse Experiment (Yeah sure; AJP Sep 2004, Thorn)

  • #101
As you can see, 17 years of technological improvements yielded results significantly closer to the quantum predictions.

Just wait till someone figures out an even more advanced technology: how to cut the triple-coincidence unit's wire altogether, so it will give exactly the perfect 0 for g2. Why not get rid of the pesky accidentals and make it look just like rolling a marble to either DT or DR.

After all, that's how they got their g2 -- by using the 6 ns delay they had cut off the START from DT completely, blocking the triple unit from counting almost any triple coincidences other than the accidentals. Without it the experiment won't "work", and that's why the 6 ns figure is repeated 11 times in the paper -- it is the single most mentioned numeric fact about the experiment. And it is wrong, as acknowledged by the chief experimenter (and as anyone can verify from the data sheet). Isn't that a little bit funny?

Go see if they at least issue errata on their site, and say perhaps what the secret "true" delay they used was ... and what any of the 'confidential' counts used to compute Table I were. You couldn't pry any of it out of them with pliers the last time I tried. Let us know if you have any luck.
 
  • #102
ok guys I think I can see a way to reconcile this or at least
restate it in a way acceptable to both parties.

let's attempt to state this schism in terms of a "kuhnian paradigm shift".

let me postulate a new version of QM in a thought experiment.
call it "QM v 2". now vanesch, suppose I told you that
measuring simultaneous eigenstates is not forbidden in this
NEW theory. where QM predicts mutually exclusive eigenstates,
this new QM v 2 predicts that they are not mutually exclusive.

(in fact throw out my idea of measuring the binomial distribution
in coincidences--that means they are random, let's no longer imagine
that for the moment).

let's say that since QM v 1 denies they exist, we build our experiments
to throw them out if we detect them. either in the electronics or
data selection etc. now let's say QM v 2 argues that this procedure
is biasing the sample! QM v 2 uses QM v 1 as a starting point
and argues that QM v 1 is PERFECTLY VALID
for the specific samples it refers to (which is very broad, if it is given
that coincidences are rare).

however QM v 2 would assert QM v 1 is inherently referring only
to a BIASED SAMPLE by throwing out detected simultaneous eigenstates
(informally "coincidences"). ie as einstein argued, INCOMPLETE. and maybe
just maybe, the experimenters attempting to test QM v 1 vs QM v 2
are INADVERTENTLY doing things that bias the test in favor of QM v 1,
naturally being guided by QM v 1 theory.

now just putting aside all these experiments that have been done
so far, can we agree on the above? it seems to me this is the crux
of the disagreement between vanesch/nite.

further, nite argues
that the experiments so far are not really testing QM v 1 vs QM v 2,
but in fact are just testing QM v 1-- by "accidentally"
throwing out coincidences based on experimental & experimenter bias.

let's push it a little further. suppose QM v 1 is not merely "undefined"
in talking about simultaneously measured eigenstates, but goes further
and asserts they
are RANDOM. suppose QM v 2 actually can predict, in contrast,
a NONRANDOM co-occurrence.
then we have not merely a hole but a break/inconsistency
between these two theories, agreed?
one that can be tested in practice, right?

given all this, I think we can try to devise better experiments.

ps re doing a test using gamma emissions. my main question is this:
gamma rays are "photons" ie EM radiation.
has anyone ever figured out how to
make a beamsplitter for gamma rays? is that in the literature anywhere?
seems like it shouldn't be hard...? if you guys will bear with me just
a little on this, I have something up my sleeve that should be of great
interest to everyone on the thread..
 
  • #103
The linearity (and thus the superposition of fields) is violated for the field evolution in Barut's self-field approach (these are non-linear integro-differential equations of Hartree type). One can approximate these via the piecewise-linear evolution of the QED formalism.

It was indeed this discussion that made me decide you didn't make the distinction between the linearity of the dynamics and the linearity of the operators over the quantum state space.


Note that Barut has demonstrated this linearization explicitly for a finite number of QM particles. He first writes the action S of, say, interacting Dirac fields F1(x) and F2(x) coupled via the EM field A(x). He eliminates A (expressing it as integrals over the currents), thus obtaining an action S(F1,F2,F1',F2') with a current-current-only interaction of F1 and F2. Regular variation of S via dF1 and dF2 yields nonlinear Hartree equations (similar to those that Schrodinger already tried in 1926, except that Schr. used K-G fields F1 & F2). Barut now makes an interesting ansatz. He defines a function:

G(x1,x2) = F1(x1)*F2(x2) ... (1)

Then he shows that the action S(F1,F2,...) can be rewritten as a function of G(x1,x2) and G', with no leftover F1 and F2. Then he varies the action S(G,G') via dG, and the stationarity of S yields equations for G(x1,x2), also non-linear. But unlike the non-linear equations for the fermion fields F1(x) and F2(x), the equations for G(x1,x2) become linear if he drops the self-interaction terms. They are also precisely the equations of standard 2-fermion QM in configuration space (x1,x2); thus he obtains the real reason for using Hilbert-space products for multi-particle QM.



The price paid for the linearization is that the evolution of G(x1,x2) contains non-physical solutions. Namely, the variation in dG is weaker than the independent variations in dF1 and dF2. Consequently, the evolution of G(x1,x2) is less constrained by its dS=0, thus G(x1,x2) can take paths that the evolution of F1(x) and F2(x), under their dS=0 for independent dF1 and dF2, cannot.

In particular, the evolution of G(x1,x2) can produce states such as Ge(x1,x2) = F1a(x1)F2a(x2) + F1b(x1)F2b(x2), corresponding to the entangled two-particle states of QM. The mapping (1) can no longer reconstruct the exact solutions F1(x) and F2(x) uniquely from this Ge; thus the indeterminism and entanglement arise as artifacts of the linearization approximation, without adding any physical content that was not already present in the equations for F1(x) and F2(x) ("quantum computing" enthusiasts will likely not welcome this fact, since the power of qc is the result of precisely the exponential explosion of paths to evolve the qc "solutions" all in parallel; unfortunately almost all of these "solutions" are non-solutions for the exact evolution).

All this is not amazing in fact. It only means that the true solution of the classical coupled field problem gives different solutions than the quantum theory of finite particle number. That's not surprising at all, for the basic postulates are completely different: a quantum theory of a finite number of particles has a totally different setup than a classical field theory with non-linear interactions. If, by coincidence, in certain circumstances both resemble each other, that doesn't mean much. It also means that you cannot conclude anything about a quantum theory of a finite number of particles by studying a classical field theory with non-linear terms. They are simply two totally different theories.



The von Neumann projection postulate is thus needed here as an ad hoc fixup of the indeterministic evolution of F1(x) and F2(x) produced by the approximation. It selects probabilistically one particular physical solution (those that factorize Ge) of the actual fields F1(x), F2(x), which the linear evolution of Ge() cannot do. The exact evolution equations for F1(x) and F2(x) don't need such ad hoc fixups, since they always produce only the valid solutions (whenever one can solve them).

No, a quantum theory of a finite number of particles is just something different. It cannot be described by a linear classical field theory, nor by a non-linear classical field theory, except for the 1-particle case, where it is equivalent to a linear classical field theory. A quantum theory of a finite number of particles CAN, however, be described by a linear "field theory" in CONFIGURATION SPACE. That's simply the wave function. So for 3 particles, we have an equivalent linear field theory in 9 dimensions. That's Schroedinger's equation.

However, von Neumann's postulate is an integral part of quantum theory. So if you have another theory that predicts other things, it is simply that: another theory. You cannot conclude anything from that other theory to talk about quantum theory.

The same results hold for any finite number of particles, each particle adding 3 new dimensions to the configuration space and more indeterminism. The infinite-N cases (with the anti/symmetrization reduction of H(N), which Barut uses in the case of 'identical' particles for F1(x) and F2(x) as well) are exactly the fermion and boson Fock spaces of QED. For all values of N, though, the QM description in the 3N-dimensional configuration space (the product H^N, with anti/symmetrization reductions) remains precisely the linearized approximation (with the indeterminism, entanglement and projection price paid) of the exact evolution equations, and absolutely nothing more. You can check the couple of his references (links to ICTP preprints) on this topic that I posted a few messages back.

It is in fact not amazing that the linear field theory in 3 dimensions is equivalent to the "non-interacting" quantum theory... up to a point you point out yourself: the existence, in quantum theory, of superpositions of states, which disappears, obviously (I take your word for it), in the non-linear field theory. In quantum theory, their existence is EXPLICITLY POSTULATED, so this already proves the difference between the two theories.

But all this is about "finite number of particles" quantum theory, which, we also know, can only be non-relativistic. Quantum field theory is the quantum theory of FIELDS. So this time the configuration space is the space of all possible field configurations, and each configuration is a BASIS VECTOR in the Hilbert space of states. This is a HUGE space, and it is in this HUGE SPACE that the superposition principle holds, not in the configuration space of fields.
For ANY non-linear field equation (such as Barut's, which simply sticks to the classical equations at the basis of QED) you can set up such a corresponding Hilbert space. If you leave the field equations linear, this corresponds to the free-field situation, and this corresponds to a certain configuration space, and to it corresponds a quantum field Hilbert space called Fock space. If you now assume that the *configuration space* for the non-linear field equations is the same (not the solutions, of course), this Fock space will remain valid for the interacting quantum field theory.

There is, however, not necessarily a 1-1 relation between the solutions of the classical non-linear field equations and the evolution equations in the quantum theory, even if one starts from the quantum state that corresponds to a classical state to which the classical theory can be applied. Indeed, as an example: in the hydrogen atom, there is not necessarily an identity between the classically calculated Bohr orbits and the solutions to the quantum hydrogen atom. But of course there will be links, and the Feynman path-integral formulation makes this rather clear, as is well explained in most QFT texts. Note that the quantum theory has always MANY MORE solutions than the corresponding classical field theory, because of the superposition principle.

However, all this is a digression, though I've already been through this with you. At the end of the day, it is clear that classical (non-linear) field theory and its associated quantum field theory ARE DIFFERENT THEORIES. The quantum field theory is a theory which has, by postulate, a LINEAR behaviour in an INFINITELY MUCH BIGGER space than the non-linear classical theory. It allows (that's the superposition principle) many more states as physically distinct states than the classical theory. The non-linearity of the interacting classical field theory is taken into account fully by the relationships between the LINEAR operators over the Hilbert space.

In the case h->0, all the solutions of the non-linear field equations correspond to solutions of the quantum field theory. However, the quantum field theory has many MORE solutions, because of the superposition principle. Because the problem is hugely complicated (much more complicated than the non-linear classical field equations), one approach is by Feynman diagrams. But there are other techniques, such as lattice QFT.

QED is such a theory, and it is WITHIN that theory that I've been giving my answers, which stand unchallenged (and given their simplicity it will be hard to challenge them :-). The linearity over state space (the superposition principle), together with the correspondence between any measurement and a Hermitian operator, as set out by von Neumann, are an integral part of QED. So I'm allowed to use these postulates to say things about predictions of QED.

We can have ANOTHER discussion over Barut's approach. But it is not THIS discussion. This discussion is about you denying that standard QED predicts anti-correlations in detector hits between two detectors, when the incoming state is a 1-photon state. I think I have demonstrated that this cannot be right.

cheers,
Patrick.
 
  • #104
I said:

If you understood this, and found the answer, you will have gained a great insight in quantum theory in general, and in quantum optics in particular :-)

Why does this only work if the incoming states on the beam splitter are 1-photon states?


Your "this" blends together the results of actual observation (the actual counts and their correlation, call them O-results) with the "results" of the abstract observable C (C-results). To free you from the tangle, we'll need finer-resolution conceptual and logical lenses. The C-results are not the same as the O-results.

That would then simply mean that one made a mistake. To every "actual" observation corresponds, by postulate, AN OPERATOR, and that operator, taking into account ALL EXPERIMENTAL DETAILS, is:
- Hermitian
- has as eigenvalues all possible experimental outcomes
- etc...

It can be very difficult to construct exactly that operator, so obviously one often makes approximations, idealisations etc... That's nothing new in physics. It takes some intuition (or a lot of work) to know what is essential, and what is not. But that's just a calculational difficulty. In principle, to ALL observations (actual, real) corresponds a Hermitian operator.

There is nothing in the abstract QM postulates that tells you what kind of setup implements C or, for a given setup, what kind of post-processing of O-results yields C-results. The postulates just tell you C exists and can be implemented.

Well, that's all I needed! That, plus two facts:
- that the outcome of a single "correlation test" (one time slice) gives 0 or 1;
- that, when you have only one beam (by putting in a full mirror, or removing the splitter), your correlation test gives 0 with certainty.

Now, finally, your observable C. Nothing in the QM axioms specifies or limits how C must be computed from the r-counts, and certainly nothing requires that the computed C-values be the same as the O-values (the observed r-counts) on a 1-by-1 basis. QM only says such a C exists and can be mapped to the experimental data. The fact that C "predicts" indeed the average r-counts <r> for the setups with "mirror" or with "transparent" has no implication for the setup with the PBS.

It does! In the case of a PBS, the outgoing state, when the ingoing state is a 1-photon state, is a superposition of the 1-photon state "left" and the 1-photon state "right" (you seem to have accepted that).

That's sufficient, because the "left" state and the "right" state are both eigenstates with eigenvalue 0.


Thus your requirement of exclusivity "only for the single-photon state" is an additional ad hoc requirement, an extra control variable for the C-observable mapping algorithm, instructing it to handle the case of EM fields with <[n]>=N=1 differently than the case N != 1 (N need not be an integer).

Absolutely not. That was the exercise! I didn't have to SPECIFY it; the exercise was to find WHY this is so. Apparently you didn't find the answer (which is not surprising, as you have a big confusion on the issue). If you had found it, you would have shown you understood quantum optics much better than I thought you did :-)

So here's the answer:

It is only for incoming 1-photon states that a PBS has as outgoing state a superposition of 1-photon states, which are the states one can obtain by replacing the PBS by a mirror, or by removing it.

However, if you send a 2-photon state onto a PBS, out comes a superposition which can be written as follows:

a |T,T> + b |T,R> + c |R,R>

Here, |T,T> is the state with 2 transmitted photons, etc. If we replace the PBS by a mirror, we only have |R,R>, and if we remove it, we have |T,T>, and it is only for those states that we know that we have an eigenstate with eigenvalue 0.

So the state entering the detectors with the PBS in place is NOT a superposition of the (full mirror) case and the (nothing) case. As such, the state coming out of the PBS doesn't stay within the 0-eigenspace. Indeed, the |T,R> state is another, orthogonal Fock state, and will not be in the 0-eigenvalue space (if the detectors are perfect, it will be an eigenvector with eigenvalue 1, but that's not necessary for the argument).

You can apply similar reasoning to n-photon states. It is only in the special case of the 1-photon state that the state after splitting is a superposition of the "exclusive" cases (left and right).
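A small numerical check of the above -- my own sketch, not anything from the thread or the AJP paper: build two truncated Fock modes for the T and R arms, apply a 50:50 beamsplitter unitary, and list the output components. The Fock cutoff, the exp[theta(A†B - AB†)] generator convention and the use of Python/scipy are all my choices.

import numpy as np
from scipy.linalg import expm

d = 4                                      # Fock cutoff per mode (assumption)
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # annihilation operator: a|n> = sqrt(n)|n-1>
I = np.eye(d)
A, B = np.kron(a, I), np.kron(I, a)        # operators for the T and R output arms

K = A.conj().T @ B - A @ B.conj().T        # anti-Hermitian beamsplitter generator
U = expm((np.pi / 4) * K)                  # 50:50 splitter

def ket(m, n):                             # two-mode Fock state |m,n>
    v = np.zeros(d * d)
    v[m * d + n] = 1.0
    return v

for m in (1, 2):                           # 1-photon and 2-photon inputs in one arm
    out = U @ ket(m, 0)
    terms = [(i, j, out[i * d + j]) for i in range(d) for j in range(d)
             if abs(out[i * d + j]) > 1e-9]
    print(f"{m}-photon input:", [f"|{i},{j}> p={amp**2:.2f}" for i, j, amp in terms])

The 1-photon input produces only |1,0> and |0,1> components (never |1,1>, i.e. never both detectors), while the 2-photon input produces the |1,1> component with probability 1/2 -- the a|T,T> + b|T,R> + c|R,R> structure above with b nonzero.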

cheers,
Patrick.

PS: that said, I have learned more quantum optics with you than with anybody else; just by you defying me, I go and read a lot of stuff :-) It is in fact a pity that you don't master the essentials, yet know a lot of publications about lots of "details". Your work on Barut and de Santos would gain from "knowing the enemy better" ;-)
 
  • #105
vzn said:
however QM v 2 would assert QM v 1 is inherently referring only
to a BIASED SAMPLE by throwing out detected simultaneous eigenstates
(informally "coincidences"). ie as einstein argued, INCOMPLETE. and maybe
just maybe, the experimenters attempting to test QM v 1 vs QM v 2
are INADVERTENTLY doing things that bias the test in favor of QM v 1,
naturally being guided by QM v 1 theory.

now just putting aside all these experiments that have been done
so far, can we agree on the above? it seems to me this is the crux
of the disagreement between vanesch/nite.

1. There are no results being discarded, other than that a time window is applied; i.e. there is no subtraction of accidentals. The time window begins BEFORE the gate fires and is easily wide enough to pick up triple coincidences. That this is true is seen from the fact that the T and R detectors separately (and equally) fire 4000 times per second within this same window. Given this rate for the double coincidences, there should be about 160 triple coincidences if the classical theory held (a quick check of this arithmetic follows below). The actual number was 3. Clearly, there is anti-correlation of the T and R detections, and the reason has nothing to do with the window size.

2. The disagreement goes a lot deeper than this experiment. Nightlight denies the results of most any experiment based on entangled photon pairs, i.e. Bell tests such as Aspect's. He is a diehard local realist as far as I can determine, and such tests offend local-realist sensibilities. (Nightlight, if you are not a local realist then please correct me.) Vanesch knows that IF there were a QM1 and a QM2 whose difference could be detected by this kind of test, then it would be. That is because he abides by the results of scientifically conducted experiments regardless of their outcome. I wouldn't expect much movement on the part of either of them.
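For concreteness, here is the arithmetic behind the rates in point 1 as a short script. The gate (singles) rate is not quoted in this post, so N_G = 100,000 per second is my assumption (the value consistent with the quoted 160 expected triples); the last line is the g2 ratio that the thread refers to as the (AJP.9) prediction, as I read it.

# N_G is assumed; the other numbers are the rates quoted in point 1 above.
N_G   = 100_000    # gate singles rate (assumption)
N_GT  = 4_000      # gate & transmitted coincidences
N_GR  = 4_000      # gate & reflected coincidences
N_GTR = 3          # observed triple coincidences

expected_triples = N_GT * N_GR / N_G     # classical/independent expectation: 160.0
g2 = N_GTR * N_G / (N_GT * N_GR)         # ~0.019, versus the classical bound g2 >= 1
print(expected_triples, round(g2, 3))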
 
  • #106
hey guys. I suggest everyone calculate the following. it's
a very sensitive calculation, not very easy to pull off,
but I did something very similar via empirical/computational experiments/simulations many years ago. the details are in a paper.

suppose that whenever a 1-photon goes through a polarizing beamsplitter,
with 2 detectors, one on each branch, it has a very small probability of being
detected by both the H and V (horizontal and vertical) detectors.

question: how small would this probability have to be to preserve
the possibility of locality in bell experiments?

the answer is surprising & tends to support nite's general thesis.
the answer is apparently, "very little"
 
  • #107
another thought. I am not sure who 1st came up with
the idea of using photons for the EPRB experiment.
maybe bell or bohm? the original experiment imagined "particles"
eg an atom.

very,very few non-photon tests of bell's experiment have
been done. it's a very tricky test. there is a recent one, but
thats another topic (or can of worms).

but let's consider the particle-based version of the experiment.
it considers a particle, say an electron, going through stern-gerlach
detectors.

so it's interesting to ask, what is the analog of the GRA
(grangier roger aspect) experiment with particles? it would be something
like sending an electron through a stern-gerlach apparatus and
showing that you only measure "spin up" or "spin down" exclusively,
never both simultaneously.

nite's argument, translated, would tend
to suggest that you won't. what do you think nite, how would your
point be rephrased wrt stern gerlach measurements? are you arguing
you would get a small coincidence detection there also?

because what's interesting (and this is why photons were substituted
for mass-based particles in bell tests): the math is exactly the same!
the same math that predicts photon anticorrelation in the GRA experiment
would predict mutually exclusive measurements in spin up and spin
down stern-gerlach measurements of mass-based particles.

my general question,
has anyone done that experiment (recently)? ie attempted to measure
the "anticorrelation" coefficient of spin-up & spin-down particles?
my guess is that the old stern-gerlach
experiments from the early part of the last century
had low precision that probably has not been
improved on via more recent technology (after there was no
more theoretical interest in the phenomenon).
 
  • #108
vzn said:
hey guys. I suggest everyone calculate the following. it's
a very sensitive calculation, not very easy to pull off,
but I did something very similar via empirical/computational experiments/simulations many years ago. the details are in a paper.

suppose that whenever a 1-photon goes through a polarizing beamsplitter,
with 2 detectors, one on each branch, it has a very small probability of being
detected by both the H and V (horizontal and vertical) detectors.

question: how small would this probability have to be to preserve
the possibility of locality in bell experiments?

the answer is surprising & tends to support nite's general thesis.
the answer is apparently, "very little"

That is false. The correlation rate at 22.5 degrees is .8536, while Bell's Inequality yields a maximum value of .7500 at the same angle. (You don't need any time-varying settings for this to be true if you are a local realist.) Thus at least 10% of the sample must be skewed in exactly the right direction to give the wrong answer.

But that is not all. The skewing must switch the other way when you measure at 67.5 degrees, because the relationship reverses and the QM value of .1464 is less than the LR prediction, which must be at least .2500. So the measurement bias must not only be large, it must be sensitive to the angle as well, and even switch sign!

All of which is TOTALLY beside the point, because the latest experiments don't subtract accidentals anyway. Per Weihs et al., 1998: "We want to stress again that the accidental coincidences have not been subtracted from the plotted data."

http://arxiv.org/PS_cache/quant-ph/pdf/9810/9810080.pdf
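A quick check of the two angles quoted above (a sketch under the usual conventions: the QM matching rate at relative angle theta is cos^2(theta), and the local-realist figure quoted in the post corresponds to the straight-line value 1 - theta/90 between 0 and 90 degrees):

import numpy as np

for theta in (22.5, 67.5):
    qm = np.cos(np.radians(theta)) ** 2      # QM prediction: 0.8536 and 0.1464
    lr = 1.0 - theta / 90.0                  # straight-line local-realist value: 0.75 and 0.25
    print(theta, round(qm, 4), round(lr, 4), round(qm - lr, 4))

The gap is about +0.10 at 22.5 degrees and about -0.10 at 67.5 degrees, which is the roughly 10%, sign-switching bias described above.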
 
  • #109
vzn said:
nite's argument, translated, would tend
to suggest that you won't. what do you think nite, how would your
point be rephrased wrt stern gerlach measurements? are you arguing
you would get a small coincidence detection there also?

because what's interesting (and this is why photons were substituted
for mass-based particles in bell tests): the math is exactly the same!
the same math that predicts photon anticorrelation in the GRA experiment
would predict mutually exclusive measurements in spin up and spin
down stern-gerlach measurements of mass-based particles.

This test might be OK as a Bell test, but it would not serve as proof that electrons are quantum, non-classical particles, because the classical view is that electrons are individual particles anyway. On the other hand, a double-slit experiment (with an electron) is a pretty good way to show that a classical particle such as an electron can be made to act like a wave. Classical theory has a problem modeling this (wave-particle duality). Although there are probably those who might try...
 
  • #110
This test might be OK as a Bell test, but it would not serve as proof that electrons are quantum, non-classical particles, because the classical view is that electrons are individual particles anyway.

Another view is that electrons are Dirac-equation matter fields, as in Jaynes's and Barut's classical field theories. This melding of "classical" and "particle" is what various 'classical limits' of QM do -- they take a particle limit and call it a classical limit.

On the other hand, a double-slit experiment (with an electron) is a pretty good way to show that a classical particle such as an electron can be made to act like a wave.

Because it is a wave (check the Dirac equation), just configured so it won't fall apart. There are many ways that can be done, especially in nonlinear theories, such as the coupled Maxwell-Dirac or Maxwell-Schrodinger equations, which are not linear (they become linear only in the approximations of "external fields" and "external currents", which is how they're normally approximated for regular QM and regular Maxwell ED; the QED scheme then reintroduces these omitted back-interactions by simulating them via the scattering matrix, thus it represents a piecewise-linearization approximation of the original nonlinear Maxwell-Dirac equations). For the linear equations Barut had used wavelet solutions, which are nonspreading for free electrons.

Classical theory has a problem modeling this (wave-particle duality). Although there are probably those who might try...

Again a concept meld of "classical" with "particle" models. The two are different. Thus, for example, the classical models of QM are not the same as, and generally have nothing to do with, particle models (although they could be).
 
  • #111
Why does this only work if the incoming states on the beam splitter are 1-photon states?

Your whole reasoning in the "answer" and the "question" is based on the very premises being disputed. In your photon-speak the finite DT can absorb the 1-photon state |Psi.1>, leaving vacuum as the EM state after the process. I say it can't (and I explained why, at length).

And what is "this" above? If it is the C operator, then indeed it is obvious why: you can define it, if you wish, to return 0 for the single-photon superposed state. There is nothing in your reasoning about the PBS experiment that requires such a conclusion, unless one shares your particular misunderstanding of how DT absorbs the whole mode |Psi.1>.

Just because your C "predicts" the outcomes of the "mirror only" or "transparent only" setups has no bearing on whether it predicts the third setup with the 50:50 splitter. Note, as already pointed out, that the 'classical probabilities' p00..p11 given earlier also predict the correct results (the perfect anticorrelation) in the limits of the beam splitter as the split ratio T:R varies continuously, thus including the cases of T only and R only.

There is no reason (that you have shown) why your C ought to be interpreted as the operator predicting anything for the beam splitter setup. This is the same kind of trap of being carried away by the formalism that von Neumann's impossibility proof fell into. Read Bell's critique so you can avoid that kind of leap in the future.

To every "actual" observation corresponds, by postulate, AN OPERATOR, and...

The trick is which operator corresponds to which setup and which detectors. The whole difference between the "naive" interpretation (such as that of the AJP paper, or yours) of the QO prediction g2=0 and the correct one is in what size of detector it applies to.

My point (see Q-b earlier) is that G2=0 corresponds operationally to a trivial and perfectly classical setup: a single large detector capturing the whole |Psi.1> while the aperture of the second detector receives no incident field.

You need to note that (AJP.9) doesn't have any parameter to specify the size of the detector; it is an idealization for an infinite detector. One way to deal with that is to look at what kind of setup the idealization works correctly for, where the simplifications (of infinite detectors) matter the least. That was my first answer --- if you take a large detector covering both the T and R paths, then indeed you can absorb the whole mode |Psi.1>=|T>+|R>, thus the presumed full-mode annihilation is OK. Another angle is to look for another model that accounts better for the finite detectors and finite fields of the actual setup (which is, recall, the entire difference being debated: which setup the g2=0 prediction models).

To ALL observations (actual, real) corresponds a Hermitian operator.

Yep, that still doesn't make your particular operator C a model for the DG & DT setup with the 50:50 beam splitter. A plain classical model predicts the same anticorrelation for the two limiting cases, too.
 
  • #112
QM only says such a C exists and can be mapped to the experimental data. The fact that C "predicts" indeed the average r-counts <r> for the setups with "mirror" or with "transparent" has no implication for the setup with the PBS.

It does! In the case of a PBS, the outgoing state, when the ingoing state is a 1-photon state, is a superposition of the 1-photon state "left" and the 1-photon state "right" (you seem to have accepted that).

That's sufficient, because the "left" state and the "right" state are both eigenstates with eigenvalue 0.

That shows a complete conceptual meld of the operator C and its C-results with the setup and its O-results. The existence of a mapping does not say anything that ties your operator C to this setup with the PBS.

The trivial fact that the operator C acts in a particular way, so that in two setups its C-results match the O-results of those two setups, has no bearing or implications for the third setup, with the PBS placed in, and its results. You also can't use the presumed results of the PBS case (such as applying the infinite-detector version of AJP.9 to the DT & DR setup) to define your C, and then come back and declare that C shows QED predicts anticorrelation for finite DT and DR with the 50:50 PBS.

After all, the probabilities p00..p11 I gave match (in the limits of the T:R split ratio varying from 0:1 to 1:0) the "mirror" and "transparent" O-values, giving perfect anticorrelation for those cases as well.

You really need to go and study [4] (where the Gn() came from), its physics and its assumptions about the absorption processes involved. Then you can verify for yourself, as sketched earlier, that the prediction QED makes here (for finite DT and DR) is a regular classical correlation. The little circular correspondences between the observable C and the AJP.9 predictions don't prove anything about the QED prediction in this setup.

In the process you will also discover that Glauber's "non-classicality" of the Gn()'s is merely a mislabeling of the "non-correlating" properties of the Gn(), i.e. his Gn() "correlations" are not correlations at all, and the cases where they manifest their "non-correlating" side, such as Fock states, are precisely those labeled "non-classical".


Absolutely not. That was the exercise! I didn't have to SPECIFY it; the exercise was to find WHY this is so. Apparently you didn't find the answer (which is not surprising, as you have a big confusion on the issue).

If I only shared your particular kind of misunderstanding of the photo-detection of |Psi.1>, yep, I could have handwaved my way to your wrong "answer" too. The whole exercise is based on the same premises we're disputing. We don't agree on what the results with finite DT and DR will be in the first place. Your leaping back and forth between the imagined results on |Psi.1> with DT and DR and the properties of the "observable" C adds nothing to your defense of the wrong interpretation and the wrong prediction for this setup. It is a circular "proof" (if one can call it that at all).

You can construct the operator C any way you wish and "assign" it to the 3 setups. That doesn't mean it has anything to do with the results in the PBS setup, just because in the other two the C-values match the observed O-values.

The behavior in the T-only and R-only cases also doesn't mathematically imply any special relation to |Psi.1> as opposed to |Psi.2>... The only way you introduce such requirements later is by including the operational mapping assumptions about photon absorptions of |Psi.1> on DT and DR as given via (AJP.9), thus its "predictions" for this setup to which it doesn't apply, in which case |Psi.1> behaves differently than, say, |Psi.2>.

Your complete conceptual meld of the C "observable" and C-values with the PBS experiment and its O-values doesn't allow you to realize that you're even making an operational mapping (since it is all the same thing under low-res glasses), and that you're then borrowing the presumed (and disputed) O-results from the PBS experiment to amend the mathematical properties of your C so it can have the special 1-photon behavior. Just because you can write down (<R|+<T|) C (|R>+|T>), it doesn't mean you have any grounds for using the imagined (and incorrect, at that) results of the PBS experiments to refine the specification of C.

I was talking about the operational mapping of C to this setup, a process you don't even recognize exists -- what kind of computational algorithm Alg_C(O-values) you need in order to make C behave as a model of the Alg_C(O-values) output for each of the setups. Since I don't think the PBS case will behave differently for |Psi.1> or |Psi.2> (other than a change of intensity), it is an arbitrary requirement. Your later "deduction" of it assumes the same DT and DR behavior we already disagree about.

Your work on Barut and de Santos would gain from "knowing the enemy better" ;-)

I was in the "enemy" camp for a few years after grad school. I stayed puzzled and confused about QM (the way you are now) for several years after leaving academia. It was several years later that I visited a QO instrumentation lab (where my wife was VP of engineering) and chatted there with the engineers and experimental physicists about their QO stuff. Within days I realized that back at school I'd had no clue what I was talking about when playing with the formalism spiced with photon-speak and the rest of the QM "measurement" handwaving (as you and many others are doing now). The same weekend I "invented" what turned out to be Pearle's 1970s variable-detection model (which I had never heard of at school)...

I know the "enemy" and it is barely clinging to the edge of a cliff. When the journals have to censor perfectly legitimate and valid work by folks like Marshall & Santos (even Barut had to publish much in ICTP preprints), so that the "heretical" view won't challenge the party-line "truth", you know whose days are numbered. The whole present QM nonsense (kept propped up in substance by just a few well-networked zealots in the right places; most other physicists don't care or know much either way and at most pay lip service to the party line), with its 'nonlocality' and 'nonclassicality', will be laughed at before too long. 't Hooft already thinks the whole Bell-inequality argument etc. has no relevance for physics, and he is happily playing with local deterministic models.
 
  • #113
nightlight said:
I know the "enemy" and it is barely clinging to the edge of a cliff. When the journals have to censor perfectly legitimate and valid work by folks like Marshall & Santos (even Barut had to publish much in ICTP preprints), so that the "heretical" view won't challenge the party-line "truth", you know whose days are numbered. The whole present QM nonsense (kept propped up in substance by just a few well-networked zealots in the right places; most other physicists don't care or know much either way and at most pay lip service to the party line), with its 'nonlocality' and 'nonclassicality', will be laughed at before too long. 't Hooft already thinks the whole Bell-inequality argument etc. has no relevance for physics, and he is happily playing with local deterministic models.

The key flaw in your meltdown prediction is the fact that QM is moving forward, not backward. Even a "wrong" theory can be very useful.

When 't Hooft has something specific to talk about, I'll be listening for sure. But his opinions alone are still opinions. Hey, I don't care whether he is Republican or Democrat either (or even a US citizen for that matter) !

In the meantime: g2(actual) = .018, which is not >= 1; and S(CHSH actual) = 2.73, which is not <= 2. Photons display quantum behavior, and local hidden variables do not exist. That remains true until something more useful comes along.
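For reference, the ideal QM value behind that 2.73 figure comes out as follows (a sketch using the textbook cos(2*angle) polarization correlation and the standard 0/45/22.5/67.5-degree CHSH settings; neither the formula nor the settings are taken from this thread):

import numpy as np

def E(a_deg, b_deg):                      # ideal polarization correlation, perfect detectors
    return np.cos(2 * np.radians(a_deg - b_deg))

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5      # standard CHSH analyzer settings (assumption)
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3))                        # 2.828 = 2*sqrt(2); LHV bound 2; quoted measurement 2.73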
 
  • #114
It was indeed this discussion that made me decide you didn't make the distinction between the linearity of the dynamics and the linearity of the operators over the quantum state space...

Because you have a single conceptual slot for "all those fields". I can imagine getting confused under such a constraint.

All this is not amazing in fact. It only means that the true solution of the classical coupled field problem gives different solutions than the quantum theory of finite particle number.

You missed the key point. They are different, of course. But one of them is an explicit linearization approximation of the solutions of the other. They're not just two different, nearly equivalent formalisms sitting side by side.

One of them, the Hilbert product space of QM, is a linearized approximation of the other -- its indeterminism and entanglement are solely the mathematical consequences of the weaker form of variation of the same action. You vary the action in only a subset of the ways that the exact dynamics does -- thus you're finding false minima, since your variation is only in dG, not in dF1 and dF2 (via (1), independent dF1 and dF2 generate variations dG of the form dF1*F2 + F1*dF2, but not the reverse). With dG you're examining the values of S at fewer nearby points before declaring a stationary point. So you will declare as "solutions" functions G(x1,x2) that don't truly make the action stationary, but only appear to if you don't look at all the nearby paths, only a subset.

That is a form of roughening-up or coarse-graining of the solutions, like calculating an integral with trapezoids or approximating a function with the first two terms of its Taylor series. You wouldn't, after approximating an integral with trapezoids, claim that the coarse-grained value you got gives you some new physics that the exact integral doesn't already have, and can't have even in principle.

Another point you missed (especially if you check his papers) is that his nonlinear equations replicate the QED "radiative corrections" to order alpha^4, at least. The keyword is "corrections" -- corrections to the original Dirac Fock-space QED, the same one that is an approximation (for finite N) to Barut's nonlinear (coupled Maxwell-Dirac) fields.

So the original Dirac QED (a) was inaccurate and (b) is a linearized approximation (which dropped the self-interaction terms) to Barut's nonlinear fields. Then the QED of the 1940s-50s discovers radiative corrections, which reintroduce the dropped self-interaction terms into Dirac's QED, and suddenly these new corrections make the new QED (a') more accurate and (b') closer to Barut's self-field results.

So you can't say that multiparticle QM is in any way at the same level as, much less superior as a fundamental theory to, Barut's nonlinear fields. The a->a' and b->b' show you their true relation. Also, had Schrodinger not abandoned the same theory (after the K-G equation version didn't work there, and he also used only the H ground state in the iteration instead of the full set of energy eigenstates), he could have had, back in 1927 with his wave mechanics (after replacing the K-G with the Dirac equation), the 9 digits of radiative corrections that came to regular QED only in the late 1940s. The Hilbert-space products of multiparticle QM are clearly inferior as a fundamental theory to the Schrodinger-Fermi-Jaynes-Barut approach (to say nothing of QM's interpretative and conceptual tangles, propagated into QFT, which are entirely nonexistent in the coupled Maxwell-Dirac field theory).

This discussion is about you denying that standard QED
predicts anti-correlations in detector hits between two detectors, when the incoming state is a 1-photon state.
I think I have demonstrated that this cannot be right.


You haven't demonstrated anything of the sort. There is no basis for your operator C to operationally map as an observable to the 50:50 PBS setup at all (much less to start using imagined "results" of the PBS setup). At least the numerator of AJP.9, the G2, has some grounds to expect a mapping here, being derived dynamically in [4]. But that one doesn't map to the finite DT as an absorber of |Psi.1> either, since that kind of finite |Psi.1> doesn't work as a mode which a finite DT can absorb under the dynamical model of [4], as discussed at length.

Note that for the finite-EM & finite-detector limited version of the annihilators, I left them formally defined for all k (with the understanding that you don't plug out-of-range fields, such as any frequency you wish, into the resulting correlation functions and still expect such terms to map into coincidence rates, or to be valid at all).

In principle, one can put such restrictions (which were already made and are there, from [4]) explicitly and formally into the equations, e.g. by attaching factors of the type 1/theta(w_max - w) to the given a_k, where theta(x)=0 for x<=0 and theta(x)=1 for x>0.

That automatically produces a 0 in the denominator when you plug a frequency w >= w_max into your expansions, thus making the correlation functions formally undefined for such w's (which are already assumed out of range in the derivation [4]) and precluding arbitrary expansions (the actual detectors, too, already include high- and low-frequency cutoffs, so arbitrary plane-wave annihilators a_w don't model their absorptions either).

Other restrictions of the derivation [4] can be put in as well, making the Gn()'s formally undefined expressions when the assumed restrictions for Gn() are violated.

Note that similar formal expansions often ignore any such limits and the results may still be fine. In general, though, such formal expansions can make the result invalid if the restrictions assumed in deriving Gn() are violated.
 
  • #115
nightlight said:
There is no reason (that you have shown) why your C ought to be interpreted as the operator predicting anything for the beam splitter setup.

There is: it is the same measurement apparatus: same detectors, same electronics, same everything. The only difference is the incoming field state to the apparatus, which has two holes, one for the T beam and one for the R beam; and one outgoing result: a binary logic signal, 1 or 0 (two simultaneous hits, or not).
With that apparatus corresponds a Hermitian operator.
If the incoming state is a superposition of two others, then quantum theory, from a very elementary and fundamental point of view, fixes the results for the superposition when we know the results for the individual component states.
This is so elementary that if you dispute it, you haven't understood the fundamentals of quantum theory at all. So I will stop repeating it.

But we knew that already: you confuse superposition of classical fields with superposition of quantum states. I'm pretty sure you are convinced that if E1(r) is a classical field and E2(r) is a classical field, and to E1 corresponds a quantum description |E1> and to E2 corresponds a quantum description |E2>, then in all generality to the classical field E = a E1 + b E2 corresponds the quantum state a |E1> + b |E2>.
THIS IS NOT TRUE AT ALL.
The quantum state that corresponds to E is |E>, and it is usually orthogonal to both |E1> and |E2>. It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields and the Hilbert space of 1-photon states; this is the reason why we can associate to a beam splitter a superposition of 2 quantum states which corresponds to a superposition of 2 classical E-fields (although, I repeat, 1-photon states are NOT the quantum description of classical fields; there is only a bijective relationship between the two spaces).

It is that very confusion (between superposition of classical fields and superposition of quantum states) that makes you draw the ridiculous conclusion that quantum field theory is the piecewise-linearized version of nonlinear classical field theory. As I outlined before, the space in which quantum field theory acts is IMMENSELY MUCH BIGGER (Hilbert space) than the space in which the nonlinear classical field theory acts (the configuration space of classical fields). With each POINT in configuration space CORRESPONDS A WHOLE DIMENSION in Hilbert space. With each superposition in configuration space correspond ORTHOGONAL STATES in Hilbert space.
So in no way is one an "approximation" of the other. QFT is immensely more complex than non-linear CFT.
The "piecewise" linearisation of Feynman diagrams has not much to do with a piecewise-linear approximation of a solution in CFT. But this is impossible to explain to someone who doesn't even understand the difference between superposition in configuration space (something related to specific dynamics) and superposition in Hilbert space (which is a fundamental postulate of quantum theory).
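To make the distinction concrete, here is a small numerical sketch of my own (not from the post): using coherent states as stand-in quantum descriptions |E1>, |E2> of two classical amplitudes -- the amplitudes 2 and -2 and the Fock cutoff are arbitrary choices -- the state corresponding to the summed classical field E1+E2 is far from the normalized quantum superposition of |E1> and |E2>.

import numpy as np
from math import factorial

def coherent(alpha, dim=30):
    # coherent state |alpha> in a truncated Fock basis (dim is an arbitrary cutoff)
    n = np.arange(dim)
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt([factorial(k) for k in n])

a1, a2 = 2.0, -2.0
summed = coherent(a1 + a2)                    # state for the summed classical amplitude E1+E2
superposed = coherent(a1) + coherent(a2)      # quantum superposition |E1> + |E2>
superposed /= np.linalg.norm(superposed)

print(round(abs(np.vdot(summed, superposed)), 3))   # ~0.19, nowhere near 1

Superposing the classical amplitudes and superposing the corresponding quantum states give very different states, which is the point being made above.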

The trick is which operator corresponds to which setup and which detectors.

No, to an unaltered measurement setup corresponds an operator. The beam splitter is not part of the measurement setup, but changes the incoming states to an unaltered measurement setup (two detectors and some electronics). So I am allowed to use the operator which corresponds to the measurement setup, and it is the same in the 3 cases.
The very fact that you deny this means, again, that you haven't understood the basic premises of quantum theory.

You need to note that (AJP.9) doesn't have any parameter to specify the size of the detector; it is an idealization for an infinite detector.

This is again a fundamental misunderstanding on your part. For instance, the C operator I've been talking about DOES take into account all sizes, efficiencies and everything of a specific detector setup. Even the errors in the electronics and all. The fact that I write it abstractly as "C" doesn't mean that it cannot stand for a complicated expression. In the same way, the eigenspaces of the D1 and D2 operators (in one of my earlier posts) are strongly dependent on the exact sizes, efficiencies and physical construction of the detectors. I just don't write it all down explicitly. It is just some abstract notation.

One way to deal with that is to look at what kind of setup the idealization works correctly for, where the simplifications (of infinite detectors) matter the least. That was my first answer --- if you take a large detector covering both the T and R paths, then indeed you can absorb the whole mode |Psi.1>=|T>+|R>, thus the presumed full-mode annihilation is OK. Another angle is to look for another model that accounts better for the finite detectors and finite fields of the actual setup (which is, recall, the entire difference being debated: which setup the g2=0 prediction models).

Again, I don't need such an idealisation. Finite detectors, as long as I use the same setup for the 3 cases, R, T and PBS, are sufficient.

That still doesn't make your particular operator C a model for the DG & DT setup with the 50:50 beam splitter. A plain classical model predicts the same anticorrelation for the two limiting cases, too.

It does. Because the operator describes the measurement apparatus: the two detectors and the electronics. It is the same in the 3 cases, so I can use the same operator.

You know, the very fact that you do not attack |psi1> = 1/sqrt(2)(|R> + |T>), but instead try to attack the operator "C" corresponding to the measurement setup, means that you didn't understand:
1) the essence of the argument about the superposition of states and how it is fundamentally related to the basic premises of quantum theory;
2) the confusion you have between superposition of fields and of quantum states;
3) the misunderstanding you have about the postulates of a measurement in quantum theory (namely that the operator corresponds to the measurement apparatus and not to the entire setup, of which part prepares the INCOMING STATE and part does the measurement).

This is at the basis of your erroneous conclusion that quantum field theory is an approximation scheme for nonlinear classical field theory.
Now, because you are convinced that nonlinear classical field theory is the "correct" QED, you also claim that the predictions of NL CFT are necessarily the "less naive" predictions of QED, and as such you draw the conclusions that:

- people calculating QED predictions use "naive models"
- "true QED" makes other predictions, namely no anti-correlations.

As such, you make some serious mistakes, which can be summarized as follows:

- classical field theory predicts no anti-correlation (this is correct)
- your classical non-linear field theory also predicts no anti-correlation (I take your word for it).
- QED, as it is a linearised approximation of the above, must also predict no anti-correlation (that's your fundamental misunderstanding)
- As you now think that non-linear field theory, classical EM, and QED (the way you misunderstand it) all predict no anticorrelation, it must mean that:

- those experiments showing anticorrelation MUST be wrong (it just CAN'T be, right)
- people doing so are priests trying to mislead youngsters
- they have in fact no argument to stand on, except naively misused QED, which also predicts anticorrelation, but only in a naive approach; otherwise nothing distinguishes their glorified QED from non-linear CFT (which MUST be right :-)
- the "priesthood" keeps these naive calculations and their cheated experiments to keep their status.

However, the situation is different:
- classical EM and NL CFT predict no anticorrelation
- QED (when one understands the superposition principle of quantum theory) predicts anticorrelation
- experiments find anti-correlation.

That's slightly less motivating for NL CFT...

I can understand, from your point of view, why you cling to your view :-))

cheers,
Patrick.
 
  • #116
There is, however, not necessarily a 1-1 relation between the solutions of the classical non-linear field equations and the evolution equations in the quantum theory, even if one starts from the quantum state that corresponds to a classical state to which the classical theory can be applied.

Indeed, as an example: in the hydrogen atom, there is not necessarily an identity between the classically calculated Bohr orbits and the solutions to the quantum hydrogen atom.


It seems you're confusing "classical" with "particle" theories. The "classical" fields I was talking about already include the "first quantized" matter fields.
 
  • #117
nightlight said:
It seems you're confusing "classical" with "particle" theories. The "classical" fields I was talking about already include the "first quantized" matter fields.

Duh ! I was simply taking an example in NR quantum mechanics to illustrate that nonlinear dynamics in the classical (Newtonian) model doesn't mean non-linearity in the corresponding quantum theory. But of course I know that we're not talking about that particular model (a few particles in Newtonian mechanics). We're talking about a classical model consisting of fields in 3D, and its associated quantum theory (QFT).

cheers,
Patrick.
 
  • #118
... The quantum state that corresponds to E is |E>, and it is usually orthogonal to both |E1> and |E2>. It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields and the Hilbert space of 1-photon states; ...

Duh. That's the point of Barut's ansatz: to show exactly how the multiparticle QM configuration space is obtainable as a linearization approximation of the nonlinear fields of his/Schrodinger's approach.

... It is only in the particular case of 1-photon states that there is a mapping between the configuration space of E-fields and the Hilbert space of 1-photon states; this is the reason why we can associate to a beam splitter a superposition of 2 quantum states which corresponds to a superposition of 2 classical E-fields (although, I repeat, 1-photon states are NOT the quantum description of classical fields; there is only a bijective relationship between the two spaces).

It is that very confusion (between superposition of classical fields and superposition of quantum states) that makes you draw the ridiculous conclusion that quantum field theory is the piecewise-linearized version of nonlinear classical field theory. As I outlined before, the space in which quantum field theory acts is IMMENSELY MUCH BIGGER (Hilbert space) than the space in which the nonlinear classical field theory acts (the configuration space of classical fields). With each POINT in configuration space CORRESPONDS A WHOLE DIMENSION in Hilbert space. With each superposition in configuration space correspond ORTHOGONAL STATES in Hilbert space...


You have missed entirely the importance and implications of Barut's ansatz, as well as the points of the subsequent comments, and appear completely lost as to which 'fields' or which 'spaces' were being talked about at any given point. (You may need to separate some of your conceptual eigenspaces into a few subspaces with different eigenvalues.)

It is typical for differential-equation linearization procedures to introduce vast quantities of redundant functions which evolve linearly, in place of a single function which evolves non-linearly. For example, in Carleman linearization you take a nonlinear equation for a 'field' A(t) (it works the same way for regular nonlinear PDEs of very general type):

A' = F(A) ... (1)

where F is some nonlinear analytic function of the field A. You then define an infinite set of 'fields' Bn = A^n, and by differentiating the Bn and Taylor-expanding F(), you get an infinite set of first-order linear differential equations for the Bn, of the type:

Bn' = Sum{ k; Mnk Bk } ... (2)

where Mnk is a numeric matrix. The infinite set of fields {Bn} evolves linearly and approximates A, which evolves nonlinearly. The whole 'IMMENSELY MUCH BIGGER' set of fields {Bn} is in fact a mere approximation to the single field A. You would not attribute any new physics to the system described via this 'IMMENSELY MUCH BIGGER' set {Bn} and its formalism (2) beyond what was already given in the single field A and its formalism (1). Any 'new' effect in {Bn} that doesn't exist in A is an artifact of the approximation.
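As a toy numerical illustration of (1)-(2) (my own sketch, not from Barut or Kowalski): take F(A) = -A^2, for which Bn = A^n gives the exactly linear chain Bn' = -n*B(n+1); truncating at order N (dropping B(N+1)) gives a finite linear system whose first component reproduces the exact nonlinear solution A(t) = A0/(1 + A0*t) at small t, and degrades and then blows up once the truncation's range of validity is exceeded -- an 'effect' that is purely an artifact of the linearized approximation.

import numpy as np
from scipy.linalg import expm

N, A0 = 12, 0.8                           # truncation order and initial value (arbitrary choices)

M = np.zeros((N, N))
for n in range(1, N):                     # row n-1 encodes dB_n/dt = -n * B_{n+1}
    M[n - 1, n] = -n

B0 = A0 ** np.arange(1, N + 1)            # initial vector (A0, A0^2, ..., A0^N)

for t in (0.2, 0.5, 1.0, 2.0):
    carleman = (expm(M * t) @ B0)[0]      # B1(t), the linearized approximation to A(t)
    exact = A0 / (1 + A0 * t)             # exact solution of A' = -A^2
    print(t, round(carleman, 5), round(exact, 5))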

That is precisely the relation between the Maxwell-Dirac equations and the multiparticle QM formalism. The latter is a linearized approximation of the former. The entangled states are simply result of the coarse-graining of the nonlinear evolution, which introduces artificial indeterminism in the approximate linear evolution.

As cited before, Kowalski has shown how to convert this kind of linearization (1), (2) (for general nonlinear PDEs) to boson Fock space formalism. All the "IMMENSELY MUCH BIGGER" Fock space formalism is still just the rewritten {Bn} set, still an approximation to the single field A and its evolution (1).

Barut's ansatz does nearly the same for the regular coupled Maxwell-Dirac equations. The "IMMENSELY MUCH BIGGER" appearance is still an artifact of the approximation, the price paid for the mathematical convenience of linearity, but it brings in no new physical phenomenon that wasn't already in the original system. The apparent "immensity" of the multiparticle QM/QED parametrization of the system is the same kind of "immensity" that the infinite number of Taylor-expansion terms produces for the sin(x) function.

Any deviation between the two at finite order N, or due to interactions omitted for the sake of linearization, is just a side effect of the truncated or incomplete approximation. This was clearly exemplified by the radiative corrections, where the fix to Dirac's QED brought its predictions closer to those of the nonlinear Maxwell-Dirac field equations.
 
  • #119
I am afraid you've had a bit of an overload here, as manifested in the high volume and the pitch of the personal tone in your most recent long post. Frankly, that kind of exchange is a waste of everyone's time, to read or to respond to. So I'll leave you the last word.
 
  • #120
nightlight said:
Your basic problem is obvious here. The three setups are different. Which pieces of equipment you call the "apparatus" and which the "preparation" is a matter of your definitions.

Again an illustration of your misunderstanding of the basic postulates of QM: what I call the "apparatus" and what I call the "system under study" defines what goes into the Hilbert space and what goes into the Hermitian operator. Here, the system under study is the incoming EM field; the apparatus is the detector setup. I could have drawn the line somewhere else, that's true; it would have implied another separation between Hilbert space and Hermitian measurement operator. This is the famous Heisenberg cut, and we have a large liberty in placing it where we want; the predictions are the same (that's a fundamental result of decoherence theory).
But in this particular case the cut is placed in the most obvious place: the detecting system goes into the "measurement", and the EM quantum field (the system under study) goes into the "system Hilbert space".

You shouldn't take my remarks that you do not understand quantum theory as a personal insult: it is just an objective observation. If someone tells you that in standard natural number arithmetic, the operation + is not commutative, it is an obvious observation that that person doesn't understand natural number arithmetic. It is not an insult, and it can even be remedied.

They are also three incompatible setups. That the three setups produce different states in the T & R regions is true, too, but that is in addition to the other differences. You seem to assume that only the state changes between the three setups, when the state change is just one among all the other differences.

Only the incoming EM field state changes, yes. The detector system and its associated electronics didn't change, and you seemed not to have any difficulty accepting that the incoming state is something of the kind |R> + |T>.

You're making the same kind of assumption that von Neumann's faulty no-HV proof had used. When you have three incompatible setups, there are no grounds for assuming that your C, which predicts S1 and S2, ought to predict anything for the third, incompatible setup S3.

I never studied von Neumann's faulty proof, so I cannot comment on it. But this problem here is much, much simpler than EPR situations, where the MEASUREMENT SETUP changes (Alice decides which angle to measure, for instance). That complication is not present here.
You have a FIXED measurement setup, with different incoming states, according to different preparations. There's nothing more usual in quantum theory.

cheers,
Patrick.
 
  • #121
nightlight said:
It is typical for differential-equation linearization procedures to introduce vast quantities of redundant functions which evolve linearly, in place of a single function which evolves nonlinearly.

[...]

That is precisely the relation between the Maxwell-Dirac equations and the multiparticle QM formalism. The latter is a linearized approximation of the former. The entangled states are simply the result of the coarse-graining of the nonlinear evolution, which introduces an artificial indeterminism into the approximate linear evolution.

I know that you can solve a non-linear differential equation by going to a Hilbert space mechanism. However, what you have completely missed is that in the case of QFT, the Hilbert space is there BY POSTULATE.
Now, you (and Barut and others) can think that this is the same machinery at work, and that people are in fact, without knowing it, using this "linearised Hilbert space mechanism" to solve a non-linear differential equation. But that idea is fundamentally flawed for an obvious reason:
the postulates of quantum theory ASSOCIATE A DIFFERENT PHYSICAL STATE to each element of the Hilbert space. The non-linear PDE cannot do that. So it could be (it isn't, but the reasons are somewhat difficult to go into) that the linearised system ALSO ALLOWS SOLUTIONS TO THE PDE. But it contains immensely MORE solutions, and BY POSTULATE they are all true physical states, distinguishable from one another. This very fundamental postulate of quantum theory means that it doesn't even matter whether the Hilbert system is the result of a linearization of an NL PDE or not. We are now linked directly to the Hilbert space by postulate.

So in any case, the QFT contains many more physical situations than could ever be described by the non-linear PDE ; that's the FUNDAMENTAL CONTENT OF THE SUPERPOSITION PRINCIPLE I have been claiming you don't understand, and of which what you write above is again an illustration.
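(To make the dimension counting concrete, here is a purely illustrative toy sketch in Python, assuming a "field" discretized on 3 sites with values 0..2: every classical configuration becomes one basis vector, the classical sum of two field configurations is just another single basis vector, while the quantum superposition of their kets is a genuinely new vector orthogonal to it.)

[code]
import numpy as np
from itertools import product

# Toy "field" on 3 lattice sites, each holding a field value in {0, 1, 2}.
sites, vals = 3, range(3)
configs = list(product(vals, repeat=sites))     # classical configuration space: 27 points
index = {c: i for i, c in enumerate(configs)}
dim = len(configs)                              # one Hilbert-space dimension PER point
ket = lambda c: np.eye(dim)[index[c]]           # |f>: basis vector for configuration f

f1, f2 = (1, 0, 0), (0, 1, 0)
f_sum = tuple(x + y for x, y in zip(f1, f2))    # classical superposition: another single point
psi = (ket(f1) + ket(f2)) / np.sqrt(2)          # quantum superposition: a new state vector

print("configuration-space points:", dim)       # 27
print("Hilbert-space dimension   :", dim)       # 27 (a continuum of dimensions in the real case)
print("<f1+f2|f1>  =", ket(f_sum) @ ket(f1))    # 0: |f1+f2> is orthogonal to |f1>
print("<f1+f2|psi> =", round(float(ket(f_sum) @ psi), 3))  # 0: and to the quantum superposition
[/code]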

Now, that doesn't mean that QFT is the "correct" theory, and the NL PDE is the "wrong" theory or vice versa: only experiment can tell. But one thing is sure: the NL PDE doesn't describe the same physical theory as the QFT, which contains immensely more potential physical situations.
You can call them "spurious" but according to quantum theory, they are not. So that's a clear difference between both physical theories.

Now, you were making a claim about a prediction of QFT. If you do so, you should work with QFT, and not with the theory you think should replace it (the NL PDE). And the predictions of QFT are, for this setup, very clear: we have anti-correlation. This can be experimentally right, or it can be wrong. But one thing is sure: QFT predicts anti-correlation.
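(For concreteness, a minimal numerical sketch of that prediction in a truncated single-mode Fock basis, written in Python; for an ideal 50/50 beam splitter the normalized T-R coincidence rate equals the g2(0) of the incoming field, so a one-photon state gives g2 = 0, i.e. full anti-correlation, while a classical-like coherent state gives g2 = 1. This is only an illustrative toy, not the analysis of any of the cited papers.)

[code]
import numpy as np
from math import factorial

N = 12                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator: a|n> = sqrt(n)|n-1>
num = a.conj().T @ a                      # number operator

def g2(psi):
    """g2(0) = <a*a*aa> / <a*a>^2 for a pure single-mode state psi."""
    psi = psi / np.linalg.norm(psi)
    n_avg = (psi.conj() @ num @ psi).real
    pairs = (psi.conj() @ a.conj().T @ a.conj().T @ a @ a @ psi).real
    return pairs / n_avg**2

one_photon = np.eye(N)[1]                                  # Fock state |1>
alpha = 0.5                                                # weak, classical-like coherent state
coherent = np.array([alpha**k / np.sqrt(factorial(k)) for k in range(N)])

print("g2, one-photon state :", g2(one_photon))   # 0.0 -> perfect anti-correlation
print("g2, coherent state   :", g2(coherent))     # ~1  -> no anti-correlation
[/code]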
If you say that you work out a prediction of QFT, but:
- you do not accept the superposition principle
- you do not accept von Neumann's measurement theory
- you do not accept the usual links between systems and their mathematical representation in standard QFT
- you base yourself on another theory (NL PDE) of which you think erroneously that it is the superceding theory of QFT
...

well, then you're not working out a prediction of QFT :-)
If you claim that it does, it can only mean that you don't understand fundamental aspects of QFT, and those aspects are so fundamental, that it makes me conclude that you don't understand the basic postulates of quantum theory in its generality.

Otherwise you wouldn't claim that QFT makes these predictions: you would say that you have another theory, which contains the only "valid" predictions of QFT, and which theory does not predict anticorrelations. Even that would be wrong, but less so. The solutions of the NL PDE are not even in general the "converging solutions" of QFT. But it doesn't matter. The important point is that you recognize that what you are claiming is not a prediction of QFT.

That's all I'm saying.

cheers,
Patrick.
 
  • #122
For convenience, here is the list of references all in one place (some for later:).

References

1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
"Observing the quantum behavior of light in an undergraduate laboratory"
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
http://marcus.whitman.edu/~beckmk/QM/

2. J. F. Clauser
"Experimental distinction between the quantum and classical field-theoretic predictions for the photoelectric effect"
http://prola.aps.org/abstract/PRD/v9/i4/p853_1

3. P. Grangier, G. Roger, and A. Aspect
"Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences"
Europhys. Lett. 1, 173-179 (1986).

4. R. J. Glauber
"Optical coherence and photon statistics"
in Quantum Optics and Electronics, ed. C. DeWitt-Morette, A. Blandin, and C. Cohen-Tannoudji
(Gordon and Breach, New York, 1965), pp. 63-185.

5. Z.Y. Ou, L. Mandel
"Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment"
http://prola.aps.org/abstract/PRL/v61/i1/p50_1

6. P.L. Kelley and W.H. Kleiner
"Theory of electromagnetic field measurement and photoelectron counting"
Phys. Rev. 136, A316-A334 (1964).

7. L. Mandel, E.C.G. Sudarshan, E. Wolf
"Theory of Photoelectric Detection of Light Fluctuations"
Proc. Phys. Soc. 84, 435-444 (1964).

8. L. Mandel
"Configuration-Space Photon Number Operators in Quantum Optics"
Phys. Rev. 144, 1071-1077 (1966).

9. L. Mandel, E. Wolf
"Optical Coherence and Quantum Optics"
Cambridge Univ. Press, Cambridge (1995).

10. Edo Waks et al.
"High Efficiency Photon Number Detection for Quantum Information Processing"
quant-ph/0308054

11. M. C. de Oliveira, S. S. Mizrahi, V. V. Dodonov
"A consistent quantum model for continuous photodetection processes"
quant-ph/0307089

12. S.S. Mizrahi, V.V. Dodonov
"Creating quanta with 'annihilation' operator"
quant-ph/0207035

13. F. X. Kärtner and H. A. Haus
"Quantum-nondemolition measurements and the 'collapse of the wave function'"
http://prola.aps.org/abstract/PRA/v47/i6/p4585_1

14. R.Y. Chiao, P.G. Kwiat
"Heisenberg's Introduction of the 'Collapse of the Wavepacket' into Quantum Mechanics"
quant-ph/0201036

15. V. Bykov
"Photons, photocounts and laser detection of weak optical signals"
http://www.ensmp.fr/aflb/AFLB-26j/aflb26jp115.htm

16. T. S. Larchuk, M. C. Teich, and B. E. A. Saleh
"Statistics of Entangled-Photon Coincidences in Parametric Downconversion"
Ann. N. Y. Acad. Sci. 755, 680-686 (1995).

17. A. Joobeur, B. E. A. Saleh, T. S. Larchuk, and M. C. Teich
"Coherence Properties of Entangled Light Beams Generated by Parametric Down-Conversion: Theory and Experiment"
Phys. Rev. A 53, 4360-4371 (1996). Other M.C. Teich papers of interest.

18. P.N. Kaloyerou
"The GRA Beam-Splitter Experiment and Wave-Particle Duality of Light"
quant-ph/0503201
 
  • #123
nightlight said:
For convenience, here is the list of references all in one place (some for later:).

Thanks ! That will be useful :-)

cheers,
patrick.
 
  • #124
vanesch said:
Thanks ! That will be useful :-)
cheers,
patrick.
In the continuation of this discussion in a couple of ongoing threads on sci.physics.research, arguing against some Quantum Optician, a few days ago I criticized Roy Glauber, the founder of modern Quantum Optics. Well, today he was awarded the Nobel Prize in Physics: http://nobelprize.org/physics/laureates/2005/index.html. Interesting timing.

The two sci.physics.research threads (where I also post as 'nightlight') are:

1. photoelectric effect : hypothetical experiment (the same kind of experiment discussed here).
2. The time it takes to emit one photon
 
  • #125
nightlight said:
In the continuation of this discussion in a couple of ongoing threads on sci.physics.research, arguing against some Quantum Optician, a few days ago I criticized Roy Glauber, the founder of modern Quantum Optics. Well, today he was awarded the Nobel Prize in Physics: http://nobelprize.org/physics/laureates/2005/index.html. Interesting timing. ...

I bet they decided to award him the Nobel just to bug you. :smile: Seriously, don't you ever get tired of beating yourself over the head?

The very purpose of the experiment you criticized when you started this thread was one intended for the undergraduate lab. That means it is mainstream stuff. I am sure that within a short period of time these setups will begin proliferating. If you can see the "flaw" in the theory, don't you think someone else will too? Or maybe you could "enlighten" them. Or better, get your own setup (the price is dropping fast) and prove everyone else wrong, even Glauber.

This is a recent quote from nightlight:

"Thanks for offering one more illustration of a typical 'QO sleight of
hand' -- pretend that the fundamental QO subtractions (which are built
into the very definition of Glauber's filtered correlation functions
Gn()) are due to some kind of minor and temporary technological
imperfection, to be overcome soon."


In all of the tests: once the proper people (i.e. "real" scientists and not some dumb Nobel dudes) do the proper experiment with properly calibrated, high-resolution equipment, YOU WILL BE VINDICATED. Please ignore the fact that every photon test is currently headed AWAY from your assertions and toward the QO-predicted values. After all: that just PROVES there is a conspiracy, n'est-ce pas?

You are not searching for a common truth; you are looking for a way to shield your archaic views from the advances of science. Your criticism disguises this simple fact, which is fairly evident to others. The true reason your selected criticisms don't really work is that you provide no alternative theoretical framework to explain the actual results. That would be necessary to get anyone to take you seriously. But that is impossible because of this little thing called Bell's Theorem. C'est la vie!
 
  • #126
The very purpose of the experiment you criticized when you started this thread was one intended for the undergraduate lab. That means it is mainstream stuff.

So was geocentric astronomy, and lots of other nonsense we laugh at today. The result claimed by the AJP/2004 authors [1] is, in addition to being misleading to physicists outside of Quantum Optics in the usual "QO sleight of hand" way (as are Clauser [2], Grangier et al. [3] and other such QO experiments and claims), an outright falsity: no such effect is predicted by QED/QO, and other experimenters, such as Clauser [2], Grangier et al. [3], Chiao & Kwiat [14] ... never claim violations on the actual counts but only on Glauber-type "counts", i.e. only for the subsample of data which has the unpaired singles and accidental coincidences filtered out -- a procedure which, as they recognize [14], makes their results entirely explicable by straightforward classical models of the setup. It is also a blatant experimental fraud perpetrated in support of their false notions of what was supposed to happen. You are welcome to address the technical substance of my debunking of their experiment (which started this thread). Or go ask Prof. Beck if you can't do it on your own.

I am sure that within a short period of time these setups will begin proliferating. If you can see the "flaw" in the theory, don't you think someone else will too? ... This is a recent quote from nightlight:...

Again, you are welcome to address the technical substance of the theoretical "flaw" (it is merely a 'sleight of hand', which 'only' misleads physicists as to what the experimental facts are, and not an outright lie or a formal flaw) being discussed, especially as explained in the critique of the 1988 Ou & Mandel paper [5], in particular in the sci.physics.research post #1 and post #2.

For that (or to at least begin discussing the same subject, for a change), you do need to read and understand the actual references, in particular Glauber [4] and Ou & Mandel [5], which are being talked about (if you want to refute me there, note that sci.physics.research is a moderated newsgroup, so you would need to know a bit about what you're posting; even physicists get their posts rejected there).

The true reason your selected criticisms don't really work is that you provide no alternative theoretical framework to explain the actual results.

Of course, I do. I merely don't provide any "alternative" theory of my own here; I don't need to, since alternative theories already exist. For example, Barut's Self-field ED explains all QED phenomena (to at least order alpha^5, i.e. as far as the high-precision QED tests go). The Quantum Optics phenomena discussed here, which don't involve any QED radiative corrections (the QED of Quantum Optics is just the old QED of Dirac, Heisenberg and Jordan from the 1920s, with Einstein's lightquantum imagery and heuristics of the early 1900s used in pedagogical and popular expositions), are already completely and quantitatively accounted for by the Marshall & Santos SED/SO (which is an approximation to Barut's SFED). Both were discussed and well referenced earlier in this thread, so I won't follow you back to square one on that. (As always, you are welcome to address the technical substance of anything I said earlier.)

As to the rest of the thoughts offered in your "reply", I am not interested in spending any time at all on your "psychoanalysis" of me (you should take those valuable thoughts to some psychiatry or Freud forum where they can be truly appreciated; it is just physics being discussed here).


References

1. J.J. Thorn, M.S. Neel, V.W. Donato, G.S. Bergreen, R.E. Davies, M. Beck
"Observing the quantum behavior of light in an undergraduate laboratory"
http://marcus.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf
http://marcus.whitman.edu/~beckmk/QM/

2. J. F. Clauser
"Experimental distinction between the quantum and classical field-theoretic predictions for the photoelectric effect"
http://prola.aps.org/abstract/PRD/v9/i4/p853_1

3. P. Grangier, G. Roger, and A. Aspect
"Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences"
Europhys. Lett. 1, 173-179 (1986). http://kh.bu.edu/qcl/pdf/grangiep19867a0e0f09.pdf

4. R. J. Glauber
"Optical coherence and photon statistics"
in Quantum Optics and Electronics (1964 Les Houches Lectures),
ed. C. DeWitt-Morette, A. Blandin, and C. Cohen-Tannoudji
(Gordon and Breach, New York, 1965), pp. 63-185.

5. Z.Y. Ou, L. Mandel
"Violation of Bell's Inequality and Classical Probability in a Two-Photon Correlation Experiment"
http://prola.aps.org/abstract/PRL/v61/i1/p50_1
http://puhep1.princeton.edu/~mcdonald/examples/QM/ou_prl_61_50_88.pdf

14. R.Y. Chiao, P.G. Kwiat
"Heisenberg's Introduction of the 'Collapse of the Wavepacket' into Quantum Mechanics"
quant-ph/0201036
 
  • #127
nightlight said:
Or go ask Prof. Beck if you can't do it on your own.

I have spoken to Beck on previous occasions, and I don't think he would be likely to spend a lot of time debating you.

And since you mentioned the subtraction of accidentals issue... let me quote from a nearly identical experiment to the Thorn/Beck et al experiment, that of http://users.icfo.es/Morgan.Mitchell/QOQI2005/DehlingerMitchellAJP2002EntangledPhotonsNonlocalityAndBellInequalitiesInTheUndergraduateLaboratory.pdf :

S=2.307 +/- 0.035

a violation of the Bell inequality by more than eight standard deviations. This result conclusively eliminates the HVTs, and is consistent with quantum mechanics. Also shown is the computed number of accidental coincidences, the average number of times that photons from two different downconversion events will arrive, purely by happenstance, within the coincidence interval t of each other. This background is small, nearly constant, and acts to decrease |S|. A finding of |S|>2 thus cannot be an artifact of the accidental background.
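(A quick arithmetic check of the quoted figure -- just (S - 2)/sigma, my own one-liner and not from the paper itself:)

[code]
S, sigma = 2.307, 0.035
print("standard deviations above the classical bound S = 2:", (S - 2) / sigma)   # ~8.8
[/code]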


In other words, even in an undergraduate lab they are well aware of the critique of Bell tests as to the "accidentals" issue, and so they addressed it head on. I don't know why you local realists have so much trouble accepting something so simple. Accept the experimental evidence for what it is: conclusive by mainstream standards. When you have some mainstream evidence for your position, then we will be here to listen to it.

P.S. I think it is embarrassing that you would support your position with the pitifully flimsy argument that "scientists have been wrong in the past." You can do better than that.
 
  • #128
nightlight,

The problem we are increasingly having is that your views are not mainstream views. This doesn't necessarily mean that they have no value, but the PF guidelines want discussions to be limited to mainstream issues, and referring to the entire quantum optics community as cheaters, in the very year when one of its founding fathers got the Nobel prize in physics, illustrates the difficulty.
I enjoyed discussing with you in the past, but I wasn't a mentor here back then, so now I'm supposed to watch for respect of the PF guidelines. If the mainstream physics of today will be laughed at tomorrow, well, that simply means that we can only discuss laughable matter on PF :smile:; such are the rules here.

Your discussions seem to be welcome on sci.physics.research, which has a slightly less restrictive moderation policy. However, in your posts there, you often refer to your PF threads, which could become annoying for PF in the long run. PF is not going to become the website hosting all your arguments against quantum physics :bugeye: .

It would probably be wise to limit your more virulent exchanges to s.p.r.
The quantum physics section of PF is meant to be about discussions of standard quantum theory, including QFT and quantum optics. A bit of informed speculation around open questions can be tolerated, but not arguments about why it is all just a pile of misleading rubbish.
 
  • #129
DrChinese said:
And since you mentioned the subtraction of accidentals issue... let me quote from a nearly identical experiment to the Thorn/Beck et al experiment, that of http://users.icfo.es/Morgan.Mitchell/QOQI2005/DehlingerMitchellAJP2002EntangledPhotonsNonlocalityAndBellInequalitiesInTheUndergraduateLaboratory.pdf :

S=2.307 +/- 0.035

a violation of the Bell inequality by more than eight standard deviations. This result conclusively eliminates the HVTs, and is consistent with quantum mechanics. Also shown is the computed number of accidental coincidences, the average number of times that photons from two different downconversion events will arrive, purely by happenstance, within the coincidence interval t of each other. This background is small, nearly constant, and acts to decrease |S|. A finding of |S|>2 thus cannot be an artifact of the accidental background.


In other words, even in an undergraduate lab they are well aware of the critique of Bell tests as to the "accidentals" issue, and so they addressed it head on. I don't know why you local realists have so much trouble accepting something so simple. Accept the experimental evidence for what it is: conclusive by mainstream standards. When you have some mainstream evidence for your position, then we will be here to listen to it.

The accidentals are just one type of subtraction prescribed by Glauber's counting (to extract the Gn()'s from the data). They can be traded off for other types of subtractions, to the point of being nearly completely absent.

These other types of events, which must be excluded (via non-local post-filtering of the obtained data) to match Glauber's "filtered correlation" Gn(), are all cases of m triggers where m differs from n, as well as any cases of n triggers where the triggers do not fall on n distinct detectors. Generally, out of the n^n terms describing perturbatively the full evolution of the system of n detectors in the EM field, Gn() is a hand-picked selection of a particular set of n! terms, which is approximately an e^(-n) fraction of the full n-detector dynamics (the terms that say what is happening on the n detectors).
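(A quick numerical check of that fraction, for illustration only -- Stirling gives n!/n^n ≈ sqrt(2*pi*n)*e^(-n), so the e^(-n) figure is the right order of magnitude:)

[code]
from math import factorial, exp, sqrt, pi

# Fraction of the n^n perturbative terms retained by the n!-term selection,
# next to the e^(-n) estimate and the full Stirling form sqrt(2*pi*n)*e^(-n).
for n in (2, 4, 8, 16):
    kept = factorial(n) / n**n
    print(f"n={n:2d}  n!/n^n={kept:.3e}  e^-n={exp(-n):.3e}  "
          f"sqrt(2*pi*n)*e^-n={sqrt(2*pi*n)*exp(-n):.3e}")
[/code]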

In other words, the Gn()'s don't describe what is happening with the n detectors but merely what can be subsampled from among all the events using Glauber's non-local filtering procedure (which, of course, invalidates it for any comparison with, or use in, Bell inequalities, which are purely set-theoretical/enumerative constraints, a la the pigeonhole principle). The theory (QED/QO) thus doesn't predict any "ideal" case where these subtractions could be made negligible. The rest of the events (necessarily discarded from the set of all samples) are always substantially more numerous than the filtered fraction of samples kept.

All these subtractions have nothing to do with the technological limitations of detectors -- they are fundamental to Glauber's definition of Gn() (which is the QED/QO technique for deriving predictions for these coincidences). And that is just the first of the two key problems (discussed in the posts cited earlier) with the 'QO sleight of hand' in presenting their results.

I gave you links to a more detailed description and discussion, with references to the relevant papers (such as Glauber's and Ou & Mandel's), and I don't intend to dance in a circle with you around square one on every post. You don't show even the slightest indication of any familiarity with the subject discussed or the references cited.

I am sure, though, that there is a place somewhere on the internet where the psychological, psychiatric and social aspects of physics, which seem to be the only kind of topic you wish to talk about, are being discussed and where your wisdom would be appreciated. Unfortunately, as hinted before, I have a bit of a wooden ear for such topics, so you may be throwing the proverbial 'margaritas ante porcos'. It would be a terrible tragedy to waste on me here all those pearls of wisdom which must, I am sure, be hiding somewhere in your writing.
 
