What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

  • Thread starter: Cruithne
  • Tags: Experiment
  • #51
I’ve read this interesting discussion and I’d like to add the following comments.

Bell's inequalities, and more precisely EPR-like states, help in understanding how quantum states behave. There are many papers on arXiv about these inequalities. Many of them show how classical statistics can locally break these inequalities, even without the need to introduce local (statistical) errors in the experiment.

Here are 2 examples taken from arXiv (far from exhaustive).

Example 1: quant-ph/0209123, Laloë 2002. An extensive paper on QM interpretation questions (in my opinion against local hidden variable theories, but open-minded => lots of pointers and examples).
Example 2: quant-ph/0007005, Accardi 2000 (and later). An example of how a classical probability space can break the Bell inequalities (contextual).

The approach of Nightlight, if I have understood correctly, is another way (I had missed it: thanks a lot for this new possibility): instead of breaking the inequalities, the "statistical errors" (some events not counted by the experiment, or the way the experimental data are processed), if included in the final result, force the experiment to obey the Bell inequalities. This is another point of view on what is "really" going on in the experiment.

All of these alternative examples use a classical probability space, i.e. the Kolmogorov axiomatization, where one picks adequate variables such that they can violate the Bell inequalities (and now, a way to enforce them).

Now, if the question is whether the Bell-inequality experiments are relevant or not, one conservative approach is to try to know (at least feel, and at best demonstrate) whether, in general, "sensible" experiments (quantum or classical or whatever we want) are likely to break the Bell inequalities or not. If the answer is no, then we must admit that Aspect-type experiments have detected a rare event and that the remaining "statistical errors" do not seem to help (in breaking the inequalities). If the answer is yes, well, we can say whatever we want :).

The papers against Bell-inequality experiments, in my modest opinion, argue that a sensible experiment is rather likely to detect a breaking of the inequalities, so that we can say whatever we want! That is a little disappointing, because in this case we still do not know whether any quantum state may be described by some "local" classical probability space or not. I would really prefer a good, solid explanation.

To end, I did not know Sica's papers before. But I would like to understand the mechanism he (and Nightlight) uses in order to force the Bell inequality to be satisfied. I follow Vanesch's reasoning without problem, but Nightlight's is a little more difficult to understand: where is the additional freedom used to enforce the inequality?

So, let's try to understand this problem in the special case of the well-known Aspect et al. experiment (1982, Phys. Rev. Lett.), where only very simple mathematics is used. I like to use a particular case before making a generalisation; it is easier to see where the problem is.

First let's take 4 ideal discrete measurement runs (4 sets of data) of an Aspect-type experiment with no samples lost during the measurement process.

If we take the classical expectation formulas we have:

S+= E(AB)+E(AB’)=1/N sum_i1 [A(i1)B(i1)]+ 1/N sum_i2 [A(i2)B’(i2)]
= 1/N sum_i1_i2[A(i1)B(i1)+ A(i2)B’(i2)] (1)

where A(i1), B(i1) are the data collected by the first run and A(i2), B'(i2) the data collected by the second run, with N --> ∞ (we also take the same number of samples for each run).

In our particular case A(i1) is the result of the spin measurement of photon 1 on the A (same name as the observable) axis (+1 if spin |+>, -1 if spin |->) while B(i1) is the result of the spin measurement of photon 2 on the B axis (+1 if spin |+>, -1 if spin |->).
Each ideal measurement (given by label i1 or i2) thus gives two spin results (the two photons must be detected).
Etc. for the other measurement settings.

We thus have the second equation:

S- = E(A'B) - E(A'B') = 1/N sum_i3 [A'(i3)B(i3)] - 1/N sum_i4 [A'(i4)B'(i4)]
= 1/N sum_i3_i4 [A'(i3)B(i3) - A'(i4)B'(i4)] (2)

Relabelling equation (1) or (2), i.e. changing the ordering of the labels i1, i2, i3, i4, does not change the result (the sum is commutative).

Now, if we want to get the inequality |S+| = |E(AB)+E(AB')| ≤ 1 + E(BB'), we first need to apply a filter to the rhs of equation (1), otherwise A cannot be factorized: we must select a subset of experimental samples with A(i1) = A(i2).

If we take a large sample number N, equation (1) is not changed by this filtering and we get:

|S+| = |E(AB)+E(AB')| = 1/N |sum_i1_i2 [A(i1)B(i1) + A(i2)B'(i2)]|
= 1/N |sum_i1 [A(i1)B(i1) + A(i1)B'(i1)]|
≤ 1/N sum_i1 |A(i1)B(i1) + A(i1)B'(i1)|

We then use, for each label i1, the simple inequality |a.b + a.c| ≤ 1 + b.c (valid for |a|,|b|,|c| ≤ 1; for b, c = ±1 it follows from |b + c| = 1 + b.c):

|S+|= |E(AB)+E(AB’)| ≤1+1/N sum_i1[B(i1)B’(i1)] (3)

Recall that B'(i1) is the data of the second run relabelled onto a subset of the labels i1. Now this relabelling has some freedom, because we may have many experimental results (about 50%) where A(i1) = A(i2).

So in equation (3), sum_i1 [B(i1)B'(i1)] depends on the artificial label order.

We also have almost the same inequality for equation (2)

|S-| = |E(A'B)-E(A'B')| = 1/N |sum_i3_i4 [A'(i3)B(i3) - A'(i4)B'(i4)]|
= 1/N |sum_i3 [A'(i3)B(i3) - A'(i3)B'(i3)]|
≤ 1/N sum_i3 |A'(i3)B(i3) - A'(i3)B'(i3)|

We then use the simple inequality |a.b - a.c| ≤ 1 - b.c (valid for |a|,|b|,|c| ≤ 1; for b, c = ±1 it follows from |b - c| = 1 - b.c):

|S-| = |E(A'B)-E(A'B')| ≤ 1 - 1/N sum_i3 [B(i3)B'(i3)] (4)

So in equation (4), sum_i3 [B(i3)B'(i3)] depends on the artificial label ordering i3.

We thus have the Bell inequality:

|S| = |S+ + S-| ≤ |S+| + |S-| ≤ 2 + 1/N sum_i1_i3 [B(i1)B'(i1) - B(i3)B'(i3)] (5)

where sum_i1_i3[B(i1)B’(i1)-B(i3)B’(i3)] depends on the labelling order we have used to filter and get this result.

I think that (3) and (4) may be the labelling-order problem noted by Nightlight, in this special case.

Up to now, we have only spoken of collections of measurement results with values +1/-1.

Now, if B is a random variable that depends only on the local experimental apparatus (the photon polarizer), i.e. B = B(apparatus_B, hv) where hv is the local hidden variable, we should have:

1/N sum_i1 [B(i1)B'(i1)] = 1/N sum_i3 [B(i3)B'(i3)] = <BB'> when N --> ∞
(so we have the Bell inequality |S| ≤ 2).

So now I can use the Nightlight argument: the ordering of B'(i1) and B'(i3) is totally artificial, so the question is: should I get 1/N sum_i1 [B(i1)B'(i1)] ≠ 1/N sum_i3 [B(i3)B'(i3)], or the equality?

Moreover, equation (5) seems to show that this kind of experiment is rather likely to see a violation of a Bell inequality, since B(i1), B'(i2), B(i3), B'(i4) come from 4 different runs.
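As a quick illustration of the filtering step (a toy sketch with randomly generated ±1 data, not the actual Aspect records): S computed from (1)-(2) is fixed by the recorded data, but the matched term 1/N sum_i1 B(i1)B'(i1) appearing in (3) depends on which run-2 trial is paired with which run-1 trial.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000

# Four independent ideal runs; each trial yields a pair of +/-1 outcomes.
A1,  B1  = rng.choice([-1, 1], N), rng.choice([-1, 1], N)   # settings (A,  B)
A2,  Bp2 = rng.choice([-1, 1], N), rng.choice([-1, 1], N)   # settings (A,  B')
Ap3, B3  = rng.choice([-1, 1], N), rng.choice([-1, 1], N)   # settings (A', B)
Ap4, Bp4 = rng.choice([-1, 1], N), rng.choice([-1, 1], N)   # settings (A', B')

# S from equations (1) and (2): completely fixed by the recorded data.
S = (A1*B1).mean() + (A2*Bp2).mean() + (Ap3*B3).mean() - (Ap4*Bp4).mean()
print("S =", round(S, 3))

def matched_term(shuffle):
    """Mean of B(i1)*B'(i1) after pairing each run-2 trial with a run-1 trial
    having the same A outcome (the filter used to derive equation (3))."""
    parts = []
    for a in (-1, 1):
        i1 = np.flatnonzero(A1 == a)
        i2 = np.flatnonzero(A2 == a)
        if shuffle:                      # a different, equally valid pairing
            i2 = rng.permutation(i2)
        n = min(len(i1), len(i2))
        parts.append(B1[i1[:n]] * Bp2[i2[:n]])
    return float(np.concatenate(parts).mean())

print("matched term, pairing 1 =", round(matched_term(False), 3))
print("matched term, pairing 2 =", round(matched_term(True), 3))
```

With independent data both pairings give small values, but different ones: the matched term is a bookkeeping quantity set by the pairing, not something the two runs determine by themselves.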


Seratend

P.S. Sorry if some minor mistakes are left.
 
  • #52
Sorry, my post was supposed to follow this one:


nightlight said:
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> = <a.b> is conserved, but you've completely changed <b.b'>, because the b hasn't permuted, and the b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters in the Bell inequality, and must indeed be satisfied) has anything to do with the completely different prediction of <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper on his "data matching" procedure, where he brings out that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], thus allowing [(N/2)!]^2 ways to match N/2 of +1's and N/2 of -1's between the two arrays. The constraint from <b1.b2> requires only that the sum is preserved in the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow by blow algorithm for adjusting b2[], there is enough description in the two papers to work out a simple logistics for the swapping moves between the elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until reaching it within the max error of 1/N.

Let me know if you have any problem replicating the proof, then I'll take the time to type it in (it can be seen from a picture of the three arrays almost at a glance, although typing it all in would be a bit of a tedium).

Seratend,
It takes time to answer :)
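As a side note on the "data matching" description quoted above, here is one possible greedy reading of the swapping moves, written as a sketch (my own reconstruction under stated assumptions - equal numbers of +1 and -1 outcomes in a1 and a2, and a reachable target - not necessarily Sica's or nightlight's exact procedure):

```python
import numpy as np

def rematch_b2(a1, b1, a2, b2, bb_target):
    """Reorder the run-2 data so that a2 matches a1 elementwise, then swap
    b2 entries between positions carrying the same a value until <b1.b2>
    is as close as possible to bb_target.  Neither step changes <a.b2>."""
    N = len(a1)
    assert np.sum(a1 == 1) == np.sum(a2 == 1), "assumes equal numbers of +1's"

    # Step 1: pair equal a-outcomes (any of the [(N/2)!]^2 pairings is allowed).
    order = np.empty(N, dtype=int)
    for v in (-1, 1):
        order[a1 == v] = np.flatnonzero(a2 == v)
    b2 = b2[order].copy()                  # now the reordered a2 equals a1

    # Step 2: greedy swaps toward the requested correlation <b1.b2>.
    while True:
        diff = bb_target - np.dot(b1, b2) / N
        if abs(diff) < 4.0 / N:            # within one move of the target
            return b2
        s = 1 if diff > 0 else -1          # direction of the needed change
        moved = False
        for v in (-1, 1):
            # Swapping b2[i], b2[j] with a1[i] == a1[j] leaves sum(a*b2) intact
            # and changes sum(b1*b2) by (b1[i]-b1[j])*(b2[j]-b2[i]) = 4*s here.
            cand_i = np.flatnonzero((a1 == v) & (b1 == 1) & (b2 == -s))
            cand_j = np.flatnonzero((a1 == v) & (b1 == -1) & (b2 == s))
            if len(cand_i) and len(cand_j):
                i, j = cand_i[0], cand_j[0]
                b2[i], b2[j] = b2[j], b2[i]
                moved = True
                break
        if not moved:                      # target unreachable with these moves
            return b2
```

Each swap exchanges two b2 entries sitting at positions with the same a value, so <a.b2> is untouched, while sum(b1*b2) moves in steps toward the requested value.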
 
  • #53
seratend said:
Example 2: quant-ph/0007005 Accardi 2000 (and later). An Example of how a classical probability space can break bell inequalities (contextual).

I only skimmed quickly through this paper, but something struck me: he shows that Bell's inequality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the inequality, you need a non-local model.

A => B does not imply B => A.

The "reduction" of the "vital assumption" that there is one and only one underlying probability space is in my opinion EXACTLY what is stated by local realistic models: indeed, at the creation of the singlet state with two particles, both particles carry with them the "drawing of the point in the underlying probability universe", from which all potential measurements are fixed, once and for all.

So I don't really see the point of the paper! But ok, I should probably read it more carefully.

cheers,
Patrick
 
  • #54
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

Projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when it stops functioning (macroscopic device? consciousness? a friend of that consciousness? decoherence? consistent histories? gravity? universe branching? jhazdsfuty?) and how it resumes. That is a tacit recognition that linear evolution, such as the linear Schrodinger or Dirac equations, doesn't work correctly throughout. So when the limitation of the linear approximation reaches a critical point, the probability mantras get chanted, the linear evolution is stopped in a non-dynamic, abrupt way, and temporarily substituted with a step-like, lossy evolution (the projector) to a state in which it ought to be; then, when in safe waters again, the probability chant stops, and the regular linear PDE resumes. The overall effect is at best analogous to a piecewise linear approximation of a curve which, all agree, cannot be a line.

So this is not a matter of who is for and who is against linearity -- since no one is "for", the only difference being that some know it. The rest believe they are in the "for" group and they despise the few who don't believe so. If you believe in the projection postulate, you believe in a temporary suspension of the linear evolution equations, however ill-defined it may be.

Now that we've agreed we're all against linearity, what I am saying is that this "solution," the collapse, is an approximate stop-gap measure, due to the intractability of already known non-linear dynamics which in principle can produce collapse-like effects when they're called for, except in a lawful and clean way. The linearity would hold approximately as it does now, and no less than it does now, i.e. it is analogous to smoothing the sharp corners of the piecewise linear approximation with a mathematically nicer and more accurate approximation.

While you may imagine that non-linearity is a conjecture, it is the absolute linearity that is a conjecture, since non-linearity is a more general scheme. Check von Neumann's and Wigner's writings on the measurement problem to see the relation between absolute linearity and the need for the collapse.

A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamic equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and Planck's first theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.
 
  • #55
You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

That is the problem I am talking about. We ought not to have dynamical evolution interrupted and suspended by "measurement" which turns on the Born rule, to figure out what it really wants to do next, then somehow the dynamics is allowed to run again.


Well, the decoherence program has something to say about this. I don't know if you are aware of this.

It's a bit decoherent for my taste.
 
  • #56
nightlight said:
Projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, in a kind of slippery way, without explaining how, why and when it stops functioning (macroscopic device? consciousness?

The relative-state (or many-worlds) proponents do away with it and apply strict linearity. I have to say that I think myself that there is something missing in that picture. But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff. I'd be surprised if such a model could predict the same things as QFT and most of quantum theory. In that case it would be rather stupid, no? I somehow have the feeling - it is only that, of course - that this semiclassical approach would be the evident thing to try before jumping on the bandwagon of full QM, and that people have turned that question in all possible directions, so that the possibilities have been exhausted there. Of course, one cannot study all the "wrong" paths of the past, and one has to assume that this has been looked at somehow, otherwise nobody gets anywhere if all the wrong paths of the past are re- and re- and re-examined. So I didn't look into all that stuff, accepting that it cannot be done.

cheers,
Patrick.
 
  • #57
But I think that quantum theory is just a bit too subtle to replace it with semiclassical stuff.

I didn't have in mind the semiclassical models. The semiclassical scheme merely doesn't quantize the EM field, but it still uses the external-field approximation; thus, although practical, it is limited and less accurate than QED (when you think of the difference in heavy gear involved, it's amazing it works at all). Similar problems plague Stochastic Electrodynamics (and its branch, Stochastic Optics). While they can model many of the so-called non-classical effects touted in Quantum Optics (including the Bell inequality experiments), they are also an external-field approximation scheme, just using the ZPF distribution as the boundary/initial conditions for the classical EM field.

To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same one from the Dirac equation above it), with the right-hand side using the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external-field or external-current approximation. See how far you get with that kind of system.
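For concreteness, the coupled system described above can be written schematically as follows (a transcription added for readability: Lorenz gauge, natural units, and the sign conventions are assumptions that vary between references):

```latex
% Coupled Dirac--Maxwell ("self-field") system: the same A_mu enters the Dirac
% equation, and the Dirac 4-current sources the wave equation for A_mu.
\begin{align}
  \left[\, i\gamma^{\mu}\left(\partial_{\mu} + i e A_{\mu}\right) - m \,\right]\psi &= 0
    && \text{(Dirac equation, minimal coupling)} \\
  \Box A^{\mu} &= e\, \bar{\psi}\gamma^{\mu}\psi
    && \text{(inhomogeneous wave equation, Lorenz gauge)}
\end{align}
```

Substituting the second equation's A_mu back into the first makes the coupling cubic in psi, which is where the non-linearity comes from.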

That's a variation of what Barut started with (also with Schroedinger-Pauli instead of Dirac), and he then managed to reproduce the results of the leading orders of the QED expansion (http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR= has 55 of his papers and preprints scanned; those from the mid-1980s on are mostly on his self-field). While this scheme alone obviously cannot be the full theory, it may be at least a knock on the right door.
 
  • #58
nightlight said:
To see the difference from the above approaches, write down the Dirac equation with minimal EM coupling, then add below it the inhomogeneous wave equation for the 4-potential A_mu (the same one from the Dirac equation above it), with the right-hand side using the Dirac 4-current. You have a set of coupled nonlinear PDEs without an external-field or external-current approximation. See how far you get with that kind of system.

This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard). He was even working on a "many-particle Dirac equation". And indeed, this seems to be a technique that incorporates some relativistic corrections for heavy atoms (however, there the problem is that there are too many electrons and the problem becomes intractable, so it would be more something to handle an ion like U91+ or so).

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields just as well as the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.
Probably this work can be interesting. But you should agree that there is still a long way to go to have a working theory, so you shouldn't sneer at us lowly mortals who, for the moment, take quantum theory in the standard way, no?
My feeling is that it is simply too cheap, honestly.

cheers,
Patrick.
 
  • #59
This is exactly what my old professor was doing (and in doing so, he neglected to teach us QFT, the bastard).

What's his name? (Dirac was playing in later years with that stuff, too, so it can't be that silly.)

He was even working on a "many-particle Dirac equation".

What Barut found (although only for the non-relativistic case) was that for N-particle QM he could obtain results equivalent to conventional N-particle QM, in a form superficially resembling the Hartree-Fock self-consistent field, using an electron field Psi_e and a nucleon field Psi_n (each normalized to the correct number of particles instead of to 1) as nonlinearly coupled classical matter fields in 3-D instead of the usual 3N-dimensional configuration space; and unlike Hartree-Fock, it was not an approximation.

Nevertheless, I'd still classify this approach as fully classical, because there is no "quantization" at all, and the matter fields are considered as classical fields just as well as the EM field. In the language of path integrals, you wouldn't take into account anything besides the classical solution.

Indeed, that model alone doesn't appear to be the key by itself. For example, charge quantization doesn't come out of it and must be put in by hand, although no one has really solved anything without substantial approximations, so no one knows what these equations are really capable of producing (charge quantization seems very unlikely, though, without additional fields or some other missing ingredient). But many have gotten quite a bit of mileage out of much simpler non-linear toy models, at least in the form of insights about the spectrum of phenomena one might find in such systems.
 
  • #60
Seratend reply:
=======================================================
Before that, here is the point in the thread at which I started to reply:


nightlight said:
vanesch you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target ; nevertheless, several times it occurred to me that you were actually defying the superposition principle, which is at the heart of QM. Now you confessed :-p :-p

Projection postulate itself suspends the linear dynamical evolution. It just does it on the cheap, (...)

(...) A theory cannot be logically coherent if it has an ill-defined switch between two incompatible modes of operation, the dynamic equations and the collapse (which grew out of similarly incoherent seeds, Bohr's atom model and Planck's first theory). The whole theory in this phase is like a hugely magnified version of the dichotomies of the originating embryo. That's why there is so much philosophizing and nonsense on the subject.

first,


vanesch said:
(...) I only skimmed quickly through this paper, but something struck me: he shows that Bell's inequality can also be satisfied with a non-local model. Bell's claim is the opposite: that in order NOT to satisfy the inequality, you need a non-local model.

(...) So I don't really see the point of the paper! But ok, I should probably read it more carefully.
cheers,
Patrick


Vanesch, do not lose your time with the Accardi paper. It is only an example: one attempt, among many others, surely not the best, of a random-variable model that breaks the Bell inequalities. If I understand correctly, the results of the spin measurements depend on the apparatus settings (local random variables: what they call the "chameleon effect").
It has been a long time since I've looked at this paper :), but the first pages, before their model, are a very simple introduction to probability and Bell inequalities on a general probability space, and after that, to how to construct a model that breaks this inequality (local or global). This kind of view has led to the creation of a school of QM interpretation ("quantum probability", which is slightly different from the orthodox interpretation).


Second and last, I have some other comments on physics and the projection postulate and its dynamics.



nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter. (…)

(…) Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

The Schroedinger, Dirac and Maxwell equations can already be rederived as macroscopic approximations of the dynamics of simple binary on/off automata (see for example some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

I appreciate it when someone likes to check other possibilities in physical modelling (or theory if we prefer); it is a good way to discover new things. However, please avoid saying that a model/theory is better than another, as the only thing we can say (in my modest opinion) is that each model has its unknown domain of validity.
Note I do not reject the possibility of a perfect model (full domain of validity), but I prefer to think it currently does not exist.

The use of PDE models is interesting. It has already proved its value in many branches of physics. We can use PDEs to model non-relativistic QM as well as the relativistic one; this is not the problem.
For example, you have Bohmian mechanics (1952): you can get all the standard QM results with this model, just as you can write an ODE that complies with QM (insertion of a Brownian-motion-like term in Newton's equation - Nelson 1966 - or the more recent "simple binary on/off automata" of Garnet Ord that you mention, which seem to be the binary random walk of Brownian motion - not enough time to check it :(
The main problem is to know if we can get the results of the experiments in a simpler way using such a method.

The use of Hilbert space tools in the formulation of quantum mechanics is just a matter of simplicity. It is interesting when we face discrete-value problems (e.g. Sturm-Liouville-like problems). It shows, for example, how a set of discrete values of operators changes in time. Moreover, it shows simply the relativity of representations (e.g. the quantum q or p basis) by a simple basis change. It then shows in a simple way the connection (a basis change) between a continuous and a discrete basis (e.g. the {p,q} continuous basis and the {a,a+} discrete basis).
In this type of case, the use of PDEs may become very difficult. For example, the use of non-smooth functions in L2(R,dx) spaces introduces less intuitive continuity problems, requiring for example the introduction of extensions of the derivative operators in order to follow the full solutions in an extended space. This is not my cup of tea, so I won't go further.


The text follows in the next post. :cry:

Seratend
 
  • #61
second part


Now let’s go back to the projection postulate and time dynamics in the Hilbert space formulation.


nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, subspaces. It lacks the time dimension, thus the connection to the dynamics which is its real origin and ultimate justification and delimiter.


The reason it is detached from the time and the dynamics is precisely in order to empower it with the magic capability of suspending the dynamics, producing the "measurement" result with such and such probability, then resuming the dynamics. And without ever defining how and when exactly this suspension occurs, what and when restarts it... etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates. By the time student gets through all of it, his mind will be too numbed, his eyes too glazed to notice that emperor wears no trousers.


The projection postulate (PP) is one of the orthodox/Copenhagen postulates that is not well understood by many people, even though it is one of the simplest (if perhaps subtle).

The PP is not completely outside QM: it mimics the model of scattering theory. The only thing that we have to know about the PP is the description of the result of the unknown interaction between a quantum system (a system with N variables) and a measurement system (a quantum system of maybe an infinite number of quantum variables: 10^23 variables, or more):
-- From an input state of the quantum system, the PP gives an output state of the quantum system, as in quantum scattering theory, except that we assume that the time of interaction (the state update) is as short as we want and that the interaction may be huge:
|in> --measurement--> |out>
-- Like scattering theory, the projection postulate does not need to know the evolution of the "scattering center" (the measurement system): in scattering theory we often assume a particle with an infinite mass; this is not much different from a heavy measurement system.
-- Like scattering theory, you have a model: the state before the interaction and the state after the interaction. You do not care about what occurs during the interaction. And that is perfect, because you avoid manipulating the incommensurable variables and energies due to this huge interaction, where QM may become wrong. However, before and after the interaction we are in the supposed validity domain of QM: that's great, and it's exactly what we need for our experiments! Then we apply the Born rule: we then have our first explanation of why the Born rule applies to the PP model: it is only an extension of scattering theory rather than an out-of-a-hat postulate.

What I also claim with the PP is that I have a "postulate"/model that gives me the evolution of a quantum system interacting with a huge system, and that I can verify it in everyday quantum experiments.
I am not saying that I have a collapse or a magical system evolution, just what is written in most textbooks: I have a model of the time evolution of the system in interaction with the measurement system. Therefore, I also need to describe this evolution on all the possible states of the quantum system.

Now, most people using the PP forget the principal thing: the description of the complete modification of the quantum system by the measurement "interaction". Missing such a complete specification almost always leads to this "collapse" and other such stuff. When we look at the states selected by the measurement apparatus this is not a problem, but the paradoxes (and questions about the projection postulate or not) occur for the other states.

For example, say we have an apparatus that measures the |+> spin. It is common to read/see this description:
1) We associate to the apparatus the projector P_|+> = |+><+|. We thus say that we have an apparatus that acts on the entire universe forever (even before its beginning).
2) For a particle in a general state |in>, we find the state after the measurement:
|out> = P_|+>|in> = |+> (we skip the renormalisation)
And most people are happy with this result.
So if we now take a particle |in> = |-> and apply the projector, we get |out> = P_|+>|in> = 0.
Is there any problem?
I say: what is a particle with a state equal to the null vector? Consider now two particles in the state |in> = |in1>|->, on which the measurement apparatus acts:
|out> = P_|+>|in> = <+|->|in1>|+> = 0|in1>|+> = 0; the first particle has also disappeared during the measurement of the second particle.

What's wrong? In fact, as in scattering theory, you must describe the measurement-interaction output states for all input states, otherwise you will get into trouble. Ordinary QM, as well as the field/relativistic QM formalism, does not like the null vector as the state of a particle (it is one of the main mathematical reasons why we need to add a vacuum state / vacuum Hilbert space to the states of fields, i.e. <void|void> ≠ 0).
Therefore, we also have to specify the action of the measurement apparatus on the sub-Hilbert space orthogonal to the measured values!
In our simple case |out> = P_|->|in> = |somewhere>: we just say, for example, that particles of spin |-> will be stopped by the apparatus or, if we like, will disappear (in this case we need to introduce the creation/annihilation formalism: the jump to the void space). We may also take |somewhere(t)>, to say that after the interaction the particle has a new, non-permanent state: it is only a description of what the apparatus does to the particles.

So if we take our 2 particles we have:
|out> = (|+><+| + |somewhere><-|)|in1>|-> = |in1>|somewhere>

Particle 1 does not disappear and is unchanged by the measurement (we do not need to introduce the density operator to check it).

Once we begin to define the action of the PP on the complete Hilbert space (or at least the sub-Hilbert space of interest), everything becomes automatic and the magical stuff disappears.
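A minimal numerical sketch of this point (a toy example, assuming a three-dimensional space {|+>, |->, |somewhere>} for particle 2 and an arbitrary two-dimensional state |in1> for particle 1): the bare projector annihilates the two-particle state, while the completed map sends it to |in1>|somewhere>.

```python
import numpy as np

# Basis for particle 2's enlarged space: |+>, |->, |somewhere>.
plus      = np.array([1., 0., 0.])
minus     = np.array([0., 1., 0.])
somewhere = np.array([0., 0., 1.])

in1 = np.array([0.6, 0.8])                 # arbitrary state of particle 1
I1  = np.eye(2)

outer = lambda a, b: np.outer(a, b)        # |a><b| for real vectors

P_plus = outer(plus, plus)                 # naive projector: only |+><+|
M      = outer(plus, plus) + outer(somewhere, minus)   # completed map

state = np.kron(in1, minus)                # two-particle input |in1>|->

out_naive     = np.kron(I1, P_plus) @ state
out_completed = np.kron(I1, M) @ state

print("naive projector:  ||out|| =", np.linalg.norm(out_naive))   # 0.0: the pair "vanishes"
print("completed map:    out == |in1>|somewhere> ?",
      np.allclose(out_completed, np.kron(in1, somewhere)))        # True
```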

Even better, you can define exactly, and in a very simple way, where and when the measurement occurs, and describe local measurement apparatuses. Let's go back to our spin measurement apparatus:
Here is the form of a finite-spatial-extension apparatus measuring the |+> spin:
P_|+>=|there><there||+><+| (1)
where <x|there> = There(x). There(x) is different from 0 only on a small local zone of the apparatus; that is where the measurement will take place.

We thus have to specify the action on the other states (rest of the universe, rest of the spin states) otherwise P_|+> will make the particles “disappear” if particles are not within the spatial domain of the apparatus. For example:

P_|-> = |there><there||somewhere><-| + |everywhere-there><everywhere-there|(|+><+| + |-><-|)


And if we take |in> = |x_in(t)>|+>, a particle moving along the x-axis (with very small spatial extension), we approximately know the measurement time (we do not break the uncertainty principle :)): the time of interaction t_int occurs when |x_in(t_int)> = |there>.

So once we take the PP in the right manner (the minimum: only a state evolution), we have all the information needed to describe the evolution of the particle. And it is not hard to see and describe the dynamical evolution of the system, and to switch the measurement apparatus on and off over time.
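A small numerical illustration of the localized apparatus (a toy sketch under assumed simplifications: a 1-D grid, a freely moving Gaussian packet, spin and back-action ignored): the weight of the state inside the region "there" is only appreciable around the crossing time, which is the approximate "measurement time" mentioned above.

```python
import numpy as np

x = np.linspace(0.0, 100.0, 2001)
dx = x[1] - x[0]
there = (x > 60.0) & (x < 65.0)            # There(x): support of the apparatus

def packet(t, x0=10.0, v=5.0, width=2.0):
    """Normalized Gaussian packet |x_in(t)> moving along x at speed v."""
    psi = np.exp(-(x - x0 - v * t) ** 2 / (4 * width ** 2))
    return psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# ||P_there |psi(t)>||^2: probability weight inside the apparatus region.
for t in (0.0, 5.0, 10.5, 15.0):
    w = np.sum(np.abs(packet(t)[there]) ** 2) * dx
    print(f"t = {t:5.1f}   weight in 'there' = {w:.3f}")
```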



Seratend.
 
  • #62
seratend The use of PDE models is interesting. It has already proved its value in many physical branches. We can use the PDE to model the classical QM, as well as the relativistic one, this is not the problem.

Hilbert space is an abstraction of the linear PDEs. There is nothing about it that PDEs don't have, i.e. it doesn't add properties but it subtracts them. That is fine, as long as one understands that every time you make an abstraction, you throw out quite a bit from the more specific model you are abstracting away. Which means you may be abstracting away something which is essential.

In the Hilbert space abstraction we subtract the non-linear traits of the PDE (or integral equation) modelling tool. That again is perfectly fine, as long as one understands that linear modelling in any domain is generally an approximation to more detailed or deeper models. Our models are always an approximation to a domain of phenomena, like a Taylor expansion of a function within a proper domain.

While the linear term of Taylor series is useful, it is by no means the best math for everything and for all times. And surely one would not take the linear term and proclaim that all functions are really linear functions and then, to avoid admitting it ain't so, one then proceeds using piecewise linear approximations for everything. That could be fine too, but it surely cannot be said that it is the best one can do with functions, much less that it is the only thing valid about them.

This kind of megalomania is precisely what the Hilbert space abstraction (the linear approximation) taken as a foundation of QT has done -- this is how it all has to work, or it isn't the truest and the deepest physics.

Since linearity is a very limited model for much of anything, and it certainly was never a perfect approximation for all the phenomena that QT was trying to describe, the linear evolution is amended -- it gets suspended in an ill-defined, slippery manner (allegedly only when "measurement" occurs, whatever that really means), the "state" gets changed to where its linear-model shadow, the state vector (or statistical operator), ought to be, and then the linear evolution gets resumed.

That too is all fine, as long as one doesn't make such clumsy rigging and ad-hockery into the central truth about the universe. At least one needs to be aware that one is merely rectifying the inadequacies of the linear model while trying to describe phenomena which don't quite fit the first term of the Taylor expansion. And one surely ought not to make the ham-handed ways we use to ram it in into the core principle and make way too much out of it. There is nothing deep about the projection (collapse) postulate; it's an underhanded, slippery way to acknowledge 'our model doesn't work here'. It is not a virtue, as it is often made to look; it is merely a manifestation of a model defect, of ignorance.

Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell's QM "prediction" is a far stretch of the collapse postulate (which is detached from and declared contrary to and an override of the dynamics, instead of being recognized for what it is -- a kludgey do-hickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, and then proclaiming that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.
 
  • #63
nightlight said:
Hilbert space is an abstraction of the linear PDEs. There is nothing about it that PDEs don't have, i.e. it doesn't add properties but it subtracts them.
You have such an ego, it is amazing. You pretend to know everything about both domains, which is simply impossible for a single person!

What about fractal dimension as computed with wavelets? This is a great achievement of the Hilbert space formalism. The reason Hilbert spaces were discovered was to understand how Fourier could write his meaningless equations and get such powerful results at the end of the day. How come, when PDEs get too complicated to deal with, we reduce them to strange attractors and analyze these with Hilbert space techniques?

It is not because a computation is linear that it is trivial; it makes it doable. If you are smart enough to devise a linear algorithm, you can presumably deal with any problem. The Lie algebra reduces the study of arbitrary mappings to those linear near the identity: does it subtract properties? Yes, global ones. Does it matter? Not so much; they can be dealt with afterwards.

Your objections are formal, and do not bring very much. You are very gifted at hand-waving.
 
  • #64
nightlight said:
Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell's QM "prediction" is a far stretch of the collapse postulate (which is detached from and declared contrary to and an override of the dynamics, instead of being recognized for what it is -- a kludgey do-hickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, and then proclaiming that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.

I personally think that although this idea is interesting, it is very speculative, and you should keep an open mind towards the standard theory too. After all, under reasonable assumptions about the behaviour of photon detectors (namely that they select a random fraction of the photons to be detected, with their very measurable efficiency epsilon), we DO find the EPR-type correlations, which are hard to reproduce otherwise. So, indeed, not all loopholes are closed, but there is VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct (EDIT: what I mean by this is that if it weren't for an application in an EPR experiment, but, say, to find the coincidences of the two photons in a PET scanner, you probably wouldn't object to the procedure at all; so if presented with data (clicks in time) of an experiment of which you don't know the nature, and people ask you to calculate the original correlation when the efficiencies of the detectors are given, you'd probably calculate without hesitation the things you so strongly object to in the particular case of EPR experiments). Until you really can come up with a detailed and equally practical scheme to obtain these results, you should at least show the humility of considering that result. It is a bit easy to say that you have a much better model of QM, except for those results that don't fit in your conceptual scheme, which have to be totally wrong. Also, I think you underestimate the effort that people put into this, and they are not considered heretics. But saying that "young students' minds are misled by the priests of standard theory" or the like makes you sound a bit crackpottish, no? :-p
I repeat, this is not to say that work like you are considering should be neglected; but please understand that it is a difficult road full of pitfalls which has been walked before by very bright minds, who came back empty-handed. So I'm still convinced that it is a good idea to teach young students the standard approach, because research in this area is still too speculative (and I'd say the same about string theory!).

cheers,
Patrick.
 
  • #65
nightlight Hilbert space is an abstraction of the linear PDEs. There is nothing about it that PDEs don't have.

The reciprocal is also true. I think you really have a problem/belief with mathematical toys: they all say the same thing in different ways. One of the problems with these toys is the domain of validity we assume. The other problem is the advance of mathematical research, which can voluntarily restrict the domain of validity (e.g. 19th-century Riemann integration versus the 20th-century general theory of integration).
Please do not forget that I have no preference concerning PDEs or Hilbert spaces, because I have no objective demonstration that one formulation gives a larger or smaller domain of solutions than the other.
So you say that PDEs are better than Hilbert spaces (I don't know what "better" means in your mind); then please try to give a rigorous mathematical demonstration. I really think you have no idea (or maybe you do not want to have one) of how close they may be.
Like with the projection postulate, you give an affirmation but not a piece of demonstration, because if you give a demonstration, you have to give its domain of validity. Therefore you may see that your theorem depends on the assumed domain of validity of PDEs or Hilbert spaces (like restricting the domain of validity of integrals to the 19th century).
You are like some people saying that probability has nothing to do with integration theory.

I like PDEs, Hilbert spaces and probability; I always try to see what IS different and what only SEEMS different, and thus I pick whatever is more adequate to solve a problem.

nightlight
i.e. it doesn't add properties but it subtracts them. That is fine, as long as one understands that every time you make abstraction, you throw out quite a bit from the more specific model you are abstracting away. Which means you may be abstracting away something which is essential.

In the Hilbert space abstraction we subtract the non-linear traits of the PDE (or integral equation) modelling tool. That again is perfectly fine, as long as one understands that linear modelling in any domain is generally an approximation to more detailed or deeper models. Our models are always an approximation to a domain of phenomena, like a Taylor expansion of a function within a proper domain.

I think you have a block with linearity, like a schoolchild with addition and multiplication. He may think that addition has nothing to do with multiplication, until discovering that multiplication is only repeated addition.

You seem to view Hilbert space linearity like the first newcomers to the quantum area: you are trying to see in it what is not there.
The linearity of Hilbert spaces just allows us to say that ANY UNTHINKABLE vector MAY describe a system, and that's all:
We may choose the Hilbert space we want: L2(R,dx) or any other Hilbert space; it seems to have no importance, it is only a matter of representation. So how can you say that linearity "doesn't add properties but it subtracts them"? I say that linearity says NOTHING. It says just what is evident: all solutions are possible; they belong to the Hilbert space.

You say that the linearity of Hilbert spaces imposes more restrictions than the use of PDEs, while you use this "awful linearity" without problem when you write your PDEs in an abstract Euclidean space. I assume that you know that a Euclidean space is only a real Hilbert space. How do you manage this kind of linearity?


nightlight
While the linear term of Taylor series is useful, it is by no means the best math for everything and for all times. And surely one would not take the linear term and proclaim that all functions are really linear functions and then to avoid admitting it ain't so, one then proceeds using piecewise linear approximations for everything. That could be fine too, but it surely cannot be said that it is the best one can do with functions, much less that it is the only thing valid about them.

Please try to define what "by no means the best math for everything and for all times" means. Such an assertion is very broad, and may lead to the conclusion that the definitive use of a mathematical toy is fixed once and for all by the current knowledge we have about it.

Once again, you are restricting the domain of validity of the "Taylor" mathematical toy (as well as of Hilbert spaces). You are like a 19th-century mathematician discovering the meaning of continuity. You think, with your implicitly restricted domain of validity, that Taylor series only apply to analytic functions. Try to expand your view: like with the PDE toy, think for example of Taylor series in a different topological space with a different continuity. Think for example of the Stone-Weierstrass theorem and the notion of complete spaces.

Like Hilbert spaces, PDEs, probability theory, etc., Taylor series are only a toy, with advantages and disadvantages that can evolve with advances in mathematics.

nightlight

This kind of megalomania is precisely what the Hilbert space abstraction (the linear approximation) taken as a foundation of QT has done -- this is how it all has to work, or it isn't the truest and the deepest physics.

Since linearity is a very limited model for much of anything, and it certainly was never a perfect approximation for all the phenomena that QT was trying to describe, the linear evolution is amended -- it gets suspended in an ill-defined, slippery manner (allegedly only when "measurement" occurs, whatever that really means), the "state" gets changed to where its linear-model shadow, the state vector (or statistical operator), ought to be, and then the linear evolution gets resumed.

I think you mix the linearity of the operator space with the linearity of Hilbert spaces. How can you manage such a mix (an operator space is not a Hilbert space)? I really think you need to have a look at papers like those of Paul J. Werbos (arXiv). Such papers may help you better understand the difference between the linearity of a vector space and the non-linearity of an operator space. Maybe his papers will help you to better understand how the Hilbert space toys are connected to the ODE and PDE toys.

You even mix linearity with unitary evolution! How can you really speak about measurement if you do not seem to see the difference?
Look: unitary evolution assumes the continuity of evolution, like the assumption of continuity in PDEs or ODEs. It is not different. You can drop this requirement if you want, as in PDEs or ODEs (under some limiting conditions); there is no problem, only narrow-mindedness.
Dropping this requirement is equivalent to considering the problem of the domain of definition of unbounded operators in Hilbert spaces and the type of discontinuities they have (i.e. the reduction of the Hilbert space where the particle stays).



nightlight

That too is all fine, as long as one doesn't make such clumsy rigging and ad-hockery into the central truth about the universe. At least one needs to be aware that one is merely rectifying the inadequacies of the linear model while trying to describe phenomena which don't quite fit the first term of the Taylor expansion. And one surely ought not to make the ham-handed ways we use to ram it in into the core principle and make way too much out of it. There is nothing deep about the projection (collapse) postulate; it's an underhanded, slippery way to acknowledge 'our model doesn't work here'. It is not a virtue, as it is often made to look; it is merely a manifestation of a model defect, of ignorance. …


Once again, I think you do not understand the projection postulate, or you attach too many things to a simple state evolution (see my previous post).
Maybe it is the term "postulate" that you do not like, or maybe it is the fact that an apparatus not fully described by the theory is said to give the results of this theory? Tell me, what theory does not use such a trick to describe results that are in the end seen by a human?
The main difference comes from the fact that quantum theory states this explicitly, so our attention is called to it, and that is good: we must not forget that we are far from having described how everything works, and whether that is possible without requiring, for example, black boxes.

Please tell us what you really understand by the projection postulate! It's a good way to improve our knowledge and maybe detect some errors.

Seratend
 
  • #66
:smile: Continuation:

nightlight
Bell's theorem is precisely analogous to deifying the piecewise linear approximation model and proclaiming that the infinities of its derivatives are a fundamental trait of the modeled phenomenon. Namely, the Bell's QM "prediction" is a far stretch of the collapse postulate (which is detached from and declared contrary to and an override of the dynamics, instead of being recognized for what it is -- a kludgey do-hickey patching over the inadequacies of the linear approximation, the Hilbert space axiomatics, for the dynamics) to the remote non-interacting system, and then proclaiming that this shows fundamental non-locality. The non-locality was put in by hand through the inappropriate use of the piecewise linear approximation for the dynamics (i.e. through the misuse of the projection postulate of the standard QM axiomatics). It applies the instantaneous projection to the remote system, disregarding the time, distance and the lack of interaction. It is as hollow as making a big ado about the infinities in the derivatives of piecewise linear approximations. It's a waste of time.

I may be repeating Vanesch, but what do you mean by "collapse" or "collapse postulate"? We have only a "projection postulate" - see my previous post. We may even get a unitary-evolution model of the particles in the Bell-type experiment if we want! I would really like to know where you think there is a "fundamental" problem with this experiment/theorem.

As said before, I think you mix several types of linearity in your discussion and you confuse linearity with unitary evolution.



You also draw too many deductions from what the Bell theorem says: two calculational models of a 2-particle system give 2 different values. Then we have experiments, with certain assumptions as always in physics, which also give the known results. That's all. So where is the problem?

After that, you have the interpretations (yours, and everybody else's) and the corrections of the model and the theorem: we have had a lot of them since 1964. And one of the interpretations is centred on the non-locality of variables. OK, this interpretation of the theorem disturbs you and me, that's all (not the linearity of Hilbert spaces or anything else):

The non-locality is not given by the theorem itself; it is interpreted from the single "classical model" used in the theorem (its domain of validity): this model is incompatible with the model used in the QM formalism.
Bell, in his initial paper, gave his definition of "locality" and therefore one interpretation of the theorem: "the vital assumption is that the result B for particle 2 does not depend on the setting a of the magnet for particle 1, nor A on b". But we can also easily prove that some "non-local" variables may satisfy the Bell inequality if we want (e.g. Accardi 2000).
So the real question, I think, is: does "the classical model of Bell" contain all possible local-variable models? Rather than: does the theorem imply that the QM formulation is not compatible with local hidden variables.

Concerning the relevance of the experiments, a lot of work has been done since 1964. And we must say yes, we may have some errors and explicit filters and assumptions, etc. in the experiments, but the numerical results of the experiments give the QM expectation value with good confidence, beyond the mere breaking of the Bell inequality. So all of the errors must comply gently with the QM model: that is what is mainly sought in physics (or at least in technology): an experimental ("real") numerical value that complies with the abstract model (or the opposite :biggrin: ).

Seratend :bugeye:
 
  • #67
vanesch VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct

I don't find it a "reasonable experimental indication" at all.

The presence of the cos(2a) modulation on top of a larger "non-ideality" is a trait shared with the most natural classical physics of this setup. And that is all that gets tested.

If you could show the experimental results to any physicist, from Malus through Lorenz, they wouldn't be surprised by the data in the least. Maxwell could probably have written down a fairly accurate model for the actual data, since the classical EM fields have all the main "non-classical" traits of the quantum amplitudes, including entanglement (see papers by Robert Spreeuw, and http://remote.science.uva.nl/~spreeuw/lop.htm ).

So this argument is barking up the wrong tree. There is nothing distinguishing about a cos(2a)-modulated correlation. When you enhance the simple classical models with a detector model which models detection noise (via Stochastic Electrodynamics, the classical EM model with ZPF boundary conditions) along with the detector sensitivity curves, and perform the same kind of subtractions and data adjustments as the experimenters do, you get exactly what the experimenters have obtained (see the numerous Marshall & Santos papers on this; they've been fighting this battle for the last 25 years).

What distinguishes the alleged QM "prediction" from the classical one is the claim that such a setup can produce a pure, unmodulated cos(2a). That is a conjecture and not a prediction. Namely, the 2x2 space model gives you only a hint of what kind of modulation to expect and says nothing about the orbital degrees of freedom, much less about the dynamics of the detector trigger. To have a proper prediction, one needs to have error bounds on the prediction (this is not a sampling error but the prediction limits), so that one can say e.g. the data which gets measured will be in [a,b] 95% of the time (in the limit of an infinite sample). If I were in a hurry to make a prediction with error bounds, such that I had to put my money on the prediction, I would at least include the 'loss projectors' to "missed_aperture" and "failed_to_trigger" ... subspaces, based on known detector efficiencies, and thus predict a wide, modulated cos(2a) correlation, indistinguishable from the classical models.
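To make the "modulated versus unmodulated cos(2a)" distinction concrete, here is a toy local model (a sketch, not Marshall & Santos's SED model): each pair carries a common random polarization, and each polarizer transmits with Malus-law probability. The resulting correlation is cos(2(a-b)) with visibility 1/2, versus visibility 1 for the ideal quantum prediction (up to a sign convention that depends on the entangled state).

```python
import numpy as np

rng = np.random.default_rng(2)
M = 200_000
lam = rng.uniform(0.0, np.pi, M)            # shared hidden polarization angle

def outcome(angle):
    """+1 if the photon passes a polarizer at `angle`, else -1 (Malus law)."""
    return np.where(rng.random(M) < np.cos(angle - lam) ** 2, 1, -1)

def E(a, b):
    """Coincidence correlation between the two wings for settings a and b."""
    return float((outcome(a) * outcome(b)).mean())

for d in (np.pi/8, np.pi/4, 3*np.pi/8):
    print(f"a-b = {d:.3f}   classical E = {E(0.0, d):+.3f}"
          f"   0.5*cos(2(a-b)) = {0.5*np.cos(2*d):+.3f}"
          f"   ideal QM cos(2(a-b)) = {np.cos(2*d):+.3f}")
```

The CHSH combination built from the half-visibility correlation stays within |S| <= 2, while the full-visibility cos(2(a-b)) reaches 2*sqrt(2) at the usual angles; the argument above is about whether the full-visibility curve is ever a genuine prediction for a real apparatus.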

In other words, to have a QM prediction which excludes all classical models, one needs to examine any candidate setup through the natural classical models, then deduce a trait of the QM model which cannot be reproduced first by the natural classical models, and then, if the prediction passes this basic preliminary criterion, analyse to what degree any other type of classicality can be excluded by the prediction.

In the case of the Bell setup, one would have immediately found that classical EM predicts a modulated cos(2a) correlation, so one would have to conclude that the prediction of the 2x2 model (which, in order to have error bounds, has to include the loss projectors to cover for all the skipped-over and thus unknown details) is not sharp enough to draw the line between the QM and the classical models.

One would then evaluate the sharpest cos(2a) modulation that natural classical models produce (which amounts to 1/2 photon equivalent of noise added to the QM model). With this least of the QM-EM distinguishability thresholds at hand, one can now focus the QM modelling on this distinction line. One would analyse the orbital propagation, the apertures, the unsharpness of the source particle number, etc., using of course any relevant empirical data available (such as detector response curves), looking to shrink the wide "loss projectors" spread of the toy model, so that the more accurate QM prediction, which includes its error margin, now falls beyond the classical line. Then you would have a proper QM prediction that excludes at least the straightforward classical models. At this point, it would become clear that, with the present knowledge and empirical data on apparatus properties, the best proper prediction of QM is indistinguishable from the prediction of the natural classical models.

One has thus given up the proper QM prediction, leaving it as a conjecture, something the experiment will have to resolve. And it is here that the Bell inequality would come in to exclude the artificial classical models, provided the experiments show that the actual data violates the inequality.

The theory (aided by empirical apparatus data) does not have a prediction which distinguishes even the natural classical models from the QM model for this setup. The Bell inequality itself isn't a QM prediction; it is a classical prediction. And the error margins of the toy model are too large to make a distinction.

You may reply here that you don't need any such procedure, since QM already has the projection postulate, which predicts the inequality violation.

Ok, so, in order to distinguish the prediction from classical models, tell me: what error margin for the prediction does the postulate give? None that I know of (ignoring the finite sample error, which is implicitly understood and can be made as small as necessary).

Does that absence of any mention of an error margin mean that the error margin is 0 percent? If it does, then the postulate is plainly falsified by any measurement. Clearly, there is an implicit understanding here that if you happen to require an error margin for any particular setup, you will have to evaluate it from the specific setup and a model (or empirical data) for the detectors. Since we do need a good error margin here to be distinguishable, even if only to separate from the weaker threshold of the natural models, we can't avoid the kind of estimation sketched earlier.

You might try refining the response by saying: the absence of any mention of an error margin means that the "ideal system" has error margin 0. Which axiom defines the "ideal system"? Is that any system that one can construct in the Hilbert space? If so, why bother with all the arguments about the shaky projection postulate, when one can simply construct a non-local Hamiltonian that no local model can reproduce?

what I mean by this is that if it weren't for an application in an EPR experiment, but say, to find the coincidences of the two photons in a PET scanner, you probably wouldn't object to the procedure at all ; ...

Why would I object? For the prediction here, distinguishability from classical models is irrelevant; it is not a requirement. Therefore the constraints on the error margins of the prediction are much weaker. Why would the experiment designer care here whether any classical model might be able to replicate the QM prediction? He has entirely different requirements. He might even bypass much of the error margin computations and simply let the setup itself "compute" what these are. For him it may not matter whether he has a genuine prediction or just a heuristic toy model to guide the trial and error -- he is not asserting a theorem (as Bell's QM "prediction" is often labeled) claiming that there is a prediction with such and such error margins.

Until you really can come up with a detailed and equally practical scheme to obtain these results, you should at least show the humility of considering that result.

I did put quite a bit of thought and effort into this problem over the years. And for several years I believed, with the utmost humility and the highest respect, the conventional story line.

It was only after setting foot in a real-life Quantum Optics lab that I realized that the conventional presentations are largely misleading and continue to waste physics students' time and creativity. It would be better for almost everyone if their energies were redirected away from this tar pit.

But saying that "young student's minds are misled by the priests of standard theory" or the like makes you sound a bit crackpottish, no ?

I didn't 'invent' the "priesthood" label. I heard it first in this context from Trevor Marshall, who, if anyone, knows what he is talking about.

So I'm still convinced that it is a good idea to teach young students the standard approach

Well, yes, the techniques have to be taught. But if I were to teach my own kids, I would tell them to forget the projection postulate as an absolute rule and to take it as a useful but limited approximation, and, as a prime example of its misuse and of the pitfalls to watch for, I would show them Bell's theorem.
 
  • #68
nightlight said:
If you could show the experimental results to any physicist, from Malus through Lorenz, they wouldn't be surprised by the data in the least. Maxwell could probably have written down a fairly accurate model for the actual data, since the classical EM fields have all the main "non-classical" traits of the quantum amplitudes, including entanglement (see papers by Robert Spreeuw, and http://remote.science.uva.nl/~spreeuw/lop.htm ).

I think I'm beginning to see what you are alluding to. Correct me if I'm wrong. You consider the parametric down conversion as a classical process, out of which come two continuous EM waves, which are then "photonized" only locally in the detector, is that it ?

EDIT: have a look at quant-ph/9810035



cheers,
Patrick.
 
  • #69
nightlight said:
vanesch VERY REASONABLE EXPERIMENTAL INDICATION that the EPR predictions are correct

I don't find it a "reasonable experimental indication" at all.

The presence of the cos(2a) modulation on top of a larger "non-ideality" is a trait shared with the most natural classical physics of this setup. And that is all that gets tested.

I'm probably very naive, but let's do a very simple calculation. We assume a perfect singlet state from the start (psi = 1/sqrt(2) (|+>|-> - |->|+>)). It is probably an idealisation, and there might be some pollution by |+>|+> states, but let us assume that it is negligible.
Let us assume we have an angle th between the polarizers, and hence a quantum prediction of correlation C_ideal. So C_ideal is the fraction of hit-hit combinations within a certain coincidence time (say, 50 ns) on both detectors and is given, by the simple QM model, by C_ideal = sin(th/2)^2.

Now, assume efficiencies e1 and e2 for both photon detectors (we could take them to be equal as a simplifying assumption).

Take detector 1 as the "master" detector. The efficiency e1 is just a limitation on the number of events (it is as if the intensity were multiplied by e1), so we do not even have to take it into account. Each time that detector 1 has seen a photon, we ASSUME - granted - that there has been a photon in branch 2 of the experiment, and that two things can happen: either the second photon was in the wrong polarization state, or it was in the right one. The probability, according to QM, of being in the right polarization state is C_ideal. For each of those, the probability of actually being detected is e2. So I'd guess that the quantum prediction for coincidences in channel 2, given that channel 1 triggered, is e2 x C_ideal.
Vice versa, the prediction for coincidences in channel 1, when channel 2 triggered, is e1 x C_ideal.
If these two rates of coincidence are verified, I'd take it as experimentally established that C_ideal is a correct theoretical prediction.
Here, e1 and e2 can be very small, it doesn't matter, because ANY original coincidence rate undergoes the same treatment. Also, I didn't take spurious coincidences into account, but they are related to the overall flux, so they can easily be distinguished experimentally.
I would think that this is how people in quantum optics labs do their thing, no ?

cheers,
Patrick.

EDIT: I realize that there is of course something else that can seriously disturb the measurement, which is a proportional rate of uncorrelated photons from the downconversion xtal. But it is also reasonably easy to get rid of that.

Indeed, we can assume that the photons falling onto detector 1 which are not correlated with the second branch can be found by removing the second polarizer. So if we have a rate I1 of uncorrelated photons, and a rate I2 of correlated ones (the ones we are modelling), we can simply remove polarizer 2 and find the number of coincidences we obtain that way. It will equal e2 x I2. So for a total rate (I1 + I2) on detector 1, this means e2 x I2 corresponds to 100% correlation; hence we have to multiply our original rate e2 C_ideal by I2/(I1+I2) as a prediction. Even this will have to be corrected if there is a loss in the polarizer, which could be introduced as an "efficiency" of the polarizer. All this is standard experimental technique.
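To keep the bookkeeping straight, here is a minimal sketch that just assembles the prediction described in this post (the function name, the polarizer-loss parameter and all numbers are mine, purely for illustration):

```python
import numpy as np

def predicted_coincidence_fraction(theta, e2, I1, I2, polarizer_loss=0.0):
    """Expected coincidences in channel 2 per detector-1 trigger.

    theta          -- angle between the polarizers (radians)
    e2             -- quantum efficiency of detector 2
    I1, I2         -- rates of uncorrelated / correlated photons on detector 1
    polarizer_loss -- extra loss in polarizer 2, folded in as an efficiency
    """
    C_ideal = np.sin(theta / 2.0) ** 2          # the simple QM model used above
    dilution = I2 / (I1 + I2)                   # only the correlated fraction can coincide
    return e2 * (1.0 - polarizer_loss) * C_ideal * dilution

for deg in (0, 45, 90, 135, 180):
    f = predicted_coincidence_fraction(np.radians(deg), e2=0.15, I1=200.0, I2=1000.0)
    print(f"theta = {deg:3d} deg -> predicted coincidences per detector-1 trigger: {f:.4f}")
```

The raw coincidence fraction is just C_ideal scaled down by e2, the polarizer loss and the correlated fraction I2/(I1+I2), which is the quantity one would compare directly to the raw data.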
 
  • #70
vanesch You consider the parametric down conversion as a classical process, out of which come two continuous EM waves, which are then "photonized" only locally in the detector, is that it ?

The classicality of PDC is not my conjecture but a mathematical result of the same kind as the similar 1963 Sudarshan-Glauber result (cited earlier) showing the same for thermal and laser sources -- there is no multipoint coincidence setup for these sources that can yield correlations distinguishable from the classical EM field interacting with a square law detector (which in turn can be toy-modelled as the ionisation of a Schrodinger atom). Marshall, Santos and their collaborators/disciples have shown this equivalence for type I and type II PDC sources. Here are just a couple (among over a dozen) of their papers on this question:
Trevor W. Marshall Do we need photons in parametric down conversion?

The phenomenon of parametric down conversion from the vacuum may be understood as a process in classical electrodynamics, in which a nonlinear crystal couples the modes of the pumping field with those of the zeropoint, or "vacuum" field. This is an entirely local theory of the phenomenon, in contrast with the presently accepted nonlocal theory. The new theory predicts a hitherto unsuspected phenomenon - parametric up conversion from the vacuum.
Alberto Casado, Trevor W. Marshall, Emilio Santos
Type-II parametric down conversion in the Wigner-function formalism. Entanglement and Bell's inequalities

We continue the analysis of our previous articles which were devoted to type-I parametric down conversion, the extension to type-II being straightforward. We show that entanglement, in the Wigner representation, is just a correlation that involves both signal and vacuum fluctuations. An analysis of the detection process opens the way to a complete description of parametric down conversion in terms of pure Maxwell electromagnetic waves.

The essential new ingredient in their approach is the use of the vacuum fluctuations (they call it the Zero Point Field, ZPF, when referring to it as the special initial & boundary conditions of the classical EM field; the ZPF distribution is not some kind of adjustable fudge factor, since it is uniquely determined by the requirement of Lorentz invariance) and its relation to the detection. That addition also provides a more concrete physical interpretation of the somewhat abstract 1963 results for the thermal and laser sources.

Namely, the ZPF background (which amounts to an equivalent of 1/2 photon per mode, and which the "normal ordering" of operators in QED, used for computing the multipoint correlations, discards from the expressions) allows one to have, among other effects, a sub-ZPF EM wave, with energy below the background (the result of a superposition with the source field, e.g. on the beam splitter), which carries the phase info of the original wave and can interfere with it, yet is normally not detected (since it falls below the detector's dark-current trigger cutoff, which is calibrated to register no events for the vacuum fluctuations alone; this is normally accomplished by a combination of adjustments to the detector's sensitivity and the post-detection subtraction of the background rate).

This sub-ZPF component (a kind of "dark wave"; I recall Marshall using the term "ghost" or "ghost field" for it) behaves formally as a negative probability in the Wigner joint-distribution formalism (it has been known for a long time that allowing negative probabilities yields, at least formally, classical-like joint distributions; the sub-ZPF component provides a simple and entirely non-mysterious interpretation of these negative probabilities). It is conventionally undetectable (in the coincidence experiments it gets calibrated and subtracted away to match the correlations computed using the normal ordering rule; it shows up as a negative dip below the baseline rate=0 in some more detailed data presentations), yet it travels the required path and picks up all the inserted phase shifts along the way, so the full interference behavior is preserved, while any detection on each path shows just one "photon" (the averaging over the ZPF and source distributions smooths out the effect, matching quantitatively the photon anti-correlation phenomenon which is often mentioned as demonstrating the "particle" aspect of photons).

{ Note that ZPF alone plus classical point particles cannot reproduce the Schroedinger or Dirac equations (that goal used to be the "holy grail" in the early years of Stochastic ED, the 1960s & 1970s). Although there were some limited successes over the years, Marshall now admits it can't work; this realization has nudged the approach toward Barut's Self-field ED, except with the advantage of the ZPF tool.}
 
  • #71
vanesch See, I didn't need any projection as such...

I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR= should suffice to convince you that you're using the non-dynamical collapse (Dirac's quantum jump). He explains it by picking apart the common obfuscatory verbiage, using the Landau-Lifsh-itz** and Gottfried QM textbooks as examples. You'll also find that his view of the collapse, the measurement and the teaching of the two is not very different from what I was saying here. Among other points I agree with, he argues that the collapse should be taught as a consequence of the dynamics, not in addition to it (as a postulate). He returns to and discusses Schroedinger's original interpretation (the view that |Psi|^2 is a density of "stuff"). Then he looks at ways to remedy the packet-spread problem that Schroedinger found (schemes such as the de Broglie-Bohm theory and the Ghirardi-Rimini-Weber dynamical, thus non-linear, collapse). Note that this is an open problem in Barut's approach as well, even though he had some toy models for the matter field localizations (from which he managed to pull out a rough approximation for the fine structure constant).



{ ** The PF's 4-letter-word filter apparently won't allow the entry of Landau's coauthor.}
 
  • #72
nightlight said:
this is normally accomplished by a combination of adjustments to the detector's sensitivity and the post-detection subtraction of the background rate).

Well, you're now in my field of expertise (which is particle detectors) and this is NOT how this works, you know. When you have gas or vacuum amplification, it is essentially noiseless ; the "detector sensitivity" is completely determined by the electronic noise of the amplifier and the (passive) capacitive load of the detector, and can be calculated using simple electronics simulations which do not take into account any detector properties except its capacitance (C).
A good photomultiplier combined with a good amplifier has a single-photon signal which stands out VERY CLEARLY (tens of sigma) from the noise. So the threshold of detection is NOT adjusted "just above the vacuum noise" as you seem to imply (or I misunderstood you).

cheers,
Patrick.


PS: you quote a lot of articles to read ; I think I'll look at (some) of them. But all this takes a lot of time, and I'm not very well aware of all these writings. I'm still convinced those approaches are misguided, but I agree that they are interesting, up to a point. Hey, maybe I'll change sides :-)) Problem is, there's SO MUCH to read.
 
  • #73
Nightlight ... I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR=

It is a good paper for a first introduction to the decoherence program.
Note that "collapse" does not mean that the quantum state |psi> has really collapsed into a new state |An>. It is always a conditional state: the quantum state |An> given that a measurement apparatus has yielded the value gn (as in conditional probability).

Seratend.
 
  • #74
seratend said:
Nightlight ... I think John Bell's last paper http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=against+measurement&AU=bell&AF=&CL=&RP=&YR=

It is a good paper for a first introduction to the decoherence program.
Note that "collapse" does not mean that the quantum state |psi> has really collapsed into a new state |An>.

Yes, exactly. In fact, the funny thing is that the decoherence program seems to say that people who insist on the nonlinearity of the measurement process didn't take the linearity seriously enough :-p
However, it is true that the decoherence program, in itself, still doesn't completely solve the measurement problem, and any work that can shed more light on it can be interesting.
What disturbs me a bit in the approach suggested by nightlight and the people he cites is not so much that "standard theory is offended", but that they obstinately seem to reject the EPR experimental results EVEN if, by fiddling around a lot, there are still many ways to explain the results by semiclassical approaches (or at least POTENTIALLY explain them). I'm maybe completely missing the point, but I think that there are 2 ways of comparing experimental results to theoretical predictions. One is to try to "correct the measurements and extract the ideal quantities". Of course that looks like fudging the data. But the other is: taking the ideal predictions, applying the experimentally expected transformations (like efficiencies and so on), and comparing that with the raw data. Both procedures are of course equivalent, but the second one seems to be much more "acceptable". As far as I know, all EPR type experiments agree with the predictions of QM EVEN IF ONE COULD THINK OF SPECIFIC SEMICLASSICAL THEORIES almost made up for the purpose of obtaining the same results - potentially. This, to me, is for the moment sufficient NOT TO REJECT the standard theory, which, at least with simple "toy models" and "toy experimental coefficients", obtains correct results.

cheers,
Patrick.
 
  • #75
A good photomultiplier combined with a good amplifier has a single-photon signal which stands out VERY CLEARLY (tens of sigma) from the noise. So the threshold of detection is NOT adjusted "just above the vacuum noise" as you seem to imply (or I misunderstood you).

You can change the PM's bias voltage and thus shift along its sensitivity curve (a sigmoid-type function). That simply changes your dark current rate/vacuum fluctuation noise. The more sensitive you make it (to shrink the detection loophole), the larger the dark current, and thus the larger the explicit background subtraction. If you wish to have lower background subtractions, you select a lower PM sensitivity. It is a tradeoff between the detection and the subtraction "loopholes".

In Aspect's original experiment, they found the setup to be most efficient (most accepted data points per unit time) when they tuned the background subtraction rate to be exactly equivalent to the background DC offset on the cos^2() that the natural classical model predicts as necessary.

This was pointed out by Marshall, who demonstrated a simple classical, non-ZPF EM model with exactly the same predictions as the pre-subtraction coincidence rates Aspect reported. After some public back and forth via letters and papers, Aspect repeated the experiments with the subtractions reduced below the simple classical model (by lowering the sensitivity of the detectors) and still "violated" the Bell inequality (while the subtraction loophole shrank, the detection loophole grew). Then Marshall & Santos worked out the initial versions of their ZPF-based model (without a good detection model at the time), which matched the new data, but by this time the journals had somehow lost interest in the subject, leaving Aspect with the last word in the debate.

In addition to tuning a particular PM, changing the type of detector, as well as the photon wavelength (which affects the analyzing efficiency of the polarizer, basically offsetting any efficiency gains made on the detection), by itself selects a different tradeoff point between the dark rate and the quantum efficiency.
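Purely as an illustration of this tradeoff (the sigmoid and exponential shapes and every number below are invented, not taken from any datasheet), one can sketch how raising the bias buys quantum efficiency at the price of a rapidly growing dark rate that then has to be subtracted:

```python
import numpy as np

def quantum_efficiency(bias_v, v0=900.0, width=60.0, qe_max=0.40):
    """Hypothetical sigmoid-shaped QE vs bias voltage."""
    return qe_max / (1.0 + np.exp(-(bias_v - v0) / width))

def dark_rate(bias_v, r0=50.0, v_scale=120.0):
    """Hypothetical dark count rate (counts/s), growing roughly exponentially with bias."""
    return r0 * np.exp(bias_v / v_scale)

for v in (800, 900, 1000, 1100):
    qe, dark = quantum_efficiency(v), dark_rate(v)
    print(f"bias {v:4d} V: QE ~ {qe:.2f}, dark rate ~ {dark:10.0f} cps "
          f"(higher QE -> smaller detection loophole, larger subtraction)")
```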

Since the classical predictions are within a 1/2 photon equivalent of the "ideal" QM prediction, and since the more realistic QED models do use the "normal ordering" of a+, a- operators (which amounts to subtracting the divergent vacuum energy of 1/2 photon per mode; different ordering rules change the joint distribution types, e.g. from Husimi to Wigner), it is highly doubtful that anything decisive can ever come out of the photon experiments. Separately, the popular source of recent years, PDC, is 100% equivalent to the classical ZPF-based EM predictions for any coincidence measurements, with any number of detectors and any number of linear components (polarizers, beam splitters, etc). Just as one didn't have to bother debunking blow by blow any non-classicality claim based on linear optical components and thermal sources from 1956 on (the Hanbury Brown & Twiss effect) or laser sources from 1963 on (Sudarshan, Glauber), since the late 1990s one doesn't need to do it for PDC sources either. The single-atom sources are still usable (if one is a believer in the remote, non-dynamical, instant collapse), but the three-body problem has kept these setups even farther from the "ideal".
 
  • #76
vanesch but let's do a very simple calculation. ...

Simple, indeed. I don't wish to excessively pile on references, but there is an enormous amount of detailed analysis on this setup.

Take detector 1 as the "master" detector. ... Each time that detector 1 has seen a photon, we ASSUME - granted - that there has been a photon in branch 2 of the experiment,

The master trigger may be a background event as well. Also, you have 2 detectors for A and 2 for B, and thus you have 2^4-1 combinations of events to deal with (ignoring the 0,0,0,0).

and that two things can happen: either the second photon was in the wrong polarization state, or it was in the right one.

Or that there is none, since the "master" trigger was a vacuum fluctuation, amplification noise or any other noise. Or that there is more than one photon (note that the "master" has a dead time after a trigger). Keep in mind that even if you included the orbital degrees of freedom and followed the amplitudes in space and time, you would still be putting in the non-relativistic QM approximation of a sharp, conserved particle number, which isn't true for QED photons.

If you compute QED correlations for the coincidences using "normal operator ordering" (Glauber's prescription, which is the canonical Quantum Optics way), you're modelling a detection process which is calibrated to subtract the vacuum fluctuations, and which yields Wigner joint distributions (in phase space variables). The "non-classicality" of these distributions (the negative probability regions) is always equivalent, for any number of correlations, to classical EM with ZPF subtractions (see the Marshall & Santos papers for these results).

If your correlation computation doesn't use normal operator ordering to remove the divergent vacuum fluctuations from the Hamiltonian, and instead uses a high-frequency cutoff, you get the Husimi joint distribution in phase space, which has only positive probabilities, meaning it is a perfectly classical stochastic model for all photon correlations (it is mathematically equivalent to the Wigner distribution with the negative probability regions smoothed out by a Gaussian; the smoothing is physically due to the indeterminacy of the infrared/soft photons). Daniele Tommasini has several papers on this QED "loophole" topic (it surely takes some chutzpah to belittle the more accurate QED result as a "loophole" for the approximate QM result).
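As a small numerical check of that Gaussian-smoothing statement (a textbook phase-space exercise of my own, not taken from the cited papers; hbar = 1 conventions): the Wigner function of a one-photon Fock state is negative near the origin, and convolving it with the vacuum Gaussian yields the everywhere non-negative Husimi function.

```python
import numpy as np
from scipy.signal import fftconvolve

x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
X, P = np.meshgrid(x, x)
R2 = X**2 + P**2

W1 = (1.0 / np.pi) * (2.0 * R2 - 1.0) * np.exp(-R2)   # Wigner of |1>: negative near the origin
W_vac = (1.0 / np.pi) * np.exp(-R2)                   # Wigner of the vacuum (a Gaussian)

# Husimi Q = Wigner convolved with the vacuum Gaussian
Q1 = fftconvolve(W1, W_vac, mode="same") * dx * dx
Q1_exact = (1.0 / (2.0 * np.pi)) * (R2 / 2.0) * np.exp(-R2 / 2.0)   # analytic Q of |1>

print("min of Wigner  :", W1.min())                   # about -1/pi
print("min of Husimi  :", Q1.min())                   # ~0 up to numerical error
print("max |Q - exact|:", np.abs(Q1 - Q1_exact).max())
```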

So I'd guess that the quantum prediction for coincidences in channel 2, given that channel 1 triggered, is e2 x C_ideal...

What you're trying to do is rationalize the fair sampling assumption. The efficiencies are features of averages of detections over Gaussian (for high rates) or Poisson (for low rates) detection event distributions, neither of which is usable for Bell violation tests, since the classical models' trigger rates are equivalent to the "ideal" QM prediction smoothed out/smeared by precisely these types of distributions.

The QED "loophole" is not a problem in routine Quantum Optics applications. But if you're trying to distinguish the "ideal" QM model from a classical model, you need much sharper parameters than averages over Poisson/Gaussian distributions.

In addition to the QED "loophole", the assumption that the average ensemble properties (such as the coincidence detection efficiency, which is a function of the individual detectors' quantum efficiencies) must also be properties of the individual pairs has another, even more specific problem in this context.

Namely, in a classical model you could have a hidden variable shared by the two photons (set at the pair creation time, and allowing the photons to correlate nearly perfectly for parallel polarizers). In the natural classical models this variable is a common and specific polarization orientation. For the ensemble of pairs this polarization is distributed equally in every direction. But for any individual pair it has a specific value (thus the rotational symmetry of the state is an ensemble property which need not be a property of an individual pair in LHV theories; and it is not rotationally symmetric for individual pairs even in the most natural classical models, i.e. you don't need some contrived toy model to have this rotational asymmetry at the individual-pair level while retaining the symmetry at the ensemble level).

Now, the splitting on the polarizer breaks the amplitudes into sin() and cos() projections relative to the polarizer axis for the (+) and (-) results. But since the detection probability is sensitive to the squares of the amplitudes incident on each detector, this split automatically induces a different total probability of coincident detection for an individual pair (for general polarizer orientations). The quantum efficiency figure is an average, insensitive to this kind of systematic bias, which correlates the coincidence detection probability of an individual pair with the hidden polarization of that pair, and thus with its result.

This classically perfectly natural trait is a possibility that the "fair sampling" assumption excludes upfront. The detection of an individual pair (as a coincidence) may be sensitive to the orientation of its hidden polarization relative to the two polarizer orientations, even though the ensemble average may lack this sensitivity. See Santos's paper on the absurdity of "fair sampling" (due to this very problem), and also a couple of papers by Khrennikov which propose a test of "fair sampling" for the natural classical models (which yield a prediction for detecting this bias).
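Here is a small Monte Carlo toy (my own construction, not a model from the Santos or Khrennikov papers, and not claimed to reproduce QM) that only illustrates the direction of this bias: when the single-side detection probability depends on how the hidden polarization lines up with the local polarizer axis, the correlation computed from detected coincidences comes out stronger than the correlation over all emitted pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(a, b, bias_power, n_pairs=200_000):
    lam = rng.uniform(0.0, np.pi, n_pairs)            # shared hidden polarization per pair
    A = np.sign(np.cos(2 * (lam - a)))                # deterministic classical outcomes
    B = -np.sign(np.cos(2 * (lam - b)))               # anti-correlated when a == b
    pA = np.abs(np.cos(2 * (lam - a))) ** bias_power  # detection prob. tied to alignment
    pB = np.abs(np.cos(2 * (lam - b))) ** bias_power
    coinc = (rng.random(n_pairs) < pA) & (rng.random(n_pairs) < pB)
    return np.mean(A * B), np.mean(A[coinc] * B[coinc])

for deg in (0, 22.5, 45, 67.5, 90):
    E_all, E_det = run(0.0, np.radians(deg), bias_power=4.0)
    print(f"angle {deg:5.1f} deg: E(all pairs) = {E_all:+.3f}, "
          f"E(detected coincidences) = {E_det:+.3f}")
```

With bias_power = 0 every pair is detected and the two columns coincide; raising it concentrates the detected sample on favourably aligned pairs, deepening the apparent modulation.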
 
  • #77
nightlight said:
vanesch but let's do a very simple calculation. ...

Simple, indeed. I don't wish to excessively pile on references, but there is an enormous amount of detailed analysis on this setup.

May I ask you something ? Could you point me to a "pedagogical reading list", in the right order, for me to look at ? I'm having the impression in these papers that I'm being sent from reference to reference, with no end to it (all papers have this trait of course, it is just that normally, at a certain point, this is absorbed by "common knowledge in the field").
I think this work is interesting because it illustrates what exactly is "purely quantum" and what not.

cheers,
Patrick
 
  • #78
vanesch May I ask you something ? Could you point me to a "pedagogical reading list", in the right order, for me to look at ?

I am not sure what specific topic you are referring to for the list. In any case, I haven't used, and don't have, any such list. Marshall & Santos have perhaps a couple hundred papers, most of which I have in paper preprint form. I would say that one can find their more recent equivalents/developments of most of what was worth pursuing (http://homepages.tesco.net/~trevor.marshall/antiqm.html has a few basic reference lists with informal intros to several branches of their work). The phase space distributions are a well developed field (I think Marshall & Santos's work here was the first rational interpretation of the negative probabilities appearing in these distributions).

http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR= servers have most of Barut's stuff (I did have to get some missing ones as paper copies by mail from ICTP, and some hard-to-find ones from his former students).

There is also E.T. Jaynes publication archive (I found especially interesting #71, #74, #68, #67, and the unpublished paper on Dyson). His classic http://www-laplace.imag.fr/Jaynes/prob.html is also online.

Among other people you may not have heard of, an applied math professor, Garnet Ord, has found an interesting combinatorial origin of all the core equations of physics (Maxwell, Schroedinger, Dirac) - they're all continuum limits of the combinatorial/enumerative properties of the fluctuations of plain Brownian motion (obtained without analytic continuation, purely combinatorially; Gerard 't Hooft has been playing with similar models lately, apparently having lost faith in the QM orthodoxy and the physical relevance of the no-go "theorems").

Another collection of interesting and often rare papers and preprints is the http://kh.bu.edu/qcl/ , plus many of Toffoli's papers.
 
  • #79
nightlight said:
Now, the splitting on the polarizer breaks the amplitudes into sin() and cos() projections relative to the polarizer axis for the (+) and (-) results. But since the detection probability is sensitive to the squares of the amplitudes incident on each detector, this split automatically induces a different total probability of coincident detection for an individual pair (for general polarizer orientations). The quantum efficiency figure is an average, insensitive to this kind of systematic bias, which correlates the coincidence detection probability of an individual pair with the hidden polarization of that pair, and thus with its result.

This classically perfectly natural trait is a possibility that the "fair sampling" assumption excludes upfront.

I think I understand what you mean. To you, a "photon" is a small, classical wavetrain of EM radiation, and the probability of detection depends on the square of its amplitude. I tried to quickly find a counterexample, and indeed, in most if not all "single photon events" the behaviour seems identical to that of the "QED photon". So you claim that if we send a polarized EM beam at 45 degrees onto a detector, each little wavetrain arrives "fully" at the photocathode and hence has a probability epsilon of being detected. However, if we now put an X polarizer into the beam, then contrary to the QED view we do not block half of the "photons" (wave trains), but we let through ALL of them, only with wave train amplitudes diminished (by sqrt(2)), namely purely their X component. This results, you claim, in half the detection efficiency, so it is only the detector that sees half of them, while we naive physicists think that physically only half of them are present. It is just that they are 'half photons'. By turning the polarizer, we can make "tiny tiny photons" which still encode the wavelength or frequency, but have a very very small probability of being detected.
Right. So you will agree with me that you are essentially refuting the photon as a particle, treating it just as a detection phenomenon of classical EM radiation ; in that case I also understand your insistence on refusing the fair sampling idea.
How do you explain partial absorption in a homogeneous material ? Loss of a number of wavetrains, or diminishing the amplitude of each one, keeping the total number equal ?

cheers,
Patrick.
 
  • #80
vanesch To you, a "photon" is a small, classical wavetrain of EM radiation, and the probability of detection depends on the square of its amplitude.

The QM (or Quantum Optics) amplitude does exactly the same here. Check any computation for free-space propagation or propagation through the polarizer or other linear elements -- it follows precisely Maxwell's equations. The only difference is that I don't imagine a marble floating somehow inside this amplitude packet. It is not necessary, and it makes no empirical difference (and one can't even define a consistent theoretical position operator for the photon). The a+, a- operators don't give you any marble-like localization hint; they create and destroy field modes (which are spatially extended).

Of course, it is fine to use any visual mnemonic one finds helpful for some context, but one has to be aware that that's all it is and not create "paradoxes" by attributing ontology to a private visual mnemonic device.

So you claim that if we send a polarized EM beam under 45 degrees onto a detector, each little wavetrain arrives "fully" at the photocathode and hence has a probability epsilon to be detected.

The Quantum Optics amplitude does exactly the same split, since it follows the same equations for the orbital propagation. For any thermal, laser or PDC source you cannot set up any configuration of linear optical elements and square law detectors that will predict any difference in coincidence counts.

Of course, I am not including in the term "prediction" here the predictions of some QM toy model, since that is not a model of an actual system and it doesn't predict what is actually measured (the full raw counts, or the existence, much less the rate, of background counts, or the need for and the model of the background subtractions). It predicts the behavior of some kind of "ideal" system (which has no empirical counterpart) that has no vacuum fluctuations (which are formally subtracted in QO/QED by the normal ordering convention, a subtraction mirrored by the detector design, threshold calibration and background subtractions to exclude the vacuum events) and that has a sharp, conserved photon number.

(-- interrupted here--)
 
  • #81
vanesch ... us naive physicists thinking that there physically are only half of them present. It is just that they are 'half photons'. ...

What exactly is the empirical difference? The only sharp test that can exclude the natural classical field models (as well as any local model) is the Bell inequality test, and so far it hasn't done so.

Dropping the "loophole" euphemisms, the plain factual situation is that these tests have excluded all local theories which satisfy the "fair sampling" property (which neither the classical EM fields nor the QED/QO amplitudes satisfy). So the tests have excluded theories that never existed in the first place. Big deal.

So you will agree with me that you are essentially refuting the photon as a particle, treating it just as a detection phenomenon of classical EM radiation ; in that case I also understand your insistence on refusing the fair sampling idea.

In which sense is the photon a particle in QED? I mean, other than jargon (or a student taking Feynman diagrams a bit too literally). You prefer to visualise counting marbles, while I prefer to imagine counting modes, since a mode has neither position nor individuality.

The discreteness of the detector response is a design decision (the photo-ionisation in detectors is normally treated via a semi-classical model; even the few purely QED treatments of the detection don't invoke any point-like photons and show nothing different or new). Detectors could also perfectly well measure and correlate continuous photo-currents. (Of course, even in the continuous detection mode a short pulse might appear as a sharp spike, but that is a result of the amplitude modulation, which is not related to a pointlike "photon.")

One also needs to keep in mind that the engineering jargon of Quantum Optics uses the label "non-classical" in a much weaker sense than the fundamental non-classicality we're discussing (for which the only sharp test is some future Bell-type test which could violate the Bell inequalities).

Basically, for them anything that the most simple-minded boundary & initial conditions of the classical EM model cannot replicate is "non-classical" (such as negative regions of Wigner distributions). Thus their term "classical" excludes by definition the classical EM models which account for the vacuum fluctuations (via the ZPF initial & boundary conditions).

How do you explain partial absorption in a homogeneous material ? Loss of a number of wavetrains, or diminishing the amplitude of each one, keeping the total number equal ?

What is this, a battle of visual mnemonic devices? Check the papers by Jaynes (esp. #71 and #74, the QED section) about the need for quantization of the EM field. With Jaynes' and Barut's results included, you need to go beyond the first order of radiative corrections to even reach the area where they haven't carried out the computations and where the differences may appear (they've both passed away, unfortunately). Quantum Optics, or the statistical physics of bulk material properties, is far away from the fundamental (as opposed to computationally practical) necessity to quantize the EM field. Quantum Optics doesn't have a decisive test on this question (other than the tests of weak non-classicality, or some new design of the Bell test). The potentially distinguishing effects are a couple of QED perturbative orders below the phenomena of Quantum Optics or the bulk material properties.

What you're describing above is a battle of visual mnemonic devices. For someone doing day-to-day work in QO it is perfectly fine to visualize photons as marbles somehow floating inside the amplitudes or collapsing out of amplitudes, if that helps them think about it or remember the equations better. That has nothing to do with any absolute empirical necessity for a marble-like photon (of which there is none; not even in the QED formalism is it point-like, although the jargon is particle-like).

As a computational recipe for routine problems the standard scheme is far ahead of any existent non-linear models for the QED. As Jaynes put it (in #71):

Today, Quantum Mechanics (QM) and Quantum Electrodynamics (QED) have great
pragmatic success - small wonder, since they were created, like epicycles, by
empirical trial-and-error guided by just that requirement. For example, when
we advanced from the hydrogen atom to the helium atom, no theoretical
principle told us whether we should represent the two electrons by two wave
functions in ordinary 3-d space, or one wave function in a 6-d configuration
space; only trial-and-error showed which choice leads to the right answers.

Then to account for the effects now called `electron spin', no theoretical
principle told Goudsmit and Uhlenbeck how this should be incorporated into the
mathematics. The expedient that finally gave the right answers depended on
Pauli's knowing about the two-valued representations of the rotation group,
discovered by Cartan in 1913.

In advancing to QED, no theoretical principle told Dirac that electromagnetic
field modes should be quantized like material harmonic oscillators; and for
reasons to be explained here by Asim Barut, we think it still an open question
whether the right choice was made. It leads to many right answers but also to
some horrendously wrong ones that theorists simply ignore; but it is
now known that virtually all the right answers could have been found without,
while some of the wrong ones were caused by field quantization
.

Because of their empirical origins, QM and QED are not physical theories at
all
. In contrast, Newtonian celestial mechanics, Relativity, and Mendelian
genetics are physical theories, because their mathematics was developed by
reasoning out the consequences of clearly stated physical
principles which constrained the possibilities. To this day we have no
constraining principle from which one can deduce the mathematics of QM and
QED; in every new situation we must appeal once again to empirical evidence to
tell us how we must choose our mathematics in order to get the right answers
.

In other words, the mathematical system of present quantum theory is, like
that of epicycles, unconstrained by any physical principles
. Those
who have not perceived this have pointed to its empirical success to justify
a claim that all phenomena must be described in terms of Hilbert spaces,
energy levels, etc. This claim (and the gratuitous addition that it must be
interpreted physically in a particular manner) have captured the minds of
physicists for over sixty years. And for those same sixty years, all efforts
to get at the nonlinear `chromosomes and DNA' underlying that linear
mathematics have been deprecated and opposed by those practical men who,
being concerned only with phenomenology, find in the present formalism all
they need.


But is not this system of mathematics also flexible enough to accommodate any
phenomenology, whatever it might be?
Others have raised this question
seriously in connection with the BCS theory of superconductivity. We have all
been taught that it is a marvelous success of quantum theory, accounting for
persistent currents, Meissner effect, isotope effect, Josephson effect, etc.
Yet on examination one realizes that the model Hamiltonian is
phenomenological, chosen not from first principles but by trial-and-error
so as to agree with just those experiments.

Then in what sense can one claim that the BCS theory gives a physical
explanation
of superconductivity? Surely, if the Meissner effect did not
exist, a different phenomenological model would have been invented, that does
not predict it; one could have claimed just as great a success for quantum
theory whatever the phenomenology to be explained.

This situation is not limited to superconductivity; in magnetic resonance,
whatever the observed spectrum, one has always been able to invent a
phenomenological spin-Hamiltonian that ``accounts'' for it. In high-energy
physics one observes a few facts and considers it a big advance - and great
new triumph for quantum theory -- when it is always found possible to invent a
model conforming to QM, that ``accounts'' for them. The `technology' of QM,
like that of epicycles, has run far ahead of real understanding.

This is the grounds for our suggestion (Jaynes, 1989) that present QM is only
an empty mathematical shell
in which a future physical theory may, perhaps, be
built. But however that may be, the point we want to stress is that the
success - however great - of an empirically developed set of rules gives
us no reason to believe in any particular physical interpretation of them.
No physical principles went into them.


Contrast this with the logical status of a real physical theory; the success
of Newtonian celestial mechanics does give us a valid reason for believing in
the restricting inverse-square law, from which it was deduced; the success
of relativity theory gives us an excellent reason for believing in the
principle of relativity, from which it was deduced.
 
  • #82
nightlight said:
The Quantum Optics amplitude does exactly the same split, since it follows the same equations for the orbital propagation. For any thermal, laser or PDC source you cannot set up any configuration of linear optical elements and square law detectors that will predict any difference in coincidence counts.

How about the following setup:
take a thermal source of light, with low intensity. After collimation into a narrow beam, send it onto a beam splitter (50-50%). Look at both split beams with a PM. In the "marble photon" picture, we have a Poisson stream of marbles which, at the beam splitter, go left or right, and then have a probability of being seen by the PM (which is called its quantum efficiency). This means that only 3 cases can occur: a hit "left", a hit "right", or no hit. The only possibility of a "hit left and a hit right" is a spurious coincidence in the Poisson stream, and lowering the intensity (so that the dead time is tiny compared to the average interval between photons) lowers this coincidence rate. So in the limit of low intensities (as defined above), the coincidence rate of hits can be made as low as desired.
However, in the wavetrain picture, this is not the case. The wavetrain splits equally into two "half wavetrains", one going left and one going right. There they each have half the probability of being detected. This leads to a coincidence rate which, if I'm not mistaken, is e/2, where e is the quantum efficiency of the PM. Indeed, exactly at the same moment (or within a small lapse of time) when the left half wavetrain arrives at PM1, the right half wavetrain arrives at PM2. This cannot be lowered by lowering the intensity.
Now it is elementary to set up this experiment and show that we find anticoincidence, so how is this explained away in the classical picture ?


EDIT:
I add this because I already see a simple objection: you could say: hey, these are continuous beams, not little wavetrains, and it is both PMs that randomly (Poisson-like) select clicks as a function of intensity. But then the opposite bites you: how do you explain that, in double cascades, or PDC, or whatever, we DO get clicks which are more correlated than a Poisson coincidence can account for ?
Now if you then object that "normal" sources have continuous wave outputs, but PDC or other "double photon emitters" do make little wavetrains, then do my proposed experiment again with such a source (but using only one of the two wavetrains coming out of it, which is not difficult given that they are usually emitted in opposite directions, and that we single out one narrow beam).
What I want to say is that any classical mechanism that explains away NON-coincidence bites you when you need coincidence, and vice versa, while the "marble photon" model naturally gives you the right result each time.
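A toy Monte Carlo of the two pictures contrasted in this post (my own encoding of them, with made-up numbers; accidental overlaps of separate wavepackets are not modelled) makes the contrast explicit:

```python
import numpy as np

rng = np.random.default_rng(1)

e = 0.2                    # PM quantum efficiency for a full-amplitude wavepacket
n_packets = 1_000_000      # single wavepackets sent onto a 50/50 beam splitter

# Picture 1: "marble photon" -- each photon goes left OR right, then is detected with prob. e.
goes_left = rng.random(n_packets) < 0.5
left_1  = goes_left  & (rng.random(n_packets) < e)
right_1 = ~goes_left & (rng.random(n_packets) < e)

# Picture 2: naive split wavetrain -- each packet splits, each half detected with prob. e/2.
left_2  = rng.random(n_packets) < e / 2
right_2 = rng.random(n_packets) < e / 2

for name, L, R in (("marble picture         ", left_1, right_1),
                   ("split-wavetrain picture", left_2, right_2)):
    coinc = (L & R).sum()
    print(f"{name}: PM1 clicks {L.sum():6d}, coincidences {coinc:6d}, "
          f"coincidences per PM1 click {coinc / L.sum():.4f}")

# The naive split picture gives about e/2 coincidences per PM1 click regardless of intensity,
# while the marble picture gives none here (only accidental overlaps could produce them).
```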


cheers,
Patrick.
 
  • #83
nightlight said:
You can change the PM's bias voltage and thus shift along its sensitivity curve (a sigmoid-type function). That simply changes your dark current rate/vacuum fluctuation noise. The more sensitive you make it (to shrink the detection loophole), the larger the dark current, and thus the larger the explicit background subtraction.

Have a look at:
http://www.hpk.co.jp/eng/products/ETD/pdf/PMT_construction.pdf
p 12, figure 28. You see a clear spectral (pulse height spectrum) separation between the dark current pulses, which have a rather exponential behaviour, and the bulk of "non-dark single photon" pulses, and although a small tradeoff can be made, it should be obvious that a lower cutoff in the amplitude spectrum clearly cuts away most of the noise while not cutting away many of the "non-dark" pulses. The very fact that these histograms have completely different signatures should indicate that the origins are quite different, no ?

cheers,
Patrick.

EDIT: but you might be interested also in other technologies, such as:

http://www.hpk.co.jp/eng/products/ETD/pdf/H8236-07_TPMO1011E03.pdf

where quantum efficiencies of ~40% are reached, in better noise conditions (look at the sharpness of the peak for single photon events !)
 
  • #84
vanesch How about the following setup:
take a thermal source of light, with low intensity. After collimation into a narrow beam, send it onto a beam splitter (50-50%). Look at both split beams with a PM. In the "marble photon" picture, we have a Poisson stream of marbles which, at the beam splitter, go left or right, and then have a probability of being seen by the PM (which is called its quantum efficiency). This means that only 3 cases can occur: a hit "left", a hit "right", or no hit. The only possibility of a "hit left and a hit right" is a spurious coincidence in the Poisson stream, and lowering the intensity (so that the dead time is tiny compared to the average interval between photons) lowers this coincidence rate. So in the limit of low intensities (as defined above), the coincidence rate of hits can be made as low as desired.
However, in the wavetrain picture, this is not the case. The wavetrain splits equally into two "half wavetrains", one going left and one going right. There they each have half the probability of being detected. This leads to a coincidence rate which, if I'm not mistaken, is e/2, where e is the quantum efficiency of the PM. Indeed, exactly at the same moment (or within a small lapse of time) when the left half wavetrain arrives at PM1, the right half wavetrain arrives at PM2. This cannot be lowered by lowering the intensity.
Now it is elementary to set up this experiment and show that we find anticoincidence, so how is this explained away in the classical picture ?


The semiclassical and the QED theory of square law detectors both predict a Poisson distribution of counts P(k,n), where n is the average of k (the average combines the efficiency and the incident intensity effects).

That means that the classical case will have (A,B) triggers of type: (0,0), (1,0), (0,1), (2,0), (1,1), (0,2),... (with the Poisson distribution applied independently to each detector). Thus, provided the Poissonian of your incident marbles has this same average n, you will have indistinguishable correlations for any combination of events, since your marbles will be splitting into exactly the same combinations at the same rates.

If you wish to divide the QM cases into "spurious" and "non-spurious" and start discarding events (instead of just using the incident Poissonian and computing the probabilities), and based on that predict some other kind of correlation, note that the same "refinement" of the Poissonian via a spurious/non-spurious division can be included in any other model. Of course, any such data tweaking takes both models away from the Poissonian prediction (and the actual coincidence data).

It seems you're again confusing the average ensemble property (such as the efficiency e) with the individual events for the classical case, i.e., assuming that the classical model implies a prediction of exactly 1/2 photocurrent in each try. That's not the case. The QED and the semiclassical theory of detectors yield exactly the same Poisson distribution of trigger events. The proper QED effects (which distinguish it empirically from the semiclassical models with ZPF) start at higher QED perturbative orders than those affecting the photo-ionization in detector modelling.
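For what it's worth, here is a quick numerical check of that last point for Poissonian incident statistics (toy numbers of my own choosing): thinning a Poisson stream of "marbles" at a 50/50 splitter yields two independent Poisson streams, so the marble picture and two independent square-law detectors with the matching mean give the same singles and coincidence statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, e, n_windows = 0.3, 0.2, 2_000_000     # mean marbles per window, QE, number of windows

# Marble picture: n ~ Poisson(mu) marbles per window; each marble independently ends up
# as a left click (prob 0.5*e), a right click (prob 0.5*e) or undetected (prob 1-e).
n = rng.poisson(mu, n_windows)
p = 0.5 * e
left = rng.binomial(n, p)
right = rng.binomial(n - left, p / (1.0 - p))        # conditional split of the remaining marbles

# Classical/continuous picture: two independent Poissonian detectors with the same mean.
left_c = rng.poisson(mu * p, n_windows)
right_c = rng.poisson(mu * p, n_windows)

for name, L, R in (("marble picture     ", left, right),
                   ("independent Poisson", left_c, right_c)):
    print(f"{name}: P(left click) {np.mean(L > 0):.5f}, P(right click) {np.mean(R > 0):.5f}, "
          f"P(coincidence) {np.mean((L > 0) & (R > 0)):.6f}")

# Both rows agree within statistical noise (Poisson thinning), so nothing in these
# coincidence rates by itself distinguishes the two pictures for such sources.
```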
 
  • #85
nightlight said:
The semiclassical and the QED theory of square law detectors both predict a Poisson distribution of counts P(k,n), where n is the average of k (the average combines the efficiency and the incident intensity effects).

I was talking about cases where we have on average a hit every 200 seconds, while the hit itself takes 50 ns (so low intensity), so this means that (in the QM picture) double events are such a rarity that we can exclude them. Now you seem to say that - as I expected in my EDIT - you would now consider these beams as CONTINUOUS and not as little wavetrains ("photons") ; indeed, then you have independent Poisson streams on both sides.

It seems you're again confusing the average ensemble property (such as the efficiency e) with the individual events for the classical case, i.e., assuming that the classical model implies a prediction of exactly 1/2 photocurrent in each try. That's not the case. The QED and the semiclassical theory of detectors yield exactly the same Poisson distribution of trigger events. The proper QED effects (which distinguish it empirically from the semiclassical models with ZPF) start at higher QED perturbative orders than those affecting the photo-ionization in detector modelling.

Indeed, I expected this remark (see my EDIT). However, I repeat my inverse difficulty then:
If light beams of low intensity are to be considered as "continuous" (so no little wave trains which are more or less synchronized in time), then how do you explain ANY coincidence which surpasses independent Poisson hits, such as are clearly observed in PDC coincidences ?? Given the low intensity, the probability of coincidence based on individual Poisson streams is essentially negligible, so it is extremely improbable that both detectors would trigger simultaneously, no ? So how do you explain ANY form of simultaneity of detection ?

cheers,
Patrick.
 
  • #86
vanesch Have a look at:
http://www.hpk.co.jp/eng/products/ETD/pdf/PMT_construction.pdf
p 12, figure 28. You see a clear spectral (pulse height spectrum) separation between the dark current pulses, which have a rather exponential behaviour, and the bulk of "non-dark single photon" pulses, and although a small tradeoff can be made, it should be obvious that a lower cutoff in the amplitude spectrum clearly cuts away most of the noise while not cutting away many of the "non-dark" pulses. The very fact that these histograms have completely different signatures should indicate that the origins are quite different, no ?


You can get low noise for low QE. The PM tubes will have a max QE of about 40%, which is not adequate for any non-classicality test. Also, note that marketing brochures sent to engineers are not quite the same quality of information source as scientific reports. This kind of engineering literature also uses its own jargon and conventions and needs a grain of salt to interpret.

To see the problems better, take a look at the ultra-high efficiency detector (in a scientific report) with 85% QE, which with some engineering refinements might go to 90-95% QE. To reduce the noise, it is cooled down to 6 K (not your average commercial unit). You might think that this solves the Bell test problem, since exceeding the ~83% efficiency threshold ought to remove the need for the "fair sampling" assumption.

Now, reading on, it says the optimum QE is obtained for signal rates of 20,000 photons/sec. And what is the dark rate? You look around, and way back you find it quoted as also 20,000 events/sec. The diagram of QE vs bias voltage shows an increase in QE with voltage, so it seems one could just increase the bias. Well, it doesn't work, since this also increases the dark rate much faster, and at a bias of 7.4 V it achieves the 85% QE and the 20,000 cps dark rate for the flux of 20,000 photons/sec. Beyond 7.4 V the detector breaks down.

So why not increase the incident flux? Well, because the detector's dead time then rises, decreasing the efficiency. The best QE they could get was after tuning the flux to 20,000. Also, decreasing the temperature does lower the noise, but requires a higher voltage to get the same QE, so that doesn't help them.


Fig. 5 in the paper then combines all the effects relating QE to dark rate, and you see that as the QE rises to 85% the dark rate grows exponentially to 20,000 (the QE shows the same dependence as for the voltage increase). As they put it:

But if we plot the quantum efficiency as a function of dark counts, as is done in Figure 5, the data for different temperatures all lie along the same curve. This suggests that the quantum efficiency and dark counts both depend on a single parameter, the electric field intensity in the gain region. The temperature and bias voltage dependence of this parameter result in the behavior shown in Figure 4. From Figure 5 we see that the maximum quantum efficiency of 85% is achieved at a dark count rate of roughly 20,000.

So, with this kind of dark rate, this top-of-the-line detector with maxed-out QE is basically useless for Bell tests. Anything it measures will be perfectly well within the semiclassical (with ZPE) models. No matter how you tweak it, you can't get rid of the stubborn side effects of the equivalent of 1/2 photon per mode of vacuum fluctuations (since they are predicted by the theory). And that 1/2 photon equivalent noise is exactly what makes the semiclassical model with ZPF indistinguishable at the Quantum Optics level (at higher orders of perturbative QED, the semiclassical models which don't include self-interaction break down; Barut's self-field ED, which models self-interaction, still matches QED to all orders for which it was computed).
 
Last edited by a moderator:
  • #87
vanesch said:
Now you seem to say that - as I expected in my EDIT - you would now consider these beams as CONTINUOUS and not as little wavetrains ("photons") ; indeed, then you have independent Poisson streams on both sides.

I should maybe add that you don't have that much liberty in the detector performance if you consider wavetrains of a duration of the order of a few ns, with NOTHING happening most of the time. If you need, on average, a certain efficiency, and you assume that a detector can only decide to click during those small time windows when a wavetrain arrives, then with split wavetrains you HAVE to have a certain coincidence rate, because the detection events in the two branches are assumed independent, unless the wavetrain carries a "flag" saying whether it should click or not (and if you assume the existence of such a flag, then just do away with the wavetrains carrying a "don't detect me" flag, and call those with the "detect me" flag photons :-).
So your only way out is to assume that the low level light is a continuous beam, right ? No wavetrains, and then nothing.

And with such a model, you can NEVER generate non-Poisson like coincidences, as far as I can see.

cheers,
Patrick.
 
  • #88
nightlight said:
And that 1/2 photon equivalent noise is exactly what makes the semiclassical model with ZPF indistingushable at the Quantum Optics level

Is this then also true for 1MeV gamma rays ?

cheers,
Patrick.
 
  • #89
vanesch I was talking about cases where we have on average a hit every 200 seconds, while the hit itself takes 50 ns (so low intensity), so this means that (in the QM picture) double events are such a rarity that we can exclude them.

It doesn't matter what the Poisson average rate is. The semiclassical photodetector model predicts the trigger on average once per 200 s. There is no difference there.

However, I repeat my inverse difficulty then:
If light beams of low intensity are to be considered as "continuous" (so no little wave trains which are more or less synchronized in time), then how do you explain ANY coincidence which surpasses independent Poisson hits, such as are clearly observed in PDC coincidences??


The previous example of a thermal source was super-Poissonian (or at best Poissonian for a laser beam).

On the other hand, you're correct that a semiclassical theory which does not model vacuum fluctuations cannot predict the sub-Poissonian correlations of PDC or other similar sub-Poissonian experiments.

But, as explained at length earlier, if the classical theory uses the ZPF (boundary & initial conditions) and the correlation functions are computed so as to subtract the ZPF (to match what is done in Glauber's prescription for Quantum Optics and what the detectors do via background subtractions and tuning for null triggers when there is no signal), then it produces the same coincidence counts. The sub-Poissonian trait of the semiclassical model is the result of the contribution of sub-ZPF superpositions (the "dark wave"). See the earlier message and the references there.

Given the low intensity, the probability of coincidence based on individual Poisson streams is essentially negligible, so it is extremely improbable that both detectors would trigger simultaneously, no? So how do you explain ANY form of simultaneity of detection?

There is nothing to explain about vague statements such as "extremely improbable". Show me an expression that demonstrates the difference. Note that two independent Poissonians in the classical model yield a coincidence probability quadratic in p. Now, if your QM marble count is Poissonian with any average, small or large (it only has to match the singles rate of the classical model), it also yields a probability quadratic in p for the two marbles arriving at the splitter.

What exactly is the difference for the Poissonian source if you assume the QM photon-number Poissonian has the same average singles rate as the classical model (same intensity calibration)? There is none. The complete classicality of a Poissonian source for any number of detectors, polarizers and splitters (any linear elements) is old hat (1963, the Glauber-Sudarshan classical equivalence for coherent states).

NOTE: since you insist on a Poissonian classical distribution, I assume here that your marble distribution is also Poissonian (otherwise why would you mention it). As for the sub-Poissonian case, classical models with ZPF can reproduce that too (using the same kind of subtraction of vacuum-fluctuation effects as QO), as explained at the top.
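A minimal numerical sketch of the quadratic-in-p point (toy numbers of my own, not from any experiment): two independent detection streams with the same singles probability per time bin give a coincidence probability that scales as the square of that probability, however you label the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent detection streams with the same singles probability p per
# short time bin: the coincidence probability is quadratic in p, with no
# excess correlation.
p = 0.01                     # singles probability per bin (arbitrary, small)
n_bins = 10_000_000

a = rng.random(n_bins) < p   # detector A clicks
b = rng.random(n_bins) < p   # detector B clicks

coinc = np.mean(a & b)
print(f"measured coincidence prob: {coinc:.2e}")
print(f"p^2 prediction           : {p**2:.2e}")
```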
 
Last edited:
  • #90
vanesch Is this then also true for 1MeV gamma rays ?

You still have vacuum fluctuations. But gamma-ray photons stand in a much better relation to any frequency cutoff of the fluctuations than optical photons do, so they indeed have much better signal to noise at the detectors. This same energy advantage, however, turns into a disadvantage for non-classicality tests, since the polarization coupling to the atomic lattice is proportionately smaller; thus regular polarizers (or half-silvered mirrors with preserved coherent splitting) won't work for gamma rays.

The oldest EPR tests were in fact done with gamma rays, using Compton scattering to analyse the polarization (as the beam splitting), which is much less accurate than regular optical polarizers. The net effect was a much lower overall efficiency than for optical photons. That's why no one uses gamma or X-ray photons for Bell tests.
 
Last edited:
  • #91
nightlight said:
vanesch Is this then also true for 1MeV gamma rays ?

You still have vacuum fluctuations. But gamma-ray photons stand in a much better relation to any frequency cutoff of the fluctuations than optical photons do, so they indeed have much better signal to noise at the detectors. This same energy advantage, however, turns into a disadvantage for non-classicality tests, since the polarization coupling to the atomic lattice is proportionately smaller; thus regular polarizers (or half-silvered mirrors with preserved coherent splitting) won't work for gamma rays.

I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory will do). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering whether this doesn't create a problem for the explanations used to explain away photons in the visible range.

cheers,
Patrick.
 
  • #92
nightlight said:
But, as explained at length earlier, if the classical theory uses the ZPF (boundary & initial conditions) and the correlation functions are computed so as to subtract the ZPF (to match what is done in Glauber's prescription for Quantum Optics and what the detectors do via background subtractions and tuning for null triggers when there is no signal), then it produces the same coincidence counts. The sub-Poissonian trait of the semiclassical model is the result of the contribution of sub-ZPF superpositions (the "dark wave"). See the earlier message and the references there.

Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.


cheers,
Patrick.
 
  • #93
vanesch Eh, this sounds bizarre. I know you talked about it but it sounds so strange that I didn't look at it. I'll try to find your references to it.

The term "dark wave" is just picturesque depiction (a visual aid), a la Dirac hole, except this would be like a half-photon equivalent hole in the ZPF. That of course does get averaged over the entire ZPF distribution and without further adjustments of data nothing non-classical happens.

But, if you model a setup calibrated to "ignore" vacuum fluctuations (via a combination of background subtractions and detector sensitivity adjustments), the way Quantum Optics setups are (since Glauber's correlation functions subtract the vacuum contributions as well) -- then the subtraction of the average ZPF contributions makes the sub-ZPF contribution appear as having negative probability (which mirrors the negative regions of the Wigner distribution) and able to replicate the sub-Poissonian effects in the normalized counts. The raw counts have no sub-Poissonian or negative-probability traits, just as they don't in Quantum Optics. Only the adjusted counts (after background subtractions and/or extrapolation of the losses via the "fair sampling" assumption) have such traits.

See those references for PDC classicality via Stochastic electrodynamics I gave earlier.
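To make just the arithmetic of that subtraction concrete (this is only a toy of the counting and normalization step, not the stochastic-electrodynamics calculation in those references): start from two independent streams, which sit exactly on the classical boundary, and subtract the accidental estimate obtained from the singles rates. The normalized value drops from about 1 to about 0, which on the adjusted counts reads as strongly sub-Poissonian even though the raw counts are perfectly classical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the effect of background/accidental subtraction on a
# normalized coincidence statistic.  NOT the ZPF/SED model itself, just the
# arithmetic of the subtraction step.
p1 = p2 = 0.01               # singles probability per time bin
n_bins = 5_000_000

a = rng.random(n_bins) < p1
b = rng.random(n_bins) < p2

s1, s2 = a.sum(), b.sum()
c_raw = (a & b).sum()                          # raw coincidences
c_acc = s1 * s2 / n_bins                       # accidental estimate from singles

g2_raw = c_raw * n_bins / (s1 * s2)            # ~1 for independent streams
g2_sub = (c_raw - c_acc) * n_bins / (s1 * s2)  # ~0 after subtraction

print(f"g2 raw       : {g2_raw:.3f}")   # hovers around 1 (classical limit)
print(f"g2 subtracted: {g2_sub:.3f}")   # hovers around 0, looks 'sub-Poissonian'
```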
 
Last edited:
  • #94
I think the two of you should not argue about the possibility of a detector triggering on a single photon, as it is impossible (in classical or quantum physics).
Experiments with single photons are only simplifications of what really occurs (e.g. the problem of the double-slit experiment, cavities, etc.). We just roughly say: I have an electromagnetic energy of hbar·ω.

Remember that a “pure photon” has a single energy and thus requires an infinite measurement time: reducing this measurement time (the interaction is changed) modifies the field.

It is like trying to build a super filter that would extract a pure sine wave from an electrical signal or a sound wave.
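That filter analogy can be made quantitative with a quick FFT sketch (toy numbers of my own choosing): a sine observed for a finite time T shows up with a spectral width of order 1/T, so the "super filter" for a single frequency would need an infinite observation time.

```python
import numpy as np

# A sine wave observed only for a finite time T has a spectral width of
# order 1/T.  Rectangular window, zero-padded FFT to resolve the line shape.
f0 = 100.0                          # signal frequency (Hz), arbitrary
fs = 10_000.0                       # sampling rate (Hz)
for T in (0.1, 1.0, 10.0):          # observation times (s)
    t = np.arange(0, T, 1 / fs)
    x = np.sin(2 * np.pi * f0 * t)
    n_pad = 8 * len(x)              # zero-padding
    spec = np.abs(np.fft.rfft(x, n=n_pad))
    freqs = np.fft.rfftfreq(n_pad, 1 / fs)
    in_lobe = freqs[spec >= spec.max() / 2]   # main lobe above half maximum
    width = in_lobe[-1] - in_lobe[0]          # full width at half maximum
    print(f"T = {T:5.1f} s  ->  linewidth ~ {width:.3f} Hz  (1/T = {1/T:.3f} Hz)")
```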

Also don't forget that the quantization of the electromagnetic field gives the well-known energy eigenvalues (a photon number) only in the case of a free field (no sources), where we have the free electromagnetic Hamiltonian (Hem).
Once there is an interaction, the eigenvalues of the electromagnetic field change and are given by Hem+Hint (i.e. you modify the photon-number (energy) basis, which means you change the number of photons).

Seratend.
 
  • #95
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory will do). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering whether this doesn't create a problem for the explanations used to explain away photons in the visible range.

We were talking about a coherent beam splitter, which splits the QM amplitude (or classical wave) into two equal coherent sub-packets. If you try the same with gamma rays, you have a different type of split (non-equal amplitudes), i.e. you get merely a classical mix (a multidimensional rho instead of a one-dimensional Psi, in Hilbert-space terms) of the left-right sub-packets. This gives it a particle-like appearance, i.e. in each try it goes one way or the other but not both ways. But that type of scattering is perfectly within semiclassical EM modelling.

Imagine a toy classical setup with a mirror containing large holes in the coating (covering half the area), with a "thin" light beam scanned quickly over it in a random fashion. Now a pair of detectors behind it will show perfect anti-correlation in their triggers. There is nothing contrary to classical EM about this, even though it appears as if a particle went through on each trigger and, on average, half the light went each way. The difference is that you can't make the two halves interfere any more, since they are not coherent. If you can detect which way "it" went, you lose coherence and you won't have interference effects.
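A sketch of that toy setup in a few lines (placing one detector in each output path is my reading of the description): on each trial the narrow beam lands either on a hole or on the coating, so exactly one of the two detectors fires, giving perfect anti-correlation with purely classical light.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy version of the "mirror with large holes": a narrow classical beam is
# scanned to a random position each trial; half the area is hole (transmit),
# half is coating (reflect).  One detector sits in each output path.
n_trials = 100_000
hits_hole = rng.random(n_trials) < 0.5      # True -> beam goes through a hole

transmit_clicks = hits_hole
reflect_clicks = ~hits_hole

coincidences = np.sum(transmit_clicks & reflect_clicks)          # always 0
print(f"transmitted fraction: {transmit_clicks.mean():.3f}")     # ~0.5
print(f"reflected fraction  : {reflect_clicks.mean():.3f}")      # ~0.5
print(f"coincidences        : {coincidences}")   # 0 -> perfect anti-correlation
```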

Some early (1970s) "anti-correlation" experimental claims with optical photons made this kind of false leap. Namely, they used randomly or circularly polarized light, used a polarizer to split it in a 50:50 ratio, and then claimed perfect anti-correlation. That's the same kind of trivially classical effect as the mirror with large holes. They can't make the two beams interfere, though (and they didn't try that, of course).
 
Last edited:
  • #96
nightlight said:
vanesch I was not talking about EPR kinds of experiments, but trying to refute the idea that you do not need photons (that classical wave theory will do). I don't really see what's so different between gamma rays and optical photons: either both exist, or both are describable by the same semiclassical theory. And with gamma rays there isn't this problem with quantum efficiency and dark currents, so I was wondering whether this doesn't create a problem for the explanations used to explain away photons in the visible range.

We were talking about a coherent beam splitter, which splits the QM amplitude (or classical wave) into two equal coherent sub-packets.

No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely the "dark current noise", then why is it the limiting factor in visible-light detectors but doesn't occur in gamma-ray detectors, which are close to 100% efficient with negligible dark currents (take an MWPC, for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue any more at 2 MeV.

cheers,
Patrick.
 
  • #97
vanesch said:
No, we were simply talking about photon detection and background. If the zero-point energy, or whatever it is, provokes what is usually considered a technical difficulty, namely the "dark current noise", then why is it the limiting factor in visible-light detectors but doesn't occur in gamma-ray detectors, which are close to 100% efficient with negligible dark currents (take an MWPC, for instance)? I don't see why something that is "a fundamental problem" at, say, 2 eV is suddenly no issue any more at 2 MeV.

cheers,
Patrick.

I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

You are certainly justified in your puzzlement. Dark current, at least the kind we detect in photon detectors, has NOTHING to do with the "zero-point" field. I deal with dark current all the time in accelerators. It is the result of field emission, and over the range below 10 MV/m it is easily described by the Fowler-Nordheim theory of field emission. I do know that NO photodetector works with that kind of gradient, or anywhere even close.
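For reference, the steepness being pointed to can be read off the elementary Fowler-Nordheim expression (the commonly quoted approximate constants, ignoring field-enhancement and correction factors; the 4.5 eV work function and the sample local-field values below are my own choices for illustration):

```python
from math import exp

# Elementary Fowler-Nordheim current density (approximate textbook constants):
#   J = 1.54e-6 * E^2 / phi * exp(-6.83e9 * phi^1.5 / E)   [A/m^2],
# with the local surface field E in V/m and the work function phi in eV.
def fn_current_density(E, phi=4.5):        # phi = 4.5 eV assumed (metal-like)
    return 1.54e-6 * E**2 / phi * exp(-6.83e9 * phi**1.5 / E)

for E in (1e9, 2e9, 5e9, 1e10):            # sample local surface fields, V/m
    print(f"E = {E:.0e} V/m  ->  J ~ {fn_current_density(E):.2e} A/m^2")
# The exponential spans many orders of magnitude over one decade of field,
# and is utterly negligible at the far lower fields inside a photodetector,
# which is the point being made above.
```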

The most sensitive gamma-ray detector is the Gammasphere, now firmly anchored here at Argonne (http://www.anl.gov/Media_Center/News/2004/040928gammasphere.html ). In its operating mode when cooled to LHe, it is essentially 100% efficient in detecting gamma photons.

Zz.
 
Last edited by a moderator:
  • #98
ZapperZ said:
I didn't want to jump into this and spoil the "fun", but I can't resist myself. :)

No, please jump in! I try to keep an open mind in this discussion, and I have to say that nightlight isn't the usual crackpot one encounters in this kind of discussion and is very well informed. I only slightly regret the tone of certain statements, like what we are doing to those poor students and so on, but I've seen worse. What is potentially interesting in this discussion is how far one can push semiclassical explanations of optical phenomena. For instance, I remember having read that what is usually quoted as a "typical quantum phenomenon", namely the photoelectric effect, needs quantization of the solid state, but not of the EM field, which can still be considered classical. Also, if it is true that semiclassical models can correctly predict higher-order radiative corrections in QED, I have to say I'm impressed. Tree diagrams, however, are not so impressive.
I can very well accept that EPR like optical experiments do not close all loopholes and it can be fun to see how people still find reasonably looking semiclassical theories that explain the results.
However, I'm having most of my difficulties in this discussion with two things. The first is that photon detectors seem to be very flexible devices which acquire exactly those properties needed in each case to save the semiclassical explanation; while I can accept each refutation in each individual case, I'm trying to find a contradiction between how the detector behaves in one case and how it behaves in another.
The second difficulty is that I'm not aware of most of the literature that is referred to, and I can only spend a certain amount of time on it. Also, I'm not very familiar (even if I've heard of the concepts) with these Wigner functions and so on. So any help from anybody here is welcome. Up to now I've enjoyed the cat-and-mouse game :smile: :smile:

cheers,
Patrick.
 
  • #99
Maybe the principal problem is to know whether the total spin-0 EPR state is possible in a classical picture.
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Seratend
 
  • #100
seratend said:
As I understand it, nightlight needs to refute the physical existence of a "pure EPR state" in order to get "a classical theory" that describes the EPR experiment.

Yes, that is exactly what he does, and it took me some time to realize this at the beginning of the thread, because he claimed to accept all of QM except for the projection postulate. But if you browse back, we afterwards agreed that he rejects the existence of a product Hilbert space, and only considers classical matter fields (a kind of Dirac equation) coupled with the classical EM field (Maxwell), such that the probability current of charge from the Dirac field is the source of the EM field. We didn't delve into the issue of the particle-like nature of matter. Apparently, one should also add some noise terms (zero-point field or whatever) to the EM field in such a way that they correspond to the QED half-photon contribution in each mode. He claims (well, and most authors he cites too) that the clickediclack of photons is just a property of "photon detectors", which react in such a way to continuous radiation... it is there that things seem to escape me.
However, the fun thing is that indeed one can find explanations of a lot of "quantum behaviour" this way. The thing that bothers me is the detectors.

cheers,
Patrick.
 