What Confusion Surrounds Young's Experiment on Wave-Particle Duality?

Cruithne
I was recently handed a link by a friend after a discussion about wave-particle duality that seems to me to be incorrect in certain areas. This is the link: http://www.jracademy.com/~jtucek/science/exp.html

I'll have a quick go at dissecting parts of the page.

If light were just a particle, and you were able to send just one photon through (Fig 1.3), then there would be no pattern on the screen, just a single point of light. However, it has been found that even if just one photon is sent through, it creates the same interference pattern, although dimmer

Well, this seems basically true; however, I understood that given a large enough sample of single photons the interference pattern would be indistinguishable from that produced by a continuous light source. It would be dimmer with a smaller sample of photons. However, the next statement seems to contradict this...

If the light is measured, or observed, in between the screen and the second barrier, no interference pattern is formed. Instead, there is the most intense light in between the two slits, which gets dimmer as it progresses away

This seems to go against every other example of Young's experiment I've come across. What the author seems to be describing is the result of closing one of the slits. With both slits open an interference pattern will eventually appear over time...
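
For reference, the textbook far-field formulas make the contrast concrete: with both slits open the intensity is a cos^2 fringe pattern sitting under a single-slit diffraction envelope, while closing one slit leaves only the smooth envelope with no dark fringes. A minimal sketch in Python (the wavelength, slit width and slit separation are illustrative assumptions, not values from the linked page):

```python
import numpy as np

# Assumed, illustrative geometry: He-Ne wavelength, 40 um slits, 250 um apart.
wavelength = 632.8e-9
slit_width = 40e-6
slit_sep = 250e-6

def intensity(theta, both_slits=True):
    """Far-field intensity (normalised to 1 at the centre) at viewing angle theta."""
    beta = np.pi * slit_width * np.sin(theta) / wavelength
    envelope = np.sinc(beta / np.pi) ** 2            # single-slit diffraction envelope
    if not both_slits:
        return envelope                              # one slit closed: smooth, no fringes
    alpha = np.pi * slit_sep * np.sin(theta) / wavelength
    return envelope * np.cos(alpha) ** 2             # interference fringes under the envelope

theta_dark = np.arcsin(wavelength / (2 * slit_sep))  # angle of the first interference minimum
for label, theta in [("centre", 0.0), ("first dark fringe", theta_dark)]:
    print(f"{label:17s} two slits: {intensity(theta):.3f}   one slit: {intensity(theta, both_slits=False):.3f}")
```

With both slits open the intensity drops to zero at the dark-fringe angle; with one slit closed it stays near the envelope value there, which is the one-slit behaviour described above.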

This phenomenon is one of the basic principles of quantum physics, the Heisenberg Uncertainty Principle

I'm not sure why!

If light is not being observed, it acts as a wave, but if it is being observed, it has to behave itself and act like particles

This just strikes me as wrong. The essential mystery of Young's experiment is that even when treating light as individual particles (photons), the light still produces behaviour that would imply it is acting as if it were a wave. This statement also seems to suggest that the interference patterns produced were not the result of any observations :shy: And then there's the fact that observing light's behaviour in other circumstances shows it acting like a wave (e.g. diffraction and polarisation)

Could anyone clear up my confusion please?

(Hi btw :) I've been lurking for a while but hadn't signed up...)
 
The great mystery of QM: if you do not know which hole the particle went through, it went through both and interference is produced. That holds only if you cannot know which hole by any means, because if there is some kind of detector forcing the particle to go through only one hole, even when nobody watches the result of the detector, the interference is broken. I think you have understood everything; you are just confused by this mystery. We all are. When nobody touches a particle, it behaves like a wave. When you try to catch a particle, you indeed get a corpuscle.
 
Cruithne said:
I was recently handed a link by a friend after a discussion about wave-particle duality that seems to me to be incorrect in certain areas. This is the link: http://www.jracademy.com/~jtucek/science/exp.html

The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.
 
ZapperZ said:
The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.

Detail from that page:

To generate the coherent light needed for this experiment a He-Ne laser was used, with an average wavelength of 632.8nm.

The problem is that even the most ideal laser light cannot show any non-classical effect here. Namely, if you were to place one detector behind each slit, a trigger of detector A does not exclude a trigger of detector B, since the laser light has a Poisson distribution -- each detector triggers on its own, whether or not the other one triggered (no collapse occurs). There is an old theorem of Sudarshan on this very question (the result also appears in a more elaborate paper from that same year by Roy Glauber, which is a foundation of Quantum Optics):

E.C.G. Sudarshan "The equivalence of semiclassical and quantum mechanical descriptions of statistical light beams" Phys. Rev. Lett., Vol 10(7), pp. 277-279, 1963.

Roy Glauber "The Quantum Theory of Optical Coherence" Phys. Rev., Vol 130(6), pp. 2529-2539, 1963.

Sudarshan shows that correlations among the trigger counts of any number of detectors are perfectly consistent with the classical wave picture, i.e. you can think of a detector as simply thresholding the energy of the incoming wave-packet fragment (A or B) superposed with the local field fluctuations, and triggering (or not triggering) based on these purely local causes, regardless of what the other detector did.

Thus there is nothing in these experiments that would surprise a 19th century physicist (other than the technology itself). The students are usually confused by the loose claim that there is a "particle" which always goes one way or the other. If one thinks of two equal wave fragments and detector thresholding (after superposition with local field fluctuations), there is nothing mysterious in the experiment.
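
To make the thresholding picture above concrete, here is a toy Monte Carlo (the 50/50 split, the Gaussian local fluctuations and the threshold value are illustrative assumptions, not taken from the papers): each try sends one classical pulse of unit energy, split equally between the two paths; each detector adds its own local fluctuation and fires when a threshold is crossed. All four (A,B) trigger combinations occur and the joint rate factorises, i.e. no "collapse" of the B fragment when A fires.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tries = 200_000

fragment = 0.5                              # half of the unit pulse energy reaches each detector
noise_a = rng.normal(0.0, 0.3, n_tries)     # local field fluctuations at detector A
noise_b = rng.normal(0.0, 0.3, n_tries)     # independent local fluctuations at detector B
threshold = 0.8                             # assumed trigger threshold

trig_a = fragment + noise_a > threshold
trig_b = fragment + noise_b > threshold

# All four (A, B) outcomes occur, and P(A and B) equals P(A)*P(B).
for a in (0, 1):
    for b in (0, 1):
        print(f"(A={a}, B={b}): {np.mean((trig_a == a) & (trig_b == b)):.4f}")
print("P(A)*P(B) =", round(trig_a.mean() * trig_b.mean(), 4),
      " P(A and B) =", round(np.mean(trig_a & trig_b), 4))
```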

Even the much stricter non-classicality tests, such as the Bell inequality experiments, are still fully explicable with this kind of simple classical model (usually acknowledged via euphemisms: the "detection loophole" or "fair sampling loophole"). You can check the earlier thread here where I posted more details and references, along with the discussions.
 

I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves? What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The detection loophole is well-known. If we had no such loophole, the EPR-type experiments would be a done deal. It is why we have to deal with the statistics of large numbers of data points to know by how many standard deviations the results deviate from classical predictions. If we were not encumbered by such a loophole, we could in principle just do one measurement and be done with it.

Zz.
 
ZapperZ said:
I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves?

I am saying that nothing in the experiment shows that one has to imagine a particle, thus the common "paradoxical" description is misleading. All their frame-by-frame pictures show is precisely the kind of discretization a 19th century physicist would expect to see if a detector thresholds the energy of an incoming, perfectly classical wave packet superposed with the local field fluctuations.

It does not show (and can't show, since it isn't true) what is commonly claimed or hinted at in popular or "pedagogical" literature, which is that if you were to place two detectors, A and B, one behind each slit, a trigger of A would automatically exclude a trigger of B (which would be particle-like behavior, called "collapse" of the wave function or the projection postulate in QM "measurement theory"). You get all 4 combinations of triggers (0,0), (0,1), (1,0) and (1,1) of (A,B). That is the prediction of Quantum Optics (see the Sudarshan & Glauber papers) and also what the experiment shows. No wave collapse of the B-wave fragment occurs when detector A triggers. The data and the Quantum Optics prediction here are perfectly classical.

What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The double-slit experiment doesn't show anything particle-like (the apparent discreteness is an artifact of the detector's trigger-decision thresholding/discretization, which is the point of Sudarshan's theorem).

Of course, you can simulate any wave-field phenomenon as a macroscopic/collective effect of many particles at a finer scale. Similarly, you can simulate particle behaviors with microscopic wave fields in wave packets.

Whether the fundamental entities are particles or waves has nothing to do with the double-slit experiment claims (in "pedagogical" and popular literature) -- no dual nature is shown by the experiment. All that is shown is consistent with a discretized detection of a wave phenomenon.


The detection loophole is well-known. If we had no such loophole, the EPR-type experiments would be a done deal. It is why we have to deal with the statistics of large numbers of data points to know by how many standard deviations the results deviate from classical predictions. If we were not encumbered by such a loophole, we could in principle just do one measurement and be done with it.

That's an incorrect characterization. The standard deviations have no relation to the detection or "fair sampling" loopholes -- they could have a million times as many data points and a thousand times the standard-deviation "accuracy" without touching the main problem (that 90% of the data isn't measured and that they assume ad hoc certain properties of the missing data). Check the earlier thread where this was discussed in detail and with references.
 
nightlight said:
ZapperZ said:
I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves?

I am saying that nothing in the experiment shows that one has to imagine a particle, thus the common "paradoxical" description is misleading. All their frame-by-frame pictures show is precisely the kind of discretization a 19th century physicist would expect to see if a detector thresholds the energy of an incoming, perfectly classical wave packet superposed with the local field fluctuations.

I didn't realize that this thread was about answering the validity of the photon picture. I was responding to the confusion brought about by the original link in the first posting of this thread.

It does not show (and can't show, since it isn't true) what is commonly claimed or hinted at in popular or "pedagogical" literature, which is that if you were to place two detectors, A and B, one behind each slit, a trigger of A would automatically exclude a trigger of B (which would be particle-like behavior, called "collapse" of the wave function or the projection postulate in QM "measurement theory"). You get all 4 combinations of triggers (0,0), (0,1), (1,0) and (1,1) of (A,B). That is the prediction of Quantum Optics (see the Sudarshan & Glauber papers) and also what the experiment shows. No wave collapse of the B-wave fragment occurs when detector A triggers. The data and the Quantum Optics prediction here are perfectly classical.

What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The double-slit experiment doesn't show anything particle-like (the apparent discreteness is an artifact of the detector's trigger-decision thresholding/discretization, which is the point of Sudarshan's theorem).

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double-slit expt.? See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean it is correct. Again, my question was: is there ANY other experiment that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

Zz.
 
ZapperZ said:
I didn't realize that this thread was about answering the validity of the photon picture.

It got there after some discussion of the Bell inequality experiments.

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double-slit expt.?

I am saying that if you put separate detectors A and B at each slit, you will not obtain the usually claimed detection exclusivity that a single particle going through slit A or slit B would produce. The detector A trigger has no effect on the probability of a trigger of B. The usual claim is that when detector A triggers, the wave function in region B somehow collapses, making detector B silent for that try. That is not what happens. The triggers of B are statistically independent of the triggers of A on each "try" (e.g. if you open & close the light shutter quickly enough for each "try" so that on average a single event is detected per try).


See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

He is using standard scattering amplitudes, i.e. analyzing the behavior of an extended object, wave, which spans both slits. Keep also in mind that you can affect the interference picture in a predictable manner by placing various optical phase delay devices on each path. That implies that a full phenomenon does involve two physical wave fragments propagating via separate paths, interacting with other objects along the way.

If you had a single particle always going via a single path, it would be insensitive to the relative phase delays of the two paths. The usual Quantum Optics solution is that the source produces a Poisson distribution of photons, with an average of 1 photon per try, although in each try there could be zero, one, two, three, etc. photons. That kind of "particle" picture can account for these phase-delay phenomena on two paths, but that is also what makes it equivalent to the classical picture.
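
For reference, with the mean of 1 photon per try stated above, the standard Poisson weights are easy to tabulate: a sizable fraction of tries carries zero photons or more than one, so "exactly one particle per try" is never what such a source delivers (plain arithmetic, nothing beyond the mean-of-1 assumption):

```python
from math import exp, factorial

mean = 1.0                                    # average photons per try, as in the text
p = [exp(-mean) * mean**n / factorial(n) for n in range(5)]

for n, pn in enumerate(p):
    print(f"P(n={n}) = {pn:.3f}")             # 0.368, 0.368, 0.184, 0.061, 0.015
print(f"P(n>=2) = {1 - p[0] - p[1]:.3f}")     # about 26% of tries carry more than one photon
```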


So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean it is correct. Again, my question was: is there ANY other experiment that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

You don't have a picture of precisely 1 particle on each try producing the full set of double-slit phenomena (including replicating the interference effects of the separate phase delays on each path). You can have a picture of "particles" provided you also assume that the particle number is not controllable, and that it is uncontrollable to exactly such a degree that the detector triggers are precisely the same as if a simple wave had split into two equal parts, each of which triggers its own detector independently of the other.

Thus the particle model with an uncontrollable particle number is a more redundant explanation, since you need a separate rule or model to explain why the particle number is uncontrollable in exactly such a way as to mimic the wave behavior in the detector trigger statistics.


The double-slit experiment is a weak criterion (from the early days of QM) for deciding the question. Bell's experiment was supposed to provide a sharper criterion, but so far it hasn't supported the "collapse" (of the two-particle state).
 
nightlight said:
ZapperZ said:
I didn't realize that this thread was about answering the validity of the photon picture.

It got there after some discussion of the Bell inequality experiments.

Sorry? I could have sworn the original question was on the double-slit experiment, and that was what the webpage I replied with was also demonstrating. Where did the Bell inequality come in?

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double-slit expt.?

I am saying that if you put separate detectors A and B at each slit, you will not obtain the usually claimed detection exclusivity that a single particle going through slit A or slit B would produce. The detector A trigger has no effect on the probability of a trigger of B. The usual claim is that when detector A triggers, the wave function in region B somehow collapses, making detector B silent for that try. That is not what happens. The triggers of B are statistically independent of the triggers of A on each "try" (e.g. if you open & close the light shutter quickly enough for each "try" so that on average a single event is detected per try).


See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

He is using standard scattering amplitudes, i.e. analyzing the behavior of an extended object, wave, which spans both slits. Keep also in mind that you can affect the interference picture in a predictable manner by placing various optical phase delay devices on each path. That implies that a full phenomenon does involve two physical wave fragments propagating via separate paths, interacting with other objects along the way.

If you had a single particle always going via a single path, it would be insensitive to the relative phase delays of the two paths. The usual Quantum Optics solution is that the source produces a Poisson distribution of photons, with an average of 1 photon per try, although in each try there could be zero, one, two, three, etc. photons. That kind of "particle" picture can account for these phase-delay phenomena on two paths, but that is also what makes it equivalent to the classical picture.


So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean it is correct. Again, my question was: is there ANY other experiment that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

You don't have a picture of precisely 1 particle on each try producing the full set of double-slit phenomena (including replicating the interference effects of the separate phase delays on each path). You can have a picture of "particles" provided you also assume that the particle number is not controllable, and that it is uncontrollable to exactly such a degree that the detector triggers are precisely the same as if a simple wave had split into two equal parts, each of which triggers its own detector independently of the other.

Thus the particle model with an uncontrollable particle number is a more redundant explanation, since you need a separate rule or model to explain why the particle number is uncontrollable in exactly such a way as to mimic the wave behavior in the detector trigger statistics.


The double-slit experiment is a weak criterion (from the early days of QM) for deciding the question. Bell's experiment was supposed to provide a sharper criterion, but so far it hasn't supported the "collapse" (of the two-particle state).

You lost me in this one. What detectors? The interference phenomena as described with photons/electrons/neutrons/etc. are NOT about these "particles", but rather about the superposition of all the possible paths! It isn't the issue of one particle going through either slit; it's the issue of the possible paths interfering, creating the often misleading impression that a single particle is interfering with itself. A single-particle interference is NOT the same as a 2-particle interference.

Again, I have NO IDEA how this thread degenerated into a question of the validity of photons.

If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).

The abstract is in one of my postings in my Journals section. If you believe the analysis and conclusion are faulty, please send either a rebuttal or a follow-up paper to AJP. This isn't a PRL or Science or Nature, so it shouldn't be as difficult to get published there. THEN we'll talk.

Zz.
 
  • #10
Quick question about non-locality


Hello nightlight,

I have read several of your postings as suggested and think that you have thought the points through well. I also, however, like ZapperZ's attitude. To paraphrase him: "Well, what is the big point of whether or not classical and quantum mechanics show the same result in these particular experiments?"

For me the really interesting thing is the answer to the following: in your explanations and the works that you refer to, is there faster-than-light entanglement, or not?

I hope that you say "yes", because then you help me save the time of comparing sophisticated classical arguments (some of which I recognized myself) with standard quantum-mechanical arguments.

Roberth
 
  • #11
If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).


There is a semiclassical model (Stochastic Optics) of the PDC sources that J.J. Thorn had used (just as there are such models for regular laser and thermal sources, as has been known since the Sudarshan-Glauber results of 1963).

Therefore the detection statistics and correlations for any number of detectors (and any number of optical elements) for the field from such a source can always be replicated exactly by a semiclassical model. What euphemisms these latest folks have used for their particular form of ad-hockery for the missing data, to make what's left look absolutely non-classical, is of as much importance as trying to take apart the latest claimed perpetuum mobile device or random data compressor.

That whole "Quantum Mystery Cult" is a dead horse of no importance to anybody or anything outside that particular tiny mutual back-patting society. That parasitic branch of pseudo-physics has never produced anything but 70+ years of (very) loud fast-talking to bedazzle the young and ignorant.

Nothing, no technology, no phenomenon, no day-to-day physics, was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work. (See the thread mentioned earlier for the explanation of these statements.)
 
  • #12
ZapperZ said:
The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.

Cheers, that's a much clearer description of the experiment. I'll pass it on :smile:
 
  • #13
nightlight said:
If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).


There is a semiclassical model (Stochastic Optics) of the PDC sources that J.J. Thorn had used (just as there are such models for regular laser and thermal sources, as has been known since the Sudarshan-Glauber results of 1963).

Therefore the detection statistics and correlations for any number of detectors (and any number of optical elements) for the field from such a source can always be replicated exactly by a semiclassical model. What euphemisms these latest folks have used for their particular form of ad-hockery for the missing data, to make what's left look absolutely non-classical, is of as much importance as trying to take apart the latest claimed perpetuum mobile device or random data compressor.

That whole "Quantum Mystery Cult" is a dead horse of no importance to anybody or anything outside that particular tiny mutual back-patting society. That parasitic branch of pseudo-physics has never produced anything but 70+ years of (very) loud fast-talking to bedazzle the young and ignorant.

Nothing, no technology, no phenomenon, no day-to-day physics, was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work. (See the thread mentioned earlier for the explanation of these statements.)

I could say the same thing about the similar whining people always do about QM's photon picture, without realizing that if it were simply a cult not based on any form of validity, then it shouldn't WORK so well (e.g. refer to the band structure of the very same semiconductors that you are using in your electronics and see how those were verified via photoemission spectroscopy).

If you think you are correct, then put your money where your mouth is and try to have it published in a peer-reviewed journal. Until you are able to do that, all your whining is nothing more than bitterness without substance.

Cheers!

Zz.
 
  • #14
nightlight said:

Nothing, no technology, no phenomenon, no day-to-day physics, was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work.


I wonder what difference you see between Born's postulate and the collapse postulate? To me, they are one and the same thing: namely that, given a quantum state psi and a measurement of a physical quantity corresponding to a self-adjoint operator A, you get the probability |<a_i|psi>|^2 of ending up in the state corresponding to the eigenstate |a_i> of A, with value a_i for the physical quantity being measured. If you accept this, and you accept that measuring the same quantity twice in succession yields the second time the same result as the first time, this time with certainty, then where exactly is the difference between the Born rule (giving the probabilities) and the projection postulate?

cheers,
Patrick.
 
  • #15
vanesch said:
I wonder what difference you see between Born's postulate and the collapse postulate? To me, they are one and the same thing:

Unfortunately, the two are melded in the "pedagogical" expositions, so a student is left with the illusion that the projection postulate is an empirically essential element of the theory. The only working part of the so-called measurement theory is the operational Born rule (as a convenient practical shortcut, in the way Schroedinger originally understood his wave function), which merely specifies the probability of a detection event without imposing any new non-dynamical evolution (the collapse/projection) on the system state. The dynamical evolution of the state is never interrupted in some non-dynamical, mysterious way by such things as the human mind (as von Neumann, the originator of the QM Mystery cult, claimed) or in any other ad hoc, fuzzy way.

What happens to the system state after the apparatus has triggered a macroscopic detection event is purely a matter of the specific apparatus design and it is in principle deducible from the design, initial & boundary conditions and the dynamic equations. Since the dynamical equations are local (ignoring the superficial non-locality of the limited non-relativistic approximations for potentials, such as V(r)=q/r) all changes to the state are local and continuous.

There is no coherent way to integrate the non-dynamic collapse into the system dynamics. There is only lots of dancing and handwaving on the subject. When exactly do the dynamical equations get put on hold, how long are they held in suspension, and when do they resume activity? It doesn't matter, the teacher said. Well, something, somewhere has to know, since it would have to perform it.

How do you know that collapse occurs at all? Well, the teacher said, since we cannot attribute a definite value to the position (what exactly is the position? position of what? a spread-out field?) before the measurement and have the value (a value of what? the location of the detector aperture? the blackened photo-grain? the electrode?) after the measurement, the definite value must have been created in a collapse which occurred during the measurement. Why can't there be values before the measurement? Well, von Neumann proved that it can't be done while remaining consistent with all QM predictions. Ooops, sorry, that proof was shown invalid; it's the Kochen-Specker theorem which shows it can't be done (after Bohm produced the counter-example to von Neumann). Ooops again, as Bell has shown, that one had the same kind of problem as von Neumann's "proof"; it's really Bell's theorem which shows it can't be done. And what does Bell's theorem use to show that there is a QM prediction which violates Bell's inequality? The projection postulate, of course.

So, to show that we absolutely need the projection postulate, we use the projection postulate to deduce a QM prediction which violates Bell's inequality (and which no local hidden variable theory can violate). Isn't that a bit circular, a kind of cheating? That can't prove that we need the projection postulate.

Well, teacher said, this QM prediction was verified experimentally, too. It was? Well, yeah, it was verified, well, other than for some tiny loopholes. You mean the actual measured data hasn't violated Bell's inequality? It's that far off, with over 90% of the coincidence points missing and just hypothesized into the curve? All these decades, and still this big gap? Well, the gap just appears superficially large, a purely numerical artifact; its true essence is really small, though. It's just a matter of time till these minor technological glitches are ironed out.

Oh, that reminds me, Mr. Teacher, I think you will be interested in investing in this neat new device I happen to have in my backpack. It works great; it has 110% of output energy vs input energy. Yeah, it can do it, sure, here is the notebook which shows how. By the way, this particular prototype has a very minor, temporary manufacturing glitch which keeps it at 10% output vs input just at the moment. Don't worry about it, the next batch will work as predicted.
 
  • #16
I could say the same thing about the similar whining people always do about QM's photon picture, without realizing that if it were simply a cult not based on any form of validity, then it shouldn't WORK so well (e.g. refer to the band structure of the very same semiconductors that you are using in your electronics and see how those were verified via photoemission spectroscopy).

That's the point I was addressing -- you can take the projection/collapse postulate out of the theory and it makes no difference for anything that has any contact with empirical reality. The only item that would fall would be Bell's theorem (since it uses the projection postulate to produce the alleged QM "prediction" which violates Bell's inequality). Since no actual experimental data has ever violated the inequality, there is nothing empirical that must be explained.

Since Bell's theorem on the impossibility of LHV is the only remaining rationale for the projection postulate (after von Neumann's & Kochen-Specker's HV "impossibility theorems" were found empirically irrelevant), and since its proof uses the projection postulate itself in an essential way, the two are a closed circle with no connection or usefulness to anything but each other.


If you think you are correct, then put your money where your mouth is and try to have it published in a peer-reviewed journal. Until you are able to do that, all your whining is nothing more than bitterness without substance.

And who do you imagine might be a referee in this field who decides whether the paper gets published or not? The tenured professors and highly reputable physicists who founded entire branches of research (e.g. Trevor Marshall, Emilio Santos, Asim Barut, E.T. Jaynes,...), with hundreds of papers previously published in reputable journals, could not get past the QM cult zealots to publish a paper which directly and unambiguously challenges the QM Mystery religion (Marshall calls them the "priesthood"). The best they would get is a highly watered down version with the key points edited or dulled out, and any back-and-forth arguments spanning several papers cut off, with the last word always going to the opponents.

Being irrelevant and useless, this parasitic branch will eventually die off of its own accord. After all, how many times can one dupe the money man with the magical quantum computer tales before he gets it and requests that they either show it working or go find another sucker.
 
  • #17
nightlight said:
And what does Bell's theorem use to show that there is a QM prediction which violates Bell's inequality? The projection postulate, of course.

Not to my understanding. He only needs the Born rule, no? Bell's inequalities are just expressions of probabilities, which aren't satisfied by some of the probabilities predicted by QM. If you accept the Hilbert space description and the Born rule to deduce probabilities, that's all there is to it.

Let's go through a specific example, as given in Modern Quantum Mechanics, by Sakurai, paragraph 3.9. But I will adapt it so that you explicitly don't need any projection.

The initial state is |psi> = 1/sqrt(2) [ |z+>|z-> - |z->|z+> ] (1)
which is a spin singlet state.

(I take it that you accept that).

The probability to have an |a+>|b+> state is simply given (Born rule) by:

P(a,b) =|( <a+|<b+| ) |psi> |^2 = 1/2 | <a+|z+><b+|z-> - <a+|z-><b+|z+> |^2

Let us assume that a and b are in the xz plane, and a and b denote the angle with the z-axis.
In that case, <u+|z+> = cos (u/2) and <u+|z-> = - sin (u/2)

So P(a,b) = 1/2 | - cos(a/2)sin(b/2) + sin(a/2)cos(b/2) |^2

or P(a,b) = 1/2 { sin( (a-b)/2 ) }^2 (2)

So the probability to measure particle 1 in the spin-up state along a and particle 2 in the spin-up state along b is given by P(a,b) as given in (2) and we deduced this simply using the Born rule.

Now, one of Bell's inequalities for probabilities, if we have local variables determining P(a,b), is given by:

P(a,b) is smaller than or equal to P(a,c) + P(c,b).

Fill in the formula (2), and we should have:

sin^2((a-b)/2) <= sin^2((a-c)/2) + sin^2((c-b)/2)

Now, take a = 0 degrees, b = 90 degrees, c = 45 degrees,

sin^2(45) <= sin^2(22.5) + sin^2(22.5) ?

0.5 <= 0.292893... ? No -- the inequality is violated.

See, I didn't need any projection as such...
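
A minimal numerical restatement of the check above, using only the Born-rule probability P(a,b) = 1/2 sin^2((a-b)/2) from (2); the last line drops the common factor 1/2 to reproduce the numbers quoted just above:

```python
from math import sin, radians

def P(a_deg, b_deg):
    """Singlet joint probability P(a,b) = 1/2 sin^2((a-b)/2), angles in degrees."""
    return 0.5 * sin(radians(a_deg - b_deg) / 2) ** 2

a, b, c = 0.0, 90.0, 45.0
lhs, rhs = P(a, b), P(a, c) + P(c, b)
print(f"P(a,b) = {lhs:.4f}   P(a,c) + P(c,b) = {rhs:.4f}")   # 0.2500 vs 0.1464
print("P(a,b) <= P(a,c) + P(c,b) ?", lhs <= rhs)             # False: the inequality is violated
print(f"{2 * lhs:.1f} <= {2 * rhs:.6f} ?")                   # 0.5 <= 0.292893 ?
```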

cheers,
Patrick.
 
  • #18
vanesch said:
Not to my understanding.

The state of subsystem B, which is (in the usual pedagogical description) a mixed state 1/2 |+><+| + 1/2 |-><-|, becomes a pure state |+> for the sub-ensemble of B for which we get the (-1) result on A. This type of composite-system measurement treatment and the sub-system state reduction are consequences of the projection postulate -- the reasoning is an exact replica of von Neumann's original description of the measured system and the apparatus, where he introduced the projection postulate along with the speculation that it was the observer's mind which created the collapse. Without the collapse in this model the entangled state remains entangled, since the unitary evolution cannot, in this scheme of measurement, pick out of the superposition the specific outcome or a pure resulting sub-ensemble.

Of course, there is a grain of truth in the projection. There is a correlation of cos^2(theta) buried in the coincidence counts, as Glauber's Quantum Optics multi-point correlation functions or the actual Quantum Optics experiments show. In the terminology of photons, the problem is when one takes the particle picture literally and claims there is exactly one such "particle" (a member of a correlated pair) and that we're measuring properties of that one particle. In fact, the photon number isn't a conserved quantity in QED.

The fully detailed QED treatment of the actual Bell inequality experiments, which takes into account the detection process and the photon-number uncertainty, would presumably, at least in principle, reproduce the correct correlations observed, including the actual registered coincidence counts, which don't violate Bell's inequality. The full events for the two detectors of subsystem B include: 1) no trigger on + or -, 2) (+) only trigger, 3) (-) only trigger, 4) (+) and (-) trigger. The pair coincidence counts then consist of all 16 combinations of possible outcomes.

The "pedagogical" scheme (and its von Neumann template) insists that only (2) and (3) are the "legitimate" single particle results and only their 4 combinations, out of 16 that actually occur, the "legitimate" pair events (I guess, since only these fall within its simple-minded approach), while labeling euphemistically (1) and (4), and the 12 remaining pair combinations, which are outside of the scheme, as artifacts of the technological "non-ideality" to be fixed by the future technological progress. The skeptics are saying that it is the "pedagogical" scheme (the von Neumann's collapse postulate with its offshoots, the measurement theory and Bell's QM "prediction" based on it) itself that is "non-ideal" since it doesn't correspond to anything that actually exists, and it is the eyesore which needs fixing.
 
  • #19
nightlight said:
The "pedagogical" scheme (and its von Neumann template) insists that only (2) and (3) are the "legitimate" results (I guess, since only these two fall within its simple-minded approach), while labeling euphemistically (1) and (4), which are outside of the scheme, as artifacts of the technological "non-ideality" to be fixed by the future technological progress.

I don't understand what you are trying to point out. Do you accept, or not, that superpositions of the type |psi> = 1/sqrt(2) (|+>|-> - |->|+>) can occur in nature, where the first and the second kets refer to systems that can be separated by a certain distance ?

If you don't, you cannot say that you accept QM and its dynamics and Born's rule, no ? If you do, I do not need anything else to show that Bell's inequality is violated, and especially, I do not need the projection postulate.
In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with. No projection involved.

I know that with visible photon detection, there are some issues with quantum efficiency. But hey, the scheme is more general, and you can take other particles if you want to. Your claim that quantum mechanics, with the Born rule, but without the projection postulate, does not violate Bell's inequalities is not correct, as I demonstrated in my previous message.

cheers,
patrick.
 
  • #20
vanesch said:
I don't understand what you are trying to point out. Do you accept, or not, that superpositions of the type |psi> = 1/sqrt(2) (|+>|-> - |->|+>) can occur in nature, where the first and the second kets refer to systems that can be separated by a certain distance?

I am saying that such |psi> for the entangled photons is a schematized back-of-the-envelope sketch, adequate for a heuristic toy model and not a valid model for any physical system. Its "predictions" don't match (not even closely) any actually measured data. To make it "match" the data, over 90% of the "missing" coincidence data points have to be hand-put into the "matching curve" under the "fair sampling" and other speculative conjectures (see the earlier discussion here with details and references on this point).

If you don't, you cannot say that you accept QM and its dynamics and Born's rule, no ? If you do, I do not need anything else to show that Bell's inequality is violated, and especially, I do not need the projection postulate.

You've got the Born rule conceptually melded with the whole measurement theory which came later. The original rule (which Born introduced in a footnote to a paper on scattering) I am talking about is meant in the sense Schroedinger used to interpret his wave function: it is an operational shortcut, not a fundamental axiom of the theory. There is no non-dynamical change of state or fundamental probabilistic axiom in this interpretation -- the Psi evolves by the dynamical equations at all times. All its changes (including any localization and focusing effects) are due to the interaction with the apparatus. There are no fundamental probabilities or suspension and resumption of the dynamical evolution.

The underlying theoretical foundation that Schroedinger assumed is the interpretation of |Psi(x)|^2 as a charge/matter density, or in the case of photons as the field energy density. The probability of detection is the result of the specific dynamics between the apparatus and the matter field (the same way one might obtain probabilities in classical field measurements). You can check the numerous papers and preprints of Asim Barut and his disciples, which show how this original Schroedinger view can be consistently carried out for atomic systems, including the correct predictions of the QED radiative corrections (their self-field electrodynamics, which was a refinement of the earlier "neoclassical electrodynamics" of E.T. Jaynes).


In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with.

And if that simple model of |psi> corresponds to anything real at all. I am saying it doesn't; it is a simple-minded toy model for an imaginary experiment. You would need to do the full QED treatment to make any prediction that could match the actual coincidence results (which don't even remotely violate Bell's inequality).

No projection involved.

Of course it does have projection (collapse). You've just got so used to the usual pedagogical omissions and shortcuts that you can't notice it any more. You simply need to include the apparatus in the dynamics and evolve the composite state to see that no (+-) or (-+) result occurs under the unitary evolution of the composite system until something collapses the superposition of the composite system (this is von Neumann's measurement scheme, which is the basis of the QM measurement theory). That is the state collapse that makes (+) or (-) definite on A and which induces the sub-ensemble state |-> or |+> on B, which Bell's theorem uses in an essential way to assert that there is a QM "prediction" which violates his inequality.

The full system dynamics (of A, B plus the two polarizers and the 4 detectors) cannot produce, via unitary evolution of the full composite system, a pure state with a definite polarization of A and B, such as |DetA+>|A+>|DetB->|B->. It can produce only a superposition of such states. That's why von Neumann had to postulate the extra-dynamical collapse -- the unitary dynamics by itself cannot produce such a transition within his/QM measurement theory.

Without this extra-dynamical global collapse, you only have A, B, the two polarizers and the four detectors evolving the superposition via purely local field interactions, incapable even in principle of yielding any prediction that excludes LHV (since the unknown local fields are LHVs themselves). It is precisely this conjectured global, extra-dynamical overall state collapse to a definite result which produces the apparent non-locality (no-LHV) prediction. Without it, there is no such prediction.

I know that with visible photon detection, there are some issues with quantum efficiency.

This sounds nice and soft, like the Microsoft marketing describing the latest "issues" with IE (the most recent in never ending stream of the major security flaws).

Plainly speaking, the QM "prediction" of Bell's theorem which violates his inequality doesn't actually happen in real data. No coincidence counts ever violated the inequality.

Your claim that quantum mechanics, with the Born rule, but without the projection postulate, does not violate Bell's inequalities is not correct, as I demonstrated in my previous message.

You seem unaware of how the projection postulate fits into the QM measurement theory, or maybe you don't realize that Bell's QM prediction is deduced using the QM measurement theory. All you have "demonstrated" so far is that you can superficially replay the back-of-the-envelope pedagogical cliches.
 
  • #21
nightlight said:
The underlying theoretical foundation that Schroedinger assumed is the interpretation of the |Psi(x)|^2 as a charge/matter density, or in the case of photons as the field energy density.

Ok, so you do not accept superpositions of states in a configuration space that describes more than one particle; so you introduce a superselection rule here. BTW, this superselection rule IS assumed to be there for charged particles, but not for spins or neutral particles.

In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with.

And if that simple model of |psi> corresponds to anything real at all. I am saying it doesn't; it is a simple-minded toy model for an imaginary experiment. You would need to do the full QED treatment to make any prediction that could match the actual coincidence results (which don't even remotely violate Bell's inequality).

Well, that's a bit easy as a way out: "you guys are only doing toy QM. I'm doing 'complicated' QM and there, a certain theorem holds. But I can't prove it, it is just too complicated."

No projection involved.

Of course it does have projection (collapse). You've just got so used to the usual pedagogical omissions and shortcuts that you can't notice it any more.

Well, tell me where I use a collapse. I just use the Born rule, which states that the probability of an event is the absolute value squared of the inner product of the corresponding eigenstate and the state of the system. But I think I know what you are having a problem with. It is not the projection or collapse, it is the superposition in a tensor product of Hilbert spaces.



You simply need to include the apparatus in the dynamics and evolve the composite state to see that no (+-) or (-+) result occurs under the unitary evolution of the composite system until something collapses the superposition of the composite system (this is von Neumann's measurement scheme, which is the basis of the QM measurement theory). That is the state collapse that makes (+) or (-) definite on A and which induces the sub-ensemble state |-> or |+> on B, which Bell's theorem uses in an essential way to assert that there is a QM "prediction" which violates his inequality.

But that's not true! Look at my "toy" calculation. I do not need to "collapse" anything; I consider the global apparatus "particle 1 is measured along spin axis a and particle 2 is measured along spin axis b". This corresponds to an eigenstate <a|<b|. I just calculate the inner product and that's it.

That's why von Neumann had to postulate the extra-dynamical collapse -- the unitary dynamics by itself cannot produce such a transition within his/QM measurement theory.

Yes, I know, that's exactly the content of the relative-state interpretation.
But it doesn't change anything about the predicted correlations. As I repeat: you do not have a problem with collapse, but with superposition. And that's an essential ingredient in QM.

cheers,
Patrick.
 
  • #22
Well, that's a bit easy as a way out: "you guys are only doing toy QM. I'm doing 'complicated' QM and there, a certain theorem holds. But I can't prove it, it is just too complicated."

You're using a 2(x)2 D Hilbert space. That is a toy model, considering that the system space has about infinitely many times more dimensions, even before accounting for the quantum field aspect, which is ignored altogether (e.g. the indefiniteness of the photon number would give the application of Bell's argument here a problem). The ignored spatial factors are essential in the experiment. The spatial behavior is handwaved into the reasoning in an "idealized" (fictitious) form which goes contrary to the plain experimental facts (the "missing" 90 percent of coincidences) or any adequate theory of the setup ("adequate" in the sense of predicting these facts quantitatively).

A non-toy physical model ought to describe quantitatively what is being measured, not just what might be measured if the world were doing what you imagine it ought to be doing if it were "ideal".

Well, tell me where I use a collapse. I just use the Born rule, which states that the probability of an event is the absolute value squared of the inner product of the corresponding eigenstate and the state of the system.

Then you either don't know that you're using the QM measurement theory for the composite system here (implicit in the formulas for the joint event probabilities) or you don't know how any definite result occurs in the QM measurement theory (via the extra-dynamical collapse).

Just step back and think for a second here to see the blatantly self-contradictory nature of your claim (that you're not assuming any non-dynamical state evolution/collapse). Let's say you're correct: you only assume dynamical evolution of the full system state according to QM (or QED, ultimately), and you never claimed, or believed, that the system state stops following the dynamical evolution.

Now, look what happens if you account for the spatial degrees of freedom and follow (at least in principle) the multiparticle Schroedinger-Dirac equations for the full system (the lasers, PDC crystals or atoms for the cascade, A, B, polarizers, detectors, computers counting the data). Taken all together, you have a gigantic set of coupled partial differential equations that fully describe what is going on once you give the initial and boundary conditions (we can assume boundary conditions 0, i.e. any subsystems which might interact are already included in the "full" system in the equations). We also cannot use the approximate non-relativistic instantaneous potentials/interactions, since that would be putting the superficial non-locality into the otherwise local dynamics by hand. This excludes the non-relativistic, explicitly non-local point-particle models with instantaneous potentials (which are only an approximation to QED interactions), such as Bohm's formalism (which is equivalent to the non-relativistic QM formalism).

The fields described by these equations evolve purely locally (they're 2nd-order or some such PDEs, linear or non-linear). Therefore these coupled fields following the local PDEs are a local "hidden" variable model all by themselves, if all that is happening is accounted for by these equations.

Yet you claim that this very same dynamical evolution, without ever having to be suspended and amended by an extra-dynamical deus ex machina, yields a result which prohibits any local hidden variables from accounting for the outcome of that very evolution.

A purely dynamical evolution via a set of local PDEs cannot yield a result which shows that purely local PDEs cannot describe the system. That is the basic self-contradiction of your claim "I am using no collapse."

You seem unaware of the reasoning behind the probabilistic rules of the QM measurement theory you're applying, which bridges the gap between the purely local evolution and the "no local hidden variables" prediction -- a suspension of the dynamics has to occur to make such a prediction possible, since the dynamics by itself is a local "hidden" variable theory. And that is the problem of the conventional QM measurement theory.

My starting point, several messages back, is that you can drop this suspension of dynamics (which has to be there to yield the alleged system evolution which contradicts the local hidden variables), and nothing will be affected in the application of quantum theory to anything.

Nothing depends on it but Bell's theorem, which would have to be declared void (i.e. no such QM prediction exists if you drop the non-dynamical state collapse). Recall also that the only reason the non-dynamical collapse was introduced into quantum theory was to cope with the earlier, faulty proofs of the impossibility of any hidden variables (von Neumann's, Kochen-Specker's). The only remaining rationale is Bell's theorem, which seemingly prohibits any LHVs; thus one needs some way to make the system produce measured values, since the classical view (of pre-existent values) could not work.

Thus the two, the non-dynamical evolution (collapse) and Bell's QM prediction, form a circular system -- Bell's prediction requires the non-dynamical evolution for its deduction (via the QM measurement theory), while the only reason we need any collapse at all is the alleged impossibility of pre-existent values for the variables/observables, a claimed impossibility that hinges solely on Bell's proof itself. Nothing else in physics requires either. The two are a useless parasitic attachment serving nothing but to support each other.
 
Last edited:
  • #23
nightlight said:
You're using a 2(x)2 D Hilbert space. That is a toy model, considering that the actual system space has vastly more dimensions, even before accounting for the quantum field aspect, which is ignored altogether (e.g. the indefiniteness of the photon number would create a problem for applying Bell's argument here). The ignored spatial factors are essential in the experiment.

Well, you should know that in all of physics, making a model which retains only the essential ingredients is a fundamental part of the job, and I claim that the essential part here is the 2x2 D Hilbert space of spins. Of course, a common error in physics is that an essential component is overlooked, and that seems to be what you are claiming. But then it is up to you to show me what that component is. You now seem to imply that I should also carry along the spatial parts of the wave function for both particles. OK, I can do that, but you can already guess that this will factor out if I take a specific model, such as a propagating Gaussian bump for each one. But then you will claim that I don't take into account the specific interaction in the photocathode of that particular photomultiplier. I could introduce a simple model for it, but you won't accept that. So you are endlessly complicating the issue, so that in the end nothing can be said and all claims remain open. You are of course free to do so, but no advance is made anywhere. Witches exist. So now it is up to you to do a precise calculation with a model you accept as satisfactory (it will always be a model, and you'll never describe all aspects completely) and show me what you get that is different from what the simple 2x2 D model gives you, after introducing finite efficiencies for both detectors.
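For reference, here is a minimal sketch (my own illustration, not Patrick's actual calculation) of what the 2x2 "toy" model gives when the Born rule is applied to the singlet state, with a hypothetical symmetric detector efficiency eta bolted on in the crudest possible way:

```python
# Minimal sketch of the 2x2 "toy" calculation: singlet state, Born rule for
# the correlations, CHSH combination, plus a crude symmetric detector
# efficiency eta (an assumed, illustrative parameter).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def spin(theta):
    """Spin component along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Correlation <sigma(a) x sigma(b)> in the singlet state (Born rule)."""
    return singlet @ np.kron(spin(a), spin(b)) @ singlet

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print("ideal CHSH value:", S)          # about -2.83, i.e. |S| = 2*sqrt(2)

# With independent detection efficiency eta on each side and undetected
# events simply dropped, the coincidence correlations are unchanged in this
# simple picture, but only a fraction eta**2 of the pairs enters the data.
eta = 0.3                               # arbitrary illustrative value
print("fraction of pairs detected:", eta**2)
```

The point at issue in the thread is precisely whether that last comment (dropping undetected pairs without changing the correlations) is an innocent step or a substantive assumption.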

You are making the accusation that I do not know what I'm using from QM measurement theory, but it was YOU who claimed that you accepted the Born rule (stating that the probability for an event to occur is given by |<a|psi>|^2). That is what I used in my model.

cheers,
Patrick.
 
  • #24
nightlight said:
The fields described by these equations evolve purely locally (they're 2nd order or similar PDEs, linear or non-linear). Therefore these coupled fields, following the local PDEs, are a local "hidden" variable model all by themselves, if all that is happening is accounted for by these equations.

As a specific remark, the Schroedinger equation is not a local PDE. Well, it is local in configuration space, but it is not local in real space, because the wave function is a function over configuration space. So nothing stops you from coupling remote coordinates.

cheers,
Patrick.
 
  • #25
vanesch As a specific remark, the Schroedinger equation is not a local PDE. Well, it is local in configuration space, but it is not local in real space, because the wave function is a function over configuration space. So nothing stops you from coupling remote coordinates.

The PDEs are non-local only if you use the non-local (non-relativistic) approximations for the interaction potentials, such as the non-relativistic Coulomb potential approximation. If you allow these potentials in, you don't need QM or Bell's theorem to show that no local hidden variable theory could reproduce the effects of such non-local potentials: as soon as you move particle A, its Coulomb potential at the position of a far-away particle B changes and affects B. It's surprising to see such a non-sequitur even brought up after I had already excluded it upfront, explicitly and with emphasis.

The fully relativistic equations do not couple field values at space-like separations. They are strictly local PDEs.
 
  • #26
nightlight said:
The fully relativistic equations do not couple field values at space-like separations. They are strictly local PDEs.

Oh, but I agree with you that there is nothing spacelike going on. I had a few other posts here concerning that issue. Nevertheless, the correlations given by superpositions of systems which are separated in space are real, if you believe in quantum superposition, and the "toy" models you attack do capture the essential facts. You WILL see correlations in the statistics, and no, the fact that the RAW DATA do not come out that way is not something deep, but just due to finite efficiencies. A very simple model of the detector efficiencies gives a correct description of the data taken.
My point (which I think is not very far from yours) is that one should only consider the measurement complete when the data are brought together (and hence are not separated spacelike) by one observer. Until that point, I do consider that the data of the other "observation" is in superposition.
But that is just interpretational. There's nothing very deep here. And no, I don't think you have to delve into QED (which is really opening a Pandora's box!) for such a simple system just because people use photons; you could think just as well of electrons or whatever other combination. Only, the experiments are difficult to perform.

cheers,
Patrick.
 
  • #27
vanesch Well, you should know that in all of physics, making a model which only retains the essential ingredients is a fundamental part of it, and I claim that the essential part is the 2x2 D Hilbert space of spins. ... But then it is up to you to show me what that is.

The essential point of the Bell inequalities is that the counts which violate the inequality in the QM "prediction" must include nearly all pairs (equivalent to requiring around 82% setup efficiency). If one allows a lower setup efficiency, the counts will not violate the inequality. Thus the spatial aspects and an adequate model of detection are both essential for evaluating the maximum setup efficiency. If you cannot predict from QM (or quantum optics) a setup efficiency sufficient to violate the inequalities, you don't have a QM prediction but an idea, a heuristic sketch of a prediction. And that is the phase this alleged QM "prediction" never outgrew. It is a back-of-the-envelope hint of a prediction.
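As a side note on where a figure of roughly 82% can come from, here is a small sketch (my own, under the usual textbook assumptions: maximally entangled polarization pairs, symmetric independent detection efficiency eta, zero background) that inserts the ideal coincidence probabilities into the Clauser-Horne expression and finds where it first becomes positive:

```python
# Sketch of the ~82% threshold: plug the ideal polarization-correlation
# probabilities into the Clauser-Horne combination with symmetric detection
# efficiency eta, and find where it first turns positive (i.e. where a
# violation becomes possible at all).  Maximally entangled pairs, independent
# detectors, no background; eta is just a model parameter here.
import numpy as np

def CH(eta, a=0.0, ap=np.pi / 4, b=np.pi / 8, bp=3 * np.pi / 8):
    p12 = lambda x, y: eta**2 * 0.5 * np.cos(x - y) ** 2   # coincidence prob.
    p1 = p2 = eta * 0.5                                     # singles prob.
    return p12(a, b) - p12(a, bp) + p12(ap, b) + p12(ap, bp) - p1 - p2

etas = np.linspace(0.5, 1.0, 50001)
threshold = etas[np.argmax([CH(e) > 0 for e in etas])]
print("CH expression first positive at eta ~", round(threshold, 4))   # ~0.8284
print("analytic value 2/(1 + sqrt(2))      =", 2 / (1 + np.sqrt(2)))  # ~0.8284
```

Below that efficiency the same quantum probabilities, with undetected pairs simply absent, satisfy the inequality, which is the point being made about the missing coincidences.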

You now seem to imply that I should also carry with me the spatial parts of the wave function for both particles. OK, I can do that, but you can already guess that this will factor out if I take a specific model, such as a propagating gaussian bump for each one.

All I am saying is: go predict something that you could actually measure. Predict the 90% of missing coincidences, and then change the model of the setup -- the apertures, detector electrode materials, lenses, source intensities or wavelengths, ... whatever you need -- and show how these changes can be made so that the setup efficiency falls within the critical range needed to violate the inequalities. No "fair sampling" or other such para-physical, speculative ad-hockery allowed. Just the basic equations and the verifiable properties of the materials used.


So now it is up to you to do a precise calculation with the model you can accept as satisfying (it will always be a model, and you'll never describe completely all aspects) and show me what you get, which is different from what the simple 2x2D model gives you, after introducing finite efficiencies for both detectors.

All I am saying is that there is no QM prediction which violates Bell's inequalities and which doesn't rely on a non-dynamical, non-local, and empirically unverified and unverifiable state evolution (collapse). The non-locality is thus put in by hand upfront, in the premise of the "prediction".

Secondly, after over three decades of trying, there is so far no experimental data which violates the Bell inequalities either. There are massive extrapolations of the obtained data which put in by hand the 90% of missing coincidences (handwaved in with wishful, ad hoc, unverifiable conjectures such as "fair sampling"), and, to everyone's big surprise, the massively extrapolated data violates Bell's inequality.

So there is neither prediction nor data which violates the inequality. Luckily nothing else in the practical applications of physics needs that violation or the non-dynamical collapse (used to deduce the "prediction"), for anything. (Otherwise it would have been put on hold until the violation is achieved.)

You are making the accusation that I do not know what I'm using in QM measurement theory, but it was YOU who claimed that you accepted the Born rule (stating that the probability for an event to occur is given by |<a|psi>|^2). It is what I used, in my model.

I explained in what sense I accepted the "Born rule" --- the same sense that Schroedinger assumed in his interpretation of the wave function --- as a practical, although limited, operational shortcut (the same way classical physics used probabilities, e.g. in scattering theory), not as a fundamental postulate of the theory, and certainly not as the all-powerful deus ex machina of the later QM measurement theory (which is what Bell used), capable of suspending the dynamical evolution of the system state and then resuming the dynamics after the "result" is produced.

That is an absurdity that will be ridiculed by future generations, the way we ridicule the turtles-all-the-way-down model of the Earth. Why turtles? Why a non-dynamical collapse? What was wrong with those people? What were they thinking?
 
  • #28
Hi nightlight,

I'm new to this forum so I'm not really clear on what etiquette is followed here. I just wanted to know what background you have in Physics.

Cheers,

Kane
 
  • #29
nightlight said:
The essential point of the Bell inequalities is that the counts which violate the inequality in the QM "prediction" must include nearly all pairs (equivalent to requiring around 82% setup efficiency).

Yes, but why are you insisting on the experimental parameters such as efficiencies ? Do you think that they are fundamental ? That would have interesting consequences. If you claim that ALL experiments should be such that the raw data satisfy Bell's inequalities, that gives upper bounds to detection efficiencies of a lot of instruments, as a fundamental statement. Don't you find that a very strong statement ? Even though for photocathodes in the visible light range with today's technologies, that boundary is not reached ? You seem to claim that it is a fundamental limit.

But what is wrong with the procedure of taking the "toy" predictions for the ideal experiment, applying efficiency factors (which can very easily be established experimentally as well), and comparing that with the data ? After all, experiments don't serve to measure things but to falsify theories.

cheers,
Patrick.
 
  • #30
Oh, but I agree with you that there is nothing spacelike going on.

The QM measurement theory has a spacelike collapse of the state. As explained earlier, it absolutely requires a suspension of the dynamical evolution to perform its magical collapse that yields a "result", then somehow lets go, and the dynamics resumes. That kind of hocus-pocus I don't buy.

Nevertheless, the correlations given by superpositions of systems which are separated in space are real, if you believe in quantum superposition, and the "toy" models you attack do capture the essential facts. You WILL see correlations in the statistics,

You can't make a prediction of the sharp kind of correlations needed to violate Bell's inequality. You can't get them (and nobody else has done it so far) from QED together with the known properties of the detectors and sources. You get them only by using the QM measurement theory with its non-dynamical collapse.

and no, the fact that the RAW DATA do not come out that way is not something deep, but just due to finite efficiencies. A very simple model of the detector efficiencies gives a correct description of the data taken.

Well, for the Bell inequality violation, the setup efficiency is the key question. Without a certain minimum percentage of the pairs being counted, local models remain perfectly fine. If you can't produce a realistic model which can violate the inequality (a design for the sources and detectors which could do it if built to spec), there is no prediction, just an idea of a prediction.

My point (which I think is not very far from yours) is that one should only consider the measurement complete when the data are brought together (and hence are not separated spacelike) by one observer. Until that point, I do consider that the data of the other "observation" is in superposition.

I think that QM should work without any measurement theory beyond the kind of normal operational rules used in classical physics. The only rigorous rationale for why the QM measurement theory (with its collapse) was needed was von Neumann's faulty proof of the impossibility of any hidden variables. With hidden variables ruled out, one has to explain how a superposition can yield a specific result at all. With hidden variables, one can simply treat it as in classical physics: the specific result was caused by the values of stochastic variables which were not controlled in the experiment. So von Neumann invented the collapse (which had earlier been used informally) and conjectured that it is the observer's mind which causes it, suspending the dynamical evolution and magically creating a definite result, after which the dynamical evolution resumes its operation.

After that "proof" (and its Kochen-Specker temporary patch) were shown to be irrelevant, having already been refuted by Bohm's counterexample, it was the Bell's theorem which became the sole basis for excluding the hidden variables (since non-local ones are problematic), thus that is the only remaining rationale for maintaining the collapse "solution" to the "no-(local)-hidden-variable" problem. If there were no Bell's no-LHV prediction of QM, there would be no need for a non-dynamical collapse "solution" since there would be no problem to solve. (Since one could simply look at the quantum probabilities as being merely the result of the underlying stochastic variables.)

The Achilles' heel of this new scheme is that in order to prove his theorem Bell needed the QM measurement theory and the collapse; otherwise there is no prediction (based on the more rigorous QED and a realistic treatment of source and detection) which would be sharp/efficient enough to violate Bell's inequality.

What we're left with is Bell's prediction, which needs the QM collapse premise, and the collapse "solution" for the measurement problem, a problem created because the collapse premise yields, via Bell's prediction, a prohibition of LHVs. So the two, Bell's prediction and the collapse, form a self-serving closed loop, existing solely to prop each other up, while nothing else in physics needs either. It is a tragic and shameful waste of time to teach kids and have them dwell on the kind of useless nonsense that future generations will laugh at (once they snap out of the spell of this logical vicious circle).
 
  • #31
nightlight said:
You can't make a prediction of the sharp kind of correlations needed to violate Bell's inequality. You can't get them (and nobody else has done it so far) from QED together with the known properties of the detectors and sources. You get them only by using the QM measurement theory with its non-dynamical collapse.

I wonder what you understand by QED, if you don't buy superpositions of states...

cheers,
Patrick.
 
  • #32
vanesch Yes, but why are you insisting on the experimental parameters such as efficiencies ? Do you think that they are fundamental ?

For the violation of the inequality, definitely. The inequality is a purely enumerative type of mathematical statement, like the pigeonhole principle. It depends in an essential way on the fact that a sufficient percentage of result slots is filled in for the different angles, so that they cannot be rearranged to fit the multiple (allegedly required) correlations. Check, for example, a recent paper by L. Sica where he shows that if you take three finite arrays A[n], B1[n] and B2[n], filled with the numbers +1 and -1, and you form the cross-correlation expressions (used for Bell's inequality) E1 = Sum_j A[j]*B1[j] / n, E2 = Sum_j A[j]*B2[j] / n and E12 = Sum_j B1[j]*B2[j] / n, then no matter how you fill in the numbers or how large the arrays are, they always satisfy the Bell inequality:

| E1 - E2 | <= 1 - E12.

So no classical data set can violate this purely enumerative inequality. The QM prediction of violation therefore means that if we turn the apparatus B from angle B1 to angle B2, the set of results on A must always come out strictly different from what it was with B at position B1. In other words, in the actual data for a fixed A setting and two B settings B1 and B2, the two sequences of A results must be strictly different from each other (each would normally have roughly equal numbers of +1's and -1's, so it is the arrangement that has to differ between the two; they cannot be the same even accidentally if the inequality is violated).

For this purely enumerative inequality to constrain the data, it is essential that a sufficient number of array slots is actually filled with +1's and -1's; if enough slots are left empty (undetected pairs) and the correlations are computed only over the slots that are filled for each setting pair, the inequality need no longer hold. Some discussion of this result can be found in the later papers by L. Sica and by A. F. Kracklauer.
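A quick brute-force check of the enumerative statement (my own illustration, not taken from Sica's paper): for any three fully filled arrays of +1/-1 values, the inequality above holds identically.

```python
# Brute-force check: for ANY three fully filled arrays of +1/-1 values,
# |E1 - E2| <= 1 - E12 holds.  Nothing quantum enters; it is a pure
# counting/arithmetic fact about completely filled-in arrays.
import numpy as np

rng = np.random.default_rng(0)
n, worst = 1000, -1.0
for _ in range(10000):
    A  = rng.choice([-1, 1], size=n)
    B1 = rng.choice([-1, 1], size=n)
    B2 = rng.choice([-1, 1], size=n)
    E1, E2, E12 = (A * B1).mean(), (A * B2).mean(), (B1 * B2).mean()
    worst = max(worst, abs(E1 - E2) - (1 - E12))
print("max of |E1 - E2| - (1 - E12) over all trials:", worst)  # never above 0
```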

That would have interesting consequences. If you claim that ALL experiments should be such that the raw data satisfy Bell's inequalities, that gives upper bounds to detection efficiencies of a lot of instruments, as a fundamental statement. Don't you find that a very strong statement ? Even though for photocathodes in the visible light range with today's technologies, that boundary is not reached ? You seem to claim that it is a fundamental limit.

There is a balancing, or tradeoff, between the "loopholes" in the experiments. Detection efficiency can be traded for lower polarization resolution (by going to higher-energy photons). Also, the detectors can be tuned to higher sensitivity, but then the dark current rises, blurring the results, and more "background" and "accidental" coincidences have to be subtracted (which is another big no-no for a loophole-free test).

For massive particles, similar tradeoffs occur as well -- if you want better detection efficiency by going to higher energies, the spin measurement resolution drops. For ultra-relativistic particles, which are detectable with almost 100% efficiency, Stern-Gerlach separation doesn't work at all any more, and the very lossy Compton-scattering spin analysis must be used.

You can check a recent paper by Emilio Santos for much more discussion of these and other tradeoffs (also in his earlier papers). Basically, there is a sort of "loophole conservation" phenomenon, so that squeezing one out makes another one grow. Detector efficiency is just one of the parameters.

But what is wrong with the procedure of taking the "toy" predictions for the ideal experiment, applying efficiency factors (which can very easily be established experimentally as well), and comparing that with the data ? After all, experiments don't serve to measure things but to falsify theories.

Using the toy model prediction as a heuristic to come up with an experiment is fine, of course. But bridging the efficiency losses so that the data violate the inequality requires additional assumptions such as "fair sampling" (which Santos discusses in the paper above, and which I discussed in detail in an earlier thread here, where Santos' paper was also discussed). After seeing enough of these find-a-better-euphemism-for-failure games, it all starts to look more and more like perpetuum mobile inventors thinking up excuses, ever more creatively blaming the real world's "imperfections" and "non-idealities" for the failure of their allegedly 110% overunity device.
 
  • #33
I wonder what you understand by QED, if you don't buy superpositions of states...

Where did I say I don't buy superposition of states? You must be confusing my rejection of the QM-measurement-theory version of the "magical" Born rule, and of its application to the Bell system, with a rejection of superposition. My support for a different Born rule (a non-collapsing, non-fundamental, approximate, apparatus-dependent rule) does not mean I don't believe in superposition.
 
  • #34
nightlight said:
For the violation of the inequality, definitely. The inequality is a purely enumerative type of mathematical statement, like the pigeonhole principle.

Good. So your claim is that we will never find raw data which violate Bell's inequality. That might or might not be the case, depending on whether you take this to be a fundamental principle rather than a technological issue. I personally find it hard to believe that it is a fundamental principle, but as it stands experimentally, it currently cannot be ruled out (I think; I haven't followed the most recent experiments on EPR-type stuff). I don't know if it has any importance at all whether we can or cannot have such raw data. The point is that we can have superpositions of states which are entangled in bases that we would ordinarily think are factorized.

However, I find your view of quantum theory rather confusing (I have to admit I honestly don't know what you call quantum theory: you seem to use different definitions for terms than most people do, such as the Born rule, the quantum state, or an observable).
Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by the state of the system, and what you call the Born rule in that case ?
Do you accept that the states are all the completely antisymmetric functions of 10 three-dimensional space coordinates and 10 spin-1/2 states, or do you think this is not adequate ?

cheers,
Patrick.
 
  • #35
So your claim is that we will never find raw data which violates Bell's inequality.

These kinds of enumerative constraints have been tightening in recent years, and the field of extremal set theory has been very lively lately. I suspect it will eventually be shown, purely enumeratively (a la Sperner's theorem or the Kraft inequality), that fully filled-in arrays of results satisfying a larger set of the correlation constraints from the QM prediction (not just the few that Bell used) must be such that the measure of inequality-violating sets converges to zero as the constraints (e.g. for different angles) are added, in the limit of infinite arrays.

That would then put the claims of experimental inequality violation at the level of a perpetuum mobile of the second kind, i.e. one could regard such claims as if someone claimed he can flip a coin a million times and regularly get the first million binary digits of Pi. If he can, he is almost surely cheating.

That might, or might not be the case, depending on whether you take this to be a fundamental principle and not a technological issue. I personally find it hard to believe that this is a fundamental principle,

Well, the inequality is an enumerative kind of mathematical result, like the pigeonhole principle. Say someone claimed to have a special geometric layout of holes and a special pigeon-placement ordering which allows them to violate the pigeonhole inequalities, so that they can put more than N pigeons into N holes without any hole holding multiple pigeons. When asked to show it, the inventor brings out a gigantic panel of holes arranged in strange ways and strange shapes, and starts placing pigeons, jumping in odd ways across the panel, taking longer and longer to pick a hole as he goes on, as if calculating where to go next.

After some hours, having filled about 10% of the holes, he stops and proudly proclaims: here, it is obvious that it works, no hole with two pigeons; the remaining holes are just irrelevant, minor leftovers due to the required computation. If I assume that the holes filled so far are a fair sample of all the holes, and I extrapolate proportionately from the area I filled to the entire board, you can see that, continuing in this manner, I can place exactly 5/4 N pigeons without any having to share a hole. And this is only a prototype algorithm; that is just a minor problem which will be solved as I refine its performance. After all, it would be implausible that an algorithm which has worked so well so far, in rough prototype form, would fail when polished to full strength.

This is precisely the kind of claim made for the Bell inequality tests: with about 10% of the coincidences filled in, they claim to violate an enumerative inequality for which a fill-up of at least 82% is absolutely vital before it can even begin to act as a constraint. As Santos argues in the paper mentioned, the ad hoc "fair sampling" conjecture used to fast-talk past the failure is a highly absurd assumption in this context (see also the earlier thread on this topic).

And the often-heard invocation of the implausibility of QM failing with better detectors is as irrelevant as the pigeonhole-algorithm inventor's claim that it is implausible for a better algorithm to fail -- it is a complete non-sequitur in the context of an enumerative inequality. Especially recalling the closed-loop, self-serving, circular nature of Bell's no-LHV QM "prediction" and of the vital tool used to prove it, the collapse postulate (the non-dynamical, non-local evolution, a vaguely specified suspension of the dynamics), which in turn was introduced into QM for the sole purpose of "solving" the no-HV problem with measurement. And the sole reason it is still kept is to "solve" the remaining no-LHV problem, the one resulting from Bell's theorem, which in turn requires that very same collapse in a vital way to violate the inequality.

Since nothing else needs either of the two pieces, the two only serve to prop each other up while predicting no other empirical consequences for testing except for causing each other (if Bell's violation were shown experimentally, that would support the distant system-wide state collapse; no other test for the collapse exists).

The point is that we can have superpositions of states which are entangled in bases that we would ordinary think are factorized.

The problem is not the superposition but the adequacy of the overall model (of which the state is a part), and secondarily, the attribution of a particular state to a given preparation.

However, I find your view of quantum theory rather confusing (I have to admit I don't know what you call quantum theory, honestly: you seem to use different definitions for terms than what most people use, such as the Born rule or the quantum state, or an observable).

The non-collapse version of the Born rule (as an approximate operational shortcut) has a long tradition. If you have learned just one approach and one perspective fed from a textbook, with the usual pedagogical cheating on proofs and skirting of opposing or alternative approaches (to avoid confusing a novice), then yes, it could appear confusing. Any non-collapse approach takes this kind of Born rule, which goes back to Schroedinger.

Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by the state of the system, and what you call the Born rule in that case ? Do you accept that the states are all the completely antisymmetric functions of 10 three-dimensional space coordinates and 10 spin-1/2 states, or do you think this is not adequate ?

In the non-relativistic approximation, yes, it would be an antisymmetric function of the 10*3 spatial coordinates with a Coulomb interaction Hamiltonian. This is still an external-field approximation, which doesn't account for the self-interaction of the EM and fermion fields. Asim Barut had worked out a scheme which superficially looks like the self-consistent effective field methods (such as Hartree-Fock), but underneath it is an updated version of Schroedinger's old idea of treating the matter field and the EM field as two interacting fields in a single 3D space. The coupled Dirac-Maxwell equations form a nonlinear set of PDEs. He shows that this treatment is equivalent to the conventional QM 3N-dimensional antisymmetrized N-fermion equations, but with a fully interacting model (which doesn't use the usual external-EM-field or external-charge-current approximations). With that approach he and his graduate students reproduced the full non-relativistic formalism and the leading orders of the radiative corrections of perturbative QED, without needing renormalization (no point particles, no divergences). You can check his preprints on this topic on the KEK server: http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR=

Regarding the Born rule for this atom, the antisymmetrization and the strong interaction make the concept of an individual electron here largely meaningless for any discussion of the Bell correlations (via the Born rule). In Barut's approach this manifests itself in assuming no point electrons at all, but simply a single fermion matter field (normalized to the charge of 10 electrons; he does have separate models of charge quantisation, though these are still somewhat sketchy) which has scattering properties identical to the conventional QM/QED models, i.e. the Born rule is valid in the limited, non-collapsing sense. E.T. Jaynes has a similar perspective (see his paper `Scattering of Light by Free Electrons'). Unfortunately, both Jaynes and Barut died a few years ago, so if you have any questions or comments, you'll probably have to wait a while, since I am not sure that emails can go there and back.
 
Last edited by a moderator:
  • #36
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight a bit the implications of Sica's theorem for the experimental tests of Bell's inequality.

Say you have an ideal setup with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 in arrays A[n] and B[n]. Since p(+) = p(-) = 1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction and obtain data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n]: it must have its +1's and -1's arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself while avoiding ever arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, there would be no such constraint on what A2[n] can be. To paraphrase a kid's response on being told that a thermos bottle keeps hot liquids hot and cold liquids cold -- "How do it know?"

Or, another twist: you take 99 different angles for B and obtain the data sets A1[n],B1[n]; A2[n],B2[n]; ... A99[n],B99[n]. Now you're ready for the angle B100. This time A100[n] has to keep rearranging itself to avoid matching all 99 previous arrays Ak[n].

Then you extend the above and, say, collect r = 2^n data sets for 2^n different angles (they could all be the same angle, too). This time, at the next angle B_(2^n+1), the data array A_(2^n+1)[n] would have to avoid all 2^n previous arrays Ak[n], which it cannot do, since there are only 2^n distinct +1/-1 arrays of length n. So in each such run there would be at least one failed QM prediction, for at least one angle, since that Bell inequality would not be violated.
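The counting fact being leaned on here can be spelled out in a couple of lines (my own illustration):

```python
# There are only 2**n distinct length-n arrays of +1/-1 values, so any
# collection of 2**n + 1 runs must contain a repeated A-array, no matter
# how the individual outcomes are produced.
from itertools import product
import random

n = 4
print("distinct arrays of length", n, ":", len(list(product([-1, 1], repeat=n))))  # 2**n = 16

runs = [tuple(random.choice([-1, 1]) for _ in range(n)) for _ in range(2**n + 1)]
print("repeat among 2**n + 1 runs:", len(set(runs)) < len(runs))                   # always True
```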

Then you take 2^n * 2^n previous tests, ... and so on. As you go up, it gets harder and harder for the inequality violator; its count of failed tests is guaranteed to grow. Also, I think this theorem is not nearly restrictive enough, and the real situation is much worse for the inequality violator (as simple computer enumerations suggest when counting the percentage of violating cases for finite data sets).

Or, you go back and start testing, say, angle B7 again. Now the QM magician in heaven has to allow the new A7[n] to be the same as the old A7[n], which was prohibited up to that point. You switch B to B9, and now the QM magician has to disallow the match with A7[n] again and allow a match with the old A9[n], which was prohibited until now.

Where is the memory for all that? And what about the elaborate mechanisms or the infrastructure needed to implement the avoidance scheme? And why? What is the point of remembering all that stuff? What does it (or anyone/anything anywhere) get in return?

The conjectured QM violation of Bell's inequality basically looks sillier and sillier once these kinds of implications are followed through. It is no longer merely mysterious or puzzling, but plainly ridiculous.

And what do we get from the absurdity? Well, we get the only real confirmation of the collapse, since Bell's theorem uses the collapse to produce the QM prediction which violates the inequality. And what do we need the collapse for? Well, it helps "solve" the measurement problem. And why is there a measurement problem? Well, because Bell's theorem shows you can't have LHVs producing the definite results. Anything else empirical from either? Nope. What a deal.

The collapse postulate first lends a hand to prove Bell's QM prediction, which in turn, via the LHV prohibition, creates a measurement problem, which the collapse then "solves" (thank you very much). So the collapse postulate creates a problem and then solves it. What happens if we take the collapse postulate out altogether? No Bell's theorem, hence no measurement problem, hence no problem at all. Nothing is sensitive to the existence (or absence) of the collapse but the Bell inequality experiment. Nothing else needs the collapse. It is a parasitic historical relic in the theory.
 
Last edited:
  • #37
nightlight said:
vanesch Good. So your claim is that we will never find raw data which violates Bell's inequality.

Just to highlight a bit the implications of Sica's theorem for the experimental tests of Bell's inequality.

Say you have an ideal setup with 100% efficiency. You take two sets of measurements, keeping the A orientation fixed and changing B from B1 to B2. You collect the data as numbers +1 and -1 in arrays A[n] and B[n]. Since p(+) = p(-) = 1/2, there will be roughly the same number of +1 and -1 entries in each data array, i.e. this 50:50 ratio is insensitive to the orientation of the polarizers.

You have now done the (A,B1) test and you have two arrays of +1/-1 data, A1[n] and B1[n]. You are ready for the second test: you turn B to the B2 direction and obtain data arrays A2[n], B2[n]. Sica's theorem tells you that you will not get (to any desired degree of certainty) the same sequence as A1[n] again, i.e. that the new sequence A2[n] must be explicitly different from A1[n]: it must have its +1's and -1's arranged differently (although still in a 50:50 ratio). You can keep repeating the (A,B2) run, and somehow the 50:50 content of A2[n] has to keep rearranging itself while avoiding ever arranging itself as A1[n].

Now, if you hadn't done the (A,B1) test, there would be no such constraint on what A2[n] can be.

I'm not sure I understand what you are getting at. I will tell you what I make of what is written above, and you correct me if I'm not understanding it right, OK ?

You say:
Let us generate a first sequence of couples:
S1 = {(a_1(n),b_1(n)) for n running from 1:N, N large}
Considering a_1 = (a_1(n)) as a vector in N-dim Euclidean space, we can require a certain correlation between a_1 and b_1. So a_1 and b_1 are to have an angle which is not 90 degrees.

We next generate other sequences, S2, S3, ... S_M with similar, or different, correlations. I don't know Sica's theorem, but what it states seems quite obvious if I understand it correctly: a_1, a_2, ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. The corresponding b_2, b_3, etc. are also to be considered approximately orthogonal amongst themselves, but each is correlated, with a similar or different correlation (an angle different from 90 degrees in that Euclidean space), with its partner a_n. Sounds perfectly all right to me. Where's the problem ? You have couples of vectors making an angle (say 30 degrees) which are essentially orthogonal between pairs in E_N. Of course, for this to hold, N has to be much, much larger than the number of sequences M. So you have a very high dimensionality in E_N. I don't see any trouble; moreover, this is classical statistics. Where's the trouble with Bell and company ? It is simply a statement about the angles between the pairs, no ?

cheers,
patrick.
 
  • #38
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also included the link to Sica's preprint.

if I understand it correctly: a_1, a_2, ... are to be considered independent, and hence uncorrelated, or approximately orthogonal in our Euclidean space. ... Sounds perfectly all right to me. Where's the problem ?

You were looking for a paradox or a contradiction, while I was pointing at a peculiarity of a specific subset of sequences. The most probable and average behavior is, as you say, approximate orthogonality among a1, a2, ... or among b1, b2, ... There is no problem there.

Sica's result is stronger, though -- it implies that if the two sets of measurements (A,B1) and (A,B2) satisfy Bell's QM prediction, then the vectors a1 and a2 in E_N must be explicitly different -- a1 and a2 cannot be parallel, or even approximately parallel.

With just these two data sets, the odds of a1 and a2 being nearly parallel are very small, of the order of 1/2^N. But if you have more than 2^N such vectors, they cannot all satisfy the QM prediction's requirement that each remain non-parallel to all the others. What is abnormal about this is that at least one test is then guaranteed to fail, which is quite different from what one would normally expect of a statistical prediction: one or more finite tests may fail (due to a statistical fluctuation), but Sica's result implies that at least one test must fail, no matter how large the arrays.

What I find peculiar about it is that there should be any requirement of this kind at all between two separate sets of measurements. To emphasize the peculiarity, consider that each of the vectors a1 and a2 will have roughly a 50:50 split between +1 and -1 values. So it is the array-ordering convention for the individual results in two separate experiments that is constrained by the requirement of satisfying the alleged QM prediction.

The peculiarity of this kind of constraint is that the experimenter is free to label the individual pairs of detection results in any order he wishes, i.e. he doesn't have to store the results of (A,B2) in the A2[] and B2[] arrays so that the array indices follow the temporal order. He can store the first pair of detection results in A2[17], B2[17], the second pair in A2[5], B2[5], ... etc. This labeling is purely a matter of convention, and no physics should be sensitive to such a labeling convention. Yet Sica's result implies that there is always a labeling convention for these assignments which yields a negative result for the test of Bell's QM prediction (i.e. it produces the classical inequality).

Let's now pursue the oddity one more step. The original labeling convention for the experiment (A,B2) was to map a larger time of detection to a larger index. But you could have mapped it the opposite way, i.e. a larger time to a smaller index. The experiments should still succeed, i.e. violate the inequality (with any desired certainty, provided you pick N large enough). Now, you could have ignored the time of detection and used a random number generator to pick the next array slot to put the result in. You would still expect the experiment to almost always succeed. It shouldn't matter whether you use a computer-generated pseudo-random sequence or flip a coin. Now, the array a1 is equivalent to a sequence of coin flips, as random as one can get. So we use that sequence as our labeling convention to allocate the next slot in the arrays a2, b2. With a1 used as the labeling seed, you can make a2 parallel to a1 100% of the time. Thus there is a labeling scheme for the experiments (A,B2), (A,B3), ... which makes all of them always fail the test of Bell's QM prediction.

Now, you may say, you are not allowed to use a1 for your array-labeling convention in experiment (A,B2). Well, OK, so this rule for the labeling must be added to the QM postulates, since it doesn't follow from the existing postulates. And we now have another postulate that says, roughly:

COINCIDENCE DATA INDEX LABELING POSTULATE: if you are doing a photon correlation experiment, your setup has an efficiency above 82%, and you desire to uphold the collapse postulate, you cannot label the results in any order you wish. Specifically, your labeling algorithm cannot use the data from any other similar photon correlation experiment which has a polarizer axis parallel to one in your current test and which has a setup efficiency above 82%. If none of the other experiment's axes is parallel to yours, or if its efficiency is below 82%, then you are free to use that experiment's data in your labeling algorithm. Also, if you do not desire to uphold the collapse postulate, you are free to use any labeling algorithm and any data you wish.

That's what I see as peculiar. Not contradictory or paradoxical, just ridiculous.
 
Last edited:
  • #39
nightlight said:
vanesch I don't know Sica's theorem, but what it states seems quite obvious,

It is the array correlation inequality statement I gave earlier, which also included the link to Sica's preprint.

I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.
 
  • #40
vanesch said:
I'll study it... even if I still think that you have a very peculiar view of things, it merits some closer look because I can learn some stuff too...

cheers,
Patrick.

Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a' = a, then of course the correlation <a.b'> (= <a'.b'>) is conserved, but you've completely changed <b.b'>, because b hasn't been permuted and b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters the Bell inequality, and does indeed satisfy it) should have anything to do with the completely different prediction for <b.b'> by quantum theory.
So the point of the paper escapes me completely.

Look at an example.
Suppose we had some Bantum Theory, which predicts that <a.b> = 0, <a.b'> = 1 and <b.b'> = 1. You cannot have any harder violation of equation (3). (Quantum theory is slightly nicer).
Now, Bantum theory also allows you to compare only two measurements at a time.

First series of experiments: a and b:
(1,1), (1,-1),(-1,1),(-1,-1),(1,1), (1,-1),(-1,1),(-1,-1)

Clearly, we have equal numbers of +1 and -1 in a and in b, and we have <a.b> = 0.

Second series of experiments: a and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1),

Again, we have equal amounts of +1 and -1 in a and b', and <a.b'> = 1.
Note that I already put them in order of a.

Third series of experiments: b and b':
(1,1),(1,1),(-1,-1),(-1,-1),(1,1),(1,1),(-1,-1),(-1,-1)

Note that, just for the fun of it, I copied the previous one. We have <b.b'> = 1, and equal amounts of +1 and -1 in b and b'.

There is no fundamental reason why we cannot obtain these measurement results, is there ? If experiments confirm this, Bantum theory is right. Nevertheless, is |<a.b> - <a.b'>| <= 1 - <b.b'> satisfied ?
That would require |0 - 1| <= 1 - 1, i.e. 1 <= 0, which fails.
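For concreteness, the three series listed above can be checked directly; a quick transcription (mine, so any copying slips are mine too):

```python
# Checking the three "Bantum theory" data series listed above: each run is a
# separate list of outcome pairs, and the correlations come out as claimed,
# so taken at face value they violate |E1 - E2| <= 1 - E12.
import numpy as np

ab  = np.array([(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)])
abp = np.array([(1, 1), (1, 1), (-1, -1), (-1, -1), (1, 1), (1, 1), (-1, -1), (-1, -1)])
bbp = np.array([(1, 1), (1, 1), (-1, -1), (-1, -1), (1, 1), (1, 1), (-1, -1), (-1, -1)])

corr = lambda pairs: (pairs[:, 0] * pairs[:, 1]).mean()
E_ab, E_abp, E_bbp = corr(ab), corr(abp), corr(bbp)
print(E_ab, E_abp, E_bbp)                    # 0.0, 1.0, 1.0
print(abs(E_ab - E_abp) <= 1 - E_bbp)        # False: the inequality fails
# The point under debate: the b and b' columns here come from three
# independent runs, not from one run of data re-used across settings.
```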


cheers,
Patrick.
 
  • #41
vanesch said:
If you have 2 series of measurements, (a,b) and (a',b'), and you REORDER the second stream so that a' = a, then of course the correlation <a.b'> (= <a'.b'>) is conserved, but you've completely changed <b.b'>, because b hasn't been permuted and b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters the Bell inequality, and does indeed satisfy it) should have anything to do with the completely different prediction for <b.b'> by quantum theory.

I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement. I have seen this kind of reasoning used to refute EPR-type experiments or theoretical results several times now, and it is always based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

This doesn't mean that these discussions aren't interesting. However, you should admit that your viewpoint isn't so obvious as to justify calling the people who take the standard view blatant idiots. Some work on the issue can be done, but one should keep an open mind. I have to say that I have difficulty seeing the way you view QM, because it seems to jump around certain issues in order to religiously fight the contradiction with Bell's inequalities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

I've seen two arguments up to now: one is that you definitely need the projection postulate to deduce the Bell inequality violation, which I think is a wrong statement, and the second is that numerically, out of real data, you cannot hope to systematically violate Bell's inequality, which I think is also misguided, because a local realistic model has to be introduced to deduce such properties.

cheers,
Patrick.
 
Last edited:
  • #42
vanesch Ok, I read the paper you indicated and I have to say I'm disappointed, because there seems to be a blatant error in the reasoning.

... the correlation <a.b'> (= <a'.b'>) is conserved, but you've completely changed <b.b'>, because b hasn't been permuted and b' has. From there on, there's no reason why this re-calculated <b.b'> (which enters the Bell inequality, and does indeed satisfy it) should have anything to do with the completely different prediction for <b.b'> by quantum theory.


The indicated statements show you have completely missed the several pages of discussion in Sica's paper of his "data matching" procedure, where he raises exactly that question and explicitly preserves <b.b'>. Additional analysis of the same question is in his later paper. It is not necessary to change the sum <b1.b2> even though individual elements of the arrays b1[] and b2[] are reshuffled. Namely, there is a great deal of freedom when matching the a1[] and a2[] elements, since any +1 from a2[] can match any +1 from a1[], allowing [(N/2)!]^2 ways to match the N/2 +1's and the N/2 -1's between the two arrays. The constraint from <b1.b2> requires only that this sum be preserved by the permutation, which is a fairly weak constraint.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until it is reached within a maximum error of 1/N.

Let me know if you have any problem replicating the proof; then I'll take the time to type it in (it can be seen almost at a glance from a picture of the three arrays, although typing it all in would be a bit tedious).
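For what it's worth, here is a rough sketch (my own construction, not Sica's algorithm) of the kind of rearrangement described above: the (a2, b2) pairs are first permuted so that a2 matches a1 slot by slot, and b2 entries are then swapped only between positions with equal a-values, which cannot change <a.b2> but does move <b1.b2> (here simply pushed upward to show the mechanism; steering it to a specific target uses the same kind of move):

```python
# Sketch of the swap procedure: match a2 to a1 by permuting pairs, then swap
# b2 entries only within equal-a positions.  Such swaps leave <a.b2> fixed
# while moving <b1.b2>, so the latter can be adjusted step by step.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
a1 = rng.permutation(np.array([1] * (N // 2) + [-1] * (N // 2)))
b1 = rng.choice([-1, 1], size=N)
a2 = rng.permutation(a1)                  # same 50:50 content, different order
b2 = rng.choice([-1, 1], size=N)

# Step 1: jointly reorder the second run's pairs so that a2 == a1 slot by slot.
order = np.empty(N, dtype=int)
order[a1 == +1] = np.flatnonzero(a2 == +1)
order[a1 == -1] = np.flatnonzero(a2 == -1)
a2, b2 = a2[order], b2[order]
assert np.array_equal(a1, a2)

corr = lambda x, y: (x * y).mean()
print("before swaps:  <a.b2> =", corr(a1, b2), "  <b1.b2> =", corr(b1, b2))

# Step 2: greedy swaps of b2 within equal-a1 positions, pushing <b1.b2> up.
for s in (+1, -1):
    idx = np.flatnonzero(a1 == s)
    donors    = [i for i in idx if b1[i] == +1 and b2[i] == -1]
    acceptors = [i for i in idx if b1[i] == -1 and b2[i] == +1]
    for i, j in zip(donors, acceptors):   # each swap raises sum(b1*b2), leaves <a.b2> alone
        b2[i], b2[j] = b2[j], b2[i]

print("after swaps:   <a.b2> =", corr(a1, b2), "  <b1.b2> =", corr(b1, b2))
```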
 
  • #43
nightlight said:
The indicated statements show you have completely missed the several pages of discussion in Sica's paper of his "data matching" procedure, where he raises exactly that question and explicitly preserves <b.b'>.

OK, I can understand that maybe there's a trick to reshuffle b2[] in such a way as to rematch the <b.b'> that would be present if there were a local realistic model (because that assumption is hidden in Sica's paper, see below). I didn't check it, and indeed I must have missed that part of Sica's paper. However, I realized later that I was tricked into the same reasoning error as is so often made in these issues, and that's why I posted my second post.
There is absolutely no link between the experiments [angle a, angle b] and [angle a, angle b'] on one hand, and a completely new experiment [angle b, angle b'] on the other. The whole manipulation of the data series tries to extract a correlation <b.b'> from the first two experiments, relying on the notational equivalence (namely the letters b and b') between the second data streams of those first two experiments and the first and second data streams of the third experiment. So I will now adjust the notation:
First experiment: analyser 1 at angle a, and analyser 2 at angle b, results in a data stream {(a1[n],b1[n])}, shortly {Eab[n]}
Second experiment: analyser 1 at angle a, and analyser 2 at angle b', results in a datastream {(a2[n],b'2[n])} , shortly {Eab'[n]}.
Third experiment: analyser 1 at angle b, and analyser 2 at angle b', results in a datastream {(b3[n],b'3[n])}, shortly {Ebb'[n]}.

There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn; and that is nothing but a local realistic model, for which, indeed, Bell's inequalities must hold. The confusion seems to come from the fact that one tries to construct a <b.b'> from data that weren't generated in the Ebb' condition, but in the Eab and Eab' conditions. Indeed, if those b and b' streams were predetermined, this reasoning would hold. But it is the very prediction of standard QM that they aren't. So the Ebb' case is free to generate completely different correlations from those of the Eab and Eab' cases.

That's why I gave my (admittedly rather silly) counterexample in Bantum mechanics. I generated 3 "experimental results" which, taken at face value, violate the inequality as hard as possible.

Although Sica's papers don't give a blow-by-blow algorithm for adjusting b2[], there is enough description in the two papers to work out simple logistics for the swapping moves between elements of b2[] which don't change the correlation <a.b2> and which monotonically (in steps of 2 per move) approach the required correlation <b1.b2> until it is reached within a maximum error of 1/N.
Contrary to what Sica writes below his equation (3) in the first paper, he DID introduce an underlying realistic model, namely the assumption that the correlations of b and b' obtained in the Eab and Eab' runs have anything to do with the correlations of the Ebb' run.

Let me know if you have any problem replicating the proof; then I'll take the time to type it in (it can be seen almost at a glance from a picture of the three arrays, although typing it all in would be a bit tedious).

I'll give it a deeper look; indeed it escaped me, and it must be what the phrase starting on line 14 of p. 6 alludes to (I had put a question mark next to it!). But even if I agree with it, the point is moot, because, as I said, THIS correlation between the b and b' trains (in my notation b1 and b'2) should a priori have nothing to do with the correlation between b3 and b'3. In fact, I now realize that you can probably fabricate every thinkable correlation between b1 and b'2 that is compatible with (3) by reshuffling b'2[n], so this correlation is not even well defined. Nevertheless, the result is interesting by itself, because it illustrates very well a fundamental misunderstanding of standard quantum theory (at least I think so :-). I think I illustrated the point with my data streams in Bantum theory; however, because I constructed them by hand, you might of course object. If you want, I can generate a few more realistic series of data for you which will ruin what I think Sica is claiming when he writes (lower part of p. 9): "However, violation of the inequality by the correlations does imply that they cannot represent any data streams that could possibly exist or be imagined".

cheers,
Patrick.
 
  • #44
vanesch I can even add that (a,b) and (a',b') have absolutely nothing to do with (b,b') as a new measurement.

You may need to read part two of the paper, where the connection is made more explicit, and also check Bell's original 1964 paper (especially Eqs. (8) and (13), which use the perfect correlations at parallel and anti-parallel apparatus orientations to move implicitly between the measurements on B and on A, in the QM as well as in the classical model; these are essential steps for the operational interpretation of the three-correlation case).


I have seen this kind of reasoning used to refute EPR-type experiments or theoretical results several times now, and it is always based on a fundamental misunderstanding of what exactly quantum theory, as most people accept it, really predicts.

That wasn't a refutation of Bell's theorem or of the experiments, merely a new way to see the nature of the inequality. It is actually quite similar to an old visual proof of Bell's theorem by Henry Stapp from the late 1970s (I had it while working on my master's thesis on this subject; it was an ICTP preprint that my advisor brought back from a conference in Trieste).

However, you should admit that your viewpoint isn't so obvious as to call the people who take the standard view blatant idiots.

I wouldn't do that. I was under the spell myself for quite a few years, even though I did a master's degree on the topic, read quite a few papers and books at the time, and spent untold hours discussing it with advisors and colleagues. It was only after leaving academia and forgetting about it for a few years, then happening to help my wife (also a physicist, but an experimentalist) a bit with some quantum optics coincidence setups, that it struck me as I was looking at the code that processed the data: wait a second, this is nothing like what I had imagined. All the apparent firmness of assertions such as "A goes here, B goes there, ..." in textbooks or papers rang pretty hollow.

On the other hand, I do think it won't be too long before the present QM mystification is laughed at by the next generation of physicists. Even giants like Gerard 't Hooft are ignoring Bell's and the other no-go theorems and exploring purely local sub-quantum models (the earlier heretics, such as Schroedinger, Einstein, and de Broglie, and later Dirac, Barut, Jaynes, ..., weren't exactly midgets either).

...Bell's identities. To me, they don't really matter so much, because the counterintuitive aspects of QM are strongly illustrated in EPR setups, but they are already present from the moment you accept superposition of states and the Born rule.

There has been nothing odd about superposition at least since Faraday and Maxwell. It might surprise you, but plain old classical EM fields can do entanglement, GHZ states, qubits, quantum teleportation, quantum error correction, ... and the rest of the buzzwords -- just about everything except the non-local, non-dynamical collapse; that bit of magic they can't do (check the papers by Robert Spreeuw, also http://remote.science.uva.nl/~spreeuw/lop.htm ).

I've seen up to now two arguments: one is that you definitely need the projection postulate to deduce Bell's inequality violation, which I think is a wrong statement

What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from. Just recall that the Bell test can be viewed as a preparation of system B in, say, the pure |B+> state, which appears as the sub-ensemble of B for which A produced the (-1) result. The state of A+B, which is initially a pure state, collapses into the mixture rho = 1/2 |+><+| x |-><-| + 1/2 |-><-| x |+><+|, from which one can identify the sub-ensemble of a subsystem, such as |B+> (without going to the mixture, statements such as "A produced -1" are meaningless, since the initial state is a superposition and spherically symmetric). Unitary evolution can't do that without a non-dynamical collapse (see von Neumann's chapter on the measurement process, his infinite chain problem, and why you have to have it).
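A minimal sketch in LaTeX of the step just described (the singlet written in the |+>, |-> basis of A's analyzer; the arrow marks the non-unitary replacement of the pure state by the mixture, which is the part unitary evolution cannot supply):

    \[
      |\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\,\bigl(\,|+\rangle_A|-\rangle_B \;-\; |-\rangle_A|+\rangle_B\,\bigr)
      \;\;\longrightarrow\;\;
      \rho \;=\; \tfrac{1}{2}\,|+\rangle_A\langle +|\otimes|-\rangle_B\langle -|
           \;+\; \tfrac{1}{2}\,|-\rangle_A\langle -|\otimes|+\rangle_B\langle +| ,
    \]

and conditioning on "A produced -1" then picks out the second term, i.e. the pure state |+>_B for the B sub-ensemble.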

You may imagine that you never used the state |B+>, but you did, since you used the probabilities for the B system (via the B-subspace projector within your joint-probability calculation), which only that state can reproduce (the state is unique once all the probabilities are given, according to Gleason).

Frankly, you're the only one ever to deny using collapse in deducing Bell's QM prediction. Only a suspension of the dynamical evolution can get from the purely local PDE evolution equations (the 3N-coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) to a prediction which prohibits any purely local PDE-based mechanism from reproducing that prediction. (You never explained how local PDEs can do it without a suspension of the dynamics, except by trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)


and second that numerically, out of real data, you cannot hope to systematically violate Bell's inequality, which I think is also misguided,

I never claimed that Sica's result rules out, mathematically or logically, the experimental confirmation of Bell's QM prediction. It only makes those predictions seem stranger than one would have thought from the usual descriptions.

The result also sheds light on the nature of Bell's inequalities -- they are enumerative constraints, a la the pigeonhole principle. Thus the usual euphemisms and excuses used to soften the plain, simple fact of over three decades of failed tests are non sequiturs (see a couple of messages back for the explanation).
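To spell out the "enumerative constraint" reading in the simplest terms, here is a minimal sketch with arbitrary dummy data (nothing from Sica's papers): any three +/-1 lists defined trial by trial on the same run satisfy |<ab> - <ab'>| <= 1 - <bb'> purely by counting; Bell's familiar 1 + E(b,b') form then follows once the perfect anti-correlation at equal settings is used to trade one side's outcome for the other's.

    import random

    random.seed(0)
    N = 100_000
    pm = lambda: random.choice((-1, 1))
    corr = lambda x, y: sum(u * v for u, v in zip(x, y)) / len(x)

    # Any three +/-1 sequences defined on the same trials -- however generated --
    # satisfy  |<a b> - <a b'>|  <=  1 - <b b'>  purely by counting, since
    # a*b - a*b' = a*b*(1 - b*b') term by term and |a*b| = 1.
    a  = [pm() for _ in range(N)]
    b  = [pm() for _ in range(N)]
    bp = [pm() for _ in range(N)]

    lhs = abs(corr(a, b) - corr(a, bp))
    rhs = 1 - corr(b, bp)
    assert lhs <= rhs + 1e-12    # holds for every possible choice of the three lists

The QM prediction can escape this only because the three measured correlations never come from one such triple of lists.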
 
Last edited by a moderator:
  • #45
nightlight said:
What is your (or rather, the QM Measurement theory's) Born rule but a projection -- that's where your joint probabilities come from.

Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule. However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying, but accepting the absolute square of the inner product as the correct probability prediction. As you seem to say yourself, it is very difficult to disentangle the two!

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning. So if you deny me the possibility of using that rule on the product space, this means you deny the existence of that product space, and hence of the superpositions of states for which the result is not a product state. You need the Born rule to DEFINE the Hilbert space; it is the only link to physical results. So in my toy model in a 2x2-dimensional Hilbert space, I can think of measurements (observables, Hermitean operators) which can, or cannot, factor into 1 x A or A x 1; it is my choice. If I choose to have a "global measurement" which says "result +1 for system 1 and result -1 for system 2", then that is ONE SINGLE MEASUREMENT, and I do not need to use any fact of the kind "after the measurement, the state is in an eigenstate of...". I do, however, need that kind of specification (the Born rule for such global measurements) in order for the product space to be defined as a single Hilbert space, and hence to allow for superpositions of states across the product. Denying this is denying superposition.
However, you do need a projection, indeed, to PREPARE any state. As I said, without it there is no link between the Hilbert space description and any physical situation. The preparation here is the singlet state. But in ANY case, you need some link between an initial state in Hilbert space and the corresponding physical setup.

Once you've accepted that superposition (the singlet state), it should be obvious that unitary evolution cannot undo it. So, locally (meaning, acting on the first factor of the product space), you can complicate the issue as much as you like; there is no way you can undo the correlations. If you accept the Born rule, then NO MATTER WHAT HAPPENS LOCALLY, these correlations will show up, violating Bell's inequalities in the case of 100% efficient experiments.
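A minimal numerical sketch of that last claim (just the textbook singlet plus the Born rule on the 2x2 product space, with no statement about what any real apparatus does; the function names are only for illustration): the resulting correlations are E(alpha, beta) = -cos(alpha - beta), which violates Bell's original inequality |E(a,b) - E(a,b')| <= 1 + E(b,b') for a suitable choice of angles.

    import numpy as np

    def spin_proj(theta, outcome):
        """Projector onto the +/-1 eigenstate of the spin along an axis at angle
        theta in the x-z plane (i.e. of cos(theta)*sigma_z + sin(theta)*sigma_x)."""
        if outcome == +1:
            v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        else:
            v = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
        return np.outer(v, v)

    # Singlet state of the A+B pair, written as a vector in the product space
    singlet = (np.kron([1.0, 0.0], [0.0, 1.0]) - np.kron([0.0, 1.0], [1.0, 0.0])) / np.sqrt(2)

    def E(alpha, beta):
        """Correlation <A(alpha) B(beta)>, straight from the Born rule on the product space."""
        return sum(ra * rb * (singlet @ np.kron(spin_proj(alpha, ra), spin_proj(beta, rb)) @ singlet)
                   for ra in (+1, -1) for rb in (+1, -1))

    a, b, bp = 0.0, np.pi / 3, 2 * np.pi / 3
    print(E(a, b), E(a, bp), E(b, bp))               # -0.5, +0.5, -0.5  (= -cos of the angle difference)
    print(abs(E(a, b) - E(a, bp)), 1 + E(b, bp))     # 1.0 vs 0.5: the inequality fails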

cheers,
Patrick.
 
  • #46
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn; this is nothing but a local realistic model, for which, indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations within the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof). See both of Sica's papers (and Bell's 1964 paper; the related 1966 paper [Rev. Mod. Phys. 38, 447-52] on von Neumann's proof is also useful) to see how much work it took to weave the logic around the counterfactuality and the need for either the third experiment or the underlying model.
 
  • #47
nightlight said:
vanesch There is no way to deduce <b3.b'3> from the first two experiments UNLESS you assume an underlying model which has a "master distribution" from which all these series are drawn; this is nothing but a local realistic model, for which, indeed, Bell's inequalities must hold.

The whole point of the reshuffling was to avoid the need for an underlying model (the route Bell took) in order to get around the non-commutativity of b, b' and the resulting counterfactuality (thus having to do the third experiment, as any Bell inequality test does) when trying to compare their correlations within the same inequality (that was precisely the point of Bell's original objection to von Neumann's proof).

Ah, well, I could have helped them out back then :smile: :smile:. It is exactly the misunderstanding of what QM predicts that I tried to point out. The very supposition that the first two series b[n] and b'[n] should have anything to do with the result of the third experiment means 1) that one didn't understand what QM said and didn't say, but also 2) that one supposes these series were drawn from a master distribution which would have yielded the same b[n] and b'[n] if we had happened to choose to measure those instead of (a,b) and (a,b'). This supposition (which comes down to saying that the <b1[n], b'2[n]> correlation -- which, I repeat, is not uniquely defined by Sica's procedure -- has ANYTHING to do with whatever would be measured when performing the (b,b') experiment) IS BY ITSELF a hidden-variable hypothesis.

cheers,
Patrick.
 
  • #48
Yes, that was indeed the first question I asked you: to me, the projection postulate IS the Born rule.

In the conventional textbook QM measurement axiomatics, the core empirical essence of the original Born rule (as Schroedinger interpreted it in his founding papers; Born introduced a related rule in a footnote for interpreting scattering amplitudes) is hopelessly entangled with the Hilbert space observables, projectors, Gleason's theorem, etc.

However, I thought you aimed at the subtle difference between calculating probabilities (Born rule) and the fact that AFTER the measurement, the state is assumed to be the eigenstate corresponding to the measurement result, and I thought it was the second part that you were denying,

Well, that part, the non-destructive measurement, is mostly von Neumann's high abstraction, which has little relevance or physical content (any discussion on that topic is largely a slippery semantic game arising from the overloading of the term "measurement" and the shifting of its meaning between preparation and detection -- the whole topic is empty). Where is photon A in Bell's experiment after it has triggered the detector? Or, for that matter, any photon after detection in any quantum optics experiment?

but accepting the absolute square of the inner product as the correct probability prediction.

That is an approximate and limited operational rule. Any claimed probability of detection ultimately has to be checked against the actual design and settings of the apparatus. Basically, the presumed linear response of an apparatus to Psi^2 is a linear approximation to a more complex non-linear response (e.g. check actual photodetector sensitivity curves; they are sigmoid, with only one section approximately linear). Talking of a "probability of detecting a photon" is somewhat misleading, often confusing, and less accurate than talking about the degree and nature of the response to an EM field.

In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, and subspaces. It lacks the time dimension, and thus the connection to the dynamics, which is its real origin and its ultimate justification and delimiter.

The reason it is detached from time and the dynamics is precisely in order to empower it with the magic capability of suspending the dynamics, producing the "measurement" result with such and such probability, then resuming the dynamics. And without ever defining how and when exactly this suspension occurs, or what restarts it and when, etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates. By the time a student gets through all of it, his mind will be too numbed and his eyes too glazed to notice that the emperor wears no trousers.

The reason why I say that you seem to deny superposition in its general sense is that without the Born rule (inner product squared = probability) the Hilbert space has no meaning.

It is a nice idea (linearization) and a useful tool taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverishing abstraction, the Hilbert space.

Superposition is as natural with linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The linearity of QM (or of QED) is an approximation to the more exact interaction between the matter fields and the EM field. Namely, the linearization arises from assuming that the EM fields are "external" (such as a Coulomb potential or external EM fields interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put Psi^2 (and its current) as the source terms in the Maxwell equations, obtaining thus a set of coupled non-linear PDEs. Of course, at the time and in that phase that was much too ambitious a project, and it never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics". That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (quantization amounts to linearizing the dynamics and then adding the non-linearities back via a perturbative expansion). He viewed the first quantization not as some fancy change of classical variables to operators, but as a replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter-field model, resolving thus the particle-field dichotomy (which was plagued with the point-particle divergences). Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.
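For concreteness, a sketch in LaTeX of the general shape of such a self-consistent coupling (non-relativistic matter field plus Maxwell in the Lorenz gauge, Gaussian units; the detailed forms in Jaynes' and Barut's papers differ, so take this only as the general idea):

    \[
      i\hbar\,\partial_t \Psi
        = \Bigl[\tfrac{1}{2m}\bigl(-i\hbar\nabla - \tfrac{e}{c}\mathbf{A}\bigr)^{2} + e\,\phi\Bigr]\Psi ,
      \qquad
      \Box A^{\mu} = \tfrac{4\pi}{c}\, j^{\mu}[\Psi] ,
    \]
    \[
      j^{0} = c\,e\,|\Psi|^{2},
      \qquad
      \mathbf{j} = \tfrac{e\hbar}{2mi}\bigl(\Psi^{*}\nabla\Psi - \Psi\nabla\Psi^{*}\bigr)
                   - \tfrac{e^{2}}{mc}\,\mathbf{A}\,|\Psi|^{2},
    \]

so the field sourced by Psi acts back on Psi, making the combined system non-linear even though each equation looks linear in isolation.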

On the other hand, I think the future will favor neither, but rather completely different new modelling tools (physical theories are models), more in tune with the technology -- such as Wolfram's automata, networks, etc. The Schroedinger, Dirac, and Maxwell equations can already be re-derived as macroscopic approximations of the dynamics of simple binary on/off automata (see, for example, some interesting papers by Garnet Ord). These kinds of tools are a hugely richer modelling medium than either PDEs or Hilbert space.

So if you deny me the possibility of using that rule on the product space, this means you deny the existence of that product space, and hence of the superpositions of states for which the result is not a product state.

I am only denying that this abstraction/generalization automatically applies that far, in such a simple-minded, uncritical manner, to the Bell experiment setup. Just calling all Hermitean operators observables doesn't make them so, much less at the "ideal" level. The least one needs to do when modelling Bell's setup in this manner is to include projectors onto the no_coincidence and no_detection subspaces (these would come from the orbital degrees of freedom), so that the prediction has some contact with reality, instead of bundling all the unknowns into the engineering parameter "efficiency" so the ignorance can be swept under the rug, while wishfully and immodestly labeling the toy model "ideal". What a joke. The most ideal model is the one that best predicts what actually happens (which is 'no violation'), not the one which makes the human modeler appear in the best light or most in control of the situation.
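As a side illustration of why the no_detection part matters, here is a minimal Monte Carlo sketch of a well-known detection-loophole toy model (in the spirit of Pearle 1970 and Gisin & Gisin 1999, not anything from the papers under discussion; the names are only illustrative): a purely local model in which one side sometimes fails to fire, yet the correlation computed over coincidences alone comes out as -cos(theta), i.e. exactly the "ideal" QM value.

    import numpy as np

    rng = np.random.default_rng(0)

    def coincidence_correlation(a, b, n=200_000):
        """Local hidden-variable toy model with a detection loophole.
        a, b: unit-vector analyzer settings.  Returns the correlation over
        'coincidences' only, plus the fraction of trials kept."""
        lam = rng.normal(size=(n, 3))
        lam /= np.linalg.norm(lam, axis=1, keepdims=True)   # hidden variable: random unit vector
        A = np.sign(lam @ a)                                # Alice's predetermined local outcome
        B = -np.sign(lam @ b)                               # Bob's predetermined local outcome
        fired = rng.random(n) < np.abs(lam @ a)             # Alice's detector fires with prob |a.lambda|
        return float(np.mean(A[fired] * B[fired])), float(fired.mean())

    theta = np.pi / 3
    a = np.array([0.0, 0.0, 1.0])
    b = np.array([np.sin(theta), 0.0, np.cos(theta)])
    E, kept = coincidence_correlation(a, b)
    print(E, -np.cos(theta), kept)   # E ~ -cos(theta) ~ -0.5, with only ~50% of trials kept

In this toy model the unselected ensemble of course satisfies the inequalities; it is the discarding of the no-detection trials, hidden inside "efficiency", that manufactures the "ideal" correlations.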
 
  • #49
nightlight said:
... (the 3N-coordinate relativistic Dirac-Maxwell equations for the full system, including the apparatus) to a prediction which prohibits any purely local PDE-based mechanism from reproducing that prediction. (You never explained how local PDEs can do it without a suspension of the dynamics, except by trying to throw into the equations, as a premise, the approximate, explicitly non-local/non-relativistic instantaneous potentials.)

I don't know what the 3N-coordinate relativistic Dirac-Maxwell equations are. It sounds vaguely like the stuff an old professor tried to teach me instead of QFT. True QFT cannot be described -- as far as I know -- by any local PDE; it should fit into a Hilbert-space formalism. But I truly think you do not need to go relativistic in order to talk about Bell's stuff. In fact, the space-like separation is nothing very special to me. As I said before, it is just an extreme illustration of the simple superposition + Born rule case you find in almost all QM applications. So all this Bell stuff should be explainable in simple non-relativistic theory, because exactly the same mechanisms are at work when you calculate atomic structure, do solid-state physics, or the like.

cheers,
Patrick.
 
  • #50
nightlight said:
In the usual Hilbert space formulation, the Born rule is a static, geometric property of vectors, projectors, and subspaces. It lacks the time dimension, and thus the connection to the dynamics, which is its real origin and its ultimate justification and delimiter.

You must be kidding. The time evolution is in the state in Hilbert space, not in the Born rule itself.

And without ever defining how and when exactly this suspension occurs, or what restarts it and when, etc. It is so much easier to forget about time and dynamics if you smother them with ambiguous verbiage ("macroscopic" and other such obfuscations) and vacuous but intricate geometric postulates.

Well, the decoherence program has something to say about this. I don't know if you are aware of it.

It is a nice idea (linearization) and a useful tool taken much too far. The actual PDE and integral-equation formulations of the dynamics are a mathematically much richer modelling medium than their greatly impoverishing abstraction, the Hilbert space.

Superposition is as natural with linear PDEs and integral equations as it is with Hilbert space. On the other hand, the linearity is almost always an approximation. The linearity of QM (or of QED) is an approximation to the more exact interaction between the matter fields and the EM field.

Ok, this is what I was claiming all along: you DO NOT ACCEPT the superposition of states in quantum theory. In quantum theory the linearity of that superposition (in the time evolution and in a single time slice) is EXACT; this is its most fundamental hypothesis. So you shouldn't say that you accept QM "except for the projection postulate". You are assuming "semiclassical field descriptions".


Namely, the linearization arises from assuming that the EM fields are "external" (such as a Coulomb potential or external EM fields interacting with the atoms) and that the charge currents giving rise to the quantum EM fields are external. Schroedinger's original idea was to put Psi^2 (and its current) as the source terms in the Maxwell equations, obtaining thus a set of coupled non-linear PDEs.

I see, that's indeed semiclassical. This is NOT quantum theory, sorry. In QED, and even more so in non-abelian theories, you indeed have a non-linear classical theory, but the quantum theory is completely linear.

Of course, at the time and in that phase that was much too ambitious a project, and it never got very far. It was only in the late 1960s that Jaynes picked up Schroedinger's idea and developed the somewhat flawed "neoclassical electrodynamics".

Yeah, what I said above. Ok, this puts the whole discussion in another light.

That was picked up in the mid-1980s by Asim Barut, who worked out the more accurate "self-field electrodynamics", which reproduces not only QM but also the leading radiative corrections of QED, without ever quantizing the EM field (quantization amounts to linearizing the dynamics and then adding the non-linearities back via a perturbative expansion). He viewed the first quantization not as some fancy change of classical variables to operators, but as a replacement of the Newtonian-Lorentz particle model with a Faraday-Maxwell type matter-field model, resolving thus the particle-field dichotomy (which was plagued with the point-particle divergences). Thus for him (or for Jaynes) the field quantization was unnecessary, non-fundamental, at best a computational linearization procedure.

Such semiclassical models are used all over the place, for example to calculate effective potentials in quantum chemistry. I know. But I consider them just computational approximations to the true quantum theory behind them, while you take the opposite view.

You tricked me into this discussion because you said that you accepted ALL OF QUANTUM THEORY except for the projection, so I was shooting at the wrong target; nevertheless, it occurred to me several times that you were actually denying the superposition principle, which is at the heart of QM. Now you have confessed :-p :-p

cheers,
Patrick.
 