Young's Experiment: Exploring Wave-Particle Duality

  • Thread starter Cruithne
  • Start date
  • Tags
    Experiment
In summary: This article discusses a phenomenon reported for Young's experiment that contradicts what is said in other accounts of the experiment and is not supported by the experiments themselves. The mystery of the experiment is that even when light is treated as individual particles (photons), it still produces behaviour that implies it is acting as a wave. The article also suggests that the interference patterns produced were not the result of any observations.
  • #1
Cruithne
I was recently handed a link by a friend after a discussion about wave-particle duality that seems to me to be incorrect in certain areas. This is the link: http://www.jracademy.com/~jtucek/science/exp.html

I'll have a quick go at dissecting parts of the page.

If light were just a particle, and you were able to send just one photon through (Fig 1.3), then there would be no pattern on the screen, just a single point of light. However, it has been found that even if just one photon is sent through, it creates the same interference pattern, although dimmer

Well, this seems basically true; however, I understood that given a large enough sample of single photons the interference pattern would be indistinguishable from that produced by a continuous light source. It would be dimmer with a smaller sample of photons. However, the next statement seems to contradict this...

If the light is measured, or observed, in between the screen and the second barrier, no interference pattern is formed. Instead, there is the most intense light in between the two slits, which gets dimmer as it progresses away

This seems to go against every other example of Young's experiment I've come across. What the author seems to be describing is the result of closing one of the slits. With both slits open an interference pattern will eventually appear over time...

This phenomenon is one of the basic principles of quantum physics, the Heisenberg Uncertainty Principle

I'm not sure why!

If light is not being observed, it acts as a wave, but if it is being observed, it has to behave itself and act like particles

This just strikes me as wrong. The essential mystery of Young's experiment is that even when treating light as individual particles (photons) the light still produces behaviour that would imply it is acting as if it is a wave. This statement also seems to suggest that the interference patterns produced were not the result of any observations :shy: And then there's the fact that observing light's behaviour in other circumstances shows it acting like a wave (e.g. diffraction and polarisation).

Could anyone clear up my confusion please?

(Hi btw :) I've been lurking for a while but hadn't signed up...)
 
  • #2
The great mystery of QM: if you do not know which hole the particle went through, it went through both and interference is produced. That is only if you cannot know which hole by any means, because if there is some kind of detector forcing the particle to go through only one hole, even when nobody watches the result of the detector, the interference is broken. I think you have understood everything; you are just confused by this mystery. We all are. When nobody touches a particle, it behaves like a wave. When you try to catch a particle, you indeed get a corpuscle.
 
  • #3
Cruithne said:
I was recently handed a link by a friend after a discussion about wave-particle duality that seems to me to be incorrect in certain areas. This is the link: http://www.jracademy.com/~jtucek/science/exp.html

The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.
 
  • #4
ZapperZ said:
The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.

Detail from that page:

To generate the coherent light needed for this experiment a He-Ne laser was used, with an average wavelength of 632.8nm.

The problem is that even the most ideal laser light cannot show any non-classical effect here. Namely, if you were to place one detector behind each slit, a trigger by detector A does not exclude a trigger by detector B, since the laser light has a Poisson photon-number distribution -- each detector triggers on its own whether or not the other one triggered (no collapse occurs). There is an old theorem of Sudarshan on this very question (the result also appears in a more elaborate paper from that same year by Roy Glauber, which is one of the foundations of Quantum Optics):

E.C.G. Sudarshan "The equivalence of semiclassical and quantum mechanical descriptions of statistical light beams" Phys. Rev. Lett., Vol 10(7), pp. 277-279, 1963.

Roy Glauber "The Quantum Theory of Optical Coherence" Phys. Rev., Vol 130(6), pp. 2529-2539, 1963.

Sudarshan shows that correlations among the trigger counts of any number of detectors are perfectly consistent with classical wave picture, i.e. you can think of a detector as simply thresholding the energy of the superposed incoming wave packet fragment (A or B) with the local field fluctuations, and triggering (or not triggering) based on these purely local causes, regardless of what the other detector did.

Thus there is nothing in these experiments that would surprise a 19th century physicist (other than the technology itself). The students are usually confused by the loose claim that there is a "particle" which always goes one way or the other. If one thinks of two equal wave fragments and detector thresholding (after superposition with local field fluctuations), there is nothing in the experiment that is mysterious.
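
To make the threshold-detector picture above concrete, here is a rough Monte Carlo sketch (my own illustration, not taken from the Sudarshan or Glauber papers; the intensity, noise and threshold numbers are arbitrary assumptions). Each trial splits a classical wave packet equally between the two slits, adds independent local noise at each detector, and fires a detector when its local energy exceeds a threshold; all four trigger combinations (0,0), (0,1), (1,0), (1,1) occur, and a trigger at A does not suppress a trigger at B.

Code:
import random
from collections import Counter

def run(trials=100_000, intensity=1.0, noise=0.35, threshold=0.7):
    """Toy threshold-detector model: equal wave fragments at A and B plus local noise."""
    counts = Counter()
    for _ in range(trials):
        half = intensity / 2                              # each slit gets half the wave energy
        a = half + random.gauss(0.0, noise) > threshold   # detector A fires on local energy alone
        b = half + random.gauss(0.0, noise) > threshold   # detector B fires independently of A
        counts[(int(a), int(b))] += 1
    return counts

print(run())  # all four (A, B) combinations appear, with no anticorrelation between A and B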

Even the much stricter non-classicality tests, such as Bell's inequality experiments, are still fully explicable with this kind of simple classical model (usually acknowledged via euphemisms: the "detection loophole" or the "fair sampling loophole"). You can check the earlier thread here where I posted more details and references, along with the discussions.
 
  • #5
nightlight said:
ZapperZ said:
The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.

Detail from that page:

To generate the coherent light needed for this experiment a He-Ne laser was used, with an average wavelength of 632.8nm.

The problem is that even the most ideal laser light cannot show any non-classical effect here. Namely, if you were to place one detector behind each slit, a trigger by detector A does not exclude a trigger by detector B, since the laser light has a Poisson photon-number distribution -- each detector triggers on its own whether or not the other one triggered (no collapse occurs). There is an old theorem of Sudarshan on this very question (the result also appears in a more elaborate paper from that same year by Roy Glauber, which is one of the foundations of Quantum Optics):

E.C.G. Sudarshan "The equivalence of semiclassical and quantum mechanical descriptions of statistical light beams" Phys. Rev. Lett., Vol 10(7), pp. 277-279, 1963.

Roy Glauber "The Quantum Theory of Optical Coherence" Phys. Rev., Vol 130(6), pp. 2529-2539, 1963.

Sudarshan shows that correlations among the trigger counts of any number of detectors are perfectly consistent with classical wave picture, i.e. you can think of a detector as simply thresholding the energy of the superposed incoming wave packet fragment (A or B) with the local field fluctuations, and triggering (or not triggering) based on these purely local causes, regardless of what the other detector did.

Thus there is nothing in these experiments that would surprise a 19th century physicist (other than the technology itself). The students are usually confused by the loose claim that there is a "particle" which always goes one way or the other. If one thinks of two equal wave fragments and detector thresholding (after superposition with local field fluctuations), there is nothing in the experiment that is mysterious.

Even the much stricter non-classicality tests, such as Bell's inequality experiments, are still fully explicable with this kind of simple classical model (usually acknowledged via euphemisms: the "detection loophole" or the "fair sampling loophole"). You can check the earlier thread here where I posted more details and references, along with the discussions.

I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves? What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The detection loophole is well-known. If we had no such loophole, the EPR-type experiment would be a done deal. It is why we have to deal with the statistics of a large number of data points to be able to know how many standard deviations the results deviate from classical predictions. If we were not encumbered by such a loophole, we could in principle just do one measurement and be done with it.

Zz.
 
  • #6
ZapperZ said:
I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves?

I am saying that nothing in the experiment shows that one has to imagine a particle, and thus the common "paradoxical" description is misleading. All their frame-by-frame pictures show is precisely the kind of discretization a 19th century physicist would expect to see if a detector thresholds the energy of an incoming, perfectly classical wave packet superposed with the local field fluctuations.

It does not show (and can't show since it isn't true) what is commonly claimed or hinted at in popular or "pedagogical" literature, which is that if you were to place two detectors, A and B, one behind each slit, a trigger of A automatically excludes a trigger of B (which would be particle-like behavior, called "collapse" of the wave function or the projection postulate in QM "measurement theory"). You get all 4 combinations of triggers (0,0), (0,1), (1,0) and (1,1) of (A,B). That is the prediction of Quantum Optics (see the Sudarshan & Glauber papers) and also what the experiment shows. No wave collapse of the B-wave fragment occurs when detector A triggers. The data and the Quantum Optics prediction here are perfectly classical.

What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The double-slit experiment doesn't show anything particle-like (the apparent discreteness is an artifact of detector trigger decision thresholding/discretization, which is the point of Sudarshan's theorem).

Of course, you can simulate any wave field phenomena as a macroscopic/collective effect of many particles at a finer scale. Similarly, you can simulate particle behaviors with microscopic wave fields in wave packets.

Whether fundamental entities are particles or waves has nothing to do with the double-slit experiment claims (in "pedagogical" and popular literature) -- no dual nature is shown by the experiment. All that is shown is consistent with a discretized detection of a wave phenomenon.


The detection loophole is well-known. If we had no such loophole, the EPR-type experiment would be a done deal. It is why we have to deal with the statistics of a large number of data points to be able to know how many standard deviations the results deviate from classical predictions. If we were not encumbered by such a loophole, we could in principle just do one measurement and be done with it.

That's an incorrect characterization. The standard deviations have no relation to the detection or the fair-sampling "loophole" -- they could have a million times as many data points and a thousand times the standard-deviation "accuracy" without touching the main problem (that 90% of the data isn't measured and that they assume ad hoc certain properties of the missing data). Check the earlier thread where this was discussed in detail and with references.
 
  • #7
nightlight said:
ZapperZ said:
I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves?

I am saying that nothing in the experiment shows that one has to imagine a particle, and thus the common "paradoxical" description is misleading. All their frame-by-frame pictures show is precisely the kind of discretization a 19th century physicist would expect to see if a detector thresholds the energy of an incoming, perfectly classical wave packet superposed with the local field fluctuations.

I didn't realize that this thread was about answering the validity of the photon picture. I was responding to the confusion brought about by the original link in the first posting of this thread.

It does not show (and can't show since it isn't true) what is commonly claimed or hinted at in popular or "pedagogical" literature, which is that if you were to place two detectors, A and B, one behind each slit, a trigger of A automatically excludes a trigger of B (which would be particle-like behavior, called "collapse" of the wave function or the projection postulate in QM "measurement theory"). You get all 4 combinations of triggers (0,0), (0,1), (1,0) and (1,1) of (A,B). That is the prediction of Quantum Optics (see the Sudarshan & Glauber papers) and also what the experiment shows. No wave collapse of the B-wave fragment occurs when detector A triggers. The data and the Quantum Optics prediction here are perfectly classical.

What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The double-slit experiment doesn't show anything particle-like (the apparent discreteness is an artifact of detector trigger decision thresholding/discretization, which is the point of Sudarshan's theorem).

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double slit expt.? See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean that it is correct. Again, my question was, are there ANY other experiments that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

Zz.
 
  • #8
ZapperZ said:
I didn't realize that this thread was about answering the validity of the photon picture.

It got there after some discussion of the Bell inequality experiments.

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double slit expt.?

I am saying that if you put separate detectors A and B at each slit you will not obtain the usually claimed detection exclusivity that a single particle going through slit A or slit B would produce. The detector A trigger has no effect on the probability of a trigger of B. The usual claim is that when detector A triggers, the wave function in region B somehow collapses, making detector B silent for that try. That is not what happens. The triggers of B are statistically independent of the triggers on A on each "try" (e.g. if you open & close the light shutter quickly enough for each "try" so that on average a single event is detected per try).


See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

He is using standard scattering amplitudes, i.e. analyzing the behavior of an extended object, a wave, which spans both slits. Keep also in mind that you can affect the interference picture in a predictable manner by placing various optical phase delay devices on each path. That implies that the full phenomenon does involve two physical wave fragments propagating via separate paths, interacting with other objects along the way.

If you had a single particle always going via a single path, it would be insensitive to the relative phase delays of the two paths. The usual Quantum Optics solution is that the source produces a Poisson distribution of photons, with an average of 1 photon per try, although in each try there could be zero, one, two, three... etc. photons. That kind of "particle" picture can account for these phase delay phenomena on two paths, but that is what makes it equivalent to the classical picture as well.
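
As a quick check of the Poisson point above (my own illustrative numbers, using only the standard Poisson formula rather than anything specific to these experiments), a source averaging one photon per try gives a sizable fraction of tries with zero photons and with two or more, so "exactly one particle per try" is not what such a source provides:

Code:
import math

mean = 1.0  # average photons per try, as in the usual attenuated-laser description

def poisson(n, mu=mean):
    """Probability of exactly n photons in a try for a Poissonian source."""
    return math.exp(-mu) * mu**n / math.factorial(n)

print(f"P(0 photons) = {poisson(0):.3f}")                     # ~0.368
print(f"P(1 photon)  = {poisson(1):.3f}")                     # ~0.368
print(f"P(2 or more) = {1 - poisson(0) - poisson(1):.3f}")    # ~0.264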


So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean that it is correct. Again, my question was, are there ANY other experiments that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

You don't have a picture of precisely one particle on each try producing the full set of double-slit phenomena (including replicating the interference effects of the separate phase delays on each path). You can have a picture of "particles" provided you also assume that the particle number is not controllable, and that it is uncontrollable to exactly such a degree that the detector triggers are precisely the same as if a simple wave had split into two equal parts, each of which triggers its own detector independently of the other.

Thus the particle model with the uncontrollable particle number is a more redundant explanation, since you need a separate rule or model to explain why the particle number is uncontrollable in exactly such a way as to mimic the wave behavior in the detector trigger statistics.


The double-slit experiment is a weak criterion (from the early days of QM) for deciding the question. Bell's experiment was supposed to provide a sharper criterion, but so far it hasn't supported the "collapse" (of the two-particle state).
 
  • #9
nightlight said:
ZapperZ said:
I didn't realize that this thread was about answering the validity of the photon picture.

It got there after some discussion of the Bell inequality experiments.

Sorry? I could have sworn the original question was on the double slit experiment, and that was what the webpage I replied with was also demonstrating. Where did the Bell inequality come in?

So you are asserting that if I stick to the photon picture, I cannot explain all the so-called wavelike observations as in the double slit expt.?

I am saying that if you put separate detectors A and B at each slit you will not obtain the usually claimed detection exclusivity that a single particle going through slit A or slit B would produce. The detector A trigger has no effect on the probability of a trigger of B. The usual claim is that when detector A triggers, the wave function in region B somehow collapses, making detector B silent for that try. That is not what happens. The triggers of B are statistically independent of the triggers on A on each "try" (e.g. if you open & close the light shutter quickly enough for each "try" so that on average a single event is detected per try).


See T. Marcella, Eur. J. Phys., v.23, p.615 (2002).

He is using standard scattering amplitudes, i.e. analyzing the behavior of an extended object, a wave, which spans both slits. Keep also in mind that you can affect the interference picture in a predictable manner by placing various optical phase delay devices on each path. That implies that the full phenomenon does involve two physical wave fragments propagating via separate paths, interacting with other objects along the way.

If you had a single particle always going via a single path, it would be insensitive to the relative phase delays of the two paths. The usual Quantum Optics solution is that the source produces a Poisson distribution of photons, with an average of 1 photon per try, although in each try there could be zero, one, two, three... etc. photons. That kind of "particle" picture can account for these phase delay phenomena on two paths, but that is what makes it equivalent to the classical picture as well.


So now we have, at best, the same set of experiments with two different explanations. Just because the wavelike picture came first doesn't mean it is right, and just because the photon picture came later doesn't mean that it is correct. Again, my question was, are there ANY other experiments that can clearly distinguish between the two and pick out where one deviates from the other? This, you did not answer.

You don't have a picture of precisely one particle on each try producing the full set of double-slit phenomena (including replicating the interference effects of the separate phase delays on each path). You can have a picture of "particles" provided you also assume that the particle number is not controllable, and that it is uncontrollable to exactly such a degree that the detector triggers are precisely the same as if a simple wave had split into two equal parts, each of which triggers its own detector independently of the other.

Thus the particle model with the uncontrollable particle number is a more redundant explanation, since you need a separate rule or model to explain why the particle number is uncontrollable in exactly such a way as to mimic the wave behavior in the detector trigger statistics.


The double-slit experiment is a weak criterion (from the early days of QM) for deciding the question. Bell's experiment was supposed to provide a sharper criterion, but so far it hasn't supported the "collapse" (of the two-particle state).

You lost me in this one. What detectors? The interference phenomena as described with photons/electrons/neutrons/etc. are NOT about these "particles", but rather the superposition of all the possible paths! It isn't the issue of one particle going through either slit, it's the issue of the possible paths interfering, creating the often misleading impression that a single particle is interfering with itself. A single-particle interference is NOT the same as a 2-particle interference.

Again, I have NO IDEA how this thread degenerated into a question of the validity of photons.

If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).

The abstract is in one of my postings in my Journals section. If you believe the analysis and conclusions are faulty, please send either a rebuttal or a follow-up paper to AJP. This isn't PRL or Science or Nature, so it shouldn't be as difficult to get published there. THEN we'll talk.

Zz.
 
  • #10
Quick question about non-locality

nightlight said:
ZapperZ said:
I'm not sure what your point is here. Are you trying to say that there are no photons? Or are you trying to convey that the interference phenomena are purely due to classical waves?

I am saying that nothing in the experiment shows that one has to imagine a particle, and thus the common "paradoxical" description is misleading. All their frame-by-frame pictures show is precisely the kind of discretization a 19th century physicist would expect to see if a detector thresholds the energy of an incoming, perfectly classical wave packet superposed with the local field fluctuations.

It does not show (and can't show since it isn't true) what is commonly claimed or hinted at in popular or "pedagogical" literature, which is that if you were to place two detectors, A and B, one behind each slit, a trigger of A automatically excludes a trigger of B (which would be particle-like behavior, called "collapse" of the wave function or the projection postulate in QM "measurement theory"). You get all 4 combinations of triggers (0,0), (0,1), (1,0) and (1,1) of (A,B). That is the prediction of Quantum Optics (see the Sudarshan & Glauber papers) and also what the experiment shows. No wave collapse of the B-wave fragment occurs when detector A triggers. The data and the Quantum Optics prediction here are perfectly classical.

What if I can show that ALL wave phenomena of light can also be described via the photon picture? Then what? Is there an experiment that can clearly distinguish between the two? (This is not a trick question since I have already mentioned it a few times.)

The double-slit experiment doesn't show anything particle-like (the apparent discreteness is an artifact of detector trigger decision thresholding/discretization, which is the point of Sudarshan's theorem).

Of course, you can simulate any wave field phenomena as a macroscopic/collective effect of many particles at a finer scale. Similarly, you can simulate particle behaviors with microscopic wave fields in wave packets.

Whether fundamental entities are particles or waves has nothing to do with the double-slit experiment claims (in "pedagogical" and popular literature) -- no dual nature is shown by the experiment. All that is shown is consistent with a discretized detection of a wave phenomenon.


The detection loophole is well-known. If we had no such loophole, the EPR-type experiment would be a done deal. It is why we have to deal with the statistics of a large number of data points to be able to know how many standard deviations the results deviate from classical predictions. If we were not encumbered by such a loophole, we could in principle just do one measurement and be done with it.

That's an incorrect characterization. The standard deviations have no relation to the detection or the fair-sampling "loophole" -- they could have a million times as many data points and a thousand times the standard-deviation "accuracy" without touching the main problem (that 90% of the data isn't measured and that they assume ad hoc certain properties of the missing data). Check the earlier thread where this was discussed in detail and with references.


Hello nightlight,

I have read several of your postings as suggested and think that you have thought the points through well. I also, however, like ZapperZ's attitude. To paraphrase him: "Well, what is the big point whether or not classical and quantum mechanics show the same result in these particular experiments?".

For me the really interesting thing is the answer to the following: In your explanations and works that you refer to, is there a faster than light entanglement, or not?

I hope that you say "yes", because then you will save me the time of comparing sophisticated classical arguments (some of which I recognized myself) with standard quantum-mechanical arguments.

Roberth
 
  • #11
If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).


There is a semiclassical model (Stochastic Optics) of the PDC sources which J.J. Thorn had used (just as there are semiclassical models for regular laser and thermal sources, as has been known since the Sudarshan-Glauber results of 1963).

Therefore the detection statistics and correlations for any number of detectors (and any number of optical elements) for the field from such a source can always be replicated exactly by a semi-classical model. What euphemisms these latest folks have used for their particular form of ad-hockery for the missing data, to make what's left look absolutely non-classical, is of as much importance as trying to take apart the latest claimed perpetuum mobile device or random data compressor.

That whole "Quantum Mystery Cult" is a dead horse of no importance to anybody or anything outside that particular tiny mutual back-patting society. That parasitic branch of pseudo-physics has never produced anything but 70+ years of (very) loud fast-talking to bedazzle the young and ignorant.

Nothing -- no technology, no phenomenon, no day-to-day physics -- was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work. (See the thread mentioned earlier for the explanation of these statements.)
 
  • #12
ZapperZ said:
The source you are citing has major confusion about the subtle part of the experiment. Please refer to this one below and see if it is any clearer.

http://www.optica.tn.tudelft.nl/education/photons.asp

Zz.

Cheers, that's a much clearer description of the experiment. I'll pass it on :smile:
 
  • #13
nightlight said:
If you have a solid argument against it, then let me request that you read this paper:

J.J. Thorn et al., Am. J. Phys., v.72, p.1210 (2004).


There is a semiclassical model (Stochastic Optics) of the PDC sources which J.J. Thorn had used (just as there are semiclassical models for regular laser and thermal sources, as has been known since the Sudarshan-Glauber results of 1963).

Therefore the detection statistics and correlations for any number of detectors (and any number of optical elements) for the field from such a source can always be replicated exactly by a semi-classical model. What euphemisms these latest folks have used for their particular form of ad-hockery for the missing data, to make what's left look absolutely non-classical, is of as much importance as trying to take apart the latest claimed perpetuum mobile device or random data compressor.

That whole "Quantum Mystery Cult" is a dead horse of no importance to anybody or anything outside that particular tiny mutual back-patting society. That parasitic branch of pseudo-physics has never produced anything but 70+ years of (very) loud fast-talking to bedazzle the young and ignorant.

Nothing -- no technology, no phenomenon, no day-to-day physics -- was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work. (See the thread mentioned earlier for the explanation of these statements.)

I could say the same thing about the similar whining people always do about QM's photon picture, without realizing that if it were simply a cult not based on any form of validity, then it shouldn't WORK as well as it does (e.g. refer to the band structure of the very same semiconductors that you are using in your electronics and see how those were verified via photoemission spectroscopy).

If you think you are correct, then put your money where your mouth is and try to have it published in a peer-reviewed journal. Until you are able to do that, all your whining is nothing more than bitterness without substance.

Cheers!

Zz.
 
  • #14
nightlight said:

Nothing -- no technology, no phenomenon, no day-to-day physics -- was ever found to depend on or require in any way their imaginary "collapse/projection postulate" (or its corollary, Bell's theorem). The dynamical equations (of QM and QED/QFT) and the Born postulate are what do the work.


I wonder what difference you see between Born's postulate and the collapse postulate ? To me, they are one and the same thing: namely that, given a quantum state psi, a measurement of a physical quantity which corresponds to a self-adjoint operator A gives you the probability |<a_i |psi>|^2 of being in the state corresponding to eigenstate |a_i> of A, with value a_i for the physical quantity being measured. If you accept this, and you accept that measuring the same quantity twice in succession yields the second time the same result as the first time, this time with certainty, then where exactly is the difference between the Born rule (giving the probabilities) and the projection postulate ?

cheers,
Patrick.
 
  • #15
vanesch said:
I wonder what difference you see between Born's postulate and the collapse postulate ? To me, they are one and the same thing:

Unfortunately, the two are melded in the "pedagogical" expositions, so a student is left with the illusion that the projection postulate is an empirically essential element of the theory. The only working part of the so-called measurement theory is the operational Born rule (as a convenient practical shortcut, in the way Schroedinger originally understood his wave function) which merely specifies the probability of a detection event without imposing any new non-dynamical evolution (the collapse/projection) on the system state. The dynamical evolution of the state is never interrupted in some non-dynamical, mysterious way by such things as the human mind (as von Neumann, the originator of the QM Mystery cult, claimed) or in any other ad hoc fuzzy way.

What happens to the system state after the apparatus has triggered a macroscopic detection event is purely a matter of the specific apparatus design and it is in principle deducible from the design, initial & boundary conditions and the dynamic equations. Since the dynamical equations are local (ignoring the superficial non-locality of the limited non-relativistic approximations for potentials, such as V(r)=q/r) all changes to the state are local and continuous.

There is no coherent way to integrate the non-dynamical collapse into the system dynamics. There is only lots of dance and handwaving on the subject. When exactly do the dynamical equations get put on hold, how long are they held in suspension, and when do they resume? It doesn't matter, the teacher said. Well, something, somewhere has to know, since it would have to perform it.

How do you know that collapse occurs at all? Well, teacher said, since we cannot attribute a definite value to the position (what exactly is the position? position of what? the spread-out field?) before the measurement and have the value (a value of what? the location of the detector aperture? the blackened photo-grain? the electrode?) after the measurement, the definite value must have been created in a collapse which occurred during the measurement. Why can't there be values before the measurement? Well, von Neumann proved that it can't be done while remaining consistent with all QM predictions. Oops, sorry, that proof was shown invalid; it's the Kochen-Specker theorem which shows it can't be done (after Bohm produced the counter-example to von Neumann). Oops again, as Bell has shown, that one had the same kind of problem as von Neumann's "proof"; it's really Bell's theorem which shows it can't be done. And what does Bell's theorem use to show that there is a QM prediction which violates Bell's inequality? The projection postulate, of course.

So, to show that we absolutely need the projection postulate we use the projection postulate to deduce a QM prediction which violates Bell's inequality (and which no local hidden variable theory can violate). Isn't that a bit circular, a kind of cheating? That can't prove that we need the projection postulate.

Well, teacher said, this QM prediction was verified experimentally, too. It was? Well, yeah, it was verified, well, other than for some tiny loopholes. You mean the actual measured data hasn't violated Bell's inequality? It's that far off, over 90% of coincidence points missing and just hypothesized into the curve? All these decades, and still this big gap? Well, the gap just appears superficially large, a purely numerical artifact; its true essence is really small, though. It's just a matter of time till these minor technological glitches are ironed out.

Oh, that reminds me, Mr. Teacher, I think you will be interested in investing in this neat new device I happen to have in my backpack. It works great; it has 110% output energy vs. input energy. Yeah, it can do it, sure, here is the notebook which shows how. By the way, this particular prototype has a very minor, temporary manufacturing glitch which keeps it at 10% output vs. input just at the moment. Don't worry about it, the next batch will work as predicted.
 
  • #16
I could say the same thing about the similar whining people always do about QM's photon picture, without realizing that if it were simply a cult not based on any form of validity, then it shouldn't WORK as well as it does (e.g. refer to the band structure of the very same semiconductors that you are using in your electronics and see how those were verified via photoemission spectroscopy).

That's the point I was addressing -- you can take the projection/collapse postulate out of the theory, and it makes no difference for anything that has any contact with empirical reality. The only item that would fall would be Bell's theorem (since it uses the projection postulate to produce the alleged QM "prediction" which violates Bell's inequality). Since no actual experimental data has ever violated the inequality, there is nothing empirical that must be explained.

Since Bell's theorem on the impossibility of LHVs is the only remaining rationale for the projection postulate (after von Neumann's & Kochen-Specker's HV "impossibility theorems" were found empirically irrelevant), and since its proof uses the projection postulate itself in an essential way, the two are a closed circle with no connection or usefulness to anything but each other.


If you think you are correct, then put your money where your mouth is and try to have it published in a peer-reviewed journal. Until you are able to do that, all your whining is nothing more than bitterness without substance.

And who do you imagine might be a referee in this field who decides whether the paper gets published or not? The tenured professors and highly reputable physicists who founded entire branches of research (e.g. Trevor Marshall, Emilio Santos, Asim Barut, E.T. Jaynes,...) with hundreds of papers previously published in reputable journals could not get past the QM cult zealots to publish a paper which directly and unambiguously challenges the QM Mystery religion (Marshall calls them the "priesthood"). The best they would get is a highly watered-down version with key points edited or dulled out and any back-and-forth arguments spanning several papers cut off with the last word always going to the opponents.

Being irrelevant and useless, this parasitic branch will eventually die off on its own. After all, how many times can one dupe the money man with the magical quantum computer tales before he gets it and requests that they either show it working or go find another sucker?
 
  • #17
nightlight said:
And what does Bell's theorem use to show that there is a QM prediction which violates Bell's inequality? The projection postulate, of course.

Not to my understanding. He only needs the Born rule, no ? Bell's inequalities are just expressions of probabilities, which aren't satisfied by some probabilities predicted by QM. If you accept the Hilbert state description and the Born rule to deduce probabilities, that's all there is to it.

Let's go through a specific example, as given in Modern Quantum Mechanics, by Sakurai, paragraph 3.9. But I will adapt it so that you explicitly don't need any projection.

The initial state is |psi> = 1/sqrt(2) [ |z+>|z-> - |z->|z+> ] (1)
which is a spin singlet state.

(I take it that you accept that).

The probability to have an |a+>|b+> state is simply given (Born rule) by:

P(a,b) =|( <a+|<b+| ) |psi> |^2 = 1/2 | <a+|z+><b+|z-> - <a+|z-><b+|z+> |^2

Let us assume that a and b are in the xz plane, and a and b denote the angle with the z-axis.
In that case, <u+|z+> = cos (u/2) and <u+|z-> = - sin (u/2)

So P(a,b) = 1/2 | - cos(a/2)sin(b/2) + sin(a/2)cos(b/2) |^2

or P(a,b) = 1/2 { sin( (a-b)/2 ) }^2 (2)

So the probability to measure particle 1 in the spin-up state along a and particle 2 in the spin-up state along b is given by P(a,b) as given in (2) and we deduced this simply using the Born rule.

Now one of Bell's inequalities for probabilities if we have local variables determining P(a,b) is given by:

P(a,b) is smaller than or equal to P(a,c) + P(c,b).

Fill in the formula (2), and we should have:

sin^2((a-b)/2) <= sin^2((a-c)/2) + sin^2((c-b)/2)

Now, take a = 0 degrees, b = 90 degrees, c = 45 degrees,

sin^2(45) <= ? sin^2(22.5) + sin^2(22.5)

0.5 <= ? 0.292893... which is false: the inequality is violated.

See, I didn't need any projection as such...
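
For readers who want to check the arithmetic, here is a small numerical sketch of the calculation above (it uses only the singlet probability P(a,b) = 1/2 sin^2((a-b)/2) and the example angles from this post; nothing beyond that is assumed):

Code:
import math

def P(a_deg, b_deg):
    """Singlet-state probability of 'up along a' AND 'up along b' (angles in degrees)."""
    return 0.5 * math.sin(math.radians(a_deg - b_deg) / 2) ** 2

a, b, c = 0.0, 90.0, 45.0
lhs = P(a, b)                 # 0.25
rhs = P(a, c) + P(c, b)       # ~0.1464 (the factors of 1/2 are kept here, unlike in the line above)
print(f"P(a,b) = {lhs:.4f},  P(a,c) + P(c,b) = {rhs:.4f}")
print("Bell inequality P(a,b) <= P(a,c) + P(c,b) satisfied?", lhs <= rhs)   # False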

cheers,
Patrick.
 
  • #18
vanesch said:
Not to my understanding.

The state of subsystem B, which is (in the usual pedagogical description) a mixed state, 1/2 |+><+| + 1/2 |-><-|, becomes a pure state |+> for the sub-ensemble of B for which we get the (-1) result on A. This type of composite-system measurement treatment and the sub-system state reduction are the consequences of the projection postulate -- the reasoning is an exact replica of von Neumann's original description of the measured system and the apparatus, where he introduced the projection postulate along with the speculation that it was the observer's mind which created the collapse. Without the collapse in this model the entangled state remains entangled, since the unitary evolution cannot, in this scheme of measurement, pick out of the superposition the specific outcome or a pure resulting sub-ensemble.

Of course, there is a grain of truth in the projection. There is a correlation of cos^2(theta) buried in the coincidence counts, as Glauber's Quantum Optics multi-point correlation functions or the actual Quantum Optics experiments show. In the terminology of photons, the problem is when one takes the particle picture literally and claims there is exactly one such "particle" (a member of a correlated pair) and that we're measuring properties of that one particle. In fact, the photon number isn't a conserved quantity in QED.

The fully detailed QED treatment of the actual Bell inequality experiments, which takes into account the detection process and the photon number uncertainty, would presumably, at least in principle, reproduce the correct correlations observed, including the actual registered coincidence counts, which don't violate Bell's inequality. The full events for the two detectors of subsystem B include: 1) no trigger on + or -, 2) (+) only trigger, 3) (-) only trigger, 4) (+) and (-) trigger. The pair coincidence counts then consist of all 16 combinations of possible outcomes.

The "pedagogical" scheme (and its von Neumann template) insists that only (2) and (3) are the "legitimate" single particle results and only their 4 combinations, out of 16 that actually occur, the "legitimate" pair events (I guess, since only these fall within its simple-minded approach), while labeling euphemistically (1) and (4), and the 12 remaining pair combinations, which are outside of the scheme, as artifacts of the technological "non-ideality" to be fixed by the future technological progress. The skeptics are saying that it is the "pedagogical" scheme (the von Neumann's collapse postulate with its offshoots, the measurement theory and Bell's QM "prediction" based on it) itself that is "non-ideal" since it doesn't correspond to anything that actually exists, and it is the eyesore which needs fixing.
 
  • #19
nightlight said:
The "pedagogical" scheme (and its von Neumann template) insists that only (2) and (3) are the "legitimate" results (I guess, since only these two fall within its simple-minded approach), while labeling euphemistically (1) and (4), which are outside of the scheme, as artifacts of the technological "non-ideality" to be fixed by the future technological progress.

I don't understand what you are trying to point out. Do you accept, or not, that superpositions of the type |psi> = 1/sqrt(2) (|+>|-> - |->|+>) can occur in nature, where the first and the second kets refer to systems that can be separated by a certain distance ?

If you don't, you cannot say that you accept QM and its dynamics and Born's rule, no ? If you do, I do not need anything else to show that Bell's inequality is violated, and especially, I do not need the projection postulate.
In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with. No projection involved.

I know that with visible photon detection, there are some issues with quantum efficiency. But hey, the scheme is more general, and you can take other particles if you want to. Your claim that quantum mechanics, with the Born rule, but without the projection postulate, does not violate Bell's inequalities is not correct, as I demonstrated in my previous message.

cheers,
patrick.
 
  • #20
vanesch said:
I don't understand what you are trying to point out. Do you accept, or not, that superpositions of the type |psi> = 1/sqrt(2) (|+>|-> - |->|+>) can occur in nature, where the first and the second kets refer to systems that can be separated by a certain distance ?

I am saying that such |psi> for the entangled photons is a schematized back-of-the-envelope sketch, adequate for a heuristic toy model and not a valid model for any physical system. Its "predictions" don't match (not even closely) any actually measured data. To make it "match" the data, over 90% of the "missing" coincidence data points have to be hand-put into the "matching curve" under the "fair sampling" and other speculative conjectures (see the earlier discussion here with details and references on this point).

If you don't, you cannot say that you accept QM and its dynamics and Born's rule, no ? If you do, I do not need anything else to show that Bell's inequality is violated, and especially, I do not need the projection postulate.

You've got the Born rule conceptually melded with the whole measurement theory which came later. The original rule (which Born introduced as a footnote in a paper on scattering) I am talking about is meant in the sense Schroedinger used to interpret his wave function with: it is an operational shortcut, not a fundamental axiom of the theory. There is no non-dynamical change of state or fundamental probabilistic axiom in this interpretation -- the Psi evolves by dynamical equations at all times. All its changes (including any localization and the focusing effects) are due to the interaction with the apparatus. There are no fundamental probabilities or suspension and resumption of the dynamical evolution.

The underlying theoretical foundation that Schroedinger assumed is the interpretation of the |Psi(x)|^2 as a charge/matter density, or in the case of photons as the field energy density. The probability of detection is the result of the specific dynamics between the apparatus and the matter field (the same way one might obtain probabilities in classical field measurements). You can check the numerous papers and preprints of Asim Barut and his disciples which show how this original Schroedinger view can be consistently carried out for atomic systems, including the correct predictions of QED radiative corrections (their self-field electrodynamics, which was a refinement of the earlier "neoclassical electrodynamics" of E.T. Jaynes).


In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with.

And if that simple model of |psi> corresponds to anything real at all. I am saying it doesn't; it is a simple-minded toy model for an imaginary experiment. You would need to do the full QED treatment to make any prediction that could match the actual coincidence results (which don't even remotely violate Bell's inequality).

No projection involved.

Of course it does have projection (collapse). You've just got so used to the usual pedagogical omissions and shortcuts that you can't notice it any more. You simply need to include the apparatus in the dynamics and evolve the composite state to see that no (+-) or (-+) result occurs under the unitary evolution of the composite system until something collapses the superposition of the composite system (this is von Neumann's measurement scheme, which is the basis of the QM measurement theory). That is the state collapse that makes (+) or (-) definite on A and which induces the sub-ensemble state as |-> or |+> on B, which Bell's theorem uses in an essential way to assert that there is a QM "prediction" which violates his inequality.

The full system dynamics (of A, B plus the two polarizers and the 4 detectors) cannot produce, via unitary evolution of the full composite system, a pure state with a definite polarization of A and B, such as |DetA+>|A+>|DetB->|B->. It can produce only a superposition of such states. That's why von Neumann had to postulate the extra-dynamical collapse -- the unitary dynamics by itself cannot produce such a transition within his/QM measurement theory.
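
As a concrete toy illustration of this point (my own sketch of the standard von Neumann pre-measurement scheme, with a CNOT-style coupling standing in for "the detector reads out the spin"; none of this comes from the experiments discussed here), unitary evolution correlates each spin with its pointer but leaves the total state a superposition of the two outcome records rather than one definite result:

Code:
import numpy as np

up = np.array([1.0, 0.0])   # spin up; also pointer "ready" / pointer reads "+"
dn = np.array([0.0, 1.0])   # spin down; also pointer reads "-"

def kron(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Register order: spin A, pointer A, spin B, pointer B; both pointers start "ready".
initial = (kron(up, up, dn, up) - kron(dn, up, up, up)) / np.sqrt(2)

# Von Neumann coupling: each pointer flips iff its spin is "down" (a CNOT per pair).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
U = np.kron(CNOT, CNOT)      # acts on (spin A, pointer A) and (spin B, pointer B)
final = U @ initial

# Schmidt coefficients across the A-side / B-side cut: two equal nonzero values
# mean the post-interaction state is still a superposition of the two records,
# not a single definite outcome; only an extra (projection) rule would pick one.
coeffs = np.linalg.svd(final.reshape(4, 4), compute_uv=False)
print(np.round(coeffs, 3))   # ~[0.707, 0.707, 0.0, 0.0]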

Without this extra-dynamical global collapse, you only have A, B, the two polarizers and the four detectors evolving the superposition via purely local field interactions, incapable even in principle of yielding any prediction that excludes LHVs (since the unknown local fields are LHVs themselves). It is precisely this conjectured global extra-dynamical overall state collapse to a definite result which results in the apparent non-locality (no LHV) prediction. Without it, there is no such prediction.

I know that with visible photon detection, there are some issues with quantum efficiency.

This sounds nice and soft, like Microsoft marketing describing the latest "issues" with IE (the most recent in a never-ending stream of major security flaws).

Plainly speaking, the QM "prediction" of Bell's theorem which violates his inequality doesn't actually happen in real data. No coincidence counts have ever violated the inequality.

Your claim that quantum mechanics, with the Born rule, but without the projection postulate, does not violate Bell's inequalities is not correct, as I demonstrated in my previous message.

You seem unaware of how the projection postulate fits into the QM measurement theory, or maybe you don't realize that Bell's QM prediction is deduced using the QM measurement theory. All you have "demonstrated" so far is that you can superficially replay the back-of-the-envelope pedagogical cliches.
 
  • #21
nightlight said:
The underlying theoretical foundation that Schroedinger assumed is the interpretation of the |Psi(x)|^2 as a charge/matter density, or in the case of photons as the field energy density.

Ok, so you do not accept superpositions of states in a configuration space that describes more than one particle ; so you introduce a superselection rule here. BTW, this superselection rule IS assumed to be there for charged particles, but not for spins or neutral particles.

In the state |psi>, the state |+>|+> has coefficient 0, so probability 0 (Born) to occur, just as well as the state |->|->. So these are NOT possible outcomes of measurement if the state |psi> is the quantum state to start with.

And if that simple model of |psi> corresponds to anything real at all. I am saying it doesn't; it is a simple-minded toy model for an imaginary experiment. You would need to do the full QED treatment to make any prediction that could match the actual coincidence results (which don't even remotely violate Bell's inequality).

Well, that's a bit easy as a way out: "you guys are only doing toy QM. I'm doing 'complicated' QM and there, a certain theorem holds. But I can't prove it, it is just too complicated."

No projection involved.

Of course it does have projection (collapse). You've just got so used to the usual pedagogical omissions and shortcuts that you can't notice it any more.

Well, tell me where I use a collapse. I just use the Born rule, which states that the probability of an event is the absolute value squared of the inner product of the corresponding eigenstate and the state of the system. But I think I know what you are having a problem with. It is not the projection or collapse, it is the superposition in a tensor product of Hilbert spaces.



You simply need to include the apparatus in the dynamics and evolve the composite state to see that no (+-) or (-+) result occurs under the unitary evolution of the composite system until something collapses the superposition of the composite system (this is von Neumann's measurement scheme, which is the basis of the QM measurement theory). That is the state collapse that makes (+) or (-) definite on A and which induces the sub-ensemble state as |-> or |+> on B, which Bell's theorem uses in an essential way to assert that there is a QM "prediction" which violates his inequality.

But that's not true ! Look at my "toy" calculation. I do not need to "collapse" anything, I consider the global apparatus "particle 1 is measured along spin axis a and particle 2 is measured along spin axis b". This corresponds to an eigenstate <a|<b|. I just calculate the inner product and that's it.

That's why von Neumann had to postulate the extra-dynamical collapse -- the unitary dynamics by itself cannot produce such a transition within his/QM measurement theory.

Yes, I know, that's exactly the content of the relative-state interpretation.
But it doesn't change anything to the predicted correlations. As I repeat: you do not have a problem with collapse, but with superposition. And that's an essential ingredient in QM.

cheers,
Patrick.
 
  • #22
Well, that's a bit easy as a way out: "you guys are only doing toy QM. I'm doing 'complicated' QM and there, a certain theorem holds. But I can't prove it, it is just too complicated."

You're using a 2(x)2 D Hilbert space. That is a toy model, considering that the system space has about infinitely many times more dimensions, even before accounting for the quantum field aspect, which is ignored altogether (e.g. the indefiniteness of the photon number would give the application of Bell's argument here a problem). The ignored spatial factors are essential in the experiment. The spatial behavior is handwaved into the reasoning in an "idealized" (fictitious) form which goes contrary to the plain experimental facts (the "missing" 90 percent of coincidences) or any adequate theory of the setup ("adequate" in the sense of predicting these facts quantitatively).

A non-toy physical model ought to describe quantitatively what is being measured, not just what might be measured if the world were doing what you imagine it ought to be doing if it were "ideal".

Well, tell me where I use a collapse. I just use the Born rule, which states that the probability of an event is the absolute value squared of the inner product of the corresponding eigenstate and the state of the system.

Then you either don't know that you're using the QM measurement theory for the composite system here (implicit in the formulas for the joint event probabilities) or you don't know how any definite result occurs in the QM measurement theory (via the extra-dynamical collapse).

Just step back and think for a second here to see the blatantly self-contradictory nature of your claim (that you're not assuming any non-dynamical state evolution/collapse). Let's say you're correct: you only assume dynamical evolution of the full system state according to QM (or QED, ultimately), and you never claimed, or believed, that the system state stops following the dynamical evolution.

Now, look what happens if you account for the spatial degrees of freedom and follow (at least in principle) the multiparticle Schroedinger-Dirac equations for the full system (the lasers, the PDC crystals or atoms for the cascade, A, B, polarizers, detectors, computers counting the data). Taken all together, you have a gigantic set of coupled partial differential equations that fully describe what is going on once you give the initial and the boundary conditions (we can assume boundary conditions of 0, i.e. any subsystems which might interact are included in the "full" system already in the equations). We also cannot use the approximate non-relativistic instantaneous potentials/interactions, since that would be putting superficial non-locality by hand into the otherwise local dynamics. This excludes the non-relativistic, explicitly non-local point-particle models with instantaneous potentials (which are only an approximation for QED interactions), such as Bohm's formalism (which is equivalent to the non-relativistic QM formalism).

The fields described by these equations evolve purely locally (they're 2nd order or some such PDEs, linear or non-linear). Therefore these coupled fields following the local PDEs are a local "hidden" variable model all by themselves, if all that is happening is accounted for by these equations.

Yet, you claim that this very same dynamical evolution, without ever having to be suspended and amended by an extra-dynamical deus-ex-machina, yields a result which prohibits any local hidden variables from accounting for the outcome of that evolution.

A purely dynamical evolution via a set of local PDEs cannot yield a result which shows that purely local PDEs cannot describe the system. That's the basic self-contradiction of your claim "I am using no collapse."

You seem unaware of the reasoning behind the probabilistic rules of the QM measurement theory you're applying, which bridges the gap between the purely local evolution and the "no local hidden variables" prediction -- a suspension of the dynamics has to occur to make such a prediction possible, since the dynamics by itself is a local "hidden" variable theory. And that is the problem of the conventional QM measurement theory.

My starting point, several messages back, is that you can drop this suspension of dynamics (which has to be there to yield the alleged system evolution that contradicts local hidden variables), and nothing will be affected in the application of quantum theory to anything.

Nothing depends on it but Bell's theorem, which would have to be declared void (i.e. no such QM prediction exists if you drop the non-dynamical state collapse). Recall also that the only reason the non-dynamical collapse was introduced into quantum theory was to accommodate the earlier faulty proofs of the impossibility of any hidden variables (von Neumann's, Kochen-Specker's). The only remaining rationale is Bell's theorem, which seemingly prohibits any LHVs, so one needs some way to make the system produce measured values, since the classical view (of pre-existent values) could not work.

Thus the two, the non-dynamical evolution (collapse) and Bell's QM prediction, are a circular system -- Bell's prediction requires the non-dynamical evolution (via the QM measurement theory) for its deduction, and the reason we need any collapse at all is the alleged impossibility of pre-existent values for the variables/observables, a claimed impossibility which hinges solely on Bell's proof itself. Nothing else in physics requires either. The two are a useless parasitic attachment serving nothing but to support each other.
 
Last edited:
  • #23
nightlight said:
You're using a 2(x)2 D Hilbert space. That is a toy model considering that the system space has about infinitely many times more dimensions, even before accounting for the quantum field aspect, which is ignored altogether (e.g. the indefiniteness of the photon number would create a problem for applying Bell's argument here). The ignored spatial factors are essential in the experiment.

Well, you should know that in all of physics, making a model which only retains the essential ingredients is a fundamental part of it, and I claim that the essential part is the 2x2 D Hilbert space of spins. Of course, a common error also in physics is that an essential component is overlooked, and that seems to be what you are claiming. But then it is up to you to show me what that is. You now seem to imply that I should also carry with me the spatial parts of the wave function for both particles. OK, I can do that, but you can already guess that this will factor out if I take a specific model, such as a propagating gaussian bump for each one. But then you will claim that I don't take into account the specific interaction in the photocathode of that particular photomultiplier. I could introduce a simple model for it, but you won't accept that. So you are endlessly complicating the issue, so that in the end nothing can be said, and all claims are open. You are of course free to do so, but no advancement is made anywhere. Witches exist. So now it is up to you to do a precise calculation with the model you can accept as satisfying (it will always be a model, and you'll never describe completely all aspects) and show me what you get, which is different from what the simple 2x2D model gives you, after introducing finite efficiencies for both detectors.

You are making the accusation that I do not know what I'm using in QM measurement theory, but it was YOU who claimed that you accepted the Born rule (stating that the probability for an event to occur is given by |<a|psi>|^2). It is what I used, in my model.

cheers,
Patrick.
 
  • #24
nightlight said:
The fields described by these equations evolve purely locally (they're 2nd order or some such PDEs, linear or non-linear). Therefore these coupled fields following the local PDEs are a local "hidden" variable model all by themselves, if all that is happening is accounted for by these equations.

As a specific remark, the Schroedinger equation is not a local PDE equation. Well, it is local in configuration space, but it is not in real space, because the wave function is a function over configuration space. So nothing stops you from coupling remote coordinates.
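To be concrete, take the standard two-particle example with an instantaneous Coulomb term:

i*hbar dPsi(r1, r2, t)/dt = [ -hbar^2/(2 m1) Laplacian_1 - hbar^2/(2 m2) Laplacian_2 + q1*q2/(4*pi*eps0*|r1 - r2|) ] Psi(r1, r2, t).

The potential couples the two position arguments at the same instant t no matter how far apart they are: local in the 6-dimensional configuration space, but not in ordinary 3-space.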

cheers,
Patrick.
 
  • #25
vanesch said:
As a specific remark, the Schroedinger equation is not a local PDE equation. Well, it is local in configuration space, but it is not in real space, because the wave function is a function over configuration space. So nothing stops you from coupling remote coordinates.

The PDEs are non-local only if you use the non-local (non-relativistic) approximations for the interaction potentials, such as the non-relativistic Coulomb potential approximation. If you allow these potentials in, you don't need QM or Bell's theorem to show that no local hidden variable theory could reproduce the effects of such non-local potentials: as soon as you move particle A, its Coulomb potential at the position of a far away particle B changes and affects B. It's surprising to see such a non-sequitur even brought up after I had already excluded it upfront, explicitly and with colored emphasis.

The fully relativistic equations do not couple field values at the space-like separations. They are strictly local PDEs.
 
  • #26
nightlight said:
The fully relativistic equations do not couple field values at the space-like separations. They are strictly local PDEs.

Oh, but I agree with you that there is nothing spacelike going on. I had a few other posts here concerning that issue. Nevertheless, the correlations given by superpositions of systems which are separated in space are real, if you believe in quantum superposition, and the "toy" models you attack do give you the essential facts. You WILL see correlations in the statistics, and no, the fact that the RAW DATA do not come out that way is not something deep, but just due to finite efficiencies. A very simple model of the efficiencies of the detectors gives a correct description of the data taken.
My point (which I think is not very far from yours) is that one should only consider the measurement complete when the data are brought together (and hence are not separated spacelike) by one observer. Until that point, I do consider that the data of the other "observation" is in superposition.
But that is just interpretational. There's nothing very deep here. And no, I don't think you have to delve into QED (which is really opening a Pandora's box!) for such a simple system, because people use photons, but you could think just as well of electrons or whatever other combination. Only, the experiments are difficult to perform.

cheers,
Patrick.
 
  • #27
vanesch said:
Well, you should know that in all of physics, making a model which only retains the essential ingredients is a fundamental part of it, and I claim that the essential part is the 2x2 D Hilbert space of spins. ... But then it is up to you to show me what that is.

The essential part of the Bell's inequalities is that the counts which violate the inequality in the QM "prediction" include nearly all pairs (equivalent to requiring around 82% of setup efficiency). Otherwise if one allows lower setup efficiency, the counts will not violate the inequality. Thus the spatial aspects and an adequate model of detection are both essential to evaluate the maximum setup efficiency. If you can't predict from QM (or Quantum Optics) the setup efficiency sufficient to violate inequalities, you don't have a QM prediction but an idea, a heuristic sketch of a prediction. And that is the phase that this alleged QM "prediction" never outgrew. It is a back of the envelope hint for a prediction.
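(For reference, the usual figure behind that 82%: with the CHSH form of the inequality, a maximally entangled pair, equal detector efficiencies on both sides and no auxiliary assumptions, a violation requires eta > 2/(1 + sqrt(2)), roughly 0.83 -- the Garg-Mermin threshold.)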

You now seem to imply that I should also carry with me the spatial parts of the wave function for both particles. OK, I can do that, but you can already guess that this will factor out if I take a specific model, such as a propagating gaussian bump for each one.

All I am saying, go predict something that you actually could measure. Predict the 90% missing coincidences and then change the model of the setup, the apertures, detector electrode materials, lenses, source intensities or wavelengths,... whatever you need, and then show how these changes can be made so that the setup efficiency falls within the critical range to violate the inequalities. No "fair sampling" or other such para-physical/speculative ad-hockery allowed. Just the basic equations and the verifiable properties of the materials used.


So now it is up to you to do a precise calculation with the model you can accept as satisfying (it will always be a model, and you'll never describe completely all aspects) and show me what you get, which is different from what the simple 2x2D model gives you, after introducing finite efficiencies for both detectors.

All I am saying is that there is no QM prediction which violates Bell's inequalities and which doesn't rely on a non-dynamical, non-local and empirically unverified and unverifiable state evolution (collapse). The non-locality is thus put in by hand upfront, in the premise of the "prediction".

Secondly, after over three decades of trying, there is so far no experimental data which violates the Bell inequalities, either. There are massive extrapolations of the obtained data which put in by hand the 90% of missing coincidences (handwaved in with wishful, ad hoc, unverifiable conjectures such as "fair sampling"), and to everyone's big surprise, the massively extrapolated data violates the Bell inequality.

So there is neither prediction nor data which violates the inequality. Luckily nothing else in the practical applications of physics needs that violation or the non-dynamical collapse (used to deduce the "prediction"), for anything. (Otherwise it would have been put on hold until the violation is achieved.)

You are making the accusation that I do not know what I'm using in QM measurement theory, but it was YOU who claimed that you accepted the Born rule (stating that the probability for an event to occur is given by |<a|psi>|^2). It is what I used, in my model.

I explained in what sense I accepted the "Born rule" --- the same sense that Schroedinger assumed in his interpretation of the wave function --- as a practical, although limited, operational shortcut (the same way the classical physics had used probabilities, e.g. for the scattering theory), not as a fundamental postulate of the theory and certainly not as an all-powerful deus-ex-machina of the later QM measurement theory (which is what Bell used), capable of suspending the dynamical evolution of the system state and then resuming the dynamics after the "result" is produced.

That is an absurdity that will be ridiculed by future generations as we ridicule the turtle-on-a-turtle... model of the Earth. Why turtles? Why non-dynamical collapse? What was wrong with those people? What were they thinking?
 
  • #28
Hi nightlight,

I'm new to this forum so I'm not really clear on what etiquette is followed here. I just wanted to know what background you have in Physics.

Cheers,

Kane
 
  • #29
nightlight said:
The essential part of the Bell's inequalities is that the counts which violate the inequality in the QM "prediction" include nearly all pairs (equivalent to requiring around 82% of setup efficiency).

Yes, but why are you insisting on the experimental parameters such as efficiencies ? Do you think that they are fundamental ? That would have interesting consequences. If you claim that ALL experiments should be such that the raw data satisfy Bell's inequalities, that gives upper bounds to detection efficiencies of a lot of instruments, as a fundamental statement. Don't you find that a very strong statement ? Even though for photocathodes in the visible light range with today's technologies, that boundary is not reached ? You seem to claim that it is a fundamental limit.

But what is wrong with the procedure of taking the "toy" predictions of the ideal experiment, apply efficiency factors (which can very easily be established also experimentally) and compare that with the data ? After all, experiments don't serve to measure things but to falsify theories.

cheers,
Patrick.
 
  • #30
Oh, but I agree with you that there is nothing spacelike going on.

The QM measurement theory has a spacelike collapse of a state. As explained earlier, it absolutely requires the suspension of the dynamical evolution to perform its magical collapse that yields a "result", then somehow lets go, and the dynamics is resumed. That kind of hocus-pocus I don't buy.

Nevertheless, the correlations given by superpositions of systems which are separated in space are real, if you believe in quantum superposition, and the "toy" models you attack do give you the essential facts. You WILL see correlations in the statistics,

You can't make a prediction of the sharp kind of correlations that can violate Bell's inequality. You can't get them (and nobody else has done it so far) from QED using the known properties of the detectors and sources. You get them only using the QM measurement theory with its non-dynamical collapse.

and no, the fact that the RAW DATA do not come out that way is not something deep, but just due to finite efficiencies. A very simple model of the efficiencies of the detectors gives a correct description of the data taken.

Well, for the Bell inequality violation, the setup efficiency is the key question. Without a certain minimum percentage of pairs showing violation, local models are still perfectly fine. If you can't create a realistic model which can violate the inequality (a design for sources and detectors which could do it if built to the specs), there is no prediction, just an idea of a prediction.

My point (which I think is not very far from yours) is that one should only consider the measurement complete when the data are brought together (and hence are not separated spacelike) by one observer. Until that point, I do consider that the data of the other "observation" is in superposition.

I think that QM should work without any measurement theory beyond the kind of normal operational rules used in classical physics. The only rigorous rationale why the QM measurement theory (with its collapse) was needed was von Neumann's faulty proof of the impossibility of any hidden variables. With no hidden variables possible, one has to explain how a superposition can yield a specific result at all. With hidden variables, one can simply treat it as in classical physics - the specific result was caused by the values of stochastic variables which were not controlled in the experiment. So, von Neumann invented the collapse (which had earlier been used in an informal way) and conjectured that it was the observer's mind which causes it, suspending the dynamical evolution, magically creating a definite result, after which the dynamical evolution resumes its operation.

After that "proof" (and its Kochen-Specker temporary patch) were shown to be irrelevant, having already been refuted by Bohm's counterexample, it was the Bell's theorem which became the sole basis for excluding the hidden variables (since non-local ones are problematic), thus that is the only remaining rationale for maintaining the collapse "solution" to the "no-(local)-hidden-variable" problem. If there were no Bell's no-LHV prediction of QM, there would be no need for a non-dynamical collapse "solution" since there would be no problem to solve. (Since one could simply look at the quantum probabilities as being merely the result of the underlying stochastic variables.)

The Achilles heel of this new scheme is that in order to prove his theorem Bell needed the QM measurement theory and the collapse; otherwise there is no prediction (based on the more rigorous QED and a realistic treatment of source and detection) which would be sharp/efficient enough to violate Bell's inequality.

What we're left with is Bell's prediction, which needs the QM collapse premise, and the collapse "solution" for the measurement problem, the problem created because the collapse premise leads, via Bell's prediction, to a prohibition of LHVs. So the two, Bell's prediction and the collapse, form a self-serving closed loop, solely existing to prop each other up, while nothing else in physics needs either. It is a tragic and shameful waste of time to teach kids and have them dwell on the kind of useless nonsense that future generations will laugh at (once they snap out of the spell of this logical vicious circle).
 
  • #31
nightlight said:
You can't make the prediction of the sharp kind of correlations that can violate the Bell's inequality. You can't get them (and nobody else has done it so far) from QED and using the known properties of the detectors and sources. You get them only using the QM measurement theory with its non-dynamical collapse.

I wonder what you understand by QED, if you don't buy superpositions of states...

cheers,
Patrick.
 
  • #32
vanesch said:
Yes, but why are you insisting on the experimental parameters such as efficiencies ? Do you think that they are fundamental ?

For the violation of the inequality, definitely. The inequality is a purely enumerative kind of mathematical statement, like the pigeonhole principle. It depends in an essential way on the fact that a sufficient percentage of result slots are filled in for the different angles, so that they cannot be rearranged to fit multiple (allegedly required) correlations. Check for example a recent paper by L. Sica where he shows that if you take three finite arrays A[n], B1[n] and B2[n], filled with the numbers +1 and -1, and you form the cross-correlation expressions (used for Bell's inequality) E1 = Sum_j A[j]*B1[j] / n, E2 = Sum_j A[j]*B2[j] / n and E12 = Sum_j B1[j]*B2[j] / n, then no matter how you fill in the numbers or how big the arrays are, they always satisfy the Bell inequality:

| E1 - E2 | <= 1 - E12.
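(One way to see why any +/-1 filling must satisfy this: since |A[j]| = 1 and |B1[j] - B2[j]| = 1 - B1[j]*B2[j] whenever B1[j] and B2[j] are +1 or -1, we have |E1 - E2| = |Sum_j A[j]*(B1[j] - B2[j])| / n <= Sum_j (1 - B1[j]*B2[j]) / n = 1 - E12.)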

So no classical data set could violate this purely enumerative inequality, i.e. the QM prediction of violation means that if we were to turn apparatus B from angle B1 to B2, the set of results on A must always be strictly different from what it was for B position B1. Similarly, it implies that in the actual data, taken with A fixed and the two B positions B1 and B2, the two sequences of A results must be strictly different from each other (they would normally have roughly equal numbers of +1's and -1's in each array, so the arrangement would have to differ between the two; they can't be the same even accidentally if the inequality is violated).

For the validity of this purely enumerative inequality, it is essential that a sufficient number of array slots is filled with +1 and -1; otherwise (e.g. if you put 0's for missed detections in a sufficient number of slots and estimate the correlations only over the coincident pairs) the inequality need not hold. Some discussion of this result can be found in the later papers by L. Sica and by A. F. Kracklauer.
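Here is a minimal numerical sketch of both points; the helper names and the three-slot example are just illustrative:

```python
import itertools

def corr(x, y, coincident_only=False):
    # Correlation estimate for two outcome sequences. With coincident_only=True,
    # average only over slots where both entries are nonzero (0 = missed detection),
    # which is how coincidence experiments normalize their data.
    pairs = [(xi, yi) for xi, yi in zip(x, y) if not coincident_only or (xi != 0 and yi != 0)]
    return sum(xi * yi for xi, yi in pairs) / len(pairs)

def violates(a, b1, b2, coincident_only=False):
    # True if |E1 - E2| <= 1 - E12 fails for these three sequences.
    e1 = corr(a, b1, coincident_only)
    e2 = corr(a, b2, coincident_only)
    e12 = corr(b1, b2, coincident_only)
    return abs(e1 - e2) > 1 - e12 + 1e-12   # tiny tolerance for float round-off

# 1) Fully filled +/-1 arrays: no filling whatsoever violates the inequality.
n, pm = 5, [-1, +1]
any_violation = any(
    violates(a, b1, b2)
    for a in itertools.product(pm, repeat=n)
    for b1 in itertools.product(pm, repeat=n)
    for b2 in itertools.product(pm, repeat=n))
print("some fully filled array triple violates the inequality:", any_violation)   # False

# 2) With missed detections (0 entries) and correlations taken only over
#    coincident pairs, the enumerative bound no longer follows.
a  = [+1, +1, +1]
b1 = [+1,  0, +1]   # slot 2 missed on the B1 side
b2 = [ 0, -1, +1]   # slot 1 missed on the B2 side
print("post-selected example violates it:", violates(a, b1, b2, coincident_only=True))   # True
```

The exhaustive part finds no violating filling, while the coincidence-normalized example gives |E1 - E2| = 1 against 1 - E12 = 0.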

That would have interesting consequences. If you claim that ALL experiments should be such that the raw data satisfy Bell's inequalities, that gives upper bounds to detection efficiencies of a lot of instruments, as a fundamental statement. Don't you find that a very strong statement ? Even though for photocathodes in the visible light range with today's technologies, that boundary is not reached ? You seem to claim that it is a fundamental limit.

There is a balancing or tradeoff between the "loopholes" in the experiments. Detection efficiency can be traded for lower polarization resolution (by going to higher energy photons). Also, the detectors can be tuned to a higher sensitivity, but then the dark current rises, blurring the results and producing more "background", so "accidental" coincidences have to be subtracted (which is another big no-no for a loophole-free test).

For massive particles, similar tradeoffs occur as well -- if you want better detection efficiency by going to higher energies, the spin measurement resolution drops. For ultra-relativistic particles, which are detectable with almost 100% efficiency, Stern-Gerlach separation doesn't work at all any more and the very lossy Compton-scattering method of spin resolution must be used.

You can check a recent paper by Emilio Santos for much more discussion of these and other tradeoffs (also in his earlier papers). Basically, there is a sort of "loophole conservation" phenomenon, so that squeezing one out makes another one grow. Detector efficiency is just one of the parameters.

But what is wrong with the procedure of taking the "toy" predictions of the ideal experiment, apply efficiency factors (which can very easily be established also experimentally) and compare that with the data ? After all, experiments don't serve to measure things but to falsify theories.

Using the toy model prediction as a heuristic to come up with an experiment is fine, of course. But bridging the efficiency losses to obtain the inequality violation from the data requires additional assumptions such as "fair sampling" (which Santos discusses in the paper above and which I had discussed in detail in an earlier thread here, where the Santos paper was discussed as well). After seeing enough of this kind of find-a-better-euphemism-for-failure game, it all starts to look more and more like perpetuum mobile inventors thinking up excuses, blaming ever more creatively the real world's "imperfections" and "non-idealities" for the failure of their allegedly 110% over-unity device.
 
  • #33
I wonder what you understand by QED, if you don't buy superpositions of states...

Where did I say I don't buy superposition of states? You must be confusing my rejection of your QM-measurement-theory version of the "magical" Born rule and its application to the Bell system with my support for a different Born rule (a non-collapsing, non-fundamental, approximate, apparatus-dependent rule), and jumping to the conclusion that I don't believe in superposition.
 
  • #34
nightlight said:
For the violation of inequality, definitely. The inequality is purely enumerative type of mathematical inequality, like a pigeonhole principle.

Good. So your claim is that we will never find raw data which violates Bell's inequality. That might, or might not be the case, depending on whether you take this to be a fundamental principle and not a technological issue. I personally find it hard to believe that this is a fundamental principle, but as things currently stand experimentally it cannot be negated (I think; I haven't followed the most recent experiments on EPR type stuff). I don't know if it has any importance at all whether or not we can have these raw data. The point is that we can have superpositions of states which are entangled in bases that we would ordinarily think are factorized.

However, I find your view of quantum theory rather confusing (I have to admit I don't know what you call quantum theory, honestly: you seem to use different definitions for terms than what most people use, such as the Born rule or the quantum state, or an observable).
Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by a state of the system, and what you call the Born rule in that case ?
Do you accept that the states are all the completely antisymmetric functions of 10 3-dim space coordinates and 10 spin-1/2 states, or do you think this is not adequate ?

cheers,
Patrick.
 
  • #35
So your claim is that we will never find raw data which violates Bell's inequality.

These kinds of enumerative constraints have been tightening in recent years, and the field of extremal set theory has been very lively lately. I suspect it will eventually be shown that, purely enumeratively (a la Sperner's theorem or Kraft's inequalities), fully filled-in arrays of results satisfying perhaps a larger set of correlation constraints from the QM prediction (not just the few that Bell used) will yield the result that the measure of inequality-violating sets converges to zero as constraints (e.g. for different angles) are added, in the limit of infinite arrays.

That would then put the claims of experimental inequality violation at the level of a perpetuum mobile of the second kind, i.e. one could view such claims as if someone claimed he can flip a coin a million times and regularly get the first million binary digits of Pi. If he can, he is almost surely cheating.

That might, or might not be the case, depending on whether you take this to be a fundamental principle and not a technological issue. I personally find it hard to believe that this is a fundamental principle,

Well, the inequality is an enumerative kind of mathematical result, like the pigeonhole principle. Say someone claimed to have a special geometric layout of holes and a special pigeon placement ordering which allows them to violate the pigeonhole principle inequalities, so they can put more than N pigeons in N holes without having any hole with multiple pigeons. When asked to show it, the inventor brings out a gigantic panel of holes arranged in strange ways and strange shapes, and starts placing pigeons, jumping in odd ways across the panel, taking longer and longer to pick a hole as he goes on, as if calculating where to go next.

After some hours and finishing about 10% of the holes, he stops and proudly proclaims: here, it is obvious that it works, no double pigeons in any hole. Yeah, sure, just some irrelevant, minor holes left due to the required computation. If I assume that the holes filled are a fair sample of all the holes and I extrapolate proportionately from the area I filled to the entire board, you can see that, continuing in this manner, I can put exactly 5/4 N pigeons without having to share a hole. And this is only a prototype algorithm. That's just a minor problem which will be solved as I refine the algorithm's performance. After all, it would be implausible that an algorithm which worked so well so far, in just rough prototype form, would fail when polished to full strength.

This is precisely the kind of claim we have for the Bell inequality tests: with 10% of coincidences filled in, they claim to violate the enumerative inequality for which a fill-up of at least 82% is absolutely vital to even begin looking at it as a constraint. As Santos argues in the paper mentioned, the ad hoc "fair sampling" conjecture used to fast-talk over the failure is a highly absurd assumption in this context (see also the earlier thread on this topic).

And the often-heard invocation of the implausibility of QM failing with better detectors is as irrelevant as the pigeonhole-algorithm inventor's assertion of the implausibility of a better algorithm failing -- it is a complete non-sequitur in the enumerative inequality context. Especially recalling the closed-loop, self-serving circular nature of Bell's no-LHV QM "prediction" and the vital tool used to prove it, the collapse postulate (the non-dynamical, non-local evolution, a vaguely specified suspension of dynamics), which in turn was introduced into QM for the sole reason of "solving" the no-HV problem with measurement. And the sole reason it is still kept is to "solve" the remaining no-LHV problem, the one resulting from Bell's theorem, which in turn requires the very same collapse in a vital manner to violate the inequality.

Since nothing else needs either of the two pieces, the two only serve to prop each other up while predicting no other empirical consequences for testing except for causing each other (if Bell's violation were shown experimentally, that would support the distant system-wide state collapse; no other test for the collapse exists).

The point is that we can have superpositions of states which are entangled in bases that we would ordinarily think are factorized.

The problem is not the superposition but the adequacy of the overall model (of which the state is a part), and secondarily, attributing the particular state to a given preparation.

However, I find your view of quantum theory rather confusing (I have to admit I don't know what you call quantum theory, honestly: you seem to use different definitions for terms than what most people use, such as the Born rule or the quantum state, or an observable).

The non-collapse version of the Born rule (as an approximate operational shortcut) has a long tradition. If you have learned just one approach and one perspective fed from a textbook, with the usual pedagogical cheating on proofs and skirting of opposing or different approaches (to avoid confusing a novice), then yes, it could appear confusing. Any non-collapse approach takes this kind of Born rule, which goes back to Schroedinger.

Could you explain to me how you see the Hilbert space formalism of, say, 10 electrons, what you understand by a state of the system, and what you call the Born rule in that case ? Do you accept that the states are all the completely antisymmetric functions of 10 3-dim space coordinates and 10 spin-1/2 states, or do you think this is not adequate ?

In the non-relativistic approximation, yes, it would be an antisymmetric function of 10*3 spatial coordinates with a Coulomb interaction Hamiltonian. This is still an external-field approximation, which doesn't account for the self-interaction of the EM and fermion fields. Asim Barut had worked out a scheme which superficially looks like the self-consistent effective field methods (such as Hartree-Fock), but underneath it is an update of Schroedinger's old idea of treating the matter field and the EM field as two interacting fields in a single 3D space. The coupled Dirac-Maxwell equations form a nonlinear set of PDEs. He shows that this treatment is equivalent to the conventional QM 3N-dimensional antisymmetrized N-fermion equations, but with a fully interacting model (which doesn't use the usual external EM field or external charge current approximations). With that approach he and his graduate students reproduced the full non-relativistic formalism and the leading orders of radiative corrections of perturbative QED, without needing renormalization (no point particles, no divergences). You can check a list of his preprints on this topic on the KEK server: http://www-lib.kek.jp/cgi-bin/kiss_prepri?KN=&TI=&AU=barut&AF=&CL=&RP=&YR=
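(Schematically, the coupled Maxwell-Dirac system has the form (i gamma^mu d_mu - m) Psi = e gamma^mu A_mu Psi together with d_nu F^{nu mu} = e Psibar gamma^mu Psi, with the EM field sourced by the matter field's own current; sign and unit conventions vary between authors. Both fields live in ordinary 3+1-dimensional spacetime, and the coupling makes the combined system nonlinear.)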

Regarding the Born rule for this atom, the antisymmetrization and the strong interaction make the concept of an individual electron here largely meaningless for any discussion of the Bell correlations (via the Born rule). In Barut's approach this manifests in assuming no point electrons at all, but simply using a single fermion matter field (normalized to 10 electron charges; he does have separate models of charge quantisation, though these are still somewhat sketchy) which has the same scattering properties as the conventional QM/QED models, i.e. the Born rule is valid in the limited, non-collapsing sense. E. T. Jaynes has a similar perspective (see his paper `Scattering of Light by Free Electrons'; unfortunately, both Jaynes and Barut died a few years ago, so if you have any questions or comments, you'll probably have to wait a while since I am not sure that emails can go there and back).
 
Last edited by a moderator:
