Is quantum mechanics a complete theory of nature?

In summary, the conversation discussed the concept of quantum entanglement and its implications for the completeness of quantum theory. The inability to specify momentum in quantum theory raises the question of whether the theory is incomplete. The EPR experiment and Bell's theorem were mentioned as examples of this debate. The conversation also touched on the difficulties of defining and detecting photons, and on the limitations of our current understanding of quantum mechanics.
  • #36
nortonian said:
Please look at the Marshall papers, especially "The myth of the photon" which I cited in an earlier post to see the theoretical basis for that conclusion.
I looked at this paper and my response was (post #14):
Santos [and Marshall] say in the abstract of this paper:
"It also requires us to recognize that there is a payoff between detector efficiency and signal-noise discrimination."
This indeed seems to be the case for SPAD detectors. But it turns out this is not a general rule for any detector:
NIST Detector Counts Photons With 99 Percent Efficiency:
“When these detectors indicate they’ve spotted a photon, they’re trustworthy. They don’t give false positives,” says Nam, a physicist with NIST’s Optoelectronics division. “Other types of detectors have really high gain so they can measure a single photon, but their noise levels are such that occasionally a noise glitch is mistakenly identified as a photon. This causes an error in the measurement. Reducing these errors is really important for those who are doing calculations or communications.”

So the prediction of this model turns out to be false. I see no point looking further.
 
  • #37
nortonian said:
I do not pretend to a complete understanding of the Bell thm, but if what we are calling a photon is actually classical then it is about classical measurements and/or the properties of detectors. Please look at the Marshall papers, especially "The myth of the photon" which I cited in an earlier post to see the theoretical basis for that conclusion.

I have had a look at that paper and there are many good reasons why it actually never got published anywhere. Their idea of sorting classical and non-classical states by means of the Wigner density is odd, but at least consistent from their point of view; beyond that, however, they basically have no arguments and sometimes claim wrong stuff. The worst point is their claim that "With respect to the “nonclassical” states of the light field currently widely reported as having been observed, our response is that something approximating the squeezed vacuum, as described by equation (6), has been observed; this, however, according to our new classification, is a classical state, though not Glauber-classical."

At first that means they only talk about SPDC processes and completely ignore single photon sources, which have been realized and are definitely not amplified squeezed vacuum. Second, the similarity between SPDC and amplification of some vacuum modes of the em field is well known. Pretty much any spontaneous emission process can be understood as amplification of a vacuum mode. All in all the important point is that their claim of "Planck-classicality" being more important than the standard Glauber definition is not tenable. Apart from that, the claim that any SPDC process creates a field with a positive Wigner density is not tenable at all. See e.g. "Amplification of Quantum Entanglement" by De Martini (PRL 81, 2842–2845 (1998)) for a sketch of how the Wigner function of such a state actually looks.

Add the arguments Zonde presented and it becomes clear that the proposal of Marshall and Santos is just not in accordance with what actually happens.
 
  • #38
zonde said:
So the prediction of this model turns out to be false. I see no point looking further.

If quantum mechanics is complete on the microscopic level then what we observe is all that exists and a detection is a photon. On the other hand, if it is incomplete then we do not observe all that exists, we don't know how the detection is produced and we don't have a good model. How do you know which of these choices is the correct one? There is no optical experiment that identifies a single photon with the assurances of the photo electric effect so I prefer to say even though there is a good mathematical model there is not a good physical model so I choose not to reject locality. Is there a problem with that?

Cthugha said:
I have had a look at that paper and there are many good reasons why it actually never got published anywhere.
Add the arguments Zonde presented and it becomes clear that the proposal of Marshall and Santos is just not in accordance with what actually happens.

OK so there is no competing theory to quantum optics, but that doesn't mean it is the final answer or that it is complete. Qm does not include general relativity and cannot explain life, consciousness, or chaos theory, all of which are local realistic phenomena. Instead of asking how it is possible for non-local phenomena to occur, the question should be rephrased to ask: why do local phenomena appear to be non-local when viewed according to the laws of qm? Only when that question is answered will it be possible to make progress towards answering more fundamental questions.
 
  • #39
nortonian said:
I prefer to say even though there is a good mathematical model there is not a good physical model so I choose not to reject locality. Is there a problem with that?
There is no problem with that part.

About other things - we do not need a theory of everything to make testable statements. Strictly speaking, all our theories about physics are incomplete. It's just that the scientific method does not allow conclusive proof of a theory.
 
  • #40
Is quantum mechanics a complete theory of nature? No.
 
  • #41
Considering that there are no complete theories of physics, in the sense of giving conclusive statements, maybe it makes more sense to talk about the completeness or incompleteness of QM as the consistency or inconsistency of the theory.
I suppose that was the sense in which Einstein talked about the incompleteness of QM: that different representations of the same physical situation within the theory are not in conflict with each other.
 
  • #42
zonde said:
Considering that there are no complete theories of physics, in the sense of giving conclusive statements, maybe it makes more sense to talk about the completeness or incompleteness of QM as the consistency or inconsistency of the theory.
I suppose that was the sense in which Einstein talked about the incompleteness of QM: that different representations of the same physical situation within the theory are not in conflict with each other.
No, I think what Einstein meant by incompleteness is when a theory points "outside itself" in some sense. In other words, when it gives an indication that there are other theories needed to either supplement it or supplant it. For instance, Maxwell's theory of electromagnetism seems to imply that charged particles are unstable if the only forces acting on them were the electromagnetic forces gotten from Maxwell's equations, so all by themselves they suggest that there is something in nature other than Maxwell's equations.
 
  • #43
lugita15 said:
No, I think what Einstein meant by incompleteness is when a theory points "outside itself" in some sense.
There are no self-contained physics theories. So what is the sense in talking about incompleteness this way?

lugita15 said:
In other words, when it gives an indication that there are other theories needed to either supplement it or supplant it.
How does it give that indication? By not being able to give unequivocal predictions? That shouldn't work; such a theory would simply be dismissed.

There is a nice letter where Einstein tries to explain his position - Einstein's Reply to Criticisms
From there:
Einstein's Reply to Criticisms: said:
What does not satisfy me in that theory, from the standpoint of principle, is its attitude towards that which appears to me to be the programmatic aim of all physics: the complete description of any (individual) real situation (as it supposedly exists irrespective of any act of observation or substantiation). Whenever the positivistically inclined modern physicist hears such a formulation his reaction is that of a pitying smile. He says to himself: "there we have the naked formulation of a metaphysical prejudice, empty of content, a prejudice, moreover, the conquest of which constitutes the major epistemological achievement of physicists within the last quarter-century. Has any man ever perceived a 'real physical situation'? How is it possible that a reasonable person could today still believe that he can refute our essential knowledge and understanding by drawing up such a bloodless ghost?" Patience! ...
So he talks about a real physical situation as it supposedly exists irrespective of any act of observation or substantiation. And this is supposed to be the opposite of the positivistic attitude.

This gives quite a different picture than the one you are drawing. It's not about the lack of another theory but about the lack of a metaphysical core for the theory that we could call a model of reality.
 
  • #44
zonde said:
Considering that there are no complete theories of physics, in the sense of giving conclusive statements, maybe it makes more sense to talk about the completeness or incompleteness of QM as the consistency or inconsistency of the theory.
I agree.

The Bell theorem is a mathematical rule describing the behavior of a mathematical model (the photon) in order to define a physical concept (locality). In order for qm to be consistent a physical model is needed to define a physical concept.
 
  • #45
nortonian said:
Qm does not include general relativity and cannot explain life, consciousness, and chaos theory; all of which are local realistic phenomena. Instead of asking how is it possible for non-local phenomena to occur, the question should be rephrased to ask, why do local phenomena appear to be non-local when viewed according to the laws of qm?

What? Which local phenomena appear non-local according to qm? I know of none. Your argument is pretty moot. Why should qm explain life or consciousness? That is not even the domain of physics. Regarding nonlinear dynamics (or chaos theory as you call it) there is the field of quantum chaos that studies how to treat chaotic classical systems in terms of quantum mechanics.

nortonian said:
The Bell theorem is a mathematical rule describing the behavior of a mathematical model (the photon) in order to define a physical concept (locality). In order for qm to be consistent a physical model is needed to define a physical concept.

The concept of the photon is as physical as the concepts of gravity, atoms or angular momentum are.
 
  • #46
Cthugha said:
What? Which local phenomena appear non-local according to qm? I know of none.
Polarization
Cthugha said:
Your argument is pretty moot. Why should qm explain life or consciousness? That is not even the domain of physics.

Qm is fundamental to all microscopic phenomena. Life began at the microscopic level and unsuccessful attempts have been made to explain both it and consciousness in terms of quantum mechanics. In any case general relativity alone suffices as an example.

Cthugha said:
The concept of the photon is as physical as the concepts of gravity, atoms or angular momentum are.

Then why is momentum not conserved locally for photons that produce interference fringes?
 
  • #47
nortonian said:
Polarization

That is not really an answer that explains much. Polarization as such is a property of a system and calling it local or non-local is kind of odd. Do you point at Bell tests? They do not "look" non-local, they are (modulo the typical disclaimer that it might be local if realism is dropped).

nortonian said:
Qm is fundamental to all microscopic phenomena. Life began at the microscopic level and unsuccessful attempts have been made to explain both it and consciousness in terms of quantum mechanics. In any case general relativity alone suffices as an example.

As Anderson said: "more is different". It would be quite daring to declare complex fields explainable by physics alone. This will not work for biology, social sciences or chemistry. Explaining consciousness is the realm of biology, and I really doubt there is an explanation purely in terms of physics. I do not understand the reference to general relativity. Of course it is complicated to marry qm and gr. That is well known, but what does it have to do with the topic at hand? The pure fact that you have locality in some subfield does not mean that everything is local.

nortonian said:
Then why is momentum not conserved locally for photons that produce interference fringes?

Within the coherence volume of some light field momentum is conserved. Do you have any example where it is not conserved?
 
  • #48
Cthugha said:
That is not really an answer that explains much.

The pure fact that you have locality in some subfield does not mean that everything is local.

These disputes are caused by differences in language. You are using quantum speak while I use physical speak.

Cthugha said:
Within the coherence volume of some light field momentum is conserved. Do you have any example where it is not conserved?

Momentum is not an averaged quantity in physical speak. It is applied instantaneously. However, my purpose in starting this thread was not to change quantum speak but to point out that it is incomplete because it cannot be expressed physically with classical terminology and that there must be another way to interpret what is observed that has a wider scope. Qm was formed over many years by consensus as a statistical description of nature. This is especially true of quantum optics, which evolved in a rather erratic manner with much confusion before arriving at a consensus. (See R. Hanbury Brown, The Intensity Interferometer.) In the end a bunch of physicists got together and decided that a detection event is a photon without a shred of hard physical evidence to confirm it. It seems to me that deciding questions of what is real or true, i.e. physical questions, should not be left to such a tenuous process. Although the statistical interpretation used by qm is internally consistent it is not consistent at all when compared to the evolution in space and time of all physical processes (the correspondence principle is a cheap attempt to make it consistent everywhere). What makes me so sure? Because it has happened before.
"It is not necessary that these hypotheses be true. They need not even be likely. This one thing suffices, that the calculation to which they lead agree with the result of observation." - Preface to “On the Revolutions of the Celestial Spheres” by Nicolaus Copernicus, 1543
 
  • #49
nortonian said:
These disputes are caused by differences in language. You are using quantum speak while I use physical speak.

I do not think so.

nortonian said:
Momentum is not an averaged quantity in physical speak. It is applied instantaneously.

It is not even clear to me about which scenario or experiment you are talking.

nortonian said:
However, my purpose in starting this thread was not to change quantum speak but to point out that it is incomplete because it cannot be expressed physically with classical terminology

Sorry, but that does not make sense. QM is more advanced than classical mechanics. Of course one will then also need an adequate terminology that goes beyond that of classical mechanics.

nortonian said:
This is especially true of quantum optics which evolved in a rather erratic manner with much confusion before arriving at a consensus. (See R. Hanbury Brown The Intensity Interferometer)

The HBT experiment was performed before the field of quantum optics even existed. Although it is sometimes termed the first experiment in quantum optics, giving it that name is a bad idea because the HBT-effect is entirely classical. Also it did not take too long to arrive at a consensus. The experiment by Hanbury Brown and Twiss, the questions raised by Brannen and Ferguson and the reply by Purcell all happened within one year, 1956. The quantum treatment of the effect has been discussed by Fano already in 1961.

nortonian said:
In the end a bunch of physicists got together and decided that a detection event is a photon without a shred of hard physical evidence to confirm it.

It has been pointed out to you three times now that this is simply not true. You can easily check the higher order moments of a light field and find out whether a detection event corresponds to a single photon Fock state or something entirely different. Having [itex]g^{(2)}(0)=g^{(3)}(0)=...=g^{(n)}(0)=0[/itex] is a strict and unambiguous criterion for having a single photon Fock state. If you have some other peer-reviewed publications that state otherwise please present them.
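As a purely illustrative aside (a textbook-statistics toy sketch, not taken from any of the papers under discussion), the [itex]g^{(2)}(0)[/itex] criterion can be checked numerically: estimating [itex]g^{(2)}(0)=\langle n(n-1)\rangle/\langle n\rangle^2[/itex] from simulated photon-number records distinguishes a single-photon Fock state from coherent and thermal light.

```python
import numpy as np

def g2_zero(samples):
    """Estimate g2(0) = <n(n-1)> / <n>^2 from photon-number measurements."""
    n = np.asarray(samples, dtype=float)
    return (n * (n - 1)).mean() / n.mean() ** 2

rng = np.random.default_rng(0)
mean_n = 1.0

fock = np.ones(100_000)                              # ideal single-photon source: n = 1 every shot
coherent = rng.poisson(mean_n, 100_000)              # laser light: Poissonian statistics
thermal = rng.geometric(1 / (1 + mean_n), 100_000) - 1  # thermal light: Bose-Einstein statistics

print(g2_zero(fock))      # 0.0  -> nonclassical single-photon Fock state
print(g2_zero(coherent))  # ~1.0 -> coherent state
print(g2_zero(thermal))   # ~2.0 -> bunched thermal light
```

Since n(n-1) vanishes identically for n = 0 and n = 1, a true single-photon stream gives exactly zero regardless of sample size, which is what makes the criterion so clean.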

nortonian said:
Although the statistical interpretation used by qm is internally consistent it is not consistent at all when compared to the evolution in space and time of all physical processes

Again, please provide an example. I do not have the slightest idea what you mean.
 
  • #50
Cthugha said:
It is not even clear to me about which scenario or experiment you are talking.
Momentum exchange is instantaneous for particle collisions.

Cthugha said:
QM is more advanced than classical mechanics. Of course one will then also need an adequate terminology that goes beyond that of classical mechanics.

Yes, of course.

Cthugha said:
it did not take too long to arrive at a consensus. The experiment by Hanbury Brown and Twiss, the questions raised by Brannen and Ferguson and the reply by Purcell all happened within one year, 1956. The quantum treatment of the effect has been discussed by Fano already in 1961.

As Hanbury Brown described it there were false starts, misunderstandings, and some initial confusion before arriving at a consensus.

Cthugha said:
It has been pointed out to you three times now that this is simply not true. If you have some other peer-reviewed publications that state otherwise please present them.

Of course no evidence exists that disproves qm. I am talking about a lack of evidence with respect to particle properties, a deficiency. The term 'photon' is used loosely in qm as has been recognized. There is no optical experiment similar to the photoelectric effect that indicates either by energy or momentum exchange that detections may be identified with single photons. Conservation of energy and momentum are applied statistically in interference experiments. This is not acceptable for a physically consistent theory.

Cthugha said:
Again, please provide an example. I do not have the slightest idea what you mean.

In a consistent theory it would not be necessary to define an arbitrary transition between quantum and classical by introducing a correspondence principle.
 
  • #51
nortonian said:
I am talking about a lack of evidence with respect to particle properties, a deficiency. The term 'photon' is used loosely in qm as has been recognized. There is no optical experiment similar to the photoelectric effect that indicates either by energy or momentum exchange that detections may be identified with single photons.

And I told you four times now that antibunching IS an accepted and unambiguous way to identify single photons. Where is your problem with that? Please provide some arguments why you think it is not enough.
 
  • #52
Cthugha said:
And I told you four times now that antibunching IS an accepted and unambiguous way to identify single photons. Where is your problem with that? Please provide some arguments why you think it is not enough.

Detection devices sample a volume of space-time much greater than the theoretical size of a photon. Because of this there may be errors in the physical interpretation of data from anti-bunching experiments. This has occurred in the past in other areas. For example, Loudon (2000) in the introduction of his book asserts that
“Taylor (1909) failed to find any changes from the classical fringes of a Young interferometer when the light source was so feeble that only one photon at a time was present in the apparatus”.
There are several errors and/or omissions with this statement:
1. Taylor calculated photon number by comparing it with average light intensity, however the fluctuation of photon density in the light beam is and always will be unknown because detectors are not perfect recording devices.
2. Photographic emulsions depend on the developability of silver bromide crystals to record the arrival of photons. This occurs in two stages lasting approximately 10^-6 sec, and is characterized by the ejection of an electron and subsequent neutralization of a silver atom. (C.E.K. Mees & T.H. James, The Theory of the Photographic Process, (MacMillan, NY), 1966.) The chemical properties of the crystals together with the quantum efficiency of film have been used to calculate the estimated number of photons required to develop a silver halide crystal, found to be approximately 100 photons. (P. Kowaliski, Applied Photographic Theory (Wiley, NY), 1972.) Taylor did not know this so his experiment is flawed.
3. A more recent study has found no interference fringes even after 336 hours of exposure with a photodetector, a finding which directly contradicts the idea that a photon interferes only with itself. (E. Panarella (1986). "Quantum uncertainties", in W.M. Honig, D.W. Kraft, & E. Panarella (Eds.) Quantum Uncertainties: Recent and Future Experiments and Interpretations, (p. 105) New York: Plenum Press.)

If Loudon is unaware of these properties of film then how do I know that the photodetection process was properly analyzed? I have found no analysis of its physical properties in his book. The correct interpretation of anti-bunching and other quantum optical experiments is based on the physical nature of detections and is therefore suspect unless these questions can be resolved.
 
  • #53
nortonian, rather than getting bogged down in the weeds of how we know a photon detection is really a photon detection, let me ask you this. The proof in the Herbert link I gave you just involves correlations of detector clicks, whatever is causing those clicks. The point of the proof is that no local hidden variable theory can explain the correlations of detector clicks predicted by QM. Do you agree with this conclusion?
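To make this concrete, here is a rough numerical sketch (my own illustrative model and settings, not Herbert's proof): any deterministic local-hidden-variable assignment of detector clicks obeys the CHSH bound [itex]|S|\le 2[/itex], whereas quantum mechanics predicts [itex]2\sqrt{2}[/itex] for suitable settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# One concrete local-hidden-variable model: each pair carries a hidden
# angle lam, and each detector outputs +1/-1 deterministically from lam
# and its own local setting. Any such model satisfies |S| <= 2.
def lhv_outcome(setting, lam):
    return np.sign(np.cos(2 * (setting - lam)))

def correlation(a, b, n=200_000):
    lam = rng.uniform(0, np.pi, n)          # hidden variable, uniform per pair
    return np.mean(lhv_outcome(a, lam) * lhv_outcome(b, lam))

# Standard CHSH settings; this particular model saturates the bound S = 2.
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)

print(S)               # ~2.0: at, but never beyond, the local bound
print(2 * np.sqrt(2))  # ~2.828: the quantum prediction E(a,b) = cos 2(a-b) exceeds it
```

Whatever deterministic outcome function one substitutes for lhv_outcome, the Monte Carlo estimate of S stays within [-2, 2]; the quantum value 2*sqrt(2) is unreachable, which is the content of the proof.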
 
  • #54
nortonian said:
Detection devices sample a volume of space-time much greater than the theoretical size of a photon.

This completely depends on the experimental setup. You have detectors with large and small area and (while size is ill defined) the volume on which a photon is localized tends to be on the order of the coherence volume which can vary drastically.

nortonian said:
Because of this there may be errors in the physical interpretation of data from anti-bunching experiments. This has occurred in the past in other areas. For example, Loudon (2000) in the introduction of his book asserts that
There are several errors and/or omissions with this statement:
1. Taylor calculated photon number by comparing it with average light intensity, however the fluctuation of photon density in the light beam is and always will be unknown because detectors are not perfect recording devices.

This is plain wrong. It is non-trivial to reconstruct the whole photon number statistics because detectors are almost never ideal. However, the ratio of the fluctuations to the mean photon number can be measured quite well and is independent of detector efficiency. This is why people always measure [itex]g^{(2)}(0)[/itex] and not the whole photon number distribution.
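The claimed independence from detector efficiency is easy to check numerically (again just a toy sketch with assumed numbers): modeling an inefficient detector as binomial thinning, where each photon is counted independently with probability eta, leaves [itex]g^{(2)}(0)[/itex] unchanged, because [itex]\langle n(n-1)\rangle[/itex] and [itex]\langle n\rangle^2[/itex] both scale as [itex]\eta^2[/itex].

```python
import numpy as np

def g2_zero(n):
    """Estimate g2(0) = <n(n-1)> / <n>^2 from photon-number samples."""
    n = np.asarray(n, dtype=float)
    return (n * (n - 1)).mean() / n.mean() ** 2

rng = np.random.default_rng(2)

# Thermal (Bose-Einstein) photon numbers with mean 2, and the same field
# seen through a lossy detector: each photon is counted independently
# with probability eta (binomial thinning).
n_true = rng.geometric(1 / 3, 500_000) - 1   # mean photon number 2
eta = 0.1                                    # assumed 10% detector efficiency
n_detected = rng.binomial(n_true, eta)

print(g2_zero(n_true))      # ~2.0 for thermal light
print(g2_zero(n_detected))  # still ~2.0: g2 survives 90% loss
```

The mean count drops by a factor of ten, but the normalized correlation does not move, which is exactly why g2 measurements tolerate imperfect detectors.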

nortonian said:
2. Photographic emulsions depend on the developability of silver bromide crystals to record the arrival of photons. This occurs in two stages lasting approximately 10^-6 sec, and is characterized by the ejection of an electron and subsequent neutralization of a silver atom. (C.E.K. Mees & T.H. James, The Theory of the Photographic Process, (MacMillan, NY), 1966.) The chemical properties of the crystals together with the quantum efficiency of film have been used to calculate the estimated number of photons required to develop a silver halide crystal, found to be approximately 100 photons. (P. Kowaliski, Applied Photographic Theory (Wiley, NY), 1972.) Taylor did not know this so his experiment is flawed.

Yes, but who cares? The measurement by Taylor anyway has absolutely nothing to do with showing that photons have a particle nature. Also nobody uses photographic emulsions in measurements of photon number statistics. One uses avalanche photodiodes for that purpose.

nortonian said:
3. A more recent study has found no interference fringes even after 336 hours of exposure with a photodetector, a finding which directly contradicts the idea that a photon interferes only with itself. (E. Panarella (1986). "Quantum uncertainties", in W.M. Honig, D.W. Kraft, & E. Panarella (Eds.) Quantum Uncertainties: Recent and Future Experiments and Interpretations, (p. 105) New York: Plenum Press.)

The idea that photons interfere only with themselves, as proposed way back by Dirac, was already refuted in the 60s. Roy Glauber formulated a funny punch at Dirac's famous statement in some of his publications, maybe even in his Nobel lecture; I would need to check that. The idea that there is also multi-photon interference is well known, but this is something you do not see in a simple double slit experiment. Also, whether or not you see an interference pattern in a double slit experiment depends on the distance between source and slit and the size of the light source. I do not know what your experiment is aiming at. Anyway, it does not really matter. Such experiments are not the ones used to validate the nature of photons.

nortonian said:
If Loudon is unaware of these properties of film then how do I know that the photodetection process was properly analyzed? I have found no analysis of its physical properties in his book. The correct interpretation of anti-bunching and other quantum optical experiments is based on the physical nature of detections and is therefore suspect unless these questions can be resolved.

You are aware that Glauber got a Nobel prize for the theory of optical coherence and the physics of optical detectors? Read his work (or his Nobel lecture for an easy introduction) or a good book (Mandel/Wolf is the bible of quantum optics; for beginners, Fox's Introduction to Quantum Optics is also ok and maybe easier to understand). Talking about photographic films in connection with experiments which tell us nothing about photon statistics is throwing red herrings. The key signature of the photon nature is antibunching, and the necessary physics about detectors can be found in the books and publications I mentioned. If you find any flaws in these, that is a good starting point for discussion. Just wrongly claiming that detectors are not understood is not.
 
  • #55
lugita15 said:
nortonian, rather than getting bogged down in the weeds of how we know a photon detection is really a photon detection, let me ask you this. The proof in the Herbert link I gave you just involves correlations of detector clicks, whatever is causing those clicks. The point of the proof is that no local hidden variable theory can explain the correlations of detector clicks predicted by QM. Do you agree with this conclusion?
Yes, I agree, but I do not think it is significant with respect to locality because it has to do with detections not with the causes of the detections.
Cthugha said:
I do not know what your experiment is aiming at. Anyway, it does not really matter. Such experiments are not the ones used to validate the natur e of photons.

Talking about photographic films in connection with experiments which tell us nothing about photon statistics is throwing red herrings. The key signature of the photon nature is antibunching and the necessary physics about detectors can be found in the books and publications I mentioned. If you find any flaws in these that is a good starting point for discussion. Just wrongly claiming that detectors are not understood is not.

For you a photon does not exist until it is observed. For me it is impossible to observe a single optical photon because more than one photon is needed to create a detection event. The experiment described in 3 proves this for interference effects. The experiments in 2 were able to prove it for photographic film in general because the development process occurs very slowly. It probably cannot be proven for photodiodes because the reaction time is quicker. You seem to think I am trying to say that qm is wrong. Not at all. I just want it to be made clear that there is a physical difference between what causes a detection and the detection itself. I have not seen any published work that attempts to distinguish between them. Why is that important? Because it deals with physical reality and locality. If qm wants to draw conclusions about what is real it had better analyze all aspects of an experiment, not just what it chooses to.
 
  • #56
nortonian said:
For you a photon does not exist until it is observed. For me it is impossible to observe a single optical photon because more than one photon is needed to create a detection event. The experiment described in 3 proves this for interference effects.

This is not correct. The experiment in 3 shows that more than 1 photon is needed to create a detection event for the detector used.

nortonian said:
The experiments in 2 were able to prove it for photographic film in general because the development process occurs very slowly.

Indeed photographic film typically does not show single photon sensitivity. That is generally accepted. However, it is trivial that detectors without single-photon sensitivity like film or most CCDs are not able to detect single photons. This is why one uses detectors with single-photon sensitivity or even the ability to resolve photon numbers for experiments where single photons matter.
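A toy Monte Carlo makes this distinction concrete. (The 100-photon grain threshold is the figure cited earlier in the thread; the detector efficiency and the click model are assumed, simplified illustrative values, not measured detector physics.)

```python
import numpy as np

rng = np.random.default_rng(3)

THRESHOLD = 100  # photons needed to make one film grain developable (figure cited above)
ETA = 0.5        # assumed quantum efficiency of a single-photon detector

# A faint beam: in each exposure interval a grain/detector absorbs a
# Poissonian number of photons with mean well below 1.
n_intervals = 1_000_000
absorbed = rng.poisson(0.05, n_intervals)

# Film: a grain develops only if the threshold is crossed within one interval.
film_events = int(np.sum(absorbed >= THRESHOLD))

# Single-photon detector, simplified: clicks with probability ETA whenever
# at least one photon arrives in the interval.
spad_events = int(np.sum((absorbed >= 1) & (rng.random(n_intervals) < ETA)))

print(film_events)  # 0: the film never registers the faint beam
print(spad_events)  # tens of thousands of clicks from the very same beam
```

Under these assumptions the same faint beam that leaves the film blank produces a steady click record on the single-photon detector, which is why the choice of detector decides whether single-photon experiments are possible at all.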


nortonian said:
It probably cannot be proven for photodiodes because the reaction time is quicker. You seem to think I am trying to say that qm is wrong. Not at all. I just want it to be made clear that there is a physical difference between what causes a detection and the detection itself. I have not seen any published work that attempts to distinguish between them.

No, I am just saying that you are arguing from a standpoint which roughly corresponds to the beginning of the seventies. I have given you plenty of references on detector theory, most prominently the Mandel/Wolf and references therein. If you choose to ignore them, I cannot help you much. There are plenty of publications about SPADs and single photon sensitivity.

nortonian said:
Why is that important? Because it deals with physical reality and locality. If qm wants to draw conclusions about what is real it had better analyze all aspects of an experiment, not just what it chooses to.

You always fall back to discussing detectors which are not sensitive to single photons and completely ignore like 35 years of publications on detectors like avalanche photodiodes which are sensitive to single photons. I said before that antibunching is THE key signature of single photons. Perfect antibunching is impossible to measure using detectors which are not single photon sensitive.
 
  • #57
Cthugha said:
The experiment in 3 shows that more than 1 photon is needed to create a detection event for the detector used.
No, it shows that more than one photon is needed for interference. The complete experiment was as follows: the initial step was to produce a diffraction pattern using coherent light and a 20-second exposure time. A filter was then inserted in the beam so that 2.5 hours were required to obtain an equivalent intensity. No light at all was registered by the film. Exposure time was increased to 17.5 hours with a nearly 10-fold increase in intensity before the film registered the presence of the light beam. A diffraction pattern was still not observed. Even by increasing the exposure to 336.3 hours and a 100-fold increase in intensity the expected diffraction pattern could not be obtained. The same result was also obtained by using a detector of the photoemissive type.

Cthugha said:
Indeed photographic film typically does not show single photon sensitivity. That is generally accepted.
Then why did Loudon use Taylor's experiment, which uses film, as proof of single photon interference in his textbook?
Cthugha said:
However, it is trivial that detectors without single-photon sensitivity like film or most CCDs are not able to detect single photons. This is why one uses detectors with single-photon sensitivity or even the ability to resolve photon numbers for experiments where single photons matter.
You seem to be saying that single photon interference does not occur for film but it can occur in experiments with improved detectors like the ones Mandel describes. I don't see why interference should depend on what detector is used. Either you are making a distinction between the terms "photon" and "one-photon state", or you are saying that if SPAD detectors were used in experiment 3 they would detect an interference pattern.
Cthugha said:
I have given you plenty of references on detector theory, most prominently the Mandel/Wolf and references therein.
Please be patient.
 
  • #58
nortonian said:
Yes, I agree, but I do not think it is significant with respect to locality because it has to do with detections not with the causes of the detections.
But the whole point of the proof is to show that whatever is causing the detections can NOT be described by local hidden variables.
 
  • #59
nortonian said:
No, it shows that more than one photon is needed for interference. The complete experiment was as follows: the initial step was to produce a diffraction pattern using coherent light and a 20 second exposure time. A filter was then inserted in the beam so that 2.5 hours were required to obtain an equivalent intensity. No light at all was registered by the film. The exposure time had to be increased to 17.5 hours, with a nearly 10-fold increase in intensity, before the film registered the presence of the light beam, and a diffraction pattern was still not observed. Even by increasing the exposure to 336.3 hours with a 100-fold increase in intensity, the expected diffraction pattern could not be obtained. The same result was also obtained using a detector of the photoemissive type.

I routinely perform similar experiments and diffraction and interference patterns never change with intensity. The only case where this happens is when you use detectors relying on TPA (two-photon-absorption) or even multiple photon absorption. This is for example the case when you have a detector based on some semiconductor having a bandgap and use photons that have energy less than the bandgap. In that case you need to have two or more photons arriving within the coherence time of the light to create a transition and a detection event. That can basically happen for every detector that has some characteristic "activation energy" like the mentioned semiconductor detectors or photographic film when low-energy photons are used. So it would be necessary to know the wavelength of the light used and the exact kind of detectors used before one can interpret anything.
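The TPA point above can be sketched numerically: a linear detector's integrated signal scales as I·t, so attenuation can always be compensated by a longer exposure, while a two-photon detector's signal scales as I²·t and stays suppressed no matter how long one exposes. The numbers below are purely illustrative, not the parameters of any actual experiment:

```python
# Sketch: linear vs two-photon-absorption (TPA) detector response under attenuation.
# All numbers are illustrative only, not the parameters of any actual experiment.

def linear_counts(intensity, time):
    """Linear detector: integrated signal scales as I * t."""
    return intensity * time

def tpa_counts(intensity, time):
    """TPA detector: rate scales as I**2, so integrated signal scales as I**2 * t."""
    return intensity ** 2 * time

I0, t0 = 1.0, 20.0        # reference intensity and exposure (arbitrary units)
atten = 1 / 450.0         # e.g. a filter turning a 20 s exposure into a 2.5 h one
t_comp = t0 / atten       # extend exposure to compensate the attenuation linearly

print(linear_counts(I0 * atten, t_comp) / linear_counts(I0, t0))  # 1.0: fully recovered
print(tpa_counts(I0 * atten, t_comp) / tpa_counts(I0, t0))        # ~0.0022: still 450x down
```

This is why knowing the detector's response mechanism matters before interpreting a vanishing pattern at low intensity.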

nortonian said:
Then why did Loudon use Taylor's experiment, which uses film, as proof of single photon interference in his textbook?

I do not know. I also do not like Loudon's book, but that is a matter of taste. I just would like to point out that single photon interference does not mean that single photons are present, but that interference between different photons is not present. By the way, a state containing several indistinguishable photons within the coherence volume does not qualify as having DIFFERENT photons. This is a tiny point which is often overlooked. Actually, single photon interference is not the best name for the phenomenon, but it is the one which has grown historically. Also, whether or not one sees interference also depends on the detector dimensions and time resolution compared to the spatial and temporal coherence properties of the light used.

nortonian said:
You seem to be saying that single photon interference does not occur for film but it can occur in experiments with improved detectors like the ones Mandel describes. I don't see why interference should depend on what detector is used. Either you are making a distinction between the terms "photon" and "one-photon state", or you are saying that if SPAD detectors were used in experiment 3 they would detect an interference pattern.

I do not know experiment 3 and it is hard to tell without knowing details like wavelength and coherence time of the light used, angular size of the light source as seen by the detectors, detector resolution and so on. I can tell you that in any experiment I performed interference patterns do not vanish at reduced intensity - unless of course the signal becomes smaller than the dark count rate of the detector used. Regarding the terminology "photon" and "single-photon state" please see my last comment.

nortonian said:
Please be patient.

No problem. One does not read the Mandel/Wolf within a day or even a week. It takes really long.
 
  • #60
Cthugha said:
I do not know experiment 3 and it is hard to tell without knowing details like wavelength and coherence time of the light used, angular size of the light source as seen by the detectors, detector resolution and so on. I can tell you that in any experiment I performed interference patterns do not vanish at reduced intensity - unless of course the signal becomes smaller than the dark count rate of the detector used.
I have a copy of the manuscript and will see what it says.
lugita15 said:
But the whole point of the proof is to show that whatever is causing the detections can NOT be described by local hidden variables.
I strongly suspect that a detection event is caused by the superposition of fields from many photons. There are several reasons for this.
1. A photographic detection is caused by a superposition of photons, or the fields of photons, so perhaps the same mechanism is what causes detections in other types of detectors.
2. The photon is defined as a wave-packet function whose mean energy is given by hbar times an average over its frequency components. This supports the idea of many superposed fields acting on the detector.
3. The wave packet is delocalized whereas the detection is localized. Either there is a wave function collapse, a conceptual device I prefer to avoid, or there is a local superposition of fields that causes the detection, which is preferred because it avoids non-locality.
4. The argument that a SPAD only detects single photons is a clear objection to these arguments; however, it was defined to be that way and due to uncertainty there is no way to positively distinguish between the two possibilities.
When these points are taken together it means that there is a possibility that the detections are not non-local, but rather due to em fields which always act locally. In that case the Bell theorem is not about non-locality, it is about a characteristic of the light source, or whatever other physical object is being measured.
 
  • #61
nortonian said:
I have a copy of the manuscript and will see what it says.

Just to make my point clear: The author should somehow verify that the detector he uses is indeed a linear one for the range of intensities he is looking at. Generally speaking the photon number distribution in some detector area will be a Poissonian distribution around some mean value. For a detection event to occur one either needs a certain amount of photons within the coherence time of the light (for coherent detection) or during some characteristic timescale of the detector (for incoherent detection) to be present. As soon as the mean photon number becomes similar to the photon number needed for a detection event, non-linearities can and will occur due to the Poissonian nature of the photon number distribution. However, this is a detector effect. It could for example result in vanishing side peak structures or have similar effects.
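The Poissonian argument can be made concrete: if a detection event requires at least k photons within the relevant timescale, the firing probability is P(N ≥ k) for Poisson-distributed N, which is nearly linear in the mean for k = 1 but quadratic for k = 2 at low intensity. A minimal sketch with illustrative thresholds and mean photon numbers:

```python
# Sketch: firing probability of a threshold detector that needs at least k photons
# within its characteristic time, assuming Poissonian photon-number statistics.
# The thresholds and mean photon numbers are illustrative.
from math import exp, factorial

def p_detect(mean, k):
    """P(N >= k) for N ~ Poisson(mean)."""
    return 1.0 - sum(exp(-mean) * mean ** n / factorial(n) for n in range(k))

for mean in (1e-1, 1e-2, 1e-3):
    # k = 1 responds ~linearly in the mean; k = 2 falls off quadratically,
    # so a two-photon threshold looks dramatically non-linear at low intensity.
    print(mean, p_detect(mean, 1), p_detect(mean, 2))
```

Halving the intensity halves a k = 1 signal but quarters a k = 2 signal, which is exactly the kind of detector non-linearity described above.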

nortonian said:
2. The photon is defined as a wave-packet function whose mean energy is given by hbar times an average over its frequency components. This supports the idea of many superposed fields acting on the detector.

This is not the typical definition of a photon. Which book describes it this way?

nortonian said:
4. The argument that a SPAD only detects single photons is a clear objection to these arguments; however, it was defined to be that way and due to uncertainty there is no way to positively distinguish between the two possibilities.

Due to uncertainty? Typical clump bunch models are easily ruled out as they cannot explain the joint detection rates at several detectors for non-classical light states. If you do not like the original antibunching paper, a more didactical one was published by Grangier:

P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences", Europhys. Lett. 1, 173-179 (1986).

You need to find a model that violates inequality (7) in order to be in line with experimental observations. That is not possible with classical wave models and that is also the point constantly ignored by the clump-crackpot community.
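Inequality (7) of the Grangier paper bounds the anticorrelation parameter α = P_coinc/(P_t·P_r) at a beam splitter: any classical field gives α ≥ 1, while a single-photon state drives α toward 0. A toy Monte Carlo (idealized unit-efficiency detectors, no dark counts; the detection probability `eta` is made up) shows the two cases:

```python
# Toy Monte Carlo for Grangier's anticorrelation parameter at a 50/50 beam splitter.
# Idealized: unit-efficiency detectors, no dark counts; 'eta' below is made up.
import random
random.seed(0)

def alpha(trials, source):
    """alpha = P_coincidence / (P_transmitted * P_reflected); classical fields give >= 1."""
    nt = nr = nc = 0
    for _ in range(trials):
        t, r = source()
        nt += t
        nr += r
        nc += t and r
    pt, pr, pc = nt / trials, nr / trials, nc / trials
    return pc / (pt * pr)

def single_photon():
    # One photon per gate: it is transmitted OR reflected, never both.
    t = random.random() < 0.5
    return t, not t

def weak_classical_pulse():
    # Classical pulse: the two detectors fire independently (probability eta each).
    eta = 0.1
    return random.random() < eta, random.random() < eta

print(alpha(200_000, single_photon))         # 0.0: perfect anticorrelation, below the classical bound
print(alpha(200_000, weak_classical_pulse))  # ~1.0: at the classical bound alpha >= 1
```

No tuning of `eta` rescues the classical case: independent detections pin α at or above 1, which is the point of the inequality.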
 
  • #62
Cthugha said:
If you do not like the original antibunching paper, a more didactical one was published by Grangier:

P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences", Europhys. Lett. 1, 173-179 (1986).
If anyone is interested, attached is that paper.
 

Attachments

  • Grangier Paper.pdf
    437.2 KB
  • #63
Cthugha said:
Just to make my point clear: The author should somehow verify that the detector he uses is indeed a linear one for the range of intensities he is looking at.
The experiment with low intensity light by Panarella was not carefully thought out. The physical model he used is a clump model and leads to not very sophisticated experimental procedures. The film used is Type 47 Polaroid high speed film which may seem to be the proper choice. However, a study of starlight photography by Kowaliski indicates that “the use of a slower film can further improve the appearance of the signal.” Only one type of light was used, from a He-Ne laser, but incoherent light should also have been tried for comparison. The data was not normalized for intensity. In other words, as filters were inserted in the output of the interferometer the exposure time should have been increased an amount sufficient to maintain the same total recorded intensity. The visibility is known to decrease with increasing time of exposure, but no one has shown whether it varies linearly. Nevertheless the claim that interference effects were eliminated must be taken seriously.
Cthugha said:
This is not the typical definition of a photon. Which book describes it this way?
Loudon
Cthugha said:
You need to find a model that violates inequality (7) in order to be in line with experimental observations. That is not possible with classical wave models and that is also the point constantly ignored by the clump-crackpot community.
I am not talking about classical wave models, rather about photons with classical fields that superpose. In previous posts I have taken the position that non-locality in Bell theorem tests is a field effect and is therefore due to classical properties of light. If those tests can be successfully performed using very low intensity light from which classical field properties such as interference have been eliminated then qm could make the claim of non-locality and not before. The non-locality experiments depend on the precise meaning of “photon” and “one-photon state”, but as has been pointed out here, by Loudon, and by others there is some ambiguity in the definitions.

I have no dispute with the calculations of qm or the experimental results, but there are serious problems with how initial conditions were defined and therefore with the conclusions drawn from them. No one knows exactly what is going on at the microscopic level and to make pronouncements on reality and locality on such a shaky basis is rash, as though they are simply properties of matter like mass or anything else.
 
  • #64
nortonian said:
Only one type of light was used, from a He-Ne laser, but incoherent light should also have been tried for comparison. The data was not normalized for intensity. In other words, as filters were inserted in the output of the interferometer the exposure time should have been increased an amount sufficient to maintain the same total recorded intensity. The visibility is known to decrease with increasing time of exposure, but no one has shown whether it varies linearly. Nevertheless the claim that interference effects were eliminated must be taken seriously.

Well, as I said, it would be most important to check the response linearity of the film first before jumping to conclusions. Most people doing research in optics wrongly assume a linear detector response in a regime where it is in fact not linear at least once in their lives - at least this is my experience. Most of them learn an important lesson from that. Checking incoherent light may not be too interesting. It may be interesting to compare thermal light with coherence times shorter and longer than the typical 'response' time of the film, though.

nortonian said:
I am not talking about classical wave models, rather about photons with classical fields that superpose.

Ok, but the field associated with a photon is classical anyway (is that the point in Loudon's book you mean?). Non-classical signatures arise only at the intensity level. This can be seen easily in the fact that [itex]g^{(1)}[/itex], the field-field correlation function does not carry any signatures of non-classicality and cannot be used to distinguish classical from nonclassical states, while [itex]g^{(2)}[/itex], the intensity correlation function does carry such signatures. Classicality of a system with respect to some quantity roughly means that a measurement of that quantity does not disturb the system. This is trivially true for field correlation measurements, but not true for intensity correlation measurements.
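The g⁽²⁾ distinction can be checked numerically from photon-number statistics alone: g⁽²⁾(0) = ⟨n(n−1)⟩/⟨n⟩² is 1 for coherent (Poissonian) light, 2 for thermal (bunched) light, and 0 for a one-photon Fock state, in each case independent of the mean intensity. A rough sampling sketch (the mean photon numbers are arbitrary):

```python
# Sketch: g2(0) = <n(n-1)> / <n>^2 from photon-number samples.
# Coherent (Poissonian) -> 1, thermal (bunched) -> 2, one-photon Fock state -> 0,
# in each case independent of the mean intensity.
import math
import random
random.seed(1)

def g2(samples):
    mean = sum(samples) / len(samples)
    return sum(k * (k - 1) for k in samples) / len(samples) / mean ** 2

def poisson(mean):
    # Knuth's method; adequate for small means.
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

coherent = [poisson(2.0) for _ in range(100_000)]
thermal = [poisson(random.expovariate(1 / 2.0)) for _ in range(100_000)]  # exponentially mixed mean
fock1 = [1] * 100_000

print(g2(coherent))  # ~1.0
print(g2(thermal))   # ~2.0
print(g2(fock1))     # 0.0
```

Note that g⁽²⁾ is built from intensity (photon-number) correlations, which is exactly why it can carry non-classical signatures while the field correlation g⁽¹⁾ cannot.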

nortonian said:
In previous posts I have taken the position that non-locality in Bell theorem tests is a field effect and is therefore due to classical properties of light.

But this position is not tenable. The closest completely classical analogue to SPDC emission you can find is some phase conjugated classical light field showing classical phase conjugated correlations. See e.g. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: from quantum to classical to computational" in Advances in Optics and Photonics, Vol. 2, Issue 4, pp. 405-450 (2010) for a brief review of phase sensitive coherence properties. However, using this kind of light field in Bell tests does not lead to any violations of Bell inequalities. Obviously, also non-classicality in general is very well known to not be a field effect, so it is very strange to attribute non-locality to field effects.

nortonian said:
If those tests can be successfully performed using very low intensity light from which classical field properties such as interference have been eliminated then qm could make the claim of non-locality and not before.

I do not understand what you mean. In some sense interference is eliminated because (momentum)-entangled photons are necessarily spatially incoherent and cannot show perfect entanglement and a visible double slit interference pattern under the same experimental conditions. One can demonstrate that these properties are complementary (Phys. Rev. A 63, 063803 (2001)).

nortonian said:
The non-locality experiments depend on the precise meaning of “photon” and “one-photon state”, but as has been pointed out here, by Loudon, and by others there is some ambiguity in the definitions.

Is there? A single photon state is one for which [itex]g^{(2)}(0)=0[/itex]. There is no ambiguity about that.

nortonian said:
No one knows exactly what is going on at the microscopic level and to make pronouncements on reality and locality on such a shaky basis is rash, as though they are simply properties of matter like mass or anything else.

I do not see where the basis is shaky.
 
  • #65
Cthugha said:
But this position is not tenable. The closest completely classical analogue to SPDC emission you can find is some phase conjugated classical light field showing classical phase conjugated correlations. See e.g. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: from quantum to classical to computational" in Advances in Optics and Photonics, Vol. 2, Issue 4, pp. 405-450 (2010) for a brief review of phase sensitive coherence properties. However, using this kind of light field in Bell tests does not lead to any violations of Bell inequalities. Obviously, also non-classicality in general is very well known to not be a field effect, so it is very strange to attribute non-locality to field effects.

Classicality of a system with respect to some quantity roughly means that a measurement of that quantity does not disturb the system. This is trivially true for field correlation measurements, but not true for intensity correlation measurements.
You are speaking of non-local classical, which is the accepted interpretation of what it means to say classical. I am speaking of local classical. The first can be measured and represented quantitatively; the second cannot, but it may perhaps be revealed by physical means, as for example by low intensity light when photons become statistically independent.

Cthugha said:
I do not understand what you mean. In some sense interference is eliminated because (momentum)-entangled photons are necessarily spatially incoherent and cannot show perfect entanglement and a visible double slit interference pattern under the same experimental conditions. One can demonstrate that these properties are complementary (Phys. Rev. A 63, 063803 (2001)).
Formulations of the meaning of classical include implicit prejudices, such as saying that classical absorptions of energy occur gradually or that interference occurs over the coherence volume. The possibility that they are local phenomena is not considered, and so a weakened model of the classical is compared to qm and rejected.

Cthugha said:
I do not see where the basis is shaky.
I want only to present an alternative view. One that is local and physical. If it is inadequately expressed it reflects on my capabilities not on the overall picture. I defer to the majority view not because it is correct but due to its intricate design.
 
  • #66
I still do not get it. Almost all of your statements are at odds with experimental results. Do you have ANY justification for your crude theories?

nortonian said:
The first can be measured and represented quantitatively, the second cannot be but may perhaps be revealed by physical means, as for example by low intensity light when photons become statistically independent.

Statistical dependence or independence does not depend on the mean intensity, but just on the 'character of your light field'. Photons in a coherent light beam are always statistically independent irrespective of the mean intensity. Photons in a thermal beam always have the tendency to bunch. Your claim is plain wrong.

nortonian said:
Formulations of the meaning of classical include implicit prejudices such as saying that classical absorptions of energy occur gradually or interference occurs over the coherence volume. The possibility that they are local phenomena is not considered and so a weakened model of classical is compared to qm and rejected.

Argh. None of this is correct. Classical can be used as in opposition to quantized or it can mean that nonperturbative measurements are possible. The paper of Grangier explicitly shows that there are states for which both of these descriptions fail. 'Local' or 'non-local' does not even play a role when considering these arguments.

nortonian said:
I want only to present an alternative view. One that is local and physical. If it is inadequately expressed it reflects on my capabilities not on the overall picture. I defer to the majority view not because it is correct but due to its intricate design.

Your view is at odds with experimental results. Therefore it cannot be physical. By the way this is not a forum for personal theories.
 
  • #67
Cthugha said:
I still do not get it. Almost all of your statements are at odds with experimental results. Do you have ANY justification for your crude theories?
See attachment and explain quantum mechanically why there is greater intensity of field in the middle of the spark discharges. These are unretouched photos of Tesla coil discharges.
Cthugha said:
Statistical dependence or independence does not depend on the mean intensity, but just on the 'character of your light field'. Photons in a coherent light beam are always statistically independent irrespective of the mean intensity. Photons in a thermal beam always have the tendency to bunch. Your claim is plain wrong.
Sorry, change that to physical independence. Light does not interfere, or interferes less (as is apparent from the lower visibility or disappearance of fringes), because with low intensity light the photons are physically separated from each other.
Cthugha said:
Argh. None of this is correct.
We are speaking different languages.
 

Attachments

  • sparks.png
  • #68
nortonian said:
See attachment and explain quantum mechanically why there is greater intensity of field in the middle of the spark discharges. These are unretouched photos of Tesla coil discharges.

And this is supposed to show what? If you have the field distribution, know how and where photons are emitted in these discharges and know your detector response function you can trivially calculate what you will see on a picture. You do not need fringe physics for that.

nortonian said:
Sorry, change that to physical independence. Light does not interfere, or interferes less (as is apparent from the lower visibility or disappearance of fringes), because with low intensity light the photons are physically separated from each other.

Are you still discussing that Panarella junk science claim? It has not been published in a credible peer-reviewed journal and there are dozens of peer-reviewed publications contradicting the results presented there. Panarella is well known to be a fringe scientist who sometimes performed serious work, but very often crossed the border to just claiming nonsense. Panarella just measured his detector response function and claims that it is a property of the light field itself. There is a reason why he did not get his results published in a serious outlet.

nortonian said:
We are speaking different languages.

Nature speaks very clearly in terms of experimental results. Of course one can muddy the waters by claiming things which are not tenable, as Panarella did, but why should one do so?

Again, this is not a forum for discussing fringe or crackpot physics and also not a forum for personal theories as is explicitly described in the forum rules you agreed to. Unless you have some peer-reviewed publications backing up your daring claims, I do not see how this discussion could take a sensible course and I think it is better to just quit this discussion.
 
  • #69
Question? Would it be possible to re-write Bell's Theorem in terms of some other type of particle and test that particle under the given theorem?
 
  • #70
Kal-El said:
Question? Would it be possible to re-write Bell's Theorem in terms of some other type of particle and test that particle under the given theorem?

This has been done many times and with many different configurations. There are a lot of things that can be entangled (which means they violate a Bell inequality at some level). So try these:

http://arxiv.org/abs/1202.5328

http://arxiv.org/abs/1202.4206

Or better just look at some of these (this is a hodgepodge but you can still see the idea):

http://arxiv.org/find/quant-ph/1/AND+abs:+bell+abs:+test/0/1/0/all/0/1?per_page=100
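Whatever the particle, these tests all reduce to the same CHSH arithmetic: for singlet-state correlations E(a, b) = −cos(a − b), the standard analyzer settings give |S| = 2√2, above the local-realist bound of 2:

```python
# The CHSH combination for singlet correlations E(a, b) = -cos(a - b).
# The particle type is irrelevant; only the measured correlations enter.
import math

def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two analyzer settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two analyzer settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2, the local-realist (CHSH) bound
```

Any system whose measured correlations reproduce this cosine dependence, whether photons, ions, or neutrons, violates the same inequality.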
 

1. What is quantum mechanics?

Quantum mechanics is a branch of physics that describes the behavior of particles on a very small scale, such as atoms and subatomic particles. It is a mathematical framework that explains how these particles interact with each other and with energy.

2. Is quantum mechanics a complete theory of nature?

The answer to this question is still debated among scientists. Some argue that quantum mechanics is a complete theory, while others believe that there may be other underlying principles that are yet to be discovered.

3. What are the limitations of quantum mechanics?

Quantum mechanics has been very successful in describing the behavior of particles on a small scale. However, it breaks down when trying to explain phenomena on a larger scale, such as the behavior of macroscopic objects. It also does not fully explain gravity and the behavior of the universe on a cosmic scale.

4. How does quantum mechanics differ from classical mechanics?

Classical mechanics is the branch of physics that describes the behavior of objects on a larger scale, while quantum mechanics deals with particles on a smaller scale. Classical mechanics follows deterministic laws, while quantum mechanics introduces the concept of probability and uncertainty.

5. What are some real-world applications of quantum mechanics?

Quantum mechanics has numerous applications in modern technology, including transistors, lasers, and computer memory. It also plays a crucial role in fields such as chemistry, materials science, and quantum computing. Additionally, our understanding of quantum mechanics has led to advancements in medical imaging and cryptography.
