## Is quantum mechanics a complete theory of nature?

 Quote by Cthugha And I told you four times now that antibunching IS an accepted and unambiguous way to identify single photons. Where is your problem with that? Please provide some arguments why you think it is not enough.
Detection devices sample a volume of space-time much greater than the theoretical size of a photon. Because of this there may be errors in the physical interpretation of data from anti-bunching experiments. This has occurred in the past in other areas. For example, Loudon (2000) in the introduction of his book asserts that
 “Taylor (1909) failed to find any changes from the classical fringes of a Young interferometer when the light source was so feeble that only one photon at a time was present in the apparatus”.
There are several errors and/or omissions with this statement:
1. Taylor calculated photon number by comparing it with the average light intensity; however, the fluctuation of photon density in the light beam is and always will be unknown because detectors are not perfect recording devices.
2. Photographic emulsions depend on the developability of silver bromide crystals to record the arrival of photons. This occurs in two stages lasting approximately $10^{-6}$ sec, and is characterized by the ejection of an electron and subsequent neutralization of a silver atom. (C. E. K. Mees & T. H. James, The Theory of the Photographic Process, (MacMillan, NY), 1966.) The chemical properties of the crystals together with the quantum efficiency of film have been used to estimate the number of photons required to develop a silver halide crystal, found to be approximately 100 photons. (P. Kowaliski, Applied Photographic Theory (Wiley, NY), 1972.) Taylor did not know this, so his experiment is flawed.
3. A more recent study has found no interference fringes even after 336 hours of exposure with a photodetector, a finding which directly contradicts the idea that a photon interferes only with itself. (E. Panarella (1986). "Quantum uncertainties", in W.M. Honig, D.W. Kraft, & E. Panarella (Eds.) Quantum Uncertainties: Recent and Future Experiments and Interpretations, (p. 105) New York: Plenum Press.)

If Loudon is unaware of these properties of film then how do I know that the photodetection process was properly analyzed? I have found no analysis of its physical properties in his book. The correct interpretation of anti-bunching and other quantum optical experiments is based on the physical nature of detections and is therefore suspect unless these questions can be resolved.
nortonian, rather than getting bogged down in the weeds of how we know a photon detection is really a photon detection, let me ask you this. The proof in the Herbert link I gave you just involves correlations of detector clicks, whatever is causing those clicks. The point of the proof is that no local hidden variable theory can explain the correlations of detector clicks predicted by QM. Do you agree with this conclusion?

 Quote by nortonian Detection devices sample a volume of space-time much greater than the theoretical size of a photon.
This completely depends on the experimental setup. Detectors come with both large and small areas, and (while size is ill defined) the volume in which a photon is localized tends to be on the order of the coherence volume, which can vary drastically.

 Quote by nortonian Because of this there may be errors in the physical interpretation of data from anti-bunching experiments. This has occurred in the past in other areas. For example, Loudon (2000) in the introduction of his book asserts that There are several errors and/or omissions with this statement: 1. Taylor calculated photon number by comparing it with average light intensity, however the fluctuation of photon density in the light beam is and always will be unknown because detectors are not perfect recording devices.
This is plain wrong. It is non-trivial to reconstruct the whole photon number statistics because detectors are almost never ideal. However, the ratio of the fluctuations to the mean photon number can be measured quite well and is independent of detector efficiency. This is why people always measure $g^{(2)}(0)$ and not the whole photon number distribution.
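The efficiency-independence claimed here can be checked numerically. Below is a minimal sketch, assuming Python with numpy; the 10% efficiency and the thermal source with mean photon number 5 are arbitrary illustration choices, not values from any experiment discussed in the thread. Modeling an imperfect detector as binomial thinning distorts the photon-number distribution badly, yet leaves $g^{(2)} = \langle n(n-1)\rangle/\langle n\rangle^2$ at its thermal value of 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def g2(n):
    # normalized second factorial moment <n(n-1)> / <n>^2
    n = n.astype(float)
    return np.mean(n * (n - 1)) / np.mean(n) ** 2

# thermal light: Bose-Einstein (geometric) photon-number distribution, mean 5
mean_n = 5.0
p = 1.0 / (1.0 + mean_n)
n_thermal = rng.geometric(p, size=1_000_000) - 1  # support 0, 1, 2, ...

# lossy detector: each photon is registered with probability eta (binomial thinning)
eta = 0.1
m_detected = rng.binomial(n_thermal, eta)

print(g2(n_thermal))   # close to 2 for thermal light
print(g2(m_detected))  # still close to 2, despite 90% of photons being lost
```

The cancellation is exact in principle: binomial loss scales $\langle n\rangle$ by $\eta$ and $\langle n(n-1)\rangle$ by $\eta^2$, so the ratio is unchanged.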

 Quote by nortonian 2. Photographic emulsions depend on the developability of silver bromide crystals to record the arrival of photons. This occurs in two stages lasting approximately 10-6 sec, and is characterized by the ejection of an electron and subsequent neutralization of a silver atom. ( C.E.K.Mees & T.H. James, The Theory of the Photographic Process, (MacMillan, NY), 1966.) The chemical properties of the crystals together with quantum efficiency of film have been used to calculate the estimated number of photons required to develop a silver halide crystal and found to be approximately 100 photons. (P. Kowaliski, Applied Photographic Theory (Wiley, NY), 1972.) Taylor did not know this so his experiment is flawed.
Yes, but who cares? The measurement by Taylor anyway has absolutely nothing to do with showing that photons have a particle nature. Also nobody uses photographic emulsions in measurements of photon number statistics. One uses avalanche photodiodes for that purpose.

 Quote by nortonian 3. A more recent study has found no interference fringes even after 336 hours of exposure with a photodetector, a finding which directly contradicts the idea that a photon interferes only with itself. (E. Panarella (1986). "Quantum uncertainties", in W.M. Honig, D.W. Kraft, & E. Panarella (Eds.) Quantum Uncertainties: Recent and Future Experiments and Interpretations, (p. 105) New York: Plenum Press.)
The idea of photons interfering only with themselves, as proposed way back by Dirac, was already refuted in the 60s. Roy Glauber formulated a funny punch at Dirac's famous statement in some of his publications, maybe even in his Nobel lecture. I need to check that. The idea that there is also multi-photon interference is well known, but this is something you do not see in a simple double slit experiment. Whether or not you see an interference pattern in a double slit experiment also depends on the distance between source and slit and the size of the light source. I do not know what your experiment is aiming at. Anyway, it does not really matter. Such experiments are not the ones used to validate the nature of photons.

 Quote by nortonian If Loudon is unaware of these properties of film then how do I know that the photodetection process was properly analyzed? I have found no analysis of its physical properties in his book. The correct interpretation of anti-bunching and other quantum optical experiments is based on the physical nature of detections and is therefore suspect unless these questions can be resolved.
You are aware that Glauber got a Nobel prize for the theory of optical coherence and the physics of optical detectors? Read his work (or his Nobel lecture for an easy introduction) or a good book (Mandel/Wolf is the bible of quantum optics, for beginners Fox's introduction to quantum optics is also ok and maybe easier to understand). Talking about photographic films in connection with experiments which tell us nothing about photon statistics is throwing red herrings. The key signature of the photon nature is antibunching and the necessary physics about detectors can be found in the books and publications I mentioned. If you find any flaws in these that is a good starting point for discussion. Just wrongly claiming that detectors are not understood is not.

 Quote by lugita15 nortonian, rather than getting bogged down in the weeds of how we know a photon detection is really a photon detection, let me ask you this. The proof in the Herbert link I gave you just involves correlations of detector clicks, whatever is causing those clicks. The point of the proof is that no local hidden variable theory can explain the correlations of detector clicks predicted by QM. Do you agree with this conclusion?
Yes, I agree, but I do not think it is significant with respect to locality because it has to do with detections not with the causes of the detections.
 Quote by Cthugha I do not know what your experiment is aiming at. Anyway, it does not really matter. Such experiments are not the ones used to validate the natur e of photons. Talking about photographic films in connection with experiments which tell us nothing about photon statistics is throwing red herrings. The key signature of the photon nature is antibunching and the necessary physics about detectors can be found in the books and publications I mentioned. If you find any flaws in these that is a good starting point for discussion. Just wrongly claiming that detectors are not understood is not.
For you a photon does not exist until it is observed. For me it is impossible to observe a single optical photon because more than one photon is needed to create a detection event. The experiment described in 3 proves this for interference effects. The experiments in 2 were able to prove it for photographic film in general because the development process occurs very slowly. It probably cannot be proven for photodiodes because the reaction time is quicker. You seem to think I am trying to say that qm is wrong. Not at all. I just want it to be made clear that there is a physical difference between what causes a detection and the detection itself. I have not seen any published work that attempts to distinguish between them. Why is that important? Because it deals with physical reality and locality. If qm wants to draw conclusions about what is real it had better analyze all aspects of an experiment, not just what it chooses to.

 Quote by nortonian For you a photon does not exist until it is observed. For me it is impossible to observe a single optical photon because more than one photon is needed to create a detection event. The experiment described in 3 proves this for interference effects.
This is not correct. The experiment in 3 shows that more than 1 photon is needed to create a detection event for the detector used.

 Quote by nortonian The experiments in 2 were able to prove it for photographic film in general because the development process occurs very slowly.
Indeed photographic film typically does not show single photon sensitivity. That is generally accepted. However, it is trivial that detectors without single-photon sensitivity like film or most CCDs are not able to detect single photons. This is why one uses detectors with single-photon sensitivity or even the ability to resolve photon numbers for experiments where single photons matter.

 Quote by nortonian It probably cannot be proven for photodiodes because the reaction time is quicker. You seem to think I am trying to say that qm is wrong. Not at all. I just want it to be made clear that there is a physical difference between what causes a detection and the detection itself. I have not seen any published work that attempts to distinguish between them.
No, I am just saying that you are arguing from a standpoint which roughly corresponds to the beginning of the seventies. I have given you plenty of references on detector theory, most prominently the Mandel/Wolf and references therein. If you choose to ignore them, I cannot help you much. There are plenty of publications about SPADs and single photon sensitivity.

 Quote by nortonian Why is that important? Because it deals with physical reality and locality. If qm wants to draw conclusions about what is real it had better analyze all aspects of an experiment, not just what it chooses to.
You always fall back to discussing detectors which are not sensitive to single photons and completely ignore like 35 years of publications on detectors like avalanche photodiodes which are sensitive to single photons. I said before that antibunching is THE key signature of single photons. Perfect antibunching is impossible to measure using detectors which are not single photon sensitive.

 Quote by Cthugha The experiment in 3 shows that more than 1 photon is needed to create a detection event for the detector used.
No, it shows that more than one photon is needed for interference. The complete experiment was as follows: The initial step was to produce a diffraction pattern using coherent light and a 20 second exposure time. A filter was then inserted in the beam so that 2.5 hours were required to obtain an equivalent intensity. No light at all was registered by the film. The exposure time had to be increased to 17.5 hours, with a nearly 10-fold increase in intensity, before the film registered the presence of the light beam. A diffraction pattern was still not observed. Even after increasing the exposure to 336.3 hours with a 100-fold increase in intensity, the expected diffraction pattern could not be obtained. The same result was also obtained by using a detector of the photoemissive type.

 Quote by Cthugha Indeed photographic film typically does not show single photon sensitivity. That is generally accepted.
Then why did Loudon use Taylor's experiment, which uses film, as proof of single photon interference in his textbook?
 Quote by Cthugha However, it is trivial that detectors without single-photon sensitivity like film or most CCDs are not able to detect single photons. This is why one uses detectors with single-photon sensitivity or even the ability to resolve photon numbers for experiments where single photons matter.
You seem to be saying that single photon interference does not occur for film but it can occur in experiments with improved detectors like the ones Mandel describes. I don't see why interference should depend on what detector is used. Either you are making a distinction between the terms "photon" and "one-photon state", or you are saying that if SPAD detectors were used in experiment 3 they would detect an interference pattern.
 Quote by Cthugha I have given you plenty of references on detector theory, most prominently the Mandel/Wolf and references therein.
Please be patient.

 Quote by nortonian Yes, I agree, but I do not think it is significant with respect to locality because it has to do with detections not with the causes of the detections.
But the whole point of the proof is to show that whatever is causing the detections can NOT be described by local hidden variables.

 Quote by nortonian No, it shows that more than one photon is needed for interference. The complete experiment was as follows: The initial step in the experiment was to produce a diffraction pattern using coherent light and a 20 second exposure time. A filter was then inserted in the beam so that 2.5 hours were required to obtain an equivalent intensity. No light at all was registered by the film. Exposure time was increased to 17.5 hours and a nearly 10 fold increase in intensity before the film registered the presence of the light beam. A diffraction pattern was still not observed. Even by increasing the exposure to 336.3 hours and a 100 fold increase in intensity the expected diffraction pattern could not be obtained. The same result was also obtained by using a detector of the photoemissive type.
I routinely perform similar experiments and diffraction and interference patterns never change with intensity. The only case where this happens is when you use detectors relying on TPA (two-photon-absorption) or even multiple photon absorption. This is for example the case when you have a detector based on some semiconductor having a bandgap and use photons that have energy less than the bandgap. In that case you need to have two or more photons arriving within the coherence time of the light to create a transition and a detection event. That can basically happen for every detector that has some characteristic "activation energy" like the mentioned semiconductor detectors or photographic film when low-energy photons are used. So it would be necessary to know the wavelength of the light used and the exact kind of detectors used before one can interpret anything.

 Quote by nortonian Then why did Loudon use Taylor's experiment, which uses film, as proof of single photon interference in his textbook?
I do not know. I also do not like Loudon's book, but that is a matter of taste. I just would like to point out that single photon interference does not mean that single photons are present, but that interference between different photons is not present. By the way, a state containing several indistinguishable photons within the coherence volume does not qualify as having DIFFERENT photons. This is a tiny point which is often overlooked. Actually single photon interference is not the best name for the phenomenon, but it is the one which has grown historically. Also, whether or not one sees interference depends on the detector dimensions and time resolution compared to the spatial and temporal coherence properties of the light used.

 Quote by nortonian You seem to be saying that single photon interference does not occur for film but it can occur in experiments with improved detectors like the ones Mandel describes. I don't see why interference should depend on what detector is used. Either you are making a distinction between the terms "photon" and "one-photon state", or you are saying that if SPAD detectors were used in experiment 3 they would detect an interference pattern.
I do not know experiment 3 and it is hard to tell without knowing details like wavelength and coherence time of the light used, angular size of the light source as seen by the detectors, detector resolution and so on. I can tell you that in any experiment I performed interference patterns do not vanish at reduced intensity - unless of course the signal becomes smaller than the dark count rate of the detector used. Regarding the terminology "photon" and "single-photon state" please see my last comment.

 Quote by nortonian Please be patient.
No problem. One does not read the Mandel/Wolf within a day or even a week. It really takes a long time.

 Quote by Cthugha I do not know experiment 3 and it is hard to tell without knowing details like wavelength and coherence time of the light used, angular size of the light source as seen by the detectors, detector resolution and so on. I can tell you that in any experiment I performed interference patterns do not vanish at reduced intensity - unless of course the signal becomes smaller than the dark count rate of the detector used.
I have a copy of the manuscript and will see what it says.
 Quote by lugita15 But the whole point of the proof is to show that whatever is causing the detections can NOT be described by local hidden variables.
I strongly suspect that a detection event is caused by the superposition of fields from many photons. There are several reasons for this.
1. A photographic detection is caused by a superposition of photons, or the fields of photons, so perhaps the same mechanism is what causes detections in other types of detectors.
2. The photon is defined as a wave-packet function whose mean energy is given by hbar times an average over its frequency components. This supports the idea of many superposed fields acting on the detector.
3. The wave packet is delocalized whereas the detection is localized. Either there is a wave function collapse, a conceptual device I prefer to avoid, or there is a local superposition of fields that causes the detection, which is preferred because it avoids non-locality.
4. The argument that a SPAD only detects single photons is a clear objection to these arguments; however, it was defined to be that way and due to uncertainty there is no way to positively distinguish between the two possibilities.
When these points are taken together it means that there is a possibility that the detections are not non-local, but rather due to em fields which always act locally. In that case the Bell theorem is not about non-locality, it is about a characteristic of the light source, or whatever other physical object is being measured.

 Quote by nortonian I have a copy of the manuscript and will see what it says.
Just to make my point clear: The author should somehow verify that the detector he uses is indeed a linear one for the range of intensities he is looking at. Generally speaking the photon number distribution in some detector area will be a Poissonian distribution around some mean value. For a detection event to occur one either needs a certain amount of photons within the coherence time of the light (for coherent detection) or during some characteristic timescale of the detector (for incoherent detection) to be present. As soon as the mean photon number becomes similar to the photon number needed for a detection event, non-linearities can and will occur due to the Poissonian nature of the photon number distribution. However, this is a detector effect. It could for example result in vanishing side peak structures or have similar effects.
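The non-linearity described above can be sketched in a few lines of Python (the 2-photon threshold is an illustrative assumption, not a measured property of any particular film or detector). For Poisson-distributed photon numbers, a detector that needs at least $k$ photons within its characteristic timescale saturates at high mean photon number and responds roughly quadratically (for $k = 2$) at low mean, so its response is linear only in a narrow intermediate regime:

```python
import math

def fire_prob(mu, k):
    # probability that a threshold detector needing >= k photons fires,
    # given a Poisson photon number with mean mu
    return 1.0 - sum(math.exp(-mu) * mu**j / math.factorial(j) for j in range(k))

# linearity check for a k = 2 detector: halve the mean photon number
# and look at the ratio of responses (a linear detector would give 2)
for mu in (10.0, 1.0, 0.1):
    ratio = fire_prob(mu, 2) / fire_prob(mu / 2, 2)
    print(mu, ratio)  # ratios ~1.04 (saturated), ~2.93, ~3.87 (near-quadratic)
```

In the low-mean regime halving the intensity cuts the response by nearly a factor of four rather than two, which is exactly the sort of detector effect that could wash out faint side-peak structures.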

 Quote by nortonian 2. The photon is defined as a wave-packet function whose mean energy is given by hbar times an average over its frequency components. This supports the idea of many superposed fields acting on the detector.
This is not the typical definition of a photon. Which book describes it this way?

 Quote by nortonian 4. The argument that a SPAD only detects single photons is a clear objection to these arguments; however, it was defined to be that way and due to uncertainty there is no way to positively distinguish between the two possibilities.
Due to uncertainty? Typical clump/bunch models are easily ruled out, as they cannot explain the joint detection rates at several detectors for non-classical light states. If you do not like the original antibunching paper, a more didactical one was published by Grangier:

P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences", Europhys. Lett. 1, 173-179 (1986).

You need to find a model that violates inequality (7) in order to be in line with experimental observations. That is not possible with classical wave models and that is also the point constantly ignored by the clump-crackpot community.
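The anticorrelation measurement in that paper can be illustrated with a toy Monte Carlo (a sketch assuming Python with numpy; the number of pulses and the mean photon number of 0.2 are arbitrary illustration values). An ideal single-photon pulse exits exactly one port of a 50/50 beam splitter, giving an anticorrelation parameter $\alpha = P_c/(P_t P_r) = 0$, while faint coherent pulses give statistically independent clicks with $\alpha \approx 1$; classical wave models are bounded by $\alpha \geq 1$:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 200_000

# ideal single-photon pulses: each photon exits EITHER output of a 50/50 splitter
side = rng.random(trials) < 0.5
t1, r1 = side, ~side                      # transmitted / reflected detector clicks
alpha_single = (t1 & r1).mean() / (t1.mean() * r1.mean())
print(alpha_single)  # exactly 0: no coincidences, impossible for classical waves

# faint coherent pulses: Poisson photon number, each photon routed independently
n = rng.poisson(0.2, trials)
t = rng.binomial(n, 0.5)                  # photons transmitted at the splitter
r = n - t                                 # photons reflected
t2, r2 = t > 0, r > 0
alpha_coh = (t2 & r2).mean() / (t2.mean() * r2.mean())
print(alpha_coh)  # ~1: clicks at the two outputs are independent
```

This is the same logic as inequality (7) in the Grangier paper: measuring $\alpha < 1$ rules out any description in terms of classical field intensities.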

 Quote by Cthugha If you do not like the original antibunching paper, a more didactical one was published by Grangier: P. Grangier, G. Roger, and A. Aspect, "Experimental evidence for a photon anticorrelation effect on a beam splitter: A new light on single-photon interferences", Europhys. Lett. 1, 173-179 (1986).
If anyone is interested, attached is that paper.
Attached Files: Grangier Paper.pdf

 Quote by Cthugha Just to make my point clear: The author should somehow verify that the detector he uses is indeed a linear one for the range of intensities he is looking at.
The experiment with low intensity light by Panarella was not carefully thought out. The physical model he used is a clump model, which led to rather unsophisticated experimental procedures. The film used is Type 47 Polaroid high speed film, which may seem to be the proper choice. However, a study of starlight photography by Kowaliski indicates that “the use of a slower film can further improve the appearance of the signal.” Only one type of light was used, from a He-Ne laser, but incoherent light should also have been tried for comparison. The data were not normalized for intensity. In other words, as filters were inserted in the output of the interferometer, the exposure time should have been increased by an amount sufficient to maintain the same total recorded intensity. The visibility is known to decrease with increasing time of exposure, but no one has shown whether it varies linearly. Nevertheless the claim that interference effects were eliminated must be taken seriously.
 Quote by Cthugha This is not the typical definition of a photon. Which book describes it this way?
Loudon
 Quote by Cthugha You need to find a model that violates inequality (7) in order to be in line with experimental observations. That is not possible with classical wave models and that is also the point constantly ignored by the clump-crackpot community.
I am not talking about classical wave models, rather about photons with classical fields that superpose. In previous posts I have taken the position that non-locality in Bell theorem tests is a field effect and is therefore due to classical properties of light. If those tests can be successfully performed using very low intensity light from which classical field properties such as interference have been eliminated then qm could make the claim of non-locality and not before. The non-locality experiments depend on the precise meaning of “photon” and “one-photon state”, but as has been pointed out here, by Loudon, and by others there is some ambiguity in the definitions.

I have no dispute with the calculations of qm or the experimental results, but there are serious problems with how initial conditions were defined and therefore with the conclusions drawn from them. No one knows exactly what is going on at the microscopic level and to make pronouncements on reality and locality on such a shaky basis is rash, as though they are simply properties of matter like mass or anything else.

 Quote by nortonian Only one type of light was used, from a He-Ne laser, but incoherent light should also have been tried for comparison. The data was not normalized for intensity. In other words, as filters were inserted in the output of the interferometer the exposure time should have been increased an amount sufficient to maintain the same total recorded intensity. The visibility is known to decrease with increasing time of exposure, but no one has shown whether it varies linearly. Nevertheless the claim that interference effects were eliminated must be taken seriously.
Well, as I said, it would be most important to check the response linearity of the film first before jumping to conclusions. Most people doing research in optics make the mistake of assuming a linear detector response in a regime where it is in fact not linear at least once in their lives - at least this is my experience. Most of them learn an important lesson from that. Checking incoherent light may not be too interesting. It may be interesting to compare thermal light with coherence times shorter and longer than the typical 'response' time of the film, though.

 Quote by nortonian I am not talking about classical wave models, rather about photons with classical fields that superpose.
Ok, but the field associated with a photon is classical anyway (is that the point in Loudon's book you mean?). Non-classical signatures arise only at the intensity level. This can be seen easily in the fact that $g^{(1)}$, the field-field correlation function does not carry any signatures of non-classicality and cannot be used to distinguish classical from nonclassical states, while $g^{(2)}$, the intensity correlation function does carry such signatures. Classicality of a system with respect to some quantity roughly means that a measurement of that quantity does not disturb the system. This is trivially true for field correlation measurements, but not true for intensity correlation measurements.

 Quote by nortonian In previous posts I have taken the position that non-locality in Bell theorem tests is a field effect and is therefore due to classical properties of light.
But this position is not tenable. The closest completely classical analogue to SPDC emission you can find is some phase conjugated classical light field showing classical phase conjugated correlations. See e.g. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: from quantum to classical to computational" in Advances in Optics and Photonics, Vol. 2, Issue 4, pp. 405-450 (2010) for a brief review of phase sensitive coherence properties. However, using this kind of light field in Bell tests does not lead to any violations of Bell inequalities. Obviously, also non-classicality in general is very well known to not be a field effect, so it is very strange to attribute non-locality to field effects.

 Quote by nortonian If those tests can be successfully performed using very low intensity light from which classical field properties such as interference have been eliminated then qm could make the claim of non-locality and not before.
I do not understand what you mean. In some sense interference is eliminated because (momentum)-entangled photons are necessarily spatially incoherent and cannot show perfect entanglement and a visible double slit interference pattern under the same experimental conditions. One can demonstrate that these properties are complementary (Phys. Rev. A 63, 063803 (2001)).

 Quote by nortonian The non-locality experiments depend on the precise meaning of “photon” and “one-photon state”, but as has been pointed out here, by Loudon, and by others there is some ambiguity in the definitions.
Is there? A single photon state is one for which $g^{(2)}(0)=0$. There is no ambiguity about that.
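As a concrete illustration of this definition (a Python sketch; the truncation of the Poisson distribution at 40 photon numbers is just a numerical convenience), $g^{(2)}(0)$ computed directly from a photon-number distribution cleanly separates a single-photon Fock state from coherent light:

```python
import numpy as np
from math import exp, factorial

def g2_from_pn(p):
    # g2(0) = <n(n-1)> / <n>^2 from a photon-number distribution p[n]
    n = np.arange(len(p))
    mean = np.sum(n * p)
    return np.sum(n * (n - 1) * p) / mean**2

# single-photon Fock state: exactly one photon every time
print(g2_from_pn(np.array([0.0, 1.0])))  # 0.0 -- unreachable for classical fields

# coherent state (Poisson photon number, mean 1): independent photons
mu = 1.0
p_coh = np.array([exp(-mu) * mu**k / factorial(k) for k in range(40)])
print(g2_from_pn(p_coh))  # ~1.0
```

Classical fields are bounded by $g^{(2)}(0) \geq 1$, so any measured value below 1 (and in particular 0) is an unambiguous non-classical signature.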

 Quote by nortonian No one knows exactly what is going on at the microscopic level and to make pronouncements on reality and locality on such a shaky basis is rash, as though they are simply properties of matter like mass or anything else.
I do not see where the basis is shaky.

 Quote by Cthugha But this position is not tenable. The closest completely classical analogue to SPDC emission you can find is some phase conjugated classical light field showing classical phase conjugated correlations. See e.g. B. I. Erkmen and J. H. Shapiro, "Ghost imaging: from quantum to classical to computational" in Advances in Optics and Photonics, Vol. 2, Issue 4, pp. 405-450 (2010) for a brief review of phase sensitive coherence properties. However, using this kind of light field in Bell tests does not lead to any violations of Bell inequalities. Obviously, also non-classicality in general is very well known to not be a field effect, so it is very strange to attribute non-locality to field effects. Classicality of a system with respect to some quantity roughly means that a measurement of that quantity does not disturb the system. This is trivially true for field correlation measurements, but not true for intensity correlation measurements.
You are speaking of non-local classical, which is the accepted interpretation of what it means to say classical. I am speaking of local classical. The first can be measured and represented quantitatively; the second cannot be, but may perhaps be revealed by physical means, as for example by low intensity light when photons become statistically independent.

 Quote by Cthugha I do not understand what you mean. In some sense interference is eliminated because (momentum)-entangled photons are necessarily spatially incoherent and cannot show perfect entanglement and a visible double slit interference pattern under the same experimental conditions. One can demonstrate that these properties are complementary (Phys. Rev. A 63, 063803 (2001)).
Formulations of the meaning of classical include implicit prejudices such as saying that classical absorptions of energy occur gradually or interference occurs over the coherence volume. The possibility that they are local phenomena is not considered and so a weakened model of classical is compared to qm and rejected.

 Quote by Cthugha I do not see where the basis is shaky.
I want only to present an alternative view. One that is local and physical. If it is inadequately expressed it reflects on my capabilities not on the overall picture. I defer to the majority view not because it is correct but due to its intricate design.

I still do not get it. Almost all of your statements are at odds with experimental results. Do you have ANY justification for your crude theories?

 Quote by nortonian The first can be measured and represented quantitatively, the second cannot be but may perhaps be revealed by physical means, as for example by low intensity light when photons become statistically independent.
Statistical dependence or independence does not depend on the mean intensity, but just on the 'character of your light field'. Photons in a coherent light beam are always statistically independent irrespective of the mean intensity. Photons in a thermal beam always have the tendency to bunch. Your claim is plain wrong.

 Quote by nortonian Formulations of the meaning of classical include implicit prejudices such as saying that classical absorptions of energy occur gradually or interference occurs over the coherence volume. The possibility that they are local phenomena is not considered and so a weakened model of classical is compared to qm and rejected.
Argh. None of this is correct. Classical can be used as in opposition to quantized or it can mean that nonperturbative measurements are possible. The paper of Grangier explicitly shows that there are states for which both of these descriptions fail. 'Local' or 'non-local' does not even play a role when considering these arguments.

 Quote by nortonian I want only to present an alternative view. One that is local and physical. If it is inadequately expressed it reflects on my capabilities not on the overall picture. I defer to the majority view not because it is correct but due to its intricate design.
Your view is at odds with experimental results. Therefore it cannot be physical. By the way this is not a forum for personal theories.

 Quote by Cthugha I still do not get it. Almost all of your statements are at odds with experimental results. Do you have ANY justification for your crude theories?
See attachment and explain quantum mechanically why there is greater intensity of field in the middle of the spark discharges. These are unretouched photos of Tesla coil discharges.
 Quote by Cthugha Statistical dependence or independence does not depend on the mean intensity, but just on the 'character of your light field'. Photons in a coherent light beam are always statistically independent irrespective of the mean intensity. Photons in a thermal beam always have the tendency to bunch. Your claim is plain wrong.
Sorry, change that to physical independence. Light does not interfere, or interferes less, as is apparent from the lower visibility or disappearance of fringes, because with low intensity light the photons are physically separated from each other.
 Quote by Cthugha Argh. None of this is correct.
We are speaking different languages.
