Wave function collapse and entanglement

In summary: the thread debates whether undetected photons can explain away entanglement correlations. The opening post argues that wave function collapse is really subsampling of the full ensemble of photons, and that non-detection may correlate with photon properties, so that the correlations seen in EPR-type experiments are an artifact of post-selection rather than nonlocality. The replies counter that Bell inequality violations have grown stronger as detection efficiency improved, that experiments such as Rowe et al. address the detection loophole, and that GHZ-type tests violate local realism with every recorded event.
  • #1
zonde
Sorry, people, but some quantum mysteries look quite trivial to me.

Wave function collapse for photons is actually subsampling of the whole sample of photons. That way, wave function collapse can happen instantaneously across the whole experimental setup, or even backwards in time.

The photon entanglement mystery rests on one wrong assumption about a realistic explanation: that the whole sample of photons (assuming it were possible to detect all of them) should show polarization correlations. If you assume quite the opposite, it can be seen that this leads to a serious bias in the experimental setup; namely, increasing quantum efficiency will decrease the visibility of the correlations. This effect can actually be tested easily - an increase in quantum efficiency usually increases dark counts as well, but one can calculate whether the increase in dark counts alone explains the decrease in correlation visibility, or whether it is not enough.
Post-selection in entanglement experiments can be seen as a macroscopic quantum measurement, i.e. taking subsamples from the whole detected samples of Alice and Bob, resulting in a kind of wave function collapse.

This question can be approached from the other side as well. If one considers the reasons for non-detection of photons in detectors, it can be seen that there is not much room for the classical randomness needed for the fair sampling assumption. Some randomness can result from permanent defects in the crystal structure of the detector material, and some from temporary effects caused by thermal fluctuations. But a large part of the possible randomness can still result from properties of the photons themselves - polarization and phase. Randomness depending on the direction of the photons is reasonable to assume if the photon's wavelength is comparable to the length scale of the detector's crystal structure.
That may be hard to prove in principle, but it is clearly unreasonable to discard the idea that non-detection correlates with properties of the photons.
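As a toy illustration of this claimed bias (a minimal sketch, not a serious hidden variable model; the detection rule below is an arbitrary assumption): each pair shares a hidden polarization angle, outcomes are deterministic, and detection is more likely when the hidden polarization lies close to the analyzer axis. The correlation over detected coincidences then differs sharply from the correlation over all pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0, np.pi, n)  # shared hidden polarization angle per pair

def outcome(setting, lam):
    # Deterministic local outcome: +1 if the hidden polarization lies
    # within 45 degrees of the analyzer axis, else -1.
    return np.where(np.cos(2 * (lam - setting)) > 0, 1, -1)

def detected(setting, lam):
    # Arbitrary assumed rule: detection is likely when the hidden
    # polarization is close to the analyzer axis or perpendicular to it,
    # and unlikely near the boundary between the two branches.
    return rng.uniform(size=lam.size) < np.abs(np.cos(2 * (lam - setting)))

a, b = 0.0, np.pi / 6  # analyzer settings, 30 degrees apart
A, B = outcome(a, lam), outcome(b, lam)
coinc = detected(a, lam) & detected(b, lam)

E_full = np.mean(A * B)                # correlation if every pair is counted
E_seen = np.mean(A[coinc] * B[coinc])  # correlation on detected pairs only
print(E_full, E_seen)  # the detected subsample is noticeably more correlated
```

Whether any such rule can reproduce the full quantum correlations while respecting the measured detector efficiencies is exactly what the fair-sampling debate in this thread is about.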

So I say that the missing part of QM is the undetected photons.
Any comments?
 
  • #2
zonde said:
Sorry, people, but some quantum mysteries look quite trivial to me.
...
So I say that the missing part of QM is the undetected photons. Any comments?

This is incorrect, and experiments have already been done that test exactly this. Clearly you have not looked into it sufficiently, as it has been widely discussed and tested. There are two prongs:

1. As detection efficiency has risen, the violation of Bell Inequalities has INCREASED not DECREASED as you propose. When Alain Aspect did the earliest experiments around 1981, the inequalities were violated by about 5 standard deviations (SD). This increased steadily over the years and there are plenty of experiments with over 200 SD of violation today. That is the opposite of your prediction. Check out for example this comprehensive review of the subject, which has a whopping 505 references:

Research on Hidden Variable Theories: a review of recent progresses by Marco Genovese.

Further, the threshold for sampling issues has been passed. See for example:

Experimental violation of a Bell's inequality with efficient detection by Rowe et al, this is often referenced.

So it is no longer possible to develop stochastic LHV theories where fair sampling problems explain experimental results. In fact, the key researchers in the area of such local theories (folks like Hess and Philipp) spend much of their time trying to convince the mainstream that there is still some slim glimmer of hope in that respect... all to no avail. Every theory they have ever proposed has been soundly bashed and they are out of reasonable ideas.


2. There have been experiments developed in which there are no fair sampling issues! These are not Bell tests per se, but rather tests of other related theorems such as GHZ, Leggett and Hardy. In these, every recorded event is a violation of local realism!

See:

Experimental test of quantum nonlocality in three-photon Greenberger–Horne–Zeilinger entanglement by Pan et al.


Fair sampling is no longer an open loophole, although there are prominent scientists who feel that the final word in loophole-free tests as a whole has not been written. But that is a different and more complex subject.
 
  • #3
DrChinese said:
This is incorrect, and experiments have been done already to confirm. Clearly you have not looked into this sufficiently as this has been widely discussed and tested. There are 2 prongs:

1. As detection efficiency has risen, the violation of Bell Inequalities has INCREASED not DECREASED as you propose. When Alain Aspect did the earliest experiments around 1981, the inequalities were violated by about 5 standard deviations (SD). This increased steadily over the years and there are plenty of experiments with over 200 SD of violation today. That is the opposite of your prediction. Check out for example this comprehensive review of the subject, which has a whopping 505 references:

Research on Hidden Variable Theories: a review of recent progresses by Marco Genovese.
Thank you for the comments and the link to this review of hidden variable theories. I will certainly look at it later.

But about the first part of your comment: obviously I have not stated clearly enough what I am talking about. I was not talking about the visibility of the maximum violation, but about differences in the visibility of the perfect negative correlation within one single experimental setup, when the parameters of the detectors are manipulated so as to change the quantum efficiency.
What I am saying is that the graph across all possible differences in polarization rotation should tend to a flat line, not the commonly expected zigzag.
Probably it is not clear, but I am actually stating that there is no simple correlation between the hidden variable "polarization" and the observable "polarization" - only a complex one.

About the other part of your comment, I can say that I am quite familiar with Rowe's experiment, but you have to agree that it would be unreasonable to try to explain many different experiments all at once. Such an undertaking is bound to fail miserably.
And I can say that Rowe's experiment differs in quite a few ways from photon polarization experiments.
I think there is no doubt that the other experiment you mentioned is quite different too, and it even involves a different theoretical basis for the violation of local realism.
 
  • #4
zonde said:
About other part of your comment I can say that I am quite familiar with Rowe's experiment but you have to agree that it would be unreasonable to try to explain many different experiments all at once. Such an undertaking is bound to fail miserably.
And I can say that in Rowe's experiment there are quite some differences from photon polarization experiments.

I don't follow you. There are plenty of reasons to reject "fair sampling" as an objection, and if you are familiar with Rowe then why wouldn't you agree? Surely you do not reject Rowe's conclusion. Or do you?
 
  • #5
zonde said:
Obviously I have not stated clearly enough what I am talking about. I was not talking about the visibility of the maximum violation, but about differences in the visibility of the perfect negative correlation within one single experimental setup, when the parameters of the detectors are manipulated so as to change the quantum efficiency.
What I am saying is that the graph across all possible differences in polarization rotation should tend to a flat line, not the commonly expected zigzag.

I don't follow this either. Can you reference a specific paper or experiment?
 
  • #6
DrChinese said:
I don't follow you. There are plenty of reasons to reject "fair sampling" as an objection, and if you are familiar with Rowe then why wouldn't you agree? Surely you do not reject Rowe's conclusion. Or do you?
I would say that Rowe's experiment deserves a separate discussion, but briefly: it is much more local (3 μm) than any photon entanglement experiment, and there are a few things that do not allow conclusive statements. Namely, manipulations and measurements are not done individually; there is a joint readout from both ions.
They say themselves: "In this case, the issue of detection efficiency is replaced by detection accuracy."
Another thing that hinders analysis, but might not really be an issue, is that measurements are done only for selected differences in rotation angles.

But I think that a serious local realistic model should explain it anyway, justifying the QM formalism that predicts this entanglement.

DrChinese said:
I don't follow this either. Can you reference a specific paper or experiment?
I do not understand what to expand on, so I will try to cover all parts.

Well, if the question was about the visibility of the correlation: in Wikipedia, visibility is defined as the ratio between the amplitude and the average of the intensity fluctuation, i.e. 100% visibility for the entanglement correlation would mean a minimum at zero and a maximum at twice the average value.
Maybe an example? In the Weihs et al. paper, visibility is briefly mentioned:
http://arxiv.org/abs/quant-ph/9810080v1
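That definition can be checked in a few lines (the counts below are made-up numbers shaped like a typical coincidence curve, with the visibility built in):

```python
import numpy as np

# Coincidence counts vs. relative analyzer angle (illustrative numbers only)
theta = np.linspace(0, np.pi, 19)
counts = 500 * (1 + 0.97 * np.cos(2 * theta))  # curve with 97% visibility

def visibility(c):
    # Fringe visibility: (max - min) / (max + min); 1.0 means the minimum
    # reaches zero and the maximum reaches twice the average.
    return (c.max() - c.min()) / (c.max() + c.min())

print(visibility(counts))
```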

If the question is about the operating principles of the detectors (SPADs), then I found as an example this paper, which includes a graph of how the dark count rate (noise) and the quantum efficiency (detection rate) depend on the bias voltage above breakdown (Fig. 2 in the paper); but I believe a lot of technical specifications for SPADs include graphs like that.
http://ldavis.utsi.edu/PDFs/1993_Rev_Sci_Instrum_64_1524.pdf
There is another dependency, between the dark count rate and the temperature of the detector; temperature, on the other hand, does not affect the quantum efficiency.
Link: http://www.vad1.com/publications/CLEO_IQEC_2009-kim-makarov-jeong-kim.pdf
So changes in visibility are more reasonably studied with cooled detectors (I believe they are commercially available).

If the question is about the zigzag graph, then I am quite familiar with this paper by Thompson:
http://arxiv.org/abs/quant-ph/9611037v3
 
  • #7
zonde said:
1. I would say that Rowe's experiment deserves a separate discussion, but briefly: it is much more local (3 μm) than any photon entanglement experiment, and there are a few things that do not allow conclusive statements. Namely, manipulations and measurements are not done individually; there is a joint readout from both ions.
...

2. I do not understand what to expand on, so I will try to cover all parts.

...

3. If the question is about the zigzag graph, then I am quite familiar with this paper by Thompson:
http://arxiv.org/abs/quant-ph/9611037v3

1. Ah, now you want to consider closing the detection loophole simultaneously with the locality loophole. A strange request for a local realist, don't you think? Anyway, that is a different subject entirely: the so-called "loophole-free" Bell test. I thought your point was that only photons which support QM are detected, and that photons that do not support QM are not detected. Did I miss something? If that is your assertion, let's discuss that rather than getting off track.


2. I recognize that all photons will not be detected. Nothing strange about that. The question is: how does a photon "know" not to be detected if its results will contradict QM? You haven't explained that at all, you are simply asserting it is possible. It can't be their polarization alone, as photons can be seen equally well regardless of rotation. It cannot be that they are entangled, because local realistic theories do not feature entanglement as an attribute.

The most important question to me for your assertion: what is the "true" correlation rate that would occur if ALL photons were detected? Is there any evidence for or against that rate?

For example: at theta=60 degrees the QM predicted correlation rate is .2500. The local realistic value is .3333. The difference is .0833. Is that the value you assert is NOT being detected because it would mess up the QM prediction? In other words, if you are asserting something, you had better see it all the way through. I would like to know your specific prediction of the correct true correlation rate - if all photon pairs were detected - at 60 degrees... an actual number, please.
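The two numbers above come from simple closed forms - the QM cos²(theta) match rate versus the straight line of a simple local realistic model - and can be tabulated directly (a sketch of this standard comparison):

```python
import numpy as np

def qm_match_rate(theta_deg):
    # QM prediction for the fraction of matching polarization results
    # at relative analyzer angle theta: cos^2(theta)
    return np.cos(np.radians(theta_deg)) ** 2

def lr_match_rate(theta_deg):
    # What a simple local realistic model predicts: linear in the angle,
    # from 1 at 0 degrees down to 0 at 90 degrees
    return 1 - theta_deg / 90

for t in (0, 30, 60, 90):
    print(t, round(qm_match_rate(t), 4), round(lr_match_rate(t), 4))
```

At 60 degrees this reproduces the .2500 versus .3333 comparison in the post.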


3. Thompson is a discredited author and her work is a mess. The Chaotic Ball sounds fancy but does not explain the basics of Bell tests. She - like you in some ways - pinned her hopes on an increase in visibility to show QM was wrong. If your line of thinking is to mimic her ideas, I will end my side of this discussion now.
 
  • #8
DrChinese said:
1. Ah, now you want to consider closing the detection loophole simultaneously with the locality loophole. A strange request for a local realist, don't you think? Anyway, that is a different subject entirely: the so-called "loophole-free" Bell test. I thought your point was that only photons which support QM are detected, and that photons that do not support QM are not detected. Did I miss something? If that is your assertion, let's discuss that rather than getting off track.
So you want everything at once, but let me start with this.
Rowe's experiment does not close the detection loophole.
I said it already - they say themselves: "In this case, the issue of detection efficiency is replaced by detection accuracy."
So consider this explanation:
Atoms always scatter photons, not only sometimes, but the scattered photons all share the same properties - or, speaking in terms of QM, they are all entangled. This means that they are all either detected or not detected. But because in Rowe's experiment the scattered photons from the two atoms are allowed to interact, there is interference between them that changes the properties of the photons (for all photons of one atom in the same way), and sometimes that turns detectable photons into undetectable ones and undetectable ones into detectable ones.
 
  • #9
zonde said:
This means that they are all either detected or not detected. But because in Rowe's experiment the scattered photons from the two atoms are allowed to interact, there is interference between them that changes the properties of the photons (for all photons of one atom in the same way), and sometimes that turns detectable photons into undetectable ones and undetectable ones into detectable ones.

Well now you are moving into very strange territory. You are proposing a new property of light heretofore unknown (i.e. your "interference"), and it doesn't show up anywhere else except in this experiment. And notice how this no longer agrees with QM, which is the point of Bell. But you are correct that it requires the locality assumption for this particular experiment, even though local effects have already been ruled out as a possibility in other experiments (Weihs et al).

But there is no changing of photons here from detected to not-detected, that is not how this experiment works. There is the possibility that the result will not be accurate even though it is detected. That issue was factored into the accuracy of the results. So no fair sampling assumption is required, and no assumption for detection accuracy either.

However you may be happy (or unhappy as the case may be) to learn that a new experiment is being proposed that addresses the locality and detection "loopholes" simultaneously:

Towards a loophole-free test of Bell's inequality with entangled pairs of neutral atoms by Wenjamin Rosenfeld, Markus Weber, Juergen Volz, Florian Henkel, Michael Krug, Adan Cabello, Marek Zukowski, Harald Weinfurter (2009).

I personally think addressing both simultaneously is unnecessary but there are a lot of folks that feel this is the best way to put the issue behind us once and for all.
 
  • #10
DrChinese said:
Well now you are moving into very strange territory. You are proposing a new property of light heretofor unknown (i.e. your "interference"), and it doesn't show up anywhere else except in this experiment. And notice how this no longer agrees with QM, which is the point of Bell. But you are correct that it requires the locality assumption for this particular experiment, even though local effects have already been ruled out as a possibility in other experiments (Weihs et al).
Ha, where do you see me saying that it requires the locality assumption for this particular experiment?
I do not say that. And since you brought up the subject of Rowe's experiment, I didn't ask this question before, but now I feel that I have to: how familiar are you with the setup of Rowe's experiment? I have the feeling that we are talking about two different experiments.

You are ascribing to me a reference to an unknown property of light (i.e. my "interference"). Probably you think that "my interference" happens over a distance. I am describing no such thing. I am referring to a perfectly well-known property of light that occurs when two wavefunctions occupy the same place. In case you think no such thing is happening in this particular experiment, I will quote the paper:
"The state of an ion, |↓> or |↑>, is determined by probing the ion with circularly polarized light from a "detection" laser beam. During this detection pulse, ions in the |↓> or bright state scatter many photons, and on average about 64 of these are detected with a photomultiplier tube, while ions in the |↑> or dark state scatter very few photons. For two ions, three cases can occur: zero ions bright, one ion bright, or two ions bright. In the one-ion-bright case it is not necessary to know which ion is bright because the Bell's measurement requires only knowledge of whether or not the ions' states are different. The three cases are distinguished from each other with simple discriminator levels in the number of photons collected with the phototube."
From this quote you can see that there is a joint measurement of both ions, and the photons can at least interfere inside the detector.

If you prefer the QM formalism, there is an alternative explanation:
As the ions scatter photons, they become entangled with the scattered photon wavefunctions. When a joint measurement of the two photon wavefunctions from the two ions is performed, this results in entanglement transfer, and the two ions that were not entangled previously become entangled.
 
  • #11
DrChinese said:
2. I recognize that all photons will not be detected. Nothing strange about that. The question is: how does a photon "know" not to be detected if its results will contradict QM? You haven't explained that at all, you are simply asserting it is possible. It can't be their polarization alone, as photons can be seen equally well regardless of rotation. It cannot be that they are entangled, because local realistic theories do not feature entanglement as an attribute.
Which branch a photon goes down is not determined by the value of its hidden variable, so it can go down either branch. But if it appears in the "wrong" branch, it cannot be detected.

DrChinese said:
The most important question to me for your assertion: what is the "true" correlation rate that would occur if ALL photons were detected? Is there any evidence for or against that rate?
In the supposed case where all photons are detected, there is no correlation at all.
But I have to warn that this is an imaginary situation, because the maximum detection probability for a photon in a wrong branch is 0%. A more realistic situation would be one where 50% of the photons are detected.

DrChinese said:
For example: at theta=60 degrees the QM predicted correlation rate is .2500. The local realistic value is .3333. The difference is .0833. Is that the value you assert is NOT being detected because it would mess up the QM prediction? In other words, if you are asserting something, you had better see it all the way through. I would like to know your specific prediction of the correct true correlation rate - if all photon pairs were detected - at 60 degrees... an actual number, please.
I will try to give you an answer to this question too, but a bit later.
 
  • #12
zonde said:
Ha, where do you see me saying that it requires the locality assumption for this particular experiment?
...

Perhaps we were talking about 2 different things. I thought we were talking about Rowe. Anyway, with the proposed new experiment per the reference I provided above, I think this should answer the remaining issues nicely.
 
  • #13
DrChinese said:
Perhaps we were talking about 2 different things. I thought we were talking about Rowe. Anyway, with the proposed new experiment per the reference I provided above, I think this should answer the remaining issues nicely.
The future, of course, is the ultimate judge.
Thanks, then, for providing the opposition.
 

1. What is wave function collapse?

Wave function collapse is a phenomenon in quantum mechanics where a particle's wave function, which represents its probability of being in different states, is reduced to a single state upon measurement. This means that the particle's exact location or other properties are determined at the time of measurement, rather than existing in multiple possible states simultaneously.

2. How does entanglement occur?

Entanglement is a phenomenon that occurs when two or more particles are connected in such a way that their states are dependent on each other, even when separated by large distances. This means that measuring the state of one particle will immediately affect the state of the other particle, regardless of the distance between them.
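A minimal numerical sketch of this (a textbook illustration, not tied to any experiment in particular): simulating computational-basis measurements on the Bell state (|00> + |11>)/sqrt(2) shows that the two results always match, even though each individual result is random:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bell state (|00> + |11>)/sqrt(2): amplitudes over basis states 00,01,10,11
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(state) ** 2  # Born rule: probability of each joint outcome

# Sample many joint measurements of both qubits
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)
print(set(outcomes))  # only matching results ever occur
```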

3. What is the role of wave function collapse and entanglement in quantum computing?

Wave function collapse and entanglement play crucial roles in quantum computing. Entanglement lets qubits be correlated in ways classical bits cannot, which underlies algorithms and protocols such as quantum teleportation, while wave function collapse is what happens when results are read out: measurement projects the qubits onto definite values, which is why measurements are usually deferred to the end of a quantum computation.

4. Can wave function collapse and entanglement occur on a macroscopic scale?

While these phenomena are typically observed on a microscopic scale, research has shown that they can also occur on a macroscopic scale under carefully controlled conditions. The theory of decoherence explains why such quantum effects are normally so hard to observe at larger scales: interaction with the environment rapidly suppresses them.

5. What are the implications of wave function collapse and entanglement for our understanding of reality?

The concept of wave function collapse and entanglement challenges our traditional understanding of reality, as they suggest that particles can exist in multiple states at once and can be connected in ways that defy classical physics. These phenomena have led to ongoing debates and research about the nature of reality and the fundamental principles that govern our universe.
