Is action at a distance possible as envisaged by the EPR paradox?

  • #901
ThomasT said:
I'm just trying to understand the correlation between the angular difference in the polarizer settings and coincidental detection. It doesn't 'seem' mysterious. That is, the optics principle that applies to the observed coincidence count when both polarizers are on one side, would seem to be applicable when there's one polarizer on each side.

Except that it doesn't apply to ordinary streams of identically polarized photon pairs coming from a PDC crystal. According to your ideas, it should. You would predict Entangled State stats (cos^2 theta) and you instead see Product State (.5*(.5+cos^2 theta)). So your prediction is flat out incorrect.
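
To make the difference concrete, here is a minimal Monte Carlo sketch (my own illustration, assuming ideal two-channel polarizers and perfect detection; the variable names are just mine) of the two predictions at a 30 degree relative angle:

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = np.radians(30.0)              # relative polarizer angle (a - b)

# Product state: each pair shares one definite random polarization lam, and
# each photon passes its own polarizer with Malus-law probability cos^2.
lam = rng.uniform(0, np.pi, n)
pass_a = rng.random(n) < np.cos(0.0 - lam) ** 2
pass_b = rng.random(n) < np.cos(theta - lam) ** 2
print(np.mean(pass_a == pass_b))      # ~0.625 = .5*(.5 + cos^2 theta)

# Entangled State prediction for the same relative angle:
print(np.cos(theta) ** 2)             # 0.75

The two only agree at 45 degrees, where both give 0.5; at every other relative angle the predictions differ.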

I reiterate: where is your REFERENCE?
 
  • #902
ThomasT said:
We're talking about optical Bell tests, right? I think I phrased it as the 'coincidental photon flux'.
Please review the context of this exchange. I said "Well, can you present your local optical explanation in detail, either here or on a new thread?" and you replied "It isn't 'my' optical explanation. It's optics." Naturally I took this to imply you thought there could be a local optical explanation for the cos^2(a-b) statistics seen in entanglement experiments, perhaps one involving Malus' law (which can indeed be derived from classical electromagnetism, a completely local theory). Did I misunderstand? Are you not claiming that the statistics can be explained in terms of local properties that travel along with the two beams (or the individual photons) that were assigned to them by the source?
ThomasT said:
In certain optical Bell tests the coincidence rate is proportional to cos^2(a-b). From your knowledge of quantum optics, do you think that that indicates or requires that something nonlocal or ftl is happening in those experiments?
It does require that if we adopt a "realist" view of the type I discussed in post #101 of the Understanding Bell's Logic thread, and if we assume each measurement has a single unique outcome (so many-worlds type explanations are out), and if the experiment meets various observable requirements like sufficiently high detector efficiency and a spacelike separation between measurements.
JesseM said:
The joint probability P(AB) is not being modeled as the product of P(A)*P(B) by Bell's equation. Do you disagree?
ThomasT said:
My thinking has been that it reduces to that.
Yes, of course I disagree; you're just totally misunderstanding the most basic logic of the proof, which assumes a perfect correlation between A and B whenever both experimenters choose the same detector setting. It's really rather galling that you make all these confident-sounding claims about Bell's proof being flawed when you fail to understand something so elementary about it! Could you maybe display a tiny bit of intellectual humility and consider the possibility that it might not be that the proof itself is flawed and that you've spotted a flaw that thousands of smart physicists have missed over the years, but that it might instead be that you are misunderstanding some aspects of the proof?

If you want an example where two variables are statistically dependent in their marginal probabilities but statistically independent when conditioned on some other variable, you might consider the numerical example I provided in this post, where P(T,U) is not equal to P(T)*P(U) but P(T,U|V) is equal to P(T|V)*P(U|V):
in your example there seem to be two measured variables, T which can take two values {received treatment A, received treatment B} and another one, let's call it U, which can also take two values {recovered from disease, did not recover from disease}. Then there is also a hidden variable we can call V, which can take two values {large kidney stones, small kidney stones}. In your example there is a marginal correlation between variables T and U, but there is still a correlation (albeit a different correlation) when we condition on either of the two specific values of V. So, let me modify your example with some different numbers. Suppose 40% of the population have large kidney stones and 60% have small ones. Suppose those with large kidney stones have an 0.8 chance of being assigned to group A, and an 0.2 chance of being assigned to group B. Suppose those with small kidney stones have an 0.3 chance of being assigned to group A, and an 0.7 chance of being assigned to B. Then suppose that the chances of recovery depend only on whether one had large or small kidney stones and are not affected either way by what treatment one received, so P(recovers|large kidney stones, treatment A) = P(recovers|large kidney stones), etc. Suppose the probability of recovery for those with large kidney stones is 0.5, and the probability of recovery for those with small ones is 0.9. Then it would be pretty easy to compute P(treatment A, recovers, large stones)=P(recovers|treatment A, large stones)*P(treatment A, large stones)=P(recovers|large stones)*P(treatment A, large stones)=P(recovers|large stones)*P(treatment A|large stones)*P(large stones) = 0.5*0.8*0.4=0.16. Similarly P(treatment A, doesn't recover, small stones) would be P(doesn't recover|small stones)*P(treatment A|small stones)*P(small stones)=0.1*0.3*0.6=0.018, and so forth.

In a population of 1000, we might then have the following numbers for each possible combination of values for T, U, V:

1. Number(treatment A, recovers, large stones): 160
2. Number(treatment A, recovers, small stones): 162
3. Number(treatment A, doesn't recover, large stones): 160
4. Number(treatment A, doesn't recover, small stones): 18
5. Number(treatment B, recovers, large stones): 40
6. Number(treatment B, recovers, small stones): 378
7. Number(treatment B, doesn't recover, large stones): 40
8. Number(treatment B, doesn't recover, small stones): 42

If we don't know whether each person has large or small kidney stones, this becomes:

1. Number(treatment A, recovers) = 160+162 = 322
2. Number(treatment A, doesn't recover) = 160+18 = 178
3. Number(treatment B, recovers) = 40+378 = 418
4. Number(treatment B, doesn't recover) = 40+42=82

So here, the data shows that of the 500 who received treatment A, 322 recovered while 178 did not, and of the 500 who received treatment B, 418 recovered and 82 did not. There is a marginal correlation between receiving treatment B and recovery: P(treatment B, recovers)=0.418, which is larger than P(treatment B)*P(recovers)=(0.5)*(0.74)=0.37. But if you look at the correlation between receiving treatment B and recovery conditioned on large kidney stones, there is no conditional correlation: P(treatment B, recovers|large stones) = P(treatment B|large stones)*P(recovers|large stones) [on the left side, there are 400 people with large stones and only 40 of these who also received treatment B and recovered, so P(treatment B, recovers|large stones) = 40/400 = 0.1; on the right side, there are 400 with large stones but only 80 of these received treatment B, so P(treatment B|large stones)=80/400=0.2, and there are 400 with large stones and 200 of those recovered, so P(recovers|large stones)=200/400=0.5, so the product of these two probabilities on the right side is also 0.1]. The same would be true if you conditioned treatment B + recovery on small kidney stones, or if you conditioned any other combination of observable outcomes (like treatment A + no recovery) on either large or small kidney stones.
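
If you want to check that arithmetic yourself, here is a short script (just a transcription of the example above into code) confirming the marginal dependence and the conditional independence:

# Counts for each (treatment, recovered, stone size) combination, total 1000.
N = {('A', True, 'large'): 160, ('A', True, 'small'): 162,
     ('A', False, 'large'): 160, ('A', False, 'small'): 18,
     ('B', True, 'large'): 40, ('B', True, 'small'): 378,
     ('B', False, 'large'): 40, ('B', False, 'small'): 42}
total = sum(N.values())

def P(pred):
    # Probability of the event selected by pred, a test on (T, U, V) triples.
    return sum(n for k, n in N.items() if pred(k)) / total

# Marginal: P(treatment B, recovers) != P(treatment B)*P(recovers)
print(P(lambda k: k[0] == 'B' and k[1]))              # 0.418
print(P(lambda k: k[0] == 'B') * P(lambda k: k[1]))   # 0.5*0.74 = 0.37

# Conditioned on large stones, the dependence disappears:
pL = P(lambda k: k[2] == 'large')                     # 0.4
print(P(lambda k: k[0] == 'B' and k[1] and k[2] == 'large') / pL)   # 0.1
print((P(lambda k: k[0] == 'B' and k[2] == 'large') / pL)
      * (P(lambda k: k[1] and k[2] == 'large') / pL))               # 0.2*0.5 = 0.1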
If you like I could also show you how something similar would be true in my scratch lotto example, which is even more directly analogous to the situation being considered in Aspect-type experiments.
ThomasT said:
If you don't think that's ok, then how would you characterize the separability of the joint state (ie., the expression of locality) per Bell's (2)?
I would characterize it in terms of A and B being statistically independent only when conditioned on the value of the hidden variable λ. They are clearly not statistically independent otherwise, and Bell makes it explicit that he assumes there is a perfect correlation between their values when both experimenters choose the same detector setting. For example, in the introduction to the original paper he says:
Since we can predict in advance the result of measuring any chosen component of σ₂, by previously measuring the same component of σ₁, it follows that the result of any such measurement must actually be predetermined.
Likewise in http://cdsweb.cern.ch/record/142461/files/198009299.pdf Bell writes on p. 11:
Let us summarize once again the logic that leads to the impasse. The EPRB correlations are such that the result of the experiment on one side immediately foretells that on the other, whenever the analyzers happen to be parallel. If we do not accept the intervention on one side as a causal influence on the other, we seem obliged to admit that the results on both sides are determined in advance anyway, independent of the intervention on the other side, by signals from the source and by the local magnet setting.
If the value of measurement A "immediately foretells" the value of measurement B when the settings on both sides are the same, that means there's a perfect correlation between the value of A and the value of B when conditioned only on the fact that both sides used the same setting (and not conditioned on any hidden variables)--do you disagree?
 
  • #903
ThomasT said:
I'm just trying to understand the correlation between the angular difference in the polarizer settings and coincidental detection. It doesn't 'seem' mysterious.
Well, you are going to have to do some precise reasoning rather than just relying on feelings about how things "seem" if you want to understand Bell's arguments.
ThomasT said:
That is, the optics principle that applies to the observed coincidence count when both polarizers are on one side, would seem to be applicable when there's one polarizer on each side.
If you are doing an experiment which matches the condition of the Bell inequalities that says each measurement must yield one of two binary results (rather than a continuous range of intensities), then even if "both polarizers are on one side" it would be impossible to reproduce the cos^2 relationship between the angles of the two polarizers in classical optics, despite the fact that Malus' law applies in classical optics. Do you disagree? In post #896 I outlined some ways you might design a detector to yield one of two binary outcomes in an experiment based on classical optics:
You are free to design your detectors in a classical optics experiment so that they can only yield two outcomes rather than a continuous range of intensities--for example, you design it so that if the intensity of the light that made it through the polarizer was above a certain threshold a red light would go off, and if the intensity was at or below that threshold a green light would go off. Likewise you might design the detector so the probability of a red light going off vs. a green light going off would depend on the reduction in intensity as the light went through the polarizer--say if the intensity was reduced by 70%, there'd be a 70% chance the red light would go off and a 30% chance the green light would go off.
But no matter how you design your classical detector to give one of two binary outcomes based on how much light passes through it, you can never get a violation of the Bell inequalities in classical optics.
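
If anyone wants to see that concretely, here is a small simulation (my own sketch, using the probabilistic red/green detector design quoted above; the angles are the standard CHSH choices) that computes the CHSH quantity for such a classical model:

import numpy as np

rng = np.random.default_rng(1)
n = 400_000
lam = rng.uniform(0, np.pi, n)   # shared classical polarization of each pair

def outcome(setting, lam, u):
    # Binary detector from post #896: the transmitted fraction cos^2(offset)
    # sets the probability of the red (+1) vs. green (-1) light.
    frac = np.cos(setting - lam) ** 2
    return np.where(u < frac, 1, -1)

def E(a, b):
    # Correlation, with independent randomness in each local detector.
    A = outcome(a, lam, rng.random(n))
    B = outcome(b, lam, rng.random(n))
    return np.mean(A * B)

a1, a2, b1, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)   # ~1.41 here, and never above 2; QM reaches 2*sqrt(2) ~ 2.83

However you swap in other threshold rules for outcome(), S stays at or below the local bound of 2.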
 
  • #904
JesseM said:
"One to one quantitative correspondence" between what and what? The cos^2 in Malus' law is for the difference between the angle of a polarizer and the polarization angle of a beam hitting it at the same location; the cos^2 in entanglement experiments is for the difference in angles between two polarizers at completely different locations making measurements on different particles. There's no way to use the first cos^2 law to derive the second one, whatever ThomasT may think.
Of course the polarizer is at the same location as the light it acts on in the classical optics case, with no path references for individual photons. That's one of the reasons EPR correlations are sensitive to certain realism claims that standard optics is not. The quantitative correspondence results when you apply the same same-location rules to the photon-polarizer interactions, plus standard conservation requiring photon pairs to be anticorrelated, if you insist on rotational invariance at every level. These two conditions lead to an apparent contradiction between the statistics we invariably measure and the statistics we apparently would have gotten using any other choice of settings besides what we actually measured, if the path through the polarizer was a real property of the photon.


JesseM said:
Why a "greater range of presumptions"? Standard optics can be derived from Maxwell's laws, which is a perfect example of a local realist theory of physics, so Bell's theorem definitely applies to anything in optics (and it's impossible to use classical optics to get a violation of Bell inequalities).
The "greater range of presumptions" allowed is because the standards optics case alone does not have a reference case to question (in principle) how this set of photons would have been detected had we chosen detector settings differently. Maxwell's equations also described fields, leaving the particle behavior more or less out. How would you define the set of all individual variables such a field can define?

Note that the original EPR paper hinged on a conservation law. The randomized polarizations of the emitter alone statistically demand rotational invariance, irrespective of any underlying mechanism. Thus Malus law + conservation + statistical rotational invariance does lead to the same statistical contradictions.

We are left with a *fundamentally* statistical theory in which statistical outcomes can be deterministically predicted in some cases, but which lacks variables that can even in principle explain how the outcomes are predetermined.

JesseM said:
Malus' law is only based on the difference in angle between the beam and the polarizer, so it doesn't get violated depending on how your coordinate system defines the angle of the beam. Are you suggesting otherwise?
EXACTLY! Malus law is dependent only on the angle difference. Now note: a randomly polarized emitter physically must, irrespective of any underlying mechanism or lack thereof, be rotationally invariant. So as long as 3 rules always apply, Malus law, rotational invariance, and conservation law, then BI violations must occur. If rotational invariance applies statistically, as it physically must for a randomly polarized beam, it does not mean the individual events defined by individual detections must also be rotationally invariant, any more than for Malus law.

I'm not entirely convinced by my own arguments, but I would appreciate more than hand-waving at the issues. Now, to argue against this, two things would be acceptable and appreciated:
1) Reject that rotational invariance can be induced simply by randomizing the polarization of the emitter, and explain why.
2) Accepting 1), explain why, if Malus law is dependent solely on angle difference and rotational invariance is dependent solely on randomized polarizations (not individual detection events), it could be expected that EPR correlations should depend on more than just the angle difference between the detector pairs.

You could also try to show that Malus law can be applied to any coordinate rotation rather than just a difference in rotation. You could also try to explain why Malus law + conservation + statistical rotational invariance does not lead to the same statistical contradictions.
 
  • #905
my_wan said:
The quantitative correspondence results when you apply the same same-location rules to the photon-polarizer interactions, plus standard conservation requiring photon pairs to be anticorrelated, if you insist on rotational invariance at every level.
Again, "quantitative correspondence" between what and what? Are you claiming that if we "apply the same same location rules to the photon polarizers interactions plus standard conservation requiring photon pairs be anticorrelated", that uniquely leads us to the statistics seen in QM? If so, what "same location rules" are you talking about, given that Malus' law deals with continuous decreases in intensity rather than the binary fact about whether a photon passes through a polarizer?
my_wan said:
Note that the original EPR paper hinged on a conservation law. The randomized polarizations of the emitter alone statistically demand rotational invariance, irrespective of any underlying mechanism. Thus Malus law + conservation + statistical rotational invariance does lead to the same statistical contradictions.
Well, see above, it's not clear to me what you mean by "Malus law" in the context of detecting individual photons rather than looking at how the intensity of an electromagnetic wave changes when it passes through a polarizer.
 
  • #906
JesseM said:
Again, "quantitative correspondence" between what and what? Are you claiming that if we "apply the same same location rules to the photon polarizers interactions plus standard conservation requiring photon pairs be anticorrelated", that uniquely leads us to the statistics seen in QM? If so, what "same location rules" are you talking about, given that Malus' law deals with continuous decreases in intensity rather than the binary fact about whether a photon passes through a polarizer?
I'm only looking at how a photon responds to the polarizer it comes in contact with, irrespective of what a distant correlated photon does, which requires Malus law in all cases. This also entails that a detector offset from the default photon polarization has some likelihood of passing that photon 'as if' it possessed that polarization, with the odds defined by Malus law. Now to add an assumption: these photons have properties that predetermine how they will respond to any polarizer setting, such that an identical twin photon would have responded the same way to a polarizer with the same arbitrary setting. Opposite for anti-twins. Now, as long as the properties of the photons in the beam are, as a group, randomized such that rotational invariance must be maintained, and Malus law (with relative offsets) is required in all cases, the predeterminism assumption leads to BI violations, irrespective of any other consideration.

Here's the challenge: you CANNOT construct a local deterministic variable set, independent of QM, BI, etc., that respects Malus law for any arbitrary setting and rotational invariance without violating BI. You will invariably be stuck with the same relative offset requirements that Malus law is predicated on. This result obtains without any reference to QM whatsoever. The effect may be local, at the point where photon meets polarizer, but the counterfactual requirement that the photon-polarizer interaction is predetermined in all cases is effectively equivalent to what an anti-twin is doing light years away.

Are you seeing the difficulty here, when the impossibility logic is turned on its head? The same impossibility Bell demonstrated also points to an impossibility of maintaining Malus law without violating BI. Of course, as you noted, Malus law can be derived from Maxwell's equations, which is a classical field theory. So, unless you can deny the validity of the challenge, what does this say about the "reality" of classical fields? Perhaps the deterministic variables are transfinite? I don't know, but if you can successfully reject the validity of the challenge I'll be indebted to you.
 
  • #907
my_wan said:
Are you seeing the difficulty here, when the impossibility logic is turned on its head? The same impossibility Bell demonstrated also points to an impossibility of maintaining Malus law without violating BI. Of course, as you noted, Malus law can be derived from Maxwell's equations, which is a classical field theory. So, unless you can deny the validity of the challenge, what does this say about the "reality" of classical fields?

Yes, this is true (about Malus). However, this has nothing to do with some kind of "challenge" or impossibility. Your logic does not work:

Malus-> Bell Inequality violation
QM-> Malus-> Bell Inequality violation

This is perfectly reasonable.
 
  • #908
DrChinese said:
Yes, this is true (about Malus). However, this has nothing to do with some kind of "challenge" or impossibility. Your logic does not work:

Malus-> Bell Inequality violation
QM-> Malus-> Bell Inequality violation

This is perfectly reasonable.

Not real sure I follow. Not even sure how to guess what you intended to say. My best guess is you're saying "QM-> Malus-> Bell Inequality violation" is "perfectly reasonable", but I still can't grok your intended meaning with any confidence.

If you're summing up what I said as "Malus-> Bell Inequality violation", it's more than a little oversimplified, as are the implicit QM and BI issues. Are you saying that "Malus-> Bell Inequality violation" doesn't work, while "QM-> Malus-> Bell Inequality violation" does?

What I'm saying is that, even if you forget everything you know about QM, and merely try to construct a local realistic variable set that respects Malus law without violating BI, you can't do it. It is you who keeps insisting the similarity between Malus law and QM is an illusion, yet here I am presumptuously interpreting you to say Malus law is a QM law.

Here's an example of what you can't do. Define a set of photons with predefined properties which entails that 50% of all randomly polarized photons are predetermined to pass a polarizer at any angle. Then require the predetermined photon paths to switch paths through the polarizer according to Malus law as you rotate the polarizer. Now try to get this same predefined set of photons to continue honoring Malus law when you pick a pair of counterfactual detector settings in which neither setting is 0. It will not work, and this doesn't even involve correlations, only paths taken by a predefined set of photons through a variable polarizer setting. Without correlation you don't have a uniquely QM phenomenon, yet BI violations persist in simple classical paths through a polarizer.
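
To illustrate with a toy rule (of my own choosing here, simpler than the actual bit field I used): suppose each photon is predetermined to pass iff the polarizer is within 45 degrees of its default polarization. That reproduces the 50% pass rate and rotational invariance for a randomly polarized beam, but fails Malus for a uniformly polarized one:

import numpy as np

rng = np.random.default_rng(2)
n = 200_000
lam = rng.uniform(0, np.pi, n)   # each photon's predefined default polarization

def predetermined_pass(setting, lam):
    # Deterministic rule: pass iff the polarizer is within 45 degrees of the
    # photon's default polarization (angles taken mod 180 degrees).
    d = np.abs((setting - lam + np.pi / 2) % np.pi - np.pi / 2)
    return d < np.pi / 4

# Any fixed setting passes ~50% of a randomly polarized beam:
for s in np.radians([0, 20, 77]):
    print(np.mean(predetermined_pass(s, lam)))   # ~0.5 each

# But for a beam uniformly polarized at 0, transmission vs. angle is a step
# function rather than Malus' cos^2 curve:
for s in np.radians([10, 30, 60, 80]):
    print(np.mean(predetermined_pass(s, np.zeros(n))), np.cos(s) ** 2)
# 1.0 vs 0.97, 1.0 vs 0.75, 0.0 vs 0.25, 0.0 vs 0.03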

Does it now make sense why the QM requirement (I think) you imposed is not necessary for BI? Not even correlations are required, only assumptions of classical paths. Of course Maxwell's equations, in spite of being a classical construct, have no requirement of presuming photon trajectories represent a classical path, due to their field theoretic construction.
 
  • #909
my_wan said:
I'm only looking at how a photon responds to the polarizer it comes in contact with irrespective of what a distant correlated photon does, which requires Malus law in all cases.
How does Malus' law apply to individual photons, though? The classical version of Malus' law requires uniformly polarized light with a known polarization angle; are you talking about a photon that's known to be in a polarization eigenstate for a polarizer at some particular angle? In that case, whatever the angle v of the eigenstate, I think the probability the photon would pass through another polarizer at angle a would be cos^2(v-a). But when entangled photons are generated, would they be in such a known polarization eigenstate? If not, it seems like you wouldn't be able to talk about Malus' law applying to individual members of the pair.
my_wan said:
Of course, as you noted, Malus law can be derived from Maxwell's equations, which is a classical field theory. So, unless you can deny the validity of the challenge, what does this say about the "reality" of classical fields? Perhaps the deterministic variables are transfinite? I don't know, but if you can successfully reject the validity of the challenge I'll be indebted to you.
But with electromagnetic waves in Maxwell's equations there are no probabilities involved; Malus' law just represents a deterministic decrease in intensity. So there's no case where two detectors at different angles a and b have a probability cos^2(a-b) of opposite results, including the fact that they give opposite results with probability 1 with the detectors at the same angle. This is true even if you design the detectors to give one of two possible outputs depending on the decrease in intensity, as I suggested in post #896 to ThomasT. So no violations of BI, and no reason Maxwell's laws can't be understood as a local realist theory, so I'm not sure why you have a problem with the reality of classical fields.
 
  • #910
my_wan said:
Not real sure I follow. Not even sure how to guess what you intended to say. My best guess is you're saying "QM-> Malus-> Bell Inequality violation" is "perfectly reasonable", but I still can't grok your intended meaning with any confidence.

If you're summing up what I said as "Malus-> Bell Inequality violation", it's more than a little oversimplified, as are the implicit QM and BI issues. Are you saying that "Malus-> Bell Inequality violation" doesn't work, while "QM-> Malus-> Bell Inequality violation" does?

What I'm saying is that, even if you forget everything you know about QM, and merely try to construct a local realistic variable set that respects Malus law without violating BI, you can't do it. It is you who keeps insisting the similarity between Malus law and QM is an illusion, yet here I am presumptuously interpreting you to say Malus law is a QM law.

Here's an example of what you can't do. Define a set of photons with predefined properties which entails that 50% of all randomly polarized photons are predetermined to pass a polarizer at any angle. Then require the predetermined photon paths to switch paths through the polarizer according to Malus law as you rotate the polarizer. Now try to get this same predefined set of photons to continue honoring Malus law when you pick a pair of counterfactual detector settings in which neither setting is 0. It will not work, and this doesn't even involve correlations, only paths taken by a predefined set of photons through a variable polarizer setting. Without correlation you don't have a uniquely QM phenomenon, yet BI violations persist in simple classical paths through a polarizer.

Does it now make sense why the QM requirement (I think) you imposed is not necessary for BI? Not even correlations are required, only assumptions of classical paths. Of course Maxwell's equations, in spite of being a classical construct, have no requirement of presuming photon trajectories represent a classical path, due to their field theoretic construction.

OK, ask this: so what if Malus rules out local realistic theories per se? You are trying to somehow imply that is not reasonable. Well, it is.

We don't live in a world respecting BIs, while we do live in one in which Malus is respected. There is no contradiction whatsoever. You are trying to somehow say Malus is classical but it really isn't. It is simply a function of a quantum mechanical universe. So your logic needs a little spit polish.
 
  • #911
JesseM said:
If you are doing an experiment which matches the condition of the Bell inequalities that says each measurement must yield one of two binary results...

This requirement by itself is unreasonable because according to Malus law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) misses the boat right from the start. He should have included a non-detection outcome too.
 
  • #912
(my emphasis)
JesseM said:
... Yes, of course I disagree; you're just totally misunderstanding the most basic logic of the proof, which assumes a perfect correlation between A and B whenever both experimenters choose the same detector setting. It's really rather galling that you make all these confident-sounding claims about Bell's proof being flawed when you fail to understand something so elementary about it! Could you maybe display a tiny bit of intellectual humility and consider the possibility that it might not be that the proof itself is flawed and that you've spotted a flaw that thousands of smart physicists have missed over the years, but that it might instead be that you are misunderstanding some aspects of the proof?


JesseM, thank you so very much for these very intelligent and well expressed words! You’ve hit the nail on the head! THANKS!

And I can guarantee you that you are not the only one exasperated by ThomasT’s general attitude.
 
  • #913
DrChinese said:
OK, ask this: so what if Malus rules out local realistic theories per se? You are trying to somehow imply that is not reasonable. Well, it is.

We don't live in a world respecting BIs, while we do live in one in which Malus is respected. There is no contradiction whatsoever. You are trying to somehow say Malus is classical but it really isn't. It is simply a function of a quantum mechanical universe. So your logic needs a little spit polish.

The claim that my logic needs a little spit polish is absolutely valid. That's part of why I debate these issues here, to articulate my own thinking on the matter more clearly.

I'm not making claims about what is or isn't reasonable. Here's the thing: so long as the terms "local" and "realistic" are well defined to be restricted to X and Y conceptions of those terms, I have no issue with the claim that the X and/or Y conceptions are invalid. To generalize that claim as evidence that all conceptions of those terms are likewise invalid is technically dishonest. Reasonable? Perhaps, but also presumptuous. It may not rise to the level of presumptuousness that realists in general tend toward, but I find it no less distasteful. It would likewise be a disservice to academically lock those terms in as representative of certain singular conceptions.

Then there is a more fundamental theoretical issue. Correctly identifying the issues leading to certain empirical results can play an essential role in extending theory in unpredictable ways. To simply imply that Malus alone leading to BI violations is a vindication of some status quo interpretation is unwarranted. Your conviction is misplaced. Often the point of questioning the reason X is false is not to establish a claim of its truth value, but to gain insight into a wider range of questions. Teachers who would respond to enumerations of all the possible reasons something was false with "but the point is that it's false" missed the point entirely.

So, in spite of the justifications implied by the Malus consequences lacking contradiction, what does it indicate when a preeminent classical theoretical construct predicts a consequence that violates BI? It certainly does NOT indicate an error in ruling out realism as narrowly defined by EPR and Bell. It does, in fact, question the generalization of BI to a broader range of conceptions of realism. It also directly brings into question the relevance of nonlocality, when the polarizer path version, resulting from a classical field construct, doesn't even have an existential partner to correlate with. The difficulties might even be a mathematical decomposition limitation, something akin to Gödel, maybe? It also raises the question: if Maxwell's equations can produce a path version of BI violations, what besides quanta and the Born rule is fundamentally unique to QM, notwithstanding the claim of being fundamental?

I don't have the answers, but I'm not going to restrict myself to BI tunnel vision either.
 
  • #914
billschnieder said:
This requirement by itself is unreasonable because according to Malus law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) misses the boat right from the start. He should have included a non-detection outcome too.
I'm pretty sure that in experiments with entangled photons, there is a difference between non-detection and not making it through the polarizer. For example, if you look at the description and diagram of a "typical experiment" in this section of the wikipedia article on Bell test loopholes, apparently if the two-channel polarizer like "a" doesn't allow a photon through to be detected at detector D+, it simply deflects it at a different angle to another detector D-, so the photon can be detected either way.
 
  • #915
Had to get some sleep after that last post, I go too long sometimes.
JesseM said:
How does Malus' law apply to individual photons, though? The classical version of Malus' law requires uniformly polarized light with a known polarization angle; are you talking about a photon that's known to be in a polarization eigenstate for a polarizer at some particular angle? In that case, whatever the angle v of the eigenstate, I think the probability the photon would pass through another polarizer at angle a would be cos^2(v-a). But when entangled photons are generated, would they be in such a known polarization eigenstate? If not, it seems like you wouldn't be able to talk about Malus' law applying to individual members of the pair.
Yes, I'll restate this again. These are the statistics I have verified as an exact match with Malus law, expressed solely in terms of intensity, for all pure or mixed polarization state beams, as well as cases of passage through arbitrary multiple polarizers. Malus consistency is predicated on modeling intensity, but it even perfectly models EPR correlations with the caveats given. It's a straight-up assumption that individual photons, randomly polarized as a group or not, have a very definite default polarization. The default polarization is unique only in that it is the only polarization at which a polarizer with a matching setting effectively has a 100% chance of passing that photon. The odds of any given photon passing a polarizer that is offset from that default are defined by a straight-up assumption that a ∆I (intensity) constitutes a ∆p (odds), i.e., for any arbitrary theta offset from the individual photon's default, ∆I = ∆p.
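
As a minimal sketch of the ∆I = ∆p rule (my own stripped-down encoding, not the full bit-field model, and with the standard extra assumption that a passed photon takes on the polarizer's angle as its new default):

import numpy as np

rng = np.random.default_rng(3)
n = 500_000

def polarizer(setting, lam, alive):
    # Per-photon rule: pass with probability cos^2(setting - lam), i.e. dI = dp,
    # and a passed photon takes the polarizer's angle as its new default.
    p = np.cos(setting - lam) ** 2
    passed = alive & (rng.random(len(lam)) < p)
    return np.where(passed, setting, lam), passed

# A beam uniformly polarized at 0, through polarizers at 30 then 75 degrees:
lam = np.zeros(n)
alive = np.ones(n, dtype=bool)
lam, alive = polarizer(np.radians(30), lam, alive)
print(np.mean(alive), np.cos(np.radians(30)) ** 2)          # ~0.75 both
lam, alive = polarizer(np.radians(75), lam, alive)
print(np.mean(alive), 0.75 * np.cos(np.radians(45)) ** 2)   # ~0.375 both

This recovers the standard Malus intensity chain cos^2(30)*cos^2(45) for stacked polarizers, using nothing but per-photon odds.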

JesseM said:
But with electromagnetic waves in Maxwell's equations there are no probabilities involved; Malus' law just represents a deterministic decrease in intensity. So there's no case where two detectors at different angles a and b have a probability cos^2(a-b) of opposite results, including the fact that they give opposite results with probability 1 with the detectors at the same angle. This is true even if you design the detectors to give one of two possible outputs depending on the decrease in intensity, as I suggested in post #896 to ThomasT. So no violations of BI, and no reason Maxwell's laws can't be understood as a local realist theory, so I'm not sure why you have a problem with the reality of classical fields.
Yes, and neither do Maxwell's equations explicitly recognize the particle aspect of photons, so they lacked any motivation for assigning probabilities to individual photons. But as noted, ∆I = ∆p perfectly recovers the proper Malus-predicted intensities in all pure and mixed state beams, as well as ∆I in stacked (series) polarizer cases. The EPR case is a parallel case involving anti-twins.

At no point, in modeling Malus intensities or EPR correlations, did I use cos^2(a-b), where a and b represented different polarizers. I've already argued with ThomasT over this point. I used cos^2(theta), where theta is defined solely as the offset of the polarizer relative to the photon that actually comes in contact with that polarizer. In fact, since the binary bit field that predetermined outcomes already had cos^2(theta) statistically built in, I didn't even have to calculate cos^2(theta) at detection time. I merely did a linear one-to-one count into the bit field defined by theta (between the polarizer and the photon that actually came in contact with it) alone, took that bit, and passed the photon if it was a 1, diverted it if 0. I used precisely the same rules, in the computer models, in both the Malus intensity and parallel EPR cases, and only compared EPR correlations after the fact.

To clarify, the bit field I used to predetermine outcomes, to computer model both Malus intensities and EPR parallel cases, set a unique bit for each offset, such that a photon that passed at a given offset could sometimes fail at a lesser offset. The difficulties arose only in the EPR modeling case, which successfully modeled BI-violating correlations only when one or the other detector was defined as 0. Yet uniformly rotating the default polarizations of the randomly polarized beam changed which individual photons took which path, while having no effect whatsoever on the BI-violating statistics. The EPR case, like the Malus intensities, was limited to relative coordinates only wrt detector settings. Absolute values didn't work, in spite of the beam rotation statistical invariance with individual photon path variance.

Maxwell's equations didn't assign a unique particulate identity to individual photons. Yet, if you consider classically distinct paths as a real property of a particulate photon (duality), you can construct BI violations from the path assumptions required to model Malus law. It's easy to hand-wave away when we're talking about a local path through a local polarizer, until you think about it. In fact the path version of BI violation only becomes an issue when you require absolute arbitrary coordinates, rather than relative offsets, to model it. The same issue that is the sticking point in modeling EPR BI violations.
 
  • #916
I was looking over DrC's Frankenstein particles and read the wiki page JesseM linked:
http://en.wikipedia.org/wiki/Loopholes_in_Bell_test_experiments
where it mentions a possible failure of rotational invariance, and realized it should be possible, if the Malus law assumptions I used are valid, to construct a pair of correlated beams that explicitly fails rotational invariance.

I need to think through it more carefully, but it would involve using a PBS to split both channels from the source emitter, using a shutter to selectively remove some percentage of both correlated pairs of a certain polarization, and recombining the remainder. Similar to DrC's Frankenstein setup. Done such that all remaining photons after recombination should have a remaining anti-twin which, as a group, has a statistically preferred polarization. The only photons that can be defined as observed, and their partners, are no longer present in the beam to affect the correlation statistics. Then observe the effects on correlation statistics at various common detector settings and offsets. The possible variations of this are quite large.

Edit: A difficulty arises when you consider that while photons are being shuttered on one side of the polarized beam, before recombination, any photon that takes the other path at that time can be considered detected. Non-detection can sometimes qualify as a detection in QM.
 
  • #917
JesseM said:
... it might instead be that you are misunderstanding some aspects of the proof ...
Of course, that's why I'm still asking questions.

JesseM said:
The joint probability P(AB) is not being modeled as the product of P(A)*P(B) by Bell's equation.

ThomasT said:
My thinking has been that it reduces to that. Just a stripped down shorthand for expressing Bell's separability requirement. If you don't think that's ok, then how would you characterize the separability of the joint state (ie., the expression of locality) per Bell's (2)?

JesseM said:
I would characterize it in terms of A and B being statistically independent only when conditioned on the value of the hidden variable λ. They are clearly not statistically independent otherwise ...

JesseM said:
A and B are not independent in their marginal probabilities (which determine the actual observed frequencies of different measurement outcomes), only in their probabilities conditioned on λ.


Ok, you seem to be saying that P(AB)=P(A) P(B) isn't an analog of Bell's (2). You also seem to be saying that Bell's (2) models A and B as statistically independent for all joint settings except (a-b)=0. Is this correct?

My thinking has been that Bell's(2) is a separable representation of the entangled state, and that this means that it models the joint state in a factorable form. Is this correct? If so, then is this the explicit expression of Bell locality in Bell's (2)?
 
  • #918
JesseM, please correct me if I’m wrong, but haven’t you already answered the question above perfectly clearly??

JesseM said:
... Yes, and this was exactly the possibility that Bell was considering! If you don't see this, then you are misunderstanding something very basic about Bell's reasoning. If A and B have a statistical dependence, so P(A|B) is different than P(A), but this dependence is fully explained by a common cause λ, then that implies that P(A|λ) = P(A|λ,B), i.e. there is no statistical dependence when conditioned on λ. That's the very meaning of equation (2) in Bell's original paper, that the statistical dependence which does exist between A and B is completely determined by the state of the hidden variables λ, and so the statistical dependence disappears when conditioned on λ. Again, please tell me if you disagree with this.


Could we also make it so simple that even a 10-year-old can understand, by stating:

Bell's(2) is not about entanglement, Bell's(2) is only about the Hidden variable λ.​
 
  • #919
ThomasT said:
Of course, that's why I'm still asking questions.
I'm glad you're still asking questions, but if you don't really understand the proof, and you do know it's been accepted as valid for years by mainstream physicists, doesn't it make sense to be a little more cautious about making negative claims about it like this one from an earlier post?
ThomasT said:
I couldn't care less if nonlocality or ftl exist or not. In fact, it would be very exciting if they did. But the evidence just doesn't support that conclusion.
On to the topic of probabilities:
ThomasT said:
Ok, you seem to be saying that P(AB)=P(A) P(B) isn't an analog of Bell's (2). You also seem to be saying that Bell's (2) models A and B as statistically independent for all joint settings except (a-b)=0. Is this correct?
No. In any Bell test, the marginal probability of getting either of the two possible results (say, spin-up and spin-down) should always be 0.5, so P(A)=P(B)=0.5. But if you're doing an experiment where the particles always give identical results with the same detector setting, then if you learn the other particle gave a given result (like spin-up) with detector setting b and you're using detector setting a, then the conditional probability your particle gives the same result is cos^2(a-b). So if A and B are identical results, in this case P(AB)=P(B)*P(A|B)=0.5 * cos^2(a-b), so as long as a and b have an angle between them that's something other than 45 degrees (since cos^2(45) = 0.5), P(AB) will be different than P(A)*P(B), and there is a statistical dependence between them.
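To put numbers on that, assuming the ideal cos^2 statistics:

import numpy as np

# Joint probability of identical results vs. the product of the marginals:
for deg in [0, 22.5, 45, 67.5, 90]:
    PAB = 0.5 * np.cos(np.radians(deg)) ** 2
    print(deg, round(PAB, 4), 0.5 * 0.5)   # equal only at 45 degrees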
ThomasT said:
My thinking has been that Bell's(2) is a separable representation of the entangled state, and that this means that it models the joint state in a factorable form. Is this correct? If so, then is this the explicit expression of Bell locality in Bell's (2)?
It shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).
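
Here is a toy local model with exactly that structure (my own illustration: λ is a shared random polarization and each side applies the same deterministic local rule, so results agree whenever the settings are equal):

import numpy as np

rng = np.random.default_rng(4)
n = 400_000
lam = rng.uniform(0, np.pi, n)   # hidden variable shared by each pair

def result(setting, lam):
    # Outcome depends only on the local setting and lam, so A and B are
    # perfectly correlated whenever a = b.
    d = np.abs((setting - lam + np.pi / 2) % np.pi - np.pi / 2)
    return np.where(d < np.pi / 4, 1, -1)

a, b = np.radians(0), np.radians(30)
A, B = result(a, lam), result(b, lam)

# Marginally, A and B are statistically dependent:
print(np.mean((A == 1) & (B == 1)))        # ~0.333
print(np.mean(A == 1) * np.mean(B == 1))   # 0.25

# Conditioned on a narrow slice of lam, the dependence disappears (trivially,
# since the outcomes are deterministic functions of lam):
sel = np.abs(lam - np.radians(10)) < 0.01
print(np.mean((A[sel] == 1) & (B[sel] == 1)),
      np.mean(A[sel] == 1) * np.mean(B[sel] == 1))   # both 1.0

Of course, per the theorem, no such model can also reproduce the full cos^2(a-b) statistics; this one only illustrates the conditional-independence structure of equation (2).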
 
  • #920
my_wan said:
The claim that my logic needs a little spit polish is absolutely valid. That's part of why I debate these issues here, to articulate my own thinking on the matter more clearly.

That is why I participate too. :smile:
 
  • #921
billschnieder said:
This requirement by itself is unreasonable because according to Malus law, it is normal to expect that some photons will not go through the polarizer. Therefore Bell's insistence on having only binary outcomes (+1, -1) misses the boat right from the start. He should have included a non-detection outcome too.

Not with beam splitters! (Non-detection is not an issue ever, as we talk about the ideal case. Experimental tests must consider this.)
 
  • #922
I want to make an important statement that counters the essence of some of the arguments being presented about "non-detections" or detector efficiency.

A little thought will tell you why this is not much of an issue. If we have a particle pair A, B and we send them through a beam splitter with detectors at both output ports, we should end up with one of the following 4 cases:

1. A not detected, B not detected.
2. A detected, B not detected.
3. A not detected, B detected.
4. A detected, B detected.

However we won't actually know when case 1 occurs, correct? But unless the chance of case 1 is substantially greater than either 2 or 3 individually (and probability logic indicates it should be definitely less - can you see why?), we can estimate it. If case 4 occurs 50% of the time or more, then case 1 should occur less than 10% of the time. This becomes essentially a vanishing number as visibility approaches 90%: cases 2 and 3 are then each happening only about 1 in 10 times, which would imply a case 1 rate of about 1%.

So you have got to claim all of the "missing" photons are carrying the values that would prove a different result. And this number is not much. I guess it is *possible* if there is a physical mechanism which is responsible for the non-detections, but that would also make it experimentally falsifiable. But you should be aware of how far-fetched this really is. In other words, in actual experiments cases 2 and 3 don't occur very often. Which places severe constraints on case 1.
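
A quick check of that arithmetic, assuming (as an idealization) that the two sides fail to detect independently with the same per-photon efficiency p:

# Case probabilities under independent per-side detection efficiency p.
for p in [0.9, 0.707]:
    case4 = p * p                  # both detected
    case23 = 2 * p * (1 - p)       # exactly one side detected (cases 2 + 3)
    case1 = (1 - p) ** 2           # neither detected (the unobservable case)
    print(p, round(case4, 3), round(case23, 3), round(case1, 3))
# p = 0.9:   0.81 / 0.18 / 0.01  (visibility ~90%, case 1 about 1%)
# p = 0.707: case4 ~ 0.5 and case1 ~ 0.086, i.e. under 10% as claimed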
 
  • #923
JesseM said:
It shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).

This must mean that my "10-year-old explanation" is correct, and hopefully this information can even make sense to ThomasT:

Bell's(2) is not about entanglement, Bell's(2) is only about the Hidden variable λ.​

JesseM said:
No. In any Bell test, the marginal probability of getting either of the two possible results (say, spin-up and spin-down) should always be 0.5, so P(A)=P(B)=0.5. But if you're doing an experiment where the particles always give identical results with the same detector setting, then if you learn the other particle gave a given result (like spin-up) with detector setting b and you're using detector setting a, then the conditional probability your particle gives the same result is cos^2(a-b). So if A and B are identical results, in this case P(AB)=P(B)*P(A|B)=0.5 * cos^2(a-b), so as long as a and b have an angle between them that's something other than 45 degrees (since cos^2(45) = 0.5), P(AB) will be different than P(A)*P(B), and there is a statistical dependence between them.

Could we make a 'simplification' of this also, and say:

According to QM predictions, everything depends on the relative angle between the polarizers a & b. If measured parallel (0º) or perpendicular (90º), the outcome is perfectly correlated/anticorrelated. In any other case, it’s statistically correlated through the QM prediction cos^2(a-b).

Every outcome on every angle is perfectly random for A & B, with one 'exception' for parallel and perpendicular, where the outcome for A must be perfectly correlated/anticorrelated to B (still individually perfectly random).​

Correct?
 
  • #924
DrChinese said:
... However we won't actually know when case 1 occurs, correct?

DrC, could you help me out? I must be stupid...

If we use a beam splitter, then we always get a measurement, unless something goes wrong. Doesn’t this mean we would know that case 1 has occurred = nothing + nothing?? :rolleyes:
 
  • #925
DevilsAvocado said:
DrC, could you help me out? I must be stupid...

If we use a beam splitter, then we always get a measurement, unless something goes wrong. Doesn’t this mean we would know that case 1 has occurred = nothing + nothing?? :rolleyes:

That is the issue everyone is talking about, except it really doesn't fly. Normally, and in the ideal case, a photon going through a beam splitter comes out either the H port or the V port. Hypothetically, photon Alice might never emerge in line, or might not trigger the detector, or something else goes wrong. So neither of the 2 detectors for Alice fires. Let's say that happens 1 in 10 times, and we can see it happening because one of the Bob detectors fires.

Ditto for Bob. There might be a few times in which the same thing happens on an individual trial for both Alice and Bob. If usual probability laws are applied, you might expect something like the following:

Neither detected: 1%.
Alice or Bob not detected, but not both: 18%.
Alice and Bob both detected: 81%.

I would call this a visibility of about 90%, which is about where things are in experiments these days. But you cannot say FOR CERTAIN that case 1 only occurs 1% of the time; you must estimate using an assumption. But if you *assert* that case 1 occurs a LOT MORE OFTEN than 1% and you STILL have a ratio of 81% to 18% per above (as these are experimentally verifiable of course), then you have a lot of explaining to do.

And you will need all of it to make a cogent argument to that effect, since any explanation will necessarily be experimentally falsifiable. The only way you get around this is NOT to supply an explanation, which is the usual path when this argument is raised. So visibility is a function of everything involved in the experiment, and it is currently very high, I think in the 85%+ range, but it varies from experiment to experiment. I haven't seen good explanations of how it is calculated or I would provide a reference. Perhaps someone else knows a good reference on this.
 
  • #926
DevilsAvocado said:
This must mean that my "10-year-old explanation" is correct, and hopefully this information can even make sense to ThomasT:

Bell's(2) is not about entanglement, Bell's(2) is only about the Hidden variable λ.​
Basically I'd agree, although I'd make it a little more detailed: (2) isn't about entanglement, it's about the probabilities for different combinations of A and B (like A=spin-up and B=spin down) for different combinations of detector settings a and b (like a=60 degrees, b=120 degrees), under the assumption that there is a perfect correlation between A and B when both sides use the same detector setting, and that this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.
DevilsAvocado said:
Could we make a 'simplification' of this also, and say:

According to QM predictions, everything depends on the relative angle between the polarizers a & b. If measured parallel (0º) or perpendicular (90º), the outcome is perfectly correlated/anticorrelated. In any other case, it’s statistically correlated through the QM prediction cos^2(a-b).

Every outcome on every angle is perfectly random for A & B, with one 'exception' for parallel and perpendicular, where the outcome for A must be perfectly correlated/anticorrelated to B (still individually perfectly random).​

Correct?
By "perfectly random" you just mean that if we look at A individually or B individually, without knowing what happened at the other one, then there's always an 0.5 chance of one result and an 0.5 chance of the opposite result, right? (so P(A|a)=P(B|b)=0.5) And this is still just as true when talk about the "exception" case of parallel or perpendicular detectors (as you point out when you say 'still individually perfectly random'), so it could be a little misleading to call this an "exception", but otherwise I have no problem with your summary.
 
  • #927
DrChinese said:
... I haven't seen good explanations of how it is calculated or I would provide a reference. Perhaps someone else knows a good reference on this.

Thanks for the info DrC.
 
  • #928
JesseM said:
... this perfect correlation is to be explained in a local realist way by making use of hidden variable λ.
Yes, this is obviously the key.
JesseM said:
By "perfectly random" you just mean that if we look at A individually or B individually, without knowing what happened at the other one, then there's always an 0.5 chance of one result and an 0.5 chance of the opposite result, right? (so P(A|a)=P(B|b)=0.5) And this is still just as true when talk about the "exception" case of parallel or perpendicular detectors (as you point out when you say 'still individually perfectly random'), so it could be a little misleading to call this an "exception", but otherwise I have no problem with your summary.
Correct, that’s what I meant. Thanks!
 
  • #929
DrChinese said:
Not with beam splitters! (Non-detection is not an issue ever, as we talk about the ideal case. Experimental tests must consider this.)

1) Non-detection is present in every Bell-test experiment ever performed.
2) The relevant beam-splitters are real ones used in real experiments, not idealized ones that have never been and can never be used in any experiment.

Bell should still have considered non-detection as one of the outcomes in addition to (-1, and +1). If you are right that non-detection is not an issue, the inequalities derived by assuming there are three possible outcomes right from the start, should also be violated. But if you do this and end up with an inequality that is no longer violated, then non-detection IS an issue.
 
  • #930
DrChinese said:
A little thought will tell you why this is not much of an issue. If we have a particle pair A, B and we send them through a beam splitter with detectors at both output ports, we should end up with one of the following 4 cases:

1. A not detected, B not detected.
2. A detected, B not detected.
3. A not detected, B detected.
4. A detected, B detected.
Correct.
In Bell's treatment, only case 4 is considered, the rest are simply ignored or assumed to not be possible.

DrChinese said:
However we won't actually know when case 1 occurs, correct? But unless the chance of case 1 is substantially greater than either 2 or 3 individually (and probability logic indicates it should be definitely less - can you see why?), we can estimate it. If case 4 occurs 50% of the time or more, then case 1 should occur less than 10% of the time. This becomes essentially a vanishing number as visibility approaches 90%: cases 2 and 3 are then each happening only about 1 in 10 times, which would imply a case 1 rate of about 1%.
This is not even wrong. It is outrageous. Case 1 corresponds to two photons leaving the source but none detected, cases 2-3 correspond to two photons leaving the source and only one detected on any of the channels. In Bell test experiments, coincidence-circuitry eliminates 1-3 from consideration because there is no way in the inequalities to include that information. The inequalities are derived assuming that only case 4 is possible.

To determine the likelihood of each case from relative frequencies, you need to count that specific case and divide by the total number for all cases (1-4), or alternatively, by all photon pairs leaving the source. Therefore, the relative frequency will be the total number of the specific case observed divided by the total number of photon pairs produced by the source, i.e.:
P(caseX) = N(caseX) / {N(case1) + N(case2) + N(case3) + N(case4)}

If you are unable to tell that case 1 has occurred, then you can never know what proportion of the particle pairs resulted in any of the cases, because N(case1) is part of the denominator!

So when you say "if case 4 occurs 50% of the time", you have to explain what represents 100%.
For example, consider the following frequencies for the case in which 220 particle pairs are produced and we have:

case 1: 180
case 2: 10
case 3: 10
case 4: 20

Since, according to you, we cannot know when case 1 occurs, our observed total is only 40, so:
according to you, P(case4) = 50%, P(case2) = 25%, and P(case3) = 25%;
according to you, P(case1) should be vanishingly small since P(case4) is high.

But as soon as you realize that our total is actually 220, not 40, as you mistakenly thought,
P(case1) becomes 82%, for exactly the same experiment, just by correcting the simple error you made.
It is even worse with Bell because, according to him, cases 1-3 do not exist, so his P(case4) is 100%; considering even only cases 2 and 3, as you suggested, would require including a non-detection outcome in addition to +1 and -1.
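To put the same arithmetic in a short Python sketch (the counts are the hypothetical ones above):

```python
# Hypothetical counts from the example above: 220 pairs produced by the source.
counts = {"case1": 180, "case2": 10, "case3": 10, "case4": 20}

# If case 1 cannot be observed, the experimenter's denominator is cases 2-4 only:
observed_total = counts["case2"] + counts["case3"] + counts["case4"]  # 40
print(counts["case4"] / observed_total)  # 0.5  -> the claimed "P(case4) = 50%"

# With the true denominator (every pair the source produced):
true_total = sum(counts.values())  # 220
print(counts["case1"] / true_total)  # ~0.818 -> P(case1) is 82%, not vanishing
```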

Now that this blatant error is clear, let us look at real experiments to see which approach is more reasonable, by looking at what proportion of photons leaving the source is actually detected.

For all Bell-test experiments performed to date, only 5-30% of the photons emitted by the source have been detected, with only one exception. And this exception, which I'm sure DrC and JesseM will remind us of, had other, more serious problems. Let us make sure we are clear about what this means.

It means that for almost all those experiments usually thrown around as proof of non-locality, P(case4) has been at most 30%, and as low as 5% in some cases. The question then is: where did the whopping 70% or more go?

Therefore it is clear, first of all by common sense, then by probability theory, and finally as confirmed by numerous experiments, that non-detection IS an issue and should have been included in the derivation of the inequalities!
 
  • #931
billschnieder said:
1) Non-detection is present in every Bell-test experiment ever performed
2) The relevant beam-splitters are real ones used in real experiments, not idealized ones that have never been used and can never be used in any experiment.

Bell should still have considered non-detection as one of the outcomes, in addition to -1 and +1. If you are right that non-detection is not an issue, then inequalities derived by assuming three possible outcomes right from the start should also be violated. But if you do this and end up with an inequality that is no longer violated, then non-detection IS an issue.

Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.
 
  • #932
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.

Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared with idealized theoretical proofs in which non-detection was not considered, since those idealized proofs make assumptions that will never be fulfilled in any real experiment.

QM works because it is not an idealized theoretical proof: it actually incorporates and accounts for the experimental setup. It is therefore not surprising that QM and real experiments agree, while Bell's inequalities are the only ones left out in the cold.
 
Last edited:
  • #933
billschnieder said:
Correct.
In Bell's treatment, only case 4 is considered, the rest are simply ignored or assumed to not be possible.

This is not even wrong. It is outrageous. Case 1 corresponds to two photons leaving the source but none detected, cases 2-3 correspond to two photons leaving the source and only one detected on any of the channels. In Bell test experiments, coincidence-circuitry eliminates 1-3 from consideration because there is no way in the inequalities to include that information. The inequalities are derived assuming that only case 4 is possible.

Oh really? Cases 2 and 3 are certainly observed and recorded. They are usually excluded from counting because of the Coincidence Time Window, this is true. But again, this is just a plain misunderstanding of the process. You cannot actually have the kind of stats you describe, because if detection failures at the two arms are independent, the probabilities satisfy p(1)*p(4) = p(2)*p(3), i.e. p(1) = p(2)*p(3)/p(4). Now this independence is only approximate, and there hypothetically could be a force or something that causes deviation. But as mentioned, that would require a physically testable hypothesis.
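A minimal sketch of that constraint, assuming independent detection at the two arms (the counts are the hypothetical ones from the previous post):

```python
# Under independent detection, with per-arm non-detection probabilities qA, qB:
#   p1 = qA*qB          (neither detected)
#   p2 = (1-qA)*qB      (A detected, B not)
#   p3 = qA*(1-qB)      (A not detected, B detected)
#   p4 = (1-qA)*(1-qB)  (both detected)
# hence p1*p4 = p2*p3 exactly, i.e. N1 should be close to N2*N3/N4.

N2, N3, N4 = 10, 10, 20      # the counts assumed in the previous post
print(N2 * N3 / N4)          # 5.0 fully-undetected pairs expected, not 180

# And the figures quoted earlier: per-arm efficiency of about 0.9 gives
q = 0.1                      # per-arm non-detection probability
print(q * (1 - q))           # 0.09: cases 2 and 3 each about 1 in 10
print(q * q)                 # 0.01: case 1 about 1%
```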

As far as I can see, there are currently very high detection efficiencies. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129

Or perhaps:

(You just have to read this as it addresses many of these issues directly. Suffice it to say that they address the issue of collection of pairs from PDC very nicely.)

Violation by 213 SD.
http://arxiv.org/abs/quant-ph/0303018
 
  • #934
billschnieder said:
Therefore correlations observed in real experiments, in which non-detection matters, cannot be compared with idealized theoretical proofs in which non-detection was not considered, since those idealized proofs make assumptions that will never be fulfilled in any real experiment.

You know, if there were only 1 experiment ever performed, you might be correct. But this issue has been raised, addressed, and ultimately rejected as an ongoing issue over and over.
 
  • #935
DrChinese said:
Did you read what I said? I said non-detection DOES matter in experiments. But not in a theoretical proof such as Bell's.
Yes, Bell's proof was just showing that the theoretical predictions of QM were incompatible with the theoretical predictions of local realism, not deriving equations that were directly applicable to experiments. Though as I've already said, you can derive inequalities that include detector efficiency as a parameter, and there have been at least a few experiments with sufficiently high detector efficiency that these inequalities are violated (though those experiments were vulnerable to the locality loophole).

A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
 
  • #936
JesseM said:
If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
What has this got to do with anything? If there was a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind. I am after the truth; I don't religiously follow one side just because I have invested my whole life in it. So why would I want to bet at all?

I am merely pointing out here that the so-called proof of non-locality is unjustified, which is not the same as saying there will never be any proof. It seems from your suggestion that you are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
 
  • #937
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example, which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Bell derives inequalities by assuming that a single particle is measured at multiple angles. Experiments are performed in which many different particles are measured at multiple angles. Apples vs oranges. Comparing the two is equivalent to comparing the average height obtained by measuring a single person's height 1000000 times, with the average height obtained by measuring 1000000 different people each exactly one time.

The point is that certain assumptions are made about the data when deriving the inequalities, that must be valid in the data-taking process. God is not taking the data, so the human experimenters must take those assumptions into account if their data is to be comparable to the inequalities.

Consider a certain disease that strikes persons in different ways depending on circumstances. Assume that we deal with sets of patients born in Africa, Asia and Europe (denoted a,b,c). Assume further that doctors in three cities Lyon, Paris, and Lille (denoted 1,2,3) are assembling information about the disease. The doctors perform their investigations on randomly chosen but identical days (n) for all three, where n = 1,2,3,...,N for a total of N days. The patients are denoted Alo(n), where l is the city, o is the birthplace and n is the day. Each patient is then given a diagnosis of A = +1/-1 based on presence or absence of the disease. So if a patient from Europe examined in Lille on the 10th day of the study was negative, A3c(10) = -1.

According to the Bell-type Leggett-Garg inequality

Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1

In the case under consideration, our doctors can combine their results as follows

A1a(n)A2b(n) + A1a(n)A3c(n) + A2b(n)A3c(n)

It can easily be verified that, by combining any possible diagnosis results, the Leggett-Garg inequality will not be violated: the result of the above expression will always be >= -1 so long as the cyclicity (XY+XZ+YZ) is maintained. Therefore the average result will also satisfy that inequality, and we can drop the indices and write the inequality based only on place of origin as follows:

<AaAb> + <AaAc> + <AbAc> >= -1

Now consider a variation of the study in which only two doctors perform the investigation. The doctor in Lille examines only patients of type (a) and (b), and the doctor in Lyon examines only patients of type (b) and (c). Note that patients of type (b) are examined twice as often. The doctors, not knowing of or having any reason to suspect any influence of the date or location of examination, decide to designate their patients based only on place of origin.

After numerous examinations they combine their results and find that

<AaAb> + <AaAc> + <AbAc> = -3

They also find that the single outcomes Aa, Ab, Ac appear randomly distributed around +1/-1, and they are completely baffled. How can the single outcomes be completely random while the products are not? After lengthy discussions they conclude that there must be a superluminal influence between the two cities.

But there are other, more reasonable explanations. Note that by measuring in only two cities they have removed the cyclicity intended in the original inequality. It can easily be verified that the following scenario will result in what they observed:

- on even dates Aa = +1 and Ac = -1 in both cities while Ab = +1 in Lille and Ab = -1 in Lyon
- on odd days all signs are reversed

In the above case
<A1aA2b> + <A1aA2c> + <A1bA2c> = -3
which is consistent with what they saw. Note that this expression does NOT maintain the cyclicity (XY+XZ+YZ) of the original inequality for the situation in which only two cities are considered and one group of patients is measured more than once. But by dropping the indices for the cities, it gives the false impression that the cyclicity is maintained.

The reason for the discrepancy is that the data is not indexed properly to produce a data structure consistent with the inequalities as derived. Specifically, the inequalities require cyclicity in the data, and since experimenters cannot possibly know all the factors in play in order to index the data so as to preserve that cyclicity, it is unreasonable to expect their data to match the inequalities.
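A short simulation makes this concrete (a sketch of the scenario just described, with even/odd dates standing in for the unknown factor):

```python
# Sketch of the two-doctor scenario: on even dates Aa = +1 and Ac = -1 in both
# cities, Ab = +1 in Lille and Ab = -1 in Lyon; on odd dates all signs reverse.
from itertools import product

N = 1000
s_ab = s_ac = s_bc = 0
for day in range(N):
    sign = 1 if day % 2 == 0 else -1
    a_lille, b_lille = sign, sign      # doctor 1 (Lille) sees types a and b
    b_lyon, c_lyon = -sign, -sign      # doctor 2 (Lyon) sees types b and c
    s_ab += a_lille * b_lyon           # contributes to <A1a A2b>
    s_ac += a_lille * c_lyon           # contributes to <A1a A2c>
    s_bc += b_lille * c_lyon           # contributes to <A1b A2c>

print((s_ab + s_ac + s_bc) / N)        # -3.0: the ">= -1" bound is broken

# With genuine triples (one a, b, c value set per patient-day), the cyclicity
# holds and the bound can never be broken:
assert all(a*b + a*c + b*c >= -1 for a, b, c in product((-1, 1), repeat=3))
```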

For a fuller treatment of this example, see Hess et al, Possible experience: From Boole to Bell. EPL. 87, No 6, 60007(1-6) (2009)

The key word is "cyclicity" here. Now let's look at various inequalities:

Bell's equation (15):
1 + P(b,c) >= | P(a,b) - P(a,c)|
a, b, c each occur in two of the three terms, each time together with a different partner. However, in actual experiments the (b,c) pair is analyzed at a different time from the (a,b) pair, so the b's are not the same. Just because the experimenter sets a macroscopic angle does not mean that the complete microscopic state of the instrument, which he has no control over, is in the same state.

CHSH:
|q(d1,y2) - q(a1,y2)| + |q(d1,b2)+q(a1,b2)| <= 2
d1, y2, a1, b2 each occur in two of the four terms. The same argument as above applies.

Leggett-Garg:
Aa(.)Ab(.) + Aa(.)Ac(.) + Ab(.)Ac(.) >= -1
 
  • #938
billschnieder said:
What has this got to do with anything? If there was a convincing experiment which fulfilled all the assumptions in Bell's derivation, I would change my mind.
What do you mean by "assumptions", though? Are you just talking about the assumptions about the observable experimental setup, like spacelike separation between measurements and perfect detection of all pairs (or a sufficiently high number of pairs if we are talking about a derivation of an inequality that includes detector efficiency as a parameter)? Or are you including theoretical assumptions like the idea that the universe obeys local realist laws and that there is some set of local variables λ such that P(AB|ab)=P(A|aλ)*P(B|bλ)? Of course Bell would not expect that any real experiment could fulfill those theoretical assumptions, since he believed the predictions of QM were likely to be correct and his proof was meant to be a proof-by-contradiction showing these theoretical assumptions lead to predictions incompatible with QM under the given observable experimental conditions.
billschnieder said:
I am merely pointing out here that the so-called proof of non-locality is unjustified
You can only have "proofs" of theoretical claims, for empirical claims you can build up evidence but never prove them with perfect certainty (we can't 'prove' the Earth is round, for example). Bell's proof is not intended to be a proof that non-locality is real in the actual world, just that local realism is incompatible with QM. Of course you apparently doubt some aspects of this purely theoretical proof, like the idea that in any local realist universe it should be possible to find a set of local variables λ such that P(AB|ab)=P(A|aλ)*P(B|bλ), but you refuse to engage in detailed discussion on such matters. In any case I would say the evidence is strong that QM's predictions about Aspect-type experiments are correct, even if there are a few loopholes like the fact that no experiment has simultaneously closed the detector efficiency and locality loopholes (but again, I think it would be impossible to find a local realist theory that exploited both loopholes in a way consistent with the experiments that have been done so far but didn't look extremely contrived).
billschnieder said:
It seems from your suggestion that you are already absolutely convinced of non-locality, so would you bet a large sum of money against the idea that non-locality will be found to be a serious misunderstanding?
Personally I tend to favor the many-worlds interpretation of QM, which could allow us to keep locality by getting rid of the implicit assumption in Bell's proof that every measurement must have a unique outcome (to see how getting rid of this can lead to a local theory with Bell inequality violations, you could check out my post #11 on this thread for a toy model, and post #8 on this thread gives references to various MWI advocates who say it gives a local explanation for BI violations). I would however bet a lot of money that A) future Aspect-type experiments will continue to match the predictions of QM about Bell inequality violations, and B) mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)
 
Last edited:
  • #939
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example, which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.

Thank you! I was hoping the artistry would show through.

All I can really say is that any local realistic prediction you care to make can pretty well be falsified. On the other hand, any Quantum Mechanical prediction will not. So at the end of the day, your definitional quibbling is not very convincing. All you need to do is define LR so we can test it. Saying "apples and oranges" when it looks like "apples and apples" (since we start with perfect correlations) is not impressive.

So... make an LR prediction instead of hiding.
 
  • #940
24 000 hits and still going... Einstein is probably turning in his grave at the way the EPR argument is still going and going...
 
  • #941
billschnieder said:
BTW,
Even if an experimenter ensured 100% detection efficiency, they still have to ensure cyclicity in their data, as illustrated in https://www.physicsforums.com/showpost.php?p=2766980&postcount=110

Surprisingly, you both artfully avoid addressing this example, which clearly shows a mechanism for violating the inequalities that has nothing to do with detection efficiency.
I did respond to that post, but I didn't end up responding to your later post #128 on the subject here because before I got to it you said you didn't want to talk to me any more unless I agreed to make my posts as short as you wanted them to be and for me not to include discussions of things I thought were relevant if you didn't agree they were relevant. But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with 1 of several possible combinations of detector settings as with other inequalities)--see this paper and http://www.nature.com/nphys/journal/v6/n6/full/nphys1641.html which both describe it using terms like "temporal inequality" and "inequality in time", for example. I also found the paper where you got the example with patients from different countries here, they explain in the text around equation (8) what the example (which doesn't match the conditions assumed in the Leggett-Garg inequality) has to do with the real Leggett-Garg inequality:
Realism plays a role in the arguments of Bell and followers because they introduce a variable λ representing an element of reality and then write

Γ = <A_a(λ)A_b(λ)> + <A_a(λ)A_c(λ)> + <A_b(λ)A_c(λ)> >= -1    (8)

Because no λ exists that would lead to a violation except a λ that depends on the index pairs (a, b), (a, c) and (b, c) the simplistic conclusion is that either elements of reality do not exist or they are non-local. The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs. This assumption implies the existence of the combinatorial-topological cyclicity that in turn implies the validity of a non-trivial inequality but has no physical basis. Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?
If you look at that first paper, they mention on p. 2 that in deriving the inequality each particle is assumed to be in one of the two possible states at all times, so each particle has a well-defined classical "history" of the type shown in the diagram at the top of p. 4, and we assume there is some well-defined probability distribution on the ensemble of all possible classical histories. They also mention at the bottom of p. 3 that deriving the inequality requires that we assume it is possible to make "noninvasive measurements", so the choice of which of 3 times to make our first measurement does not influence the probability of different possible classical histories. They mention that this assumption can also be considered a type of "locality in time". This assumption is a lot more questionable than the usual type of locality assumed when there is a spacelike separation between measurements, since nothing in local realism guarantees that you can make "noninvasive" measurements on a quantum system which don't influence its future evolution after the measurement. And this also seems to be the assumption the authors are criticizing in the quote above when they say 'Why should the elements of reality not all be different? Why should they, for example not include the time of measurement?' (I suppose the λ that appears in the equation in the quote represents a particular classical history, so the inequality would hold as long as the probability distribution P(λ) on different possible classical histories is independent of what pair of times the measurements are taken on a given trial.) So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use).
 
Last edited by a moderator:
  • #942
JesseM said:
...mainstream physicists aren't going to end up deciding that Bell's theoretical proof is fundamentally flawed and that QM is compatible with a local realist theory that doesn't have any of the kinds of "weird" features that are included as loopholes in rigorous versions of the proof (like many-worlds, or like 'conspiracies' in past conditions that create a correlation between the choice of detector setting and the state of hidden variables at some time earlier than when the choice is made)

True, not likely to change much anytime soon.

The conspiracy idea (and they go by a lot of names, including No Free Will and Superdeterminism) is not really a theory. More like an idea for a theory. You would need to provide some kind of mechanism, and that would require a deep theoretical framework in order to account for Bell Inequality violations. And again, much of that would be falsifiable. No one actually has one of those on the table. A theory, I mean, and a mechanism.

If you don't like abandoning c, why don't you look at Relational Blockworld? No non-locality, plus you get the added bonus of a degree of time symmetry - no extra worlds to boot! :smile:
 
  • #943
DrChinese said:
... However we won't actually know when case 1 occurs, correct? But unless the chance of 1 is substantially greater than either 2 or 3 individually (and probability logic indicates it should be definitely less - can you see why?), then we can estimate it. If case 4 occurs 50% of the time or more, then 1 should occur less than 10% of the time. This is in essence a vanishing number, since visibility is approaching 90%. That means cases 2 and 3 are happening only about 1 in 10, which would imply case 1 of about 1%.

OMG. I can only hope that < 10% of my brain was 'connected' when asking about this the first time... :redface:

OF COURSE we can’t know when case 1 occurs! There are no little "green EPR men" at the source shouting – Hey guys! Here comes entangled pair no. 2345! ARE YOU READY!

Sorry.

DrChinese said:
So you have got to claim all of the "missing" photons are carrying the values that would prove a different result. And this number is not much. I guess it is *possible* if there is a physical mechanism which is responsible for the non-detections, but that would also make it experimentally falsifiable. But you should be aware of how far-fetched this really is. In other words, in actual experiments cases 2 and 3 don't occur very often. Which places severe constraints on case 1.

Far-fetched?? To me this looks like something that Crackpot Kracklauer would use as the final disproof of all mainstream science. :smile:

Seriously, an unknown "physical mechanism" working against the reliability of EPR-Bell experiments!:bugeye:? If someone is using this cantankerous argument as a proof against Bell's Theorem, he’s apparently not considering the consequences...

That "physical mechanism" would need some "artificial intelligence" to pull that thru, wouldn’t it?? Some kind of "global memory" working against the fair sampling assumption – Let’s see now, how many photon pairs are detected and how many do we need to mess up, to destroy this silly little experiment?

Unless this "physical mechanism" also can control the behavior of humans (humans can mess up completely on their own as far as we know), it would need some FTL mechanism to verify that what should be measured is really measured = back to square one!

By the way, what’s the name of this interesting "AI paradigm"...?? :biggrin:


P.S. I checked this pretty little statement "QM works because it is not an idealized theoretical proof" against the Crackpot Index (http://math.ucr.edu/home/baez/crackpot.html) and it scores 10 points!
"10 points for each claim that quantum mechanics is fundamentally misguided (without good evidence)."
Not bad!
 
Last edited by a moderator:
  • #944
JesseM said:
But since you bring it up, I think you're just incorrect in saying in post #128 that the Leggett-Garg inequality is not intrinsically based on a large collection of trials where on each trial we measure the same system at 2 of 3 possible times (as opposed to measuring two parts of an entangled system with 1 of several possible combinations of detector settings as with other inequalities)

As I mentioned to you earlier, it is your opinion here that is wrong. Of course, the LGI applies to the situation you mention, but inequalities of that form were originally proposed by Boole in 1862 (see http://rstl.royalsocietypublishing.org/content/152/225.full.pdf+html) and had nothing to do with time. All that is necessary for them to apply is n-tuples of two-valued (+/-) variables. In Boole's case it was three boolean variables. The inequalities result simply from arithmetic, and nothing else.
We perform an experiment in which each data point consists of a triple of values such as (i,j,k). Let us call this set S123. We then decide to analyse this data by extracting three data sets of pairs, S12, S13, S23. What Boole showed is essentially that if i, j, k are two-valued variables, then no matter the type of experiment generating S123, the datasets of pairs extracted from S123 will satisfy the inequalities:

|<S12> +/- <S13>| <= 1 +/- <S23>

You can verify that this is Bell's inequality (replace 1,2,3 with a,b,c). Using the same ideas he came up with a lot of different inequalities, one of which is the LGI, all from arithmetic. So a violation of these inequalities by a dataset points to mathematically incorrect treatment of the data.
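A brute-force check of that claim (a minimal sketch):

```python
# Verify Boole's inequalities for pairs extracted from triples: for every
# assignment of the two-valued variables (i, j, k), with s12 = i*j,
# s13 = i*k, s23 = j*k, both
#   |s12 + s13| <= 1 + s23   and   |s12 - s13| <= 1 - s23
# hold. Averaging over any distribution of triples preserves the bounds,
# so they also hold for <S12>, <S13>, <S23>.
from itertools import product

for i, j, k in product((-1, 1), repeat=3):
    s12, s13, s23 = i * j, i * k, j * k
    assert abs(s12 + s13) <= 1 + s23
    assert abs(s12 - s13) <= 1 - s23
print("all 8 triples satisfy both inequalities")
```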

You may be wondering how this applies to EPR. The EPR case involves performing an experiment in which each point is a pair of two-valued outcomes (i,j); let us call it R12. Bell and followers then assume that they can substitute Rij for Sij in the inequalities, forgetting that the inequality holds for pairs extracted from triples, but not necessarily for pairs of two-valued data.

Note that each term in Bell's inequality is a pair from a set of triples (a, b, c), but the data obtained from experiments is a pair from a set of pairs.

I also found the paper where you got the example with patients from different countries here,
That is why I gave you the reference before. Have you read it, all of it?

So this critique appears to be rather specific to the Leggett-Garg inequality, maybe you could come up with a variation for other inequalities but it isn't obvious to me (I think the 'noninvasive measurements' condition would be most closely analogous to the 'no-conspiracy' condition in usual inequalities, but the 'no-conspiracy' condition is a lot easier to justify in terms of local realism when λ can refer to the state of local variables at some time before the experimenters choose what detector settings to use)
This is not a valid criticism, for the following reasons:

1) You do not deny that the LGI is a Bell-type inequality. Why do you think it is called that?
2) You have not convincingly argued why the LGI should not apply to the situation described in the example I presented
3) You do not deny the fact that in the example I presented, the inequalities can be violated simply based on how the data is indexed.
4) You do not deny the fact that in the example, there is no way to ensure the data is correctly indexed unless all relevant parameters are known by the experimenters
5) You do not deny that Bell's inequalities involve pairs from a set of triples (a,b,c), and yet experiments involve pairs from a set of pairs.
6) You do not deny that it is impossible to measure triples in any EPR-type experiment; therefore Bell-type inequalities do not apply to those experiments. Boole showed 100+ years ago that you cannot substitute Rij for Sij in these types of inequalities.
 
Last edited:
  • #945
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
 
  • #946
nismaratwork said:
I can see why Dirac disdained this kind of pondering, which in the end has little or nothing to do with the work of physics and its applications in life.
There's a real point here. If the motivation in defining a realistic mechanism is simply to soothe a preexisting philosophical disposition, then such debates have nothing to do with anything. However, some big game remains in physics, perhaps even the biggest available. If further constraints can be established, or constraints that have been overly generalized can be better defined, it might turn out to be of value.

As DevilsAvocado put it, little "green EPR men" are not a very satisfactory theoretical construct. Realists want to avoid them with realistic constructs, with varying judgment on what constitutes realistic. Non-realists avoid them by denying the realism of the premise. In the end, the final product needs only a formal description with the greatest possible predictive value, independent of our philosophical sensibilities.
 
  • #947
JesseM said:
I'm glad you're still asking questions, but if you don't really understand the proof, and you do know it's been accepted as valid for years by mainstream physicists, doesn't it make sense to be a little more cautious about making negative claims about it like this one from an earlier post?

ThomasT said:
I couldn't care less if nonlocality or ftl exist or not. In fact, it would be very exciting if they did. But the evidence just doesn't support that conclusion.
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.

The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results. But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.) I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.

Which you seem to agree with:
JesseM said:
It (the form of Bell's 2) shows how the joint probability can be separated into the product of two independent probabilities if you condition on the hidden variables λ. So, P(AB|abλ)=P(A|aλ)*P(B|bλ) can be understood as an expression of the locality condition. But he obviously ends up proving that this doesn't work as a way of modeling entanglement...it's really only modeling a case where A and B are perfectly correlated (or perfectly anticorrelated, depending on the experiment) whenever a and b are the same, under the assumption that there is a local explanation for this perfect correlation (like the particles being assigned the same hidden variables by the source that created them).

Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl? Stated simply by DA, and which you (and I) agree with:
DevilsAvocado said:
Bell's(2) is not about entanglement, Bell's(2) is only about the Hidden variable λ.
 
  • #948
ThomasT said:
I understand the proofs of BIs. What I don't understand is why nonlocality or ftl are seriously considered in connection with BI violations and used by some to be synonymous with quantum entanglement.
Yes, you don't understand it, but mainstream physicists are in agreement that Bell's equations all follow directly from local realism plus a few minimal assumptions (like no parallel universes, no 'conspiracies' in past conditions that predetermine what choice the experimenter will make on each trial and tailor the earlier hidden variables to those future choices), so why not consider the possibility that the problem lies with your understanding rather than with that of all those physicists over the decades?
ThomasT said:
The evidence supports Bell's conclusion that the form of Bell's (2) is incompatible with qm and experimental results.
And (2) would necessarily be true in all local realist theories that satisfy those few minimal assumptions. (2) is not in itself a separate assumption, it follows logically from the postulate of local realism.
ThomasT said:
But that's not evidence, and certainly not proof, that nature is nonlocal or ftl. (I think that most mainstream scientists would agree that the assumption of nonlocality or ftl is currently unwarranted.)
They would agree that it's warranted to rule out local realist theories. Do you disagree with that? Of course this doesn't force you to believe in ftl, you are free to just drop the idea of an objective universe that has a well-defined state even when we're not measuring it (which is basically the option taken by those who prefer the Copenhagen interpretation), or consider the possibility that each measurement splits the experimenter into multiple copies who see different results (many-worlds interpretation), or consider the possibility of some type of backwards causality that can create the kind of "conspiracies" I mentioned.
ThomasT said:
I think that a more reasonable hypothesis is that Bell's (2) is an incorrect model of the experimental situation.
Local realism is an incorrect model, but (2) is not a separate assumption from local realism, it would be true in any local realist theory.
ThomasT said:
Why doesn't the incompatibility of Bell's (2) with qm and experimental results imply nonlocality or ftl?
It implies the falsity of local realism, which means if you are a realist who believes in an objective universe independent of our measurements, and you don't believe in any of the "weird" options like parallel worlds or "conspiracies", your only remaining option is nonlocality/ftl.
 
Last edited:
  • #949
DrChinese said:
As far as I can see, there are currently very high detection efficiencies. From Zeilinger et al:

These can be characterized individually by measured visibilities, which were: for the source, ≈ 99% (98%) in the H/V (45°/135°) basis; for both Alice’s and Bob’s polarization analyzers, ≈ 99%; for the fibre channel and Alice’s analyzer (measured before each run), ≈ 97%, while the free-space link did not observably reduce Bob’s polarization visibility; for the effect of accidental coincidences resulting from an inherently low signal-to-noise ratio (SNR), ≈ 91% (including both dark counts and multipair emissions, with 55 dB two-photon attenuation and a 1.5 ns coincidence window).

Violation by 16 SD over 144 kilometers.
http://arxiv.org/abs/0811.3129
What has visibility in common with detection efficiency? :bugeye:
Visibility = (coincidence-max - coincidence-min) / (coincidence-max + coincidence-min)
Efficiency = coincidence rate / singles rate
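The difference is easy to see with hypothetical numbers:

```python
# Visibility vs. detection efficiency, per the definitions above
# (all count rates below are hypothetical, purely for illustration).
coinc_max, coinc_min = 1000.0, 10.0   # coincidence rates, best/worst settings
singles_rate = 20000.0                # count rate at one detector alone
coinc_rate = 1000.0                   # coincidence rate

visibility = (coinc_max - coinc_min) / (coinc_max + coinc_min)
efficiency = coinc_rate / singles_rate

print(round(visibility, 3))  # 0.98: visibility can be very high...
print(efficiency)            # 0.05: ...while detection efficiency stays very low
```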
 
  • #950
JesseM said:
A few papers I came across suggested that experiments which closed both the detector efficiency loophole and the locality loophole simultaneously would likely be possible fairly soon. If someone offered to bet Bill a large sum of money that the results of these experiments would continue to match the predictions of QM (and thus continue to violate Bell inequalities that take into account detector efficiency), would Bill bet against them?
Interesting. And do those papers suggest at least approximately what kind of experiments they will be?
Or is it just a very general idea?

Besides, if you want to discuss betting with money, you are in the wrong place.
 
