Local realism ruled out? (was: Photon entanglement and )

  • #151
What about "Bell Locality" alone?

DrChinese said:
Bell local + Bell realistic = ruled out.
Dr. Chinese, I am wondering, do you not also agree that the following stronger statement is true as well?

Bell local = ruled out
 
  • #152


Eye_in_the_Sky said:
Dr. Chinese, I am wondering, do you not also agree that the following stronger statement is true as well?

Bell local = ruled out

No, but I can certainly understand why you might feel that way.

The Bell argument - to me - centers around counterfactual reasoning (realism) more than locality (separability). Realism being the requirement that particles have definite values for observables regardless of their actually being measured. Without this critical requirement, the Bell Inequality cannot be derived, and therefore it cannot be violated.

But a reasonable person would also look at entanglement and say, gee, there must be *some* kind of non-local action occurring. I refer to that as "quantum non-locality" which to me simply encapsulates the idea that there are non-local correlations. But that does not strictly imply that Einsteinian (Bell) locality is violated.
 
  • #153
akhmeteli said:
Sorry, ThomasT, you've lost me again. This time I cannot say I don't understand a word, but 30% is too little for a meaningful discussion - this is a physics forum, not a crossword contest. With all due respect, if you believe you're saying something well-known that I don't know, give me a reference, if not, try to be clearer. And I mean much clearer.
I think I might have been presenting the argument the wrong way.

Bell locality applied to an LHV representation of the two-photon entangled state entails this:

P(A,B) = P(A) P(B)

That is, it entails that the joint probability be factorable (separable) as the product of the individual probabilities.

From probability theory and statistics, if two (sets of) events, A and B, are independent, then their joint probability is the product of the individual probabilities, P(A) and P(B).

So, we start out by observing that Bell locality represents statistical independence.
(This is different from the previous approach of assuming that Bell's locality condition represents causal independence, and then parsing it to include statistical independence.)

Does statistical independence imply causal independence?

The answer is yes, by contraposition: causal dependence entails statistical dependence.

However, in order to ascertain whether or not Bell locality is viable (that is, whether or not its application allows us to deduce the presence of superluminal causality) we must ask:

Does statistical dependence imply causal dependence?

Ok so far?
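ThomasT's factorability condition can be checked numerically. Below is a minimal sketch (Python, with two made-up toy distributions) of a test for whether a joint distribution P(A,B) factors into P(A)P(B), i.e. whether A and B are statistically independent:

```python
def is_factorable(joint, tol=1e-12):
    """Check whether a joint distribution P(A,B) factors as P(A)*P(B),
    i.e. whether A and B are statistically independent."""
    p_a = {a: sum(p for (x, _), p in joint.items() if x == a)
           for a in {x for x, _ in joint}}
    p_b = {b: sum(p for (_, y), p in joint.items() if y == b)
           for b in {y for _, y in joint}}
    return all(abs(joint[(a, b)] - p_a[a] * p_b[b]) < tol
               for a, b in joint)

# Two independent fair coins: the joint is the product of the marginals.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

# Perfectly correlated outcomes, as for an entangled pair at equal
# settings: the joint does NOT factor.
correlated = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

print(is_factorable(independent))  # True
print(is_factorable(correlated))   # False
```

The perfectly correlated case is exactly the equal-settings behavior of an entangled pair; it fails the factorability test even though no causal story is specified, which is the distinction the question above turns on.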
 
  • #154
ThomasT said:
I think I might have been presenting the argument the wrong way.

Bell locality applied to an LHV representation of the two-photon entangled state entails this:

P(A,B) = P(A) P(B)

That is, it entails that the joint probability be factorable (separable) as the product of the individual probabilities.

From probability theory and statistics, if two (sets of) events, A and B, are independent, then their joint probability is the product of the individual probabilities, P(A) and P(B).

So, we start out by observing that Bell locality represents statistical independence.
(This is different from the previous approach of assuming that Bell's locality condition represents causal independence, and then parsing it to include statistical independence.)
1. I agree that P(A,B) = P(A) P(B) is a test of a local realistic theory. Certainly I look for that in any model claiming to be local realistic. Actually I look for something in which P(A)=f(A, v1, v2, etc) as long as B is not a variable, even indirectly.

Since the advent of Bell, though, the meaning of independence has been blurred: it is obvious now that, somehow or another, P(A) and P(B) must be connected some way to make the relationships work out. In the De Raedt computer simulation, for example, random variables and specially shaped functions are introduced to achieve a pseudo-dependence on theta=A-B.

2. Of course, P(A,B) = P(A) P(B) alone does not lead to a violation of a Bell Inequality. That requires P(A, B, C)>=0 which is actually not true for all A, B, C. So that too is something I test for. I want there to be values produced for an A, B and C simultaneously using the same f(...). Now according to your line of thinking, this is automatically true if there is a separable f(). So 2. must not be necessary.

3. Lastly, I want the results to match QM predictions. I know, on the other hand, that this will not happen because of requirement 2 above. In the De Raedt simulation, it is false. So their argument is that the full universe does not, in fact, match the QM predictions. So you might conclude that we have demonstrated that 2. is not necessary.

--------

So where is the problem? Because I can ALSO start with 2 as well, ignoring 1. All I need to do is ask you to provide me with a data set of values for an A, B and C I select (such as 0, 120, 240 degrees). You can make them up any way you want, and you can use A, B and C to determine those values... they do not need to be separable! You cannot do that AND have them match requirement 3, which is that they match QM predictions for those same angles.

So are these requirements simply alternative requirements? In some sense they are. Travis Norsen, for one, would agree with you that Bell locality (he defines it somewhat differently) is sufficient for Bell's Theorem. I.e. he believes that realism is NOT a requirement of Bell's Theorem, and therefore all Bell tests prove that nature is non-local.

On the other hand, I argue that realism IS a requirement of Bell's Theorem. It is the requirement of a simultaneous value for A, B and C that makes it work. I personally think the separability requirement is not as important, but again it too IS a requirement.

And that is the standard view. You can try to prove Bell's Theorem without either requirement (locality or realism), but you won't be able to. You need both the ideas of f(variables) and values for A, B and C to get the result.
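DrChinese's challenge with the 0/120/240-degree settings can be made concrete. This sketch (assuming the standard convention that QM predicts P(same outcome) = cos²(a − b) for polarization-entangled photons) enumerates all eight predetermined instruction sets and shows none can reach the QM rate:

```python
import math
from itertools import product

# The three polarizer settings from the challenge above, in degrees.
settings = [0, 120, 240]

# QM prediction for entangled photons: P(same outcome) = cos^2(a - b).
def qm_match(a, b):
    return math.cos(math.radians(a - b)) ** 2

# Every "instruction set" assigns a predetermined +/-1 outcome to each
# of the three settings. Average over the three distinct setting pairs.
pairs = [(0, 1), (0, 2), (1, 2)]
lr_rates = []
for instr in product([+1, -1], repeat=3):
    agree = sum(instr[i] == instr[j] for i, j in pairs) / len(pairs)
    lr_rates.append(agree)

print(min(lr_rates))     # ~0.333 : the best any instruction set can do
print(qm_match(0, 120))  # ~0.25  : the QM prediction for settings 120 deg apart
```

Any assignment of values to three settings must agree on at least one of the three pairs, so the average agreement can never fall below 1/3, while QM demands 1/4 - which is the contradiction the post describes.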
 
  • #155
zooming in on "realism"

DrChinese said:
No, but I can certainly understand why you might feel that way.

The Bell argument - to me - centers around counterfactual reasoning (realism) more than locality (separability). Realism being the requirement that particles have definite values for observables regardless of their actually being measured. Without this critical requirement, the Bell Inequality cannot be derived, and therefore it cannot be violated.

But a reasonable person would also look at entanglement and say, gee, there must be *some* kind of non-local action occurring. I refer to that as "quantum non-locality" which to me simply encapsulates the idea that there are non-local correlations. But that does not strictly imply that Einsteinian (Bell) locality is violated.
In the above, you cite "realism" in connection with two different points:

CF ≡ counterfactuality

and

IS ≡ the existence of "instruction sets" .

How do you see the status of IS in Bell's derivation? Do you see it as a derived principle, or do you see it as an independent assumption? I see it as a derived principle in the following way:

BL ∧ PC ∧ CF → IS ,

where

BL ≡ Bell Locality

and

PC ≡ perfect (anti-)correlation for equal settings .
 
  • #156


Eye_in_the_Sky said:
In the above, you cite "realism" in connection with two different points:

CF ≡ counterfactuality

and

IS ≡ the existence of "instruction sets" .

How do you see the status of IS in Bell's derivation? Do you see it as a derived principle, or do you see it as an independent assumption? I see it as a derived principle in the following way:

BL ∧ PC ∧ CF → IS ,

where

BL ≡ Bell Locality

and

PC ≡ perfect (anti-)correlation for equal settings .

This is a good point. When you really break down Bell, you see what he has given us is a road map. Once we have that map, we have the key to breaking apart any local realistic theory. The map points out a lot of features of the territory. Some of these features probably should have been obvious even without the Inequality itself. And your point about perfect correlations (PC) is a great example.

PC needs to be a requirement of a LR theory, and Bell points this out early. It turns out this is NO MINOR POINT at all! Here we have inherent randomness that itself defies modeling (as the entangled outcomes are random at all settings) and yet they must match. Now, how can there be separability with this characteristic? Yes, we must add yet another constraint to account for this - one I didn't explicitly mention. I said: "... somehow or another, P(A) and P(B) must be connected some way to make the relationships work out." And that is both the PC you mention and more generally, Malus.

1. Instruction set may be a slight misnomer (although it is a good visual), because as Bell says: "...it follows that the result of any such measurement must actually be predetermined..." Now, of course you can say "don't forget the interaction with the polarizer", but that really makes no sense. Polarizer, beamsplitter, or whatever optical system, the results are PC. That is also true with electrons, so obviously it has nothing whatsoever to do with the underlying nature of the interaction with the measurement apparatus per se. There must be complete predetermination of outcomes at all settings in an LR model if there is PC. You also have random results (RR). So to me, PC + RR -> IS. And so this simple case is not so simple at all.

2. If you have predetermination, then presumably you have the separability we are asking for AND we have the realism we are asking for. Now you just need one final point, that there are angle settings for which no predetermined IS will work. If Bell simply supplied these settings out of thin air, without a proof, it would still be enough to provide a contradiction. Assuming for a moment - as Bell does - that QM predictions would match experimental results: IS <> QM.

3. You then don't even need Bell's Theorem to be valid if you accept the reasoning so far. Because who cares if the theorem is even valid? Once you know - as Bell figured out - what those angles are, you have everything you need to finish the picture. Bell mentions the predetermination in the second paragraph of his paper (i.e. the first paragraph of his argument). If he stopped there and said: PC -> IS <> QM we would already have a big mess on our hands for those who advocate the HV position.

4. We are now left to struggle to determine what element(s) of IS precisely is wrong. And that is where everyone gets into a tizzy. Is it locality? Is it realism? Is it contextuality? Is it separability? Clearly, there are a lot of issues to consider, and a lot rides on your definitions for these terms. It is obvious to me that the IS cannot exist "there" and "then". I.e. at the spacetime point that the entangled particle pair comes into existence, the IS cannot be restricted to that location at that time. There MUST be information entering into the equation from somewhere else and/or at some other point in time.

5. QM considers the "context" of the setup as part of its successful predictions. So I would simply state that the context spans points in space (i.e. is non-local), and the context somehow spans points in time (non-temporal, non-causal, or whatever you want to call it). That context violates our notions of local realism.
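To illustrate point 1 (PC forcing predetermination, which then fails at intermediate angles), here is a toy deterministic hidden-variable model - a hypothetical rule invented purely for illustration, not a model from the literature. It reproduces perfect correlation at equal settings, but its match-rate curve is linear in the angle rather than the cos² (Malus-like) law QM demands:

```python
import math
import random

random.seed(0)

def outcome(setting_deg, lam_deg):
    """Deterministic 'instruction': pass (+1) if the shared hidden
    polarization lies within 45 deg of the analyzer axis (mod 180)."""
    d = (setting_deg - lam_deg) % 180
    return +1 if d < 45 or d > 135 else -1

def lr_match(delta, n=200_000):
    """Monte-Carlo match rate of the toy LR model at relative angle delta."""
    hits = 0
    for _ in range(n):
        lam = random.uniform(0, 180)  # shared hidden variable
        hits += outcome(0, lam) == outcome(delta, lam)
    return hits / n

qm = lambda d: math.cos(math.radians(d)) ** 2

print(round(lr_match(0), 3))     # 1.0   : perfect correlation, as PC requires
print(round(lr_match(22.5), 3))  # ~0.75 : but QM predicts cos^2(22.5) ~ 0.854
```

The model passes the PC test by construction, yet at 22.5 degrees it gives roughly 0.75 against the quantum 0.854 - a concrete instance of an instruction set satisfying PC while missing the QM predictions.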
 
  • #157


DrChinese said:
No, but I can certainly understand why you might feel that way.

The Bell argument - to me - centers around counterfactual reasoning (realism) more than locality (separability). Realism being the requirement that particles have definite values for observables regardless of their actually being measured. Without this critical requirement, the Bell Inequality cannot be derived, and therefore it cannot be violated.

But a reasonable person would also look at entanglement and say, gee, there must be *some* kind of non-local action occurring. I refer to that as "quantum non-locality" which to me simply encapsulates the idea that there are non-local correlations. But that does not strictly imply that Einsteinian (Bell) locality is violated.


Reality can have no definite values.
 
  • #158
ThomasT said:
I don't know if it's a well known approach or not.

The argument is that Bell's locality condition isn't, exclusively, a locality condition. If it isn't, then what might this entail wrt the interpretation of experimental violations of inequalities based on Bell locality?

In a nutshell:

Bell locality doesn't just represent causal independence between A and B, but also statistical independence between A and B.

Statistical dependence between A and B means that a detection at A changes the sample space at B, and vice versa. The pairing process entails statistical dependence between A and B, and this statistical dependence can be accounted for via the local transmissions and interactions of the coincidence circuitry.

Statistical dependence between A and B is sufficient to violate inequalities based on Bell locality.

So, experimental violations of inequalities based on Bell locality, while they do rule out Bell local theories, don't imply nonlocality or necessarily rule out local realism.
I will try to describe the point of ThomasT a bit differently. I hope that I will be in line with what ThomasT is saying.

Let's take the equation that describes correlations of photons from Type I PDC:

P_{VV}(\alpha,\beta) = \sin^2\alpha\,\sin^2\beta\,\cos^2\theta_l + \cos^2\alpha\,\cos^2\beta\,\sin^2\theta_l + \frac{1}{4}\sin 2\alpha\,\sin 2\beta\,\sin 2\theta_l\,\cos\phi

This is equation (9) from the paper http://arxiv.org/abs/quant-ph/0205171

This relation produces the \frac{1}{2}\cos^2(\alpha-\beta) law when \theta_l is \pi/4 and \phi is 0. But when, for example, \theta_l is 0, it produces \sin^2\alpha\,\sin^2\beta, which is simply the product of two Malus-law probabilities.

So let's rewrite this EPR state with \theta_l = \pi/4 and \phi = 0:

P_{VV}(\alpha,\beta) = \frac{1}{2}\sin^2\alpha\,\sin^2\beta + \frac{1}{2}\cos^2\alpha\,\cos^2\beta + \frac{1}{4}\sin 2\alpha\,\sin 2\beta

The first and second terms have a perfectly sensible physical interpretation: each is the product of Alice's and Bob's H (V) photon detection probabilities, i.e. the chance that we get a click at the coincidence counter.
However, if detection efficiency is not 100%, then we have to make the additional assumption that the sample spaces of Alice's and Bob's detected H (V) photons are completely uncorrelated (random).
But we have a third, interference term, which can be positive as well as negative. To me the straightforward interpretation of this interference term is that the sample spaces of detected photons can become correlated (as I understand it, this is ThomasT's point). Say \sin 2\alpha becomes nonzero: Alice's side then has an uneven distribution over its sample space, but as long as \sin 2\beta is zero (Bob's distribution is even), Alice's unevenness has no effect. If both sides have uneven distributions, then depending on whether the unevenness is correlated or anti-correlated, this term becomes nonzero, positive or negative.

Please note that this is just an interpretation of standard QM; it supports the correctness of the ensemble interpretation and counts against interpretations that unconditionally separate the ensemble into individual photons.
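The two limiting cases claimed above can be verified numerically; a quick sketch (angles in radians):

```python
import math

def p_vv(alpha, beta, theta_l, phi):
    """Eq. (9) of quant-ph/0205171 for Type I PDC coincidence probability."""
    s, c = math.sin, math.cos
    return (s(alpha)**2 * s(beta)**2 * c(theta_l)**2
            + c(alpha)**2 * c(beta)**2 * s(theta_l)**2
            + 0.25 * s(2*alpha) * s(2*beta) * s(2*theta_l) * c(phi))

a, b = 0.3, 1.1  # arbitrary test angles

# theta_l = pi/4, phi = 0: reduces to the entangled-state (1/2)cos^2(a-b) law.
print(abs(p_vv(a, b, math.pi/4, 0) - 0.5 * math.cos(a - b)**2) < 1e-12)  # True

# theta_l = 0: reduces to a product of two Malus-law probabilities.
print(abs(p_vv(a, b, 0, 0) - math.sin(a)**2 * math.sin(b)**2) < 1e-12)   # True
```

The first check works because expanding (1/2)cos²(α−β) gives exactly the three surviving terms, including the ±(1/4)sin 2α sin 2β interference term discussed above.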
 
  • #159
on "instruction sets"

DrChinese said:
This is a good point. When you really break down Bell, you see what he has given us is a road map. Once we have that map, we have the key to breaking apart any local realistic theory. The map points out a lot of features of the territory. Some of these features probably should have been obvious even without the Inequality itself. And your point about perfect correlations (PC) is a great example.

PC needs to be a requirement of a LR theory, and Bell points this out early. It turns out this is NO MINOR POINT at all! Here we have inherent randomness that itself defies modeling (as the entangled outcomes are random at all settings) and yet they must match. Now, how can there be separability with this characteristic? Yes, we must add yet another constraint to account for this - one I didn't explicitly mention. I said: "... somehow or another, P(A) and P(B) must be connected some way to make the relationships work out." And that is both the PC you mention and more generally, Malus.

1. Instruction set may be a slight misnomer (although it is a good visual), because as Bell says: "...it follows that the result of any such measurement must actually be predetermined..." Now, of course you can say "don't forget the interaction with the polarizer", but that really makes no sense. Polarizer, beamsplitter, or whatever optical system, the results are PC. That is also true with electrons, so obviously it has nothing whatsoever to do with the underlying nature of the interaction with the measurement apparatus per se. There must be complete predetermination of outcomes at all settings in an LR model if there is PC. You also have random results (RR). So to me, PC + RR -> IS. And so this simple case is not so simple at all.

2. If you have predetermination, then presumably you have the separability we are asking for AND we have the realism we are asking for. Now you just need one final point, that there are angle settings for which no predetermined IS will work. If Bell simply supplied these settings out of thin air, without a proof, it would still be enough to provide a contradiction. Assuming for a moment - as Bell does - that QM predictions would match experimental results: IS <> QM.

3. You then don't even need Bell's Theorem to be valid if you accept the reasoning so far. Because who cares if the theorem is even valid? Once you know - as Bell figured out - what those angles are, you have everything you need to finish the picture. Bell mentions the predetermination in the second paragraph of his paper (i.e. the first paragraph of his argument). If he stopped there and said: PC -> IS <> QM we would already have a big mess on our hands for those who advocate the HV position.

4. We are now left to struggle to determine what element(s) of IS precisely is wrong. And that is where everyone gets into a tizzy. Is it locality? Is it realism? Is it contextuality? Is it separability? Clearly, there are a lot of issues to consider, and a lot rides on your definitions for these terms. It is obvious to me that the IS cannot exist "there" and "then". I.e. at the spacetime point that the entangled particle pair comes into existence, the IS cannot be restricted to that location at that time. There MUST be information entering into the equation from somewhere else and/or at some other point in time.

5. QM considers the "context" of the setup as part of its successful predictions. So I would simply state that the context spans points in space (i.e. is non-local), and the context somehow spans points in time (non-temporal, non-causal, or whatever you want to call it). That context violates our notions of local realism.
Dr. Chinese, thank you for your reply. However, I cannot see in it an answer to my question.

From your reply to ThomasT (sitting right above the post in which I asked my question), I think I can see what your answer would be.
DrChinese said:
... he believes that realism is NOT a requirement of Bell's Theorem, and therefore all Bell tests prove that nature is non-local.

On the other hand, I argue that realism IS a requirement of Bell's Theorem. It is the requirement of a simultaneous value for A, B and C that makes it work.

And that is the standard view. You can try to prove Bell's Theorem without either requirement (locality or realism), but you won't be able to. You need both the ideas of f(variables) and values for A, B and C to get the result.
It appears to me that your answer is:

I see the existence of instruction sets as an independent assumption and not as a derived principle in Bell's derivation.

... Am I correct?
 
  • #160


Eye_in_the_Sky said:
Dr. Chinese, thank you for your reply. However, I cannot see in it an answer to my question.

From your reply to ThomasT (sitting right above the post in which I asked my question), I think I can see what your answer would be.It appears to me that your answer is:

I see the existence of instruction sets as an independent assumption and not as a derived principle in Bell's derivation.

... Am I correct?

I think the Bell paper is a road map to a disproof of local realism. One of the paths is to demonstrate that PC (perfect correlations) implies predetermination in a classical (local realistic) world. Bell makes several concurrent arguments, so I try not to specifically say X -> Y about too many things. To me, it is clear that the instruction set mentality cannot work, and there are several ways to arrive at that point - and it depends on which assumption you start with.
 
  • #161
ThomasT said:
So, experimental violations of inequalities based on Bell locality, while they do rule out Bell local theories, don't imply nonlocality or necessarily rule out [STRIKE]local[/STRIKE] realism.

They imply nonlocality, but do not rule out realism.




http://arxiv.org/ftp/arxiv/papers/0811/0811.2862.pdf

...the Bell theorem has demonstrably nothing to do with the 'realism' as defined
by these authors Leggett, Zeilinger, Gröblacher and that, as a consequence, their conclusions about the foundational significance of the Bell theorem are unjustified...

...the role of Bell’s theorem is not to set constraints on how ‘realist’ we are allowed to be about quantum systems...





http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.4255v2.pdf

...In recent years the violation of Bell's inequality has often been interpreted as
either a failure of locality or of realism (or of both). The problem with such a
claim is that it is not clear what realism in this context should mean. Sometimes
realism is defined as the hypothesis that every physical quantity always has a
value and that measurements merely reveal these predetermined values. That
is, realism is identified with determinism. But if so, then, first, why should
one use the word local realism instead of local determinism? And second, Bell's
inequality can be stated and proven without any assumption about determinism.
Consequently, determinism is not the issue......


...In conclusion, the claim that the observation of a violation of a Bell inequality
leads to an alleged alternative between nonlocality and non-realism....However, it is not specific to Bell inequalities......Hence, all violations of Bell's inequality should be interpreted as a demonstration of nonlocality...





http://arxiv.org/PS_cache/arxiv/pdf/0809/0809.4000v1.pdf

...There is hardly a result that is more widely misunderstood in the scientific
community than Bell’s theorem...

...To summarize, what can one conclude from the violation of Leggett’s
inequality ? ....That doesn’t tell us anything about determinism or any type of philosophical realism.





http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.0958v1.pdf

...What really matters is the fact that the derivation of Bell’s inequality in no way whatsoever needs an assumption of realism...


....This being the situation we must conclude that in no way whatsoever Bell’s
inequality has something to do with realism. It simply identifies in a straightforward
and lucid way that what quantum phenomena impose to us is to accept the
unescapable fact that natural processes involving entangled states of composite
and far-away systems turn out to be unavoidably non-local...


....or by those who derive from experimental results inspired by not strictly convincing theoretical models unjustified conclusions concerning such an important issue as the one of the reality of the world around us.......
 
  • #162
Paging DrC. Haven't really gotten my head around the Gisin paper. The Zeilinger group's Leggett paper (arxiv 0704.2529) is titled "An experimental test of non-local realism" and starts out thus:

Most working scientists hold fast to the concept of 'realism' - a viewpoint according to which an external reality exists independent of observation. But quantum physics has shattered some of our cornerstone beliefs. According to Bell's theorem, any theory that is based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in spacelike separated regions) is at variance with certain quantum predictions. Experiments with entangled pairs of particles have amply confirmed these quantum predictions, thus rendering local realistic theories untenable. Maintaining realism as a fundamental concept would therefore necessitate the introduction of 'spooky' actions that defy locality. Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories. Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned [1].

concluding in this wise:

We have experimentally excluded a class of important non-local hidden-variable theories. In an attempt to model quantum correlations of entangled states, the theories under consideration assume realism, a source emitting classical mixtures of polarized particles (for which Malus' law is valid) and arbitrary non-local dependencies via the measurement devices. Besides their natural assumptions, the main appealing feature of these theories is that they allow us both to model perfect correlations of entangled states and to explain all existing Bell-type experiments. We believe that the experimental exclusion of this particular class indicates that any non-local extension of quantum theory has to be highly counterintuitive. For example, the concept of ensembles of particles carrying definite polarization could fail. Furthermore, one could consider the breakdown of other assumptions that are implicit in our reasoning leading to the inequality. These include Aristotelian logic, counterfactual definiteness, absence of actions into the past or a world that is not completely deterministic [30]. We believe that our results lend strong support to the view that any future extension of quantum theory that is in agreement with experiments must abandon certain features of realistic descriptions.

In addition there are a couple of Charles Tresser papers (arxiv 0501030 and 0608008v2) proposing that the locality assumption isn't even necessary for Bell: Occamize it out and what you have left are actually tests that violate "classical realism".
 
  • #163
yoda jedi said:
http://arxiv.org/PS_cache/arxiv/pdf/0904/0904.0958v1.pdf

...What really matters is the fact that the derivation of Bell’s inequality in no way whatsoever needs an assumption of realism...


....This being the situation we must conclude that in no way whatsoever Bell’s
inequality has something to do with realism. It simply identifies in a straightforward
and lucid way that what quantum phenomena impose to us is to accept the
unescapable fact that natural processes involving entangled states of composite
and far-away systems turn out to be unavoidably non-local...


....or by those who derive from experimental results inspired by not strictly convincing theoretical models unjustified conclusions concerning such an important issue as the one of the reality of the world around us.......

Yes, I have seen some of these arguments and papers previously. There are, as I have mentioned, those such as Norsen who make the argument that realism is not assumed in Bell. Of course, it is, but it is not marked as "Here is where the realism argument starts." So if you want to see it, look after Bell's (14): "It follows that c is another unit vector..." That is where the counterfactual argument begins; obviously with 2 photons there can only be 2 measurements (a and b).

So you can think whatever you want. On the other hand, there are plenty of other experiments - such as GHZ - in which the realistic position is demolished independently of Bell. So you might want to consider that as well.

As Gisin says, realism is often poorly defined. There is a reason for that: the fact is we don't know precisely how to define it. It could be considered akin to causality, non-contextuality, or something else. Experiments continue to probe the frontier.
 
  • #164
nikman said:
Paging DrC. Haven't really gotten my head around the Gisin paper. The Zeilinger group's Leggett paper (arxiv 0704.2529) is titled "An experimental test of non-local realism" and starts out thus:

Most working scientists hold fast to the concept of 'realism' - a viewpoint according to which an external reality exists independent of observation. But quantum physics has shattered some of our cornerstone beliefs. According to Bell's theorem, any theory that is based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in spacelike separated regions) is at variance with certain quantum predictions. Experiments with entangled pairs of particles have amply confirmed these quantum predictions, thus rendering local realistic theories untenable. Maintaining realism as a fundamental concept would therefore necessitate the introduction of 'spooky' actions that defy locality. Here we show by both theory and experiment that a broad and rather reasonable class of such non-local realistic theories is incompatible with experimentally observable quantum correlations. In the experiment, we measure previously untested correlations between two entangled photons, and show that these correlations violate an inequality proposed by Leggett for non-local realistic theories. Our result suggests that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned [1].

concluding in this wise:

We have experimentally excluded a class of important non-local hidden-variable theories. In an attempt to model quantum correlations of entangled states, the theories under consideration assume realism, a source emitting classical mixtures of polarized particles (for which Malus' law is valid) and arbitrary non-local dependencies via the measurement devices. Besides their natural assumptions, the main appealing feature of these theories is that they allow us both to model perfect correlations of entangled states and to explain all existing Bell-type experiments. We believe that the experimental exclusion of this particular class indicates that any non-local extension of quantum theory has to be highly counterintuitive. For example, the concept of ensembles of particles carrying definite polarization could fail. Furthermore, one could consider the breakdown of other assumptions that are implicit in our reasoning leading to the inequality. These include Aristotelian logic, counterfactual definiteness, absence of actions into the past or a world that is not completely deterministic [30]. We believe that our results lend strong support to the view that any future extension of quantum theory that is in agreement with experiments must abandon certain features of realistic descriptions.

In addition there are a couple of Charles Tresser papers (arxiv 0501030 and 0608008v2) proposing that the locality assumption isn't even necessary for Bell: Occamize it out and what you have left are actually tests that violate "classical realism".

I am with you on the Gisin paper; I don't really follow the point he is making. Clearly, he is in the middle of some of the most fascinating research on delayed choice experiments and quantum teleportation. To me, any delayed choice experiment is automatically an attack on realism. After all: if you can change the past with a future decision, how much realism can there be?

See also Hall's: http://arxiv.org/abs/0909.0015

Tresser's are good papers; I have seen them previously, and I happen to agree with him: I don't think you need locality to obtain the main Bell result. But I also believe that Bell's final conclusion does include a locality condition, and I think that is a generally accepted result.

Zeilinger and others have come up with a number of experiments showing a similar result: realism suffers from severe problems which are fundamentally in conflict with QM. I think the HUP is such, for example. I think entanglement, delayed choice, quantum erasers, GHZ, all of these are counterexamples to realism. Also, Adan Cabello is involved in some good work; here is a paper of his demonstrating contextuality in single photons (i.e. against realism).

http://arxiv.org/abs/0907.4494

"We present an experimental state-independent violation of an inequality for noncontextual theories on single particles. We show that 20 different single-photon states violate an inequality which involves correlations between results of sequential compatible measurements by at least 419 standard deviations. Our results show that, for any physical system, even for a single system, and independent of its state, there is a universal set of tests whose results do not admit a noncontextual interpretation. This sheds new light on the role of quantum mechanics in quantum information processing. "
 
Last edited by a moderator:
  • #165
ThomasT said:
I don't know if it's a well known approach or not.

The argument is that Bell's locality condition isn't, exclusively, a locality condition. If it isn't, then what might this entail wrt the interpretation of experimental violations of inequalities based on Bell locality?

In a nutshell:

Bell locality doesn't just represent causal independence between A and B, but also statistical independence between A and B.

Statistical dependence between A and B means that a detection at A changes the sample space at B, and vice versa. The pairing process entails statistical dependence between A and B, and this statistical dependence can be accounted for via the local transmissions and interactions of the coincidence circuitry.

Statistical dependence between A and B is sufficient to violate inequalities based on Bell locality.

Sorry, was not able to answer for a few days.

OK, at least now it's in plain English. Thank you.

However, your statement "The pairing process entails statistical dependence between A and B" is not obvious, so it needs proof, as you admit you don't know if this reasoning is well-known. One would think that if experiments are performed infrequently, coincidence circuitry effect should not be important. Of course, this is hand-waving, but I don't think it's my duty to prove that your statement is wrong, it's your duty to prove it's correct.
 
  • #166
ThomasT said:
I think I might have been presenting the argument the wrong way.

Bell locality applied to LHV representation of two photon entangled state entails this:

P(A,B) = P(A) P(B)

That is, it entails that the joint probability be factorable (separable) as the product of the individual probabilities.

From probability theory and statistics, if two (sets of) events, A and B, are independent, then their joint probability is the product of the individual probabilities, P(A) and P(B).

So, we start out by observing that Bell locality represents statistical independence.
(This is different from the previous approach of assuming that Bell's locality condition represents causal independence, and then parsing it to include statistical independence.)

Does statistical independence imply causal independence?

The answer is yes (causal dependence entails statistical dependence).

However, in order to ascertain whether or not Bell locality is viable (that is, whether or not its application allows us to deduce the presence of superluminal causality) we must ask:

Does statistical dependence imply causal dependence?

In general, not necessarily. But if the Bell inequalities are violated, I guess yes.
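To make the distinction concrete, here is a toy sketch (purely illustrative, not a model of any experiment): two outcomes generated locally from a shared hidden variable are statistically dependent without any direct causal link between them.

```python
import random

random.seed(0)

# Toy common-cause model: no direct causal link between A and B;
# each is computed locally from the shared hidden variable lam.
N = 100_000
n_a = n_b = n_ab = 0
for _ in range(N):
    lam = random.randint(0, 1)   # the "source" (common cause)
    A = lam                      # outcome at station A, a function of lam only
    B = lam                      # outcome at station B, a function of lam only
    n_a += A
    n_b += B
    n_ab += A * B                # counts the joint event A=1 and B=1

p_a, p_b, p_ab = n_a / N, n_b / N, n_ab / N
print(p_ab, p_a * p_b)           # ≈ 0.5 vs ≈ 0.25: P(A,B) != P(A)P(B)
```

So P(A,B) ≠ P(A)P(B): the pair is statistically dependent, yet there is no causal link from A to B. Conditioning on lam restores the factorization, which is exactly the hidden-variable form P(A,B|λ) = P(A|λ)P(B|λ) of Bell's condition.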
 
  • #167
nikman said:
Haven't really gotten my head around the Gisin paper.

Just to make it a little weirder, what about this one from Gisin which superficially seems to be in direct opposition to the other one (by ruling out more classes of non-local theories):

http://arxiv.org/abs/1002.1390

"... Hence, any covariant nonlocal model is equivalent to a Bell-local model and, consequently, contradicts well tested quantum predictions, the violation of Bell's inequality. ..."
 
  • #168
DrChinese said:
Just to make it a little weirder, what about this one from Gisin which superficially seems to be in direct opposition to the other one (by ruling out more classes of non-local theories)... "... Hence, any covariant nonlocal model is equivalent to a Bell-local model and, consequently, contradicts well tested quantum predictions, the violation of Bell's inequality. ..."

Gisin (kind of like our own Peter Morgan?) seems to be zeroing in on the measurement problem as the nexus of all our confusions? The inputting must be considered real, he says, and both Alice and Bob must be assumed to have freedom of choice -- but the reality of the physical measurement itself isn't obvious and this fact surely points to something deeply important although without suggesting specific questions to ask.

In a paper from a year ago Gisin's occasional colleague Suarez, sounding a bit quantum mystical, advances this:

It is argued that the quantum correlations are not maximally nonlocal to make it possible to control local outcomes from outside spacetime, and quantum mechanics emerges from timeless nonlocality and biased local randomness. This rules out a world described by NL (nonlocal) boxes. A new type of experiments is suggested.​

and continues a bit later:

The violation of Leggett inequalities was first interpreted as an experimental falsification of "nonlocal realism", where "realism" refers to the view that the single particles carry well defined properties when they leave the source. Such an interpretation is misleading: By testing models fulfilling Leggett inequalities one does not test "nonlocal realism", but rather models assuming both nonlocal randomness and outcomes that depend on biased random local variables. Nevertheless, it is the Colbeck-Renner theorem which clearly shows the relationship between nonlocality and biased local randomness in entanglement experiments.​

Any relationship(s) here?
 
  • #169
nikman said:
Gisin (kind of like our own Peter Morgan?) seems to be zeroing in on the measurement problem as the nexus of all our confusions? The inputting must be considered real, he says, and both Alice and Bob must be assumed to have freedom of choice -- but the reality of the physical measurement itself isn't obvious and this fact surely points to something deeply important although without suggesting specific questions to ask.

In a paper from a year ago Gisin's occasional colleague Suarez, sounding a bit quantum mystical, advances this:

It is argued that the quantum correlations are not maximally nonlocal to make it possible to control local outcomes from outside spacetime, and quantum mechanics emerges from timeless nonlocality and biased local randomness. This rules out a world described by NL (nonlocal) boxes. A new type of experiments is suggested.​

and continues a bit later:

The violation of Leggett inequalities was first interpreted as an experimental falsification of "nonlocal realism", where "realism" refers to the view that the single particles carry well defined properties when they leave the source. Such an interpretation is misleading: By testing models fulfilling Leggett inequalities one does not test "nonlocal realism", but rather models assuming both nonlocal randomness and outcomes that depend on biased random local variables. Nevertheless, it is the Colbeck-Renner theorem which clearly shows the relationship between nonlocality and biased local randomness in entanglement experiments.​

Any relationship(s) here?

Not sure, I will need to look at the theorem you mention. I think it is interesting that NO model really seems to come close. You can start one place and rule out some things. Or start somewhere else and rule out what seems to be everything else. Maybe we should be considering non-local non-realistic solutions.
 
  • #170
consulting the "atlas"

DrChinese said:
I think the Bell paper is a road map to a disproof of local realism.
Okay. I will look it up in the atlas.

In my atlas, the Bell map shows three roads converging into one main road called "local determinism" at which point there is a signpost reading "equation (1)". The three convergent roads are called:

"locality", "perfect anti-correlation for equal settings", and "counterfactuality".

The names of latter two roads are respectively abbreviated as "PC" and "CF". Now, in propositional terms, the convergence of these three roads into one means:

Proposition 1: locality Λ PC Λ CF → local determinism .

I invite anyone who wishes to verify the accuracy and validity of this proposition to do so. Here are Bell's own words:
Consider a pair of spin one-half particles formed somehow in the singlet spin state and moving freely in opposite directions. Measurements can be made, say by Stern-Gerlach magnets, on selected components of the spins σ1 and σ2. If measurement of the component σ1·a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2·a must yield the value -1 and vice versa. Now we make the hypothesis [2], and it seems one at least worth considering, that if the two measurements are made at places remote from one another the orientation of one magnet does not influence the result obtained with the other. Since we can predict in advance the result of measuring any chosen component of σ2, by previously measuring the same component of σ1, it follows that the result of any such measurement must actually be predetermined.
-------------------------
[2] "But on one supposition we should, in my opinion, absolutely hold fast: the real factual situation of the system S2 is independent of what is done with the system S1, which is spatially separated from the former." A. EINSTEIN in Albert Einstein, Philosopher Scientist, (Edited by P. A. SCHILPP) p. 85, Library of Living Philosophers, Evanston, Illinois (1949).
Next:
DrChinese said:
One of the paths is to demonstrate that PC (perfect correlations) implies predetermination in a classical (local realistic) world.
Dr. Chinese, by this statement do you mean?

"Proposition 1" above is both accurate and valid.

Next, returning once again to the Bell map, I see that the road called "local determinism" eventually merges with another road called "QM". They merge into an unnamed dirt path. This path leads directly into the mouth of an abyss. In propositional terms, this means:

Proposition 2: local determinism Λ QM → CONTRADICTION .
DrChinese said:
Bell makes several concurrent arguments, so I try not to specifically say X -> Y about too many things.
Yes, and it is for this reason that you make a conceptual error in your reply to yoda jedi:
DrChinese said:
There are ... those ... who make the argument that realism is not assumed in Bell. Of course it is, but it is not marked as "Here is where the realism argument starts." So if you want to see it, look after Bell's (14). "It follows that c is another unit vector..." That is where the counterfactual argument begins, obviously with 2 photons there can only be 2 measurements (a and b).
No. That is not where the counterfactual argument begins. Counterfactuality has already been invoked in the first paragraph of the section II, "Formulation", of Bell's paper, the relevant part of which I have quoted above, whose argument therein I have summarized as "Proposition 1". But now we are in the section IV, "Contradiction", in the process of developing the validity of "Proposition 2". The premise of "local determinism" is now considered to be given, and therefore, at this stage, there is no difficulty with a counterfactual claim such as "It follows that c is another unit vector...". At this stage, there can be any number of simultaneous specifications of measurement outcomes.

That is to say, Dr. Chinese, any quarrel you may have with CF as it is used in Bell's original paper does not lie with an ad hoc simultaneous assignment of values to noncommuting observables, as you have been thinking it does; but, rather, any such quarrel you may have lies with the truth of CF as it is used as a premise in "Proposition 1" above.
 
Last edited:
  • #171
DrChinese said:
http://arxiv.org/abs/1002.1390

"... Hence, any covariant nonlocal model is equivalent to a Bell-local model and, consequently, contradicts well tested quantum predictions, the violation of Bell's inequality. ..."
Given the definition of the word "covariant" used in that paper, the conclusions of that paper are correct. However, his definition of the word "covariant" is, mildly speaking, quite unusual.
 
  • #172
DrChinese said:
Yes, I have seen some of these arguments and papers previously. There are, as I have mentioned, those such as Norsen who make the argument that realism is not assumed in Bell. Of course it is, but it is not marked as "Here is where the realism argument starts."

So if you want to see it,

look after Bell's (14). "It follows that c is another unit vector..." That is where the counterfactual argument begins, obviously with 2 photons there can only be 2 measurements (a and b).

So you can think whatever you want.

On the other hand, there are plenty of other experiments - such as GHZ - in which the realistic position is demolished independently of Bell.

So you might want to consider that as well.


the fact is we don't know precisely how to define it.


pathetic ludicrosity, then, how can it be demolished?

if it is not even defined yet....

...laughs...


DrChinese said:
in which the realistic position is demolished

wooowww DEMOLISHED !
 
  • #173
akhmeteli said:
... your statement "The pairing process entails statistical dependence between A and B" is not obvious, so it needs proof ...
It's not a matter of proof, it's just a matter of identifying the symbolic convention, statistical dependence, with the experimental setup. When a detection is registered at one end, then the sample space at the other end is altered. The matching of the separate data streams at A and B isn't done randomly. The matching process itself produces (via local interactions and transmissions) the statistical dependence between A and B -- and this is sufficient to violate Bell inequalities based on the assumption that the data set at A is statistically independent from the data set at B via Bell locality.

akhmeteli said:
One would think that if experiments are performed infrequently, coincidence circuitry effect should not be important.
The separate accumulations of data at A and B have to be matched somehow. The point is that the designs of entanglement experiments contradict Bell locality.

ThomasT said:
Does statistical dependence imply causal dependence?
akhmeteli said:
In general, not necessarily. But if the Bell inequalities are violated, I guess yes.
Statistical dependence between A and B doesn't imply a direct causal link between A and B whether Bell inequalities are violated or not.

Therefore, even though entanglement experimental designs and standard QM are incompatible with Bell locality, we can't conclude that violations of Bell inequalities require nonlocal propagations in Nature.
 
  • #174
In the beginning ...

This thread began with a post in which it was written:
akhmeteli said:
... the proof of the Bell theorem uses two mutually contradictory results/assumptions of quantum theory: unitary evolution and the projection postulate. Therefore, I argued, the Bell theorem is on a shaky ground ... on the theoretical ... level.
Hello, akhmeteli. It appears to me there may be some misconception in the way you are thinking about Bell's theorem.

Bell's theorem, per se, is nothing more than a proposition of the form

P → D ,

where "P" is the conjunction of some set of premises, the 'truth' of which does not in any way require the 'truth' of any of the premises of Quantum Mechanics, and "D" is a certain condition (e.g. a Bell inequality).
________________

Now, it happens that Quantum Mechanics (let us denote its premises by "QM") is such that

QM → ~D .

Therefore, the conjunction "P Λ QM" is inconsistent.
________________

In the weak version of Bell's theorem

P = local determinism .


In the strong version of Bell's Theorem

P = locality Λ PC Λ CF ,

where

PC ≡ perfect anti-correlation for equal settings

and

CF ≡ counterfactuality .


In the strong version, of course, "PC" has been employed as a premise; but this means only that we are considering any theory which admits "PC" as a feature.
________________


... Do you see what I am saying?
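(For completeness, the step from the weak-version premise to the condition D can be written out explicitly in the CHSH form. This is a standard sketch, in LaTeX notation, with deterministic outcome functions A(a,λ), B(b,λ) ∈ {−1,+1} and a hidden-variable distribution ρ(λ):)

```latex
S = \int \mathrm{d}\lambda\,\rho(\lambda)\,\Big[
      A(a,\lambda)B(b,\lambda) - A(a,\lambda)B(b',\lambda)
    + A(a',\lambda)B(b,\lambda) + A(a',\lambda)B(b',\lambda) \Big]
% For each fixed \lambda the bracket equals
%   A(a,\lambda)\,[B(b,\lambda)-B(b',\lambda)]
% + A(a',\lambda)\,[B(b,\lambda)+B(b',\lambda)].
% Since B(b,\lambda), B(b',\lambda) = \pm 1, one square bracket vanishes
% while the other equals \pm 2, so the bracket is \pm 2 and therefore
|S| \le 2 \qquad \text{(this is the condition } D\text{).}
```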
 
  • #175


Eye_in_the_Sky said:
Proposition 1: locality Λ PC Λ CF → local determinism .

I invite anyone who wishes to verify the accuracy and validity of this proposition to do so. Here are Bell's own words:
Can you tell where you yourself see the problem?

As I understand it, the sentence you quoted from Bell ascribes counterfactuality to QM:
"If measurement of the component σ1∙a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2∙a must yield the value -1 and vice versa."
This statement is taken as experimentally valid, but on the other hand, experiments with photons show that you can detect at most 50% of photons with one detector. So it is not conclusively true.
Then, if we accept as an empirical fact that only 50% of photons can be detected, and we do not invoke counterfactuality, QM cannot make a prediction like that about photons.
 
  • #176


Eye_in_the_Sky said:
This thread began with a post in which it was written:Hello, akhmeteli. It appears to me there may be some misconception in the way you are thinking about Bell's theorem.

Bell's theorem, per se, is nothing more than a proposition of the form

P → D ,

where "P" is the conjunction of some set of premises, the 'truth' of which does not in any way require the 'truth' of any of the premises of Quantum Mechanics, and "D" is a certain condition (e.g. a Bell inequality).
________________

Now, it happens that Quantum Mechanics (let us denote its premises by "QM") is such that

QM → ~D .

Therefore, the conjunction "P Λ QM" is inconsistent.
________________

In the weak version of Bell's theorem

P = local determinism .


In the strong version of Bell's Theorem

P = locality Λ PC Λ CF ,

where

PC ≡ perfect anti-correlation for equal settings

and

CF ≡ counterfactuality .


In the strong version, of course, "PC" has been employed as premise; but this means only that we are considering any theory which admits "PC" as a feature.
________________


... Do you see what I am saying?
In any version of Bell's theorem

P = statistical independence

and

P → D

where D is a Bell inequality .

We observe that

QM → ~D

and

Experiment → ~D .

Therefore, the conjunctions "P Λ QM" and "P Λ Experiment" are inconsistent.

This is all that can be said vis-à-vis QM's incompatibility with Bell-local formulations, and experimental violations of Bell inequalities.
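(As a quick check of the step "QM → ~D": using the quantum prediction for the entangled-photon polarization correlation, E(a,b) = −cos 2(a − b), the CHSH combination at the usual textbook settings exceeds the bound of 2. A minimal sketch:)

```python
import math

def E(a_deg, b_deg):
    """QM prediction for the entangled-photon polarization correlation."""
    return -math.cos(2 * math.radians(a_deg - b_deg))

# Standard CHSH settings (degrees): a, a' on one side, b, b' on the other.
a, ap, b, bp = 0.0, 45.0, 22.5, 67.5
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print(S)   # 2*sqrt(2) ≈ 2.828, which violates the Bell-local bound |S| <= 2
```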
 
  • #177


zonde said:
Can you tell where you yourself see the problem?
Do you mean:

Which premise in "locality Λ PC Λ CF" do I see as false?
zonde said:
As I understand it, the sentence you quoted from Bell ascribes counterfactuality to QM:
"If measurement of the component σ1∙a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2∙a must yield the value -1 and vice versa."
This statement is taken as experimentally valid, but on the other hand, experiments with photons show that you can detect at most 50% of photons with one detector. So it is not conclusively true.
Then, if we accept as an empirical fact that only 50% of photons can be detected, and we do not invoke counterfactuality, QM cannot make a prediction like that about photons.
Zonde, I am sorry, but I cannot figure out what you mean here.
 
  • #178


ThomasT said:
In any version of Bell's theorem

P = statistical independence

and

P → D

where D is a Bell inequality .
Of course, "statistical independence" alone is not enough to derive a Bell inequality. There must be other assumptions.

So maybe you mean this:

Regarding the proposition

BL Λ PC Λ CF → D ,

where

BL ≡ Bell Locality (mathematical formulation in terms of probabilities) ,

it will be found, upon scrutiny, that BL is not in fact an expression of local causality. Rather, it is merely an expression of statistical independence.

Is that what you mean?
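(For reference, the mathematical formulation of BL alluded to above is usually written as the factorization of the joint outcome probability once the hidden variable and the local settings are given. In LaTeX notation:)

```latex
% Bell locality: given the hidden variable \lambda and the local
% settings a and b, the joint outcome probability factorizes.
P(A, B \mid a, b, \lambda) \;=\; P(A \mid a, \lambda)\, P(B \mid b, \lambda)
```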
 
  • #179


Eye_in_the_Sky said:
Do you mean:
Which premise in "locality Λ PC Λ CF" do I see as false?
Let's say: do you see Proposition 1 (locality Λ PC Λ CF → local determinism) as not valid? Or is it valid but wrongly applied to the physical situation? ... or neither.

Eye_in_the_Sky said:
Zonde, I am sorry, but I cannot figure out what you mean here.
Do you see any problems in this statement?
"If measurement of the component σ1∙a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2∙a must yield the value -1 and vice versa."
 
  • #180


zonde said:
Can you tell where you yourself see the problem?

As I understand it, the sentence you quoted from Bell ascribes counterfactuality to QM:
"If measurement of the component σ1∙a, where a is some unit vector, yields the value +1 then, according to quantum mechanics, measurement of σ2∙a must yield the value -1 and vice versa."
This statement is taken as experimentally valid, but on the other hand, experiments with photons show that you can detect at most 50% of photons with one detector. So it is not conclusively true.
Then, if we accept as an empirical fact that only 50% of photons can be detected, and we do not invoke counterfactuality, QM cannot make a prediction like that about photons.

Hmmm ... while it is true that only 50% of entangled photons can be detected on a single detector in a polarization experiment, that is not the same as saying only 50% of photons can be detected. Couldn't one just use a polarizing beamsplitter with detectors for both the transmitted and reflected photons? Then the experiment would pick up 100% of the photons, and the measurements from Alice's two detectors could be compared with Bob's two detectors to reveal the perfect correlation between the two. Wouldn't this close the loophole you are talking about above?
 
  • #181


SpectraCat said:
Hmmm ... while it is true that only 50% of entangled photons can be detected on a single detector in a polarization experiment, that is not the same as saying only 50% of photons can be detected. Couldn't one just use a polarizing beamsplitter with detectors for both the transmitted and reflected photons? Then the experiment would pick up 100% of the photons, and the measurements from Alice's two detectors could be compared with Bob's two detectors to reveal the perfect correlation between the two. Wouldn't this close the loophole you are talking about above?
I didn't mean that.
The question is whether increasing the detection efficiency diminishes the result at the perfect-correlation settings, because perfect correlations at theta=0 and pi/2 are a requirement for Bell inequalities.
I looked up the detection efficiencies of commercially available SPADs, and it seems that 50% is not the limit; however, the question remains whether perfect correlations can be achieved at such levels of detection efficiency.
 
  • #182


zonde said:
I didn't mean that.
The question is whether increasing the detection efficiency diminishes the result at the perfect-correlation settings, because perfect correlations at theta=0 and pi/2 are a requirement for Bell inequalities.
I looked up the detection efficiencies of commercially available SPADs, and it seems that 50% is not the limit; however, the question remains whether perfect correlations can be achieved at such levels of detection efficiency.

Ok .. I see your point now. However, I don't think this is a real issue, because all it does is change the discussion from the realm of complete certainty (i.e. Bell inequalities are always violated) to the realm of probability (i.e. the result of a given set of measurements with some mean and standard deviation is outside the permissible range of the Bell inequality by 30 standard deviations).

My view on this is that it is easy to see that the second case approaches the first as the detector efficiency is improved, and so the gedanken condition of "perfect detector efficiency" is a reasonable simplification to make. Thus the point you raised is really a non-issue in my view. Essentially, it puts you in the position of the sophists who say "well, the detectors aren't perfect, so you can't be sure". Granted .. but 30 standard deviations is close enough for me. :wink:
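For the record, here is how such a "number of standard deviations" figure is typically computed from counting statistics. The counts and correlation values below are made-up placeholders, not data from any particular experiment:

```python
import math

# Hypothetical illustration: a CHSH value S and its statistical error,
# assuming N coincidence counts per setting pair and correlations near
# the quantum-mechanical values. All numbers here are assumptions.
N = 10_000
Es = [-0.707, 0.707, -0.707, -0.707]            # "measured" E(a,b), E(a,b'), ...
S = abs(Es[0] - Es[1] + Es[2] + Es[3])

# Counting-statistics error of each correlation: sigma_E = sqrt((1 - E^2)/N);
# the four errors add in quadrature for S.
sigma_S = math.sqrt(sum((1 - E ** 2) / N for E in Es))
n_sd = (S - 2) / sigma_S
print(round(S, 3), round(n_sd, 1))   # S ≈ 2.828, ≈ 58 standard deviations above 2
```

With only 10,000 coincidences per setting, the illustrative violation is already dozens of standard deviations above the local bound, which is why better detectors do not change the statistical picture much.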
 
  • #183


SpectraCat said:
... Essentially, it puts you in the position of the sophists who say "well, the detectors aren't perfect, so you can't be sure". Granted .. but 30 standard deviations is close enough for me. :wink:

30 and rising... some experiments are at 200+ SD.
 
  • #184


SpectraCat said:
Ok .. I see your point now. However, I don't think this is a real issue, because all it does is change the discussion from the realm of complete certainty (i.e. Bell inequalities are always violated) to the realm of probability (i.e. the result of a given set of measurements with some mean and standard deviation is outside the permissible range of the Bell inequality by 30 standard deviations).
Not sure we are talking about the same thing.
You have to actively tune the experimental setup to reach as low a detection rate as possible at the perfect anti-correlation settings. How can you talk about standard deviation in this case? Of course, no one is calculating a standard deviation for the minimum-correlation settings, because this is a requirement, not a result.
 
  • #185
zonde said:
Not sure we are talking about the same thing.
You have to actively tune the experimental setup to reach as low a detection rate as possible at the perfect anti-correlation settings. How can you talk about standard deviation in this case? Of course, no one is calculating a standard deviation for the minimum-correlation settings, because this is a requirement, not a result.

My point is that perfect anti-correlation is not required for demonstration of Bell inequality violation. First of all, there will *never* be "perfect" detectors. Second, measurements with imperfect detectors show Bell inequality violations by over 30 standard deviations. Why would better detectors make any difference at this point?
 
  • #186
SpectraCat said:
My point is that perfect anti-correlation is not required for demonstration of Bell inequality violation. First of all, there will *never* be "perfect" detectors. Second, measurements with imperfect detectors show Bell inequality violations by over 30 standard deviations. Why would better detectors make any difference at this point?

As far as I know, no violations of the Bell inequalities have been demonstrated - there was some "loophole" in each of the experiments claiming such violations. I suspect those "violations by over 30 standard deviations" were obtained using the fair sampling assumption, and if you use this assumption, you can get as many standard deviations as you want. The problem is it is not clear why anyone has to accept the fair sampling assumption.
 
  • #187
SpectraCat said:
My point is that perfect anti-correlation is not required for demonstration of Bell inequality violation. First of all, there will *never* be "perfect" detectors. Second, measurements with imperfect detectors show Bell inequality violations by over 30 standard deviations. Why would better detectors make any difference at this point?
The perfect anti-correlation settings are theta=0 deg for Type II PDC. By that I do not mean "perfect detection", or rather noiseless non-detection.
The point is that you have to assume that the coincidence count at the minimum can be extrapolated linearly to a reasonably low value at 100% efficiency in order to violate Bell inequalities.
If you can't do that, then you don't have a violation of Bell inequalities.

It has nothing to do with "Bell inequality violations by over 30 standard deviations".
Please look up precision bias on Wikipedia: http://en.wikipedia.org/wiki/Precision_bias .
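To make the extrapolation worry concrete, here is a Monte Carlo sketch of the detection (fair-sampling) loophole. The model and the threshold value are illustrative assumptions only: a purely local hidden-variable model whose full ensemble obeys the CHSH bound, but whose postselected coincidence sample appears to violate it when detection depends on the hidden polarization:

```python
import math
import random

random.seed(1)

def chsh(c, n=100_000):
    """Local hidden-variable model with setting-dependent detection.

    lam is a hidden polarization, uniform on [0, pi). Each station outputs
    sign(cos 2(angle - lam)) but registers a count only if
    |cos 2(angle - lam)| >= c; c = 0 means every pair is detected.
    Returns the CHSH value computed on coincidences only."""
    a, ap, b, bp = (math.radians(x) for x in (0.0, 45.0, 22.5, 67.5))

    def E(x, y):
        tot = acc = 0
        for _ in range(n):
            lam = random.uniform(0.0, math.pi)
            cx, cy = math.cos(2 * (x - lam)), math.cos(2 * (y - lam))
            if abs(cx) >= c and abs(cy) >= c:   # both detectors fire
                A = 1 if cx > 0 else -1
                B = -1 if cy > 0 else 1         # anti-correlated partner
                tot += 1
                acc += A * B
        return acc / tot

    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

print(chsh(0.0))   # full sample: S ≈ 2, no violation (local model)
print(chsh(0.7))   # postselected sample: S ≈ 4, an apparent "violation"
```

Nothing nonlocal happens in this toy model; the apparent violation is produced entirely by which pairs survive the coincidence postselection. This is why the fair-sampling assumption (or a detection efficiency above the known thresholds) has to hold before a raw violation counts as evidence.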
 
  • #188
ThomasT said:
It's not a matter of proof, it's just a matter of identifying the symbolic convention, statistical dependence, with the experimental setup. When a detection is registered at one end, then the sample space at the other end is altered. The matching of the separate data streams at A and B isn't done randomly. The matching process itself produces (via local interactions and transmissions) the statistical dependence between A and B -- and this is sufficient to violate Bell inequalities based on the assumption that the data set at A is statistically independent from the data set at B via Bell locality.

The separate accumulations of data at A and B have to be matched somehow. The point is that the designs of entanglement experiments contradict Bell locality.


Statistical dependence between A and B doesn't imply a direct causal link between A and B whether Bell inequalities are violated or not.

Therefore, even though entanglement experimental designs and standard QM are incompatible with Bell locality, we can't conclude that violations of Bell inequalities require nonlocal propagations in Nature.

Sorry, has been busy again.

Look, ThomasT, you offer some statements that may be correct or wrong, but you do not offer any proof (or reference to such proof) and even state that you don't need any proof (if I understood you correctly). Maybe you don't, but I do. I don't see solid reasoning behind your statements, so I cannot agree or disagree with them, as neither those statements nor their negations seem obvious. With all due respect, I cannot believe you on your word - you are not a priest (or maybe you are? :-)), and I am not religious. Until you give some reasoning, I just have no comments, sorry. For example, can you offer a local theory violating the Bell inequalities? Or, if you think this is a tall order, can you at least explain if your phrase "the designs of entanglement experiments contradict Bell locality" means the same as "there are loopholes in those experiments"?
 
  • #189
akhmeteli said:
Sorry, has been busy again.

Look, ThomasT, you offer some statements that may be correct or wrong, but you do not offer any proof (or reference to such proof) and even state that you don't need any proof (if I understood you correctly). Maybe you don't, but I do.
It seems that ThomasT is talking basically about the same thing - fair sampling.
Look, if the sample space at one end exactly matches the sample space at the other end (say we have a PBS and we detect the photon always in one channel or the other), the sample spaces stay the same after matching at the coincidence counter (except, of course, that you have additional information about how the different channels are matched). But if you don't have perfect efficiency, then you reduce each sample space at the coincidence counter.

Does not seem that this needs a proof.

However this part could be expanded as it is not obvious:
"this is sufficient to violate Bell inequalities based on the assumption that the data set at A is statistically independent from the data set at B via Bell locality."
 
  • #190
akhmeteli said:
The problem is it is not clear why anyone has to accept the fair sampling assumption.

Why should you accept any scientific evidence? And why do you suspect that the full universe would not match the results of a subsample? And why does increasing the sample percentage not lead to a different answer? And why do other tests - not requiring the fair sampling assumption - give the same results?

You keep saying the same thing without providing scientific basis. You don't have to accept the results, but you shouldn't state the "loophole" as being proof of anything. It isn't.
 
  • #191
zonde said:
Perfect anti-correlation settings are theta=0deg for Type II PDC. By that I do not mean "perfect detection", or rather noiseless non-detection.
The point is that you have to assume that the coincidence count at the minimum can be extrapolated linearly to a reasonably low value at 100% efficiency in order to violate the Bell inequalities.
If you can't do that, then you don't have a violation of the Bell inequalities.

Ok ... so you are saying that the "false coincidence" rate must be below some critical value in order to satisfy the Bell inequality, right? But false coincidences are included in the analysis, so that is accounted for in the experiment. If the false coincidence rate were too high, then the results would appear more "random" and so would no longer show a Bell inequality violation. That is, it is fundamentally IMPOSSIBLE to show a Bell inequality violation with an apparatus that has too high a rate of false coincidences. So your statement is correct, but again I don't see how this is an issue ... the fact that the experimental results do show a violation indicates that false coincidences are not an issue, right?
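The "too high a rate of false coincidences" point can be made quantitative with the standard visibility argument: if noise degrades the correlation to E(a,b) = -V cos 2(a-b), then S scales as 2√2·V, so a violation requires V > 1/√2 ≈ 0.71. A minimal sketch:

```python
import math

def chsh_S(V):
    # Correlation with visibility V: E(a, b) = -V * cos(2 * (a - b)),
    # the singlet-like prediction degraded by random (false) coincidences.
    E = lambda a, b: -V * math.cos(2 * (a - b))
    a1, a2 = 0.0, math.pi / 4
    b1, b2 = math.pi / 8, 3 * math.pi / 8
    return abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(chsh_S(1.0))   # 2*sqrt(2) ≈ 2.828: ideal violation
print(chsh_S(0.70))  # ≈ 1.980: below 2, no violation any more
```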
 
  • #192
DrChinese said:
Why should you accept any scientific evidence? And why do you suspect that the full universe would not match the results of a subsample? And why does increasing the sample percentage not lead to a different answer? And why do other tests - not requiring the fair sampling assumption - give the same results?

I have no reason to accept the fair sampling assumption. You yourself mentioned a situation where it is not true (planets of the Solar system). Of course, my opinion means nothing, but Shimony, Zeilinger and many others believe "loopholes", such as the fair sampling assumption, are essential, so I just side with the mainstream view. As for "other tests giving the same results" - I never heard back from you on my example with Euclidean geometry.

DrChinese said:
You keep saying the same thing without providing scientific basis. You don't have to accept the results, but you shouldn't state the "loophole" as being proof of anything. It isn't.

I did provide scientific basis - the opinions of Shimony, Zeilinger, Genovese. You disagree with them, but why is this my problem? You don't seem to claim they are not experts. Again, don't kill the messenger. I keep saying local realism has not been ruled out. You keep saying it has been. Maybe you're right, maybe I am, but so far I don't see any reasons to accept your point of view. Looks like you don't see any reasons to agree with my point of view. So we disagree. So what?
 
  • #193
akhmeteli said:
I did provide scientific basis - the opinions of Shimony, Zeilinger, Genovese. ... You don't seem to claim they are not experts. Again, don't kill the messenger. I keep saying local realism has not been ruled out. You keep saying it has been. Maybe you're right, maybe I am, but so far I don't see any reasons to accept your point of view. Looks like you don't see any reasons to agree with my point of view. So we disagree. So what?

I'd like to see the experiment where Zeilinger concludes local realism is plausible, because there isn't one. There are plenty (of his) proving LR is not. The difference in our opinions is that one is the mainstream and one is not. Local realism is not a mainstream view. The idea that "loopholes" support LR (or otherwise imply it is feasible) is not either. Don't advertise a false viewpoint. Yours is an extreme minority view in the scientific community.

As mentioned in another thread, for example, it is possible to entangle photons that have never even existed in each other's spacetime light cone. Oh, and that was courtesy Zeilinger. So I don't get where you think LR is considered a viable alternative. You are ignoring a substantial body of work that does not require the fair sampling assumption, such as this.
 
  • #194
DrChinese said:
I'd like to see the experiment where Zeilinger concludes local realism is plausible, because there isn't one.

I gave you his quote confirming local realism has not been ruled out.
If you don't like the quote, it's not my problem.

DrChinese said:
There are plenty (of his) proving LR is not.

Give me one where he says local realism has been ruled out.


DrChinese said:
The difference in our opinions is that one is the mainstream and one is not.

I fully agree. And mine is mainstream, yours is not. I confirmed mine by quotes. Again, if you don't like the quotes, it's not my problem.

DrChinese said:
Local realism is not a mainstream view.

No, it isn't.


DrChinese said:
The idea that "loopholes" support LR (or otherwise imply it is feasible) is not either. Don't advertise a false viewpoint. Yours is an extreme minority view in the scientific community.

They do imply it is feasible, and this is mainstream. If you believe I advertise a false viewpoint, why don't you kick Shimony's behind, Zeilinger's behind, Genovese's behind? I am of no importance whatsoever. Nobody cares what I advertise. The problem is what I advertise is mainstream, sorry. And if you state that local realism has been ruled out, you're just trying to impose your personal opinion on the others.


DrChinese said:
As mentioned in another thread, for example, it is possible to entangle photons that have never even existed in each other's spacetime light cone. Oh, and that was courtesy Zeilinger. So I don't get where you think LR is considered a viable alternative. You are ignoring a substantial body of work that does not require the fair sampling assumption, such as this.

Does anybody (but you) claim that these experiments demonstrate loophole-free violations of the Bell inequalities?
 
  • #195
akhmeteli said:
I gave you his quote confirming local realism has not been ruled out.
If you don't like the quote, it's not my problem.

Give me one where he says local realism has been ruled out.

I fully agree. And mine is mainstream, yours is not. I confirmed mine by quotes. Again, if you don't like the quotes, it's not my problem.

No, it isn't.

They do imply it is feasible, and this is mainstream. If you believe I advertise a false viewpoint, why don't you kick Shimony's behind, Zeilinger's behind, Genovese's behind? I am of no importance whatsoever. Nobody cares what I advertise. The problem is what I advertise is mainstream, sorry. And if you state that local realism has been ruled out, you're just trying to impose your personal opinion on the others.

Does anybody (but you) claim that these experiments demonstrate loophole-free violations of the Bell inequalities?

This isn't a rhetorical pissing match... the experimental evidence and the lack of refutation are more important than your quotes. This is why results are PUBLISHED, and we don't just listen to the researcher's interpretation. As we see, results can be open to multiple interpretations. In short, answer his questions and cut the grade-school debate club horse-****.
 
  • #196
Frame Dragger said:
This isn't a rhetorical pissing match... the experimental evidence and the lack of refutation are more important than your quotes. This is why results are PUBLISHED, and we don't just listen to the researcher's interpretation. As we see, results can be open to multiple interpretations. In short, answer his questions and cut the grade-school debate club horse-****.

Sorry, I just cannot understand a word. Whose "his"? What questions? Do you mean DrChinese's and the following:
"Why should you accept any scientific evidence? And why do you suspect that the full universe would not match the results of a subsample? And why does increasing the sample percentage not lead to a different answer? And why do other tests - not requiring the fair sampling assumption - give the same results?" ?

Then, as I said, I don't see any reason to accept the fair sampling assumption. If DrChinese (or you) wants to prove it, good luck, but I won't hold my breath, because he'll have to produce something Shimony and Zeilinger are not aware of. Furthermore, DrChinese himself gave an example where fair sampling does not work. If DrChinese (or you) believes that the assumption does not need any proof as it is obvious, I reject that. You would not understand if I said that local realism does not need any proof as it is obvious, so don't even try to sell me the fair sampling assumption without proof. And again, it does not really matter if you sell it to me or not, as I am of no consequence. Experts agree that the detection loophole is essential. If DrChinese (or you) disagrees, this is his (or your) personal opinion, nothing more.

Please try to understand this: I don't need to prove that fair sampling is wrong. If you like fair sampling, the burden of proof is all yours. Let me rephrase this. I could admit (cutting some corners, such as "free will") that experiments demonstrate that at least one of the following three is wrong: 1)locality; 2) realism; 3) fair sampling . For DrChinese, fair sampling is a "holy cow", for somebody else local realism is a "holy cow". What I am trying to say, there is not enough data so far to make a definite choice.

As for the last question: "why do other tests - not requiring the fair sampling assumption - give the same results?", as I said, I asked DrChinese to explain how my "proof" in post 34 in this thread (that the sum of angles of a planar triangle does not equal 180 degrees) is any worse than "closing loopholes separately". Not a word from him.

So what experimental evidence exactly? There has been no experimental demonstration of a violation of the genuine Bell inequalities - 45 years after Bell. And there has been all the refutation you want - if Shimony and Zeilinger admit that, strictly speaking, local realism has not been ruled out, I can assure you, this is not because they like local realism - nobody accused them of such love. So somebody raised the issue of the "detection loophole", somebody raised the issue of the "locality loophole". I am no expert in the Bell inequalities, and I don't even know who raised these issues first, but it seems they did a pretty good job, if all the leading experts agree on what was actually demonstrated experimentally, and what was not.
 
  • #197
akhmeteli said:
Sorry, I just cannot understand a word. Whose "his"? What questions? Do you mean "DrChinese'"...

Let me get this straight... you have a firm grasp of QM, but contextual language eludes you? Yes, I mean Dr. Chinese, as you knew from the first. Care to answer those questions now that you've thrown your tantrum?
 
  • #198
Frame Dragger said:
Let me get this straight... you have a firm grasp of QM, but contextual language eludes you? Yes, I mean Dr. Chinese, as you knew from the first. Care to answer those questions now that you've thrown your tantrum?

Why should I guess? I hope you don't feel it is beneath you to be clearer. And I think I answered the questions, but I am not going to ask whether contextual language eludes you; I'll just try to repeat or rephrase my answers.

DrChinese said:
I'd like to see the experiment where Zeilinger concludes local realism is plausible, because there isn't one. There are plenty (of his) proving LR is not.

I did not say Zeilinger says local realism is plausible. He says it has not been ruled out, and I gave the quote. If DrChinese (or you) believes he changed his mind since then, why doesn't he give me a direct quote?

DrChinese said:
So I don't get where you think LR is considered a viable alternative.

Same answer. Zeilinger said LR has not been ruled out. I gave the quote confirming that. If later he said LR has been ruled out, give me the quote.

DrChinese said:
Why should you accept any scientific evidence?
As you (I mean Frame Dragger) said, this is not a rhetorical pissing match. I accept scientific evidence when I feel satisfied with it. Of course, there are a lot of areas where I just believe experts on their word, at least for the time being, as I cannot sort out everything myself. In this case, however, I don't see enough evidence to rule out local realism. Its elimination is a very radical idea, so the proof should be really good. However, both theoretical and experimental evidence against local realism is dubious at best.
DrChinese said:
And why do you suspect that the full universe would not match the results of a subsample?
For one, because the universe is not uniform in space or in time. And the application of fair sampling relevant to Bell is not about the universe. The question at hand is whether the set of detected photons has the same statistics as the set of undetected ones. Hidden variable theories suggest that there is a reason why one photon is detected and another is not. If you impose fair sampling, you reject such a possibility. Let me give you an example. Suppose you throw a lot of knives at a tree. Sometimes a knife gets stuck in the tree, sometimes it bounces off. The knives can have the same velocity and rotate in flight with the same angular velocity, but the results can vary depending on the phase (the knife can hit the tree point first or handle first). So if we try to build the statistics for the phase, the statistics will be different for knives stuck in the tree and for all knives. So, as Santos emphasized, fair sampling immediately eliminates a great many local realistic theories, so it would indeed be absurd to blindly accept fair sampling if you're trying to decide whether local realistic theories are possible.
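The knife-throwing picture is straightforward to simulate. In the sketch below, the "sticks in the tree when cos φ > 0.5" detection rule is an arbitrary illustrative choice; the point is only that when detection depends on the hidden variable, the statistics of the detected subset differ sharply from those of the full ensemble:

```python
import math
import random

random.seed(1)

# Hidden variable: the rotation phase of each "knife" on arrival.
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(100_000)]

# Hypothetical detection rule (illustrative only): a knife sticks in the
# tree - i.e. is "detected" - only when it arrives close to point-first.
detected = [p for p in phases if math.cos(p) > 0.5]

mean_all = sum(math.cos(p) for p in phases) / len(phases)
mean_detected = sum(math.cos(p) for p in detected) / len(detected)

print(round(mean_all, 3))       # ≈ 0.0: the full ensemble is unbiased
print(round(mean_detected, 3))  # ≈ 0.83: the detected subsample is biased
```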
DrChinese said:
And why does increasing the sample percentage not lead to a different answer?
I don't know. In general, I don't know why the laws of physics are the laws we study at school, and not some other laws. So what? But if you imply that the same results will hold for 100% efficiency, I don't buy it without proof. Indeed, you may try to break a steel bar by pulling it apart with a force of 1 N. No luck? Try one ton. Still no luck? Then let us conclude that the result remains the same as we increase the load. Of course, you'll just roll your eyes, as you know that no material is infinitely strong. How is the case of the Bell inequalities any different? As long as you use some ersatz inequalities (relying on fair sampling), you can violate them with one hand tied behind your back. However, humanity as a whole has not been able to violate the genuine inequalities for 45 years. You want to eliminate local realism? Break the true inequalities. Anything else is not enough. A theorem is a theorem. You cannot ensure its conclusion unless its assumptions are fulfilled.
DrChinese said:
And why do other tests - not requiring the fair sampling assumption - give the same results?
Because some of the assumptions of the theorem are not fulfilled. Again, a theorem is a theorem. If the assumptions are not fulfilled, it is easy to avoid the conclusion. Same story as with my example of planar geometry.
 
  • #199


Eye_in_the_Sky said:
This thread began with a post in which it was written: [...]

Hello, akhmeteli. It appears to me there may be some misconception in the way you are thinking about Bell's theorem.

Bell's theorem, per se, is nothing more than a proposition of the form

P → D ,

where "P" is the conjunction of some set of premises, the 'truth' of which does not in any way require the 'truth' of any of the premises of Quantum Mechanics, and "D" is a certain condition (e.g. a Bell inequality).
________________
Sorry for the delay.

I am afraid I disagree that "Bell's theorem, per se, is nothing more than a proposition of the form

P → D"

Usually the statement "QM → ~D" is also included in Bell's theorem.
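For what it's worth, the purely propositional core of the argument being discussed can be written out explicitly. A minimal sketch in Lean 4, with P, QM and D as abstract placeholder propositions:

```lean
-- If P → D and QM → ¬D, then P and QM cannot both hold.
theorem bell_core (P QM D : Prop)
    (h₁ : P → D) (h₂ : QM → ¬D) : ¬(P ∧ QM) :=
  fun ⟨hP, hQM⟩ => h₂ hQM (h₁ hP)
```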

Eye_in_the_Sky said:
Now, it happens that Quantum Mechanics (let us denote its premises by "QM") is such that

QM → ~D .

Therefore, the conjunction "P Λ QM" is inconsistent.
________________

In the weak version of Bell's theorem

P = local determinism .


In the strong version of Bell's Theorem

P = locality Λ PC Λ CF ,

where

PC ≡ perfect anti-correlation for equal settings

and

CF ≡ counterfactuality .


In the strong version, of course, "PC" has been employed as premise; but this means only that we are considering any theory which admits "PC" as a feature.
________________


... Do you see what I am saying?

Not really, sorry. It seems to me I understand what you wrote, but I don't quite see from your post where my misconception is. Could you please explain?
 
  • #200
akhmeteli said:
Why should I guess? I hope you don't feel it is beneath you to be clearer. And I think I answered the questions, but I am not asking if contextual language eludes you, I'll try to repeat or rephrase my answers.
I did not say Zeilinger says local realism is plausible. He says it has not been ruled out, and I gave the quote. If DrChinese (or you) believes he changed his mind since then, why doesn't he give me a direct quote?
Same answer. Zeilinger said LR has not been ruled out. I gave the quote confirming that. If later he said LR has been ruled out, give me the quote.

As you (I mean Frame Dragger) said, this is not a rhetorical pissing match. I accept scientific evidence when I feel satisfied with it. Of course, there are a lot of areas where I just believe experts on their word, at least for the time being, as I cannot sort out everything myself. In this case, however, I don't see enough evidence to rule out local realism. Its elimination is a very radical idea, so the proof should be really good. However, both theoretical and experimental evidence against local realism is dubious at best.

For one, because the universe is not uniform in space or in time. And the application of fair sampling relevant to Bell is not about the universe. The question at hand is whether the set of detected photons has the same statistics as the set of undetected ones. Hidden variable theories suggest that there is a reason why one photon is detected and another is not. If you impose fair sampling, you reject such a possibility. Let me give you an example. Suppose you throw a lot of knives at a tree. Sometimes a knife gets stuck in the tree, sometimes it bounces off. The knives can have the same velocity and rotate in flight with the same angular velocity, but the results can vary depending on the phase (the knife can hit the tree point first or handle first). So if we try to build the statistics for the phase, the statistics will be different for knives stuck in the tree and for all knives. So, as Santos emphasized, fair sampling immediately eliminates a great many local realistic theories, so it would indeed be absurd to blindly accept fair sampling if you're trying to decide whether local realistic theories are possible.

I don't know. In general, I don't know why the laws of physics are the laws we study at school, and not some other laws. So what? But if you imply that the same results will hold for 100% efficiency, I don't buy it without proof. Indeed, you may try to break a steel bar by pulling it apart with a force of 1 N. No luck? Try one ton. Still no luck? Then let us conclude that the result remains the same as we increase the load. Of course, you'll just roll your eyes, as you know that no material is infinitely strong. How is the case of the Bell inequalities any different? As long as you use some ersatz inequalities (relying on fair sampling), you can violate them with one hand tied behind your back. However, humanity as a whole has not been able to violate the genuine inequalities for 45 years. You want to eliminate local realism? Break the true inequalities. Anything else is not enough. A theorem is a theorem. You cannot ensure its conclusion unless its assumptions are fulfilled.

Because some of the assumptions of the theorem are not fulfilled. Again, a theorem is a theorem. If the assumptions are not fulfilled, it is easy to avoid the conclusion. Same story as with my example of planar geometry.

Ok ... so after reading this, I am confused. I realize that you are unconvinced by the experimental demonstrations of Bell inequality violations, because you are not willing to grant the fair sampling assumption. That seems fair to me ... I tend to be more willing to accept it, but perhaps that is because LHV theories have always seemed to me to fail the Occam's razor test.

However, on reading this post and others, it has become unclear to me whether you even accept Bell's theorem to begin with, or at least that you have some issues about how it is interpreted. So, just so I have it straight, do you accept that Bell's theorem proves that any theory that is consistent with QM experiments must violate either locality or counterfactual determinism?

EDIT: Sorry, you can ignore the above ... I should have gone back to the first pages again before posting. It had been a while since I read them, and I had forgotten your points about PP and UE with regards to Bell's theorem. No need to repeat yourself on my account.
 
Last edited: