Joy Christian, Disproof of Bell's Theorem

bcrowell

This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?
 


A(a, λ) seems to depend only on λ, according to (1).
My Joy remains.
 


I'm not familiar enough to judge either, but I just find it interesting that the only references in the paper, besides Bell's original paper, are Christian's own papers...
 


I'm not knowledgeable enough to judge/comment definitively either. But here's my two cents in lieu of some experts chiming in.

Christian apparently became convinced some years ago that there's nothing special about Bell's theorem or quantum entanglement. Since then he's presented a number of nonrealistic counterexamples to Bell's theorem. This is the latest. My guess, not having worked through all of it, is that all his math is probably correct, but that his result will probably be regarded as insignificant in that it's a consequence of assumptions/definitions that seem even more clearly nonrealistic than his previous attempts.
 


I tried to read one of Christian's "disproof" articles once, and it was a bunch of incoherent nonsense. It was impossible to make sense of what he was saying. I decided then to not let him waste any of my time again.

There are several other threads about his articles by the way. In one of them, I got an infraction for posting a link to the article I alluded to above. A bit excessive perhaps, but I do agree that discussions about articles that the authors have been unable to get published don't really belong in this forum.
 


His paper is interesting, but I have heard a convincing explanation for why it is probably not significant:

Even assuming all his mathematics is correct, he still assumes that the measurement outcomes obey some exotic algebra which, when the outcomes are combined, yields the value he designed it to yield in order to replicate quantum experiments. However, measurement outcomes are defined by clicks on detectors, and it is the experimentalist, not the experiment itself, who assigns the values to the outcomes and combines those values to get the correlations. The experimentalist combines the values using NORMAL algebra, and the result still violates Bell's inequality. You need only look at Ekert's cryptographic protocol to see that this is true. In that protocol, two experimentalists randomly perform independent measurements, assign +1 or -1 to the outcomes, then compare them using simple multiplication to violate Bell's inequality; no special algebra is required.

Therefore, in a sense, Joy Christian is missing the point. What is required is an explanation of why, when two experimentalists perform independent measurements, assign real values to them, and thereafter combine them using ONLY normal algebra, the resulting correlations STILL violate Bell's inequality. So even though he found an interesting algebra which is local with hidden variables and is able to reproduce certain quantum results when you combine its elements, it still does not explain why Bell's inequality is violated when experimentalists apply only normal algebra to their measured outcomes. So his 'disproof' is probably incorrect.
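For what it's worth, the point about ordinary arithmetic is easy to check numerically. Here is a minimal Monte Carlo sketch (my own, not from any of the papers under discussion): the outcomes are plain +1/-1 values, assumed to be correlated as E(alpha, beta) = -cos(alpha - beta), the quantum singlet prediction, and the CHSH quantity is formed by nothing more than multiplication and averaging.

```python
import math
import random

def sample_pair(alpha, beta, rng):
    """Sample one pair of +/-1 outcomes with correlation E = -cos(alpha - beta),
    the quantum singlet prediction for analyzer angles alpha, beta (radians)."""
    e = -math.cos(alpha - beta)
    p_same = (1 + e) / 2                 # probability the two outcomes agree
    a = rng.choice((+1, -1))
    b = a if rng.random() < p_same else -a
    return a, b

def correlation(alpha, beta, n, rng):
    # Ordinary algebra only: multiply the recorded outcomes and average.
    return sum(x * y for x, y in (sample_pair(alpha, beta, rng) for _ in range(n))) / n

rng = random.Random(0)
n = 200_000
a1, a2 = 0.0, math.pi / 2                # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4    # Bob's two settings

S = abs(correlation(a1, b1, n, rng) - correlation(a1, b2, n, rng)
        + correlation(a2, b1, n, rng) + correlation(a2, b2, n, rng))
print(S)  # close to 2*sqrt(2) ~ 2.83, above the local-realist bound of 2
```

Any local hidden-variable model caps S at 2, yet these are just ±1 clicks combined with normal multiplication, which is exactly the puzzle an exotic outcome algebra doesn't address.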
 


bcrowell said:
This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?

His "proof" gives values for A and B, along with joint probabilities for A & B. But a realistic proof should give a value of C as well, such that various joint probabilities for A, B and C sum to 1 and none are negative. It is no surprise that you can derive a Bell Inequality violation without the realistic test applied.

Don't hold your breath on this one.
 


bobbytkc said:
So even though he found an interesting algebra which is local with hidden variables.

That goes back to Bohm and Hiley.
 


yoda jedi said:
That goes back to Bohm and Hiley.

You are talking about Bohmian mechanics?

Bohmian mechanics has hidden variables (particles have definite position and momentum, yet still obey Schrödinger's equation); however, the cost is that it is distinctly nonlocal, so it still falls under the umbrella of Bell's theorem, which states that quantum mechanics is nonlocal and/or has no hidden variables. So in that respect, you are mistaken. What Joy Christian purports to have found is something more significant: that a local AND hidden-variable theory is possible and matches the predictions of quantum mechanics. Unfortunately, it is probably not the right way to approach it.
 
  • #10


DrChinese said:
His "proof" gives values for A and B, along with joint probabilities for A & B. But a realistic proof should give a value of C as well, such that various joint probabilities for A, B and C sum to 1 and none are negative. It is no surprise that you can derive a Bell Inequality violation without the realistic test applied.

Don't hold your breath on this one.

What C? I believe Bell's theorem is only bipartite, no?
 
  • #11


bobbytkc said:
His paper is interesting, but I have heard a convincing explanation for why it is probably not significant:

Even assuming all his mathematics is correct, he still assumes that the measurement outcomes obey some exotic algebra which, when the outcomes are combined, yields the value he designed it to yield in order to replicate quantum experiments. However, measurement outcomes are defined by clicks on detectors, and it is the experimentalist, not the experiment itself, who assigns the values to the outcomes and combines those values to get the correlations. The experimentalist combines the values using NORMAL algebra, and the result still violates Bell's inequality. You need only look at Ekert's cryptographic protocol to see that this is true. In that protocol, two experimentalists randomly perform independent measurements, assign +1 or -1 to the outcomes, then compare them using simple multiplication to violate Bell's inequality; no special algebra is required.

Therefore, in a sense, Joy Christian is missing the point. What is required is an explanation of why, when two experimentalists perform independent measurements, assign real values to them, and thereafter combine them using ONLY normal algebra, the resulting correlations STILL violate Bell's inequality. So even though he found an interesting algebra which is local with hidden variables and is able to reproduce certain quantum results when you combine its elements, it still does not explain why Bell's inequality is violated when experimentalists apply only normal algebra to their measured outcomes. So his 'disproof' is probably incorrect.
I agree. Your post is enlightening and points to the reason why Christian's purported LR models of entanglement are not regarded as LR models of entanglement -- which results, ultimately, from a qualitative assessment of how the purported local realism is encoded in the formulation(s).

bobbytkc said:
What Joy Christian purports to have found is something more significant: that a local AND hidden-variable theory is possible and matches the predictions of quantum mechanics.
Yes. Obviously, proposing a LR formulation that doesn't reproduce qm expectation values is a non-starter.

bobbytkc said:
Unfortunately, it is probably not the right way to approach it.
If the aim is to produce an LR theory of entanglement, then it's the only way to approach it. Explicit, clearly LR models a la Bell have been definitively ruled out. Christian's LR offerings fail the quantitative test of this (which is what DrC is talking about). So we know that Christian's models are not Bell LR.

However, in the absence of a logical proof that Bell LR is the only possible LR, then it remains to assess each purported LR model qualitatively. Christian's λ fail the realism test by any qualitative standard that I'm aware of.

If the aim is to understand why LR models of entanglement are impossible (even in a world that obeys the principle of locality and the light speed limit), then I agree with you that Christian is missing the point and taking the wrong approach. But it's still fun to check out the stuff that he comes up with.
 
  • #12


bobbytkc said:
What C? I believe Bell's theorem is only bipartite, no?

With 2 entangled photons, you can measure coincidence at 2 angle settings (say A and B, which will follow the cos^2 rule for AB). The third, C, is hypothetical in a realistic universe because the realist asserts it exists. However it cannot be measured. QM makes no statement about its existence, so no problem there. But the realist does. Bell pointed out that the values of coincidence for AC and BC will not both follow the QM predictions (if C existed). See his (14) where C is introduced.

So a local realistic model without a prediction for C is not truly "realistic" after all. In other words, A and B are not truly independent of each other for if they were, there would also be C, D, E... values possible which would all follow the QM expectation values when considered with A or B. Many "disproofs" of Bell conveniently skip this requirement.
 
  • #13


bobbytkc said:
You are talking .....?


bobbytkc said:
So even though he found an interesting algebra which is local with hidden variables

Clifford Algebra.
(goes back to Bohm and Hiley).
 
  • #14


Just to add to what I said above:

a) It is a requirement that 1 >= f(A,B) >= f(A,B,C) >= 0 where f() is a correlation function. This is the realism requirement. Using the logic of Bell, this is shown to be false if C is assumed to exist. (See also "Bell's Theorem and Negative Probabilities".)

b) It is a requirement that f(A,A)=f(B,B)=1 (or 0 depending on your basis). This is the requirement for perfect correlations. This is sometimes overlooked, but is actually the reason someone might think there are hidden variables in the first place. However, it is really a consequence of the cos^2 rule since:

f(A,A) = cos^2(A-A) = cos^2(0) = 1
 
  • #15


DrChinese said:
With 2 entangled photons, you can measure coincidence at 2 angle settings (say A and B, which will follow the cos^2 rule for AB).
What might be confusing for some is that what you're denoting as A, B and C are unit vectors associated with spin analyzer settings, and are usually denoted by bolded lowercase letters (e.g., a, b, c).

DrChinese said:
The third, C, is hypothetical in a realistic universe because the realist asserts it exists. However it cannot be measured.
This might be confusing because (a,b), (a,c), and (b,c) denote different dual analyzer settings, i.e., different θ, or angular differences (a-b), in 3D Euclidean space, and are therefore realistic, and all follow the cos^2 rule.

So, in what sense is c not realistic?

DrChinese said:
QM makes no statement about its existence, so no problem there.
The qm (a,b) refers to any combination of analyzer settings, any θ, wrt the dual, joint analysis of bipartite systems. Since a can take on any value from the set of all possible analyzer settings, and so can b, then it isn't clear what you mean that qm makes no statement about the existence of a certain possible analyzer setting.

[...]

DrChinese said:
So a local realistic model without a prediction for C is not truly "realistic" after all.
But all purported LR models make a prediction for any individual analyzer setting, as well as any θ. So does qm.

DrChinese said:
In other words, A and B are not truly independent of each other ...
Well, obviously the analyzer settings aren't independent wrt the measurement of any given pair since together they're the global measurement parameter, θ. Is that what you mean? If not, then what?

DrChinese said:
... for if they were, there would also be C, D, E... values possible which would all follow the QM expectation values when considered with A or B. Many "disproofs" of Bell conveniently skip this requirement.
It's not clear to me what you're saying or how you got there.
 
  • #16


ThomasT said:
But all purported LR models make a prediction for any individual analyzer setting, as well as any θ. So does qm.

Joy's doesn't, and it should if it is realistic.

Where a=0, b=67.5, c=45:

The QM prediction for f(a, b)=.1464
There is no QM prediction for either f(a, b, c) or f(a, b, ~c). QM is not realistic. Note: ~c means Not(c), which is the same as saying you would get the opposite result, i.e., a plus instead of a minus, or vice versa.

The LR prediction for f(a, b)=.1464. OK so far.
But if any LR is truly realistic, then it has a prediction for both f(a, b, c) and f(a, b, ~c). For the angles above, what is that value? When you run the math, the value for f(a, b, ~c) comes out -.1036 which is impossible as it is less than zero.
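For anyone who wants to verify these figures, the arithmetic is short. A sketch of the (X + Y - Z)/2 construction with a = 0, b = 67.5, c = 45 degrees (my own code, just reproducing the numbers quoted above):

```python
import math

deg = math.radians
a, b, c = 0.0, 67.5, 45.0

X = math.cos(deg(a - b)) ** 2   # QM coincidence prediction for settings (a, b)
Y = math.sin(deg(a - c)) ** 2   # prediction for (a, c)
Z = math.cos(deg(b - c)) ** 2   # prediction for (b, c)

f_ab_not_c = (X + Y - Z) / 2    # candidate probability for the (a, b, ~c) case

print(round(X, 4))           # 0.1464
print(round(f_ab_not_c, 4))  # -0.1036, negative: no realistic dataset fits
```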
 
  • #17


DrChinese said:
Joy's doesn't, and it should if it is realistic.
Sure it does. We're talking about analyzing bipartite systems with dual analyzers. ab, ac, and bc are the only possible analyzer settings for a given run.

DrChinese said:
There is no QM prediction for either f(a, b, c) or f(a, b, ~c).
Why would there be? The bipartite system is generating data via dual, not triple, analyzers.

DrChinese said:
But if any LR is truly realistic, then it has a prediction for both f(a, b, c) and f(a, b, ~c).
Why, if a model is intended to describe a bipartite system that's generating data via dual, not triple, analyzers? Seems like an unrealistic requirement.
 
  • #18


bcrowell said:
This showed up on the arxiv blog today: Joy Christian, "Disproof of Bell's Theorem," http://arxiv.org/abs/1103.1879

I'm not enough of a specialist to be able to judge the correctness or significance of the result.

Comments?

There is a book edited by him and Myrvold:

Quantum Reality, Relativistic Causality, and Closing the Epistemic Circle (Springer).

http://books.google.co.ve/books?id=...&resnum=3&ved=0CCsQ6AEwAg#v=onepage&q&f=false

http://www.springerlink.com/content...aef650fda1&pi=0#section=23120&page=1&locus=71
 
  • #19


ThomasT said:
Why, if a model is intended to describe a bipartite system that's generating data via dual, not triple, analyzers? Seems like an unrealistic requirement.

What else would realism be except the requirement that unmeasured c exists alongside measured a and b?
 
  • #20


DrChinese said:
What else would realism be except the requirement that unmeasured c exists alongside measured a and b?
Realism is made explicit in Bell's equation (1), where he defines the functions A (operating at S1) and B (operating at S2),
A(a,λ) = ± 1, B(b,λ) = ± 1 ,
where the spin analyzer settings are described as unit vectors in 3D Euclidean space and denoted as a and b, and where λ denotes arbitrary hidden parameters (determining individual detection) carried by the particles from the source and is, in his equation (2), associated with the particles via probability density ρ.

Locality is made explicit via his equation (2),
P(a,b) = ∫ dλ ρ(λ) A(a,λ) B(b,λ)

In Bell's (14) c isn't unmeasured. It represents an analyzer setting that produces a third individual ± 1 datastream. There are only two analyzers, one at S1 and one at S2, which produce the three joint datastreams, ab, ac, and bc necessary for the inequality, Bell's (15).

I still don't understand what you're getting at (at one point I thought I did, but now I see that I don't). But since we're assessing Christian's model as unrealistic for different reasons, it seems ok to continue the discussion regarding your 'realistic dataset requirement'.
 
  • #21


ThomasT said:
Realism is made explicit in Bell's equation (1), where he defines the functions A (operating at S1) and B (operating at S2),
A(a,λ) = ± 1, B(b,λ) = ± 1 ,
where the spin analyzer settings are described as unit vectors in 3D Euclidean space and denoted as a and b, and where λ denotes arbitrary hidden parameters (determining individual detection) carried by the particles from the source and is, in his equation (2), associated with the particles via probability density ρ.

...

In Bell's (14) c isn't unmeasured. It represents an analyzer setting that produces a third individual ± 1 datastream. There are only two analyzers, one at S1 and one at S2, which produce the three joint datastreams, ab, ac, and bc necessary for the inequality, Bell's (15).

If you can measure it (c), then you don't have a realism assumption. And you specifically say that the analyzers are S1 and S2 for a and b (actually any 2 of a, b, c). The whole idea of Bell is that when you measure ab, there are no datasets for an additional assumed c which is itself consistent as to ac and bc - even though the third is not measured. This is very straightforward, see his (14) and after.
 
  • #22


DrChinese said:
If you can measure it (c), then you don't have a realism assumption. And you specifically say that the analyzers are S1 and S2 for a and b (actually any 2 of a, b, c). The whole idea of Bell is that when you measure ab, there are no datasets for an additional assumed c which is itself consistent as to ac and bc - even though the third is not measured. This is very straightforward, see his (14) and after.
Realism is assumed and explicated by Bell via the functions A and B defined in his equation (1).

Bell's inequality has nothing to do with not being able to generate an abc dataset. Obviously, it's physically impossible to generate an abc dataset using dual analyzers, and it's not clear to me why you think that that has anything to do with Bell's realism assumption.
 
  • #23


ThomasT said:
Realism is assumed and explicated by Bell via the functions A and B defined in his equation (1).

Bell's inequality has nothing to do with not being able to generate an abc dataset. Obviously, it's physically impossible to generate an abc dataset using dual analyzers, and it's not clear to me why you think that that has anything to do with Bell's realism assumption.
Your mistake is a common one. :biggrin:

Bell's (1) leads to nothing inconsistent with QM. You would actually expect perfect correlations from that, and of course we see that experimentally. But Bell's (14+) is required to see the fallacy of (1). Once you try to fit a, b and c into things, it all falls apart. And certainly not before (14).
 
  • #24


For those reading along, allow me to add the following. When you have 2 entangled particles that are essentially clones of each other, you would expect that if they were independent (locality holds), then any measurement on Alice (say) would yield the same result as an identical measurement on Bob. Therefore, you would expect that the result of ANY measurement on either Alice or Bob is actually predetermined. How else to explain the results? This idea - that the results of any measurement are predetermined - can be considered the assumption of Realism. Realism is the idea that ALL particle properties are independent of an actual measurement.

Of course, the Heisenberg Uncertainty Principle essentially says the opposite: a measurement of one property makes its non-commuting partner completely uncertain.

So if Realism AND Locality hold, particle properties are predetermined. So presumably the unmeasured properties have values. For polarization of photons, that means that you could expect either a + or - result and that such result would occur with a frequency of somewhere between 0 and 100% of the time. That's reasonable, right?

Ah, reasonable but wrong (says Bell)! Turns out you cannot construct a dataset in which the QM expectation value holds for many a, b and c settings. And yet we said those were predetermined if the entangled particles were really clones and if locality holds.
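The impossibility claim above can be brute-forced. The sketch below (my own, assuming as in this post that the particles are clones with predetermined answers for settings a = 0, b = 67.5, c = 45 degrees, and the photon cos^2 coincidence rule) checks every deterministic assignment of outcomes: each one obeys a triangle inequality on disagreement rates, so any mixture of them does too, while the QM rates violate it.

```python
import math
from itertools import product

# Every predetermined assignment of +/-1 answers to settings (a, b, c)
# satisfies the triangle inequality on disagreements: if a and b disagree,
# then c must disagree with at least one of them.
for A, B, C in product((+1, -1), repeat=3):
    d_ab, d_ac, d_cb = int(A != B), int(A != C), int(C != B)
    assert d_ab <= d_ac + d_cb

# Any realistic dataset is a mixture of these eight cases, so its
# disagreement rates D must satisfy D(a,b) <= D(a,c) + D(c,b).
# The QM rates (P(disagree) = sin^2 of the angle difference) do not:
deg = math.radians
D_ab = math.sin(deg(67.5)) ** 2   # ~0.8536
D_ac = math.sin(deg(45.0)) ** 2   # 0.5000
D_cb = math.sin(deg(22.5)) ** 2   # ~0.1464
print(D_ab > D_ac + D_cb)  # True: 0.8536 > 0.6464, so no such dataset exists
```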
 
  • #25


DrChinese said:
Your mistake is a common one. :biggrin:
What mistake?

DrChinese said:
Bell's (1) leads to nothing inconsistent with QM.
Agreed. Bell's (1) has to do with spin properties carried by particles produced at a common source which produce individual results. All consistent with the qm model and application of the conservation law.

DrChinese said:
But Bell's (14+) is required to see the fallacy of (1).
What fallacy? Don't we agree that Bell's (1) is consistent with qm, as per Bell himself?

Bell's (14) is a revision of Bell's (2) in view of Bell's (12) and (13). Bell's (2) makes explicit the locality assumption, which is necessary because Bell's (1) doesn't explicate locality wrt joint detections.

DrChinese said:
Once you try to fit a, b and c into things, it all falls apart. And certainly not before (14).
Bell shows that the form of Bell's (2) is incompatible with qm. The incompatibility is due to the locality assumption embodied in the form (2), which converted to (14) and evaluated wrt expectation values for three distinct joint analyzer settings (ab, ac, and bc) gives Bell's inequality.
 
  • #26


ThomasT said:
What fallacy? Don't we agree that Bell's (1) is consistent with qm, as per Bell himself?

...

The issue is that there was no APPARENT flaw in (1) prior to Bell. Bell then showed how this innocent looking formula is wrong. Which it is. We now know that it cannot account at all for the observed behavior.
 
  • #27


DrChinese said:
The issue is that there was no APPARENT flaw in (1) prior to Bell. Bell then showed how this innocent looking formula is wrong.
Bell didn't show this. In fact, he showed that the functions (1) that determine individual detection, are compatible with qm. Which is not to say that standard qm can be interpreted as being realistic. It can't. It's nonrealistic and acausal.

What Bell did show was that the separable form of (2), the embodiment of his locality condition, is incompatible with qm.

DrChinese said:
Which it is.
The functions A and B in (1) can't be said to be wrong, because they're compatible with qm and experiment. They determine individual detection. Period.

DrChinese said:
We now know that it cannot account at all for the observed behavior.
What we know is that the separable form of (2) skews and reduces the range of the predictions. This is because what's being measured by the analyzers in the joint context is a nonseparable parameter, unchanging from entangled pair to entangled pair (as opposed to what's being measured by the individual analyzers, which varies from particle to particle). Unfortunately for diehard local realists, there's no known way to make a realistic theory local without something akin to Bell's locality condition, which results in a separable form, which skews the predictions.
 
  • #28


ThomasT said:
...This is because what's being measured by the analyzers in the joint context is a nonseparable parameter, unchanging from entangled pair to entangled pair (as opposed to what's being measured by the individual analyzers, which varies from particle to particle). Unfortunately for diehard local realists, there's no known way to make a realistic theory local without something akin to Bell's locality condition, which results in a separable form, which skews the predictions.

Presumably, if it is nonseparable it is also nonlocal. That is consistent with accepted interpretations of QM.

Now Bell's (1) is essentially A(a)={+1,-1}, B(b)={+1,-1}

Bell later effectively says that realism implies simultaneously C(c)={+1,-1}. This assumption is wrong if QM is correct.
 
  • #29


DrChinese said:
Presumably, if it is nonseparable it is also nonlocal. That is consistent with accepted interpretations of QM.
Yes, I agree, given certain definitions of the terms nonseparable and nonlocal. Due to ambiguous connotations of those terms, it takes a bit of sorting out. In the case of standard qm, nonlocal doesn't mean what it means in 3D classical physics, whose assumed locality, vis-à-vis SR, is compatible with quantum nonlocality. The nonlocality and nonseparability of entanglement can be taken as referring to essentially the same thing, with nonseparability ultimately tracing back to parameter (not ontological) nonseparability due to the experimental analysis of relationships between particles. This entails the dependence of measured particle properties, and explains why the entangled system can be more completely described than its subsystems.

DrChinese said:
Now Bell's (1) is essentially A(a)={+1,-1}, B(b)={+1,-1}
Ok.

DrChinese said:
Bell later effectively says that realism implies simultaneously C(c)={+1,-1}. This assumption is wrong if QM is correct.
I understand how this is the basis for your dataset requirement and your 'negative probability' paper, i.e., I think it does constitute an understandable insight. My only problems with it were 1) that I thought there might be a more thorough process for assessing proposed LR models, and 2) that I wasn't sure where or why you were reading this apparently tacit realization on Bell's part (I don't remember it being mentioned in Bell's paper) into Bell's development of his theorem. I was concerned with nailing down Bell's explicit realism assumption as a guide to evaluating the realism of LR models, and thought that your understanding of that might have been a bit off the mark. In any case, whether Bell was actually thinking along those lines or not is less important than the fact that it works as an evaluative tool.

Regarding Christian, my current opinion is that his LR program fails, and he's missing the point, for essentially the reason that bobbytkc gave in post #6. Christian, apparently, doesn't quite get what the LR program is about.
 
  • #30


ThomasT said:
<SNIP>
Regarding Joy Christian, my current opinion is that his LR program fails, and he's missing the point, for essentially the reason that bobbytkc gave in post #6. Christian, apparently, doesn't quite get what the LR program is about.
["Joy" inserted above for clarity. GW]

1. The thread initiated by me -- https://www.physicsforums.com/showthread.php?t=475076 --

is an off-shoot from another thread discussing Joy Christian's work.


2. I make no claim as to whether Joy Christian does or does NOT understand the LR program. But I would be very surprised if he does not understand it exactly, precisely, whatever.

3. IMHO, it is not that difficult; unless I too am missing some extreme subtlety; or there is being inserted a requirement that goes beyond the Einstein and EPR program.

4. I would certainly expect that anyone, critically and carefully studying Bell's theorem, would be trying to ensure that their efforts did not breach the commonsense (the core Einstein and EPR principles) that attaches to the LR program.

5. However, in this widely rejected/neglected area of study (Einstein's baby, IMHO), slips are possible. So a better critique of JC's work, for those concerned by it, would be to identify JC's error specifically; my own critical opinion of JC's efforts not being relevant here.

6. The point that I would like to emphasize is this: The L*R program, discussed in the above thread (https://www.physicsforums.com/showthread.php?t=475076), is most certainly local and realistic, and in full accord with the Einstein and EPR program, as I understand it. (And I doubt that JC understands it any less than I do -- so why not help him find his slip -- IF slip there be. Because my "guess" is: it's fixable!)
 
  • #31


Nonseparability has been mentioned, but I doubt that its impact on this discussion has been fully understood. In Gordon Watson's linked thread he mentioned the "triangle inequality"; I have a variation of it which may throw some light, in a simple and commonsense manner, on why "nonseparability" is so important to the issue being raised by Joy Christian. DrC may be interested in this because it blows the lid off his "negative probabilities" article.

A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles with sides x, y, and z. Consider a process which generates right-angled triangles defined within a unit circle, where z is always 1, x = cos(angle), y = sin(angle), and the angle is randomly chosen each time. Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement per triangle. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will equal 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our experiment we also had corresponding values of both x and y for each individual measurement. So we might think that using our new dataset, with all corresponding values included, will give <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before. The reason is separability. But there is one thing we can calculate in the second scenario which we could not in the first: <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.
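The two averages are easy to simulate. A quick sketch (my own; it assumes the angle is drawn uniformly from the first quadrant, which is what reproduces the 0.8105... figure, since (2/pi)^2 + (2/pi)^2 = 8/pi^2 ≈ 0.8106):

```python
import math
import random

rng = random.Random(0)
n = 200_000

# Each trial: a random right triangle in the unit circle, angle t in [0, pi/2].
xs, ys = [], []
for _ in range(n):
    t = rng.uniform(0.0, math.pi / 2)
    xs.append(math.cos(t))
    ys.append(math.sin(t))

mean_x = sum(xs) / n
mean_y = sum(ys) / n

separated = mean_x ** 2 + mean_y ** 2                    # square the averages
joint = sum(x * x + y * y for x, y in zip(xs, ys)) / n   # average the squares

print(separated)  # ~0.81 (= 8/pi^2), the apparent "violation"
print(joint)      # 1.0, since x^2 + y^2 = 1 for every individual triangle
```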

In our first experiment, x and y do not commute; therefore it is a mathematical error to use x and y in the same expression, which is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, therefore there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously. The pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions to plug into an inequality whose terms are defined on the same probability distribution. This is a mathematical error.

Concerning negative probabilities, Dr C says:
X is determined by the angle between A and B, a difference of 67.5 degrees X = COS^2(67.5 degrees) = .1464 This prediction of quantum mechanics can be measured experimentally.*
Y is determined by the angle between A and C, a difference 45 degrees Y = SIN^2(45 degrees) = .5000 This prediction of quantum mechanics can be measured experimentally.*
Z is determined by the angle between B and C, a difference 22.5 degrees Z = COS^2(22.5 degrees) = .8536 This prediction of quantum mechanics can be measured experimentally.*

...

(X + Y - Z) / 2

Substituting values from g. above:

= (.1464 + .5000 - .8536)/2

= (-.2072)/2

= -.1036
Note how he defines X, Y and Z as noncommuting, since only two of the angles can be measured at the same time, and yet he writes down an impossible equation which includes terms that can never be simultaneously valid. No doubt he obtains his result.
 
  • #32


Gordon Watson said:
["Joy" inserted above for clarity. GW]

1. The thread initiated by me -- https://www.physicsforums.com/showthread.php?t=475076 --

is an off-shoot from another thread discussing Joy Christian's work.

2. I make no claim as to whether Joy Christian does or does NOT understand the LR program. But I would be very surprised if he does not understand it exactly, precisely, whatever.

3. IMHO, it is not that difficult; unless I too am missing some extreme subtlety; or there is being inserted a requirement that goes beyond the Einstein and EPR program.

4. I would certainly expect that anyone, critically and carefully studying Bell's theorem, would be trying to ensure that their efforts did not breach the commonsense (the core Einstein and EPR principles) that attaches to the LR program.

5. However, in this widely rejected/neglected area of study (Einstein's baby, IMHO), slips are possible. So a better critique of JC's work, for those concerned by it, would be to identify JC's error specifically; my own critical opinion of JC's efforts not being relevant here.

6. The point that I would like to emphasize is this: The L*R program, discussed in the above thread (https://www.physicsforums.com/showthread.php?t=475076), is most certainly local and realistic, and in full accord with the Einstein and EPR program, as I understand it. (And I doubt that JC understands it any less than I do -- so why not help him find his slip -- IF slip there be. Because my "guess" is: it's fixable!)
See Carlos Castro's There is no Einstein-Podolsky-Rosen Paradox in Clifford-Spaces. In C-space, the particles can exchange signals encoding their spin measurement values across a null interval, which isn't the sort of locality required by the LR program. Or can it be translated into that, because this is essentially the same as specifying a relationship produced via a common source? I don't know.

Since Christian is using tensors (Clifford algebra in the earlier papers, and the Kronecker delta and Levi-Civita symbol in the paper currently under discussion) to deal with a relationship between vectors (which is what Bell tests actually measure), maybe I was too quick to dismiss his stuff. Or maybe not. Again, I don't know.

These articles might also be relevant:

Bound entanglement and local realism

All the Bell Inequalities

Clearly, we need some input from experts, or at least people more knowledgeable in the field.
 
Last edited by a moderator:
  • #33


billschnieder said:
Nonseparability has been mentioned, but I doubt that its impact on this discussion has been fully understood. In Gordon Watson's linked thread he mentioned the "triangle inequality"; I have a variation of it which may shed some light, in a simple and common-sense manner, on why "nonseparability" is so important to the issue being raised by Joy Christian. DrC may be interested in this because it blows the lid off his "negative probabilities" article.

A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles with sides x, y and z. Consider a process which generates right-angled triangles defined within a unit circle, where z is always 1, x = cos(angle), y = sin(angle), and the angle is randomly chosen each time. Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will obey the relationship of being equal to 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our experiment we also had corresponding values of both x and y for each individual measurement. So we might think that using our new dataset, with all corresponding values included, will give <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before. The reason is, again, non-separability. But there is one thing we can calculate in our second scenario which we could not in the first: we can calculate <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.

In our first experiment, x and y do not commute, so it is a mathematical error to use x and y in the same expression; that is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, so there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously. The pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions and plugging them into an inequality whose terms are all defined on the same probability distribution. This is a mathematical error.

Concerning negative probabilities, Dr C says:

---Quote---
X is determined by the angle between A and B, a difference of 67.5 degrees X = COS^2(67.5 degrees) = .1464 This prediction of quantum mechanics can be measured experimentally.*
Y is determined by the angle between A and C, a difference of 45 degrees Y = SIN^2(45 degrees) = .5000 This prediction of quantum mechanics can be measured experimentally.*
Z is determined by the angle between B and C, a difference of 22.5 degrees Z = COS^2(22.5 degrees) = .8536 This prediction of quantum mechanics can be measured experimentally.*

...

(X + Y - Z) / 2

Substituting values from g. above:

= (.1464 + .5000 - .8536)/2

= (-.2072)/2

= -.1036
---End Quote---

Note how he defines X, Y and Z as being non commuting since only two of such angles can be measured at the same time, and yet he writes down an impossible equation which includes terms which can never be simultaneously valid. No doubt he obtains his result.


I, for one, will be very interested in studying this beautiful example. My current concern is first to show where BT fails. That will open the way for me (and others) to assess what I would presently call "analogies." My job then would be to use examples such as yours, showing how they fit into a "more formal" disproof of BT.

Until that time, I can hear Bell's supporters discussing "loopholes" against you, ad nauseam.

(The boot will be on the other foot, as it were, for them then; considering all the loopholes that EPR-style supporters adduce to ignore BT and related experimental results. Me here wanting to be very clear that LOOPHOLES are not only unnecessary but unwarranted. And have never been considered valid or relevant by me.)
 
Last edited:
  • #34


billschnieder said:
A simple analogy is the x^2 + y^2 = z^2 relationship for right-angled triangles with sides x, y and z. Consider a process which generates right-angled triangles defined within a unit circle, where z is always 1, x = cos(angle), y = sin(angle), and the angle is randomly chosen each time. Our goal is to measure the lengths of the sides x and y. But assume that in the first experiment we can only make a single measurement. So we run our experiment a gazillion times and obtain the averages <x> and <y>. Do you think <x>^2 + <y>^2 will obey the relationship of being equal to 1? If you do, think again: <x>^2 + <y>^2 converges to 0.8105..., not 1, a violation. This is simply because x and y are non-separable in our relationship.

However, we can imagine that in our experiment we also had corresponding values of both x and y for each individual measurement. So we might think that using our new dataset, with all corresponding values included, will give <x>^2 + <y>^2 = 1, right? Wrong. We get exactly the same violation as before. The reason is, again, non-separability. But there is one thing we can calculate in our second scenario which we could not in the first: we can calculate <x^2 + y^2>, since we now have both values, and indeed we obtain 1 as the result, which obeys the relationship.
You say you're varying θ randomly. So <θ> = 45°, where <x> = cos<θ> = .707..., <y> = sin<θ> = .707..., and (.707...)^2 + (.707...)^2 = 1. No violation.

billschnieder said:
In our first experiment, x and y do not commute, so it is a mathematical error to use x and y in the same expression; that is why the violation was observed. In probability theory, an expectation value such as E(a,c) is undefined if A(a,lambda) and A(c,lambda) do not commute. Expectation values are only defined for E(a,c) if there is an underlying probability distribution P(a,c). But it is not possible to measure at angles "a" and "c" on the same particle pair, so there is no P(a,c) probability distribution. The same is the case in Bell-test experiments and QM, in which it is possible to measure "a" and "b" but not "c" simultaneously. The pairs measured in different runs do not correspond to each other, so we are left with calculating three different expectation values from three different probability distributions and plugging them into an inequality whose terms are all defined on the same probability distribution. This is a mathematical error.
Bell's inequality is based on the fact that for x,y,z = ±1, you have |xz - yz| = 1 - xy. Substituting x = A(b,λ), y = A(c,λ), z = A(a,λ) and integrating wrt the measure ρ, you get 1 + P(b,c) ≥ |P(a,b) - P(a,c)| (Bell's inequality), in view of Bell's (14), P(a,b) = - ∫dλρ(λ)A(a,λ)A(b,λ). There's no mathematical error in Bell's stuff.
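The ±1 identity ThomasT invokes can be verified exhaustively. A minimal sketch (my own brute-force enumeration, not anything from Bell's paper):

```python
from itertools import product

# brute-force check of the +/-1 identity behind Bell's derivation:
# for x, y, z in {-1, +1}, |xz - yz| equals 1 - xy in every case
cases = list(product((-1, 1), repeat=3))
for x, y, z in cases:
    assert abs(x * z - y * z) == 1 - x * y

print("identity holds in all", len(cases), "cases")  # identity holds in all 8 cases
```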

billschnieder said:
Note how he (DrC) defines X, Y and Z as being non commuting since only two of such angles can be measured at the same time, and yet he writes down an impossible equation which includes terms which can never be simultaneously valid. No doubt he obtains his result.
I don't see any mathematical error in DrC's stuff either. It's an interesting numerical treatment based on Einstein realism which demonstrates the incompatibility with qm.
 
Last edited:
  • #35


ThomasT said:
You say you're varying θ randomly. So <θ> = 45°, where <x> = cos<θ> = .707, <y> = sin<θ> = .707, and (.707)^2 + (.707)^2 = 1. No violation.

This is inaccurate. Generating θ randomly around a circle gives us values in the range [0, 360]. So how do you get <θ> = 45 degrees? Shouldn't it be 180? Even if your 45 degrees were correct, which it is not, <x> is not the same as Sin<θ>. You may be tempted to say Sin(180) = 0 and |Cos(180)| = 1, which still adds up to 1, but the error here is that you are assuming information is present in the experiment which is not. Remember that x is a length, and our experimenter is measuring a length, not an angle. He is never given an angle, just a triangle, so he cannot determine <θ>. He only has the length, which is the absolute value of Sin(θ). Secondly, were you to suggest that the mean value for x which he measured were <x> = 0 (cf. sin(180)), you would be suggesting that he actually measured negative lengths, which is not possible.

In fact, <x> is 0.6366..., NOT 0.707 as you stated. You can verify it with a simple calculation; the Python code below does that:
0.6366^2 + 0.6366^2 = 0.81056..., NOT 1

I hope you see that this simple example is not as stupid as you may have assumed at first. In fact your misunderstanding of this example highlights exactly the point I'm trying to make.

Code:
import numpy

# one million angles evenly spaced over [0, 2*pi], a stand-in for random sampling
thetas = numpy.linspace(0, 2 * numpy.pi, 1000000)

# |sin(<theta>)|: the magnitude of the sine of the MEAN angle
x1 = numpy.abs(numpy.sin(thetas.mean()))
print("%0.4f" % x1)
# Output: 0.0000

# <|sin(theta)|>: the mean of the magnitudes, taken angle by angle
x2 = numpy.abs(numpy.sin(thetas)).mean()
print("%0.4f" % x2)
# Output: 0.6366
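The snippet above computes only the two sine averages; the full triangle claim (<x>^2 + <y>^2 versus <x^2 + y^2>) can be checked the same way. A minimal sketch, where the random generator, seed and sample size are my own arbitrary choices:

```python
import numpy as np

# random-sampling version of the triangle experiment discussed above
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 1_000_000)

x = np.abs(np.cos(theta))  # side lengths are magnitudes, never negative
y = np.abs(np.sin(theta))

# averaging each side separately, then combining: tends to 8/pi^2 ~ 0.8106
separate = x.mean() ** 2 + y.mean() ** 2
# keeping the paired values together: x^2 + y^2 = 1 for every single triangle
paired = (x ** 2 + y ** 2).mean()
print(round(separate, 2), round(paired, 2))  # 0.81 1.0
```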

Bell's inequality is based on the fact that for x,y,z = ±1, you have |xz - yz| = 1 - xy. Substituting x = A(b,λ), y = A(c,λ), z = A(a,λ) and integrating wrt the measure ρ, you get 1 + P(b,c) ≥ |P(a,b) - P(a,c)| (Bell's inequality), in view of Bell's (14), P(a,b) = - ∫dλρ(λ)A(a,λ)A(b,λ). There's no mathematical error in Bell's stuff.
That is not my point. For the valid inequality |xz - yz| = 1 - xy, all three terms xz, yz, and xy are defined within the same probability space. You cannot take terms from three different probability spaces and substitute them into the equation. The problem is not with the inequality. It is a question of whether bipartite experiments, and QM's predictions of expectation values for bipartite experiments (which do not commute with each other), can be used as legitimate sources of terms to substitute into the equation for comparison. I believe not.

I don't see any mathematical error in DrC's stuff either. It's an interesting numerical treatment based on Einstein realism which demonstrates the incompatibility with qm.
Given that you did not understand my original point, I did not expect that you would see the error either. The main point is simply that you cannot combine expectation values for non-commuting observables in the same expression, as is commonly done when comparing Bell's inequality with QM, and as DrC does in the text I quote. If anybody thinks it is a valid mathematical procedure, let them say so and we can discuss it in a new thread.
 
Last edited:
  • #36


billschnieder said:
This is inaccurate. Generating θ randomly around a circle gives us values in the range [0,360]. So how do you get <θ>=45 degrees shouldn't it be 180?
Just simplifying. Shouldn't varying θ from 0° to 90° be enough to demonstrate what you want to demonstrate?

billschnieder said:
Even if your 45 degrees were correct which it is not, <x> is not the same as Sin<θ>.
You defined x = cosθ. I wrote <x> = cos<θ> because you said you're randomly varying θ. If instead you randomly vary x from 0 to 1, then <x> = <cosθ> = .5, but then you're not randomly varying θ, which is what you said you were doing. It was a little confusing. But I now understand what you're doing. Anyway, I don't think we need it, unless you want to contribute to the collection of illustrations showing that qm is incompatible with LR.

billschnieder said:
The problem is not with the inequality. It is a question of whether bipartite experiments, and QM's predictions for expectation values for bipartite experiments (which do not commute with each other) can be used as legitimate sources of terms to be substituted into the equation for comparisons. I believe not.
Given what's being compared, it's legitimate. And the conclusion is that qm is incompatible with Bell's generalized LR form (2). You do agree with that, don't you?

billschnieder said:
The main point is simply that you can not combine expectation values for non-commuting observables into the same expression as is commonly done when comparing Bell's inequality with QM, and as DrC does in the text I quote.
Bell is comparing his form (2) with qm. They're incompatible. DrC is comparing Einstein realism (via his numerical treatment) with qm. They're incompatible. Both comparisons are mathematically sound.

If your point is that this doesn't inform us about the underlying reality, then I agree with you. Joy Christian on the other hand is presenting so called LR models of entanglement that agree with qm predictions. Any ideas you have on Christian's offerings, and in particular the one presented in this thread, are most welcome.
 
  • #37


..

DrC, ThomasT, billschnieder, and others:

Am I mistaken?

We have here, in the "triangles" and "negative probability" discussions, a chance to at least settle these issues with finality. Yes?

And, even if little else were to be resolved: That would be progress. Yes?

So shouldn't someone take the initiative and start a new thread -- leaving this one to the JC discussions, per the OP?

How about: Bell's theorem and negative probabilities versus triangle-inequalities?

??

With some of the discussion, already here, transferred to kick it off?
 
Last edited:
  • #38


ThomasT said:
Just simplifying. Shouldn't varying θ from 0° to 90° be enough to demonstrate what you want to demonstrate?
Why should it? Try to understand the point before you suggest what should be enough or not. The simple fact that <θ> in your "simplification" is different from <θ> in my original example should tell you that we are not talking about the same thing.
You defined x = cosθ. I wrote <x> = cos<θ> because you said you're randomly varying θ. If instead you randomly vary x from 0 to 1, then <x> = <cosθ> = .5, but then you're not randomly varying θ, which is what you said you were doing. It was a little confusing.
I also mentioned that x was the length of one side of a triangle. I assumed it would be obvious to most that a length cannot be negative, which means you should take its absolute value. That means <x> is not the same as cos<θ>, for the same reason that |<v>| does not mean the same thing as <|v|>. You do not deny that randomly varying θ leads to the conclusion I reached, so your response here is curious and very surprising.

But I now understand what you're doing. Anyway, I don't think we need it, unless you want to contribute to the collection of illustrations showing that qm is incompatible with LR.
I still do not think you understand it; otherwise you would not conclude that you do not need it.

And the conclusion is that qm is incompatible with Bell's generalized LR form (2). You do agree with that, don't you?
No, I do not agree. I would instead say that neither QM nor Bell test experiments are legitimate sources of terms for the inequality 1 + P(b,c) ≥ |P(a,b) - P(a,c)|, simply because all three terms are not defined within the same probability space, in either QM or Bell test experiments. Non-locality and/or reality are completely peripheral here. There is no P(a,b,c) distribution from which you can extract the three terms, not in QM, not in Bell test experiments, and that alone explains why you cannot use QM or Bell test experiments as sources for those three terms.
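For reference, here are the numbers that get substituted in the disputed step. Using the singlet correlation E(a,b) = -cos(a - b) and one standard choice of settings (0°, 45°, 90° are my choice, not anything fixed by the thread), the substituted terms exceed the bound; whether that substitution is legitimate is exactly the point being contested here. A minimal sketch:

```python
import math

def E(a_deg, b_deg):
    # QM singlet correlation for analyzer angles a and b (in degrees)
    return -math.cos(math.radians(a_deg - b_deg))

a, b, c = 0.0, 45.0, 90.0
lhs = 1 + E(b, c)              # 1 - cos(45 deg)
rhs = abs(E(a, b) - E(a, c))   # |-cos(45 deg) - (-cos(90 deg))|
print(round(lhs, 3), round(rhs, 3), lhs >= rhs)  # 0.293 0.707 False
```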

Bell is comparing his form (2) with qm. They're incompatible. DrC is comparing Einstein realism (via his numerical treatment) with qm. They're incompatible. Both comparisons are mathematically sound.
This is wrong. There is no conflict with QM until Bell introduces the third angle. Please check his original paper again to confirm that this is correct. I mentioned DrC's article because the same error is made in it: expectation values from three incompatible, non-commuting measurements are combined in the same expression. Are you hereby claiming that it is sound mathematics to do that? This is the question you did not answer.

If your point is that this doesn't inform us about the underlying reality, then I agree with you.
I'm not just interested in stating that. I am explaining WHY any result so obtained cannot inform us of anything other than the fact that a subtle mathematical error has been made, i.e. substituting incompatible expectation values into Bell's inequality.

Joy Christian on the other hand is presenting so called LR models of entanglement that agree with qm predictions. Any ideas you have on Christian's offerings, and in particular the one presented in this thread, are most welcome.
Did you read the one posted in this thread? You seemed to dismiss it earlier based on what you had heard about his other offerings. He presents, in half a page, an LR model which violates Bell's inequality. You may ask how his LR model could violate the inequality, and the answer is the same as the reasons I have already explained: the terms he used are not all defined within the same probability space. It is the same reason why QM violates the inequalities.


He concludes that:
Evidently, the variables A(a, λ) and B(b, λ) defined above respect both the remote parameter independence and the remote outcome independence (which has been checked rigorously [2][3][4][5][6][7]). This contradicts Bell’s theorem.

I haven't seen anybody here argue that his model presented in the above paper is not LR, nor have I seen anyone argue that his model does not reproduce the QM result. All I have seen is discussion around his other papers.
 
  • #39


ThomasT said:
Just simplifying. Shouldn't varying θ from 0° to 90° be enough to demonstrate what you want to demonstrate?
billschnieder said:
Why should it? Try to understand the point before you suggest what should be enough or not. The simple fact that <θ> in your "simplification" is different from <θ> in my original example should tell you that we are not talking about the same thing.

I also mentioned that x was the length of one side of a triangle. I assumed it would be obvious to most that a length cannot be negative, which means you should take its absolute value. That means <x> is not the same as cos<θ>, for the same reason that |<v>| does not mean the same thing as <|v|>. You do not deny that randomly varying θ leads to the conclusion I reached, so your response here is curious and very surprising.
The values I input for 0° to 90° give roughly <x>^2 + <y>^2 = .8, which corresponds with what you got. And <x^2 + y^2> = .975. So isn't the net effect the same: you get a contradiction between separable and nonseparable formulations?

billschnieder said:
I still do not think you understand it, otherwise you will not conclude that you do not need it.
Only that we already have illustrations of the incompatibility between separable and nonseparable formulations. Bell's, for one.

ThomasT said:
And the conclusion is that qm is incompatible with Bell's generalized LR form (2). You do agree with that, don't you?

billschnieder said:
No, I do not agree. I would instead say that neither QM nor Bell test experiments are legitimate sources of terms for the inequality 1 + P(b,c) ≥ |P(a,b) - P(a,c)|, simply because all three terms are not defined within the same probability space, in either QM or Bell test experiments. Non-locality and/or reality are completely peripheral here.
The inequality is based on Bell's LR form. Any model of entanglement taking that form must satisfy his inequality. The question concerns how locality and reality might be explicitly encoded in the same model, while remaining compatible with qm, and Bell shows that they can't be.

billschnieder said:
There is no P(a,b,c) distribution from which you can extract the three terms, not in QM, not in Bell test experiments, and that alone explains why you cannot use QM or Bell test experiments as sources for those three terms.
That's the point of DrC's illustration. (a,b,c) is the LR dataset, based on the idea that underlying predetermined particle parameters exist independent of measurement.
There is no such dataset in qm. Hence, the conflict.

ThomasT said:
Bell is comparing his form (2) with qm. They're incompatible. DrC is comparing Einstein realism (via his numerical treatment) with qm. They're incompatible. Both comparisons are mathematically sound.

billschnieder said:
This is wrong. There is no conflict with QM until Bell introduces the third angle. Please check his original paper again to confirm that this is correct.
The results (10) and (11) are in conflict with qm. The unit vectors a and b in (2) can refer to any θ. The unit vector, c, is introduced after that, specifically to derive the inequality. The whole point of Bell's paper is that the generalized LR form (2) is incompatible with qm.

billschnieder said:
I mentioned DrC's article because the same error is made in it: expectation values from three incompatible, non-commuting measurements are combined in the same expression. Are you hereby claiming that it is sound mathematics to do that? This is the question you did not answer.
Yes, it's sound mathematics to do that given what he's trying to show. There are limits on how explicit LR models can be formulated. These limits are based on certain assumptions. Based on the assumption of realism, DrC has fashioned a numerical treatment that demonstrates a conflict between that assumption and qm.

ThomasT said:
If your point is that this doesn't inform us about the underlying reality, then I agree with you.

billschnieder said:
I'm not just interested in stating that. I am explaining WHY any result so obtained cannot inform us of anything other than the fact that a subtle mathematical error has been made, i.e. substituting incompatible expectation values into Bell's inequality.
We sort of agree then. The results can't inform us of anything other than the fact that a certain mathematical form can't possibly agree with qm or experiment. But, what Bell did is not a mathematical error. Bell constructed a generalized LR form and compared it with qm. They're incompatible.

If you can present another form that an LR model can take, one that meets the requirements for an explicit LR model and reproduces qm predictions, then that might be interesting.

billschnieder said:
Did you read the one posted in this thread?
Sure, but I don't really understand what he did.

billschnieder said:
You seemed to dismiss it earlier based on what you had heard about his other offerings.
I thought he might be doing essentially the same thing in both, i.e., allowing a and b to communicate, but 'locally' in an imaginary space, which wouldn't be an LR model. Then I wondered whether there might be 'any' way to translate what he did into a realistic local view of the underlying mechanics. But even if so, if it can't be made explicitly LR, that is, with a clearly 3D classical LR encoded in the model, then it isn't an LR model.

billschnieder said:
You may ask how come his LR model could violate the inequality, and the answer is for the same reasons I have already explained. -- the terms he used are not all defined within the same probability space. It is the same reason why QM violates the inequalities.
I don't think this clarifies it fully enough.

The inequality is based on a generalized LR form, the salient feature of which is the separability of the underlying parameter determining coincidental detection. Standard qm and Christian's formalisms violate the inequality because those formalisms don't break the underlying parameter nonseparability (i.e., they don't sever the relationship between the particles): qm preserves it 'nonlocally' via the projection, and Christian's Clifford-algebraic models by allowing the particles to communicate 'locally' via a null interval in C-space. I'm not sure how Christian's paper in this thread does it.

billschnieder said:
I haven't seen anybody here argue that his model presented in the above paper is not LR, nor have I seen anyone argue that his model does not reproduce the QM result. All I have seen is discussion around his other papers.
Hence, my call for experts or at least more knowledgeable people than myself. Glad you showed up.

His model does reproduce the qm result. But it doesn't 'look' LR because of the bivectors and the algebra he employs. I'm just plodding along trying to learn as I go, so if you or anybody else has some insights into Christian's stuff to offer then that would be most appreciated. And thanks for your input so far. It's motivating me to think about this a little more and not just set it aside.
 
Last edited:
  • #40


billschnieder said:
Concerning negative probabilities, Dr C says:

...

Note how he defines X, Y and Z as being non commuting since only two of such angles can be measured at the same time, and yet he writes down an impossible equation which includes terms which can never be simultaneously valid. No doubt he obtains his result.

I think that is precisely my point. The HUP should be applied literally, and that makes realism untenable. Experiment follows this in all respects.
 
  • #41


DrChinese said:
Now Bell's (1) is essentially A(a)={+1,-1}, B(b)={+1,-1}

Bell later effectively says that realism implies simultaneously C(c)={+1,-1}. This assumption is wrong if QM is correct.

Just to drive the above home, here is a definition of realism from an experimental paper from the past few days:

"Reality": The state of any physical system is always well defined, i.e. the dichotomic variable Mi(t), which tells us whether (Mi(t) = 1) or not (Mi(t) = 0) the system is in state i, is, at any time, Mi(t) = {0, 1}.

This from Violation of a temporal Bell inequality for single spins in solid by over 50 standard deviations. And you could find similar definitions in hundreds of papers.
 
  • #42


ThomasT said:
The inequality is based on Bell's LR form. Any model of entanglement taking that form must satisfy his inequality. The question concerns how locality and reality might be explicitly encoded in the same model, while remaining compatible with qm, and Bell shows that they can't be.
I have already shown elsewhere, in another thread, that you do not need LR or anything other than paired products of three variables to obtain Bell-like inequalities, irrespective of the physics behind the variables. It is a mathematical fact, first established by Boole almost a hundred years before Bell, that paired products of three variables will obey Bell-like inequalities. Boole even concluded at the time that if, in an experiment, the data for three variables did not obey the inequality, it simply meant that those three variables could not possibly exist at the same time. He called such inequalities "conditions of possible experience".
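Boole's observation is easy to check numerically: whenever the three values coexist in each trial, the paired products cannot violate the bound, no matter how the triples are generated. A minimal sketch (the shared-bias generating rule below is my own arbitrary choice):

```python
import random

random.seed(2)

def triple():
    # any joint generating rule works; here x, y, z share a per-trial bias p
    p = random.random()
    return tuple(1 if random.random() < p else -1 for _ in range(3))

data = [triple() for _ in range(100_000)]
n = len(data)
E_xy = sum(x * y for x, y, z in data) / n
E_yz = sum(y * z for x, y, z in data) / n
E_xz = sum(x * z for x, y, z in data) / n

# Boole/Bell-type bound: it holds triple by triple, hence for the averages too
print(1 + E_yz >= abs(E_xy - E_xz))  # True
```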

That's the point of DrC's illustration. (a,b,c) is the LR dataset, based on the idea that underlying predetermined particle parameters exist independent of measurement.
There is no such dataset in qm. Hence, the conflict.
But there can never be such a dataset for the EPR scenario, ever, because it is impossible to measure two particles three times. Why would any sane individual expect a joint probability space P(a,b,c) to exist? We do not need a thread discussing the idea that our inability to observe square circles in an experiment means nature is not real, do we? We stop the discussion at the point where we realize that there is no such thing as a square circle.

All I am doing here is highlighting the fact that the lack of a P(a,b,c) in QM and in experiments is sufficient to make it impossible to apply Bell's inequalities to the EPR scenario. They are incompatible. So you cannot even talk of a violation yet, because the laws of mathematics and logic prohibit you from using those terms from QM and experiments in the inequality. Find an experimental scenario for which P(a,b,c) is a valid probability distribution, and you can discuss all you want about QM and experiments and Bell's inequality and LR, etc. Until then, such discussion is a waste of time and a weapon for increasing mutual confusion.


EDIT:
I thought the above was too complicated so I thought I should simplify.

Some choose to say: the fact that it is impossible to provide a dataset of triples which violates Bell's inequality implies that realism is false.

I say: Duh. In the statement of the problem, the impossibility of measuring two particles three times is almost explicitly recognized by any sane individual. Why then would any such individual expect two particles to actually be measured three times to obtain the dataset? It cannot be done in QM, nor in any experiment, nor in any LR theory that anyone could cook up. Obviously, the fact that we cannot measure two particles three times says absolutely nothing about locality or realism.
 
Last edited:
  • #43


billschnieder said:
But there can never be such a dataset for the EPR scenario, ever, because it is impossible to measure two particles three times. Why would any sane individual expect a joint probability space P(a,b,c) to exist? We do not need a thread discussing the idea that our inability to observe square circles in an experiment means nature is not real, do we? We stop the discussion at the point where we realize that there is no such thing as a square circle.

Well gosh darn, Bill. I have non-brown eyes, non-black hair and light skin. My friend has brown eyes, black hair and dark skin. Funny, groups of people have properties that seem to persist and follow Bell inequalities all day long. I will gladly show you datasets of these 3 properties for random pairs of persons (that would be 2). The only samples I know of that don't follow these inequalities are quantum particles that are well described by the HUP.

And that would be: any 2 measurements of 3 properties of 2 particles. Is that too hard for you to follow? I mean, really, when has anyone tried to measure 2 particles 3 times? Basically I am saying you are full of hot air, and I think I have said as much before in our prior discussions. Or perhaps you can provide some experimental support for your position. Perhaps a reputable source other than yourself? Otherwise, you are adding nothing of value here except confusion for folks who have no idea that your views are not standard science.

If you want to add here, please add normal scientific thought. Set up your own site for your personal views.
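As an illustration of the population example above (a sketch of my own, with made-up attribute frequencies): for any group whose members each carry three definite yes/no properties, the counting form of Bell's inequality holds identically, which is why ordinary datasets of people always satisfy it.

```python
import random

# For any population where each member has three definite yes/no
# properties (here: A = brown eyes, B = black hair, C = dark skin),
# the counting form of Bell's inequality holds identically:
#   N(A and not-B) + N(B and not-C) >= N(A and not-C)
# because any member with A and not-C either lacks B (first term)
# or has B (second term).

random.seed(1)
people = [{"A": random.random() < 0.4,   # brown eyes
           "B": random.random() < 0.5,   # black hair
           "C": random.random() < 0.6}   # dark skin
          for _ in range(10_000)]

n_a_notb = sum(p["A"] and not p["B"] for p in people)
n_b_notc = sum(p["B"] and not p["C"] for p in people)
n_a_notc = sum(p["A"] and not p["C"] for p in people)

assert n_a_notb + n_b_notc >= n_a_notc  # true for every such population
print(n_a_notb, "+", n_b_notc, ">=", n_a_notc)
```

The inequality is a bookkeeping identity for definite properties; only samples well described by the HUP, where the three values do not coexist, can violate the corresponding correlation bounds.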
 
  • #44


We've gotten a bit off topic. But all these considerations are connected. I'll tie it to Christian's stuff at the end.

billschnieder said:
We do not need a thread discussing the idea that our inability to observe square circles in an experiment means nature is not real do we?
No. And it seems that the discussion here at PF has moved beyond that, and that the physics community at large is moving beyond that as well. We're concerned with the form that models of entanglement can take to be instrumentally viable, and why -- and the why of it has, effectively, only to do with a formalism's correspondence with experimental design and preparation. Realism and localism refer to certain formal requirements or limits.

Einstein thought, and others still think (now, in the face of overwhelming evidence to the contrary), that LR formalisms are possible. Opposing postulates associated with competing formalisms are the basis for theorems (Bell) and 'tautologies' (DrC) which are developed to show a quantitative difference between incompatible formalisms. Incompatibility between LR and qm/experiment doesn't imply that some form of nonlocality exists or that an underlying reality with specific properties doesn't -- in fact there's absolutely no empirical evidence that even suggests those notions. It's unfortunate that so much of the literature, and our understanding, has been clouded by claims to the contrary.

It gets complicated insofar as theories do develop according to certain visions of the underlying reality, but those visions should always be based on empirical evidence, not lack of it. We infer from what's known, not from what isn't. It gets even more complicated when theories are developed primarily via abstract mathematics as opposed to primarily via reasonable inference from empirical evidence and sensory experience -- giving rise to paradoxes, pseudo problems and exotic interpretations. Which is not to say that this could be entirely avoided.

Regarding entanglement, it seems that we can reasonably infer from the experimental designs, preparations and observed correlations, that the relationships between the entangled entities are being produced locally via the various experimental protocols.

So, the interesting question has to do with why certain formalisms correctly model entanglement while others don't. What's the important difference between them? The current focus seems to be on separability vs nonseparability. LR formalisms are separable, while qm and Christian's are nonseparable. It's observed that qm and Christian's Clifford algebraic formalisms allow 'communication' between particles in imaginary spaces. But what does that mean? It's speculated that the real reason these formalisms work is that they don't skew the relationships between entangled entities via a formal separation that is at odds with experimental design and preparation. This remains to be sorted out, and may never be fully, because the exact characteristics of the underlying relationships (in real 3D space and time) are and will remain a matter of speculation.

I think we can say that Christian's current offering isn't an LR model of entanglement. It remains to sort out why it works -- what the formalism does, and maybe more importantly, what it doesn't do in light of reasonable inference from empirical evidence and sensory experience regarding the nature of entanglement.

I've benefitted from your analyses regarding this stuff. Your point regarding Bell and DrC is taken, and while it helps to clean up the language surrounding Bell stuff, it doesn't diminish the correctness of their (Bell, DrC) math or the usefulness of their analyses. So anything you might want to say specifically about Christian's formalism in the current paper is welcomed.
 
Last edited:
  • #45


ThomasT said:
[..] Incompatibility between LR and qm/experiment doesn't imply that some form of nonlocality exists or that an underlying reality with specific properties doesn't -- in fact there's absolutely no empirical evidence that even suggests those notions. It's unfortunate that so much of the literature, and our understanding, has been clouded by claims to the contrary. [..]

Please clarify what you mean with "Incompatibility between local realism and [..] experiment doesn't imply that some form of nonlocality exists". Why do you say that the one doesn't imply the other? I don't even know the difference!

Thanks,
Harald
 
  • #46


harrylin said:
Please clarify what you mean with "Incompatibility between local realism and [..] experiment doesn't imply that some form of nonlocality exists". Why do you say that the one doesn't imply the other? I don't even know the difference!

Thanks,
Harald
Nonlocality for LR and qm refers to different things. For Einstein and local realists it refers to instantaneous action at a distance in real space and time. For qm it refers to an abstract and acausal math formalism whose connection to the reality underlying instrumental behavior is unknowable (i.e., not scientifically ascertainable).
 
Last edited:
  • #47


ThomasT said:
Nonlocality for LR and qm refers to different things. For Einstein and local realists it refers to instantaneous action at a distance in real space and time. For qm it refers to an abstract and acausal math formalism whose connection to the reality underlying instrumental behavior is unknowable (i.e., not scientifically ascertainable).


1. This looks to me like an excellent summary of the two positions: mainstream LR versus more mainstream QM. (Or the beginning of one.)

2. It certainly looks like my view of LR, which I associate with Einstein and EPR.

3. So I'd like to be sure that the summary is OK from the QM point of view.

4. In other words: I'd like to see this summary endorsed by those who believe a LR view of the world to be untenable; or by those who might modify the QM view (expressed above) to a more mainstream (and accurate) expression.

5. In other words: Can we sharpen the current dichotomy between LR and QM, in the way ThomasT has begun here, ENSURING that the views he has captured/initiated are "corrected if necessary" so as to be widely accepted by both camps ... and are similarly compressed?

6. In a nutshell: I personally see no objection to the LR view, as expressed above (at this early hour, for me). Is the QM view equally OK?
 
  • #48


harrylin said:
Please clarify what you mean with "Incompatibility between local realism and [..] experiment doesn't imply that some form of nonlocality exists". Why do you say that the one doesn't imply the other? I don't even know the difference!

Thanks,
Harald

You have the option of accepting non-realism and retaining locality.
 
  • #49


DrChinese said:
You have the option of accepting non-realism and retaining locality.
You mean like QFT? Is that really local in the sense that LR means local, i.e., in real space and time? I've not studied it yet.
 
  • #50


Gordon Watson said:
1. This looks to me like an excellent summary of the two positions: mainstream LR versus more mainstream QM. (Or the beginning of one.)

I sometimes call it "quantum non-locality" to make it clear that it complies with the QM formalism.

Since you are also a fan of EPR: I would say that EPR would never have contemplated the kind of correlations that today are commonplace in Bell tests. You have to believe that Bell would have altered Einstein's view of things substantially.
 
