Bell's theorem and local realism

  • #101
TrickyDicky said:
Your first definition goes into the causes of the nonlocality to avoid confrontation with relativity's disallowance of FTL signals, but the theorem works irrespective of the causes; it treats them like a black box.
So it is obvious that this is not a valid definition of locality as regards Bell's theorem.

PS I remind you that Boris Tsirelson, who may certainly be regarded as an authority in this field, states that Bell's theorem says that QM is incompatible with locality+realism+no-conspiracy and that the choice of which of those three to reject (taking QM to be true or close to true) is a matter of *taste* or if you prefer *philosophy*.

Sure, there are other authorities who say different things; and perhaps they have different definitions of locality, or perhaps are not so sharp in philosophy as they are in physics. I think that there is presently a consensus among experts on Bell's theorem that Tsirelson's statement is correct, but maybe there is a different broad consensus among physicists at large. So everyone can choose what is the "official line" and indeed according to Tsirelson everyone can choose what they like to believe.
 
  • #102
gill1109 said:
A *reasonable* definition of locality depends on what you take to be *real* hence located in space-time, and what you don't take to be real. Most people find it reasonable to let detector clicks be part of reality (according to MWI they are not real since only the set of possible outcomes is real; one particular branch is imagination). Whether or not the wave function is real and whether or not outcomes of unperformed measurements are real etc etc are questions of metaphysics.

So the definition of *locality* is not absolute, but relative.

I don't think the theorem is about realism, it is an exercise in logic, and it is concerned with locality in a quite specific and well defined way. Insisting on the definition being "relative", or on whether you take the term "local" as real or not, seems to render the theorem totally useless. It's like saying: the conclusion of the theorem depends on the meaning you choose to give to the central concept being proved (since its definition is relative), so that you can make the theorem conclude whatever you like just by adding conditions, or its conclusions depend on whether you grant that concept real significance.
In a theorem the definitions can't be relative in that sense; they had better be precisely specified, or it is not a theorem.
 
  • #103
TrickyDicky said:
I don't think the theorem is about realism, it is an exercise in logic, and it is concerned with locality in a quite specific and well defined way.

Bell's theorem is an answer to the question: "Can the correlations in EPR be explained by supposing that there are hidden local variables shared by the two particles?" The answer to that question is "no". It's not purely a question about locality, it's a question about a particular type of local model of correlations. The fact that it isn't purely about locality is proved by the possibility of superdeterministic local explanations for the EPR. (On the other hand, if you're going to allow superdeterminism, then the distinction between local and nonlocal disappears, I guess.)
 
  • #104
TrickyDicky said:
I disagree, based on Bell's theorem, not on later misconstructions of it.
The theorem rejects locality, period. The subsequent addition of the concept "local realism", which allowed one to keep locality if one gave up realistic descriptions of what was going on in order to get the probabilistic outcomes in experiments, was an ad hoc retelling, probably to avoid problems with relativistic QM.

See for instance:
T. Norsen, "Against 'realism'", Foundations of Physics, Vol. 37, No. 3, 311-340 (March 2007)

gill1109 said:
PS I remind you that Boris Tsirelson, who may certainly be regarded as an authority in this field, states that Bell's theorem says that QM is incompatible with locality+realism+no-conspiracy and that the choice of which of those three to reject (taking QM to be true or close to true) is a matter of *taste* or if you prefer *philosophy*.

Sure, there are other authorities who say different things; and perhaps they have different definitions of locality, or perhaps are not so sharp in philosophy as they are in physics. I think that there is presently a consensus among experts on Bell's theorem that Tsirelson's statement is correct, but maybe there is a different broad consensus among physicists at large. So everyone can choose what is the "official line" and indeed according to Tsirelson everyone can choose what they like to believe.

bohm2 has pointed out on these forums that Wiseman argues that there are two theorems and two definitions of locality, so that it depends on what one is talking about. http://arxiv.org/abs/1402.0351.
 
  • #105
TrickyDicky said:
I don't think the theorem is about realism, it is an exercise in logic, and it is concerned with locality in a quite specific and well defined way. Insisting on the definition being "relative", or on whether you take the term "local" as real or not, seems to render the theorem totally useless. It's like saying: the conclusion of the theorem depends on the meaning you choose to give to the central concept being proved (since its definition is relative), so that you can make the theorem conclude whatever you like just by adding conditions, or its conclusions depend on whether you grant that concept real significance.
In a theorem the definitions can't be relative in that sense; they had better be precisely specified, or it is not a theorem.

How about this method of arguing that reality is at least assumed when using a Bell test to demonstrate nonlocality? The Bell inequality is about the correlation between definite results. In quantum mechanics, we can put the Heisenberg cut wherever we want. So Bob can deny the reality of Alice having had a result at spacelike separation. Bob is entitled to say that he had a result indicating that Alice claimed a result at spacelike separation, but this result is about Alice's claim, which Bob obtained at non-spacelike separation. So there is no spacelike separation, and no Bell test.
 
  • #106
stevendaryl said:
Bell's theorem is an answer to the question: "Can the correlations in EPR be explained by supposing that there are hidden local variables shared by the two particles?" The answer to that question is "no". It's not purely a question about locality, it's a question about a particular type of local model of correlations.
That particular type, that local model, is what I call locality, which makes it purely a question about locality.
Is this the same locality as that of relativity, and classical field theory in general? What do you think?

The fact that it isn't purely about locality is proved by the possibility of superdeterministic local explanations for the EPR. (On the other hand, if you're going to allow superdeterminism, then the distinction between local and nonlocal disappears, I guess.)
And it therefore spoils the supposedly proven fact. :wink:
That's why I insist there should be one unified and specific definition of locality, to avoid semantic confusion.
 
  • #107
atyy said:
How about this method of arguing that reality is at least assumed when using a Bell test to demonstrate nonlocality? The Bell inequality is about the correlation between definite results. In quantum mechanics, we can put the Heisenberg cut wherever we want. So Bob can deny the reality of Alice having had a result at spacelike separation. Bob is entitled to say that he had a result indicating that Alice claimed a result at spacelike separation, but this result is about Alice's claim, which Bob obtained at non-spacelike separation. So there is no spacelike separation, and no Bell test.
Yes. Basically, as long as the quantum/classical cut problem is not resolved, this heuristic remains valid.
 
  • #108
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
 
  • #109
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.

So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which terms quantum mechanics as "nonlocal" and "causal"?
 
  • #110
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal.

Exactly. The problem is that I tend to think that QM's antirealism is so strong that I'm not sure it allows one to keep even that much for Bell.
However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality").
Hmmm, let's say I would favor this view of the situation. But subject to the above disclaimer. And probably biased by my admiration for both relativity and QM :-p
 
  • #111
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only 1 such model that was posted previously but it doesn't appear to be very popular and it's a difficult model to understand. I read it twice and still had trouble with it even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality is all that is necessary to get results?
 
  • #112
TrickyDicky said:
Let's give some context. It is not that the theorem introduces any "particle" concept as its premise. It is about the conclusions drawn from the theorem given a certain assumption that is virtually shared by the whole physics community, namely atomism, the atomic theory as the explanation of matter (the fundamental building blocks narrative).
[..]
Now I have to say that I disagree with Neumaier that classical field theory, like electrodynamics as understood at least since Lorentz, violates Bell's inequalities as a theory. The reason is that electrodynamics includes classical particles. So it is both local and realistic.
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.
 
  • #113
harrylin said:
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.

I had not read Neumaier's paper linked by you when I wrote that, and now I have just read the conclusions.
He seems to center his analysis just on EM radiation, whereas I was referring to the whole theory of electrodynamics, so it's natural that his argument has nothing to do with what I said there.

There is a trivial way in which, say, a plane wave is nonlocal, as it correlates its waveform across arbitrarily distant points.

His conclusion that "the present analysis demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions", even if it is true (I don't know, since I didn't read the analysis), does not look very useful to me, since ruling out classical wave models explaining QM experiments doesn't need Bell's theorem.

His other conclusion, "the traditional hidden variable assumptions therefore only amount to hidden particle assumptions, and the experiments demonstrating their violation are just another chapter in the old dispute between the particle or field nature of light conclusively resolved in favor of the field", I might agree with, as long as we use an extended notion of particle (basically any particle-like object).
 
  • #114
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic you simply have no Bell test anymore. Anything goes.
What do you mean? Anyway, QM does not allow *anything* to go. Not at all. QM can't get the CHSH quantity S to go above 2√2, but alternative theories could, still without violating locality; S could go all the way to 4.

It's called Tsirelson's inequality. I know that some very respectable and serious physicists have published an experimental violation of Tsirelson's inequality, and got it published in PRL or PRA - which says something about refereeing and editing and general knowledge among physicists - but fortunately for QM, their experiment was flawed (loopholes!).
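
As a quick sanity check, here is a minimal sketch of where those three numbers come from, assuming the sign convention in which equal settings give perfect correlation (so E(a, b) = cos(a - b) for the singlet-type state) and the standard optimal angles:

Code:
import math

# CHSH quantity: S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2).
# Convention assumed here: Bob's outcome is sign-flipped so that equal
# settings give perfect correlation, hence E(a, b) = cos(a - b).
def E(a, b):
    return math.cos(a - b)

a1, a2 = 0.0, math.pi / 2           # Alice's two settings
b1, b2 = math.pi / 4, -math.pi / 4  # Bob's two settings

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)                  # ~2.8284, i.e. 2*sqrt(2): Tsirelson's bound
print(2 * math.sqrt(2))   # the quantum-mechanical maximum
# Local hidden variables give |S| <= 2; the algebraic maximum, reachable by
# hypothetical "super-quantum" no-signalling correlations, is 4.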
 
  • #115
bohm2 said:
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only 1 such model that was posted previously but it doesn't appear to be very popular and it's a difficult model to understand. I read it twice and still had trouble with it even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality is all that is necessary to get results?

1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations stood up for long.
2) You could say that QM does not "explain" those correlations, it only describes them.
3) Bohmian theory does explain them, but it is non-local, of course (Bell's theorem).
4) No experiment has yet been performed which was both successful in violating Bell-type inequalities AND simultaneously satisfied the standard requirements for a "loophole-free" experiment, namely an experiment which (if successful) can't be explained by a LHV theory. Possibly such an experiment will finally get done within about a year from now. They're getting pretty damned close.

For instance, experiments with photons suffer from photons getting lost. You don't have a binary outcome; you have a ternary outcome: yes/no/disappeared (detection loophole). Experiments with atoms have the atoms so close together and the measurements so slow that it would be easy for one of the atoms to "know" how the other is being measured (locality loophole). Many experiments do not have fast, random switching of detector settings, so later "particles" can easily "know" how earlier particles were being measured (memory loophole).
 
  • #116
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which terms quantum mechanics as "nonlocal" and "causal"?

Belavkin's eventum mechanics provides a view of QM which is both local and causal. As long as you don't ask for a mechanistic, i.e. classical-like, explanation of "what is going on behind the scenes". You have to stop and accept quantum randomness. Irreducible. Intrinsic. Not like usual randomness ("merely statistical").

Sorry here I give you a reference to an unpublished unfinished manuscript by myself but it does give you some references and a quick easy (?) intro: http://arxiv.org/abs/0905.2723
 
  • #117
gill1109 said:
1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations stood up for long.

There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets. The basic idea was to construct (in the mathematical sense) a function F of type S^2 \rightarrow \{+1,-1\} (S^2 being a unit sphere, or alternatively the set of unit direction vectors in 3D space) such that

  1. The measure of the set of points \vec{a} such that F(\vec{a}) = 1 is 1/2.
  2. For almost all points \vec{a}, the measure of the set of points \vec{b} such that F(\vec{a}) = F(\vec{b}) is cos^2(\theta/2) where \theta is the angle between \vec{a} and \vec{b}

It is actually mathematically consistent to assume the existence of such a function. Such a function could be used for a hidden variable explanation of EPR, contrary to Bell. The loophole that this model exploits is that Bell implicitly assumed that everything of interest is measurable, while in Pitowsky's model, certain joint probabilities correspond to non-measurable sets.

The problem with Pitowsky's model turns out to be that a satisfactory physical interpretation of non-measurable sets is about as elusive as a satisfactory physical interpretation of QM. In particular, if your theory predicts that a certain set of events is non-measurable, and then you perform experiments to actually count the number of events, you will get some actual relative frequency. So the assumption, vital to making probabilistic models testable, that relative frequency approaches the theoretical probability, can't possibly hold for nonmeasurable sets. In that case, it's not clear what the significance of the theoretical probability is, in the first place.

In particular, as applied to the spin-1/2 EPR experiment, I think it's true that every finite set of runs of the experiment will have relative frequencies that violate Pitowsky's theoretical probabilities. That's not necessarily a contradiction, but it certainly shows that introducing non-measurable sets makes the interpretation of experiment statistics very strange.
 
  • #118
stevendaryl said:
There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets.
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103. Moreover it is stronger than the conventional result since it is a "finite N" result: a probability inequality for the observed correlations after N trials. The assumptions are slightly different from the usual ones: I put probability into the selection of settings, not into the particles.

No, sorry, all the people claiming that some mathematical niceties, e.g. measure theory, conventional definitions of integrability, or the topology of space-time, are the "way out" are barking up the wrong tree (IMHO).

Bell makes some conventional assumptions in order to write his proof out using conventional calculus. But you don't *have* to make those assumptions in order to get his main result. What you actually use is a whole lot weaker. Pitowsky only shows how Bell's line of proof would break down ... he does not realize that there are alternative lines of proof which would not break down even if one did not make measurability assumptions.

NB: the existence of non-measurable functions requires the axiom of choice. A somewhat arbitrary assumption about infinite numbers of infinite sets. There exist consistent axiom systems for mathematics without the axiom of choice in which all sets are measurable. So what are we talking about here? Formal word games, I think.
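
To see the "elementary logic" core of that claim, here is a minimal sketch (not the construction in the linked paper, just the basic counterfactual-definiteness bookkeeping): enumerate every deterministic assignment of the four potential outcomes and check that the CHSH combination never exceeds 2.

Code:
from itertools import product

# Counterfactual definiteness: in each run, all four potential outcomes
# A1, A2 (Alice's outcome under setting 1 or 2) and B1, B2 (Bob's) are
# assumed to exist simultaneously, each equal to +1 or -1.
chsh = lambda A1, A2, B1, B2: A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2

values = [chsh(*outcomes) for outcomes in product([-1, 1], repeat=4)]
print(max(values), min(values))  # 2 -2: no deterministic local assignment exceeds the bound
# Stochastic LHV models are averages over such assignments, so they obey the same bound.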
 
  • #119
gill1109 said:
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103.

I think maybe I had read something along those lines, which was the reason I said that the nice(?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (assuming the continuum hypothesis, this is possible) is to have an ordering (not the usual ordering) \leq on the unit interval [0,1] such that for every real number x in the interval, there are only countably many y such that y \leq x. Since every countable set has Lebesgue measure 0, we have the following truly weird situation possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number x and later, you generate the number y. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
 
  • #120
stevendaryl said:
I think maybe I had read something along those lines, which was the reason I said that the nice(?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (assuming the continuum hypothesis, this is possible) is to have an ordering (not the usual ordering) \leq on the unit interval [0,1] such that for every real number x in the interval, there are only countably many y such that y \leq x. Since every countable set has Lebesgue measure 0, we have the following truly weird situation possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number x and later, you generate the number y. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
I think you are referring here to paradoxes from "model theory" namely there exist countable models for the real numbers. Beautiful. It's a self-reference paradox, really just a hyped up version of the old paradox of the barber who shaves everyone in the village who doesn't shave himself. In some sense, it is just a word game. It's a useful tool in maths - one can prove theorems by proving theorems about proving theorems. Nothing wrong with that.

Maybe there is superficially a flavour of that kind of weirdness in quantum weirdness. But after studying this a long time (and analysing several such "solutions") I am certain that quantum weirdness is weirdness of a totally different nature. It is *physical*, it conflicts with our in-built instinctive understanding of the world (which got there by evolution. It allowed our ancestors to successfully raise more kids than the others. Evolution is blind and even leads species into dead ends, again and again!). So I would prefer to see it as quantum wonderfulness, not quantum weirdness.
 
  • #121
gill1109 said:
I know. As a mathematician I can tell you that this is quite bogus. Does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up).

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103. Moreover it is stronger than the conventional result since it is a "finite N" result: a probability inequality for the observed correlations after N trials. The assumptions are slightly different from the usual ones: I put probability into the selection of settings, not into the particles.

To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function F associated with the electron (and the positron) such that a subsequent measurement of spin in direction \vec{x} always gave the answer F(\vec{x}). We perform a series of measurements and compile statistics. What breaks down?

On the one hand, we could compute the relative probability that F(\vec{a}) = F(\vec{b}) and we conclude that it should be given by cos^2(\theta/2) (because F was constructed to make that true). On the other hand, we can always find other directions \vec{a'} and \vec{b'} such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability. I think this is a fundamental problem with relating non-measurable sets to experiment. The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
 
  • #122
stevendaryl said:
To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function F associated with the electron (and the positron) such that a subsequent measurement of spin in direction \vec{x} always gave the answer F(\vec{x}). We perform a series of measurements and compile statistics. What breaks down?

On the one hand, we could compute the relative probability that F(\vec{a}) = F(\vec{b}) and we conclude that it should be given by cos^2(\theta/2) (because F was constructed to make that true). On the other hand, we can always find other directions \vec{a'} and \vec{b'} such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability. I think this is a fundamental problem with relating non-measurable sets to experiment. The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
If we do the Bell-CHSH type experiment picking settings at random as we are supposed to, nothing breaks down. At least, nothing breaks down if you use a sharper method of proof than Bell's old approach.

That's the point of my own work, going back to http://arxiv.org/abs/quant-ph/0110137. No measurability assumptions. The only assumption is that both outcomes are simultaneously defined - both the outcomes which would have been seen if either setting had been in force, aka counterfactual definiteness. The experimenter tosses a coin and gets to see one or the other, at random. This works for Pitowsky's "model" too. It works for any LHV model. A function is a function whether it is measurable or not. It works for stochastic LHV models as well as deterministic ones. Just a matter of redefining what the hidden variable is.

The only escape is super-determinism so that I cannot actually effectively randomize experimental settings.
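
As a toy illustration of that setup (my own sketch; the particular hidden-variable strategy below is arbitrary and only for illustration): each run carries pre-defined counterfactual outcomes, the only randomness is in the 2N coin tosses choosing the settings, and the observed CHSH estimate stays at or below 2 up to statistical noise, whatever local strategy is plugged in.

Code:
import random

random.seed(0)
N = 100_000

def hidden_outcomes():
    """One run: the source fixes all four counterfactual outcomes in advance.
    The strategy here is arbitrary; any local strategy whatsoever may be substituted."""
    lam = random.random()          # hidden variable
    A1 = 1 if lam < 0.5 else -1
    A2 = 1 if lam < 0.75 else -1
    return A1, A2, A1, A2          # (A1, A2, B1, B2)

sums = {(x, y): 0.0 for x in (1, 2) for y in (1, 2)}
counts = {(x, y): 0 for x in (1, 2) for y in (1, 2)}

for _ in range(N):
    A1, A2, B1, B2 = hidden_outcomes()
    x, y = random.choice((1, 2)), random.choice((1, 2))   # the coin tosses
    A = A1 if x == 1 else A2
    B = B1 if y == 1 else B2
    sums[(x, y)] += A * B
    counts[(x, y)] += 1

E = {k: sums[k] / counts[k] for k in sums}
S = E[(1, 1)] + E[(1, 2)] + E[(2, 1)] - E[(2, 2)]
print(S)   # never significantly above 2, no matter which local strategy is used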
 
  • #123
TrickyDicky said:
I had not read Neumaier's paper linked by you when I wrote that, and now I have just read the conclusions.
[..]
ruling out classical wave models explaining QM experiments doesn't need Bell's theorem.
Sure. Over the last few days I did read some of his papers as I found them, and (as you may have guessed) that's not what he had in mind. He (re)discovered that QM is totally incompatible with classical particle theory but very close to classical wave theory. The naive particle concept must be dropped.
There is a trivial way in which, say, a plane wave is nonlocal, as it correlates its waveform across arbitrarily distant points. [..]
If I'm not mistaken, all matter is similarly modeled in QFT as field excitations.
 
  • #124
stevendaryl said:
On the other hand, we can always find other directions \vec{a'} and \vec{b'} such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).
No this is a misunderstanding. My theorem says that for the set of correlations you actually did choose to measure, the chance that they'll violate CHSH by more than some given amount is incredibly small if N is pretty large. The theorem doesn't say anything about what you didn't do. It only talks about what you actually did experimentally observe. It assumes you are doing a regular CHSH type experiment - Alice and Bob are repeatedly and independently choosing between just two particular settings. So only four correlations are getting measured.
Note, Pitowsky has a non-measurable law of large numbers which says that the relative frequency of the event you are looking at will continue forever to fluctuate between its outer and inner probability. Those two numbers can be 1 and 0 respectively. So what. My theorem talks about the chance of something happening for a given fixed finite value of N, conditional on the values of the hidden variables, etc. The probability in my theorem is *exclusively* in the 2N coin tosses determining Alice and Bob's settings. If N goes to infinity it doesn't matter at all whether or not the quantum averages converge. There are always subsequences along which they converge, by compactness. Along any such subsequence, in the long run CHSH will certainly be violated by more than epsilon at most finitely many times. (Here I am using the Borel-Cantelli lemma, which is how you can prove the strong law of large numbers once you have got an exponential bound like we have here.)
 
  • #125
gill1109 said:
I think you are referring here to paradoxes from "model theory" namely there exist countable models for the real numbers.

No, not at all. Let \omega_1 be the smallest uncountable ordinal. Then for any ordinal \alpha < \omega_1 (with < the usual ordering on ordinals), there are only countably many \beta < \alpha but there are uncountably many \beta > \alpha. So if we assume the continuum hypothesis, then every real in [0,1] can be associated with an ordinal less than \omega_1. This gives us a total ordering on reals such that for any x there are only countably many smaller reals in [0,1] but uncountably many larger reals.

Beautiful. It's a self-reference paradox, really just a hyped up version of the old paradox of the barber who shaves everyone in the village who doesn't shave himself. In some sense, it is just a word game. It's a useful tool in maths - one can prove theorems by proving theorems about proving theorems. Nothing wrong with that.

No, I don't think it's paradoxical in that sense. It's perfectly consistent mathematics (unlike the Liar Paradox, which is an actual logical contradiction). It's just weird.
 
  • #126
stevendaryl said:
No, not at all. Let \omega_1 be the smallest uncountable ordinal. Then for any ordinal \alpha < \omega_1 (with < the usual ordering on ordinals), there are only countably many \beta < \alpha but there are uncountably many \beta > \alpha. So if we assume the continuum hypothesis, then every real in [0,1] can be associated with an ordinal less than \omega_1. This gives us a total ordering on reals such that for any x there are only countably many smaller reals in [0,1] but uncountably many larger reals.
I think you are wrong. The continuum hypothesis tells us that the unit interval has the same cardinality as Aleph_1, the first cardinal number larger than Aleph_0, the first infinite cardinal. This does not mean that the numbers in [0, 1] can be put in 1-1 correspondence with 1, 2, ... You are saying that there is a 1-1 map from [0, 1] to the numbers 1, 2, ..., hence that [0, 1] is countable.

The continuum hypothesis says there is no cardinality strictly between Aleph_0, the first infinite cardinal = the cardinality of the set of the natural numbers, and 2^Aleph_0, the cardinality of the set of functions from Aleph_0 to {0, 1}, which is easily seen to be the same as that of the unit interval on the real line. So there is no infinite set which cannot be put into one-to-one correspondence with the natural numbers, which is the domain of some one-to-one map into the unit interval, but which cannot be put into one-to-one correspondence with the whole unit interval.

Maybe you are mixing up cardinals and ordinals?
 
  • #127
gill1109 said:
No this is a misunderstanding. My theorem says that for the set of correlations you actually did choose to measure, the chance that they'll violate CHSH by more than some given amount is incredibly small if N is pretty large. The theorem doesn't say anything about what you didn't do. It only talks about what you actually did experimentally observe. It assumes you are doing a regular CHSH type experiment - Alice and Bob are repeatedly and independently choosing between just two particular settings. So only four correlations are getting measured.

I don't think there's a misunderstanding. I'm just saying that there is an apparent contradiction and I don't see how to resolve it.

Imagine generating a sequence of Pitowsky spin-1/2 functions:

F_1, F_2, ...

For each such run, you let Alice and Bob pick a direction:

a_1, b_1
a_2, b_2
...

Then we look up their corresponding results:

R_{A,1} = F_1(a_1), R_{B,1} = F_1(b_1)
R_{A,2} = F_2(a_2), R_{B,2} = F_2(b_2)
...

The question is: what are the statistics for correlations between Alice's results and Bob's results?

On the one hand, your finite version of Bell's inequality can show that (almost certainly) the statistics can't match the predictions of QM. On the other hand, the functions F_j were specifically constructed so that the probability of Bob getting F_j(b_j) = +1 given that Alice got F_j(a_j) = +1 is given by the QM relative probabilities. That seems to be a contradiction. So what goes wrong?
 
  • #128
gill1109 said:
I think you are wrong. The continuum hypothesis tells us that the unit interval has the same cardinality as Aleph_1, the first cardinal number larger than Aleph_0, the first infinite cardinal. This does not mean that the numbers in [0, 1] can be put in 1-1 correspondence with 1, 2, ... You are saying that there is a 1-1 map from [0, 1] to the numbers 1, 2, ..., hence that [0, 1] is countable.

The continuum hypothesis says there is no cardinality strictly between Aleph_0, the first infinite cardinal = the cardinality of the set of the natural numbers, and 2^Aleph_0, the cardinality of the set of functions from Aleph_0 to {0, 1}, which is easily seen to be the same as that of the unit interval on the real line. So there is no infinite set which cannot be put into one-to-one correspondence with the natural numbers, which is the domain of some one-to-one map into the unit interval, but which cannot be put into one-to-one correspondence with the whole unit interval.

Maybe you are mixing up cardinals and ordinals?

I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.
 
  • #129
stevendaryl said:
I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.

Hold it. Aleph_1 is the first uncountable *cardinal* not ordinal.

AFAIK, the continuum hypothesis does not say that the unit interval is in one-to-one correspondence with the set of countable *ordinals*. But maybe you know things about the continuum hypothesis which I don't know. Please give a reference.
 
  • #130
stevendaryl said:
I didn't say that the unit interval can be put into a one-to-one correspondence with the naturals; I said that it can be put into a one-to-one correspondence with the countable ordinals. The set of countable ordinals goes way beyond the naturals. The naturals are the finite ordinals, not the countable ordinals.

Aleph_1 is (if one uses the Von Neumann ordinals) equal to the set of all countable ordinals. So there are uncountably many countable ordinals. The continuum hypothesis implies that Aleph_1 has the same cardinality as the continuum, and so it implies that the unit interval can be put into a one-to-one correspondence with the countable ordinals.

The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals \alpha: [0,1] = \{ r_\alpha | \alpha < \mathcal{C}\} where \mathcal{C} is the cardinality of the continuum. The continuum hypothesis implies that \mathcal{C} = \omega_1, the first uncountable ordinal (\omega_1 is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[0,1] = \{ r_\alpha | \alpha < \omega_1\}

If \alpha < \omega_1, then that means that \alpha is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [0,1] by using r_\alpha < r_\beta \leftrightarrow \alpha < \beta, then for every x in [0,1] there are only countably many y such that y < x.
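
Spelled out, and connecting back to the "probability 0" example earlier in the thread:

[0,1] = \{ r_\alpha \mid \alpha < \omega_1 \}, \qquad r_\alpha \prec r_\beta \Leftrightarrow \alpha < \beta.

For any x = r_\beta, the set \{ y \mid y \prec x \} = \{ r_\alpha \mid \alpha < \beta \} is countable (since \beta < \omega_1), hence has Lebesgue measure 0; that is exactly what makes P(Y \prec x) = 0 for Y uniform on [0,1] in that earlier example.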
 
  • #131
stevendaryl said:
I don't think there's a misunderstanding. I'm just saying that there is an apparent contradiction and I don't see how to resolve it.

Imagine generating a sequence of Pitowsky spin-1/2 functions:

F_1, F_2, ...

For each such run, you let Alice and Bob pick a direction:

a_1, b_1
a_2, b_2
...

Then we look up their corresponding results:

R_{A,1} = F_1(a_1), R_{B,1} = F_1(b_1)
R_{A,2} = F_2(a_2), R_{B,2} = F_2(b_2)
...

The question is: what are the statistics for correlations between Alice's results and Bob's results?

On the one hand, your finite version of Bell's inequality can show that (almost certainly) the statistics can't match the predictions of QM. On the other hand, the functions F_j were specifically constructed so that the probability of Bob getting F_j(b_j) = +1 given that Alice got F_j(a_j) = +1 is given by the QM relative probabilities. That seems to be a contradiction. So what goes wrong?

Pitowsky can come up with one function or lots but he doesn't know in advance which arguments we are going to supply it with. In the j'th run one of his functions is "queried" once (well once on each side of the experiment) and generates two outcomes +/-1. His "probabilities" are irrelevant. If he is using non-measurable functions he can't control what "probabilities" come out when these functions are queried infinitely often. I don't see any point in trying to rescue his approach. But you can try if you like. I think it is conceptually unsound.
 
  • #132
stevendaryl said:
The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals \alpha: [0,1] = \{ r_\alpha | \alpha < \mathcal{C}\} where \mathcal{C} is the cardinality of the continuum. The continuum hypothesis implies that \mathcal{C} = \omega_1, the first uncountable ordinal (\omega_1 is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[0,1] = \{ r_\alpha | \alpha < \omega_1\}

If \alpha < \omega_1, then that means that \alpha is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [0,1] by using r_\alpha < r_\beta \leftrightarrow \alpha < \beta, then for every x in [0,1] there are only countably many y such that y < x.
Thanks, you are right!

So the set of countable ordinals is very, very, very large. Your "ordering" of [0, 1] is not actually countable, even though every initial segment of it is. Well, that's how it has to be if we want both the axiom of choice and the continuum hypothesis to be true. But it is merely a matter of taste whether or not we want them to be true. The physics of the universe does not depend on these axioms of infinite sets being true or not. So maybe there are physical grounds to prefer not to have some of these axioms - we might get a mathematics which was more physically appealing by making different choices. There have been a number of very serious proposals along these lines. Pick axioms of the infinite not on the grounds of mathematical expediency but on the grounds of physical intuition.
 
  • #133
Bohm described it very well. As Bell himself said, you can't get away with "no action at a distance"...

 
  • #134
EEngineer91 said:
Bohm described it very well. As Bell himself said, you can't get away with "no action at a distance"...


When/where did Bell say that? Just like Bohr, there is a young Bell and an older and wiser Bell ... Young Bell was a fan of Bohmian mechanics. Older Bell liked the CSL theory. Always (?) Bell was careful to distinguish his gut feelings about a matter, from what logic would allow us to conclude.

Look: you can't simulate quantum correlations with a local hidden variables model without cheating. That's exactly what Bell's theorem says. If you *know* that there must be a hidden variables model explaining QM, then you *know* there is non-locality.

QM does not allow action-at-a-distance in the world of what we can see and feel and measure. If you want to simulate QM with hidden variables, you'll have to put action-at-a-distance into the hidden layer.
 
  • #135
Please watch the video; that is not a young Bell saying this. He was always a fan of Bohm's work, but unfortunately he died early as well. The most important line in the video is at the end: "you can't get away with NO action at a distance"... Non-locality is fine; it just bugs the relativists and those who think c is a universal speed barrier to everything, when it is just a constant of electromagnetism.
 
  • #136
EEngineer91 said:
Please watch the video; that is not a young Bell saying this. He was always a fan of Bohm's work, but unfortunately he died early as well. The most important line in the video is at the end: "you can't get away with NO action at a distance"... Non-locality is fine; it just bugs the relativists and those who think c is a universal speed barrier to everything, when it is just a constant of electromagnetism.
will do

He is a bit subtle. He says: I cannot say that action at a distance is not needed; I can say that you can't say it is not needed. This is like Buddha talking about the self. He is saying that our usual categories of thought are *wrong*. Because of the words in our vocabulary and our narrow interpretation of what they mean, we ask stupid questions, and hence get stupid answers.

Beautiful! Exactly what I have been thinking for a long time...
 
  • #137
gill1109 said:
Hold it. Aleph_1 is the first uncountable *cardinal* not ordinal.

In the Von Neumann representation of ordinals and cardinals, a cardinal is an ordinal; \alpha is a cardinal if it is an ordinal, and for any other ordinal \beta < \alpha, there is no one-to-one correspondence between \alpha and \beta. So in the Von Neumann representation, the first uncountable ordinal is also the first uncountable cardinal.

AFAIK, the continuum hypothesis does not say that the unit interval is in one-to-one correspondence with the set of countable *ordinals*.

Yes, it does imply that. With the Von Neumann representation of ordinals, any ordinal is the set of all smaller ordinals. So the set of all countable ordinals is itself an ordinal. It has to be uncountable (otherwise, it would be an element of itself, which is impossible). So it's the smallest uncountable ordinal, \omega_1. The continuum hypothesis says that there is no cardinality between countable and the continuum, so the continuum has to equal \omega_1.

But maybe you know things about the continuum hypothesis which I don't know. Please give a reference.

I did some Googling, and I don't see the claim stated explicitly anywhere, although it's a trivial consequence of other statements.

http://en.wikipedia.org/wiki/Aleph_number
\aleph_1 is the cardinality of the set of all countable ordinal numbers...

the celebrated continuum hypothesis, CH, is equivalent to the identity

2^{\aleph_0}=\aleph_1

Together, those statements imply that the continuum has the same cardinality as the set of countable ordinals. Having the same cardinality means that they can be put into one-to-one correspondence.
 
  • #138
gill1109 said:
Pitowsky can come up with one function or lots but he doesn't know in advance which arguments we are going to supply it with. In the j'th run one of his functions is "queried" once (well once on each side of the experiment) and generates two outcomes +/-1. His "probabilities" are irrelevant. If he is using non-measurable functions he can't control what "probabilities" come out when these functions are queried infinitely often. I don't see any point in trying to rescue his approach. But you can try if you like. I think it is conceptually unsound.

Yes, that's my point--there seems to be a contradiction between the formal computed probabilities and the intuitive notion of probabilities as limits of relative frequencies. Maybe that means that the mathematical possibility of nonmeasurable sets is inconsistent with our use of probabilities for physics.

It's not so much that I'm trying to rescue Pitowsky's approach--from the very first, it seemed to me like a toy model to show the subtleties involved in Bell's proof that are easy to gloss over. At this point, I'm really trying to reconcile two different mathematical results that both seem pretty rigorous, but seem to contradict each other. Whether or not Pitowsky's functions have any relevance to the real world, we can reason about them---they are pretty well-defined, mathematically. I'm trying to understand what goes wrong in reasoning about them.
 
  • #139
stevendaryl said:
Together, those statements imply that the continuum has the same cardinality as the set of countable ordinals. Having the same cardinality means that they can be put into one-to-one correspondence.
Agree. This is what continuum hypothesis and axiom of choice tell us. But we are free not to believe either. Formal mathematics is consistent with them if and only if it is consistent without them. One could have other axioms instead, e.g. all subsets of [0,1] are Lebesgue measurable. Maybe that would be a nicer axiom for physics applications. No more Banach-Tarski paradox. All kinds of advantages ...
 
  • #140
gill1109 said:
will do

He is a bit subtle. He says: I cannot say that action at a distance is not needed; I can say that you can't say it is not needed. This is like Buddha talking about the self. He is saying that our usual categories of thought are *wrong*. Because of the words in our vocabulary and our narrow interpretation of what they mean, we ask stupid questions, and hence get stupid answers.

Beautiful! Exactly what I have been thinking for a long time...

Yes, very subtle...but important.
 
  • #141
stevendaryl said:
I don't think they are wildly different if you don't have locality. Let's do things classically, rather than quantum-mechanically. For simplicity, let's just consider
...
Now, that pair of equations is exactly equivalent to a problem in 2-D space (3D spacetime) involving just one particle ...

So I think that it's really locality that makes the dimensionality of spacetime meaningful.
With two particles each in N-dimensional space, you can destroy one and still have the other (even if you consider non-locality). With a single particle in 2N dimensional space, you can not destroy it and still have it, nor can you destroy half of it and convert it into an N-dimensional particle. You may have the same symbols in your equations but they mean totally different things even if they look the same.
 
  • #142
billschnieder said:
With two particles each in N-dimensional space, you can destroy one and still have the other (even if you consider non-locality). With a single particle in 2N dimensional space, you can not destroy it and still have it, nor can you destroy half of it and convert it into an N-dimensional particle. You may have the same symbols in your equations but they mean totally different things even if they look the same.

Okay, I guess I would amend what I said to the following: if your laws of physics are such that the number of particles is constant, then there is no difference between N particles in 3D spacetime and 1 particle in 3N-D space.

With a variable number of particles, the interpretation doesn't work, unless you also allow the dimension of space to vary with time. (Why not?)
 
  • #143
EEngineer91 said:
Yes, very subtle...but important.
Very important indeed!
 
  • #144
Yet, for some reason, many physicists of today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems, such as defining precisely "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".
 
  • #145
EEngineer91 said:
Yet, for some reason, many physicists of today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems, such as defining precisely "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".
Yep. I think his clear thinking and clear writing (and sense of humour) are unsurpassed.
 
  • #146
stevendaryl said:
To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function F associated with the electron (and the positron) such that a subsequent measurement of spin in direction \vec{x} always gave the answer F(\vec{x}). We perform a series of measurements and compile statistics. What breaks down?
Maybe you can provide a citation for Pitowsky's model so others can follow?

On the one hand, we could compute the relative probability that F(\vec{a}) = F(\vec{b}) and we conclude that it should be given by cos^2(\theta/2) (because F was constructed to make that true). On the other hand, we can always find other directions \vec{a'} and \vec{b'} such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).
According to this paper by Pitowsky, http://arxiv.org/pdf/0802.3632.pdf, it would appear what breaks down is the tacit assumption that all directions are measurable at the same time.

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability.
Yes, if they are measured at the same time. But nobody really does that anyway.

I think this is a fundamental problem with relating non-measurable sets to experiment.
See previous point. It is not a problem at all. A non-measurable set would just be an impossible/contradictory scenario physically.

The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
It depends what those theoretical probabilities are. Nothing prevents one from combining one probability with a mutually incompatible probability theoretically. But experimentally you won't be measuring them at the same time. My guess is, it's not the probabilities themselves that can't be related to relative frequencies, but the relationships between contradictory probabilities.
 
  • #147
billschnieder said:
According to this paper by Pitowsky, http://arxiv.org/pdf/0802.3632.pdf, it would appear what breaks down is the tacit assumption that all directions are measurable at the same time.
Different paper, different point. I will look up the *relevant* Pitowsky reference later. It is a very difficult and rather technical paper and I personally believe it is conceptually flawed. Sure, some fun generalized abstract nonsense. But no relevance. (Just my personal opinion...) AFAIK, it has not been followed up by anyone...

If you have a LHV model, then even if you can only measure one direction at a time, the outcome that you would have had, had you actually measured in another direction, is defined ... even if unavailable.
 
  • #148
gill1109 said:
It is easy to create *half* the cosine curve by LHV.
The problem is not the "particle" concept in the hidden layer, in the physics behind the scenes; it is the discreteness of the manifest outcomes. Click or no-click. +1 or -1.

IMO it is possible to generate an LHV covariance which is bigger than 0.5 at 45 degrees, i.e. it can be a cosine curve.
But the sum of the 4 covariances in CHSH is still at most 2, and this is the point of Bell's theorem: a combination of covariances.
 
  • #149
gill1109 said:
Different paper, different point. I will look up the *relevant* Pitowsky reference later. It is a very difficult and rather technical paper and I personally believe it is conceptually flawed. Sure, some fun generalized abstract nonsense. But no relevance. (Just my personal opinion...) AFAIK, it has not been followed up by anyone...

Pitowsky's model appeared in Stanley Gudder's book "Quantum Probability". That's where I heard of it.
 
  • #150
jk22 said:
IMO it is possible to generate an LHV covariance which is bigger than 0.5 at 45 degrees, i.e. it can be a cosine curve.
But the sum of the 4 covariances in CHSH is still at most 2, and this is the point of Bell's theorem: a combination of covariances.
It is easy to create half a cosine curve by LHV.

It is easy to create a correlation bigger than 0.5 at 45 degrees.

CHSH says that it is impossible to have three correlations extremely large and one simultaneously extremely small, when the four are the four correlations formed by combining one of two settings on Alice's side with one of two settings on Bob's side.

Yes, it is very difficult to get any feeling for what this really means.

One could try something like this:

If r(a1, b2) is large, and r(a2, b2) is large, and r(a2, b1) is large, then we would expect r(a1, b1) to be large too.

Better still for pedagogical purposes, replace the usual "perfect anti-correlation at equal settings" of the singlet state version of the experiment by "perfect correlation at equal settings" by multiplying Bob's outcome by -1. Or switch from spin of electrons to polarization of photons.

For pedagogical purposes, forget about correlations and talk about the probability of equal outcomes

If Prob(A1 = B2) is large and Prob(A2 = B2) is large and Prob(A2 = B1) is large, then we would expect Prob(A1 = B1) to be large too.

If the first three probabilities are at least 1 - gamma then the fourth can't be smaller than 1 - 3 gamma. Take gamma = 0.25 and the first three would be 0.75 and the fourth 0.25. That's the largest one can get with LHV. This corresponds to CHSH value S = 2 = 4 * 0.5 = (2 * 0.75 - 1) + (2 * 0.75 - 1) + (2 * 0.75 - 1) - (2 * 0.25 -1)

But QM can have the first three probabilities equal to 0.85 and the fourth equal to 0.15. That corresponds to S = 2.8 = 4 * 0.7 = (2 * 0.85 - 1) + (2 * 0.85 - 1) + (2 * 0.85 - 1) - (2 * 0.15 -1) (in fact it can even be equal to 2.828... under QM but let's keep the numbers simple).
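
A quick arithmetic check of those numbers (a sketch; the cos^2(pi/8) value is the commonly quoted quantum optimum for this probability-of-equal-outcomes formulation at the standard optimal angles):

Code:
import math

# Probability-of-equal-outcomes form: each correlation is E = 2*Prob(equal) - 1,
# and S = E(A1,B2) + E(A2,B2) + E(A2,B1) - E(A1,B1), as in the text above.
def S_from_probs(p_large1, p_large2, p_large3, p_small):
    E = lambda p: 2 * p - 1
    return E(p_large1) + E(p_large2) + E(p_large3) - E(p_small)

print(S_from_probs(0.75, 0.75, 0.75, 0.25))   # 2.0, the local hidden variable limit
print(S_from_probs(0.85, 0.85, 0.85, 0.15))   # 2.8, the rounded quantum example

p = math.cos(math.pi / 8) ** 2                # ~0.8536 at the optimal angles
print(S_from_probs(p, p, p, 1 - p))           # ~2.8284, i.e. 2*sqrt(2)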
 