Bell's theorem and local realism

Summary:
Bell's theorem demonstrates that quantum mechanics predicts correlations between entangled particles that cannot be explained by local realism, which assumes that outcomes depend solely on local factors. The theorem is fundamentally about correlations between detector outcomes rather than the nature of particles themselves, challenging the notion of particles as realistic, localized objects. Some interpretations suggest that if detector outcomes are not identified with particles, the violation of Bell's inequalities may not necessarily negate local realism. The discussion highlights the assumptions underlying Bell's inequalities, particularly regarding causal influences and the nature of the variables involved. Ultimately, while the theorem does not explicitly mention particles, it is often interpreted within the context of particle physics, leading to debates about the implications for local realism.
  • #91
TrickyDicky said:
Sure, that is the usual non-realist "there is no quantum world" camp "a la Bohr".
The zillions of forum threads dedicated to interpretations of the quantum world are testimony that this view leaves many people unsatisfied, which in itself is not a compelling reason to think that it is not the correct way to view it.

Exactly. I think the reason for the dissatisfaction is biological and evolutionary. Our brains are built to *know* that every effect has a cause; "true" randomness does not exist. It scares us deeply, or we attribute it to Gods. We have no problem with action at a distance: Gods can, and do, do that. We know it when we are born. I wrote a small passage on this in my http://arxiv.org/abs/1207.5103 . I'm checking proofs right now for "Statistical Science"; it's an invited paper in a special issue on causality.
 
  • #92
stevendaryl said:
If we insist on locality, then there is a big difference. So I think that it's really locality that makes the dimensionality of spacetime meaningful.
Exactly. Locality allows a big problem with many dimensions to be decomposed, separated, into many small problems with few dimensions.
 
  • #93
gill1109 said:
Exactly. I think the reason for the dissatisfaction is biological and evolutionary. Our brains are built to *know* that every effect has a cause; "true" randomness does not exist. It scares us deeply, or we attribute it to Gods. We have no problem with action at a distance: Gods can, and do, do that. We know it when we are born. I wrote a small passage on this in my http://arxiv.org/abs/1207.5103 . I'm checking proofs right now for "Statistical Science"; it's an invited paper in a special issue on causality.

I don't think that the dissatisfaction with interpretations of quantum mechanics is really about rejection of determinism. To me, it's not that hard to imagine incorporating nondeterminism into your laws of physics. Rather than having laws describing the state at time t_1 as a deterministic function of the state at time t_0, you instead have a probability distribution P(S_1, t_1 | S_0, t_0) giving the probability of being in state S_1 at time t_1 conditional on being in state S_0 at time t_0. I don't think it would be a huge challenge, conceptually, to make that transition from deterministic Newtonian physics.

But what's confounding about QM is that there doesn't seem to be any good notion of "What is the state at time t_0?" There's the wave function, or the density matrix, but that seems to be not a description of the universe, but a description of our subjective information about the universe. I think it's the lack of any coherent notion of what the universe is really doing (as opposed to what experimenters are doing) that is so confounding about QM. Nondeterminism isn't the real problem (although if QM were deterministic, then we would be able to understand the real state of the universe at any time to be the sum total of the information necessary to predict future measurements, so I guess nondeterminism is involved, indirectly).
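The kind of stochastic law described above can be sketched in a few lines. This is a toy illustration only: a hypothetical two-state system with made-up transition probabilities, standing in for the conditional distribution P(S_1, t_1 | S_0, t_0).

```python
import random

# Hypothetical two-state system with made-up transition probabilities:
# instead of a deterministic map S0 -> S1, the dynamical law is a
# conditional distribution P(S1 | S0).
P = {
    "A": {"A": 0.9, "B": 0.1},
    "B": {"A": 0.3, "B": 0.7},
}

def step(state):
    """Sample the next state from the conditional distribution P(. | state)."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in P[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

random.seed(0)
history = ["A"]
for _ in range(5):
    history.append(step(history[-1]))
print(history)  # one sample trajectory through the two states
```

Conceptually this is exactly the "easy" kind of nondeterminism: the law is still local in time and state, it just outputs probabilities instead of certainties.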
 
  • #94
stevendaryl said:
I don't think that the dissatisfaction with interpretations of quantum mechanics is really about rejection of determinism. To me, it's not that hard to imagine incorporating nondeterminism into your laws of physics. Rather than having laws describing the state at time t_1 as a deterministic function of the state at time t_0, you instead have a probability distribution P(S_1, t_1 | S_0, t_0) giving the probability of being in state S_1 at time t_1 conditional on being in state S_0 at time t_0. I don't think it would be a huge challenge, conceptually, to make that transition from deterministic Newtonian physics.

But what's confounding about QM is that there doesn't seem to be any good notion of "What is the state at time t_0?" There's the wave function, or the density matrix, but that seems to be not a description of the universe, but a description of our subjective information about the universe. I think it's the lack of any coherent notion of what the universe is really doing (as opposed to what experimenters are doing) that is so confounding about QM. Nondeterminism isn't the real problem (although if QM were deterministic, then we would be able to understand the real state of the universe at any time to be the sum total of the information necessary to predict future measurements, so I guess nondeterminism is involved, indirectly).
Well if it were just a question of allowing Nature to toss local dice from time to time, no one would have a problem with it. And whether that were random or deterministic would be a matter of taste. One can imagine all the outcomes of all the tosses of all the dice which are going to be needed, being done in advance and stored "inside" the particles or whatever for later use. The trouble is that Bell tells us Nature doesn't do it this way. If Nature is tossing quantum dice, then the probabilities of the different joint outcomes concerning something going on both at A and at B need to depend on information which is only available at A but not at B, and vice versa. There is no way to have separate dice at separate places, the probabilities of the different outcomes for each die only depending on local information. In other words, a die which is "locally manufactured".

Whether such a local die is "truly random" or only "pseudo-random" ... makes no difference.
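The "locally manufactured die" point can be checked by brute force. A minimal sketch (assuming the standard CHSH combination of correlations): enumerate every possible die, i.e. a prestored ±1 outcome for each of Alice's settings a1, a2 and Bob's settings b1, b2, and verify that none exceeds the local bound of 2.

```python
from itertools import product

# Each "locally manufactured die" prestores an outcome (+1 or -1) for
# every possible setting: Alice's a1, a2 and Bob's b1, b2.
best = 0.0
for a1, a2, b1, b2 in product([-1, 1], repeat=4):
    # CHSH combination of the four correlations for this fixed assignment
    S = a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
    best = max(best, abs(S))

print(best)  # 2 -- no prestored local assignment beats the CHSH bound
```

Mixing such dice with any probability distribution just averages these values, so every local hidden-variable model stays at |S| ≤ 2, while quantum mechanics reaches 2√2 at suitably chosen angles. That gap is the content of Bell's theorem in the CHSH form.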
 
  • #95
gill1109 said:
Well if it were just a question of allowing Nature to toss local dice from time to time, no one would have a problem with it. And whether that were random or deterministic would be a matter of taste. One can imagine all the outcomes of all the tosses of all the dice which are going to be needed, being done in advance and stored "inside" the particles or whatever for later use. The trouble is that Bell tells us Nature doesn't do it this way. If Nature is tossing quantum dice, then the probabilities of the different joint outcomes concerning something going on both at A and at B need to depend on information which is only available at A but not at B, and vice versa. There is no way to have separate dice at separate places, the probabilities of the different outcomes for each die only depending on local information. In other words, a die which is "locally manufactured".

Whether such a local die is "truly random" or only "pseudo-random" ... makes no difference.

Of course, QM actually has an interpretation that is similar to your "all the outcomes..being done in advance". You could imagine an enumeration of all possible macroscopic histories of the universe, and at the beginning of time, one is chosen. The information about the chosen history would be embedded in a hidden variable in every single particle, and then each particle just carries out its predetermined program. Such a superdeterministic theory is consistent with QM (or with absolutely any theory of physics), but smacks of being a conspiracy.
 
  • #96
stevendaryl said:
Of course, QM actually has an interpretation that is similar to your "all the outcomes..being done in advance". You could imagine an enumeration of all possible macroscopic histories of the universe, and at the beginning of time, one is chosen. The information about the chosen history would be embedded in a hidden variable in every single particle, and then each particle just carries out its predetermined program. Such a superdeterministic theory is consistent with QM (or with absolutely any theory of physics), but smacks of being a conspiracy.
Yep. There's nothing wrong with determinism. But there's a lot wrong with conspiratorial superdeterminism. It explains everything but in a very "cheap" way. It has no predictive power. The smallest description of how the universe works is the history of the whole universe.
 
  • #97
gill1109 said:
Quantum mechanics is not in conflict with locality. There is no action at a distance, no "Bell telephone", no way to use the quantum correlations to communicate instantaneously over some distance. It is only when one hypothesizes an otherwise invisible hidden layer which "explains" those correlations in a classical (mechanistic, deterministic) way that one runs into locality issues.

I disagree, based on Bell's theorem, not on later misconstructions of it.
The theorem rejects locality, period. The subsequent addition of the concept of "local realism", which allowed one to keep locality provided one gave up realistic descriptions of what was going on in order to get the probabilistic outcomes in experiments, was an ad hoc retelling, probably to avoid problems with relativistic QM.

See for instance:
T. Norsen, "Against 'realism'", Foundations of Physics, Vol. 37, No. 3, pp. 311-340 (March 2007)
 
  • #98
TrickyDicky said:
I disagree, based on Bell's theorem, not on later misconstructions of it. The theorem rejects locality, period.

It depends on exactly how "locality" is defined. If you define it in terms of interactions--that nothing happening at one event can have a causal influence on something happening at a distant (spacelike separated) event, then QM is perfectly local. Alternatively, you can define it in terms of "local beables" (Bell's term): a theory is local if the most complete description of the state of the world factors into descriptions of what's going on in tiny, localized regions of the world. By that definition, QM is not local, because entanglement means that there are facts about what's going on in distant parts of the world that don't factor into facts about each part separately.
 
  • #99
stevendaryl said:
It depends on exactly how "locality" is defined. If you define it in terms of interactions--that nothing happening at one event can have a causal influence on something happening at a distant (spacelike separated) event, then QM is perfectly local. Alternatively, you can define it in terms of "local beables" (Bell's term): a theory is local if the most complete description of the state of the world factors into descriptions of what's going on in tiny, localized regions of the world. By that definition, QM is not local, because entanglement means that there are facts about what's going on in distant parts of the world that don't factor into facts about each part separately.

Your first definition builds the causes of the nonlocality into the definition, to avoid confrontation with relativity's disallowance of FTL signals, but the theorem works irrespective of the causes; it treats them as a black box.
So it is obviously not a valid definition of locality as regards Bell's theorem.
 
  • #100
TrickyDicky said:
Your first definition builds the causes of the nonlocality into the definition, to avoid confrontation with relativity's disallowance of FTL signals, but the theorem works irrespective of the causes; it treats them as a black box.
So it is obviously not a valid definition of locality as regards Bell's theorem.
A *reasonable* definition of locality depends on what you take to be *real* hence located in space-time, and what you don't take to be real. Most people find it reasonable to let detector clicks be part of reality (according to MWI they are not real since only the set of possible outcomes is real; one particular branch is imagination). Whether or not the wave function is real and whether or not outcomes of unperformed measurements are real etc etc are questions of metaphysics.

So the definition of *locality* is not absolute, but relative.
 
  • #101
TrickyDicky said:
Your first definition builds the causes of the nonlocality into the definition, to avoid confrontation with relativity's disallowance of FTL signals, but the theorem works irrespective of the causes; it treats them as a black box.
So it is obviously not a valid definition of locality as regards Bell's theorem.

PS I remind you that Boris Tsirelson, who may certainly be regarded as an authority in this field, states that Bell's theorem says that QM is incompatible with locality+realism+no-conspiracy and that the choice of which of those three to reject (taking QM to be true or close to true) is a matter of *taste* or if you prefer *philosophy*.

Sure, there are other authorities who say different things; and perhaps they have different definitions of locality, or perhaps are not so sharp in philosophy as they are in physics. I think that there is presently a consensus among experts on Bell's theorem that Tsirelson's statement is correct, but maybe there is a different broad consensus among physicists at large. So everyone can choose what is the "official line" and indeed according to Tsirelson everyone can choose what they like to believe.
 
  • #102
gill1109 said:
A *reasonable* definition of locality depends on what you take to be *real* hence located in space-time, and what you don't take to be real. Most people find it reasonable to let detector clicks be part of reality (according to MWI they are not real since only the set of possible outcomes is real; one particular branch is imagination). Whether or not the wave function is real and whether or not outcomes of unperformed measurements are real etc etc are questions of metaphysics.

So the definition of *locality* is not absolute, but relative.

I don't think the theorem is about realism; it is an exercise in logic, and it is concerned with locality in a quite specific and well-defined way. Insisting that the definition is "relative", or that it matters whether you take the term "local" as real or not, seems to render the theorem totally useless. It is like saying: the conclusion of the theorem depends on the meaning you choose to give to the central concept being proved (since its definition is relative), so you can make the theorem conclude whatever you like just by adding conditions, or by deciding whether you give a real significance to that concept.
In a theorem the definitions can't be relative in that sense; they had better be specifically defined, or it is not a theorem.
 
  • #103
TrickyDicky said:
I don't think the theorem is about realism; it is an exercise in logic, and it is concerned with locality in a quite specific and well-defined way.

Bell's theorem is an answer to the question: "Can the correlations in EPR be explained by supposing that there are hidden local variables shared by the two particles?" The answer to that question is "no". It's not purely a question about locality, it's a question about a particular type of local model of correlations. The fact that it isn't purely about locality is proved by the possibility of superdeterministic local explanations for the EPR. (On the other hand, if you're going to allow superdeterminism, then the distinction between local and nonlocal disappears, I guess.)
 
  • #104
TrickyDicky said:
I disagree, based on Bell's theorem, not on later misconstructions of it.
The theorem rejects locality, period. The subsequent addition of the concept of "local realism", which allowed one to keep locality provided one gave up realistic descriptions of what was going on in order to get the probabilistic outcomes in experiments, was an ad hoc retelling, probably to avoid problems with relativistic QM.

See for instance:
T. Norsen, "Against 'realism'", Foundations of Physics, Vol. 37, No. 3, pp. 311-340 (March 2007)

gill1109 said:
PS I remind you that Boris Tsirelson, who may certainly be regarded as an authority in this field, states that Bell's theorem says that QM is incompatible with locality+realism+no-conspiracy and that the choice of which of those three to reject (taking QM to be true or close to true) is a matter of *taste* or if you prefer *philosophy*.

Sure, there are other authorities who say different things; and perhaps they have different definitions of locality, or perhaps are not so sharp in philosophy as they are in physics. I think that there is presently a consensus among experts on Bell's theorem that Tsirelson's statement is correct, but maybe there is a different broad consensus among physicists at large. So everyone can choose what is the "official line" and indeed according to Tsirelson everyone can choose what they like to believe.

bohm2 has pointed out on these forums that Wiseman argues that there are two theorems and two definitions of locality, so that it depends on what one is talking about. http://arxiv.org/abs/1402.0351.
 
  • #105
TrickyDicky said:
I don't think the theorem is about realism; it is an exercise in logic, and it is concerned with locality in a quite specific and well-defined way. Insisting that the definition is "relative", or that it matters whether you take the term "local" as real or not, seems to render the theorem totally useless. It is like saying: the conclusion of the theorem depends on the meaning you choose to give to the central concept being proved (since its definition is relative), so you can make the theorem conclude whatever you like just by adding conditions, or by deciding whether you give a real significance to that concept.
In a theorem the definitions can't be relative in that sense; they had better be specifically defined, or it is not a theorem.

How about this method of arguing that realism is at least assumed in using a Bell test to demonstrate nonlocality? The Bell inequality is about the correlation between definite results. In quantum mechanics, we can put the Heisenberg cut wherever we want. So Bob can deny the reality that Alice had a result at spacelike separation. Bob is entitled to say that he obtained a result, namely that Alice claimed a result at spacelike separation; but this result of Bob's is about Alice's claim, and he obtained it at non-spacelike separation. So there is no spacelike separation, and no Bell test.
 
  • #106
stevendaryl said:
Bell's theorem is an answer to the question: "Can the correlations in EPR be explained by supposing that there are hidden local variables shared by the two particles?" The answer to that question is "no". It's not purely a question about locality, it's a question about a particular type of local model of correlations.
That particular type, that local model, is what I call locality, which makes it purely a question about locality.
Is this the same locality as that of relativity, and classical field theory in general? What do you think?

The fact that it isn't purely about locality is proved by the possibility of superdeterministic local explanations for the EPR. (On the other hand, if you're going to allow superdeterminism, then the distinction between local and nonlocal disappears, I guess.)
And it therefore spoils the supposedly proved fact :wink:
That's why I insist there should be one unified and specific definition of locality, to avoid semantic confusion.
 
  • #107
atyy said:
How about this method of arguing that realism is at least assumed in using a Bell test to demonstrate nonlocality? The Bell inequality is about the correlation between definite results. In quantum mechanics, we can put the Heisenberg cut wherever we want. So Bob can deny the reality that Alice had a result at spacelike separation. Bob is entitled to say that he obtained a result, namely that Alice claimed a result at spacelike separation; but this result of Bob's is about Alice's claim, and he obtained it at non-spacelike separation. So there is no spacelike separation, and no Bell test.
Yes. Basically, as long as the problem of the quantum/classical cut is not solved, this heuristic is valid.
 
  • #108
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic, you simply have no Bell test anymore. Anything goes.
 
  • #109
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic, you simply have no Bell test anymore. Anything goes.

So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which terms quantum mechanics as "nonlocal" and "causal"?
 
  • #110
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal.

Exactly. The problem is that I tend to think that QM's antirealism is so strong that I'm not sure it allows one to keep even enough realism for Bell.
However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality").
Hmmm, let's say I would favor this view of the situation. But subject to the above disclaimer. And probably biased by my admiration for both relativity and QM :-p
 
  • #111
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic, you simply have no Bell test anymore. Anything goes.
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only one such model that was posted previously, but it doesn't appear to be very popular, and it's a difficult model to understand. I read it twice and still had trouble with it, even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality is all that is necessary to get the results?
 
  • #112
TrickyDicky said:
Let's give some context. It is not that the theorem introduces any "particle" concept as its premise. It is about the conclusions from the theorem given a certain assumption that is shared by virtually the whole physics community, namely atomism, the atomic theory as the explanation of matter (the fundamental-building-blocks narrative).
[..]
Now I have to say that I disagree with Neumaier that classical field theory like electrodynamics, as understood at least since Lorentz, violates Bell's inequalities as a theory. The reason is that electrodynamics includes classical particles. So it is both local and realistic.
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.
 
  • #113
harrylin said:
I did not see Neumaier phrase it like that. It is true that in order to create EM radiation, one needs a radiation source; but IMHO, for his argument it's irrelevant how you model that source. It suffices that EM radiation can be modeled in a precise way.

He gave a neat illustration of how it can be sufficiently "nonlocal" for the hidden-variable analysis in his unpolished paper http://lanl.arxiv.org/abs/0706.0155. However, how EM waves could be sufficiently "nonlocal" for doing the trick with distant polarizers is still far from clear to me, although the paper by Banaszek quant-ph/9806069 seems to give, unwittingly, a hint at the end.

PS: the "fundamental building blocks" according to Neumaier are (something like) waves.

I had not read Neumaier's paper linked by you when I wrote that, and now I have just read the conclusions.
He seems to center his analysis just on EM radiation, and I was referring to the whole theory of electrodynamics, so it's natural that his argument has nothing to do with what I said there.

There is a trivial way in which, say, a plane wave is nonlocal, as it correlates its waveform across infinitely separated points.

His conclusion that "the present analysis demonstrates that a classical wave model for quantum mechanics is not ruled out by experiments demonstrating the violation of the traditional hidden variable assumptions", even if it were true (I don't know, since I didn't read the analysis), looks to me not very useful, since ruling out classical wave models as explanations of QM experiments doesn't need Bell's theorem.

His other conclusion, "the traditional hidden variable assumptions therefore only amount to hidden particle assumptions, and the experiments demonstrating their violation are just another chapter in the old dispute between the particle or field nature of light conclusively resolved in favor of the field", I might agree with, as long as we use an extended notion of particle (basically any particle-like object).
 
  • #114
TrickyDicky said:
That's why I think it makes no sense to drop realism in order to keep locality. If locality is not realistic, you simply have no Bell test anymore. Anything goes.
What do you mean? Anyway, QM does not allow *anything* to go. Not at all. QM can't get the CHSH quantity S to go above 2√2, but alternative theories could, still without violating locality: S could get all the way to 4.

It's called Tsirelson's inequality. I know that some very respectable and serious physicists have published experimental violations of Tsirelson's inequality, and got that published in PRL or PRA - which says something about refereeing and editing and general knowledge among physicists - but fortunately for QM, their experiment was flawed (loopholes!).
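For comparison, a minimal sketch of the quantum prediction at the standard optimal CHSH settings, assuming the maximally entangled correlation E(a, b) = cos(a - b) in abstract angle units:

```python
import math

def E(a, b):
    # Correlation for a maximally entangled pair (abstract angle units)
    return math.cos(a - b)

# Standard optimal CHSH settings (radians)
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, -math.pi / 4

# Each of the first three terms is cos(pi/4), the last is cos(3*pi/4)
S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(S)  # 2*sqrt(2) ≈ 2.828: Tsirelson's bound, above the local bound of 2
```

The local hidden-variable bound is 2, the quantum maximum is 2√2, and the algebraic (no-signalling-style "alternative theory") maximum is 4; this is the gap the post refers to.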
 
  • #115
bohm2 said:
Some authors have argued that the correlations in Bell-type experiments have yet to be explained by any local, non-realist model (whatever that means). Is there even any such model? I recall only one such model that was posted previously, but it doesn't appear to be very popular, and it's a difficult model to understand. I read it twice and still had trouble with it, even though the author tried explaining it on this forum. Moreover, if non-locality is already implied by Bell-type experiments, why give up both realism and locality when giving up locality is all that is necessary to get the results?

1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations has stood up for long.
2) You could say that QM does not "explain" those correlations, it only describes them.
3) Bohmian theory does explain them, but it is non-local, of course (Bell's theorem).
4) No experiment has yet been performed which both succeeded in violating Bell-type inequalities AND simultaneously satisfied the standard requirements for a "loophole-free" experiment, namely one which (if successful) can't be explained by an LHV theory. Possibly such an experiment will finally get done within about a year from now. They're getting pretty damned close.

For instance, experiments with photons suffer from photons getting lost: you don't have a binary outcome, you have a ternary outcome yes/no/disappeared (detection loophole). Experiments with atoms have the atoms so close, and the measurements so slow, that it would be easy for one of the atoms to "know" how the other is being measured (locality loophole). Many experiments do not have fast, random switching of detector settings, so later "particles" can easily "know" how earlier particles were being measured (memory loophole).
 
  • #116
atyy said:
So let's say we keep enough realism to do a Bell test, then you would say QM is nonlocal. However, it is consistent with relativity because relativity is consistent with nonlocality. What relativity is inconsistent with is using that nonlocality for superluminal classical communication ("causality"). Is that your argument?

Maybe something like the terminology in http://arxiv.org/abs/quant-ph/9709026, which terms quantum mechanics as "nonlocal" and "causal"?

Belavkin's eventum mechanics provides a view of QM which is both local and causal - as long as you don't ask for a mechanistic, i.e. classical-like, explanation of "what is going on behind the scenes". You have to stop and accept quantum randomness: irreducible, intrinsic, not like usual randomness ("merely statistical").

Sorry, here I give you a reference to an unpublished, unfinished manuscript by myself, but it does give you some references and a quick, easy (?) intro: http://arxiv.org/abs/0905.2723
 
  • #117
gill1109 said:
1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations stood up for long.

There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets. The basic idea was to construct (in the mathematical sense) a function F from the unit sphere S² (the set of unit direction vectors in 3D space) to {+1, -1} such that

  1. The measure of the set of points a such that F(a) = 1 is 1/2.
  2. For almost all points a, the measure of the set of points b such that F(a) = F(b) is cos²(θ/2), where θ is the angle between a and b.

It is actually mathematically consistent to assume the existence of such a function. Such a function could be used for a hidden variable explanation of EPR, contrary to Bell. The loophole that this model exploits is that Bell implicitly assumed that everything of interest is measurable, while in Pitowsky's model, certain joint probabilities correspond to non-measurable sets.

The problem with Pitowsky's model turns out to be that a satisfactory physical interpretation of non-measurable sets is about as elusive as a satisfactory physical interpretation of QM. In particular, if your theory predicts that a certain set of events is non-measurable, and then you perform experiments to actually count the number of events, you will get some actual relative frequency. So the assumption, vital to making probabilistic models testable, that relative frequency approaches the theoretical probability, can't possibly hold for nonmeasurable sets. In that case, it's not clear what the significance of the theoretical probability is, in the first place.

In particular, as applied to the spin-1/2 EPR experiment, I think it's true that every finite set of runs of the experiment will have relative frequencies that violate Pitowsky's theoretical probabilities. That's not necessarily a contradiction, but it certainly shows that introducing non-measurable sets makes the interpretation of experiment statistics very strange.
 
  • #118
stevendaryl said:
There was some interesting work done years ago by an Israeli mathematical physicist, Itamar Pitowsky, about the possibility of evading Bell's theorem by using non-measurable sets.
I know. As a mathematician I can tell you that this is quite bogus. It does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up.)

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic: http://arxiv.org/abs/1207.5103. Moreover, it is stronger than the conventional result, since it is a "finite N" result: a probability inequality for the observed correlations after N trials. The assumptions are slightly different from the usual ones: I put the probability into the selection of settings, not into the particles.
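As a toy illustration of the finite-N flavour (not the paper's actual martingale argument — the local model and the settings below are my own hypothetical choices), here is a local hidden variable simulation with the randomness placed in the setting choices. Its CHSH statistic after N trials hovers at the classical bound of 2, the bound that quantum mechanics exceeds.

```python
import math
import random

def lhv_outcome(setting_angle, hidden_angle):
    # Deterministic local rule: +1 if the hidden "spin direction"
    # lies within 90 degrees of the setting, else -1.
    return 1 if math.cos(setting_angle - hidden_angle) >= 0 else -1

def chsh_statistic(n_trials, seed=0):
    """Finite-N CHSH estimate for a toy local hidden variable model,
    with random setting choices on each trial."""
    rng = random.Random(seed)
    settings_a = (0.0, math.pi / 2)           # Alice's two settings
    settings_b = (math.pi / 4, -math.pi / 4)  # Bob's two settings
    sums = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for _ in range(n_trials):
        lam = rng.uniform(0.0, 2 * math.pi)        # shared hidden variable
        i, j = rng.randrange(2), rng.randrange(2)  # independent random settings
        a = lhv_outcome(settings_a[i], lam)
        b = lhv_outcome(settings_b[j], lam)
        sums[(i, j)] += a * b
        counts[(i, j)] += 1
    e = {k: sums[k] / counts[k] for k in sums}     # empirical correlations
    return e[(0, 0)] + e[(0, 1)] + e[(1, 0)] - e[(1, 1)]

s = chsh_statistic(100_000)
print(s)   # stays at or near 2, up to finite-N fluctuation
```

For this deterministic model the four correlations are 0.5, 0.5, 0.5 and -0.5, so S sits exactly at the local bound of 2; the finite-N fluctuation around it is of order 1/\sqrt{N}, which is the quantity the finite-N inequality controls.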

No, sorry: all the people claiming that some mathematical niceties, e.g. measure theory, conventional definitions of integrability, or the topology of space-time, are the "way out" are barking up the wrong tree (IMHO).

Bell makes some conventional assumptions in order to write his proof out using conventional calculus. But you don't *have* to make those assumptions in order to get his main result; what you actually need is a whole lot weaker. Pitowsky only shows how Bell's particular line of proof would break down. He does not realize that there are alternative lines of proof which would not break down even if one did not make measurability assumptions.

NB: the existence of non-measurable functions requires the axiom of choice, a somewhat arbitrary assumption about infinite collections of infinite sets. There are consistent axiom systems for mathematics without the axiom of choice in which all sets are measurable. So what are we talking about here? Formal word games, I think.
 
  • #119
gill1109 said:
I know. As a mathematician I can tell you that this is quite bogus. It does not prove what it seems to prove. (It's not for nothing that no-one has ever followed this up.)

Pitowsky has done a lot of great things! But this one was a dud, IMHO.

Here's a version of Bell's theorem which *only* uses finite discrete probability and elementary logic http://arxiv.org/abs/1207.5103.

I think maybe I had read something along those lines, which was the reason I said that the nice(?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (possible assuming the continuum hypothesis) is an ordering (not the usual ordering) \leq on the unit interval [0,1] such that for every real number x in the interval, there are only countably many y such that y \leq x. Since every countable set has Lebesgue measure 0, the following truly weird situation is possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number x and later, you generate the number y. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
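For what it's worth, here is a sketch of why this yields no formal contradiction (my own gloss, assuming CH, not part of the argument above). Under CH, well-order [0,1] with order type \omega_1 and let y \leq x mean that y precedes x. Every initial segment I_x = \{y : y \leq x\} is then countable, hence Lebesgue-null. Pushed through formally, the symmetry argument amounts to a Fubini computation on the set E = \{(x,y) : y \leq x\}:

\int_0^1 \left( \int_0^1 1_E(x,y)\, dy \right) dx = 0, \qquad \int_0^1 \left( \int_0^1 1_E(x,y)\, dx \right) dy = 1.

No contradiction follows, because E is non-measurable in the unit square, so Fubini's theorem simply does not apply (this is essentially Sierpinski's classical example). The price is the same as in Pitowsky's model: probabilities attached to such sets have no frequency interpretation.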
 
  • #120
stevendaryl said:
I think maybe I had read something along those lines, which was the reason I said that the nice(?) measure-theoretic properties of Pitowsky's model don't seem to imply anything about actual experiments.

Well, that's disappointing. It seemed to me that something like that might work, because non-measurable sets are weird in a way that has something of the same flavor as quantum weirdness.

An example (possible assuming the continuum hypothesis) is an ordering (not the usual ordering) \leq on the unit interval [0,1] such that for every real number x in the interval, there are only countably many y such that y \leq x. Since every countable set has Lebesgue measure 0, the following truly weird situation is possible:

Suppose you and I both generate a random real between 0 and 1. I generate the number x and later, you generate the number y. Before you generate your number, I look at my number and compute the probability that you will generate a number less than mine (in the special ordering). Since there are only countably many possibilities, I conclude that the probability is 0. So I should have complete confidence that my number is smaller than yours.

On the other hand, by the perfect symmetry between our situations, you could make the same argument.

So one or the other of us is going to be infinitely surprised (an event of probability zero actually happened).
I think you are referring here to paradoxes from "model theory", namely that there exist countable models of the real numbers. Beautiful. It's a self-reference paradox, really just a hyped-up version of the old paradox of the barber who shaves everyone in the village who doesn't shave himself. In some sense, it is just a word game. It's a useful tool in maths: one can prove theorems by proving theorems about proving theorems. Nothing wrong with that.

Maybe there is superficially a flavour of that kind of weirdness in quantum weirdness. But after studying this a long time (and analysing several such "solutions") I am certain that quantum weirdness is weirdness of a totally different nature. It is *physical*: it conflicts with our in-built instinctive understanding of the world (which got there by evolution; it allowed our ancestors to successfully raise more kids than the others. Evolution is blind and even leads species into dead ends, again and again!). So I would prefer to see it as quantum wonderfulness, not quantum weirdness.
 