Bell's theorem and local realism

  • #151
If we consider CHSH, I think what is really important is to notice that there are only 4 time series of measurement data to generate the 4 covariances; i.e., once we generate the first covariance AB, it should take the same A data to compute AB', otherwise there can be a violation. Indeed, to generate the 4 covariances AB, AB', A'B, A'B', one could be tempted to use 8 sets of data in {-1, 1}, generating covariance AB, then AB', and so on.
 
  • #152
jk22 said:
If we consider CHSH, I think what is really important is to notice that there are only 4 time series of measurement data to generate the 4 covariances; i.e., once we generate the first covariance AB, it should take the same A data to compute AB', otherwise there can be a violation. Indeed, to generate the 4 covariances AB, AB', A'B, A'B', one could be tempted to use 8 sets of data in {-1, 1}, generating covariance AB, then AB', and so on.
Indeed, if we had one time series with four measurements A, A', B, B' in each run, then it would be an arithmetical impossibility to violate CHSH on the four correlations AB, AB', etc., each computed over all runs.
The crucial steps in deriving CHSH for an experiment where in each run we only measure A or A', and B or B', are:
(1) if LHV are true, then even if we only measured, say, A and B', still A' and B are also at least mathematically defined at the same time;
(2) if which of A and A' to observe, and which of B and B' to observe, is decided by independent fair coin tosses, then the correlation between, say, A and B', based on only a random sample of about one quarter of all the runs, is not much different from what it would have been based on all the runs.
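The "arithmetical impossibility" above is easy to check numerically. Here is a rough sketch of the point (my own illustration, with made-up random data rather than anything from an actual experiment):

```python
# A minimal sketch (hypothetical random data): if every run carries all four
# values A, A', B, B' in {-1, +1}, the CHSH combination computed from the
# SAME runs can never leave [-2, 2].
import random

random.seed(0)
runs = [tuple(random.choice([-1, 1]) for _ in range(4)) for _ in range(10_000)]

# Per run: A*B + A*B' + A'*B - A'*B' = A*(B + B') + A'*(B - B').
# One bracket is 0 and the other is +/-2, so each run contributes exactly +/-2.
per_run = [A * B + A * Bp + Ap * B - Ap * Bp for A, Ap, B, Bp in runs]
assert set(per_run) <= {-2, 2}

S = sum(per_run) / len(per_run)
assert abs(S) <= 2  # the arithmetical impossibility: no data set can violate this

# With 8 unrelated data sets (fresh "A" data for each covariance) the identity
# no longer binds the four terms together, and nothing stops S from reaching 4:
S_free = 1 + 1 + 1 - (-1)
print(S, S_free)
```

The key design point is that the bound is a per-run algebraic identity, not a statistical fact: it holds for every individual run, so no amount or kind of data generated this way can exceed 2.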
 
  • #153
We can also note that if we add a time dependence A(theta_a, lambda, t_a), and similarly with t_b, then Bell's theorem is still valid, so the measurements do not have to be simultaneous.
 
  • #154
gill1109 said:
1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations stood up for long.
I wasn't referring to realist models. I was talking about whether any local but non-realist model could get the quantum predictions.
 
  • #155
bohm2 said:
I wasn't referring to realist models. I was talking about whether any local but non-realist model could get the quantum predictions.
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions. And Slava Belavkin's "eventum mechanics", which is just QM with the Heisenberg cut seen as "real" and placed in the Hilbert space in a way which ensures causality, is even Copenhagen QM without a Schrödinger cat paradox. Finally, it can be made relativistically invariant using recent work of D. Bedingham (2011), Relativistic State Reduction Dynamics, Foundations of Physics 41, 686–704, arXiv:1003.2774.
 
  • #156
gill1109 said:
Yep. There's nothing wrong with determinism. But there's a lot wrong with conspiratorial superdeterminism. It explains everything but in a very "cheap" way. It has no predictive power. The smallest description of how the universe works is the history of the whole universe.

What is your opinion about Gerard 't Hooft's Cellular Automaton Interpretation of Quantum Mechanics? It is superdeterministic but not conspiratorial.

It's true that it is not exactly an interpretation of QM, there is more work to do, but what do you think about this line of inquiry?
 
  • #157
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions.


But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling nonlocal any theory able to get the quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to still call it local, unless we mean local in the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be nonlocal in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology in the case the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).
 
  • #158
ueit said:
What is your opinion about Gerard 't Hooft's Cellular Automaton Interpretation of Quantum Mechanics? It is superdeterministic but not conspiratorial.

It's true that it is not exactly an interpretation of QM, there is more work to do, but what do you think about this line of inquiry?
My opinion is that it will get nowhere. Of course it is not conspiratorial "at the Planck scale". But at the scale of a real world Bell-CHSH type experiment it would have to have become conspiratorial.

In fact, I am absolutely certain that this approach will get nowhere. Possibly a tiny part of QM can be described in this way. But it cannot form a basis for all of conventional QM, because of Bell's theorem. Or ... it can, but this will lead to "Bell's fifth position": a loophole-free and successful Bell-type experiment will never take place because the QM uncertainty relations themselves will prevent establishing the right initial conditions. The right quantum state in the right small regions of space-time.

It would also imply that a quantum computer can never be built ... or rather, not scaled-up. As one makes computers with more and more qubits in them, quantum decoherence will take over faster, and you'll never be able to factor large integers rapidly or whatever else you want to do with them.

But I am very doubtful indeed of the viability of Bell's fifth position. I suspect that the good experiment will get done in a few years and then we can forget about that option.
 
  • #159
TrickyDicky said:
But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling nonlocal any theory able to get the quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to still call it local, unless we mean local in the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be nonlocal in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology in the case the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).

But then we are just talking about words, aren't we? Some people identify "non-local" with "being able to violate Bell inequalities". Well if that's what you mean by local and non-local, then we can call QM non-local.

I am sure we could discuss this for days and days. Let me just say that apparently a lot of serious people do find it meaningful to separate local-realism into "local" and "realism" and discuss rejecting one but not the other. People do see two distinct options there. The Bohmians go for non-local + realism. IMHO Copenhagen a la Belavkin deserves to be called local + non-realism. But these are just labels! A rose by any other name would smell as sweet ... Let's try to be aware of what anybody actually means by a particular label in a particular context.

Remember that "realism" aka "counterfactual definiteness" is actually a rather idealistic position: it asserts the physical existence in reality (whatever that means) of things that did never happened, things that are never seen, things which a priori one would say we don't need to "add in" to our model of reality. It's just that in classical physics (characterized as LHV), there is no problem with adding in those things, and no problem with locality after they have been added in.

Remember that EPR actually used quantum predictions (perfect anti-correlation in the singlet state) in order to argue for realism. Einstein was smart. He realized that "realism" is an unnecessary add-on, an idealistic point of view. It needed to be motivated from the physics which we do believe actually does describe the real world, namely QM.

Bell was extraordinarily smart to be able to turn this argument on its head. He noticed that Einstein was actually also using locality + QM to suggest, to motivate (not to prove), realism. And he showed that the three together, locality + realism + QM, lead to a contradiction (if we exclude conspiracy) ...

Bell's fifth position is a kind of weakening of the option "QM is wrong". It says "QM is right but stops us from realizing or seeing certain things in Nature or in the Lab, which do appear to be allowed in the formalism".

Classical thermodynamics has things like this. You're not ever going to see all the molecules in the air all in the same half of your Lab and it will be rather hard to engineer that situation, too. You can set up an airtight wall across the lab but not take it down in a split second.
 
Last edited:
  • #160
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.
 
  • #161
stevendaryl said:
The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals \alpha: [0,1] = \{ r_\alpha | \alpha < \mathcal{C}\} where \mathcal{C} is the cardinality of the continuum. The continuum hypothesis implies that \mathcal{C} = \omega_1, the first uncountable ordinal (\omega_1 is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[0,1] = \{ r_\alpha | \alpha < \omega_1\}

If \alpha < \omega_1, then that means that \alpha is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [0,1] by using r_\alpha < r_\beta \leftrightarrow \alpha < \beta, then for every x in [0,1] there are only countably many y such that y < x.

Now I have the solution of your paradox! The ordering of the unit interval by the countable ordinals, which makes the initial segment of any number x countable, is non-measurable, hence cannot be constructed. If we give the unit interval the ordinary uniform measure and pick an x at random, then we can never point out which are the countably many x' that come before x in the particular ordering we have in mind - existing according to the axiom of choice. Alternatively, if we put an ordinary discrete probability measure on that countable sequence, we can never arrange to pick a number at random according to that distribution. Even if Nature could, there would be no well-defined probability for ordinary Borel sets.

I think it is pretty ludicrous to think that some such mechanism as this actually underlies real world physics. We are talking about formal games played with formal systems of axioms concerning ideas about infinity. It turns out that our ordinary intuition quickly breaks down when we start playing these games.
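For what it's worth, the non-measurability claim above can be made precise with the standard Sierpiński/Fubini argument (my addition, assuming CH as in the quoted post, with \prec denoting the ordinal well-order of [0,1]):

```latex
W=\{(x,y)\in[0,1]^2:\ y\prec x\}
% Every vertical section is an initial segment, hence countable;
% every horizontal section is co-countable:
\lambda(W_x)=\lambda\{y:\ y\prec x\}=0,\qquad
\lambda(W^y)=\lambda\{x:\ y\prec x\}=1.
% If W were Lebesgue measurable, Fubini applied in the two orders would give
\mu(W)=\int_0^1 \lambda(W_x)\,dx=0
\quad\text{and}\quad
\mu(W)=\int_0^1 \lambda(W^y)\,dy=1,
% a contradiction, so W (and hence the well-order) is non-measurable.
```

This is the formal content behind "cannot be constructed": any ordering whose initial segments are all countable is invisible to Lebesgue measure.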
 
  • #162
jtbell said:
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.

I agree with the distinction made in stackexchange. Apparently gill thinks they're the same thing:
gill1109 said:
Remember that "realism" aka "counterfactual definiteness" is...
 
  • #163
jtbell said:
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.
I don't think the difference is quite so clear-cut, since the word "exists" is not very well-defined! If we forget for a moment about what "exists" means, we could try to apply the phrases "counterfactual-definiteness" and "realism" to *theories about reality* rather than about reality itself. I think that we would then find that whether or not a *theory* should be considered a realist theory, and whether or not a *theory* allows counterfactual definiteness, are very close indeed.

Remember that Einstein himself felt called upon to give an instrumental criterion which sometimes allows one to decide that some things are "elements of reality".

However, everyone agrees that hidden variables models are realist. And a hidden variables model allows one to ask "what if" questions about what would have happened if the experimental conditions had been different from what they actually were. So physical theories which are described by hidden variables deserve to be called realist and to satisfy counterfactual definiteness. On the other hand, we know that if (for instance) the statistics of a two-party, binary-setting, binary-outcome CHSH-type experiment satisfy all CHSH inequalities, then there exists a local hidden variables model which generates the same statistics.

One often says that the point of Bell's theorem and a Bell-type experiment is that they allow one to experimentally distinguish between classes of possible physical theories - in fact, this is precisely what is special and unique about them. Perhaps we should concentrate on what Bell's theorem says about physical theories, and not worry so much about what it says about reality.
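The first half of that equivalence is a one-screen check (my addition, a standard textbook verification rather than anything from this thread): every deterministic local strategy gives the CHSH combination exactly ±2, and an LHV model is just a probability mixture over such strategies.

```python
# Enumerate all 16 deterministic local strategies: fixed assignments of
# a, a', b, b' in {-1, +1}, i.e. pre-existing answers to all four questions.
from itertools import product

S_values = {a * b + a * bp + ap * b - ap * bp
            for a, ap, b, bp in product([-1, 1], repeat=4)}
assert S_values == {-2, 2}  # every deterministic strategy gives exactly +/-2

# An LHV model is a probability mixture over these 16 strategies, so its S is
# a convex combination of numbers in {-2, +2} and must lie in [-2, 2].
# The converse direction (statistics satisfying all CHSH inequalities admit
# such a mixture) is Fine's theorem.
print(S_values)
```

The convexity step is the whole proof in miniature: the LHV polytope is the convex hull of these 16 vertices, and CHSH inequalities are its facets.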
 
  • #164
gill1109 said:
But then we are just talking about words, aren't we? Some people identify "non-local" with "being able to violate Bell inequalities". Well if that's what you mean by local and non-local, then we can call QM non-local.
But well defined words are important in science discussions to avoid confusion.

I am sure we could discuss this for days and days. Let me just say that apparently a lot of serious people do find it meaningful to separate local-realism into "local" and "realism" and discuss rejecting one but not the other. People do see two distinct options there. The Bohmians go for non-local + realism.
I mentioned a reference in a previous post that shows that BM, as long as it is an interpretation of QM and not a different theory, is not realist, regardless of whether Bohmians claim to be realists ("The Bohmian interpretation of QM: a pitfall for realism", Matzkin and Nurock).
 
  • #165
TrickyDicky said:
But well defined words are important in science discussions to avoid confusion.


I mentioned a reference in a previous post that shows that BM, as long as it is an interpretation of QM and not a different theory, is not realist, regardless of whether Bohmians claim to be realists ("The Bohmian interpretation of QM: a pitfall for realism", Matzkin and Nurock).
The stack exchange expert says that CFD is an epistemological property that allows you to ask questions about experiments, while realism is a metaphysical property of a theory. I don't agree with his use of words and characterizations! After all, we have a theory - QM - and it allows lots of different metaphysical interpretations.

You can also say that BM (the mathematical framework thereof) is just a tool for doing QM calculations therefore it is neither an interpretation nor a different theory!

Boris Tsirelson has thought long and hard about these things and he identifies "realism" and "counterfactual definiteness". I think that for all practical purposes, they can be identified. But of course Bell himself warned us against FAPP traps. But I think there is not a FAPP trap right here. Maybe "realism" is a metaphysical position which implies the usefulness and/or validity of "counterfactual definiteness". What proofs of Bell's theorem use, is counterfactual definiteness. And if the statistics of a Bell-type experiment satisfy all Bell inequalities, then the experiment can be described by a LHV model. If Nature can be described by LHV models then we might be philosophically inclined to imagine the variables in those models as being elements of reality...
 
  • #166
I guess my point is one of coherence.
If I'm an instrumentalist, I should only be interested in the operational tools to compute predictions for observed outcomes, and I shouldn't really take seriously anything else, like whether electrons really exist, or disquisitions about the building blocks of matter, etc. In reality, even though this position is considered the prevailing one among physicists, it is hard to maintain, and even people who claim to be of the "shut up and calculate" school take those things seriously.

Being thoroughly coherent, a true instrumentalist shouldn't really care about Bell's theorem either: if he doesn't care about hidden variables theories, why should he care about the subset that are local?

However, a true realist should care; the theorem is helpful. It immediately lets him reject a whole class of theories (those realistically based on local particle-like objects) just using the logic of the theorem and the experimentally observed correlations that violate Bell's inequalities.
 
  • #167
TrickyDicky said:
I guess my point is one of coherence.
If I'm an instrumentalist, I should only be interested in the operational tools to compute predictions for observed outcomes, and I shouldn't really take seriously anything else, like whether electrons really exist, or disquisitions about the building blocks of matter, etc. In reality, even though this position is considered the prevailing one among physicists, it is hard to maintain, and even people who claim to be of the "shut up and calculate" school take those things seriously.

Being thoroughly coherent, a true instrumentalist shouldn't really care about Bell's theorem either: if he doesn't care about hidden variables theories, why should he care about the subset that are local?

However, a true realist should care; the theorem is helpful. It immediately lets him reject a whole class of theories (those realistically based on local particle-like objects) just using the logic of the theorem and the experimentally observed correlations that violate Bell's inequalities.

Yes, I can drink to that! (It's 6pm here in Netherlands, time to relax)
 
  • #168
gill1109 said:
Yes, I can drink to that!

Cheers! :smile:
 
  • #169
gill1109 said:
Now I have the solution of your paradox! The ordering of the unit interval by the countable ordinals, which means that the initial segment of any number x is countable, is not-measurable. Hence cannot be constructed. If we give the unit interval the ordinary uniform measure, and pick an x at random, then we can never point out which are the countably many x' which are before x in a particular ordering we have in mind - existing according to the axiom of choice. Alternatively if we would put an ordinary discrete probability measure on that countable sequence we can never arrange to pick a number at random according to that distribution. Even if Nature could, there would be no well defined probability of ordinary Borel sets.

I certainly understand that nonmeasurable sets can't be useful in the real world, because we never know any real number to the precision necessary to know whether it is in a nonmeasurable set (we can only know that it is in some interval, and intervals are always measurable). I'm asking for a conceptual resolution, not an "in practice, that would never come up" resolution.
 
  • #170
stevendaryl said:
I certainly understand that nonmeasurable sets can't be useful in the real world, because we never know any real number to the precision necessary to know whether it is in a nonmeasurable set (we can only know that it is in some interval, and intervals are always measurable). I'm asking for a conceptual resolution, not an "in practice, that would never come up" resolution.

I claim that my resolution is a conceptual resolution. You were talking about the two observers or two experimentalists who would get different ideas about what was going on. I gave you a physical reason why they can't exist.

You said we can never know whether a number is in some non-measurable set or not. But it is worse than that: that non-measurable set can't be created by any "reasonable" physical process. Moreover, you are talking about things which only exist or don't exist by grace of some *arbitrary* assumptions about what pure mathematics is about. That can have nothing whatever to do with physics.
 
  • #171
gill1109 said:
I claim that my resolution is a conceptual resolution. You were talking about the two observers or two experimentalists who would get different ideas about what was going on. I gave you a physical reason why they can't exist.

I consider that an uninteresting resolution. Is it logically impossible that things could work that way?

It reminds me of when I was in college learning about Special Relativity. I asked the TA (a physics graduate student) about a scenario in which someone gets aboard a rocket and rapidly accelerates to 90% of the speed of light. His response was: "That kind of acceleration would kill you, anyway."
 
  • #172
stevendaryl said:
I consider that an uninteresting resolution. Is it logically impossible that things could work that way?

It reminds me of when I was in college learning about Special Relativity. I asked the TA (a physics graduate student) about a scenario in which someone gets aboard a rocket and rapidly accelerates to 90% of the speed of light. His response was: "That kind of acceleration would kill you, anyway."
Your analogy is false! You can replace "observers" by "measuring devices". And you are the one who said that there was a paradox because two observers would get different ideas. I say that one of those two observers does not exist, hence I resolve *your* paradox.
 
  • #173
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions.
Yes, but is it local? The issue was whether any local and non-realist model could get the QM predictions. Travis Norsen previously brought up an example of one such model, but it doesn't get the results:
Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here), since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local.
So what I'm basically asking is whether there are any well-known, non-realist models that can avoid non-locality to explain the perfect correlations observed in the usual EPR-Bell scenario (when a=b). If we cannot produce any such local, non-realist models that give the quantum predictions, why drop both realism and locality when dropping locality alone is all that is required?
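The coin-flip model quoted above is easy to simulate; here is a rough sketch (my own, with an arbitrary sample size) making explicit why it fails: being local and non-realist is cheap, but the outcomes end up uncorrelated.

```python
# Simulation of the coin-flip model: each particle ignores both its setting
# and its partner, and just flips an independent fair coin.
import math
import random

random.seed(1)
n = 100_000
# Local and non-realist: each outcome is "created on the fly" by a coin flip.
E = sum(random.choice([-1, 1]) * random.choice([-1, 1]) for _ in range(n)) / n
print(E)  # close to 0 for ANY pair of settings a, b

# The singlet-state prediction is E(a, b) = -cos(a - b); at equal settings
# a = b it demands perfect anti-correlation, which this model cannot deliver.
qm = -math.cos(0.0)
print(qm)  # -1.0
```

So the model is local and non-realist by construction, but its flat, setting-independent correlation of ~0 misses both the perfect anti-correlation at a = b and any Bell violation.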
 
  • #174
bohm2 said:
So what I'm basically asking is whether there are any well-known, non-realist models that can avoid non-locality to explain the perfect correlations observed in the usual EPR-Bell scenario (when a=b). If we cannot produce any such local, non-realist models that give the quantum predictions, why drop both realism and locality when dropping locality alone is all that is required?

There are a number of such: MWI, all retro-causal types, relational blockworld, etc all preserve locality. Your question asked another way:

"Why drop both realism and locality when dropping realism alone is all that is required?"
 
  • #175
bohm2 said:
Yes, but is it local? The issue was whether any local and non-realist model could get the QM predictions. Travis Norsen previously brought up an example of one such model, but it doesn't get the results:

[non-realistic model omitted because it fails miserably.]

DrChinese's equivalent model:

Here's a model that is non-local but perfectly Bell realistic: each particle has a definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as married to a random distant particle. Each has the same value for their spin observables, and when one changes, so does the other. Of course, the model doesn't make the QM/empirical predictions. But it's non-local and realistic.

My question is why you would reference a failed model. We can all put forth failed models; notice mine is as miserable as Travis's.
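The "married particles" model above can be sketched the same way (my reading of the model; since both wings share one pre-scripted value, the settings never enter at all):

```python
# Sketch of the "married particles" model: each pair shares one pre-scripted
# +/-1 value used for every spin observable, so both wings always agree.
import math
import random

random.seed(2)
pairs = [random.choice([-1, 1]) for _ in range(10_000)]  # one shared value per pair
E = sum(v * v for v in pairs) / len(pairs)  # both wings output the same value v
assert E == 1.0  # settings never enter, so the correlation is flat at +1

# QM instead predicts the setting-dependent E(a, b) = -cos(a - b), e.g. -0.5
# at a - b = 60 degrees, so this model fails just as badly as the coin-flip one.
print(E, -math.cos(math.pi / 3))
```

The symmetry with the coin-flip model is the point being made in the post: one fails with correlation stuck at 0, the other with correlation stuck at +1; neither tracks the settings the way QM requires.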
 
  • #176
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions. And Slava Belavkin's "eventum mechanics" which is just QM with Heisenberg cut seen as "real" and placed in the Hilbert space in a way which ensures causality is even Copenhagen QM without a Schrödinger cat paradox. Finally, it can be made relativistically invariant using recent work of D. Beddingham (2011). Relativistic State Reduction Dynamics. Foundations of Physics 41, 686–704. arXiv:1003.2774

Besides the Copenhagen interpretation getting the quantum predictions, does this interpretation also account for the perfect anti-correlations when a=b? If the particles are in a superposition of spin up/spin down, and the detectors are aligned, then a measurement at A (spin up) seems to have a non-local effect at B (spin down).
 
  • #177

EEngineer91 said:
Yet, for some reason, many physicists of today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems, such as defining precisely "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".

Some call action at a distance a logical impossibility because there is no known physical mechanism and it is bizarre, while for others non-locality is an artefact created by the introduction into quantum mechanics of notions which are foreign to it.
There are physical systems that are beyond the scope of the EPR definition of reality.
One could also think of complicated scenarios where local unknown physical variables may couple together in a way that gives the (false) impression of non-local results (in particular, Laloë, 2008).
And in the paper Statistics, Causality and Bell's Theorem (Gill, May 2014), the author argues that Bell's theorem (and its experimental confirmation) should lead us to relinquish not locality but realism:
"Quantum randomness is both real and fundamental.Quantum randomness is non-classical, irreducible. It is not an emergent phenomenon. It is the bottom line. It is a fundamental feature of the fabric of reality".
 
Last edited:
  • #178
TrickyDicky said:
But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling nonlocal any theory able to get the quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to still call it local, unless we mean local in the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be nonlocal in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology in the case the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).

Note that while dropping realism, in both cases above you are still calling the theories non-local: 1) calling the theory non-local regardless of an assumption like realism; 2) or, in the non-realistic case (like QM's), saying it makes no sense to still call it local.

If locality and realism are not conjoined, then there could be a local, non-realist/non-linear model that reproduces the correlations.
 
Last edited:
  • #179
morrobay said:
Besides the Copenhagen interpretation getting the quantum predictions, does this interpretation also account for the perfect anti-correlations when a=b? If the particles are in a superposition of spin up/spin down, and the detectors are aligned, then a measurement at A (spin up) seems to have a non-local effect at B (spin down).
Copenhagen interpretation doesn't *explain* this. It merely *describes* this. Perhaps that's the bottom line.

If you want to *explain* it you have to come up with something weird. That's what Bell's theorem says.

MWI doesn't explain it either. It says that reality is not real. The particular branch of the many worlds you and I are stuck in at this moment, is no more nor less real than all the others. The only reality is a unitarily evolving wave function of the universe. Call that an explanation? I don't. Still, it seems to make a lot of people happy.
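For reference, the singlet numbers behind this exchange can be checked in a few lines (my addition, standard textbook values, not taken from the posts): the correlation E(a, b) = -cos(a - b) describes perfect anti-correlation at a = b, and it violates CHSH at the usual angle choices.

```python
# Singlet-state correlation and its CHSH value at the standard angles.
import math

def E(a, b):
    """Singlet prediction for the product of the two +/-1 outcomes."""
    return -math.cos(a - b)

assert E(0.3, 0.3) == -1.0  # a = b: perfect anti-correlation, any common angle

# Standard CHSH angles: a = 0, a' = pi/2, b = pi/4, b' = -pi/4.
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(abs(S), 2 * math.sqrt(2))  # |S| = 2*sqrt(2) > 2: QM violates CHSH
```

The value 2√2 is Tsirelson's bound: QM violates the classical bound of 2 but does not reach the algebraic maximum of 4.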
 
  • #180
morrobay said:
And in the paper Statistics, Causality and Bell's Theorem (Gill, May 2014), the author argues that Bell's theorem (and its experimental confirmation) should lead us to relinquish not locality but realism:
"Quantum randomness is both real and fundamental.Quantum randomness is non-classical, irreducible. It is not an emergent phenomenon. It is the bottom line. It is a fundamental feature of the fabric of reality".

And wise words they are. :smile: These too, from the same citation:

"It seems to me that we are pretty much forced into rejecting realism , which, please remember, is actually an idealistic concept: outcomes 'exist' of measurements which were not performed. However, I admit it goes against all instinct."

http://arxiv.org/pdf/1207.5103.pdf

Keep in mind that EPR rejected observer-dependent reality in favor of objective reality (hidden variables/realism) with no scientific or logical basis. They simply thought that subjective reality was unreasonable. That matches the comments of Gill above ("goes against all instinct"), for quite the reason he presents: a) EPR did not know of Bell; and b) the assumption of realism is an unnecessary luxury we cannot afford in light of Bell.
 
  • #181
But is there even agreement on what a non-realistic theory is? For example, MWI is considered by some of its proponents to be real http://philsci-archive.pitt.edu/8888/1/Wallace_chapter_in_Oxford_Handbook.pdf . Also CSL is considered by its author to be real, eg. http://arxiv.org/abs/1209.5082. So if Belavkin's Eventum Mechanics http://arxiv.org/abs/quant-ph/0512188 is a generalization or close relative of CSL, why should it be considered unreal?

One resolution, which I sympathize with, is Matt Leifer's explanation that there are two definitions of reality http://arxiv.org/abs/1311.0857.
 
Last edited:
  • #182
atyy said:
But is there even agreement on what a non-realistic theory is? For example, MWI is considered by most to be real. Also CSL is considered by its author to be real, eg. http://arxiv.org/abs/1209.5082. So if Belavkin's Eventum Mechanics http://arxiv.org/abs/quant-ph/0512188 is a generalization or close relative of CSL, why should it be considered unreal?
I read on Wikipedia, in material written by those favourable to MWI, that MWI abandons realism ... or at least, reality.

Yes, CSL is realist and non-local. The random disturbances in the model are supposed to exist in reality, thereby determining what would have happened if the experimenter had done something else, and because it reproduces QM, it has to be non-local.

The mathematical framework of CSL can be seen as a special case of the mathematical framework of eventum mechanics. But the interpretation of those models favoured by the people who invented them differs precisely in what they thought should be considered part of reality and what shouldn't. It's subtle and confusing. In fact it corresponds exactly to what Bell says in that YouTube video: I can't tell you that there is action at a distance in QM, and I can't tell you that there isn't.
 
  • #183
gill1109 said:
The mathematical framework of CSL can be seen as a special case of the mathematical framework of eventum mechanics. But the interpretation of those models favoured by the people who invented them differs precisely in what they thought should be considered part of reality and what shouldn't. It's subtle and confusing.

The main place where Belavkin seems to differ from CSL in ontology, if at all, is the derivation from filtering, which, on a Bayesian interpretation, may involve non-real things (subjective probability). Is that why you say Eventum Mechanics is not real, while CSL is real, even though mathematically the final equations are a generalization of CSL?
 
  • #185
atyy said:
One resolution, which I sympathize with, is Matt Leifer's explanation that there are two definitions of reality http://arxiv.org/abs/1311.0857.
Leifer says "Scientific realism is the view that our best scientific theories should be thought of as describing an objective reality that exists independently of us". I guess most people are realists, according to that definition. Or would like to be! But the important issue is *what should be thought of as belonging to that reality*? The MWI people somehow think of reality as consisting only of a unitarily evolving wave-function of the universe. What we personally experienced along our own path so far is apparently an illusion.
I like to think of detector clicks as being part of reality. Then work from there, and see what else can be put in while still making sense. If I want to keep locality I have to give up counterfactual definiteness and I have to give up local hidden variables (I'm not going to buy conspiracy theories). So finally it all comes down to "what's in a name". Local / non-local, realist / non-realist, these may be only semantic squabbles.
 
  • #186
gill1109 said:
Leifer says "Scientific realism is the view that our best scientific theories should be thought of as describing an objective reality that exists independently of us". I guess most people are realists, according to that definition. Or would like to be! But the important issue is *what should be thought of as belonging to that reality*? The MWI people somehow think of reality as consisting only of a unitarily evolving wave-function of the universe. What we personally experienced along our own path so far is apparently an illusion.
I like to think of detector clicks as being part of reality. Then work from there, and see what else can be put in while still making sense. If I want to keep locality I have to give up counterfactual definiteness and I have to give up local hidden variables (I'm not going to buy conspiracy theories). So finally it all comes down to "what's in a name". Local / non-local, realist / non-realist, these may be only semantic squabbles.

Yes, a lot of it is semantic. I actually like the semantics that Leifer proposes at the end, so that one can consider the wave function in MWI both real and not real. So we can eat our cake and have it. (I'm not sure whether I agree with him that MWI is technically correct, but that's a different issue.)

http://arxiv.org/abs/1311.0857
"We have arrived at the conclusion that noncontextuality must be derived in terms of an analysis of the things that objectively exist. This implies a realist view of physics, or in other words “bit from it”, which seems to conflict with “it from bit”. Fortunately, this conflict is only apparent because “it” is being used in different senses in “it from bit” and “bit from it”. The things that Wheeler classifies as “it” are things like particles, fields and spacetime. They are things that appear in the fundamental ontology of classical physics and hence are things that only appear to be real from our perspective as classical agents. He does not mention things like wavefunctions, subquantum particles, or anything of that sort. Thus, there remains the possibility that reality is made of quantum stuff and that the interaction of this stuff with our question asking apparatus, also made of quantum stuff, is what causes the answers (particles, fields, etc.) to come into being. “It from bit” can be maintained in this picture provided the answers depend not only on the state of the system being measured, but also on the state of the stuff that comprises the measuring apparatus. Thus, we would end up with “it from bit from it”, where the first “it” refers to classical ontology and the second refers to quantum stuff."
 
Last edited:
  • #187
gill1109 said:
If I want to keep locality...

It would be nice if you explained how you define locality here, so we might understand why you want to keep it at such a price.
 
  • #188
I suppose it is OK to have several definitions of reality when discussing Bell's theorem. Even if we accept experimental results at spacelike separation as real, what the theorem excludes is variables defined in the non-overlapping past light cones as being sufficient to describe the results. So in the sense that variables defined in Hilbert space are not defined in spacetime, those could be considered not real for Bell's theorem.

MWI's primary reality is Hilbert space, not spacetime, so it could be argued that it is not real in the sense of Bell excluding "local realistic variables". On the other hand, such a definition would seem to make even Bohmian Mechanics not real. However, if one allows things defined in Hilbert space to be real, then it would seem MWI and BM are both real, since the wave function really did evolve in a certain way (counterfactual definiteness).
 
  • #189
gill1109 said:
My opinion is that it will get nowhere. Of course it is not conspiratorial "at the Planck scale". But at the scale of a real-world Bell-CHSH-type experiment it would have to have become conspiratorial.

I think we mean different things by conspiracies. I agree that a theory that "explains" Bell correlations by fine-tuning the initial parameters is cheap and conspiratorial. However, if you can get the correlations from some physical mechanism that does not depend on any fine-tuning, then we are dealing with a law of physics, not a conspiracy.

As an example, if you consider a mechanical clock, the correlation between the displayed time and the alarm is a type of conspiracy. There is nothing in the physics of the mechanism that makes such a correlation inevitable. On the other hand, the directions of rotation of the wheels at the beginning and the end of a row of geared wheels are the same for an odd number of wheels and different for an even number. I would say that this type of correlation is not conspiratorial because it is generic: it applies to every such mechanism regardless of the detailed way in which it was built.

't Hooft makes clear that the theory he is pursuing has to be non-conspiratorial, the correlations should appear as a result of some generic property of the evolution of the CA.

In fact, I am absolutely certain that this approach will get nowhere. Possibly a tiny part of QM can be described in this way. But it cannot form a basis for all of conventional QM because of Bell's theorem. Or ... it can, but this will lead to "Bell's fifth position": a loophole-free and successful Bell-type experiment will never take place because the QM uncertainty relations themselves will prevent establishing the right initial conditions. The right quantum state in the right small regions of space-time.

Given the fact that this class of theories denies the statistical-independence (or free-will, or freedom) assumption, it has no a priori problem with Bell. The source and detectors are part of the same CA and there are subtle correlations between them.

't Hooft's project is to find a so-called ontological basis for QM (which is supposed to consist of CA states). In this basis all variables would commute, so there is no uncertainty. Nevertheless, the theory is still QM so it should apply to virtually everything, just like the standard model.


It would also imply that a quantum computer can never be built ... or rather, not scaled-up. As one makes computers with more and more qubits in them, quantum decoherence will take over faster, and you'll never be able to factor large integers rapidly or whatever else you want to do with them.

I don't think this is true. The idea is that a quantum computer would never outrun a classical Planck-scale computer.
 
  • #190
ueit said:
't Hooft makes clear that the theory he is pursuing has to be non-conspiratorial, the correlations should appear as a result of some generic property of the evolution of the CA.

There was a brief discussion about 't Hooft's ideas in the group "Beyond the Standard Model", but it didn't really go anywhere. Here's my objection--which I'm perfectly happy to be talked out of.

Consider a twin-pair EPR experiment with experimenters Alice and Bob. The usual assumption in discussions of hidden variables is that there are three independent "inputs": (1) Alice's setting, (2) Bob's setting, and (3) whatever hidden variables are carried along with the twin pairs. 't Hooft's model ultimately amounts to saying: (1) and (2) are not actually independent variables. Alice has some algorithm in mind for selecting her setting, and if we only knew enough about Alice's state, and the state of whatever else she's basing her decision on, then we could predict what her choice would be. Similarly for Bob. And if Alice's and Bob's choices are predictable, then it's not hard to generate a hidden variable model of the twin pairs that gives the right statistics. (The difficulty with hidden variables is that you have to accommodate all possible choices Alice and Bob might make. If you only have to accommodate one choice, it's much easier.)
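That parenthetical point can be made concrete with a toy simulation (my own sketch, not 't Hooft's construction, and not a local model in Bell's sense): if the model is told both settings in advance of generating the outcomes, a trivial assignment already reproduces the singlet correlation E(a,b) = -cos(a-b).

```python
import math
import random

random.seed(0)

def run(theta_a, theta_b):
    """One run of a toy model that knows BOTH settings in advance.
    Returns (A, B) in {-1, +1} with E[A*B] = -cos(theta_a - theta_b),
    the singlet correlation. This only works because the settings are
    available when the outcomes are generated, which is exactly why
    accommodating one known choice per run is easy."""
    a_out = random.choice([-1, 1])
    delta = theta_a - theta_b
    # Anti-correlate with probability cos^2(delta/2), so that
    # E[A*B] = sin^2(delta/2) - cos^2(delta/2) = -cos(delta).
    b_out = -a_out if random.random() < math.cos(delta / 2) ** 2 else a_out
    return a_out, b_out

# Empirical check at a CHSH-relevant angle difference of pi/8.
n = 200_000
delta = math.pi / 8
corr = sum(a * b for a, b in (run(delta, 0.0) for _ in range(n))) / n
assert abs(corr - (-math.cos(delta))) < 0.01
```

Of course, as the surrounding discussion stresses, this is no hidden variable model at all once the settings are chosen freely after the pair is emitted; it only illustrates how cheap the job becomes when the choices are predictable.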

Okay, that's plausible. Except that Alice can bring into play absolutely any other fact about the universe in making her decision about her setting. She could say: If the next batter in the Cubs game gets a hit, I'll choose setting 1, otherwise, I choose setting 2. If the hidden variable relies on knowing what Alice and Bob will choose, then potentially, it would be necessary to simulate the entire universe (or the relevant part of the backwards light cone).

A possible alternative might be to just let Alice's and Bob's results get made independently, locally, and then run time backwards and make another choice if later a conflict is discovered. That would be a real conspiracy theory, but it would be computationally more tractable, maybe.
 
  • #191
morrobay said:
[..] One could also think of complicated scenarios where local unknown physical variables may couple together in a way that will give the (false) impression of non-local results. ( In part. Laloe, 2008)[..].
http://journals.aps.org/pra/abstract/10.1103/PhysRevA.77.022108
Very interesting - macroscopic Bell-type experiments, without counterfactual definiteness.
Thanks!

But what did you mean with "(false) impression"?
 
  • #192
stevendaryl said:
There was a brief discussion about 't Hooft's ideas in the group "Beyond the Standard Model", but it didn't really go anywhere. Here's my objection--which I'm perfectly happy to be talked out of.

Consider a twin-pair EPR experiment with experimenters Alice and Bob. The usual assumption in discussions of hidden variables is that there are three independent "inputs": (1) Alice's setting, (2) Bob's setting, and (3) whatever hidden variables are carried along with the twin pairs. 't Hooft's model ultimately amounts to saying: (1) and (2) are not actually independent variables. Alice has some algorithm in mind for selecting her setting, and if we only knew enough about Alice's state, and the state of whatever else she's basing her decision on, then we could predict what her choice would be. Similarly for Bob. And if Alice's and Bob's choices are predictable, then it's not hard to generate a hidden variable model of the twin pairs that gives the right statistics. (The difficulty with hidden variables is that you have to accommodate all possible choices Alice and Bob might make. If you only have to accommodate one choice, it's much easier.)

I would say that none of the 3 variables are independent. This is my understanding of his model:

The CA is an array of Planck-sized cubes filling the entire universe. Each cube has some properties (say, color). At each tick of the clock, the color of each cube changes following some algorithm, the input values being the color of the cube and of its surrounding cubes.

Now, this produces all sorts of patterns, and those patterns correspond to quantum particles and ultimately to the macroscopic objects that are used to perform a Bell test.

The important thing here is that those patterns can only appear in some configurations (because of the CA algorithm). These configurations correspond to the predicted quantum statistics of the Bell test. In other words it is mathematically impossible to get results that are in contradiction with QM.
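A toy version of that kind of deterministic, local update rule can be sketched in a few lines (illustrative only: a 1D array rather than 3D cubes, and the rule number is an arbitrary choice, nothing to do with 't Hooft's actual model):

```python
# Elementary 1D cellular automaton: each cell's next state depends only
# on itself and its two neighbours, read off a fixed local update table.
RULE = 110  # Wolfram-style rule number encoding the 8-entry table

def step(cells):
    """Advance the automaton one tick with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # Neighbourhood (left, self, right) read as a 3-bit index.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> idx) & 1)
    return out

# Determinism: the same initial state always yields the same history,
# so there is no freedom anywhere in the evolution.
state = [0] * 20
state[10] = 1
history_a = [state]
for _ in range(5):
    history_a.append(step(history_a[-1]))

history_b = [state]
for _ in range(5):
    history_b.append(step(history_b[-1]))

assert history_a == history_b
```

The point of the sketch is only the structure: the global pattern, including anything we would call an experimenter's "choice", is fixed once the initial state and the local rule are fixed.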

Okay, that's plausible. Except that Alice can bring into play absolutely any other fact about the universe in making her decision about her setting. She could say: If the next batter in the Cubs game gets a hit, I'll choose setting 1, otherwise, I choose setting 2. If the hidden variable relies on knowing what Alice and Bob will choose, then potentially, it would be necessary to simulate the entire universe (or the relevant part of the backwards light cone).

I think that you see this backwards. It is the CA patterns from which Alice and her "decision" emerge, not the other way around. Just as you cannot make a decision which leads to a violation of conservation laws, you cannot make decisions contrary to the rules of the CA. Say that you are asked whether you want tea or coffee, and a brain scan shows that in order to choose tea, some electrons in your brain would need to violate the momentum conservation law. Guess what you will choose!

A possible alternative might be to just let Alice's and Bob's results get made independently, locally, and then run time backwards and make another choice if later a conflict is discovered. That would be a real conspiracy theory, but it would be computationally more tractable, maybe.

How are we supposed to "run time backwards"? I do not understand this.
 
  • #193
ueit said:
I think that you see this backwards. It is the CA patterns from which Alice and her "decision" emerge, not the other way around. Just as you cannot make a decision which leads to a violation of conservation laws, you cannot make decisions contrary to the rules of the CA. Say that you are asked whether you want tea or coffee, and a brain scan shows that in order to choose tea, some electrons in your brain would need to violate the momentum conservation law. Guess what you will choose!

I know that Alice's choice isn't going to violate the laws of physics. But as I said, Alice can certainly make a meta-choice: "If in the baseball game the batter gets a hit, I'm going to drink coffee. Otherwise, I'm going to drink tea." That doesn't make the choice any less deterministic, but it means that predicting her choice would involve more than knowing what's inside her brain. You would also have to know what's going on in the baseball game miles away.

Potentially, the choice of Alice's and Bob's settings in an EPR experiment could depend on the rest of the universe. So to the extent that their settings and their results are co-determined, it would require arranging things with distant baseball teams, as well as Alice and Bob. Potentially, the entire universe would have to be fine-tuned to get the right statistics for EPR-type experiments.

Suppose Alice announces: "I will measure the spin in the x-direction if the next batter gets a hit. Otherwise, I will measure the spin in the y-direction." Bob announces: "I will measure the spin in the x-direction if the juggler I'm watching drops the ball. Otherwise, I will measure the spin in the y-direction." So we generate a twin-pair, and Alice measures the spin in one direction, and Bob measures the spin in a possibly different direction. 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.
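The arithmetic that makes some such conspiracy necessary is easy to verify by brute force: if all four quantities A, A', B, B' have definite values ±1 in every run, the CHSH combination can never exceed 2, whereas QM reaches 2√2 (a quick check, independent of any particular model):

```python
from itertools import product

# If A, A', B, B' all take definite values in {-1, +1} in each run, the
# quantity S = A*B + A*B' + A'*B - A'*B' factors as A*(B + B') + A'*(B - B'),
# and one of the two brackets is always zero, so |S| = 2 exactly.
values = [
    a * b + a * bp + ap * b - ap * bp
    for a, ap, b, bp in product([-1, 1], repeat=4)
]
assert max(abs(s) for s in values) == 2  # never 2*sqrt(2) ~ 2.83
```

So any deterministic account that assigns all four values per run has to break the statistical-independence assumption somewhere, which is exactly the door 't Hooft walks through.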

How are we supposed to "run time backwards"? I do not understand this.

I didn't say that WE are the ones doing it. The universe could work this way: Alice's result is generated under an assumption (a pure guess) as to what Bob's setting and result will be. Bob's result is generated under an assumption as to what Alice's setting and result will be. If it later turns out, after they compare results, that the guesses were wrong, you just fix Alice's and Bob's memories so that they have false memories of getting different results. I don't see how this is any less plausible than 't Hooft's model.
 
  • #194
stevendaryl said:
... 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.

This is a point I have tried to make in the past about superdeterministic programs à la 't Hooft: every particle in every spot in the universe must have a copy of the complete (and very large) playbook if there is to be local-only interaction determining the individual outcomes. That is the only way, once you consider all the different "choice" permutations (juggler, player, or a near-infinite number of other combinations), that the conspiracy can operate. After all, the outcomes must otherwise appear random (when considered individually). I presume that random element is in the playbook too.
 
  • #195
I saw an article by Zeilinger et al. which shows experimentally that a large class of non-local realistic models is incompatible with QM:

http://www.nature.com/nature/journal/v446/n71p38/abs/nature05677.html

"giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are abandoned."

This point is also in the EPR article: QM disturbs the system but still permits prediction with certainty, since the wave functions before and after the measurement are respectively non-separable and separable.

I suppose EPR overlooked this in the quantum formalism, since it seems contradictory to disturb a system in an uncontrolled way and yet be able to predict with certainty...
 
Last edited by a moderator:
  • #197
stevendaryl said:
I know that Alice's choice isn't going to violate the laws of physics. But as I said, Alice can certainly make a meta-choice: "If in the baseball game the batter gets a hit, I'm going to drink coffee. Otherwise, I'm going to drink tea." That doesn't make the choice any less deterministic, but it means that predicting her choice would involve more than knowing what's inside her brain. You would also have to know what's going on in the baseball game miles away.

Potentially, the choice of Alice's and Bob's settings in an EPR experiment could depend on the rest of the universe. So to the extent that their settings and their results are co-determined, it would require arranging things with distant baseball teams, as well as Alice and Bob. Potentially, the entire universe would have to be fine-tuned to get the right statistics for EPR-type experiments.

Suppose Alice announces: "I will measure the spin in the x-direction if the next batter gets a hit. Otherwise, I will measure the spin in the y-direction." Bob announces: "I will measure the spin in the x-direction if the juggler I'm watching drops the ball. Otherwise, I will measure the spin in the y-direction." So we generate a twin-pair, and Alice measures the spin in one direction, and Bob measures the spin in a possibly different direction. 't Hooft is saying that the four variables: Alice's direction, Bob's direction, Alice's result, Bob's result, are not a case of the first two causing the last two, but of all four being determined by the initial state of the cellular automaton. But because of the particular way that Alice and Bob choose their settings, he also has to include the baseball player and the juggler in the conspiracy. Potentially the state of the entire rest of the universe might be involved in computing whether Alice measures spin-up.

You have to understand that in a CA there are no free parameters. Everything is related to everything else. The fact that Alice "decides" to make a "meta-choice" is quite irrelevant. Her state was already related to that baseball game and to Bob's juggler, and to whatever else you may think of. It might look somewhat counterintuitive, but this feature is shared with very respectable physical theories, like general relativity or classical electrodynamics.

In fact, cellular automata are used for exactly that: simulations of various field theories. From the point of view of Bell's theorem, more specifically, from the point of view of the "freedom" assumption, the CA proposal is in the same class as all local field theories.

From the point of view of their mathematical formulation, all these theories are as superdeterministic and conspiratorial as a CA. The only difference resides in their domain. GR or classical electrodynamics do not describe everything, and especially not human brains. A CA does that (hopefully).

I maintain that for systems which are fully described by these theories, the freedom assumption does not hold. And it is easy to see why, and why this should not be perceived as a conspiracy.

Let's assume, for the sake of the argument, that our galaxy is described completely by GR (we ignore supernovas and other events involving other forces). Let's focus now on Earth and on another planet which is situated symmetrically, in the opposite arm of the galaxy, call it Earth_B. Our experiment involves only our observation of the trajectories of these two planets.
GR is a local theory, therefore the trajectory of Earth during our observation can be perfectly predicted from the local space curvature. The same goes for Earth_B. I need make no reference to Earth_B when describing what Earth is doing, and I couldn't care less about Earth while describing Earth_B. They are so far apart that no signal can travel between them during our experiment, and even in that case, the effect of one on the other would be irrelevant at such a distance.

So, we should dismiss any "conspiracies" and proclaim the trajectories of the two planets statistically independent, right? Or, if you want, we may let them depend on their solar systems, or even on their respective arms of the galaxy. They are really independent now, right?

But when the two trajectories are compared we see a perfect correlation between them. How can we explain that? It must be a non-local effect, or the universe must go forward and back in time, or our logic is flawed, right?

Obviously, none of these solutions is true. The fact that was forgotten is that, at the beginning of the experiment, the states of the two planets (together with the local space curvature) were already correlated, and they have been so since the Big Bang.

So, the states of Alice, Bob, the particle source, the baseball players, and the juggler are correlated even before the experiment begins. And they will remain so.

I didn't say that WE are the ones doing it. The universe could work this way: Alice's result is generated under an assumption (a pure guess) as to what Bob's setting and result will be. Bob's result is generated under an assumption as to what Alice's setting and result will be. If it later turns out, after they compare results, that the guesses were wrong, you just fix Alice's and Bob's memories so that they have false memories of getting different results. I don't see how this is any less plausible than t'Hooft's model.

I find 't Hooft's proposal much more acceptable.
 
  • #198
ueit said:
You have to understand that in a CA there are no free parameters. Everything is related to everything else. The fact that Alice "decides" to make a "meta-choice" is quite irrelevant. Her state was already related to that baseball game and to Bob's juggler, and to whatever else you may think of. It might look somewhat counterintuitive, but this feature is shared with very respectable physical theories, like general relativity or classical electrodynamics.

I understand that, but as I said, Alice's choice, while not free, could involve essentially the rest of the universe (or at least everything in the backward light cone of her decision event). So for a cellular automaton to take advantage of this determinism, it would have to take into account everything else in the universe. As Dr. Chinese said, it would be necessary for every particle in the universe to have in essence a "script" for what everything else in the universe was going to do. That's not impossible, but it's not a very attractive model, it seems to me.
 
  • #199
ueit said:
In fact, cellular automata are used for exactly that: simulations of various field theories. From the point of view of Bell's theorem, more specifically, from the point of view of the "freedom" assumption, the CA proposal is in the same class as all local field theories.

From the point of view of their mathematical formulation, all these theories are as superdeterministic and conspiratorial as a CA.

No, that's not true. The evolution of a classical field does not depend on knowing what's happening in distant regions of spacetime. Classical E&M is not superdeterministic. It's local and deterministic.
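The locality of a classical field shows up directly when its evolution is discretized: each point's next value depends only on its own history and its immediate neighbours. A minimal sketch for the 1D wave equation (my own finite-difference toy; the grid, parameters, and zero boundaries are arbitrary illustrative choices):

```python
# Leapfrog scheme for the 1D wave equation u_tt = c^2 * u_xx.
# The update at grid point i uses only points i-1, i, i+1, so no
# reference to distant regions of the grid is ever needed.
c, dx, dt = 1.0, 0.1, 0.05          # chosen so c*dt/dx <= 1 (CFL condition)
r2 = (c * dt / dx) ** 2

def step(prev, curr):
    """One time step with fixed (zero) boundaries."""
    nxt = curr[:]  # boundary points keep their current values
    for i in range(1, len(curr) - 1):
        nxt[i] = (2 * curr[i] - prev[i]
                  + r2 * (curr[i + 1] - 2 * curr[i] + curr[i - 1]))
    return nxt

# A localized bump spreads at finite speed: after one step only the
# immediate neighbours of the bump have changed.
n = 101
prev = [0.0] * n
curr = [0.0] * n
curr[50] = 1.0
nxt = step(prev, curr)
assert nxt[0] == 0.0 and nxt[90] == 0.0   # distant points untouched
assert nxt[49] != 0.0                      # neighbouring point affected
```

Deterministic, yes; but nothing in the update rule needs to "know" what is happening far away, which is the distinction being drawn here between ordinary determinism and superdeterminism.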
 
  • #200
stevendaryl said:
No, that's not true. The evolution of a classical field does not depend on knowing what's happening in distant regions of spacetime. Classical E&M is not superdeterministic. It's local and deterministic.

Just about any nonlocal theory can be turned into a local theory by invoking superdeterminism. So superdeterminism makes the distinction between local and nonlocal almost meaningless.
 