Does the Bell theorem assume reality?

Summary
The discussion centers on the implications of Bell's theorem regarding the nature of reality and locality in quantum mechanics. A significant divide exists among physicists on whether Bell's theorem assumes reality, with some arguing it proves nonlocality without such an assumption, while others contend that reality is indeed a foundational aspect. Roderich Tumulka's analysis distinguishes four notions of reality, concluding that only the mildest form, referred to as (R4), is assumed by Bell's theorem. The debate includes whether abandoning (R4) is feasible, with some interpretations suggesting it is possible, while others maintain that the assumption of reality is integral to understanding Bell's inequalities. Ultimately, the conversation highlights the complexities surrounding the philosophical interpretations of quantum mechanics and the assumptions underlying Bell's theorem.
  • #91
DarMM said:
I have doubts about this though, for typical ##\psi##-epistemic reasons. The most basic being that classical uncertainty about ##\psi## doesn't manifest as something like ##\mathcal{L}^{1}(\mathcal{H})##, which you'd expect if ##\psi## was a real object you were ignorant of (because this is purely classical ignorance). Rather ##\mathcal{H}## is a subset of the observable algebra's dual (its boundary) and some of that dual has terms that mix classical and quantum probability in odd ways. So you can have a mixture ##\rho## which could be considered a mix of two states ##\psi_1## and ##\psi_2## or a mix of ##\psi_3## and ##\psi_4## and it's the exact same mixture. Hard to understand if ##\psi## is ontic (though not a killing argument of course), it makes pure states like ##\psi## just seem like a limiting type of probability assignment, not ontic.
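
A minimal numerical check of the ensemble ambiguity described in the quote (my own sketch, assuming NumPy; the state labels mirror the ##\psi_1,\dots,\psi_4## above):

```python
import numpy as np

# Two different ensembles of qubit pure states.
ket0 = np.array([1.0, 0.0], dtype=complex)   # psi_1
ket1 = np.array([0.0, 1.0], dtype=complex)   # psi_2
plus = (ket0 + ket1) / np.sqrt(2)            # psi_3
minus = (ket0 - ket1) / np.sqrt(2)           # psi_4

def proj(psi):
    """Rank-one projector |psi><psi|."""
    return np.outer(psi, psi.conj())

rho_12 = 0.5 * proj(ket0) + 0.5 * proj(ket1)   # 50/50 mix of psi_1 and psi_2
rho_34 = 0.5 * proj(plus) + 0.5 * proj(minus)  # 50/50 mix of psi_3 and psi_4

print(np.allclose(rho_12, rho_34))  # True: the exact same mixture rho
```

No measurement can distinguish the two preparations, which is the point being made.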

If it's not ontic at least it's objective — something all observers have compatible beliefs about. Perhaps the missing piece is that observers are not only classically uncertain about ##\psi##, but also simultaneously occupy multiple positions in it. I mean the concept of self-locating uncertainty that helped Carroll derive the Born rule. If an observer is characterized by a mixed state exactly equal to both ##\psi_{1,2}## and ##\psi_{3,4}##, assuming they exist somewhere, then you can't say this copy is in one or the other, but that two copies of him occupy those two mixtures.

Just to help orient a reading of it: what does he think the world is like underneath the reasoning of agents? I see that our typical "laws" come about as limiting behaviour in subjective probability assignments, but does he make any conjecture about the underlying world?

It assumes bit-string physics. The observer is in some finite (or countable) strings of bits on a Turing machine. One interesting result of his analysis is that computation is free - only the complexity of the algorithm matters. Of course these bit-strings could even exist in classical computers, so no underlying world can really be picked out.
 
  • Like
Likes DarMM
  • #92
Very interesting posts!

akvadrako said:
If it's not ontic at least it's objective — something all observers have compatible beliefs about. Perhaps the missing piece is that observers are not only classically uncertain about ##\psi##, but also simultaneously occupy multiple positions in it. I mean the concept of self-locating uncertainty that helped Carroll derive the Born rule. If an observer is characterized by a mixed state exactly equal to both ##\psi_{1,2}## and ##\psi_{3,4}##, assuming they exist somewhere, then you can't say this copy is in one or the other, but that two copies of him occupy those two mixtures.
I see your point.

First I would just say, I don't think Carroll derives the Born rule; I agree with the criticisms of his proof by Kent and Vaidman. Vaidman's attempt at a self-locating uncertainty derivation is much better, I think. However, it's still circular, as it requires decoherence to have occurred, which itself requires the Born rule. Still, if you accept that decoherence can be explained by some other mechanism, it seems to be a pretty good proof.

The only attempt at getting decoherence without the Born rule is the Quantum Darwinism program of Zurek, but it hasn't quite achieved this due to circularity issues related to the environment (incredibly strong assumptions about the form of the environment essentially put in a good amount of decoherence by hand).

So as of yet, I don't think there is a solid derivation of the Born rule.

Secondly, I'd still have issues with ##\psi## being ontic given the above. Consider a state in quantum field theory for an inertial and accelerating observer. The same state can be a pure state for the inertial observer and a mixed state for an accelerating observer (Unruh effect), even though neither have performed measurements that would put copies of themselves in different branches of the state. This means a single ontic ##\psi## for an inertial observer is a mixed state of multiple ontic ##\psi## for an accelerating observer, even though there is no cause for self-locating uncertainty here.
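
For reference (a standard result, not stated in the post): the restriction of the Minkowski vacuum to a Rindler wedge is the thermal state
$$\rho_{\text{wedge}} \;\propto\; e^{-2\pi K},$$
where ##K## is the generator of the boosts preserving the wedge (the Bisognano-Wichmann theorem), so the globally pure vacuum is literally a mixed state for the uniformly accelerating observer.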

This relates to another problem I have. Algebraic Field Theory, especially QFT in curved spacetime, shows that the Hilbert space structure is derivative, not primary, in quantum theory. Primary is the observable algebra ##\mathcal{A}## and its dual the space of algebraic states ##\mathcal{A}^{*}##. A Hilbert space comes about when, given a specific ##\rho \in \mathcal{A}^{*}##, the GNS theorem shows that you can construct a Hilbert space ##\mathcal{H}## in which ##\rho## is represented as a vector ##\psi## and ##\rho(A)## is represented by ##\langle\psi,A\psi\rangle##. However, different observers will construct different Hilbert spaces, and the theory has several possible non-unitarily-equivalent Hilbert spaces. I find it hard to think ##\psi \in \mathcal{H}## is ontic.
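
Spelled out, the GNS statement being used (added for reference): a state ##\omega \in \mathcal{A}^{*}## determines a triple ##(\mathcal{H}_\omega, \pi_\omega, \Omega_\omega)## with
$$\omega(A) \;=\; \langle \Omega_\omega, \pi_\omega(A)\,\Omega_\omega \rangle \qquad \text{for all } A \in \mathcal{A},$$
and for systems with infinitely many degrees of freedom, different states can yield unitarily inequivalent representations ##\pi_\omega##.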
 
  • Like
Likes akvadrako
  • #93
N88 said:
Well, for me at least: not any meaningful version of physical reality when you are writing in the context of EPRB.

Here's my reason. From high-school algebra, without any reference to EPRB, Bell, etc., we irrefutably obtain:

##|E(a, b) - E(a,c)| \leq 1 - E(a,b)E(a,c). \qquad (1)##
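
For readers checking the "high-school algebra" claim: writing ##x = E(a,b)## and ##y = E(a,c)##, both in ##[-1,1]##, inequality (1) follows from
$$(1 - xy)^2 - (x - y)^2 \;=\; (1 - x^2)(1 - y^2) \;\ge\; 0,$$
together with ##1 - xy \ge 0##.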

Compare this with Bell's famous 1964 inequality:

##|E(a, b) - E(a,c)| \leq 1 + E(b,c). [sic] \qquad (2)##

Given [as I read him, p.195] that Bell's aim was to provide "a more complete specification of EPRB by means of parameter ##\lambda##": I suggest that his supporters should pay more attention to his 1990 suggestion that maybe there was some silliness somewhere.

For example, let's rewrite (2). We find:

##|E(a, b) - E(a,c)| - E(b,c) \leq 1. [sic] \qquad (3)##

But, under EPRB, that upper bound is ##\tfrac{3}{2}.##

Thus, in that Bell uses inequality (2) as proof of his theorem: I believe that Bell's writings need to be challenged --- without any reference to nonlocality, QBism, BWIT, AAD, MW, etc. [which, in my view, are also silly].

I am not at all sure what point you are making. Yes, Bell's inequality is just a mathematical fact, given certain assumptions. The question is how to interpret the fact that experimentally the inequality is violated. That's where nonlocality (or some other weird possibility) comes in.

When you say "Bell's writings need to be challenged", I'm not sure what specific claims by Bell you are objecting to.
 
  • #95
stevendaryl said:
I am not at all sure what point you are making. Yes, Bell's inequality is just a mathematical fact, given certain assumptions. The question is how to interpret the fact that experimentally the inequality is violated. That's where nonlocality (or some other weird possibility) comes in.

When you say "Bell's writings need to be challenged", I'm not sure what specific claims by Bell you are objecting to.
The point I seek to make is that Bell's inequality is a mathematical fact of limited validity.

1. It is algebraically false.

2. It is false under EPRB (yet Bell was seeking a more complete specification of EPRB).

3. So IF we can pinpoint where Bell's formulation departs from #1 and #2, which I regard as relevant boundary conditions, THEN we will understand the reality that Bell is working with.

4. Now IF we number Bell's 1964 math from the bottom of p.197: (14), (14a), (14b), (14c), (15): THEN Bell's realism enters between (14a) and (14b) via his use of his (1).

So the challenge for me is to understand the reality that he introduces via the relation ...

##B(b,\boldsymbol{\lambda})B(b,\boldsymbol{\lambda}) = 1. \qquad(1)##

... since this is what is used --- from Bell's (1) --- to go from (14a) to (14b).

And that challenge arises because it seems to me that Bell breaches his "same instance" boundary condition; see the last line on p.195. That is, from LHS (14a), I see two sets of same-instances: the set over ##(a,b)## and the set over ##(a,c)##. So, whatever Bell's realism [which is the question], it allows him to introduce a third set of same-instances, that over ##(b,c)##.

It therefore seems to me that Bell is using a very limited classical realism: almost as if he had a set of classical objects that he can non-destructively test repeatedly, or he can replicate identical sets of objects three times; though I am open to -- and would welcome -- other views.

Thus, from my point of view: neither nonlocality nor any weirdness gets its foot in the door: for [it seems to me], it all depends on how we interpret (1).

PS: I do not personally see that Bell's use of (1) arises from "EPR elements of physical reality." But I wonder if that is how Bell's use of his (1) is interpreted?

For me: "EPR elements of physical reality" correspond [tricky word] to beables [hidden variables] which I suspect Bell may have been seeking in his quest for a more complete specification of EPRB. However, toward answering the OP's question, how do we best interpret the reality that Bell introduces in (1) above?

Or, perhaps more clearly: the reality that Bell assumes is to be found in his move from (14a) to (14b). HTH.
 
  • #96
I'm still not sure I understand what you're saying. To me, the key move in Bell's proof is to assume that probabilities "factor" when all relevant causal information is taken into account: He assumed that

##P(A, B | a, b) = \sum_\lambda P(\lambda) P(A | a, \lambda) P(B | b, \lambda)##

Basically, the assumption is that all correlations between two events can be explained by a common causal influence on both of them.
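
As an illustration of why this factorization has teeth (my own sketch, not from the thread: NumPy assumed, and using the CHSH form of the inequality rather than Bell's original): every model of the factorized form stays within the CHSH bound of 2, while the singlet-state correlations reach ##2\sqrt{2}##.

```python
import numpy as np

rng = np.random.default_rng(0)

def E_lhv(a, b, n=200_000):
    """Correlation E(a,b) in a toy factorized model (invented for illustration):
    lambda is a random unit vector, Alice outputs sign(lambda . a), Bob outputs
    -sign(lambda . b), giving perfect anti-correlation at equal settings."""
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    return np.mean(np.sign(lam @ a) * -np.sign(lam @ b))

def E_qm(a, b):
    """Singlet-state prediction."""
    return -np.dot(a, b)

def chsh(E, a, a2, b, b2):
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Measurement axes at 0, 90, 45 and 135 degrees in a plane.
ax = lambda t: np.array([np.cos(t), np.sin(t), 0.0])
a, a2, b, b2 = ax(0.0), ax(np.pi / 2), ax(np.pi / 4), ax(3 * np.pi / 4)

print(chsh(E_lhv, a, a2, b, b2))  # ~2.0: saturates but never exceeds the bound of 2
print(chsh(E_qm, a, a2, b, b2))   # ~2.83 = 2*sqrt(2)
```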
 
  • #97
stevendaryl said:
To me, the key move in Bell's proof is to assume that probabilities "factor" when all relevant causal information is taken into account: He assumed that

##P(A, B | a, b) = \sum_\lambda P(\lambda) P(A | a, \lambda) P(B | b, \lambda)##

Basically, the assumption is that all correlations between two events can be explained by a common causal influence on both of them.

This is Bell's condition that the setting at A does not affect the outcome at B, and vice versa. You could call that the Locality condition. The other one is the counterfactual condition, or Realism. Obviously, the standard and accepted interpretation of Bell is that no Local Realistic theory can produce the QM results. So both of these - Locality and Realism - must be present explicitly as assumptions.
 
  • #98
N88 said:
... almost as if he had a set of classical objects that he can non-destructively test repeatedly, or he can replicate identical sets of objects three times...

If you believe in classical realism, you don't need to talk about "non-destructive" testing, because the values pre-exist. And if they pre-exist, well... what are the values? There are none that reproduce the QM expectation values.

So you have to commit. Do they exist (independent of measurement)? Or don't they? As I read it, you are taking both sides.
 
  • #99
DarMM said:
Rovelli's Relational QM
After taking a look at his 1996 paper, I should say I have finally found my favorite interpretation. I hope there has been some progress since then. Does anyone know about any recent papers on this?
 
  • Like
Likes *now*
  • #100
DarMM said:
Secondly, I'd still have issues with ##\psi## being ontic given the above. Consider a state in quantum field theory for an inertial and accelerating observer. The same state can be a pure state for the inertial observer and a mixed state for an accelerating observer (Unruh effect), even though neither have performed measurements that would put copies of themselves in different branches of the state. This means a single ontic ##\psi## for an inertial observer is a mixed state of multiple ontic ##\psi## for an accelerating observer, even though there is no cause for self-locating uncertainty here.

I don't have much to say about the other points, so I'll just comment on this one. How could a mixed state not imply multiple copies of an observer, given unitary evolution? It would seem to require that the observer is both entangled with a qubit representing a future measurement and not entangled with it. In more general terms, I would say SLU always applies to all observers, because there is a lot about their environment they are uncertain about.
 
  • #101
microsansfil said:
It seems that for QBism, quantum physics does not require non-locality. Non-locality is not a fact, but the result of an interpretation of physical theory. An Introduction to QBism with an Application to the Locality of Quantum Mechanics.
A quote from the paper you link:
"QBist quantum mechanics is local because its entire purpose is to enable any single agent to organize her own degrees of belief about the contents of her own personal experience."

My translation of this is the following: Sure, there is objective reality, but it's just not described by (QBist) QM. The things which are described by QM do not involve objective reality. Objective reality, since it exists, is non-local as proved by Bell, but QM as a theory with a limited scope is a local theory.
 
  • #102
stevendaryl said:
I'm still not sure I understand what you're saying. To me, the key move in Bell's proof is to assume that probabilities "factor" when all relevant causal information is taken into account: He assumed that

##P(A, B | a, b) = \sum_\lambda P(\lambda) P(A | a, \lambda) P(B | b, \lambda)##

Basically, the assumption is that all correlations between two events can be explained by a common causal influence on both of them.
In offering an answer to the OP, I was expressing my view that Bell assumes reality in his move from (14a) to (14b). It seems to me that it was the result of his (15) that Bell regarded as the source and the proof of his theorem.

In my view, Bell's expression that "probabilities factor ..." came later as he refined his definition of locality.

So I think it would help the OP and myself if we could learn how you, Dr Chinese, etc., interpret the reality that Bell is defining in his move from (14a) to (14b).

It is widely used in text-books. But, in the ones I've seen, it is used mathematically without explanation of the reality that Bell is trying to capture.

PS: I don't see that he successfully captures EPR's "elements of physical reality". He says (p.195) that he was seeking a more complete specification of EPRB via λ (as I read him).
 
  • #103
DarMM said:
Well you're not going to like it, but they say the world is local because there are no mathematical variables describing it, i.e. no ##\lambda##, so no implications from Bell's theorem.
Regarding this, I think there are two types of QBists. One type says that there is no ##\lambda## in Nature. Those deny the existence of objective reality. Another type says that there is objective reality, so there is ##\lambda## in Nature, but there is no ##\lambda## in a specific theory of Nature that we call QBist QM.
 
  • #104
N88 said:
So I think it would help the OP and myself if we could learn how you, Dr Chinese, etc., interpret the reality that Bell is defining in his move from (14a) to (14b).
It is widely used in text-books. But, in the ones I've seen, it is used mathematically without explanation of the reality that Bell is trying to capture.

"From a classical standpoint we would imagine that each particle emerges from the singlet state with, in effect, a set of pre-programmed instructions for what spin to exhibit at each possible angle of measurement, or at least what the probability of each result should be…….

From this assumption it follows that the instructions to one particle are just an inverted copy of the instructions to the coupled particle……..

Hence we can fully specify the instructions to both particles by simply specifying the instructions to one of the particles for measurement angles ranging from 0 to π……….
"

see: https://www.mathpages.com/home/kmath521/kmath521.htm
 
  • #105
N88 said:
In offering an answer to the OP, I was expressing my view that Bell assumes reality in his move from (14a) to (14b). It seems to me that it was the result of his (15) that Bell regarded as the source and the proof of his theorem.

Well, I don't have Bell's paper in front of me, so that doesn't help. However, Wikipedia has derivations of the Bell inequality and the related CHSH inequality.

I don't know what 14a and 14b refer to. I see this paper of Bell's, posted by Dr. Chinese: http://www.drchinese.com/David/Bell_Compact.pdf
but it doesn't have a 14a and 14b.

N88 said:
PS: I don't see that he successfully captures EPR's "elements of physical reality". He says (p.195) that he was seeking a more complete specification of EPRB via λ (as I read him).

Well, I think that Einstein et al were reasoning along the lines of: If it is possible, by measuring a property of one particle, to find out the value of a corresponding property of another, far distant particle, then the latter property must have already had a value. Specifically, by measuring her particle's spin along the z-axis, Alice immediately learns what Bob will measure for the spin of his particle along the z-axis. (The original EPR argument was about momenta, rather than spins, but the principle is the same.) So to EPR, this either means that (1) Alice's measurement affects Bob's measurement (somehow, Bob's particle is forced to be spin-down along the z-axis by Alice's measurement of her particle), or (2) Bob's particle already had the property of being spin-down along the z-axis, before Alice even performed her measurement.

So EPR's "elements of reality" when applied to the measurement of anti-correlated spin-1/2 particles would imply (under the assumption that Alice and Bob are going to measure spins along the z-axis) that every particle already has a definite value for "the spin in the z-direction". If you furthermore assume that Alice and Bob are free to choose any axis they like to measure spins relative to (I don't know if the original EPR considered this issue), then it means that for every possible direction, the particle already has a value for the observable "the spin in that direction".

Bell captured this intuition by assuming that every spin-1/2 particle produced in a twin-pair experiment has an associated parameter ##\lambda## which captures the information of the result of a spin measurement in an arbitrary direction. The functions ##A(\overrightarrow{a}, \lambda)## and ##B(\overrightarrow{b}, \lambda)## are assumed to give the values for Alice's measurement along axis ##\overrightarrow{a}## and Bob's measurement along axis ##\overrightarrow{b}##, given ##\lambda##.

So it seems to me that ##\lambda## directly captures EPR's notion of "elements of reality". ##\lambda## is just the pre-existing value of the spin along an arbitrary direction.
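
Written out, the combination being described is Bell's expectation value
$$E(a,b) \;=\; \int d\lambda\, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda), \qquad A, B \in \{+1, -1\},$$
with perfect anti-correlation ##B(a,\lambda) = -A(a,\lambda)## when both sides use the same axis.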
 
Last edited:
  • #106
DrChinese said:
This is Bell's condition that the setting at A does not affect the outcome at B, and vice versa. You could call that the Locality condition. The other one is the counterfactual condition, or Realism. Obviously, the standard and accepted interpretation of Bell is that no Local Realistic theory can produce the QM results. So both of these - Locality and Realism - must be present explicitly as assumptions.

It seems to me that there are two steps involved. One has nothing to do with locality, but is instead Reichenbach's Common Cause Principle (whether or not Bell intended this): the assumption that if two things are correlated, then there exists a "common cause" for both. I gave the example earlier of twins: You randomly select a pair of 15-year-old twins out of the population, and then you separately test them for their ability to play basketball. Doing this for many pairs, you will find (probably--I haven't done it) that their abilities are correlated. The probability that they both are good at basketball is unequal to the square of the probability that one of them is good at basketball. Reichenbach's Common Cause Principle would imply that there is some common causal factor affecting both twins' basketball-playing abilities. Maybe it's genetics, maybe it's parenting style, maybe it's where they live, maybe it's what school they went to, etc. If we let ##\lambda## be the collection of all such causal factors, then it should be the case that, controlling for ##\lambda##, there is no correlation between the twins' basketball-playing abilities.

To me, that's where factorizability comes in. It doesn't have anything to do with locality, yet, because the common factors might conceivably include something happening on a distant star a billion light-years away. Locality is the additional assumption that the common causal factors ##\lambda## must be in the intersection of the backwards light cones of the two tests of the boys' basketball-playing ability.

Factorizability is not particularly about locality, but locality dictates what can go into the common factors.
 
  • #107
stevendaryl said:
Factorizability is not particularly about locality, but locality dictates what can go into the common factors.

An example that maybe illustrates the issue of factorizability is a pair of correlated coins. You have two identical coins. Examined separately, they seem unremarkable---they each seem to have a 50/50 chance of producing heads or tails when flipped. But they have a remarkable correlation: No matter how far separated the two coins are, the ##n^{th}## flip of one coin always produces the same result as the ##n^{th}## flip of the other coin. We can characterize the situation by:
  1. ##P_1(H) = \frac{1}{2}, P_1(T) = \frac{1}{2}##
  2. ##P_2(H) = \frac{1}{2}, P_2(T) = \frac{1}{2}##
  3. ##P(H, H) = \frac{1}{2}, P(H, T) = 0, P(T, H) = 0, P(T, T) = \frac{1}{2}##
If the coins were uncorrelated, then the probability of both giving a result of ##H## would be the product of the individual probabilities, 1/4. Instead, it's 1/2.

So the probabilities don't factor:
##P(H,H) \neq P_1(H) P_2(H)##

Reichenbach's common cause principle would tell us that there is something funny going on with these coins. It would suggest that immediately prior to flipping the coins for the ##n^{th}## time, there is some hidden state information affecting the coins, influencing the results of one or the other or both flips. In other words, there is some state variable ##\lambda_n## such that if we knew the value of ##\lambda_n##, then we could predict the result of the ##n^{th}## coin flip.

This is not a locality assumption. A priori, ##\lambda_n## might be the conjunction of conditions in the neighborhoods of both coins.

Of course, this toy example doesn't violate Bell's inequality, because there actually is a "hidden variable" explanation for the correlations. For example, we could propose that the result of the ##n^{th}## coin flip is determined by the binary expansion of some fixed real number such as ##\pi##. That would explain the correlations without the need for nonlocal interactions.
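
A minimal sketch of that hidden-variable account (my illustration; a seeded pseudorandom bit string stands in for the binary expansion of ##\pi##): once ##\lambda_n## is given, each flip is fully determined, so the perfect correlation needs no communication between the coins.

```python
import random

# The hidden variable: a shared bit string fixed before any flips.
random.seed(42)  # the seed plays the role of the fixed real number
lam = [random.randint(0, 1) for _ in range(10_000)]

flips1 = lam        # n-th flip of coin 1 reads off the n-th bit
flips2 = list(lam)  # n-th flip of coin 2 reads the same bit

print(sum(flips1) / len(flips1))                    # ~0.5: coin 1 looks fair
print(sum(flips2) / len(flips2))                    # ~0.5: coin 2 looks fair
print(all(x == y for x, y in zip(flips1, flips2)))  # True: P(H,H) = P(T,T) = 1/2
```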
 
  • Like
Likes martinbn
  • #108
stevendaryl said:
It seems to me that there are two steps involved. One has nothing to do with locality, but is instead Reichenbach's Common Cause Principle (whether or not Bell intended this): the assumption that if two things are correlated, then there exists a "common cause" for both. I gave the example earlier of twins: You randomly select a pair of 15-year-old twins out of the population, and then you separately test them for their ability to play basketball. Doing this for many pairs, you will find (probably--I haven't done it) that their abilities are correlated. The probability that they both are good at basketball is unequal to the square of the probability that one of them is good at basketball. Reichenbach's Common Cause Principle would imply that there is some common causal factor affecting both twins' basketball-playing abilities. Maybe it's genetics, maybe it's parenting style, maybe it's where they live, maybe it's what school they went to, etc. If we let ##\lambda## be the collection of all such causal factors, then it should be the case that, controlling for ##\lambda##, there is no correlation between the twins' basketball-playing abilities.

To me, that's where factorizability comes in. It doesn't have anything to do with locality, yet, because the common factors might conceivably include something happening on a distant star a billion light-years away. Locality is the additional assumption that the common causal factors ##\lambda## must be in the intersection of the backwards light cones of the two tests of the boys' basketball-playing ability.

Factorizability is not particularly about locality, but locality dictates what can go into the common factors.

I understood Bell's (2) - factorizing - as being his attempt to say that the outcome of A does not depend on the nature of a measurement at B. I also read it as saying there are initial conditions common to both, not so different than what you say. And I think in all cases, those initial conditions occur prior to the measurements of A and B (by unstated assumption).
 
  • #109
DrChinese said:
In the referenced paper, the (R3) requirement is:
There is some (“hidden”) variable λ that influences the outcome in a probabilistic way, as represented by the probability P(A, B|a, b, λ).

But it really takes all three of the following for Bell's argument to work - this is usually ignored, but to me it is the crux of the realism assumption:

P(A,B|a,b,λ)
P(A,C|a,c,λ)
P(B,C|b,c,λ)

We are assuming the existence of a counterfactual.

A counterfactual event or a counterfactual probability?

Is assuming the existence of the probability of an event the same concept as assuming the existence of the event itself? - or assuming the "counterfactual" existence of the event?

There are two different interpretations of physical probability. One interpretation is that a unique event that will or will not occur at time t has a probability associated with it that is "real" before time t and becomes either 0 or 1 at time t. The other interpretation is that the probability of such an event is only "real" in the sense that there is "really" a large collection of "identical" (to a certain level of detail in their description) events that will or will not occur at time t, and the probability is (really) a statistical property of the outcomes of that collection of events.

(It seems to me that both interpretations lead to hopeless logical tangles!)
 
  • #110
DrChinese said:
I understood Bell's (2) - factorizing - as being his attempt to say that the outcome of A does not depend on the nature of a measurement at B. I also read it as saying there are initial conditions common to both, not so different than what you say. And I think in all cases, those initial conditions occur prior to the measurements of A and B (by unstated assumption).

Okay, but unless you already have a complete set of causal factors, then locality does not imply factorizability. To go back to my twin basketball players example, let's make up the following binary variables: ##A##: the first twin is good at basketball. ##a##: the first twin makes his first basket attempt in the tryouts. ##B##: the second twin is good at basketball. ##b##: the second twin makes his first basket attempt.

I'm guessing that ##P(A, B | a, b) \neq P(A | a) P(B | b)##. Knowing that the first twin made his first basket attempt might very well tell you something about whether the second twin is good at basketball. But that failure to factor doesn't mean that anything nonlocal is going on. It means that you haven't identified all the causal factors.
 
  • #111
From the article:
(R4) Every experiment has an unambiguous outcome, and records and memories of that outcome agree with what the outcome was at the space-time location of the experiment.
The notion here is that the "unambiguous outcome", as it applies to the experiment, is that the "result" is either +1 or -1, and not something probabilistic. As applied to QM, it means that the result has "collapsed" to a certainty. For QM, I don't think that's a given.

On the other hand, most of us are willing to accept this as a practical reality. If we ignore the "unambiguous" part, what we have is the basis for science, scientific method, and scientific discovery.

On another point, I think that it is important to note that in actual experiments, the result is +1, -1, or not detected. In any situation where the Bell Inequality is being tested, the experimenter needs to verify that the "not detected" case is not so large as to ruin Bell's arithmetic. Otherwise, your hidden variable can be used to select which particles are easiest to detect with a given measurement angle.
 
  • #112
Stephen Tashi said:
A counterfactual event or a counterfactual probability?

There is no event, certainly. It is an expression of realism. The realist claims that there is reality independent of the act of observation, and that the results at A are independent of the nature of a measurement on B. Because every possible measurement result on A can be predicted in advance, together these *imply* that every possible measurement result (on A) pre-exists. Similar logic applies to B. Therefore, every possible combination of A & B - measured at any angles - must be counterfactually real, i.e. simultaneously real. That is the idea of an objective reality.

And yet, clearly Bell shows that is not possible.
 
  • #113
Demystifier said:
A large portion of physicists thinks that Bell's theorem shows that reality does not exist. Another large portion of physicists thinks that reality is not an assumption of Bell's theorem, so that Bell's theorem just proves nonlocality, period. A third large portion of physicists thinks that both reality and locality are assumptions of Bell's inequalities, so that the Bell theorem proves that either reality or locality (or both) are wrong. So who is right?

I don’t see where the problem is if one avoids the term “reality” which is charged with a lot of cherished philosophical beliefs. One should simply use the term "objective local theory" as, for example, done by A. J. Leggett in “Testing the limits of quantum mechanics: motivation, state of play, prospects” (J. Phys.: Condens. Matter 14 (2002) R415–R451):

As is by now very widely known, in an epoch-making 1964 paper the late John Bell demonstrated that under such conditions the two-particle correlations predicted by QM are incompatible with a conjunction of very innocuous and commonsensical-looking postulates which nowadays are usually lumped together under the definition of an ‘objective local’ theory; crudely speaking, this class of theories preserves the fundamental postulates of local causality in the sense of special relativity and a conventional concept of the ‘arrow’ of time, and in addition makes the apparently ‘obvious’ assumption that a spatially isolated system can be given a description in its own right. The intuitive plausibility (to many people) of the class of objective local theories is so high that once Bell had demonstrated that under suitable conditions (including the condition of space-like separation) no theory of this class can give experimental predictions which coincide with those made by QM, a number of people, including some very distinguished thinkers, committed themselves publicly to the opinion that it would be QM rather than the objective local postulates which would fail under these anomalous conditions.
 
  • #114
akvadrako said:
I don't have much to say about the other points, so I'll just comment on this one. How could a mixed state not imply multiple copies of an observer, given unitary evolution? It would seem to require that the observer is both entangled with a qubit representing a future measurement and not entangled with it.
A mixed state can result from simple classical ignorance of a pure state source which may pump out, say, one of four pure states. It would be described by a mixed state due to the classical ignorance, but this has nothing to do with entanglement or multiple copies of the observer, i.e. even in Many Worlds in such a case there wouldn't be multiple copies. It's the difference between a proper and improper mixture.
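
To make the proper/improper distinction concrete (a sketch of my own, assuming NumPy): the two preparations below yield identical density matrices, even though only the first involves entanglement.

```python
import numpy as np

# Improper mixture: reduce an entangled Bell pair to one subsystem.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)                   # (|00> + |11>)/sqrt(2)
rho_pair = np.outer(bell, bell.conj()).reshape(2, 2, 2, 2)
rho_improper = np.trace(rho_pair, axis1=1, axis2=3)  # partial trace over B

# Proper mixture: classical 50/50 ignorance of whether the source emitted |0> or |1>.
rho_proper = 0.5 * np.outer([1, 0], [1, 0]) + 0.5 * np.outer([0, 1], [0, 1])

print(np.allclose(rho_improper, rho_proper))  # True: operationally indistinguishable
```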

akvadrako said:
In more general terms, I would say SLU always applies to all observers, because there is a lot about their environment they are uncertain about.
I don't see how in this case. The Minkowski observer sees the vacuum state ##\rho_{\Omega}## as a pure state, while a Rindler-boosted observer sees it as a mixed state. In this case there is no environment, and the "mixture" is unrelated to post-measurement entanglement.
 
Last edited:
  • Like
Likes bhobba
  • #115
Demystifier said:
My translation of this is the following: Sure, there is objective reality, but it's just not described by (QBist) QM. The things which are described by QM do not involve objective reality. Objective reality, since it exists, is non-local as proved by Bell, but QM as a theory with a limited scope is a local theory.
Bell's theorem doesn't prove reality is non-local; that's only one way out of the theorem. As I mentioned above, retrocausal or acausal theories are another way out.

Also Fuchs explicitly thinks nature is local, as does any of the rest of the QBist authors I've seen talks from. Fuchs even has a cartoon of non-locality he calls the "tickle tickle world" (see @18:30):


Demystifier said:
Regarding this, I think there are two types of QBists. One type says that there is no ##\lambda## in Nature. Those deny the existence of objective reality. Another type says that there is objective reality, so there is ##\lambda## in Nature, but there is no ##\lambda## in a specific theory of Nature that we call QBist QM.
Everyone I've seen is the former. And it's not so much denying objective reality as denying that reality is fully mathematizable. In their view, there is a world out there, it's just not amenable to a complete mathematical specification. Finding out why that is and what that means is sort of what Fuchs intends as the future for QBism.
 
  • Like
Likes eloheim, Demystifier, bhobba and 1 other person
  • #116
  • Like
Likes ShayanJ
  • #117
DarMM said:
Bell's theorem doesn't prove reality is non-local; that's only one way out of the theorem. As I mentioned above, retrocausal or acausal theories are another way out.

In a retrocausal or acausal contextual theory, the context is formed from a quantum system at different points in spacetime. These would not be simultaneous as you would expect either in a conventional local classical theory, or in a non-local theory.

As a result, there is no counterfactual scenario. So the realistic assumption in Bell is explicitly rejected.
 
  • Like
Likes DarMM
  • #118
stevendaryl said:
Well, I don't have Bell's paper in front of me, so that doesn't help. However, Wikipedia has derivations of the Bell inequality and the related CHSH inequality.

I don't know what 14a and 14b refer to. I see this paper of Bell's, posted by Dr. Chinese: http://www.drchinese.com/David/Bell_Compact.pdf
but it doesn't have a 14a and 14b.
Well, I think that Einstein et al were reasoning along the lines of: If it is possible, by measuring a property of one particle to find out the value of a corresponding property of another, far distant particle, then the latter property must have already had a value. Specifically, Alice by measuring her particle's spin along the z-axis immediately tells her what Bob will measure for the spin of his particle along the z-axis. (EPR originally were about momenta, rather than spins, but the principle is the same). So to EPR, this either means that (1) Alice's measurement affects Bob's measurement (somehow, Bob's particle is forced to be spin-down along the z-axis by Alice's measurement of her particle, or (2) Bob's particle already had the property of being spin-down along the z-axis, before Alice even performed her measurement.

So EPR's "elements of reality" when applied to the measurement of anti-correlated spin-1/2 particles would imply (under the assumption that Alice and Bob are going to measure spins along the z-axis) that every particle already has a definite value for "the spin in the z-direction". If you furthermore assume that Alice and Bob are free to choose any axis they like to measure spins relative to (I don't know if the original EPR considered this issue), then it means that for every possible direction, the particle already has a value for the observable "the spin in that direction".

Bell captured this intuition by assuming that every spin-1/2 particle produced in a twin-pair experiment has an associated parameter ##\lambda## which captures the information of the result of a spin measurement in an arbitrary direction. The functions ##A(\overrightarrow{a}, \lambda)## and ##B(\overrightarrow{b}, \lambda)## are assumed to give the values for Alice's measurement along axis ##\overrightarrow{a}## and Bob's measurement along axis ##\overrightarrow{b}##, given ##\lambda##.

So it seems to me that ##\lambda## directly captures EPR's notion of "elements of reality". ##\lambda## is just the pre-existing value of the spin along an arbitrary direction.
(14a) and (14b) were introduced in post #95.
 
  • #119
DrChinese said:
In a retrocausal or acausal contextual theory, the context is formed from a quantum system at different points in spacetime. These would not be simultaneous as you would expect either in a conventional local classical theory, or in a non-local theory.

As a result, there is no counterfactual scenario. So the realistic assumption in Bell is explicitly rejected.
I've been thinking* and there's a possible link between QBism and these views. Bear with me, because I might be talking nonsense here, and there are plenty of scare quotes because I'm not sure of the reality of various objects in these views.

In these views, let's say you have a classical device ##D_1## (the emitter) and another classical device ##D_2## (the detector). Just as spacetime in Relativity is given a specific split into space and time by the given "context" of an inertial observer, in these views we have a spacetimesource element which is split into
  1. Space
  2. Time
  3. A conserved quantity, ##Q##
by the combined spatiotemporal contexts of those two devices.

That conserved quantity might be angular momentum, or it might be something else, depending on what ##D_1## and ##D_2## are. Then some amount of ##Q## is found at earlier times in ##D_1## and in later times in ##D_2##, not because it's transmitted, simply that's the "history" that satisfies the 4D constraints.

Quantum particles and fields only come in as a way of evaluating the constraint via a path integral, they're sort of a dummy variable and don't fundamentally exist as such.

So ultimately we have two classical objects which define not only a reference frame but a contextual quantity they "exchange". This is quite interesting because it means if I have an electron gun and a z-axis angular momentum detector, then it was actually those two devices that define the z-axis angular momentum ##J_z## itself that they exchange, hence there is obviously no counterfactual:
"X-axis angular momentum ##J_x## I would have obtained had I measured it"
since that would have required a different device, thus a different decomposition of the spacetimesource and a completely different 4D scenario to constrain. Same with energy and so on. ##J_z## also wasn't transmitted by an electron; it's simply that integrating over fermionic paths is a nice way to evaluate the constraint on ##J_z## defining the 4D history.

However, and here is the possible link: zooming out, the properties of the devices themselves are no different; they are simply contextually defined by other classical systems around them. "Everything" has the properties it is required to have by the surrounding context of its environment, which in turn is made of objects for which this is also true. In a sense the world is recursively defined. Also, since an object is part of the context for its constituents, the world isn't reductive either: the part requires the whole to define its properties.

It seems to me that in such a world, although you can mathematically describe certain fixed scenarios, it's not possible to obtain a mathematical description of everything in one go, due to the recursive, non-reductive nature of things. So possibly it could be the kind of ontology a QBist would like? Also, 4D exchanges are fundamentally between the objects involved, which is perhaps the sort of non-objective view of measurements QBism wants.

Perhaps @RUTA can correct my butchering of things! :nb)

*Although this might be completely off: I've read the Relational Block World book and other papers on the view, as well as papers on retrocausal views like the Transactional Interpretation, but I feel they haven't clicked yet.
 
  • #120
Who do they think they are trying to convince? We're not real. :bow:
 
