Bell's theorem and local realism

In summary: I think you are right that local realism is an assumption made in the theorem. The theorem says that quantum mechanics predicts correlations between certain pairs of highly entangled particles (say, A and B) that cannot be explained by complete knowledge of everything in the intersection of A's and B's respective past light cones. Bell's theorem refers to correlations between "classical" or "macroscopic" experimental outcomes, so as long as one believes that the experimental outcomes in a Bell test are "classical", the violation of the inequality does rule out local realism.
  • #141
stevendaryl said:
I don't think they are wildly different if you don't have locality. Let's do things classically, rather than quantum-mechanically. For simplicity, let's just consider
...
Now, that pair of equations is exactly equivalent to a problem in 2-D space (3D spacetime) involving just one particle ...

So I think that it's really locality that makes the dimensionality of spacetime meaningful.
With two particles each in N-dimensional space, you can destroy one and still have the other (even if you consider non-locality). With a single particle in 2N-dimensional space, you cannot destroy it and still have it, nor can you destroy half of it and convert it into an N-dimensional particle. You may have the same symbols in your equations, but they mean totally different things even if they look the same.
 
  • #142
billschnieder said:
With two particles each in N-dimensional space, you can destroy one and still have the other (even if you consider non-locality). With a single particle in 2N-dimensional space, you cannot destroy it and still have it, nor can you destroy half of it and convert it into an N-dimensional particle. You may have the same symbols in your equations, but they mean totally different things even if they look the same.

Okay, I guess I would amend what I said to the following: if your laws of physics are such that the number of particles is constant, then there is no difference between N particles in 3-D space and 1 particle in 3N-D space.

With a variable number of particles, the interpretation doesn't work, unless you also allow the dimension of space to vary with time. (Why not?)
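
To spell out the basic equivalence (a schematic illustration with generic symbols, since the specific equations are elided in the quote above): two particles with positions [itex]\vec{x}_1, \vec{x}_2 \in \mathbb{R}^N[/itex] obeying coupled equations of motion

[tex]m_1 \ddot{\vec{x}}_1 = \vec{F}_1(\vec{x}_1, \vec{x}_2), \qquad m_2 \ddot{\vec{x}}_2 = \vec{F}_2(\vec{x}_1, \vec{x}_2)[/tex]

can equally well be read as a single point [itex]X = (\vec{x}_1, \vec{x}_2) \in \mathbb{R}^{2N}[/itex] obeying [itex]M\ddot{X} = F(X)[/itex], with [itex]M[/itex] the block-diagonal mass matrix. Nothing in the equations themselves forces one reading over the other; the distinction comes from extra structure, such as which coordinates belong to which "particle".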
 
  • #143
EEngineer91 said:
Yes, very subtle...but important.
Very important indeed!
 
  • #144
Yet, for some reason, many physicists today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems, such as precisely defining "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".
 
  • #145
EEngineer91 said:
Yet, for some reason, many physicists today lambast action at a distance as some logical impossibility. Bell even brought up good holistic problems, such as precisely defining "measurement" and "observation device", and the paradox of their "separate-ness" and fundamental "together-ness".
Yep. I think his clear thinking and clear writing (and sense of humour) are unsurpassed.
 
  • #146
stevendaryl said:
To follow up a little bit, I feel that there is still a bit of an unsolved mystery about Pitowsky's model. I agree that his model can't be the way things REALLY work, but I would like to understand what goes wrong if we imagined that it was the way things really work. Imagine that in an EPR-type experiment, there was such a spin-1/2 function [itex]F[/itex] associated with the electron (and the positron) such that a subsequent measurement of spin in direction [itex]\vec{x}[/itex] always gave the answer [itex]F(\vec{x})[/itex]. We perform a series of measurements and compile statistics. What breaks down?
Maybe you can provide a citation for Pitowsky's model so others can follow?

On the one hand, we could compute the relative probability that [itex]F(\vec{a}) = F(\vec{b})[/itex] and conclude that it should be given by [itex]\cos^2(\theta/2)[/itex] (because [itex]F[/itex] was constructed to make that true). On the other hand, we can always find other directions [itex]\vec{a'}[/itex] and [itex]\vec{b'}[/itex] such that the statistical correlations don't match the predictions of QM (because your finite version of Bell's inequality shows that it is impossible to match the predictions of QM for every direction at the same time).
According to this paper by Pitowsky, http://arxiv.org/pdf/0802.3632.pdf, it would appear that what breaks down is the tacit assumption that all directions are measurable at the same time.

So what that means is that for any run of experiments, there will be some statistics that don't come close to matching the theoretical probability.
Yes, if they are measured at the same time. But nobody really does that anyway.

I think this is a fundamental problem with relating non-measurable sets to experiment.
See previous point. It is not a problem at all. A non-measurable set would just be a physically impossible/contradictory scenario.

The assumption that relative frequencies are related (in a limiting sense) to theoretical probabilities can't possibly hold when there are non-measurable sets involved.
It depends on what those theoretical probabilities are. Theoretically, nothing prevents one from combining one probability with a mutually incompatible one. But experimentally you won't be measuring them at the same time. My guess is that it's not the probabilities themselves that can't be related to relative frequencies, but the relationships between mutually incompatible probabilities.
 
  • #147
billschnieder said:
According to this paper by Pitowsky, http://arxiv.org/pdf/0802.3632.pdf, it would appear that what breaks down is the tacit assumption that all directions are measurable at the same time.
Different paper, different point. I will look up what the *relevant* Pitowsky reference is later. It is a very difficult and rather technical paper and I personally believe it is conceptually flawed. Sure, some fun generalized abstract nonsense, but no relevance (just my personal opinion...). AFAIK, it has not been followed up by anyone...

If you have an LHV model, then even if you can only measure one direction at a time, the outcome that you would have had, had you actually measured in another direction, is defined ... even if unavailable.
 
  • #148
gill1109 said:
It is easy to create *half* the cosine curve by LHV.
The problem is not the "particle" concept in the hidden layer, in the physics behind the scenes; it is the discreteness of the manifest outcomes. Click or no-click. +1 or -1.

IMO it is possible to generate an LHV covariance which is bigger than 0.5 at 45 degrees, i.e. it can be a cosine curve.
But the sum of the four covariances in CHSH is still at most 2, and this is the point of Bell's theorem: a combination of covariances.
 
  • #149
gill1109 said:
Different paper, different point. I will look up what the *relevant* Pitowsky reference is later. It is a very difficult and rather technical paper and I personally believe it is conceptually flawed. Sure, some fun generalized abstract nonsense, but no relevance (just my personal opinion...). AFAIK, it has not been followed up by anyone...

Pitowsky's model appeared in Stanley Gudder's book "Quantum Probability". That's where I heard of it.
 
  • #150
jk22 said:
IMO it is possible to generate an LHV covariance which is bigger than 0.5 at 45 degrees, i.e. it can be a cosine curve.
But the sum of the four covariances in CHSH is still at most 2, and this is the point of Bell's theorem: a combination of covariances.
It is easy to create half a cosine curve by LHV.

It is easy to create a correlation bigger than 0.5 at 45 degrees.

CHSH says that it is impossible to have three correlations extremely large and one simultaneously extremely small, when the four are the four correlations formed by combining one of two settings on Alice's side with one of two settings on Bob's side.

Yes, it is very difficult to get any feeling for what this really means.

One could try something like this:

If r(a1, b2) is large, and r(a2, b2) is large, and r(a2, b1) is large, then we would expect r(a1, b1) to be large too.

Better still for pedagogical purposes, replace the usual "perfect anti-correlation at equal settings" of the singlet state version of the experiment by "perfect correlation at equal settings" by multiplying Bob's outcome by -1. Or switch from spin of electrons to polarization of photons.

For pedagogical purposes, forget about correlations and talk about the probability of equal outcomes:

If Prob(A1 = B2) is large and Prob(A2 = B2) is large and Prob(A2 = B1) is large, then we would expect Prob(A1 = B1) to be large.

If the first three probabilities are at least 1 - gamma then the fourth can't be smaller than 1 - 3 gamma. Take gamma = 0.25 and the first three would be 0.75 and the fourth 0.25. That's the largest one can get with LHV. This corresponds to CHSH value S = 2 = 4 * 0.5 = (2 * 0.75 - 1) + (2 * 0.75 - 1) + (2 * 0.75 - 1) - (2 * 0.25 -1)

But QM can have the first three probabilities equal to 0.85 and the fourth equal to 0.15. That corresponds to S = 2.8 = 4 * 0.7 = (2 * 0.85 - 1) + (2 * 0.85 - 1) + (2 * 0.85 - 1) - (2 * 0.15 -1) (in fact it can even be equal to 2.828... under QM but let's keep the numbers simple).
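
To make the arithmetic above easy to check, here is a small Python sketch (my own illustration, not part of the original post) that turns the four "probability of equal outcomes" values into a CHSH value via E = 2P - 1:

```python
import numpy as np

def chsh_S(p_equal):
    """CHSH value S from the four probabilities of equal outcomes,
    ordered as P(A1=B2), P(A2=B2), P(A2=B1), P(A1=B1), using E = 2P - 1."""
    E = [2 * p - 1 for p in p_equal]
    return E[0] + E[1] + E[2] - E[3]

# Extreme LHV case from above: three probabilities 0.75, the fourth 0.25.
print(chsh_S([0.75, 0.75, 0.75, 0.25]))   # 2.0, the local bound

# Rounded QM case from above: 0.85 three times, 0.15 once.
print(chsh_S([0.85, 0.85, 0.85, 0.15]))   # 2.8

# Exact QM optimum: P(equal) = cos^2(22.5 deg) three times, cos^2(67.5 deg) once.
p = np.cos(np.radians(22.5)) ** 2
q = np.cos(np.radians(67.5)) ** 2
print(chsh_S([p, p, p, q]))               # 2.828..., i.e. 2*sqrt(2)
```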
 
  • #151
If we consider CHSH, I think what is really important to notice is that there are only 4 time series of measurement data generating the 4 covariances, i.e. once we generate the first covariance AB, we should use the same A data to compute AB', otherwise there can be a violation. Indeed, to generate the 4 covariances AB, AB', A'B, A'B' one could be tempted to use 8 sets of data in {-1, 1}: one pair to generate the covariance AB, another for AB', and so on.
 
  • #152
jk22 said:
If we consider CHSH, I think what is really important to notice is that there are only 4 time series of measurement data generating the 4 covariances, i.e. once we generate the first covariance AB, we should use the same A data to compute AB', otherwise there can be a violation. Indeed, to generate the 4 covariances AB, AB', A'B, A'B' one could be tempted to use 8 sets of data in {-1, 1}: one pair to generate the covariance AB, another for AB', and so on.
Indeed, if we had one time series with the four measurements A, A', B, B' in each run, then it would be an arithmetical impossibility to violate CHSH on the four correlations AB, AB', etc., each computed on all runs.
The crucial steps in deriving CHSH for an experiment where in each run we only measure A or A', and B or B', are:
(1) if LHV are true, then even if we only measured, say, A and B', still A' and B are also at least mathematically defined at the same time;
(2) if which of A and A' to observe, and which of B and B' to observe, is decided by independent fair coin tosses, then the correlation between, say, A and B', based on only a random sample of about a quarter of all the runs, is not much different from what it would have been based on all the runs.
A small simulation below illustrates both points.
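
Here is that simulation (my own toy LHV model, not one proposed in the thread): a shared hidden angle fixes all four potential outcomes in every run, the per-run CHSH combination is then identically ±2 (hence |S| ≤ 2), and each correlation estimated on its randomly selected quarter of the runs tracks the all-runs value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy LHV model: a shared hidden angle lam; each side's outcome is a
# deterministic +/-1 function of its own setting and lam only (local).
lam = rng.uniform(0.0, 2 * np.pi, n)
def outcome(setting, lam):
    return np.where(np.cos(lam - setting) >= 0, 1, -1)

a1, a2 = 0.0, 0.5   # Alice's two settings (radians)
b1, b2 = 0.9, 0.3   # Bob's two settings (radians)

# Point (1): under LHV all four outcomes are defined in every run,
# even though only one setting per side is actually measured.
A1, A2 = outcome(a1, lam), outcome(a2, lam)
B1, B2 = outcome(b1, lam), outcome(b2, lam)

# Per run, A1*B2 + A2*B2 + A2*B1 - A1*B1 = A1*(B2 - B1) + A2*(B2 + B1),
# which is always +2 or -2 when all entries are +/-1, so |S| <= 2.
per_run = A1 * B2 + A2 * B2 + A2 * B1 - A1 * B1
print("per-run values:", np.unique(per_run))   # [-2  2]
print("S on all runs:", per_run.mean())        # within [-2, 2] by construction

# Point (2): each side picks its setting by a fair coin toss; each correlation
# is then estimated on ~1/4 of the runs, yet agrees with the all-runs value.
alice_pick = rng.integers(0, 2, n)   # 0 -> a1, 1 -> a2
bob_pick = rng.integers(0, 2, n)     # 0 -> b1, 1 -> b2
A_obs = np.where(alice_pick == 0, A1, A2)
B_obs = np.where(bob_pick == 0, B1, B2)

for i, j, full in [(0, 1, A1 * B2), (1, 1, A2 * B2), (1, 0, A2 * B1), (0, 0, A1 * B1)]:
    sel = (alice_pick == i) & (bob_pick == j)
    print(f"corr(a{i+1}, b{j+1}): sampled {np.mean(A_obs[sel] * B_obs[sel]):+.3f}, "
          f"all runs {full.mean():+.3f}")
```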
 
  • #153
We can also note that if we add a time dependence A(theta_a, lambda, t_a), and similarly t_b for B, then Bell's theorem is still valid, so the measurements do not have to be simultaneous.
 
  • #154
gill1109 said:
1) Lots of authors have argued that correlations in Bell-type experiments can be explained by local realist models. But so far none of those explanations has stood up for long.
I wasn't referring to realist models. I was talking about whether any local but non-realist model could get the quantum predictions.
 
  • #155
bohm2 said:
I wasn't referring to realist models. I was talking about whether any local but non-realist model could get the quantum predictions.
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions. And Slava Belavkin's "eventum mechanics", which is just QM with the Heisenberg cut treated as "real" and placed in the Hilbert space in a way which ensures causality, is essentially Copenhagen QM but without a Schrödinger cat paradox. Finally, it can be made relativistically invariant using recent work of D. Bedingham (2011), "Relativistic State Reduction Dynamics", Foundations of Physics 41, 686–704, arXiv:1003.2774.
 
  • #156
gill1109 said:
Yep. There's nothing wrong with determinism. But there's a lot wrong with conspiratorial superdeterminism. It explains everything but in a very "cheap" way. It has no predictive power. The smallest description of how the universe works is the history of the whole universe.

What is your opinion about Gerard 't Hooft's Cellular Automaton Interpretation of Quantum Mechanics? It is superdeterministic but not conspiratorial.

It's true that it is not exactly an interpretation of QM; there is more work to do. But what do you think about this line of inquiry?
 
  • #157
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions.


But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling nonlocal any theory able to get quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to keep calling it local, unless we mean the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be nonlocal in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology if the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).
 
  • #158
ueit said:
What is your opinion about Gerard 't Hooft's Cellular Automaton Interpretation of Quantum Mechanics? It is superdeterministic but not conspiratorial.

It's true that it is not exactly an interpretation of QM; there is more work to do. But what do you think about this line of inquiry?
My opinion is that it will get nowhere. Of course it is not conspiratorial "at the Planck scale". But at the scale of a real world Bell-CHSH type experiment it would have to have become conspiratorial.

In fact, I am absolutely certain that this approach will get nowhere. Possibly a tiny part of QM can be described in this way. But it cannot form a basis for all of conventional QM, because of Bell's theorem. Or ... it can, but this will lead to "Bell's fifth position": a loophole-free and successful Bell-type experiment will never take place because the QM uncertainty relations themselves will prevent establishing the right initial conditions: the right quantum state in the right small regions of space-time.

It would also imply that a quantum computer can never be built ... or rather, never scaled up. As one makes computers with more and more qubits in them, quantum decoherence will take over faster, and you'll never be able to factor large integers rapidly or do whatever else you want to do with them.

But I am very doubtful indeed of the viability of Bell's fifth position. I suspect that the good experiment will get done in a few years and then we can forget about that option.
 
  • #159
TrickyDicky said:
But the point here, or at least what I meant in the post about dropping realism, is that either we agree on calling nonlocal any theory able to get quantum predictions, regardless of any other assumption like realism, or, in the non-realistic case (like QM's), it makes no sense to keep calling it local, unless we mean the Einstein sense, i.e. causal, but then it is better not to use the term local.
I think a theory can be nonlocal in the Bell sense and keep causality; it just won't be able to do it with particle-like objects in its ontology if the theory is realist. If it is not, i.e. instrumentalist like QM, it can make up anything without the need to take it seriously as an interpretation (as indeed happens).

But then we are just talking about words, aren't we? Some people identify "non-local" with "being able to violate Bell inequalities". Well if that's what you mean by local and non-local, then we can call QM non-local.

I am sure we could discuss this for days and days. Let me just say that apparently a lot of serious people do find it meaningful to separate local-realism into "local" and "realism" and discuss rejecting one but not the other. People do see two distinct options there. The Bohmians go for non-local + realism. IMHO Copenhagen a la Belavkin deserves to be called local + non-realism. But these are just labels! A rose by any other name would smell as sweet ... Let's try to be aware of what anybody actually means by a particular label in a particular context.

Remember that "realism" aka "counterfactual definiteness" is actually a rather idealistic position: it asserts the physical existence in reality (whatever that means) of things that did never happened, things that are never seen, things which a priori one would say we don't need to "add in" to our model of reality. It's just that in classical physics (characterized as LHV), there is no problem with adding in those things, and no problem with locality after they have been added in.

Remember that EPR actually used quantum predictions (perfect anti-correlation in the singlet state) in order to argue for realism. Einstein was smart. He realized that "realism" is an unnecessary add-on, an idealistic point of view. It needed to be motivated from the physics which we do believe actually does describe the real world, namely QM.

Bell was extraordinarily smart to be able to turn this argument on its head. He noticed that Einstein was actually also using locality + QM to suggest, to motivate (not to prove), realism. And he showed that the three together, locality + realism + QM, lead to a contradiction (if we exclude conspiracy) ...

Bell's fifth position is a kind of weakening of the option "QM is wrong". It says "QM is right but stops us from realizing or seeing certain things in Nature or in the lab, which do appear to be allowed in the formalism".

Classical thermodynamics has things like this. You're never going to see all the molecules in the air in the same half of your lab, and it will be rather hard to engineer that situation, too. You can set up an airtight wall across the lab, but not take it down in a split second.
 
  • #160
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.
 
  • #161
stevendaryl said:
The axiom of choice implies that every set can be put into a one-to-one correspondence with some initial segment of the ordinals. That means that it is possible to index the unit interval by ordinals [itex]\alpha[/itex]: [itex][0,1] = \{ r_\alpha | \alpha < \mathcal{C}\}[/itex] where [itex]\mathcal{C}[/itex] is the cardinality of the continuum. The continuum hypothesis implies that [itex]\mathcal{C} = \omega_1[/itex], the first uncountable ordinal ([itex]\omega_1[/itex] is the same as Aleph_1, if we use the Von Neumann representation for cardinals and ordinals). So we have:

[itex][0,1] = \{ r_\alpha | \alpha < \omega_1\}[/itex]

If [itex]\alpha < \omega_1[/itex], then that means that [itex]\alpha[/itex] is countable, which means that there are only countably many smaller ordinals. That means that if we order the elements of [itex][0,1][/itex] by [itex]r_\alpha \prec r_\beta \leftrightarrow \alpha < \beta[/itex], then for every [itex]x[/itex] in [itex][0,1][/itex] there are only countably many [itex]y[/itex] such that [itex]y \prec x[/itex].

Now I have the solution of your paradox! The ordering of the unit interval by the countable ordinals, in which the initial segment below any number x is countable, is non-measurable, hence cannot be constructed. If we give the unit interval the ordinary uniform measure and pick an x at random, then we can never point out which are the countably many x' that come before x in the particular ordering we have in mind (the one that exists according to the axiom of choice). Alternatively, if we were to put an ordinary discrete probability measure on that countable sequence, we could never arrange to pick a number at random according to that distribution. Even if Nature could, there would be no well-defined probability for ordinary Borel sets.
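
To spell out why that ordering is non-measurable (a standard argument along Sierpiński's lines, added here for completeness rather than taken from the thread): view the order relation as a subset of the unit square,

[tex]S = \{(x,y) \in [0,1]^2 : y \prec x\}.[/tex]

Every vertical section [itex]S_x = \{y : y \prec x\}[/itex] is countable, hence has Lebesgue measure 0, while every horizontal section [itex]S^y = \{x : y \prec x\}[/itex] has countable complement, hence measure 1. If [itex]S[/itex] were measurable, Fubini's theorem would give

[tex]\lambda^2(S) = \int_0^1 \lambda(S_x)\,dx = 0 \qquad \text{and} \qquad \lambda^2(S) = \int_0^1 \lambda(S^y)\,dy = 1,[/tex]

a contradiction. So the ordering cannot even be assigned a probability, let alone be produced by a "reasonable" physical process.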

I think it is pretty ludicrous to think that some such mechanism as this actually underlies real world physics. We are talking about formal games played with formal systems of axioms concerning ideas about infinity. It turns out that our ordinary intuition quickly breaks down when we start playing these games.
 
  • #162
jtbell said:
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.

I agree with the distinction made in the Stack Exchange answer. Apparently gill thinks they're the same thing:
gill1109 said:
Remember that "realism" aka "counterfactual definiteness" is...
 
  • #163
jtbell said:
And then you have the distinction between "realism" and "counterfactual definiteness."

http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Counterfactual_definiteness.html

http://physics.stackexchange.com/qu...lism-in-locality-and-counterfactual-definiten

If you accept the definition of "realism" given in the stackexchange response, then Bell's Theorem really assumes counterfactual definiteness rather than realism.
I don't think the difference is quite so clear-cut, since the word "exists" is not very well-defined! If we forget for a moment about what "exists" means, we could try to apply the phrases "counterfactual-definiteness" and "realism" to *theories about reality* rather than about reality itself. I think that we would then find that whether or not a *theory* should be considered a realist theory, and whether or not a *theory* allows counterfactual definiteness, are very close indeed.

Remember that Einstein himself felt called upon to give an operational criterion which allows one, in some cases, to decide that certain things are "elements of reality".

However, everyone agrees that hidden-variables models are realist. And a hidden-variables model allows one to ask "what if" questions about what would have happened if the experimental conditions had been different from what they actually were. So physical theories which are described by hidden variables deserve to be called realist and to satisfy counterfactual definiteness. On the other hand, we know that if (for instance) the statistics of a two-party, binary-setting, binary-outcome CHSH-type experiment satisfy all CHSH inequalities, then there exists a local hidden-variables model which generates the same statistics.

One often says that the point of Bell's theorem and of a Bell-type experiment is that it allows one to experimentally distinguish between classes of possible physical theories; in fact, this is precisely what is special and unique about it. Perhaps we should concentrate on what Bell's theorem says about physical theories, and not worry so much about what it says about reality.
 
  • #164
gill1109 said:
But then we are just talking about words, aren't we? Some people identify "non-local" with "being able to violate Bell inequalities". Well if that's what you mean by local and non-local, then we can call QM non-local.
But well-defined words are important in science discussions to avoid confusion.

I am sure we could discuss this for days and days. Let me just say that apparently a lot of serious people do find it meaningful to separate local-realism into "local" and "realism" and discuss rejecting one but not the other. People do see two distinct options there. The Bohmians go for non-local + realism.
I mentioned a reference in a previous post that shows that BM, as long as it is an interpretation of QM and not a different theory, is not realist, regardless of whether Bohmians claim to be realists or not ("The Bohmian interpretation of QM: a pitfall for realism", Matzkin and Nurock).
 
  • #165
TrickyDicky said:
But well-defined words are important in science discussions to avoid confusion.


I mentioned a reference in a previous post that shows that BM, as long as it is an interpretation of QM and not a different theory, is not realist, regardless of whether Bohmians claim to be realists or not ("The Bohmian interpretation of QM: a pitfall for realism", Matzkin and Nurock).
The Stack Exchange expert says that CFD is an epistemological property that allows you to ask questions about experiments, while realism is a metaphysical property of a theory. I don't agree with his use of words and characterizations! After all, we have a theory, QM, and it allows lots of different metaphysical interpretations.

You can also say that BM (the mathematical framework thereof) is just a tool for doing QM calculations; therefore it is neither an interpretation nor a different theory!

Boris Tsirelson has thought long and hard about these things and he identifies "realism" with "counterfactual definiteness". I think that for all practical purposes they can be identified. But of course Bell himself warned us against FAPP traps. But I think there is no FAPP trap right here. Maybe "realism" is a metaphysical position which implies the usefulness and/or validity of "counterfactual definiteness". What proofs of Bell's theorem use is counterfactual definiteness. And if the statistics of a Bell-type experiment satisfy all Bell inequalities, then the experiment can be described by an LHV model. If Nature can be described by LHV models, then we might be philosophically inclined to imagine the variables in those models as being elements of reality...
 
  • #166
I guess my point is one of coherence.
If I'm an instrumentalist, I should only be interested in the operational tools to compute predictions for observed outcomes, and I shouldn't really take seriously anything else, like whether electrons really exist, or disquisitions about the building blocks of matter, etc. In reality, even though this position is considered prevalent among physicists, it is hard to maintain, and even people who claim to be of the "shut up and calculate" school take those things seriously.

To be fully coherent, a true instrumentalist shouldn't really care about Bell's theorem either: if he doesn't care about hidden-variables theories, why should he care about the subset that are local?

However, a true realist should care. The theorem is helpful: it immediately makes him reject a whole class of theories (those realistically based on local particle-like objects) just using the logic of the theorem and the experimentally observed correlations that violate Bell's inequalities.
 
  • #167
TrickyDicky said:
I guess my point is one of coherence.
If I'm an instrumentalist, I should only be interested in the operational tools to compute predictions for observed outcomes, and I shouldn't really take seriously anything else, like whether electrons really exist, or disquisitions about the building blocks of matter, etc. In reality, even though this position is considered prevalent among physicists, it is hard to maintain, and even people who claim to be of the "shut up and calculate" school take those things seriously.

To be fully coherent, a true instrumentalist shouldn't really care about Bell's theorem either: if he doesn't care about hidden-variables theories, why should he care about the subset that are local?

However, a true realist should care. The theorem is helpful: it immediately makes him reject a whole class of theories (those realistically based on local particle-like objects) just using the logic of the theorem and the experimentally observed correlations that violate Bell's inequalities.

Yes, I can drink to that! (It's 6 pm here in the Netherlands, time to relax.)
 
  • #168
gill1109 said:
Yes, I can drink to that!

Cheers! :smile:
 
  • #169
gill1109 said:
Now I have the solution of your paradox! The ordering of the unit interval by the countable ordinals, in which the initial segment below any number x is countable, is non-measurable, hence cannot be constructed. If we give the unit interval the ordinary uniform measure and pick an x at random, then we can never point out which are the countably many x' that come before x in the particular ordering we have in mind (the one that exists according to the axiom of choice). Alternatively, if we were to put an ordinary discrete probability measure on that countable sequence, we could never arrange to pick a number at random according to that distribution. Even if Nature could, there would be no well-defined probability for ordinary Borel sets.

I certainly understand that non-measurable sets can't be useful in the real world, because we never know any real number to the precision necessary to know whether it is in a non-measurable set (we can only know that it is in some interval, and intervals are always measurable). I'm asking for a conceptual resolution, not an "in practice, that would never come up" resolution.
 
  • #170
stevendaryl said:
I certainly understand that non-measurable sets can't be useful in the real world, because we never know any real number to the precision necessary to know whether it is in a non-measurable set (we can only know that it is in some interval, and intervals are always measurable). I'm asking for a conceptual resolution, not an "in practice, that would never come up" resolution.

I claim that my resolution is a conceptual resolution. You were talking about the two observers or two experimentalists who would get different ideas about what was going on. I gave you a physical reason why they can't exist.

You said we can never know if a number is in some non-measurable set or not. But it is worse than that: that non-measurable set can't be created by any "reasonable" physical process. Moreover you are talking about things which only exist or don't exist by grace of some *arbitrary* assumptions about what pure mathematics is about. That can have nothing whatever to do with physics.
 
  • #171
gill1109 said:
I claim that my resolution is a conceptual resolution. You were talking about the two observers or two experimentalists who would get different ideas about what was going on. I gave you a physical reason why they can't exist.

I consider that an uninteresting resolution. Is it logically impossible that things could work that way?

It reminds me of when I was in college learning about Special Relativity. I asked the TA (a physics graduate student) about a scenario in which someone gets aboard a rocket and rapidly accelerates to 90% of the speed of light. His response was: "That kind of acceleration would kill you, anyway."
 
  • #172
stevendaryl said:
I consider that an uninteresting resolution. Is it logically impossible that things could work that way?

It reminds me of when I was in college learning about Special Relativity. I asked the TA (a physics graduate student) about a scenario in which someone gets aboard a rocket and rapidly accelerates to 90% of the speed of light. His response was: "That kind of acceleration would kill you, anyway."
Your analogy is false! You can replace "observers" by "measuring devices". And you are the one who said that there was a paradox because two observers would get different ideas. I say that one of those two observers does not exist, hence I resolve *your* paradox.
 
  • #173
gill1109 said:
Straight Copenhagen interpretation QM is (IMHO) non-realist and gets the quantum predictions.
Yes, but is it local? The issue was whether any local and non-realist model could get the QM predictions. Travis Norsen previously brought up an example of one such model but it doesn't get the results:
Here's a model that is non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here), since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local.
So what I'm basically asking is whether there are any well-known, non-realist models that can avoid non-locality to explain the perfect correlations that are observed in the usual EPR-Bell scenario (when a=b). If we cannot produce any such local, non-realist models that give quantum predictions, why drop both realism and locality when dropping locality alone is all that is required?
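
Just to make the failure explicit, here is a quick simulation (my own illustration) of the quoted coin-flip model. At equal settings it gives only 50% agreement and zero correlation, instead of the perfect (anti-)correlation that QM predicts for the singlet state:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Norsen's toy model as quoted above: each particle flips its own coin at
# the detector, independently of everything else. Local, and non-realist
# in the sense that no outcome is pre-scripted before the measurement.
A = rng.choice([-1, 1], size=n)   # particle 1's outcomes
B = rng.choice([-1, 1], size=n)   # particle 2's outcomes

# For the singlet state with equal settings (a = b), QM predicts the
# outcomes always disagree; the coin-flip model gives no correlation at all.
print("P(A = B) at a = b:", np.mean(A == B))   # ~0.5, QM predicts 0
print("E(a, a):", np.mean(A * B))              # ~0,   QM predicts -1
```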
 
  • #174
bohm2 said:
So what I'm basically asking is whether there are any well-known, non-realist models that can avoid non-locality to explain the perfect correlations that are observed in the usual EPR-Bell scenario (when a=b). If we cannot produce any such local, non-realist models that give quantum predictions, why drop both realism and locality when dropping locality alone is all that is required?

There are a number of such models: MWI, all retro-causal types, relational blockworld, etc. all preserve locality. Your question, asked another way:

"Why drop both realism and locality when dropping realism alone is all that is required?"
 
  • #175
bohm2 said:
Yes, but is it local? The issue was whether any local and non-realist model could get the QM predictions. Travis Norsen previously brought up an example of one such model but it doesn't get the results:

[non-realistic model omitted because it fails miserably.]

DrChinese's equivalent model:

Here's a model that is non-local but perfectly Bell realistic: each particle has a definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as married to a random distant particle. Each has the same value for their spin observables, and when one changes, so does the other. Of course, the model doesn't make the QM/empirical predictions. But it's non-local and realistic.

My question is why you would reference a failed model. We can all put forth failed models; notice mine is as miserable as Travis's.
 
