A question regarding the Copenhagen interpretation.

  • #51
Just a comment on "indeterminism". Quantum theory tells you clearly which observables have a definite value (i.e., are determined) and which do not, given the state of the system, because you can calculate the probability of finding any particular value. If there is one value for which you get probability 1, that is the determined value of the observable; otherwise the observable is indeterminate. Whether a given observable is determined or not is thus due to the preparation of the system in the (pure or mixed) state.

Further, I have never claimed anything contradicting Bell's achievements. Of course, the correlations according to entanglement are in perfect agreement with Bell, and these correlations are naturally described by the quantum-theoretical formalism and are well-established empirically (including the violation of Bell's inequality, excluding local deterministic hidden-variable models with very high significance).

With the Aspect-Zeilinger setup of polarization-entangled photons, detected by far-distant observers "Alice and Bob", which I've described previously, you can also perform high-precision Bell experiments. You only have to rotate one of the polarization foils relative to the other. At certain relative angles you get maximal violations of Bell's inequality for the photon polarizations. This is all encoded in the quantum-theoretical state and thus, according to the minimal statistical interpretation, inherent in the preparation procedure leading to the photon pair being prepared in the entangled state, and not due to any kind of collapse of the state caused by one observer's measurement of the polarization of his photon.
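(A minimal numerical sketch, not part of the original post: assuming an ideal |HH⟩ + |VV⟩ pair, the quantum prediction for the polarization correlation is E(a,b) = cos 2(a-b), and the maximal violation mentioned above can be checked directly with the standard CHSH angle choices.)

Code:
import numpy as np

def E(a_deg, b_deg):
    # Quantum prediction for the polarization correlation (+1 = same outcome,
    # -1 = opposite outcome) of an ideal |HH> + |VV> pair
    return np.cos(2 * np.deg2rad(a_deg - b_deg))

# Analyzer angles (degrees) that give the maximal CHSH violation
a1, a2 = 0.0, 45.0       # Alice's two settings
b1, b2 = 22.5, 67.5      # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)  # ~2.828 = 2*sqrt(2), above the local-hidden-variable bound of 2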
 
  • #52
vanhees71 said:
Further, I have never claimed anything contradicting Bell's achievements.

Good, then we are on the same page, since one of Bell's achievements was to rule out the "common source explanation" in EPR, which was the main fuel of the 20 year long Bohr–Einstein debates.
 
  • #53
bhobba said:
The real difficulty is that it is also deterministic, or more precisely, that it combines a probabilistic interpretation with deterministic dynamics.

Where can I learn more about this difficulty in QM?
 
  • #54
EskWIRED said:
Where can I learn more about this difficulty in QM?

The measurement problem - the need for a classical/quantum cut within the Copenhagen interpretation, and the use of two different dynamical postulates, Schroedinger-equation evolution and wave-function collapse - is described in:

Bell, Against 'measurement'
http://www.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf

Laloe, Do we really understand quantum mechanics?
http://arxiv.org/abs/quant-ph/0209123 (see the section "Difficulties, paradoxes", on p13)
 
  • #55
DevilsAvocado said:
Good, then we are on the same page, since one of Bell's achievements was to rule out the "common source explanation" in EPR, which was the main fuel of the 20 year long Bohr–Einstein debates.

What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.
 
  • #56
vanhees71 said:
What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.

The entangled state is not a local common source in the sense of Bell.

http://arxiv.org/abs/0901.4255 (Eq 2)
Gisin, Foundations of Physics, 42, 80-85, 2012

Special relativity does forbid "nonlocality" in the sense of faster-than-light signalling, but it does not forbid quantum nonlocality. Thus quantum mechanics is nonlocal in the sense of Bell. However, it is not nonlocal in the sense of violating special relativity. Rohrlich and Popescu discuss these different definitions of locality/nonlocality in http://arxiv.org/abs/quant-ph/9508009 (Foundations of Physics, 24, 379-385, 1995).
 
Last edited:
  • #57
vanhees71 said:
What do you mean by "common source explanation"? In some sense the entanglement in the two-photon example we discuss here is due to a common source of the two photons by parametric downconversion.

Yes, absolutely, the two photons are entangled due to a "common source" (even if it is nowadays possible to entangle photons that have never met, i.e. entanglement swapping), but this common source can never be used as an explanation of the correlations, because the correlations are governed by the relative angle between the two space-like separated polarizers (outside each other's light cones), in the form of Malus' law: cos²(a-b)

I.e. there is no ("normal") way to know the measurement settings for Alice & Bob in advance; hence the "common source explanation" will get you nowhere.

It "worked" for 1935 EPR, because they only considered perfect correlations, it doesn't work today.
 
  • #58
atyy said:
but it does not forbid quantum nonlocality.

Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error. QM rules out naive reality, i.e. local realism. If you reject realism (i.e. properties do not exist independent of observation) then locality is saved. If you keep it then locality is gone. But SR is still saved since it can't be used to send information which is what's required to sync clocks.

Basically all Bell type 'experiments' are doing is observing systems with spatial extent, and because of how it's arranged, if one thing in the system has a property on observation, so does the other thing - but they are spatially separated.

I have two pieces of paper, one black and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open their envelope. There is nothing Earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

Griffiths book - Consistent Quantum Theory discusses it from this interesting perspective:
https://www.amazon.com/dp/0521539293/?tag=pfamazon01-20

Thanks
Bill
 
Last edited:
  • #59
bhobba said:
Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error. QM rules out naive reality, i.e. local realism. If you reject realism (i.e. properties do not exist independent of observation) then locality is saved. If you keep it then locality is gone. But SR is still saved since it can't be used to send information which is what's required to sync clocks.

Basically all Bell type 'experiments' are doing is observing systems with spatial extent, and because of how it's arranged, if one thing in the system has a property on observation, so does the other thing - but they are spatially separated.

I have two pieces of paper, one black and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open their envelope. There is nothing Earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

Griffiths book - Consistent Quantum Theory discusses it from this interesting perspective:
https://www.amazon.com/dp/0521539293/?tag=pfamazon01-20

Thanks
Bill

Bell nonlocality is simply the violation of the Bell inequalities, given the existence of the measurement results, the measurement choices, the independence of the measurement choices, and the existence of a variable used to predict the probabilities. Certainly the inequality cannot be violated if any of these quantities don't exist, or if a probability distribution cannot be defined over them. However, it does not require that black or white exist before the measurement, only that black or white exist after the measurement. The hidden variable can be the wave function itself. If one by fiat excludes the wave function from entering the inequality (and defines that as the wave function being not real), then perhaps one can escape the conclusion that quantum mechanics violates the inequality. Take a look at Eq 2 in http://arxiv.org/abs/0901.4255 (Foundations of Physics, 42, 80-85, 2012). I agree that one can use nonreality of the measurement results to avoid Bell nonlocality, but I don't think vanhees71 was challenging the violation of the Bell inequalities.
 
  • #60
bhobba said:
Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error.

For one thing, there are thousands of thorough and professional experiments settling the options left for us to consider, i.e. the experimental results have nothing to do with interpretations as such, and the fact is – Bell's theorem is a mathematical proof – not specifically 'tied' to QM or any interpretation. There is one task left – to close loopholes simultaneously – but no one could seriously expect any different outcome (as this would be a bigger surprise than anything else).

(At least) one of these three options has to be abandoned:

  • Locality
  • Realism
  • Free will*
*I.e. give up our freedom to choose (random) settings, which would lead to superdeterminism.

bhobba said:
If you reject realism (i.e. properties do not exist independent of observation) then locality is saved.

True, but there is one "little" problem left – unless one pursues the "Shut up and calculate!" methodology – one ought to explain how this works, and this (for sure) is not an easy task, not even for fairly 'vague' interpretations. Some attempts are Holism & Nonseparability, the Two-state vector formalism, Relational Blockworld, etc. These are all very interesting but quite "nasty creatures" that have very little or nothing to do with standard QM...

bhobba said:
If you keep it then locality is gone. But SR is still saved since it can't be used to send information which is what's required to sync clocks.

True, but if you keep realism in favor of locality, you have "real stuff" out there that must act in accordance with relativity of simultaneity, and then you will surely run into trouble with SR when trying to define a preferred notion of motion and rest. Catch-22.

bhobba said:
I have two pieces of paper, one black and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open their envelope. There is nothing Earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

I'm quite baffled by this... you surely know things that I have absolutely no clue on... but this is wrong, entirely wrong... and this is the same "trap" that vanhees seems to have fallen into... weird...

This picture of envelopes or boxes with only two possible values (i.e. black/white [cards] or left/right [gloves]) is the old 1935 EPR picture. It has been proven wrong beyond any discussion.

The new Bell picture is more like the complete spectrum of the rainbow, and the final definite colors are correlated *only* by the *relative* angle *between* the settings of the two space-like separated polarizers of Alice & Bob.

... :bugeye: ...
 
  • #61
DevilsAvocado said:
For one thing, there are thousands of thorough and professional experiments settling the options left for us to consider, i.e. the experimental results have nothing to do with interpretations as such, and the fact is – Bell's theorem is a mathematical proof – not specifically 'tied' to QM or any interpretation. There is one task left – to close loopholes simultaneously – but no one could seriously expect any different outcome (as this would be a bigger surprise than anything else).
I haven't read it yet but a paper published today suggests that there may be a reason for this confusion. Then again, it might be just another paper that adds more confusion:
Many of the heated arguments about the meaning of “Bell’s theorem” arise because this phrase can refer to two different theorems that John Bell proved, the first in 1964 and the second in 1976...Although the two Bell’s theorems are logically equivalent, their assumptions are not, and the different versions of the theorem suggest quite different conclusions, which are embraced by different communities...I discuss why the two ‘camps’ are drawn to these different conclusions, and what can be done to increase mutual understanding.
The Two Bell’s Theorems of John Bell
http://arxiv.org/pdf/1402.0351.pdf
 
  • #62
DevilsAvocado said:
The new Bell picture is more like the complete spectrum of the rainbow, and the final definite colors are correlated *only* by the *relative* angle *between* the settings of the two space-like separated polarizers of Alice & Bob.
I actually don't think you are saying something different here than what bhobba said. I believe his analogy means that the envelope will be found to contain either black or white once you have chosen what "colors of the rainbow" you are measuring, but you cannot say it was black or white before you measured it, because that won't get the right correlations with the distant envelope that is being measured to be either red or blue. That's your "colors of the rainbow," as well as his "twist."

To me, the key point here is that the color of the paper in the envelope is not a property that the envelope carries inside it all the time, if you allow that you could have chosen to measure any color axis (which is what you mean by "free will.") If you give up localism, you say either that some kind of magical signal connects the envelopes and fixes their correlations consistently with all the measurement choices, or you say that the envelope is part of a larger system and the color of the paper is a joint property of that entire system, not a local property of that one envelope (that's the alternative that makes the most sense to me).

If you give up realism, you say that the colors are just concepts in the minds of the observers, but frankly I don't really see any difference in that alternative-- you still have to maintain either that a signal connects them, or that they are part of a joint system that maintains correlations because it is irreducible, whether you are talking about two minds or two envelopes. So for me, that's the same thing anyway, so I just maintain that what we mean by reality manifests a property of holding irreducible entanglements even when its parts are causally separated, and I have no real problem with this, because frankly any way that reality manifests itself is just as surprising to me as any other. The key is to not forget that argument from incredulity is really just argument from the ignorance of limited experience, in the disguise of familiarity.
 
  • #63
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions. The entanglement between the photon polarizations in our Aspect-Zeilinger-experiment example can persist over very long times (as long as you can prevent one or both photons from being disturbed by perturbations, leading to decoherence), and thus the two photons may be detected as far away from each other as you like and still show the entanglement, i.e., a correlation! This is the very reason why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Also note that I said the photons can be detected by far distant observers, not that the photons are far apart. This is because for photons there is not even a position observable in the strict sense and thus it doesn't make sense to talk about a photon's position, but that's another point.
 
  • #64
Ken G said:
I actually don't think you are saying something different here than what bhobba said. I believe his analogy means that the envelope will be found to contain either black or white once you have chosen what "colors of the rainbow" you are measuring,

Maybe this is correct, but perhaps safest is to let Bill explain himself. Anyhow, "colors of the rainbow" was maybe not the most fortunate parable. Of course, when we do measure, we will always get black/white, 1/0, [spin] up/down, at one single polarizer/measurement apparatus. My "rainbow" was referring to the "wide spectrum" of all the orientations/correlations around the complete circle 0° to 360°, resulting in the famous sinusoidal EPR-Bell test experiment results.

[Figure: the sinusoidal polarization-correlation curve from an EPR-Bell test experiment. Credit: Alain Aspect]
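(For reference, a minimal sketch of my own, assuming an ideal |HH⟩ + |VV⟩ pair with no experimental noise, that tabulates the sinusoidal correlation curve such plots show:)

Code:
import numpy as np

# Correlation E = cos(2*theta) as a function of the relative analyzer angle theta:
# the sinusoidal curve seen in EPR-Bell test results (ideal, noise-free case)
for theta_deg in range(0, 361, 30):
    E = np.cos(2 * np.deg2rad(theta_deg))
    print(f"{theta_deg:3d} deg  E = {E:+.3f}")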

My impression was that Bill was talking about [deterministic] fixed settings, always resulting in perfect correlations, and of course, in this case envelopes with black/white cards work like a dream, and hence could fool you into believing in a "common source explanation", as it did Einstein.

But this is clearly wrong.

Ken G said:
but you cannot say it was black or white before you measured it, because that won't get the right correlations with the distant envelope that is being measured to be either red or blue. That's your "colors of the rainbow," as well as his "twist."

Let Bill explain it himself.

Ken G said:
To me, the key point here is that the color of the paper in the envelope is not a property that the envelope carries inside it all the time, if you allow that you could have chosen to measure any color axis (which is what you mean by "free will.") If you give up localism, you say either that some kind of magical signal connects the envelopes and fixes their correlations consistently with all the measurement choices, or you say that the envelope is part of a larger system and the color of the paper is a joint property of that entire system, not a local property of that one envelope (that's the alternative that makes the most sense to me).

I'm just guessing here, but I take it that "envelope is part of a larger system" means some sort of "ensemble interpretation", right? It's of course correct that we need several (thousands of) measurements to get correlations like cos²(22.5°) ≈ 85%, which is naturally impossible for a single spin up/down measurement. However, now I would like to return to Bill's [deterministic] perfect correlations and parallel settings – in this case we only need one measurement to establish the correlation "link" – hence no ensemble.

Ken G said:
If you give up realism, you say that the colors are just concepts in the minds of the observers, but frankly I don't really see any difference in that alternative-- you still have to maintain either that a signal connects them, or that they are part of a joint system that maintains correlations because it is irreducible,

Possible solutions for "sur"realism + locality are quite strange, and I'm afraid I can't talk about them in this forum. All I can say is that PF user RUTA (PhD in physics and involved in the foundations of QM) is working on a model called Relational Blockworld (RBW), where reality is fundamentally relational and non-dynamical.
 
  • #65
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

Yes, in his 1990 paper, Bell gave a formulation of what he called the "Principle of Local Causality":

"The direct causes (and effects) of events are near by, and even the indirect causes (and effects) are no further away than permitted by the velocity of light. Thus for events in a space-time region 1 [...] we would look for causes in the backward light cone, and for effects in the future light cone. In a region like 2, space-like separated from 1, we would seek neither causes nor effects of events in 1."

vanhees71 said:
The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions.

Yes, of course, any "classical interactions" are out of the picture; however it's quite hard to ignore that there must be an influence, i.e. a change in the states, and this gets even more troublesome if we adhere to realism in favor of locality.

vanhees71 said:
This is the very reason why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Maybe an extensive discussion regarding collapse, decoherence, etc., would not take us any further. We have to remember that the correlations exist as classical information in the measuring setup; afaik, it would make no difference whatsoever whether the collapsed/uncollapsed wavefunction continues to the other end of the universe – the classical measurement data is still there for us to ponder... this stuff happens right in front of our noses.

vanhees71 said:
Also note that I said the photons can be detected by far distant observers, not that the photons are far apart. This is because for photons there is not even a position observable in the strict sense and thus it doesn't make sense to talk about a photon's position, but that's another point.

Also note that EPR-Bell experiments with two entangled trapped ions have been performed.
 
  • #66
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.
Sure, interactions are still local, but that is just not what is meant by local in "local realism", and it was not the motivation for EPR. Einstein wanted all the information, including correlations between widely separated yet local interactions, to be carried with each particle, that was the kind of local realism that EPR aspired to. It was naive, we can agree, but that was his goal, and that is what Bell showed cannot be the case. So we agree-- interactions are local, but information is more global, so even though it cannot be propagated between observers faster than c, it also cannot be regarded as being "contained locally in the particles."
 
  • #67
DevilsAvocado said:
I'm just guessing here, but I take it that "envelope is part of a larger system" means some sort of "ensemble interpretation", right?
No, I just mean the "larger system" includes the other envelope, all the entanglements of the one envelope being measured. That's the crux of entanglement, as you know-- it imposes a holistic quality on its subsystems, which can persist even when the subsystems are interacted with outside of each other's light cones. But bhobba is completely aware of that too, so I don't think I'm stretching his words too far by attaching the interpretation I gave.

The main point I was making is that I don't think the key issue is whether we choose to drop realism (drop that the system in some sense "contains" its own attributes and information) or drop localism (drop that the information and attributes of an object move strictly with that object), because it doesn't really matter where we "put" the attributes, what matters is that we have two observers who are going to notice Bell-inequality-breaking correlations when they compare notes. Thus what we really have to decide between is whether we regard the source of those correlations to be some special kind of signal that can propagate superluminally without propagating any information between the points, or whether we regard the system as having a holistic correlation that does not require any kind of propagation whatsoever to maintain. Once you choose between those two issues, then whether you say you are "dropping locality" or "dropping realism" becomes a rather moot issue, because depending on other assumptions I make, I could characterize your choice in either of those terms, but not in any way that matters much-- you've already chosen the key language for speaking about the source of the correlations.
 
  • #68
@Ken G: Another good idea is never to talk about "realism" or even "local realism" in discussions about the interpretation of quantum theory without defining very clearly what you mean by that. I've never heard a clear definition, in mathematical terms, of what's meant by the words "realism" or "local realism" yet. Usually they are used by philosophers in a kind of muttering rather than as scientifically clearly defined terms. So I'm not able to discuss those notions properly.

@DevilsAvocado: Yes, that's another very clear feature of local relativistic QFTs! They all automatically fulfill the linked-cluster principle: i.e., there is no influence of space-like separated events on measurements by means of local interactions.

In our case: No matter what Bob does with his photon, i.e., whether he determines its polarization state before or after Alice does her experiment (or, if Bob and Alice are in relative motion to each other, no matter whether Bob's measurement act is in the future or past light cone of, or space-like separated from, Alice's measurement act), Alice will always simply measure a stream of unpolarized photons (supposing Alice and Bob are sent a sequence of independently prepared entangled photon pairs from the parametric-down-conversion source). That's so because the experiment is well described by standard QED, which is a local relativistic QFT.
 
  • #69
vanhees71 said:
@Ken G: Another good idea is never to talk about "realism" or even "local realism" in discussions about the interpretation of quantum theory without defining very clearly what you mean by that. I've never heard a clear definition, in mathematical terms, of what's meant by the words "realism" or "local realism" yet. Usually they are used by philosophers in a kind of muttering rather than as scientifically clearly defined terms. So I'm not able to discuss those notions properly.
Yes, clarity is always essential. For me, I understand what "local realism" is intended to mean well enough by defining it to mean essentially "that which, the absence of which was Einstein's main objection to quantum mechanics, as evidenced by the EPR paper." Or equivalently "that property which neither quantum mechanics has, nor real experiments have, that Einstein felt both should have, as evidenced by his arguments in the EPR paper." You are welcome to translate that into more precise mathematical terms if it helps, that might be a service to many! But in simple terms, it means "that the information required to statistically predict the outcome of any set of experiments on different subsystems must be able to be regarded as contained within and carried along with those individual subsystems, entirely by themselves." This also means the information must be completely collapsed by measurements on the subsystems, because a local collapse must be able to access or define the full array of information carried by that subsystem. Or even more succinctly: "no spooky actions or correlation mediation at a distance." I think it may be said that Einstein's main objection to quantum mechanics was its "top-down", or holistic, approach to the wave function, whereas Einstein believed reality needed to be "bottom-up", i.e., reducible to its local elements. Bell showed that reality must have a top-down character, or else we have to imagine very strange things like we are not allowed to pick whatever observation we want to do.
 
Last edited:
  • #70
vanhees71 said:
It's the whole point of the discussion to clearly define what's meant by "local". The most successful theory of matter and all interactions between particles (except gravity) is the Standard Model of Elementary Particles, and that's a "local relativistic QFT". What's meant by "local" here is that the action is composed as a Poincare-invariant functional of a Lagrange density that is a polynomial of fields and their first derivatives wrt. space-time coordinates at one space-time point. This particularly means that interactions are local.

The nonlocality in the violation of Bell's inequality refers to correlations, which have to be clearly distinguished from interactions. The entanglement between the photon polarizations in our Aspect-Zeilinger-experiment example can persist over very long times (as long as you can prevent one or both photons from being disturbed by perturbations, leading to decoherence), and thus the two photons may be detected as far away from each other as you like and still show the entanglement, i.e., a correlation! This is the very reason why I think one has to abandon the naive collapse of Copenhagen: the correlations are not caused by the (local!) measurement of, say, Alice's photon but are present all the time due to the preparation of the photon pair at the very beginning.

Regarding the collapse - if we take a frame in which Alice and Bob measure simultaneously, and consider that to be the end of the experiment, then yes, there is no need for non-unitary time evolution, since one just uses the Born rule without collapse to get the joint probability. But what happens in a frame in which their measurements are not simultaneous, and Alice does a non-demolition measurement first and gets a particular result? The two particles must still propagate after the non-demolition measurement, and in Copenhagen there is non-unitary time evolution. In interpretations with a classical/quantum cut (as I understand, both Copenhagen and the Minimal Statistical Interpretation have a classical/quantum cut), is there any description in which the time evolution is unitary in all frames? I thought you had earlier agreed that there was non-unitary time evolution (your post #37)?
 
  • #71
Ken G said:
The main point I was making is that I don't think the key issue is whether we choose to drop realism (drop that the system in some sense "contains" its own attributes and information) or drop localism (drop that the information and attributes of an object move strictly with that object), because it doesn't really matter where we "put" the attributes, what matters is that we have two observers who are going to notice Bell-inequality-breaking correlations when they compare notes. Thus what we really have to decide between is whether we regard the source of those correlations to be some special kind of signal that can propagate superluminally without propagating any information between the points, or whether we regard the system as having a holistic correlation that does not require any kind of propagation whatsoever to maintain. Once you choose between those two issues, then whether you say you are "dropping locality" or "dropping realism" becomes a rather moot issue, because depending on other assumptions I make, I could characterize your choice in either of those terms, but not in any way that matters much-- you've already chosen the key language for speaking about the source of the correlations.

I'm not sure I understand... "I could characterize your choice in either of those terms, but not in any way that matters much" sounds to me as if the question of non-locality vs. non-realism is not a "big deal"...

Huum, if we take the non-local hidden variables in Bohmian mechanics, they give us quite a "cozy world" (on the couch watching football). Yes, non-locality is a little bit strange, but it will never cause any turmoil on the NYSE or TSE, with stockbrokers cheating with "EPRB-FTL-orders", creating crashes on the binary options market. This will never happen (of course Bohmian mechanics has "some" work left to do on the SR side).

On the other hand... if we take the Two-state vector formalism, in which the present is caused by quantum states of the past and future, taken in combination... god knows what will happen if any stockbroker finds out how to 'utilize' these quantum states... :)

Just as an example of quite drastic differences in possible solutions.
 
  • #72
vanhees71 said:
@Ken G: Another good idea is never to talk about "realism" or even "local realism" in discussions about the interpretation of quantum theory without defining very clearly what you mean by that. I've never heard a clear definition, in mathematical terms, of what's meant by the words "realism" or "local realism" yet. Usually they are used by philosophers in a kind of muttering rather than as scientifically clearly defined terms. So I'm not able to discuss those notions properly.

I can't say it's a "global agreement", but many would agree that definite values are a 'prerequisite' for realism, i.e. the moon is there even when nobody is watching, and quantum particles do have definite states (before leaving a common source).

If you like this definition, then DrC has a beautiful page that will take you through the process of mathematically proving, in a quite simple and straightforward form, that these definite states, if local, are not compatible with the predictions of QM.

vanhees71 said:
@DevilsAvocado: Yes, that's another very clear feature of local relativistic QFTs! They all automatically fulfill the linked-cluster principle: i.e., there is no influence of space-like separated events on measurements by means of local interactions.

In our case: No matter what Bob does with his photon, i.e., whether he determines its polarization state before or after Alice does her experiment (or, if Bob and Alice are in relative motion to each other, no matter whether Bob's measurement act is in the future or past light cone of, or space-like separated from, Alice's measurement act), Alice will always simply measure a stream of unpolarized photons (supposing Alice and Bob are sent a sequence of independently prepared entangled photon pairs from the parametric-down-conversion source). That's so because the experiment is well described by standard QED, which is a local relativistic QFT.

Yes of course, Alice & Bob will always measure 100% random outcomes in their local apparatus, no doubt about that, i.e. there will be absolutely no difference whatsoever (at this stage) from measuring single "normal" photons. However, the 'weirdness' comes when Alice & Bob compare their data (through classical channels) and discover that (contrary to "normal" photons) their entangled photons show correlations that can be coupled to the relative settings between their polarizers, i.e. if Alice has put her polarizer at 32.5° and Bob his polarizer at 10°, the relative angle between Alice & Bob will be 32.5° – 10° = 22.5°, which gives cos²(22.5°) ≈ 85%, i.e. if sending 1,000 entangled photon pairs, roughly 850 will be correlated (or anti-correlated depending on the type of Bell state).

This is just the way it is, and it's obviously not compatible with local realism.
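A quick numerical check of those numbers (a sketch under the assumption of an ideal Type I pair and perfect detectors, sampling each pair's joint outcome directly from the quantum probability):

Code:
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 1000
a, b = 32.5, 10.0   # Alice's and Bob's polarizer angles in degrees

# For an ideal Type I (|HH> + |VV>) pair, the probability that Alice and Bob
# get the same outcome is cos^2(a - b)
p_same = np.cos(np.deg2rad(a - b)) ** 2
same = rng.random(n_pairs) < p_same

print(round(p_same, 3))   # 0.854
print(same.sum())         # roughly 850 of the 1,000 pairs match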
 
  • #73
DevilsAvocado said:
On the other hand... if we take the Two-state vector formalism, in which the present is caused by quantum states of the past and future, taken in combination... god knows what will happen if any stockbroker finds out how to 'utilize' these quantum states... :)

Just as an example of quite drastic differences in possible solutions.

I am not convinced there would be such drastic consequences. Time symmetric QM has a reverse causality, but appears to be limited in its extent to the area of uncertainty so it would not improve the stockbroker's lot! Bear in mind that there are still many possible final states so intermediate states would not be uniquely determined, just narrowed down somewhat.

http://jamesowenweatherall.com/SCPPRG/AharonovPopescuTollaksen2010PhysToday_TimeSymQM.pdf
 
  • #74
Jilang said:
I am not convinced there would be such drastic consequences.

Of course you are right. As interpretations, there can't be any 'practical' difference between Bohmian mechanics and the Two-state vector formalism, since they are both forced to yield the same predictions as standard quantum mechanics (otherwise they would not be interpretations).

However, in some distant future, someone could make a clever experiment that tells us the correct interpretation, which ought to make some difference... (I hope)

However, there are other "beasts" out there (I can't talk about it here) which make, for example, Bohmian mechanics look like your "familiar grandma", or a walk in the park, in comparison (trust me), and this is something completely different that is already on the drawing board, which then makes locality vs. realism a quite significant question (i.e. if this turns out to be true in the end).
 
  • #75
DevilsAvocado said:
Huum, if we take the non-local hidden variables in Bohmian mechanics, they give us quite a "cozy world" (on the couch watching football). Yes, non-locality is a little bit strange, but it will never cause any turmoil on the NYSE or TSE, with stockbrokers cheating with "EPRB-FTL-orders", creating crashes on the binary options market. This will never happen (of course Bohmian mechanics has "some" work left to do on the SR side).

On the other hand... if we take the Two-state vector formalism, in which the present is caused by quantum states of the past and future, taken in combination... god knows what will happen if any stockbroker finds out how to 'utilize' these quantum states...
But that point isn't disagreeing with what I said. I'm claiming that the important choice you make there is between some kind of instantaneous signal which you regard as passing between the subsystems as you sit on the couch, or some kind of holistic supersystem that encodes the correlation into something larger than the subsystems, as in the case of your two-state vectors. I'm saying that it doesn't matter which of those you regard as the one that breaks locality, and which breaks realism, because I can claim that either can be regarded as doing either, depending on supplemental assumptions that don't matter much. If you say that Bohmian pilot waves are an example of nonlocalism, and the two-state vectors are nonrealism, I'll just say, why can't I regard the pilot wave as something unreal yet local (there is a location to the pilot wave signal at any time, and it communicates no information to any observers so it is not nonlocal, but it cannot itself be observed, so it is unreal), and the two-state vector as something real but nonlocal (it is real because I can choose to regard those two vectors as perfectly real, but I can't localize them, because they counterpropagate in time). In either case, what matters is the mechanism you are invoking to sustain the correlations, and its possible ramifications as you say, but not whether you regard that mechanism as something nonlocal, or nonreal.
 
Last edited:
  • #76
DevilsAvocado said:
Maybe this is correct, but perhaps safest is to let Bill explain himself.

It's easy.

In Bell type setups things are so arranged that one person will get a 'black' paper and the other a 'white'. We just don't know which got which, and will not know until it's observed, or even if it has the property at all prior to observation, which is the peculiar twist QM has.

What Bell's theorem shows is that if it's analogous to the paper situation, where it has properties independent of observation, then locality is violated. But otherwise it's exactly the same. The purpose of the envelopes is to simulate as much as possible not knowing until observed. Of course in this analogy it does have the property; there is no classical situation that will mimic it exactly, but it's just to get a bit of a handle on it.

BTW it's not something I came up with - it's in the standard textbook I quoted.

Thanks
Bill
 
Last edited:
  • #77
Ken G said:
In either case, what matters is the mechanism you are invoking to sustain the correlations, and its possible ramifications as you say, but not whether you regard that mechanism as something nonlocal, or nonreal.

Okay, I understand your point now, thanks.
 
  • #78
bhobba said:
In Bell type setups things are so arranged that one person will get a 'black' paper and the other a 'white'. We just don't know which got which, and will not know until it's observed,

I'm afraid I must object again. This is not the proper description of a Bell type setup, and the crucial point is that to get correlations like 85%, there must be cases where Alice & Bob get 'black' + 'black', as well as 'white' + 'white'. This is actually the one thing that finally settled the Bohr–Einstein debates.

It looks like you and vanhees are reading the same standard textbook:

vanhees71 said:
Now you can ask, what's the probability that Alice and Bob find their photons in any possible combination. According to the rules of quantum theory you get

Code:
Alice's photon      Bob's photon     Probability
---------------------------------------------------
H                       H                    0
H                       V                    1/2
V                       H                    1/2
V                       V                    0

If this is what your textbook is stating, it really needs an update ...
 
  • #79
DevilsAvocado said:
I'm afraid I must object again. This is not the proper description of a Bell type setup, and the crucial point is that to get correlations like 85%, there must be cases where Alice & Bob get 'black' + 'black', as well as 'white' + 'white'.

Are you sure you got this the right way round? I understood that the (opposite) correlations are greater than you would expect from non-correlated particles.

You never get a black + black, but you get a black + white more times than you would expect from just computing the cosines between an arbitrary angle of spin relative to an arbitrary angle of measurement. (For black read spin up and for white spin down.) If you want to try it the maths is pretty easy. Grab a pencil and start drawing, evaluate the cosines and compare to what is observed - pretty amazing!
 
  • #80
DevilsAvocado said:
If this is what your textbook is stating, it really needs an update ...

Run that by me again.

In Bell type experiments the spins are anti-correlated. This means it's IMPOSSIBLE to not get opposite spins - it's inherent in the setup.

This is the analogue of the paper situation I described - as much as a classical situation can describe Quantum weirdness anyway.

Thanks
Bill
 
  • #81
To get the weirdness, you must have Alice doing a black/white observation, and Bob doing a blue/red observation, or some such thing. If they both do black/white, you get the simple table in post 78, and there's nothing weird about that. But bhobba's point still holds, if I understand his point correctly, that if Alice does black/white, and Bob does blue/red, then the correlations between those outcomes simply cannot be explained by saying that the papers already know what outcome each observation will give before the measurement is done.

If Alice could choose to do white/black, or blue/red, then half the papers cannot be thinking "if she does white/black, I'll be black" while the other half think "if she does white/black, I'll be white," independently of Bob's choices. If Alice measures white/black on an ensemble, half will be white and half black, but the way those populations will correlate with Bob's blue/red measurement will require that the envelopes themselves could not have "known" which outcome they'd get, independently of Bob's choice to measure blue/red.

So it's about the envelope "not knowing" what color is inside it, that's the strangeness. It requires either a signal to tell it what Bob chose to measure (without communicating any information to Alice about that choice), or it requires a holistic character of the system that it is not just an envelope with a paper in it, but two envelopes with two papers, until all the data is collected. (Of course, there's no real advantage in replacing the particles with up/down spins with envelopes with paper in them, because real envelopes with paper in them could not retain the entanglements, but I believe the purpose of the device is to say "we don't really know how spins ought to behave because we have so little experience with them, but we do know how envelopes should behave, so the contrast is why the spins are behaving weirdly.")
 
Last edited:
  • #82
Jilang said:
Are you sure you got this the right way round? I understood that the (opposite) correlations are greater than you would expect from non-correlated particles.

Not sure I understand the question, but generally it depends on the cut angle of the BBO crystal in the SPDC process, which generates two types of Bell states, Type I and Type II. For Type I the photons share the same polarization, |HH⟩ + |VV⟩, and for Type II they are orthogonally polarized, |HV⟩ + |VH⟩. This means that if Alice & Bob use a BBO Type I crystal and set their polarizers at the same angle they will always get the same results, i.e. [1, 1] or [0, 0], whereas if they use a BBO Type II they will always get opposite results, i.e. [1, 0] or [0, 1]*.

*[1] means the photon went through the polarizer, [0] means stopped.

Jilang said:
You never get a black + black, but you get a black + white more times

There seems to be some fundamental confusion around this issue. To keep things as simple as possible, I'll give you this example:

  • Alice & Bob are using a BBO Type II crystal.
  • Alice & Bob set their polarizers at angle 0°.
  • Alice & Bob run a set of 1,000 entangled photons.
  • When Alice & Bob compare their results, they are always opposite i.e. [1, 0] or [0, 1].
  • Now Alice keeps her setting at 0° and Bob change his to 90°.
  • Alice & Bob run a new set of 1,000 entangled photons.
Question: What will the results show now?
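(For anyone who wants to work out the answer, here is a minimal sketch of my own, assuming the rotationally invariant antisymmetric Type II state (|HV⟩ - |VH⟩)/√2 and ideal polarizers; the actual phase of the SPDC state depends on the setup. It computes the joint outcome probabilities for arbitrary analyzer angles.)

Code:
import numpy as np

# Polarization basis vectors |H>, |V>
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# Antisymmetric Type II Bell state, psi- = (|HV> - |VH>)/sqrt(2)  (assumed)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def joint_probs(a_deg, b_deg):
    """Probabilities of (pass, pass), (pass, stop), (stop, pass), (stop, stop)
    for linear polarizers at angles a_deg (Alice) and b_deg (Bob)."""
    a, b = np.deg2rad(a_deg), np.deg2rad(b_deg)
    pa = np.array([np.cos(a), np.sin(a)])   # Alice's pass direction
    pb = np.array([np.cos(b), np.sin(b)])   # Bob's pass direction
    Pa = [np.outer(pa, pa), np.eye(2) - np.outer(pa, pa)]   # pass / stop projectors
    Pb = [np.outer(pb, pb), np.eye(2) - np.outer(pb, pb)]
    return [[psi @ np.kron(A, B) @ psi for B in Pb] for A in Pa]

print(np.round(joint_probs(0, 0), 3))      # only (pass, stop) and (stop, pass): anti-correlated
print(np.round(joint_probs(0, 90), 3))     # only (pass, pass) and (stop, stop): same outcomes
print(np.round(joint_probs(32.5, 10), 3))  # opposite outcomes in ~85% of pairs at 22.5 deg relative angle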
 
Last edited:
  • #83
bhobba said:
In Bell type experiments the spins are anti-correlated. This means it's IMPOSSIBLE to not get opposite spins - it's inherent in the setup.

Are you sure? Check out post #82. :biggrin:

Seriously, what are we talking about – spin or correlations in measurement outcome?

bhobba said:
This is the analogue of the paper situation I described

You wrote "one person will get a 'black' paper and the other a 'white'. We just don't know which got which" and this now means the anti-correlated spin coming out from the BBO crystal...

Is this really the main question in EPR-Bell??
 
  • #84
DevilsAvocado said:
Seriously, what are we talking about

What we are talking about is EPR-Bell type experiments:
http://en.wikipedia.org/wiki/EPR_paradox
'Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. According to the Copenhagen interpretation of quantum mechanics, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z.'

The outcomes are anti-correlated with 100% certainty - nothing else is possible. Anti-correlated means only two outcomes are possible called + and -. If one measures + the other is automatically - and conversely.

You can never get ++ or --.

This is from the definition of the experiment. There are no ifs or buts about it - it's its very definition.

Thanks
Bill
 
Last edited:
  • #85
DevilsAvocado said:
Is this really the main question in EPR-Bell??

The main question in EPR-Bell concerns the consequences of Bell's Theorem.

I have zero idea what you are even arguing about.

Thanks
Bill
 
  • #86
bhobba said:
You can never get ++ or --.

This is from the definition of the experiment. There are no ifs or buts about it - it's its very definition.

But Bill... your link is to the 1935 EPR paradox...

Check out this instead:
http://en.wikipedia.org/wiki/Bell's_theorem
 
  • #87
Ken G said:
To get the weirdness, you must have Alice doing a black/white observation, and Bob doing a blue/red observation, or some such thing. If they both do black/white, you get the simple table in post 78, and there's nothing weird about that. But bhobba's point still holds, if I understand his point correctly, that if Alice does black/white, and Bob does blue/red, then the correlations between those outcomes simply cannot be explained by saying that the papers already know what outcome each observation will give before the measurement is done.

I think there is no question (in standard QM) that the entangled photons are in a (shared) superposition (of spin); also the particles cannot be described as two "individuals", but as a "composite system" (or something like that).

Maybe Bill's "black and white paper" isn't a huge problem as such, as there is no escape from the simple formula cos²(a-b), where a and b are the space-like separated polarizer settings of Alice & Bob.

However, it could be an issue if we only talk about "one person will get a 'black' paper and the other a 'white'" and combine this with statements like "there's nothing Earth shattering going on" and "common source explanation", etc ...

I might be exaggerating, but to me it looks like there is (at least some) risk that the "casual reader" (in the worst case) could get the "wrong impression".

We don't want to do that, do we?
 
  • #88
DevilsAvocado said:
But Bill... your link is to the 1935 EPR paradox...

Errrrr.

What's your point?

I have been talking about EPR type experiments as proposed by Bell, Einstein etc

What have you been talking about?

Thanks
Bill
 
  • #89
DevilsAvocado said:
I think there is no question (in standard QM) that the entangled photons are in a (shared) superposition (of spin); also the particles cannot be described as two "individuals", but as a "composite system" (or something like that).
Yes, the standard interpretation takes the "holistic" approach, though Bohmians prefer the "instantaneous signal" approach because they want everything to be as locally classical as possible. Either way, it seems weird to us, in our daily lives. The usual point of invoking some classical image like black and white paper is to place the issue in classical language, to underscore how strange it seems. Granted, it is something of a cheat to make the situation appear strange by framing it in a language that has no real business applying, but it's a standard device when we want to stress how much different QM is from our daily experience, rather than when we want to just explain what QM says.
DevilsAvocado said:
However, it could be an issue if we only talk about "one person will get a 'black' paper and the other a 'white'" and combine this with statements like "there's nothing Earth shattering going on" and "common source explanation", etc ...
I'm not sure what those elements were trying to evince. I was happy with the simple remark that the envelope can't know what color paper is inside it (which actually follows from the simple fact that it doesn't know what colors Alice will choose to look for), and what's more, it can't even know what color it will give in advance of any given choice Alice might make, because it wouldn't produce the right correlations for it to know that independently of the rest of what is happening in the entanglement. If that is what bhobba was saying, then the two of you are not disagreeing, if he was saying something else, I don't know what he would have meant.
 
  • #90
Ken G said:
I was happy with the simple remark that the envelope can't know what color paper is inside it (which actually follows from the simple fact that it doesn't know what colors Alice will choose to look for), and what's more, it can't even know what color it will give in advance of any given choice Alice might make, because it wouldn't produce the right correlations for it to know that independently of the rest of what is happening in the entanglement. If that is what bhobba was saying, then the two of you are not disagreeing, if he was saying something else, I don't know what he would have meant.

What I was saying isn't hard - and yes that's part of it, i.e. the envelope doesn't know what color is inside, nor does the person that opens it until it's opened.

All I am saying, and all the people that invoke this analogy are saying, is that it's not this really weird thing some seem to think it is. It's very much like the envelope analogy, with the twist that it doesn't have the property until observed. In fact, within the decoherent-histories interpretation it's a very close analogy, using the concept of frameworks.

You can get the detail in the textbook I alluded to before.

Thanks
Bill
 
  • #91
bhobba said:
Whether QM is non-local is very interpretation dependent - that's the import of Bell's Theorem and Einstein's error. QM rules out naive reality, i.e. local realism. If you reject realism (i.e. properties do not exist independent of observation) then locality is saved. If you keep it then locality is gone. But SR is still saved since it can't be used to send information which is what's required to sync clocks.

Basically all Bell type 'experiments' are doing is observing systems with spatial extent, and because of how it's arranged, if one thing in the system has a property on observation, so does the other thing - but they are spatially separated.

I have two pieces of paper, one black and one white, and put them in envelopes. I randomly send one to one person, and the other to a different person. If either of those people opens their envelope they immediately know what the other person will get when they open their envelope. There is nothing Earth-shattering going on. Same with Bell type experiments, with the twist that we can't say it has the property of blackness or whiteness until observation.

Griffiths book - Consistent Quantum Theory discusses it from this interesting perspective:
https://www.amazon.com/dp/0521539293/?tag=pfamazon01-20

Thanks
Bill

I think Griffiths's book indicates that the nonrealism has to be much stronger than just not having the properties before measurement. The nonrealism has to assume that "reality" can be described in various incompatible ways which cannot be combined http://quantum.phys.cmu.edu/CQT/chaps/cqt27.pdf. Griffiths says his interpretation is realistic and local, but if one wants to argue that it is not realistic, that seems plausible. According to an FAQ about consistent histories, "Colored slips of paper, one red and one green, are placed in two opaque envelopes, which are then mailed to scientists in Atlanta and Boston. The scientist who opens the envelope in Atlanta and finds a red slip of paper can immediately infer, given the experimental protocol, the color of the slip of paper contained in the envelope in Boston, whether or not it has already been opened. There is nothing peculiar going on, and in particular there is no mysterious influence of one "measurement" on the other slip of paper." http://quantum.phys.cmu.edu/CHS/quest.html#EPR. Similarly, Griffiths's book says that measurements reveal properties of a system before the measurement took place, and further says there is an independent reality within the consistent histories framework. http://quantum.phys.cmu.edu/CQT/chaps/cqt27.pdf. So I don't think the twist is that the cards don't have the colour before the measurement, but that reality can be described in incompatible ways.
 
Last edited:
  • #92
atyy said:
So I don't think the twist is that the cards don't have the colour before the measurement, but that reality can be described in incompatible ways.

In consistent histories, things are analysed in terms of frameworks. One must specify a framework to analyse a particular situation and can't combine incompatible ones. The other aspect is a history, which is a sequence of projection operators, and it's only when an observation is made that a particular history becomes real.

If you want to see the exact detail of how Consistent Histories analyses EPR, then it's probably best to get the book. I have a copy and have gone through it - and in that interpretation everything works out perfectly OK - but it's not an interpretation that can be discussed superficially, although I will make a stab at it.

I like Consistent Histories, but my issue with it is that it started out as a minimalist interpretation but turned out to be a bit more complicated.

In many ways it's Many Worlds without the worlds. Because of that it has to cater for some things that are trivial in MW - e.g. all histories occur in MW, so it's trivial why a particular history happens - not so in Consistent Histories. Of course the Consistent Histories guys would say that's just what's necessary to avoid the weirdness of MW.

I dug up the following discussing EPR in terms of Consistent Histories:
http://www.siue.edu/~evailat/pdf/qm12.pdf

'Although CH allows a realist understanding of quantum mechanics, it does not follow EPR in attributing quantum mechanically incompatible properties to a system. Griffiths gives an instructive story about what happens if one insists that P intersection Q is defined even if P and Q are incompatible. Consider a spin-half particle, and to simplify the notation, let [z+] stand for the projector associated with S_z = +1/2, and similarly for other projectors. Suppose now that the composite property S_z = +1/2 intersection S_x = +1/2 existed. Then its corresponding projector would have to project onto a subspace P of the two-dimensional Hilbert space H of the spin-half particle. However, no such subspace can exist.'

Basically, in EPR there are two consistent frameworks - one where the first particle is +1 and the second is -1, and one with the converse. Others simply do not exist, so no problems can arise.

One framework corresponds to one envelope getting black and the other white, and the other framework the converse.
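To make the point about incompatible properties concrete, here is a quick worked check in standard spin-1/2 notation (a sketch of my own, not taken from Griffiths's book or the notes linked above). The projectors onto "spin up along z" and "spin up along x" are

$$P_{z+} = |z^+\rangle\langle z^+| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad P_{x+} = |x^+\rangle\langle x^+| = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad |x^+\rangle = \frac{1}{\sqrt{2}}\bigl(|z^+\rangle + |z^-\rangle\bigr).$$

A vector in the range of both projectors would have to lie along both |z+> and |x+>, but these are distinct rays (their overlap is 1/√2, not 1), so the only vector they share is the zero vector. The conjunction "S_z = +1/2 AND S_x = +1/2" therefore corresponds to the zero subspace, so there is no nontrivial projector for it. And since $P_{z+}P_{x+} \neq P_{x+}P_{z+}$, the two projectors cannot belong to a single projective decomposition of the identity, i.e. to a single framework.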

Thanks
Bill
 
Last edited:
  • #93
bhobba said:
The outcomes are anti-correlated with 100% certainty - nothing else is possible. Anti-correlated means only two outcomes are possible, called + and -. If one measures +, the other is automatically -, and conversely.

You can never get ++ or --.

This is from the definition of the experiment. There are no ifs or buts about it - it's its very definition.

bhobba said:
One framework corresponds to one envelope getting black and the other white, and the other framework the converse.

bhobba said:
Errrrr.

What's your point?

I have been talking about EPR-type experiments as proposed by Bell, Einstein, etc.

What have you been talking about?

Bill, you have helped me many times, and you know a lot more than I do about many issues. However, it's only human to make mistakes – just ask me – I've made terribly embarrassing mistakes many times. But this time, all the available facts clearly show that any confusion/mistake is not on my part.

My experience is also that the longer one defends an obvious mistake, the worse things get...

Therefore, as a friend, it is not in my interest to prolong this "unfortunate situation". If you don't get it this time, I'm afraid there's nothing more I can do:

Wikipedia - Bell's theorem (https://en.wikipedia.org/wiki/Bell's_theorem) said:
The probability of *the same result* being obtained at the two locations varies, depending on the relative angles at which the two spin measurements are made, and is strictly between *zero* and *one* for all relative angles other than perfectly parallel alignments (0° or 180°). Bell's theorem is concerned with correlations defined in terms of averages taken over very many trials of the experiment. [...] if the pairs of outcomes are always the same, the correlation is +1, no matter which same value each pair of outcomes have. If the pairs of outcomes are always opposite, the correlation is -1. Finally, if the pairs of outcomes are perfectly balanced, being 50% of the times in accordance, and 50% of the times opposite, the correlation, being an average, is 0. [...]

Measuring the spin of these entangled particles along anti-parallel directions (i.e., along the same axis but in opposite directions), the set of all results is perfectly correlated. On the other hand, if measurements are performed along parallel directions they always yield opposite results, and the set of measurements shows perfect anti-correlation. Finally, measurement at perpendicular directions has a 50% chance of matching, and the total set of measurements is uncorrelated. These basic cases are illustrated in the table below.

[Table from the article omitted: pair correlations for parallel (anti-correlated), anti-parallel (correlated), and perpendicular (uncorrelated) measurement axes.]
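For anyone who wants to check those numbers directly, here is a minimal numerical sketch (my own, not from the Wikipedia article) that computes the singlet correlation from the usual two-qubit formalism; it reproduces -1, 0 and +1 for parallel, perpendicular and anti-parallel settings.

Python:
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(theta):
    # Spin observable (outcomes +1/-1) along a direction at angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2), with |0> = spin up along z, |1> = spin down
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

def correlation(theta_a, theta_b):
    # E(a, b) = <psi| A(theta_a) x B(theta_b) |psi>; for the singlet this equals -cos(theta_a - theta_b)
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    return float(np.real(psi.conj() @ op @ psi))

for label, angle in [("parallel (0°)", 0.0),
                     ("perpendicular (90°)", np.pi / 2),
                     ("anti-parallel (180°)", np.pi)]:
    print(f"{label:22s} E = {correlation(0.0, angle):+.3f}")
# parallel       -> -1 (perfect anti-correlation)
# perpendicular  ->  0 (uncorrelated)
# anti-parallel  -> +1 (perfect correlation)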
 
Last edited by a moderator:
  • #94
bhobba said:
All I am saying, and all the people that invoke this analogy are saying, is its not this really weird thing some seem to think it is. Its very much like the envelope analogy with the twist it doesn't have the property until observed.
Yet it seems to me that what needs to be clarified, by any classical analogy, is that there are actually two twists here: one that is already quite a bit different from our daily experience, and another that is so different it is regarded as "spooky."

The first twist is just that the envelope doesn't hold a particular color, but this has nothing to do with entanglement; it is just the concept of complementarity (a state with a definite value for one measurement generally has no definite value for another). So we cannot say "the envelope contains a white piece of paper and not a red one" until we know whether white/black or red/blue is the measurement. That is a strange enough state of affairs, but we might still wish to imagine the envelope is thinking "if Alice measures white/black, I'll be white, and if red/blue, I'll be blue", independently of anything else that could be going on elsewhere.

But entanglement ruins even that: the envelope cannot independently know what result it will give to any given measurement, because there need to be correlations with other measurements that the envelope does not know about. These two twists are often contrasted with "Bertlmann's socks", where there is a left sock and a right sock, so the envelope "knows" the measurement in question is going to be about leftness/rightness and can know which it will produce (and that it will be opposite to the other envelope), even if Alice does not. But note that Bertlmann's socks fails to include either of the two twists I mentioned above, so it is sometimes not made clear that there are two completely different twists to appreciate, not just one.
 
  • #95
The "twist" that quantum mechanics places on correlated values makes all the difference in the world, it seems to me.

In the classical situation there are two envelopes, one containing a black piece of paper and one containing a white piece of paper. Alice gets one envelope, and Bob gets the other. When Alice opens her envelope, she immediately knows what's in Bob's envelope. This is explained purely as a change in knowledge: the envelope's contents were already in a definite but unknown state, and opening it just revealed information about that pre-existing state.

The difference with the quantum EPR experiment is that it is not consistent (at least not without jumping through strange hoops) to assume that each particle had a definite but unknown state prior to measuring its spin. So an explanation purely in terms of a change in knowledge doesn't seem possible.
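To make that concrete, here is a small simulation sketch (my own toy model, purely for illustration) of the most natural "definite but unknown" story: each pair leaves the source carrying a hidden axis, and each detector's result is fixed in advance by that axis. The model reproduces the envelope behaviour perfectly when both analyzers are set the same way, but at intermediate angles its correlation is a straight line in the angle rather than the -cos θ that quantum mechanics predicts and experiments confirm.

Python:
import numpy as np

rng = np.random.default_rng(0)
N = 200_000  # number of simulated pairs

def toy_correlation(theta_a, theta_b):
    # Hypothetical local "definite but unknown" model: each pair carries a hidden
    # axis lam fixed at the source; Alice reports sign(cos(theta_a - lam)) and Bob
    # reports the opposite of sign(cos(theta_b - lam)).
    lam = rng.uniform(0.0, 2 * np.pi, N)
    a_out = np.sign(np.cos(theta_a - lam))
    b_out = -np.sign(np.cos(theta_b - lam))
    return float(np.mean(a_out * b_out))

for deg in (0, 45, 90, 135, 180):
    th = np.radians(deg)
    print(f"{deg:3d}°   toy model: {toy_correlation(0.0, th):+.3f}   quantum: {-np.cos(th):+.3f}")
# The toy model matches quantum mechanics at 0°, 90° and 180° (so it reproduces the
# envelope story), but not at 45° or 135° - and Bell showed that no local
# pre-assignment of values can match the quantum curve at all angles.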

On the other hand, an explanation in terms of causal influence doesn't seem very plausible either, since this influence would have to propagate faster than light (which means backwards in time for some observers). To me, it's a really tough nut to crack. Maybe retrocausality or super-determinism would explain it, or MWI.
 
  • #96
This paper from Studies in History and Philosophy of Modern Physics

http://www.lophisc.org/wp-content/uploads/Price.pdf

sums up very nicely the sort of choices we are faced with for the different interpretations. The gist of it seems to be that if we are any sort of realist other than an Everettian or a Bohmian, there is a choice to be made between a time-asymmetric ontology and retrocausality, and that this is true in QM in a manner that is not true in classical physics.
 
  • #97
The question I ask is: why not just conclude that the marriage of causation and local realism is a classical notion? Both Einstein and Bohm seemed to say we should take classical thinking as our basic paradigm, and use it whenever we can, no matter how badly we need to retrofit it. That seems kind of backwards to me: we built classical notions from a set of experiences, and now we are having new experiences, so we need to be ready to relax old notions and build new ones. Local realism and causation seem like they need to be chucked, and retrocausation seems to have simply not gotten the memo that the whole causation notion should have been let go, other than as a perfectly viable theory of information propagation between people doing experiments and asking questions. In that spirit, we just say we have an entangled system, so we get entangled correlations, and leave it at that. Let QM speak for itself, with classical analogs used only as devices to show how QM isn't classical, not creatively retrofitted to patch up how we think about QM.
 
Last edited:
  • #98
Ken, nice post. If I was a betting type I might put my money on retrocausality being able to be demonstrated by experiment. With MWI it would seem no chance would exist. If there is an accommodating universe there should always be a way to move forward. We live in interesting times!
 
  • #99
Jilang said:
Ken, nice post. If I was a betting type I might put my money on retrocausality being able to be demonstrated by experiment. With MWI it would seem no chance would exist.
GRW and even Bohmian might also be experimentally distinguishable from OQM.
 
  • #100
atyy said:
Griffiths says his interpretation is realistic and local, but if one wants to argue that it is not realistic, that seems plausible. According to an FAQ about consistent histories, "Colored slips of paper, one red and one green, are placed in two opaque envelopes, [...]

I smell a rat.

On PF we've seen numerous cranky attempts with "African Doctors", "French Epidemics", and god knows what else, "explaining" EPR-Bell experiments while preserving local realism. Every one of them has failed, catastrophically, and of course the "Slip/Envelope" case is no different.

The "Slip/Envelope" case is mentioned in the FAQ, explaining how the 1935 EPR paradox gets handled in Consistent Histories. Of course, Bell's 1964 theorem is not mentioned once, since this would ruin the case completely.

In the Brief Introduction to Consistent Histories we are served the same moldy dish:

Brief Introduction to Consistent Histories said:
Consistent histories can be used to analyze various quantum paradoxes, such as the interference produced by a particle passing through a double slit, or the correlated pair of particles considered by Einstein, Podolsky, and Rosen. This allows the paradox to be understood in quantum terms, without any need to invoke peculiar long-range influences or other ghostly effects.

But what does Griffiths say about this in his book Consistent Quantum Theory?

Well, the "Slip/Envelope" case gets introduced in chapter 23.4 about Stern-Gerlach and Measurements of One Spin, where we have the mutual exclusion outcome of Z+ or Z-, in which the "Slip/Envelope" case naturally fit like a glove. If we combine this with the eccentric methodology of stopping time at 1935, and 'forgetting' everything about Bell's theorem, we're almost there...

Wow, Griffiths is a crackpot??

No, certainly not, because in chapter 24.4 about Bell Inequalities, the "Slip/Envelope" case is not mentioned once (only in 24.1 about the 1935 EPR Paradox). Instead we get this restriction:

Consistent Quantum Theory - 24.4 Bell inequalities said:
Thus the point at which the derivation of (24.10) begins to deviate from quantum principles is in the assumption that a function α(w_a, λ) exists for different directions w_a. As long as only a single choice for w_a is under consideration there is no problem, for then the "hidden" variable λ can simply be the value of S_aw at some earlier time. But when two (excluding the trivial case of w_a and -w_a) or even more possibilities are allowed, the assumption that α(w_a, λ) exists is in conflict with basic quantum principles.

The phrase "the “hidden” variable λ can simply be the value of Saw at some earlier time" is just a circumscription of the "Slip/Envelope" case, where this prerequisite is an absolute necessity, thus the "Slip/Envelope" case only works as long as "only a single choice for wa is under consideration".

Hence Griffiths himself, in chapter 24.4, points out that the "Slip/Envelope" case is not compatible with Bell inequalities, and he sums up the chapter with the following:

[my bolding]
Consistent Quantum Theory - 24.4 Bell inequalities said:
In summary, the basic lesson to be learned from the Bell inequalities is that it is difficult to construct a plausible hidden variable theory which will mimic the sorts of correlations predicted by quantum theory and confirmed by experiment. Such a theory must either exhibit peculiar nonlocalities which violate relativity theory, or else incorporate influences which travel backwards in time, in contrast to everyday experience.

Everybody, including Einstein, understands that the "Slip/Envelope" case is a hidden variable theory. End of discussion.
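For what it's worth, Griffiths's point about α(w_a, λ) can also be checked by brute force. In the CHSH form of Bell's argument, a deterministic hidden variable λ amounts to pre-assigned answers (+1 or -1) for each of Alice's two settings and each of Bob's two settings, and averaging over any distribution of λ can do no better than the best single assignment. The sketch below (my own, not from Griffiths's book) enumerates all 16 assignments and finds the bound 2, while the quantum singlet prediction at one standard choice of angles is 2√2 ≈ 2.83 - which is what experiments confirm and what no "Slip/Envelope" story can reproduce.

Python:
import itertools
import numpy as np

# A deterministic hidden variable amounts to pre-assigned answers (+1 or -1)
# for Alice's settings a1, a2 and Bob's settings b1, b2.
best = 0
for a1, a2, b1, b2 in itertools.product((+1, -1), repeat=4):
    S = a1 * b1 - a1 * b2 + a2 * b1 + a2 * b2   # CHSH combination
    best = max(best, abs(S))
print("max |S| over all pre-assigned values:", best)   # prints 2 (the Bell/CHSH bound)

# Quantum singlet prediction E(a, b) = -cos(a - b), at one standard choice of angles.
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's analyzer angles
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's analyzer angles
S_qm = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print("quantum |S| at these angles:", round(abs(S_qm), 3))   # prints 2.828, i.e. 2*sqrt(2)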

What's the problem?

The problem is when guys like postdoctoral fellow Vlad Gheorghiu package the whole thing into an unrecognizable quicksand of delusion, capable of making an erudite SA swallow it hook, line, and sinker – into the hole of scientific deception.
 