nanosiborg said:
@bohm2
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
Yes, and physics is the attempt to describe objective, reproducible facts about our observations of phenomena as precisely as possible. The question of why this works so well, and in relatively simple mathematical terms, or even why nature behaves as we observe her, is not a matter of physics (or any natural science) but of philosophy or even religion.
That's why I'm a follower of the minimal statistical interpretation (MSI): It uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble of interpretations with a collapse (which, I think, is the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).
Also, it should become clear how the violation of Bell's inequality is to be understood within the MSI. Take as an example an Aspect-Zeilinger-like "teleportation" experiment with entangled photons and let's analyze it in terms of the MSI.
Within the MSI the state is described by a statistical operator (the mathematical level of understanding) and is related to the real world (the physics level of understanding, dealing with real objects like photons, crystals, lasers, polarization filters, and whatever else the experimental quantum opticians have in their labs) as an equivalence class of preparation procedures that are appropriate to prepare the system in question (with high enough accuracy) in this state.
Of course, a given preparation procedure has to be checked to see that it really produces this state. According to the MSI this means that I have to be able to reproduce the procedure to high enough accuracy that I can prepare as many systems as I like in this state, independently of each other, and thus create a large enough ensemble to verify the probabilistic predictions: the claim is that each system in the ensemble, through this preparation procedure, is prepared in such a way that its statistical behavior is described (at least up to the accuracy reachable by the measurement procedure used) by this state.
In the Zeilinger experiment, what's done in the preparation step is to produce a two-photon Fock state via parametric down-conversion, by shooting a laser beam onto a birefringent crystal and then leaving the photon pair alone (i.e., there must be no interactions of either photon with anything around, so that we can be sure the pair stays in this very state). In the simplest case the photon pair used is in a total-helicity-0 state, i.e., the polarization part is described by the pure state
|\Psi \rangle=\frac{1}{\sqrt{2}}(|HV \rangle-|VH \rangle).
The single-photon polarization states are then given by the corresponding partial traces over the other photon and turn out to be the maximum-entropy statistical operators
\hat{R}_A=\hat{R}_B=\frac{1}{2}(|H \rangle \langle H|+|V \rangle \langle V|).
Thus the single photons are unpolarized (i.e., an ensemble behaves like an unpolarized beam of light when one takes the appropriate average over many single-photon events). In terms of information theory, the single-photon polarization is maximally undetermined (maximal von Neumann entropy).
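This is easy to check numerically. Here is a minimal NumPy sketch (my own check, not part of the original argument) that builds the two-photon state above and takes the partial traces:

```python
# Build |Psi> = (|HV> - |VH>)/sqrt(2) and partial-trace out one photon.
import numpy as np

H = np.array([1.0, 0.0])          # |H>
V = np.array([0.0, 1.0])          # |V>

# two-photon state as a 4-component vector, |HV> = kron(H, V)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())   # two-photon statistical operator
rho = rho.reshape(2, 2, 2, 2)     # indices: (A, B, A', B')

R_A = np.trace(rho, axis1=1, axis2=3)   # trace over photon B
R_B = np.trace(rho, axis1=0, axis2=2)   # trace over photon A

print(R_A)   # [[0.5 0. ] [0.  0.5]] -> maximally mixed, unpolarized
print(R_B)   # same; von Neumann entropy ln 2, the maximum for one qubit
```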
In principle, it's possible to wait a very long time and then perform the polarization analyses at very distant places. Then Alice and Bob can do their measurements in any chronological order. E.g., Alice measures her photon first and Bob afterwards, and they can do this at "space-like distances", i.e., such that a causal effect of Alice's measurement on Bob's photon could only occur if there were faster-than-light signal propagation. They can even do their experiments at the same time, so that one would need signal propagation at an arbitrarily large speed to have a causal effect of one measurement on the other.
It's well known that this prediction of quantum theory is fulfilled to an overwhelming accuracy: If both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results: If Alice finds her photon in horizontal (vertical) polarization, Bob finds his in vertical (horizontal) polarization.
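To spell out where this one-to-one correspondence comes from, here is the one-line Born-rule check for the state |\Psi \rangle above (my addition):
P(H_A,V_B)=|\langle HV|\Psi \rangle|^2=\frac{1}{2}, \qquad P(V_A,H_B)=|\langle VH|\Psi \rangle|^2=\frac{1}{2}, \qquad P(H_A,H_B)=P(V_A,V_B)=0.
So equal outcomes at the two sites never occur, while each single outcome taken alone is a 50/50 coin toss.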
Now it is a matter of interpretation how you conclude anything about "faster-than-light signal propagation" (FTLSP): Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble, which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs. At the same time it can be verified that the single-photon ensembles at Alice's and Bob's places each behave like an unpolarized beam of light, i.e., both measure (on average!) horizontal polarization of their photon in 50% of the cases and vertical in the other 50%. Subsequently they can match their measurement protocols and verify the one-to-one correlation. No FTLSP has been necessary to explain this correlation, since it was a (merely statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary to explain the outcomes.

According to standard QED the interaction of each photon is a local one, with the polarization filters and the detector at its own place, and the measurement of one photon cannot influence the measurement of the other. Within the MSI one doesn't need anything that violates this very successful assumption, on which all our (theoretical) knowledge of elementary particles, and also of photons (summarized in the Standard Model of elementary particles), is based: at the very foundations of relativistic QFT and the definition of the S-matrix we use the microcausality + locality assumption. So there is no need (yet) to give up these very successful assumptions.
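To make the ensemble character explicit, here is a small Monte Carlo sketch (again my own illustration): outcomes are drawn pairwise from the joint distribution fixed at preparation, nothing is signalled between the sites, and only the matched protocols reveal the correlation:

```python
# Ensemble simulation: pair outcomes are drawn from the joint Born
# probabilities fixed by the preparation; no signalling is modeled.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# joint probabilities for same-axis measurements on |Psi>:
# (H,V) and (V,H) each with probability 1/2; (H,H) and (V,V) never.
pairs = [("H", "V"), ("V", "H")]
draws = rng.integers(0, 2, size=N)
alice = np.array([pairs[d][0] for d in draws])
bob   = np.array([pairs[d][1] for d in draws])

# each local record alone looks like an unpolarized beam ...
print("Alice H fraction:", np.mean(alice == "H"))      # ~0.5
print("Bob   H fraction:", np.mean(bob == "H"))        # ~0.5

# ... and only comparing the protocols shows the one-to-one correlation
print("anticorrelated pairs:", np.mean(alice != bob))  # 1.0
```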
Now if you adhere to a collapse interpretation à la (some flavors of) the Copenhagen interpretation (CI), you believe that at the moment Alice's detector has registered her photon as horizontally polarized, the two-photon state must instantaneously collapse to the new pure state |HV \rangle. This happens in 50% of all cases, and then of course Bob, who detects his photon after Alice (the detection events being separated by a space-like interval in Minkowski space), must necessarily find his photon in the vertical polarization state. Thus, concerning the outcome of the experiment, this interpretation is no different from the MSI, but it of course causes serious problems with the local causal foundations of relativistic QFT. If the collapse of the state were a physical process acting on the single photon pair, there would have to be FTLSP, and since the detection events of the photons are space-like separated, an observer in an appropriate reference frame could claim that Bob's measurement was before Alice's, and thus the causal sequence would be reversed: from his point of view, Bob's measurement caused the instantaneous collapse before Alice could detect her photon. This, however, would mean that the very foundation of all physics, namely the causality principle, is violated, without which there is no sense in doing physics at all.
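Note, as a gloss of my own: the same prediction follows in the MSI as a plain conditional probability of the joint distribution fixed at preparation, with no collapse postulate needed:
P(V_B|H_A)=\frac{P(H_A,V_B)}{P(H_A)}=\frac{1/2}{1/2}=1.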
That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT as a statistical description of ensembles of equally but independently prepared systems, and not as a description of any single system within such an ensemble. Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility that there is a non-local deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
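For reference, the Bell violation mentioned here can be made quantitative with the CHSH combination; this sketch (my addition) just evaluates the standard quantum prediction E(\alpha,\beta)=-\cos 2(\alpha-\beta) for polarization measurements on |\Psi \rangle at the usual optimal analyzer angles:

```python
# CHSH check for the polarization singlet state (illustrative).
import numpy as np

def E(a, b):
    """Correlation of +/-1 polarization outcomes at analyzer angles a, b."""
    return -np.cos(2 * (a - b))

# standard optimal angle choices (radians)
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ~ 2.828
```

Any local deterministic (hidden-variable) model is bounded by |S| \le 2, which is exactly what the experiments rule out.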