ttn said:
What you are calling "no statistical dependence" is equivalent to the factorization of the joint probability, yes?
Yes -- when there is no statistical dependence between A and B, we have P(A and B) = P(A) P(B), and equivalently (whenever P(B) > 0) we have P(A|B) = P(A).
ttn said:
Obviously I don't agree. I'd say: by thinking (erroneously) that Bell Locality means nothing but statistical independence, you are missing the whole point. Incidentally, I find it interesting that you cannot apparently resist converting Bell Locality (which is a *physical* condition) into factorizability (which is a purely mathematical condition).
I don't think Bell Locality means nothing but statistical independence -- but it's that aspect of Bell Locality from which this whole debate stems. (At least the part in which I'm involved)
I'm willing to grant the other aspects of Bell Locality, just not the assumption of statistical independence...
Bell said:
Very often such factorizability is taken as the starting point of the analysis. Here we have preferred to see it not as the *formulation* of 'local causality', but as a consequence thereof.
... or whatever Bell happened to be assuming that is equivalent to assuming statistical independence.
In one of the papers you linked before, Bell Locality was formulated in terms of three postulates: parameter independence, statistical independence (I believe it was called "observation independence"), and something else I can't remember.
ttn said:
I take as an axiom that physical theories should respect relativistically local causation. And then it is proved that no theory consistent with that axiom can agree with the data. So I say "oops!" I guess that axiom is *false*. No locally causal theory can explain the data.
Wait a minute, I thought it was proved no Bell Local hidden variable theory could agree with the data.
I'm going to rewind back to the original theory I posted, where I simply posited the existence of a pair of coins governed by a joint probability distribution. Upon reflection, I realize that there is a problem with this, but for reasons entirely different from what you've said. So let me make a slight modification before we continue.
Let us start with special relativity, but also add in some additional postulates:
(1) There exist objects called "magic coins", and they come in pairs.
(2) Magic coins can be in one of three states (in addition to whatever SR says): U, H, or T.
(3) Any two pairs of magic coins are otherwise identical.
(4) There is some sort of interaction called "flipping", which can be triggered in a laboratory setting and which causes a magic coin in the U state to nondeterministically transition into the H or T state.
(5) Otherwise, magic coins do not change their state.
(6) The flipping interaction for each pair of magic coins is governed by the joint probability distribution P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0. (And the distribution over all pairs of coins factors into those of the individual pairs)
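To make postulates (1)-(6) concrete, here's a minimal sketch in Python (the names MagicCoinPair and flip are just my illustration, not part of the theory). It only reproduces the joint statistics of postulate (6); it says nothing about how the two distant flips manage to agree.

```python
import random

class MagicCoinPair:
    """A rough sketch of postulates (1)-(6)."""

    def __init__(self):
        # Postulate (2): each coin is in state U, H, or T; both start in U.
        self.left = 'U'
        self.right = 'U'

    def flip(self):
        # Postulate (4): flipping takes a U coin nondeterministically to H or T.
        # Postulate (6): the pair follows P(HH) = P(TT) = 1/2, P(HT) = P(TH) = 0,
        # so the pair's outcome is drawn jointly here.
        if self.left == 'U' and self.right == 'U':
            self.left = self.right = random.choice(['H', 'T'])
        # Postulate (5): otherwise nothing changes.
        return self.left, self.right

pair = MagicCoinPair()
print(pair.flip())   # ('H', 'H') or ('T', 'T'), each about half the time
```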
The important things to note are:
(A) The theory is nondeterministic. There is nothing to determine whether the coin undergoes U-->T or U-->H.
(B) The probabilities are understood in the frequentist interpretation: we say the probability of an event E is p when the ratio of the number of times E occurs to the number of (identical) experiments approaches p as the number of experiments goes to infinity. In particular, this means probabilities represent nothing more than asymptotic behavior: it is entirely nonsensical to try to use them to describe an individual experiment.
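Just to restate (B) in symbols (this is my own restatement, nothing beyond what's said above): writing N_E for the number of the first N repetitions in which the event E occurs,

P(E) = \lim_{N \to \infty} \frac{N_E}{N}

and the worries I raise further down are about whether this limit exists and about which repetitions count as "identical".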
It follows from the axioms of the theory that:
For the experiment where we take a magic coin and flip it, we have P(H)=1/2.
For an experiment where Alice and Bob take a pair of magic coins and flip them, we have P(Bob sees H | Alice sees H) = 1 and P(Bob sees H | Alice sees T) = 0.
It does not follow from the axioms of the theory that it is impossible for Bob to see H and Alice to see T. (But the probability of it happening is zero.)
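As a sanity check on those numbers (again just an illustrative sketch of the frequentist reading, reusing the joint law from postulate (6)), counting frequencies over many repetitions looks like this:

```python
import random

# Estimate the stated probabilities as ratios over N repeated pair-flips.
N = 100_000
alice_h = alice_t = bob_h_with_alice_h = bob_h_with_alice_t = 0

for _ in range(N):
    # Joint law from postulate (6): only HH and TT occur, each with weight 1/2.
    alice = bob = random.choice(['H', 'T'])
    if alice == 'H':
        alice_h += 1
        bob_h_with_alice_h += (bob == 'H')
    else:
        alice_t += 1
        bob_h_with_alice_t += (bob == 'H')

print(alice_h / N)                            # ~ 0.5 : P(Alice sees H)
print(bob_h_with_alice_h / max(alice_h, 1))   # 1.0   : P(Bob sees H | Alice sees H)
print(bob_h_with_alice_t / max(alice_t, 1))   # 0.0   : P(Bob sees H | Alice sees T)
```

And, per (B) above, those ratios are all the probabilities mean -- the 0.0 is not a claim that the HT outcome can never happen.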
This theory claims to be complete, and local in the sense that everything that can be determined can be entirely determined from the local beables.
The important thing this is trying to convey relates to the usage of probabilities. Normally (as far as I can tell), the use of statistics in physics is either entirely aphysical or rests on a very shaky logical foundation.
For example, the frequentist definition of probability requires examining a hypothetical infinite sequence of similar experiments. But we have problems such as:
(1) The limiting ratio may not be well defined.
(2) The sequence of events is hypothetical, and cannot be physical.
(3) We don't know how to classify experiments as similar! (Well, we do know one way, but then we'd never see probabilities other than 0% or 100%)
But, at least in the above theory, we can put the frequentist definition on a rigorous footing -- since there are no factors that affect the outcome of a coin flip, it's clear that we can consider any two coin-flipping experiments as similar. (And experiments involving multiple coins are similarly easy.) We don't have to worry about whether we can define a hypothetical sequence of experiments, or whether the limiting ratio will be defined, because one of the axioms of the theory is that we can do so without any worries.
ttn said:
You seem to be using/assuming the MWI without being willing to admit the well-known weirdness of such a view. Sure, unitary evolution can get you that superposition, and you can twist and turn and eventually connect this up with what we do experience (by denying that what we see in front of our face is the truth, i.e., by postulating that we're all deluded about what the outcomes of the experiments actually were).
I do prefer a view that's MWI-like, although I do not know whether it leads to MWI. I don't like a nondeterministic theory -- I would prefer to make a statistical theory, and to take statistics seriously.
In this interpretation, when we say something is a random variable, we do not mean that it is something that can acquire an outcome! We mean that it is something that assigns nonnegative real numbers, summing to one, to the possible outcomes.
Other interpretations suffer from two very difficult philosophical problems:
(1) They are nondeterministic. They assert that there is no reason the measurement turns out the way it did... it just did. But at least we have this probability distribution that describes the results!
(2) It is mysterious why and how the frequentist definition of probability manages to describe anything!
But my interpretation solves both problems.
(1) It is a deterministic theory of random variables.
(2) Probabilities are fundamental elements of reality -- so we don't have to use the frequentist definition to talk about probabilities.
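A crude sketch of what I mean by (1) and (2) (the names and the Distribution alias are just my illustration, not a proposal for the actual dynamics): the "random variable" is the whole assignment of numbers, and the theory maps one assignment to another deterministically -- nothing ever acquires an outcome.

```python
from typing import Dict

# A "random variable" here is just an assignment of nonnegative numbers,
# summing to one, to the possible outcomes of a pair of magic coins.
Distribution = Dict[str, float]

def flip_pair(state: Distribution) -> Distribution:
    # Deterministic rule: an unflipped pair (both coins in U) is mapped to
    # the joint distribution of postulate (6). No outcome is ever sampled.
    if state == {'UU': 1.0}:
        return {'HH': 0.5, 'HT': 0.0, 'TH': 0.0, 'TT': 0.5}
    return state   # otherwise the assignment is left unchanged

print(flip_pair({'UU': 1.0}))   # {'HH': 0.5, 'HT': 0.0, 'TH': 0.0, 'TT': 0.5}
```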
It does raise the philosophical issue of why it looks like we see outcomes, but that question does have an answer.
But, if you insist that this is too radical an approach, we can stick with nondeterminism and assert that quantum mechanics without the collapse postulate is capable of describing everything that can be described -- it has a unitarily evolving "state of the universe", and the only other things that can possibly be described are the probabilities involving the outcomes of measurements. But, as you remember, the classical use of probabilities is a matter of statements about asymptotic behavior, not statements about individual events.