reilly said:
vanesch --
A knowledge-based QM interp, as with any probabilistic theory, necessarily obeys unitarity.
I don't understand what this could mean.
As usually formulated, QM is local, point interactions and all that.
The *unitary* part of QM is local, yes.
I wonder what it could possibly mean for something to interact "locally" if it is just a knowledge description. What does it mean that my "knowledge of electron A" interacts locally with my "knowledge of proton B"?
Assuming that this is not the objective state of the electron A or the proton B, I don't see what can be "local" to it, and why my 'knowledge of electron A' cannot have any interaction with my knowledge of muon C, which is - or rather, I know that it is - 7 lightyears from here.
So how do you implement something like Lorentz invariance for knowledge?
I've always thought of MWI as odd. Seems to me that it's just another attempt to subvert probability. If it is such a great idea, why has it not been in place since at least Fermat's notions about games of chance?
The reason for MWI is of course NOT to circumvent probability or anything like that. In fact (although many MWI proponents trick themselves, IMO, into the belief that they can do without it - I'm convinced that they are wrong, and that probabilistic concepts are needed there too), the only reason for MWI is to be able to take the wavefunction as an objective description of reality. You run into a lot of problems and paradoxes if you take the wavefunction as describing objective reality while accepting the probabilistic projection, but the problem is not the probabilistic aspect of it; the problem is twofold:
1) the fact that all elementary interactions between quantum systems are described as strictly unitary operations on the quantum state (the generator of that evolution being a Hermitian operator which we call the Hamiltonian) - so there is no known mechanism to implement a non-unitary evolution, which is what a projection is (a small numerical sketch of this point follows below)
2) the fact that this projection cannot be formulated in a Lorentz-invariant way
The problem is NOT the probabilistic aspect.
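To make point 1) concrete, here is a minimal numerical sketch (my own illustration, not something from the discussion above; the 2-level Hamiltonian and the time step are made up). Exponentiating a Hermitian generator always gives a norm-preserving, unitary evolution, while a projection is not norm-preserving, so no Hamiltonian can generate it:

```python
import numpy as np
from scipy.linalg import expm

# A made-up Hermitian Hamiltonian for a 2-level system (hbar = 1).
H = np.array([[1.0, 0.5 - 0.3j],
              [0.5 + 0.3j, -1.0]])
assert np.allclose(H, H.conj().T)       # Hermitian

U = expm(-1j * H * 0.7)                 # unitary evolution over an arbitrary time t = 0.7
psi = np.array([0.6, 0.8j])             # a normalized state

print(np.vdot(psi, psi).real)           # 1.0
print(np.vdot(U @ psi, U @ psi).real)   # 1.0  -> unitary evolution preserves the norm

# A projection (here onto the first basis state) shrinks the norm,
# so it cannot arise from exp(-i H t) for any Hermitian H.
P = np.diag([1.0, 0.0])
print(np.vdot(P @ psi, P @ psi).real)   # 0.36 -> not norm-preserving
```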
There is a difference between how classical physics relates to probability and how the quantum state relates to probability, and it is the following: when we use probability in classical physics, the probability distribution itself plays no physical role.
When a classical system evolves from A to A', and from B to B', then, if we assign probability p1 to A and p2 to B, we'll have the outcome A' with probability p1 and the outcome B' with probability p2. If we learn that the system finally ended up in B', then we can "update backward" our probabilities, say that the system was, after all, in state B, and that A was just part of our ignorance. As you state, there's no reason to introduce a "parallel world" in which A was there after all, while we just happen to be in a universe where B happened.
The reason why this is superfluous is that the numbers p1 and p2 never enter into any physical consideration. They are just carried along, with the classical physics, WITHOUT INFLUENCING THE DYNAMICS.
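As a toy illustration of that last point (my own sketch, with made-up states and an arbitrary deterministic rule): the classical dynamics never reads p1 or p2; they are just relabelled alongside the states and then updated when we learn the outcome.

```python
# Toy classical picture: deterministic dynamics, probabilities only encode our ignorance.
evolve = {"A": "A'", "B": "B'"}        # the dynamics never consults the probabilities

prior = {"A": 0.3, "B": 0.7}           # p1, p2: what we assign before looking

# The probabilities are simply carried along with the states.
after = {evolve[s]: p for s, p in prior.items()}            # {"A'": 0.3, "B'": 0.7}

# We observe B'. We "update backward": the system was B all along,
# and A was only part of our ignorance. Nothing physical depended on p1 or p2.
posterior = {s: (1.0 if s == "B'" else 0.0) for s in after}
print(after, posterior)                # {"A'": 0.3, "B'": 0.7} {"A'": 0.0, "B'": 1.0}
```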
But in quantum theory, this is not true. If the state is a|u> + b|v>, and if we now evolve this into the state a|u'> + b|v'>, work in the basis |x> = |u'> + |v'> and |y> = |u'> - |v'> (up to normalization), and measure x/y, then the probability of getting x or y will depend on the numbers a and b. It is not the case that the coefficients a and b are somehow a measure of our lack of knowledge which gets updated after the measurement, because if that were true, there would be no difference between a STATISTICAL MIXTURE of the states |u> and |v> and the state a|u> + b|v>.
To illustrate that this is not the case, consider a = b. A statistical mixture of 50% |u> and 50% |v> will yield an outcome of 50% |x> and 50% |y>. Nevertheless, the state |u> + |v> (which would seem to carry exactly the same statistical content) will result in 100% |x> and 0% |y>.
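Here is a minimal numerical check of that contrast (my own sketch; for simplicity I take the evolution to be trivial, |u'> = |u> and |v'> = |v>, and normalize the states, which the text above leaves implicit):

```python
import numpy as np

u = np.array([1.0, 0.0])               # |u>  (with |u'> = |u>, |v'> = |v> for simplicity)
v = np.array([0.0, 1.0])               # |v>
x = (u + v) / np.sqrt(2)               # |x> = (|u'> + |v'>)/sqrt(2)
y = (u - v) / np.sqrt(2)               # |y> = (|u'> - |v'>)/sqrt(2)

def xy_probs(rho):
    """Born-rule probabilities for the x/y measurement on a density matrix rho."""
    return np.real(x @ rho @ x), np.real(y @ rho @ y)

mixture  = 0.5 * np.outer(u, u) + 0.5 * np.outer(v, v)   # 50% |u>, 50% |v>
superpos = np.outer(x, x)                                # the pure state (|u> + |v>)/sqrt(2)

print(xy_probs(mixture))    # (0.5, 0.5): the mixture gives x and y half the time each
print(xy_probs(superpos))   # (1.0, 0.0): the superposition always gives x, never y
```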
So the values of a and b CANNOT be interpreted as describing our lack of knowledge which gets updated during the measurement. It would be hard to imagine that NOT KNOWING something (having non-zero values for a and b) would make it impossible to obtain the outcome |y>, while KNOWING something (like knowing that a = 1 and b = 0) would suddenly make the states |x> and |y> appear 50% each.
So those numbers a and b HAVE PHYSICAL MEANING. They influence what will happen later, and this cannot be seen in a purely "I didn't know, and now I learned" fashion, as probability CAN be seen in a classical context. It is the phenomenon of quantum interference which makes the "knowledge" view of the wavefunction, IMO, untenable.
The state |u> + |v> simply has DIFFERENT PHYSICAL CONSEQUENCES than the state |u>. One cannot say that |u> + |v> expresses our lack of knowledge about whether it is |u> or |v>, while the state |u> expresses our certainty of having the system in state u for sure, because if that were so, it would be strange that a lack of knowledge leads to more certainty (namely, that we will NOT get the result y) than knowing more does.
Why is this idea absent from virtually all books on probability and statistics? (I would say all, but there are such books that I've never read.) Why not a universe in which I broke the bank at Monte Carlo, or one in which I found an extra million in my bank account, or one in which the Boston Red Sox won 25 World Series in a row? What good does such speculation bring?
It doesn't bring any good in a classical setting, because this "parallel possibility" has no influence whatsoever on the physical dynamics. You can say, in this context, that the "parallel universe" in which the initial conditions were such that the Boston Red Sox would win 25 World Series in a row has, FROM THE BEGINNING, never been there, and that we only entertained its possibility because we didn't know all the details. When we found out that this didn't happen, we could simply scrap this parallel universe from our list with no harm, BECAUSE IT HAS NEVER BEEN PART OF THE ONTOLOGICAL STATE OF THE UNIVERSE in the first place (only, we didn't know that).
But when we know that the state is |u>, and we find |x>, we cannot go back and somehow "scrap" a state from our list. We cannot say that the state was actually |u> + |v> back then, because we MEASURED u back then, and we found u, and not v. So it is not "an imaginary parallel universe which turned out not to be the right one".
The knowledge approach works, and I have yet to hear of any practical argument against it.
The most important argument against it, IMO, is that there is no description of reality in this view. It is hard to work with things for which you constantly have to remind yourself that "they aren't really there", while nevertheless developing a physical intuition for them.
I will admit that the stupendous ego of David Deutsch, which permeates his book, The Fabric of Reality, turned me away, in part, from MWI. It's one of those books that says, "Trust me, I'm right." He's a great spinner, and probably could do well as a political consultant, with such catchy ideas as shadow photons. Small wonder that his views remain relatively unknown.
Didn't read it. My only attempt was to write a paper showing that his proof was flawed, but (as has been discussed here), it was not accepted.
With all due respect, I have yet to see anything about MWI that solves any problem of QM other than with fanciful suppositions of universes we can never know. Deus ex Machina
You are probably right that it doesn't have many practical implications. In my opinion, the most important function of MWI is to rehabilitate QM as a description of reality, and to be able to put all these positivist considerations aside. As such, it removes all ambiguity about WHEN one should apply the projection postulate, and removes the need for a distinction between a physical interaction and a non-physical measurement. In most situations this distinction is so clear that it doesn't need any specific treatment, but in situations such as delayed-choice quantum erasers or EPR setups, one can wonder about when one should apply the projection postulate. Well, MWI resolves that unambiguously.
I would also like to point out that "the universes we can never know" are NOT introduced or postulated. They are simply not ELIMINATED by a projection postulate.