# Many-Worlds, Deriving the Born Rule?

1. Dec 7, 2013

### MHuiq

Lately I have been interested in the many-worlds interpretation, and in particular the way it is described by Wallace in his latest book The Emergent Multiverse.

In the book he tries (and arguably succeeds) to derive the Born rule from unitary dynamics using decision-theoretic arguments. But for this he uses decoherence to explain the emergence of different branches.

I have read the criticism that decoherence itself relies on the assumption of the Born rule, since it treats the low-probability outcomes as negligible. Thus Wallace's argument seems circular.

The criticism seems pretty straightforward and I think Wallace must have thought of this too, but I am not sure how he or other MWI-proponents resolve this issue.

Can someone shed some light on how this is resolved, if it is resolved?

2. Dec 7, 2013

### Staff: Mentor

I have gone through Wallace's derivation in his book. If I recall correctly he has a FAQ section that answers questions like yours - I looked it up - the issue of circularity is addressed on page 253.

It's not circular, but it does have a tacit assumption that the quantum formalism needs to be respected; in particular, the confidence level that you are in a particular world (and that's basically what 'decision theoretic' means - it's a Bayesian approach) does not depend on a chosen basis. That being the case, there is a well-known theorem, Gleason's theorem, that can be used anyway.

Decoherence doesn't use the Born rule - you get the improper mixed state from it regardless; interpreting the mixed state does. But in any case, Gleason's theorem is independent of decoherence.

Thanks
Bill

Last edited: Dec 7, 2013
3. Dec 8, 2013

### tom.stoer

Wallace himself admits that it's still a matter of debate whether his proof proves what he intends to prove ...

4. Dec 8, 2013

### MHuiq

Thank you for your replies. Let's see if I understand this correctly. By Gleason's theorem, there can be no probability measure other than the Born rule on a Hilbert space of dimension 3 or greater.

Thus if I want to interpret the improper mixed state from decoherence probabilistically, I have to use the Born rule. I cannot use some kind of inverse measure 1/|amplitude| under which the small-amplitude states are the most probable.
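A quick numerical illustration of this point (my own toy example, not from the thread): the Born weights $|a_n|^2$ of a normalized state automatically sum to 1 in any orthonormal basis, while an ad hoc inverse-amplitude weighting does not even yield a normalized measure.

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)          # a normalized state in C^3

# Born weights |a_n|^2 sum to 1 in any orthonormal basis...
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
for basis in (np.eye(3), q):
    amps = basis.conj().T @ psi
    print(np.sum(np.abs(amps) ** 2))    # 1.0 both times

# ...whereas "inverse" weights 1/|a_n| are not even a normalized measure
print(np.sum(1.0 / np.abs(psi)))        # >= 3 for a normalized state, not 1
```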

Furthermore, I see that Wallace explains on page 253 that the Hilbert-space norm is a kind of "natural measure" of state perturbations in Hilbert space, a conclusion he draws by looking at the microphysical dynamics. Thus a small change in an amplitude causes a small change in the dynamics. I assumed this is backed by Gleason's theorem, because if there were a 1/|amplitude| measure then of course small changes in the amplitude of a state would result in physically big changes in the dynamics.
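Wallace's "natural measure" claim can be sanity-checked numerically. A toy sketch (my own, with an arbitrary random Hamiltonian, so purely illustrative): a tiny Hermitian perturbation of $H$ produces a correspondingly tiny change, in Hilbert-space norm, of the evolved state.

```python
import numpy as np

def evolve(H, psi, t):
    # unitary evolution exp(-iHt)|psi> via the spectral decomposition of H
    E, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * E * t)) @ (V.conj().T @ psi)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2            # a random Hermitian Hamiltonian

B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
dH = 1e-6 * (B + B.conj().T) / 2    # a tiny Hermitian perturbation

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0

diff = np.linalg.norm(evolve(H, psi, 1.0) - evolve(H + dH, psi, 1.0))
print(diff)   # on the order of 1e-6: small in Hilbert-space norm
```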

I am not sure if I am completely convinced yet, but his argument is becoming clearer to me.

5. Dec 8, 2013

### tom.stoer

The problem with the probabilities in the Everett interpretation is the following:

1st, there are no probabilities at all! Neither is the Born rule introduced as an axiom, nor is there a reason why there should be any probabilities at all. All there is is a Hilbert space and a (one-dimensional) ray subject to deterministic, unitary time evolution. So Gleason's theorem simply does not apply.

2nd, what the theory has to explain (via a mathematical derivation) is how and why there is a branch structure with appropriate amplitudes corresponding to the many worlds "interpretation" (the MWI is - as far as I can see - not an interpretation but a research program that tries to derive from the mathematical formalism certain results which are introduced as axioms in a collapse interpretation). Assume we have such a branch structure, and assume there are emergent amplitudes with the correct values (via decoherence); even then, this does not provide any reason to talk about a probability.

From a bird's-eye perspective, the a's in

$\sum_n a_n |n\rangle$

need not (must not) be interpreted as probabilities (the components of a vector are not by themselves probabilities). So why in QM? I think Wallace is rather clear about the problem in chapter 4.5, and especially at the end of chapter 4.6, of http://arxiv.org/abs/0712.0149. I have not seen any paper going beyond these results, but I would be happy to find one.

(to be clear about that: I believe that what we observe in the real world is just a set of probabilities to find ourselves in certain branches; whether the other ones survive as unobservable branches or collapse to zero cannot be decided experimentally)

6. Dec 8, 2013

### stevendaryl

Staff Emeritus
The mathematics of tracing out the environmental degrees of freedom does not depend on any assumptions about the interpretation of the wavefunction. But there is a part that seems to require a probabilistic argument of some sort, and that is irreversibility. Both the decoherence process and the measurement process are assumed to be irreversible. But doesn't irreversibility depend on some kind of law of large numbers, which depends on a notion of probability?

7. Dec 8, 2013

### Staff: Mentor

I think you had better give the details of that one. I haven't seen it in any of the models.

But even if true it makes zero difference. Gleason's theorem doesn't require that - simply basis independence.

Thanks
Bill

8. Dec 8, 2013

### tom.stoer

But Gleason's theorem is irrelevant in this context.

It states that the only possible probability measure is tr(Pρ), but it doesn't explain why there should be a probability at all. So if we want to introduce probability into QM (formulated on a Hilbert space), then the probability measure must be tr(Pρ) for the subspace described by P. But it does not explain why we should introduce any probability at all, given that we have a deterministic, unitary time evolution.
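For concreteness, here is what the Gleason measure $\mathrm{tr}(P\rho)$ looks like in a minimal example (my own, with an arbitrarily chosen pure state): for projectors onto basis states it reproduces the Born weights $|a_n|^2$ and sums to 1.

```python
import numpy as np

# a pure state with amplitudes (1, i, 2)/sqrt(6) and its density matrix
psi = np.array([1.0, 1.0j, 2.0]) / np.sqrt(6)
rho = np.outer(psi, psi.conj())

probs = []
for n in range(3):
    e = np.zeros(3)
    e[n] = 1.0
    P = np.outer(e, e)                      # projector onto basis state |n>
    probs.append(np.real(np.trace(P @ rho)))

print(probs)        # [1/6, 1/6, 2/3] -- exactly the Born weights |a_n|^2
print(sum(probs))   # 1.0
```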

9. Dec 8, 2013

### Staff: Mentor

Gleason's theorem has nothing to do with dynamics - its only assumption is that the measure can't depend on the basis. You wouldn't really use a vector space formalism if it did, would you? It's very natural - so natural that it actually took a little while after Gleason proved it to disentangle its physical basis (it's non-contextuality, which is how DBB, for example, evades it). But again, if the state, which is the fundamental thing in Many Worlds, is an element of a vector space, it would be unreasonable to think the measure depends on a basis - a basis is a purely arbitrary man-made thing, and laws of nature shouldn't depend on that. DBB gets around it by making the pilot wave and the actual particle the fundamental things - not the state.

Thanks
Bill

Last edited: Dec 8, 2013
10. Dec 8, 2013

### MHuiq

The reason I included Gleason's theorem in the dynamics was the following. Wallace writes, in response to "What makes the perturbations that are small in Hilbert-space norm 'slight', if it's not the probability interpretation of them?":

"Small changes in the energy eigenvalues of the Hamiltonian, in particular, lead to small changes in quantum state after some period of evolution.[...] Ultimately, the Hilbert-space norm is just a natural measure of state perturbations in Hilbert-space."

I still don't completely see how he gets out of the circular argument, since one can ask what makes the 'small changes' in the quantum state small. So I hoped that, because of Gleason's theorem, any interpretation of whether changes are small or big would have to use the Born rule, and that this would settle that small changes are physically small under any interpretation of the amplitudes of the states. But I would agree that is a bit far-fetched.

@Tom.Stoer Thanks for the article, I will look into that

Last edited: Dec 8, 2013
11. Dec 8, 2013

### Staff: Mentor

In MWI we want a measure giving the confidence that we are in a particular world - it's not a probability, but rather a Bayesian 'number' expressing that confidence.

MWI is a deterministic theory, and the assumption is that we do not have sufficient information to determine which world we are in. It's a subtle, but very important, difference. Because of that we use the Bayesian view of hypothesis testing, in which probability is a derived concept.

This was all discussed in a long thread:

No use going over it again. I am with MFB in that thread - it would simply be repeating the same thing over and over.

Thanks
Bill

12. Dec 8, 2013

### Staff: Mentor

My suggestion after reading that book is: don't worry about Wallace's derivation - stick with Gleason. I suspect the reasonableness assumptions his derivation makes are really equivalent to basis independence anyway - at least that's what I thought after going through one of his answers to objections. But even if that isn't the case, there is zero doubt he is making some reasonableness assumptions, and I can't see how they are any better or worse than basis independence.

Thanks
Bill

13. Dec 8, 2013

### stevendaryl

Staff Emeritus
Irreversibility to me seems an essential part of getting a classical world out of quantum amplitudes, because we see definite values for macroscopic variables. Irreversibility is responsible for the transition from a pure state (with a superposition of alternatives) to an apparently mixed state (where apparently one alternative is "chosen").

If a system is simple enough that everything is reversible, then collapse doesn't come into play, and neither do probabilities. The interpretation of the squared modulus of the wave function as a probability only comes into play when the subsystem interacts with a larger system in an irreversible way.

14. Dec 8, 2013

### tom.stoer

Could you please do me a favor and write down explicitly which hypothesis we shall test and which number we shall assign in the MWI context?

15. Dec 8, 2013

### Staff: Mentor

It's obvious.

The hypothesis is which world you are in.

The number giving the confidence comes via Gleason's theorem, Wallace's argument, or even simply accepting the trace formula as an axiom.

We are just rehashing that thread again. I seem to recall you never got it. That being the case, I am not going to go through it again. The OP can go through that thread and form his own view.

Thanks
Bill

16. Dec 8, 2013

### Staff: Mentor

I don't think it has anything to do with decoherence. Sure, decoherence is generally not reversible statistically, and that is responsible for the arrow of time - but it has nothing to do with the math of the emergence of an improper mixed state - at least I have never seen irreversibility as an assumption.

Thanks
Bill

17. Dec 8, 2013

### stevendaryl

Staff Emeritus
Okay, I'll ask you: is there EVER an example of a transition from pure state to mixed state that does not involve an irreversible process? If it's not irreversible, then the reverse--from mixed state to pure state--could happen just as easily. So the interpretation of the emergence of mixed states as an effective "collapse of the wave function" makes no sense for reversible processes.

Mathematically, mixed states arise by tracing out some of the degrees of freedom, and you can always do that. But that tracing out has no physical significance without something like an irreversible change.

18. Dec 8, 2013

### stevendaryl

Staff Emeritus
I actually didn't think I was saying anything controversial. I thought it was well known that measurement processes and decoherence due to interaction with the environment both involve irreversibility. The Wikipedia article about decoherence contains a sentence to that effect:

http://en.wikipedia.org/wiki/Quantum_decoherence

Of course, the author of the Wikipedia article might be as confused as I am, but I'm just quoting from Wikipedia as a way of showing that I'm not the only one under this misconception, if it is a misconception.

I found another article that makes the same claim--that there is a connection between decoherence and irreversibility:
http://arxiv.org/pdf/quant-ph/0106006v1.pdf

19. Dec 8, 2013

### atyy

Just a note that the simplest and most common way of deriving a meaning for the improper mixed state in a reduced density matrix is to assume the system and environment together are in a pure state and together obey the Born rule. To get the reduced density matrix, one assumes that the observable is a local observable on the system "times" the identity on the environment. Applying the Born rule to the pure state automatically traces out the environment for such local observables. This works no matter how small the environment is.
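This can be checked directly in a toy model (my own construction: a qubit system paired with a three-level environment, all values random): the Born-rule expectation of $A \otimes I$ in the joint pure state equals $\mathrm{tr}(\rho_S A)$, where $\rho_S$ is the partial trace over the environment.

```python
import numpy as np

rng = np.random.default_rng(2)
dS, dE = 2, 3
psi = rng.normal(size=dS * dE) + 1j * rng.normal(size=dS * dE)
psi /= np.linalg.norm(psi)                  # entangled system+environment state

# reduced density matrix: partial trace over the environment index e
rho_S = np.einsum('ie,je->ij', psi.reshape(dS, dE), psi.conj().reshape(dS, dE))

# a local observable on the system, extended by the identity on the environment
a = rng.normal(size=(dS, dS))
A = (a + a.T) / 2
A_full = np.kron(A, np.eye(dE))

lhs = np.real(psi.conj() @ A_full @ psi)    # Born-rule expectation, pure state
rhs = np.real(np.trace(rho_S @ A))          # expectation from the reduced rho
print(np.isclose(lhs, rhs))                 # True
```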

20. Dec 8, 2013

### Staff: Mentor

The issue isn't that it involves irreversibility, the issue is - does it involve the Born Rule.

http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

Have a look at section 1.2.3 on proper and improper mixtures.

System 3 considers an entangled system of A and B. Removing B from consideration by tracing over the environment (i.e. system B) gives equation 1.23, which is an improper mixed state. No Born rule invoked yet. If we had a huge number of systems and traced over that environment instead of just system B, it would be the same - but un-entangling such a large number of systems is practically impossible - that's why it's irreversible - and the Born rule is still not invoked. What the concern may be is justifying that tracing over the environment is the right thing to do - that indeed does require the Born rule - but at this point it's simply a procedure for removing system B.
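The tracing-out step being described is pure linear algebra. A minimal sketch (my own, with amplitudes 0.6 and 0.8 chosen arbitrarily) of how an entangled pure state yields an improper mixed state for the subsystem, with no Born rule invoked:

```python
import numpy as np

# entangled pure state a|00> + b|11> of system A and environment B
a, b = 0.6, 0.8
psi = np.zeros(4, dtype=complex)
psi[0], psi[3] = a, b                       # basis order |00>, |01>, |10>, |11>
rho = np.outer(psi, psi.conj())             # pure-state density matrix

# trace out B: rho_A[i, j] = sum_e rho[(i, e), (j, e)]
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_A.real, 2))   # diag(0.36, 0.64): the improper mixed state
```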

Now we need to interpret the improper mixed state - that requires the Born rule - and indeed understanding why you trace over the environment does as well, but I am including that in the interpretation step. What it shows is that the $p_i$ of the improper mixed state $\sum_i p_i |b_i\rangle\langle b_i|$ is the probability of the system being in $|b_i\rangle\langle b_i|$.

The MWI interprets each $|b_i\rangle\langle b_i|$ as a separate world. The issue is which world will be experienced - MWI does not determine that. The best you can do is try to assign some kind of objective confidence to it. You can simply assume the Born rule to give a confidence - not yet a probability. But you can do better: you can use Gleason's theorem to derive it from basis independence, or the argument of Wallace (which I believe really invokes the same assumption, but it's not something I am particularly motivated to pursue - it was simply something that struck me when I went through Wallace's text). Take your pick - it's not really critical. Then, using Bayesian hypothesis testing, you can derive long-term averages and hence probabilities.

Thanks
Bill

Last edited: Dec 8, 2013