# Many Worlds, a discussion.

1. Aug 18, 2004

### vanesch

Staff Emeritus
There's an interesting text on Many Worlds,
http://www.anthropic-principle.com/preprints/manyworlds.html
and I'd like to discuss it.

There are two aspects of many worlds that I do not fully grasp. The first, more technical one is how the correct probabilities emerge in MW. It is related to the second, more philosophical one; however, let me concentrate on the first, because I think it is a technical misunderstanding on my part.

Let me first state what I understand of MW. It is in fact "normal" quantum mechanics, with the extra discovery that "macroscopic" or "irreversible" interactions give rise to decoherence, in the following way:

Consider a (microscopic) system s in the lab of an experimenter t; s is in a quantum superposition, and t is going to perform a measurement on it. R is the rest of the world.
Before, we have:
(a |s1> + b |s2> + c |s3>) x |t0> x |R> (1)

and after that subsystems s and t have irreversibly interacted, we obtain a state like:
a |s1> x |t1> x |R> + b |s2> x |t2> x |R> + c |s3> x |t3> x |R> (2)

and the 3 terms are called "different worlds".
This is nice, no problem, a priori. It is now assumed that each of these individual terms will never notice anything anymore from its neighbours, so let us concentrate on the term:
b |s2> x |t2> x |R> = |u0>
Imagine now that the state |u0> can be written as (d |k1>+ e|k2>)x|v>x|S>.
It is clear what happened: in another lab, a microscopic system k was prepared in a superposition, and v, the scientist in that lab, is also going to perform a measurement. S represents the rest of the world, including the first laboratory, s and t.

After measurement in the second lab, we'll have that:
|u0> = d |k1> x |v1> x |S> + e |k2> x |v2> x |S> (3)
Let us call the first term |uu0>; it is a new "world", just as the second term is.

In the other "u" states, maybe this experiment didn't take place.

Anyway, ALL the vectors in Hilbert space representing different worlds are constantly getting shorter (for instance, |u0> had norm |b|, while |uu0> has norm |b| x |d|). This is understandable, because their sum is the "universal state", which keeps length 1, and there are more and more "decohered" worlds as time evolves.
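As a quick numerical illustration of this shrinking (a sketch with made-up amplitudes, chosen only so that each split is normalized): the norm of a branch is the product of the amplitudes along its history, while the squared norms of all the final branches still sum to 1.

```python
# Made-up amplitudes for the two successive splits described above.
a, b, c = 0.6, 0.48, 0.64   # first split: a^2 + b^2 + c^2 = 1
d, e = 0.8, 0.6             # second split inside the "b" world: d^2 + e^2 = 1

norm_u0 = abs(b)            # norm of the |u0> world
norm_uu0 = abs(b) * abs(d)  # norm of the finer |uu0> world
print(norm_u0, norm_uu0)    # each branch is shorter than its parent

# The "universal state" keeps norm 1: the squared norms of the final
# ("leaf") worlds sum to 1.
total = a**2 + (b * d) ** 2 + (b * e) ** 2 + c**2
print(total)
```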

If I understand well, my mind is being cloned all the time, but each time there seems to be one copy that is "me". Now, my difficulty is: how can that "me" experience the right probabilities if it is not postulated that the "true me" follows a random path between worlds dictated by the squared amplitudes of the individual worlds? If I can only talk about the "true me" with hindsight, because all "me"s are equivalent, then I have a problem.

Let me explain: there's a copy of "me" in each of the terms of (2). I do this experiment several times, and I can label each of the individual final terms by the successive measurement results that lead to it. Take the term 121112312, for instance, which means that the result of the first experiment was 1, the result of the second was 2, the result of the third was 1, etc.
There will be exactly one final state corresponding to each possible sequence.
If at each "split" my mind splits equally into 3 clones (1, 2 and 3), the big majority of minds ending up after many measurements (the big majority of the sequences above) will find measurements 1, 2 and 3 in roughly equal numbers, i.e. with frequencies 1/3, 1/3 and 1/3. But we shouldn't find that! We should find a^2, b^2 and c^2! How do those numbers get into the history of most minds? True, the vectors those minds sit in are longer, but as they are decohering all the time, the individual lengths don't matter; they are independently evolving.
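The counting argument above can be checked by brute force (a sketch; the Born weights 0.7, 0.2, 0.1 and the run length are made up): counting every branch equally, outcome 1 shows up with frequency 1/3, while weighting each branch by its squared amplitude recovers the Born value.

```python
from itertools import product

# Made-up, non-uniform Born weights for the three outcomes.
p = {1: 0.70, 2: 0.20, 3: 0.10}
n = 8  # repeated measurements; 3^8 = 6561 branches, small enough to enumerate

unweighted = 0.0  # mean frequency of outcome 1, one equal vote per branch
weighted = 0.0    # same mean, but each branch weighted by its Born weight
for seq in product((1, 2, 3), repeat=n):
    freq1 = seq.count(1) / n
    born = 1.0
    for r in seq:
        born *= p[r]          # squared amplitude of this branch
    unweighted += freq1
    weighted += born * freq1

unweighted /= 3 ** n          # average over all branches

print(round(unweighted, 3))   # 0.333 -- branch counting sees 1/3
print(round(weighted, 3))     # 0.7   -- Born weighting sees the amplitude squared
```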

So where is my mistake? This is something that has been bothering me about the way I see many worlds for a long time, and nobody seems to be able to answer the question.

cheers,
Patrick.

2. Aug 18, 2004

### ZapperZ

Staff Emeritus
I am only in possession of a rather superficial knowledge of the MW interpretation. However, I will contribute these two links, which may be quite relevant, especially the 2nd one, which casts doubt on this interpretation.

[Hey, if I don't throw a wrench into one of these, you people will think I'm not well or something! :) ]

http://arxiv.org/abs/astro-ph/0302131
http://xxx.lanl.gov/abs/gr-qc/9703089

Zz.

3. Aug 18, 2004

### jcsd

Apparently probabilities don't emerge in the MWI, which is said to be its biggest problem - i.e. probability no longer has any actual 'physical meaning', even though we know it does.

4. Aug 18, 2004

### Ontoplankton

This paper may be relevant:

http://philsci-archive.pitt.edu/archive/00001742/

as well as earlier papers on quantum mechanics and decision theory, like this one.

(If I recall correctly, the first paper discusses somewhere why assigning uniform probabilities on worlds doesn't work; and that the Born probability assignment, while not the only possible one, is the least arbitrary.)

(edit: the argument why you can't assign uniform probabilities turns out to be in sections 5.2 and 5.3 of the first paper, but these may not be comprehensible for those who haven't read the rest.)

5. Aug 18, 2004

### Ontoplankton

I recommend looking at http://philsci-archive.pitt.edu/perl/user_eprints?userid=143 [Broken] as well, by the way. I think they give a clear and coherent explanation/defense of MWI. The version of MWI advocated in the FAQ is essentially the same, I think, though less detailed.

6. Aug 18, 2004

### vanesch

Staff Emeritus
All right, thanks for all the pointers. I'll tell you what I make of it at the moment: I was more or less convinced that MWI was "weird but all right"; I had bumped into this counting-probability versus Hilbert-norm-probability difficulty long ago, but was told that "if you work out the details then everything comes out OK" and simply believed it. But I could never find this detailed account.
Ok, so now I think I'll put MWI in the dustbin for the moment :surprise:
What I thought was a technical misunderstanding on my side is a serious flaw! I do not buy into these "rational decision" Bayesian probability definitions, honestly. Maybe I didn't understand exactly what it was about, but to me, two points remain:

For a finite number (say 100) of identical experiments (with prepared state 0.99 |A> + 0.14 |B>, say, so that the Born weights are about 0.98 and 0.02), the vast majority of worlds (2^100, or about 10^30 in total) has an accumulated record of results close to 50 cases of A and 50 cases of B, while the frequentist interpretation should give us something around 98 cases of A and 2 cases of B. I know there is a circular argument in the frequentist definition of probability, but hey, we're not talking about very small deviations here! In MOST of the worlds, the recorded sequence of experimental results is simply VERY HIGHLY IMPROBABLE according to all possible statistical tests one could apply to it.
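These orders of magnitude can be checked with the binomial distribution (a sketch, using Born weight 0.98 for A as in the example above): by sheer counting, almost all branches show a near-50/50 record, yet those branches carry an astronomically small total Born weight.

```python
from math import comb

n = 100
pA = 0.98  # Born weight of outcome A in the example above

# Fraction of the 2^100 branches whose record shows 40..60 cases of A.
near_half = sum(comb(n, k) for k in range(40, 61))
fraction_of_branches = near_half / 2**n

# Total Born weight carried by those same branches.
born_weight = sum(comb(n, k) * pA**k * (1 - pA) ** (n - k) for k in range(40, 61))

print(fraction_of_branches)  # ~0.96: almost all branches look roughly 50/50
print(born_weight)           # vanishingly small: they carry almost no Born weight
```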

The argument that the total Hilbert length of all these numerous state vectors is much smaller than the Hilbert length of the very few states with statistically "acceptable" histories (DeWitt's proof) is totally irrelevant, because we haven't attached any interpretation to "Hilbert length". The only Hilbert length that matters in MWI is whether it is strictly 0 or not. Moreover, the individual "world" vectors are continuously shrinking, so what counts as "so small we should neglect it" today will count as "a very big state vector" by the end of the week. But, as I said, NO interpretation is attached to the Hilbert length in MWI.
It is only when we introduce a probability postulate based upon Hilbert length that we can neglect the small contributions. Introducing one is de facto introducing a probability postulate, and doing away with MWI in the first place. In fact, this proof of correct probabilities for long state vectors is more an argument AGAINST MWI than in favor of it. Indeed, the argument says that whenever we apply the measurement postulate (which is the antithesis of MWI), we get out, with very high probability, the world which has correct probabilities in its history!

I didn't follow the entire argument about the decision-theoretic definition of probability. But I think it is irrelevant. The point is not that we want to *predict* outcomes and who we will be after the next split. The point is that *we have done* a certain number of experiments, that we have a record of them, and that we want to confront the frequencies in that record with the calculated Born probabilities, BECAUSE THAT IS EXACTLY WHAT WE DO IN THE LAB. This has been waved away as "ugly frequentist stuff", but I'm sorry, that's how things are done in the lab, and that's where most of quantum mechanics got its empirical confirmation from.

I'm probably being "naive" by making such bold statements about "subtle reasoning schemes" of which I admit not having understood everything. But I have the conviction that if the idea were not messed up, then by now there would be a much clearer explanation!

It's funny, I considered myself rather in favor of MWI (except my small technical problem), and I think that now I cannot buy into it anymore.
Of course some things remain, for instance DeWitt's theorem, which indicates that you can have "many worlds" (though in this context it is usually called simply a superposition of states) of measurement systems of measurement systems of quantum systems UNTIL YOU DECIDE TO LOOK YOURSELF. If, at that point, you consider that you project out the state, then you will have recorded histories in your measurement systems that are fully compatible with the Born rule, as if THEY had projected out the state.
But we knew that already; it is decoherence!

cheers,
Patrick.

(and thanks again for all the pointers!)

7. Aug 18, 2004

### Ontoplankton

Did you read sections 5.2 and 5.3 in the first paper I linked? Wallace's 2003 paper has more on this, too.

As I understand it, the argument is that in the messy world of statistical mechanics, there is no well-defined number of worlds; it depends heavily on small changes in your choice of basis. This would mean that there is no such thing as a uniform probability assignment on all worlds; such a thing would be incoherent. The notion of "most" worlds would stop making sense, too. (In the version of MWI these people are talking about, a "world" is just an imprecisely defined structure rather than something written directly in the formalism.)

The intuition I have here (it may be wrong) is that of a block of stuff that you cut into slices; while you can unambiguously point at the thickness of a piece of the block, you can't point at the number of slices it contains, because there's nothing to stop you from calling the top half and the bottom half of a slice each a slice of its own. (This probably doesn't correspond well to how it really is; please read the sections of the paper instead.)

But Deutsch has proven that you have to equate probability with Hilbert length, *if* you accept a few axioms; I think all but one of these axioms are uncontroversial, and whether the last one is controversial depends on whether you take what they call the "subjective indeterminism" or "objective determinism" point of view: either way it's very reasonable.

Not really, as I see it; in the view of the first paper I linked, it's not really a probability postulate but a decision about how large an influence each branch has on your utility function (how much you "care").

I don't think this matters. A prediction of outcomes based on MWI translates directly into a test of whether MWI is compatible with past experiments. If we choose such axioms for our decision-theoretical preferences that we would have predicted frequencies close to Born, then the past observation of frequencies close to Born does not refute MWI. If, on the other hand, we choose such axioms for our decision-theoretical preferences that we would have predicted frequencies that deviate far from Born, then MWI has been refuted by past experiments. The papers I linked argue that it's reasonable to choose axioms of the first kind, the kind where it's rational to predict (retrodict) that frequencies conform to the Born rule.

I don't know---probabilities get really strange when you mix processes of yourself splitting and subjective uncertainty about the future. Also, it looks to me like quantum statistical mechanics is a complicated subject in itself (very hard to visualize at the foundations).

Hoping this hasn't also turned everyone else off MWI. :)

8. Aug 18, 2004

### Olias

Try searching for J. B. Hartle; he has a number of papers, all dealing with interpretations of what you seek:

http://uk.arxiv.org/abs/quant-ph?0401108

is his latest paper. If you just click on his name (blue text), you will find a myriad of fine interpretations/worlds!

9. Aug 19, 2004

### vanesch

Staff Emeritus
Don't worry, I might come back when I understand more of it :-)
But what I understand of it right now doesn't make sense.

cheers,
Patrick.

10. Aug 19, 2004

### vanesch

Staff Emeritus
All right, I've studied the paper:

http://philsci-archive.pitt.edu/archive/00001030/00/decshortarx.pdf [Broken],
at least the proof of Deutsch, and I think I see where the "trick" is.

The trick is made clear in stage 3 of the proof, on page 15.

Let us reduce the possibilities a bit, for I don't need all the generality. Let us assume that |psi> lives in an n-dimensional Hilbert space, and that X is an observable with n distinct eigenvalues x_i. Let P be a set of n real numbers c_i.
What Deutsch tries to prove is that, given a triple {psi, X, P}, we can define one and only one function V that satisfies the properties cited (dominance, substitutivity, etc.), and that then V = sum_i |<x_i|psi>|^2 c_i.
From the properties V has to satisfy, it is clear that any "probabilities" p_i one could reasonably assign to the set of eigenvalues would build up a V = sum_i p_i c_i. I don't have too many difficulties with this, so I can accept that whatever "probability rule" one could possibly invent, it should give us a V of the form above. So indeed, taking Deutsch's theorem at face value: if he can prove that the ONLY possible values of p_i are those given by the Born rule, then there's no discussion about it; the only consistent way of assigning probabilities to a state psi and an observable X is the Born rule.

But what, then, about the ORIGINAL Everett interpretation?
Everett assigned "equal" probabilities to each possible outcome. So, originally, we should be able to set p_i = 1/n. But that is against Deutsch's theorem. So what gives?
Well, if you look at the proof of step 3, he sneaks in the Hilbert length of the components by using the unitary equivalence (ME). How does he do this?
In the proof of step 3, our n is set equal to 2. He builds a unitary correspondence between the 2-dimensional original Hilbert space and a much bigger one, assigning to the image of lambda_1 as many dimensions as needed to account for the weight |a1|^2, and to lambda_2 as many as needed for |a2|^2: if |a1|^2 = m1/N and |a2|^2 = m2/N, then lambda_1 gets m1 of the N dimensions and lambda_2 gets m2. So it is as if an Everett world of weight |a1|^2 could be considered as split into m1 "equivalent" worlds, and the one of weight |a2|^2 identically split into m2 worlds; after assigning "equal probabilities" to these split worlds, we recover the Born rule.
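This fine-graining move can be restated as a toy calculation (a sketch with made-up rational weights 2/5 and 3/5): once each world of weight m_i/N is replaced by m_i equal sub-worlds, naive equal counting over the sub-worlds reproduces the Born weights.

```python
from fractions import Fraction

# Made-up rational Born weights |a1|^2 = 2/5 and |a2|^2 = 3/5.
w1, w2 = Fraction(2, 5), Fraction(3, 5)
N = 5  # common denominator: dimension of the bigger Hilbert space

# The "unitary" embedding replaces the branch of weight m_i/N by m_i
# equal-amplitude sub-branches, N sub-branches in total.
fine_grained = ["outcome 1"] * int(w1 * N) + ["outcome 2"] * int(w2 * N)

# Everett-style equal counting over the fine-grained worlds gives Born back:
p1 = fine_grained.count("outcome 1") / len(fine_grained)
print(p1)  # 0.4, i.e. exactly |a1|^2
```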

So Deutsch's theorem is a direct consequence of ME (measurement equivalence), once one allows "unitary" transformations that map between Hilbert spaces of different dimensions.

And now we understand, of course, how he could "sneak in" the Born rule:
Why do we consider unitary transformations in quantum mechanics as "equivalence transformations"? BECAUSE THEY LEAVE THE INNER PRODUCT INVARIANT, and hence the probabilities as normally calculated. In other words, because they LEAVE THE BORN RULE INVARIANT.
So one shouldn't be surprised, after all, that when one REQUIRES equivalence under unitary transformations, one FINDS THAT THE ONLY CONSISTENT INVARIANT PROBABILITY IS THE BORN RULE.

cheers,
Patrick.

11. Aug 19, 2004

### danitaber

I need some help, here.

OK, I had a great question and then I leaned on the keyboard to get my coffee and it was gone. So I'm going to reformulate it and hope it's even better this time:

I'm about to make a "gut" statement: The above quote feels like cheating in this context.

It seems to me (and this is purely speculative, since I got lost in the papers and realized I need to go back and re-learn about ten years of math, starting with linear algebra, LOL) that in MWI one could select, weigh, distribute, "frequencize", or whatever else, the other histories any way one feels like. Maybe the Born rule is reflected in our experiments as a fluke. I don't like this not-assigning-frequencies-or-probabilities thing, and I really wish I had the mathematical formalism to state it a little more assertively, but I don't. Could someone explain (in a close approximation of English), and without assigning arbitrary algorithms for "choice" (someone's gonna jump on me for that), how an MWI guy could refute my statement that it is only in this history that we have experienced our results, and that in every other history the results are different?

I know, I'm throwing my ignorance out for all to see, but I'd like a stepping stone to learning more about this while I catch up on some long overdue mathematical review.

12. Aug 19, 2004

### vanesch

Staff Emeritus
The more I think about it, the clearer it is that the entire content of Deutsch's theorem rests on the requirement that probabilities be equivalent under "unitary" transformations between Hilbert spaces of different dimension (a stretched concept of unitarity, since such an operator cannot be inverted). If it were not for this requirement, then the "equal probability rule", which sets p_i = 1/n with n the number of eigenvalues of X, would also satisfy all the axioms proposed by Deutsch (even under a unitary transformation, as long as the dimensionality, which is the number of degrees of freedom, remains the same). It is by artificially requiring that a change of dimensionality conserve the probabilities that he eliminates 1/n (clearly, 1/n cannot equal 1/N!) and installs the Born rule as the unique survivor (exactly because it is the only quantity conserved under such equivalences).

So we can do away with it, and simply POSTULATE that we should use the Born rule.

cheers,
Patrick.