Question regarding the Many-Worlds interpretation

  • #151
I am still not convinced that the Born rule is sufficient. It misses what I called the "bottom-up" perspective.

The Born rule says that
- results of a measurement of an observable A will always be one of its eigenvalues a
- the probability for the measurement of a in an arbitrary state psi is given by a projection to the eigenstate

##p(a) = \langle\psi|P_a|\psi\rangle##

This is a probability formulated on the full Hilbert space.
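To make the projection rule concrete, here is a minimal numerical sketch (the two-level state and observable are invented purely for illustration):

```python
import numpy as np

# Toy two-level system, invented for illustration:
# observable A with eigenvalues a = +1 and a = -1,
# eigenvectors |+> = (1, 0) and |-> = (0, 1).
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)])  # normalized state
P_plus = np.outer([1, 0], [1, 0])             # projector onto the a = +1 eigenspace

# Born rule: p(a) = <psi| P_a |psi>
p_plus = float(np.real(psi.conj() @ P_plus @ psi))
print(p_plus)  # 0.7 (up to floating point)
```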

I still do not see how this answers the question:

What is the probability p(Ba) that I find myself as an observer in a certain branch Ba where a is realized as the measurement result?

One could reformulate the problem as follows: The Born rule says that the probability to find a is p(a). What I am asking for is the probability to find a, provided that I am in a certain branch Ba where a is realized (that conditional expectation is 100%, so we need some kind of Bayesian argument to extract the probability p(Ba) for the branch itself).

I would like to see a mathematical expression based on the MWI assumptions which answers this question.

The Born rule as stated above is formulated on the full Hilbert space and therefore provides a top-down perspective, but I as an observer within one branch do have a bottom-up perspective. I still don't see why these two probabilities are identical and how this can be proven to be a result of the formalism. There is some additional (hidden) assumption.
 
  • #152
tom.stoer said:
The Born rule says that
- results of a measurement of an observable A will always be one of its eigenvalues a
- the probability for the measurement of a in an arbitrary state psi is given by a projection to the eigenstate

That's the 'kiddy' version. The real version is: given an observable O, there exists a positive operator P of unit trace such that the expected value of O is Tr(PO). P is defined as the state of the system. States of the form |u><u| are called pure states and are more or less what you are talking about above. However it implies more than what you said above.

This is actually quite important in MWI because a collapse actually never occurs - instead it interprets the pure states of an improper mixed state Σ pi |ui><ui| after decoherence as separate worlds. Interpreting the pi as probabilities of an observer 'experiencing' a particular world is what the Born rule is used for in the MWI and that requires its full version.
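As a toy numerical check of the trace form (all numbers invented for illustration), a pure state and its density operator give the same expectation value, and the decohered diagonal mixture reproduces it for a diagonal observable:

```python
import numpy as np

O = np.array([[1, 0], [0, -1]])            # example observable (Pauli-Z)
u = np.array([np.sqrt(0.7), np.sqrt(0.3)])
rho_pure = np.outer(u, u.conj())           # pure state |u><u|

# Full Born rule: <O> = Tr(rho O); for |u><u| this reduces to <u|O|u>
print(np.trace(rho_pure @ O))              # ~0.4
print(u.conj() @ O @ u)                    # ~0.4

# Improper mixture after decoherence: sum_i p_i |u_i><u_i|
p = [0.7, 0.3]
rho_mixed = p[0] * np.diag([1.0, 0.0]) + p[1] * np.diag([0.0, 1.0])
print(np.trace(rho_mixed @ O))             # 0.7*(+1) + 0.3*(-1) = ~0.4
```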

Thanks
Bill
 
  • #153
bhobba said:
That's the 'kiddy' version.
Nobody is able to answer, but it's kiddy; strange conclusion.

bhobba said:
The real version is: given an observable O, there exists a positive operator P of unit trace such that the expected value of O is Tr(PO). P is defined as the state of the system. States of the form |u><u| are called pure states and are more or less what you are talking about above. However it implies more than what you said above.

This is actually quite important in MWI because a collapse actually never occurs - instead it interprets the pure states of an improper mixed state Σ pi |ui><ui| after decoherence as separate worlds.
I know this, but that's not my question.

bhobba said:
Interpreting the pi as probabilities of an observer 'experiencing' a particular world is what the Born rule is used for in the MWI and that requires its full version.
So it's an interpretation.

Sorry to say that, but that's not an answer to my question. I think you do not even get my question.

How can you justify that it is allowed to use the probabilities pi contained in the full state (which is the top-down perspective, not accessible to a single observer) as the probability observed (bottom-up) by an observer within one branch?

Top-down you derive the probability for a result of a measurement in full Hilbert space.
Bottom-up you find (as a single observer) that within your branch the same probabilities apply.
Why? What's the link?
 
  • #154
I found a good discussion, including many references, of the question I have been asking over the last few days

http://arxiv.org/abs/0712.0149
The Quantum Measurement Problem: State of Play
David Wallace
(Submitted on 3 Dec 2007)

In 4.6 Wallace discusses what he calls the "Quantitative Problem"

The Quantitative Problem of probability in the Everett interpretation is often posed as a paradox: the number of branches has nothing to do with the weight (i.e. modulus-squared of the amplitude) of each branch, and the only reasonable choice of probability is that each branch is equiprobable, so the probabilities in the Everett interpretation can have nothing to do with the Born rule.

...

As such, the ‘count-the-branches’ method for assigning probabilities is ill-defined. But if this dispels the paradox of objective probability, still a puzzle remains: why use the Born rule rather than any other probability rule?

...

But it has been recognised for almost as long that this account of probability courts circularity: the claim that a branch has very small weight cannot be equated with the claim that it is improbable, unless we assume that which we are trying to prove, namely that weight=probability.

...

The second strategy might be called primitivism: simply postulate that weight=probability. This strategy is explicitly defended by Saunders; it is implicit in Vaidman’s “Behaviour Principle”. It is open to the criticism of being unmotivated and even incoherent.

...

The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.

I hope this reference explains (from a different perspective and in a more reliable and sound manner) that there is a problem regarding probabilities and weights in the Many Worlds Interpretation.
 
  • #155
stevendaryl said:
So I have two comments about this analogy: First, the conclusion that the relative frequency of red balls should approach 70% isn't provable. It doesn't logically follow from the mere fact that 70% of the balls are red. You have to make some kind of "equally likely" assumption, which means that you're making some assumptions about probability.
Yes, that is what I mean when I say that the distribution provides a natural measure on the events, i.e. the drawing of balls from the hat. This is an assumption, yes, but it's a natural one to make; assuming something different would need some additional justification. And from this, you can derive probabilities from sequences of draws, thus showing that 100 reds is very unlikely.

However, if you start out with sequences, there's simply no analogy to this reasoning. Again, the reason is that the notion of an event, i.e. the drawing of a ball, doesn't make sense in the MWI. There, the natural measure would be to consider every history equally likely---as with the balls in the hat.

Basically, the problem you raise is a problem in the philosophy of probability as a whole; but provided there's a solution, it still seems to me that the MWI has some additional problem to answer.
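The tension between branch counting and Born weights can be made concrete with a toy enumeration (a hypothetical biased 'quantum coin' with Born weight 0.7, repeated N times; numbers invented for illustration): the uniform measure over histories peaks at 50% relative frequency, while the Born measure peaks at 70%:

```python
import numpy as np
from math import comb

N, w = 20, 0.7  # N draws; Born weight w for outcome 'red' (invented numbers)

# Total measure carried by histories containing k 'red' outcomes, k = 0..N
born_weight = [comb(N, k) * w**k * (1 - w)**(N - k) for k in range(N + 1)]
uniform = [comb(N, k) / 2**N for k in range(N + 1)]  # plain branch counting

# The two measures peak at different relative frequencies:
print(np.argmax(born_weight) / N)  # 0.7 -> Born statistics
print(np.argmax(uniform) / N)      # 0.5 -> naive equal-likelihood counting
```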
 
  • #156
tom.stoer said:
How can you justify that it is allowed to use the probabilities pi contained in the full state (which is the top-down perspective, not accessible to a single observer) as the probability observed (bottom-up) by an observer within one branch?

I have zero idea what you mean by bottom-up and top-down. The pi's are the pi's - that's it, that's all. Now, by the definition of a state as having unit trace, the pi's sum to one and are positive, suggesting they be interpreted as probabilities - not proving anything, but suggesting.

There are a number of arguments associating the pi with probabilities - I know of two:

1. A proof based on decision theory:
http://arxiv.org/abs/0906.2718

2. Envarience
http://arxiv.org/pdf/1301.7696v1.pdf

They however have been criticized as being circular. I personally don't think the decision theory one is - but rather relies on what one thinks of decision theory and rational behavior as a basis for probabilities being introduced in a deterministic theory. I suspect the envarience one is circular.

But a derivation via Gleason's theorem is possible if you assume non-contextuality.

To introduce probabilities you simply imagine a suitably large number of repetitions of the same observation which will give some expected value and hence associate the pi with probabilities. But again why is it you get probabilities from a deterministic theory? Or to put it another way since the wavefunction is split into a number of worlds why do the observers in those worlds not experience each world as equally likely but instead as a probability determined by the pi in the mixed state? That is rather weird in terms of a deterministic theory.

I have David Wallace's book and he argues its not an issue based on the option that was posted previously: 'The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.'

I am not personally convinced.

That's the real issue IMHO - why do you get probabilities in a deterministic theory?

I think that's what you may be getting at - or am I off the mark?

Thanks
Bill
 
  • #157
Bill, I think the problem has been explained several times.

S.Daedalus said:
This isn't available in the MWI, however. The reason is that the notion of an event doesn't make any sense anymore: A doesn't occur in exclusion to B, but rather, both occur. This makes the natural entities to associate probabilities with not events, but branches, or perhaps better histories, i.e. chains of observations; the sequence of values observed in elementary spin experiments, say. But there's no grounds on which one can argue that the likelihood of 'drawing' a history from all possible histories should be such that it is more likely to draw a history in which the relative frequencies are distributed according to the Born rule. If one were to associate a measure with histories at all, it seems that the only natural measure would be a uniform one---which would of course entail that you shouldn't expect to observe outcomes distributed according to the Born rule.

The proponent of many worlds is then, in my eyes, faced with justifying the use of a non-uniform measure on the set of histories, about which Gleason's theorem doesn't really say anything, it seems to me. Now of course, one can always stipulate that 'things just work out that way', but in my eyes, this would significantly lessen the attractiveness of MW-type approaches, making it ultimately as arbitrary as the collapse, at least.

S.Daedalus said:
Also, I'm not at all sure I see how Gleason's theorem is relevant to probability in the MWI. What it gives is a measure on the closed subspaces of Hilbert space; but what the MWI needs is to make sense of the notion of 'probability of finding yourself in a certain branch'. It's not obvious to me how the two are related. I mean, sloppily one might say that Gleason tells you the probability of a certain observable having a certain value, but there seems to me a gap here in concluding that this is necessarily the same probability as finding yourself in the branch in which it determinately has that value. I could easily imagine a case in which Gleason's theorem, as a piece of mathematics, were true, but probability of being in a certain branch follows simple branch-counting statistics, which won't in general agree with Born probabilities.

S.Daedalus said:
But in the MWI, you don't draw a ball to the exclusion of another; rather, you always draw both a red and a blue ball. The distribution of the balls in the hat has no bearing on this; it's just not relevant. What you get is all possible strings of the form 'bbrbrr...', i.e. all possible 'histories' of drawing blue or red balls. In only a fraction of those do you observe the statistics given by the distribution of the balls; furthermore, the distribution of the balls has nothing at all to say about the distribution of the strings. You then need an argument that for some reason, those in which the correct statistics hold are more likely than those in which they don't. That the original distribution is of no help here can also be seen by considering that there isn't just one measure that does the trick: you could for instance attach 100% probability to a history in which the frequencies are correct, or 50% to either of two, or even some percentage to incorrect distributions; the setting leaves that question wholly open. And so does the MWI.

The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.
Top-down: you derive the probability for a result of a measurement in the full Hilbert space.
Bottom-up I can ask: What is the probability that I find myself as a single observer in a certain branch where a is realized?
Why? What's the link?

I do not question here whether probabilities in agreement to Born's rule can be derived. I question whether these probabilities for a result of a measurement on full Hilbert space have anything to do with the probability to find myself in a certain branch.

I think you got it here
bhobba said:
Or to put it another way since the wavefunction is split into a number of worlds why do the observers in those worlds not experience each world as equally likely but instead as a probability determined by the pi in the mixed state? That is rather weird in terms of a deterministic theory.
 
  • #158
tom.stoer said:
I think you got it here

Great :thumbs::thumbs::thumbs::thumbs::thumbs:

That's exactly my concern - how does a deterministic theory accommodate probabilities?

I am not persuaded by Wallace's arguments in the book I am reading. That doesn't invalidate the interpretation, but it means it doesn't do what its adherents would like - be a totally deterministic theory.

Thanks
Bill
 
  • #159
bhobba said:
That's exactly my concern - how does a deterministic theory accommodate probabilities?
This may be an even deeper concern.

Mine is that we get a probability for a result of a measurement which is implicitly assumed to be valid for an observer in a specific branch. Of course a derivation of Born's rule is required in MWI, but it has a different meaning than in a collapse interpretation.

Anyway - my original idea was to start with branch counting, but I had to accept that this is impossible.
 
  • #160
Bill, it seems that we have identified the same two problems as Wallace in the aforementioned paper.

The Incoherence Problem: In a deterministic theory where we can have perfect knowledge of the details of the branching process, how can it even make sense to assign probabilities to outcomes?

The Quantitative Problem: Even if it does make sense to assign probabilities to outcomes, why should they be the probabilities given by the Born rule?
 
  • #161
tom.stoer said:
even deeper concern.

Maybe not so deep: in a block universe, there are no probabilities per se.
Likewise in MWI - everything happens, so no probabilities.
 
  • #162
tom.stoer said:
Bill, it seems that we have identified the same two problems as Wallace in the aforementioned paper.

I agree.

Interestingly, when I started my sojourn into the MWI, my concern was that this exponentially increasing branching just seems unbelievably extravagant. But after looking into it, my concern has now shifted.

Again it doesn't disprove it, or show it's inconsistent - but it doesn't do what its adherents (at least some anyway) would like.

Thanks
Bill
 
  • #163
audioloop said:
Maybe not so deep: in a block universe, there are no probabilities per se.
Likewise in MWI - everything happens, so no probabilities.



Even in that scenario the probabilities still exist in the calculations. Why would that be?
 
  • #164
tom.stoer said:
I am still not convinced that the Born rule is sufficient. It misses what I called the "bottom-up" perspective.
I don't think it makes sense to talk about probabilities from the top-down perspective. The only reason to introduce probabilities is that in experiments, we observe that we end up in a single branch with a probability according to the Born rule. Imagine a godlike top-down observer who just sees the evolution of the universal state (he isn't allowed to interact with it because this would lead to entanglement and thus make him a bottom-up observer). Why should he assign probabilities to the coefficients? He simply sees that there are now multiple observers which can't interact with each other.

tom.stoer said:
The Born rule says that
- results of a measurement of an observable A will always be one of its eigenvalues a
I think this is what decoherence explains. However, Jazzdude seemed to object.

tom.stoer said:
- the probability for the measurement of a in an arbitrary state psi is given by a projection to the eigenstate

##p(a) = \langle\psi|P_a|\psi\rangle##

This is a probability formulated on the full Hilbert space.
This isn't correct if Pa simply projects the system to an eigenstate and does nothing else. Your expression only gives the Born probability if the full state is a product state |ψ_system> ⊗ |ψ_rest>, which corresponds to a universe with only one branch.

In the general case, p(a) is a sum over all branches, so you have a sum of Born probabilities. You have to project on a specific branch to get the correct expression. Which branch? The single branch a specific observer perceives. So this is really the bottom up view.
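A toy sketch of that decomposition (branch weights and per-branch states invented for illustration): the full-Hilbert-space p(a) is a weighted sum of per-branch Born probabilities, while conditioning on one branch gives a different number:

```python
import numpy as np

# Two branches with invented weights; the system state differs per branch
w = np.array([0.6, 0.4])  # branch weights
branch_states = [
    np.array([1.0, 0.0]),                    # branch 1: system in |0>
    np.array([np.sqrt(0.5), np.sqrt(0.5)]),  # branch 2: a superposition
]
P0 = np.diag([1.0, 0.0])  # projector onto outcome a = 0

# Top-down: p(a) on the full state is the weighted sum over branches
p_full = sum(wi * float(np.real(s.conj() @ P0 @ s))
             for wi, s in zip(w, branch_states))
print(p_full)     # 0.6*1.0 + 0.4*0.5 = ~0.8

# Bottom-up: conditioned on being in branch 2, the same measurement gives
p_branch2 = float(np.real(branch_states[1].conj() @ P0 @ branch_states[1]))
print(p_branch2)  # ~0.5
```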
 
  • #165
I agree to nearly everything, except for
kith said:
The only reason to introduce probabilities is that in experiments, we observe that we end up in a single branch with a probability according to the Born rule.
Of course you are right; this is the MWI interpretation. But as I said a couple of times, it's unclear why the expectation value of an observable evaluated on the full Hilbert space and the probability to be within one single branch have anything to do with each other.

If there is a state like a|x> + b|y>, it's an interpretation that being in branch "x" has anything to do with |a|^2; you can't prove it.
 
  • #166
mfb said:
I said we can care about the rule. If you are looking for a rule (for whatever reason), the Born rule is the only reasonable one.
Why? In Copenhagen, it is a postulate about probability distributions. What is it in the MWI?
 
  • #167
tom.stoer said:
But as I said a couple of times, it's unclear why the expectation value of an observable evaluated on the full Hilbert space and the probability to be within one single branch have anything to do with each other.
I agree. My objection was against using the term "probability" wrt to the top down perspective.

It is also unclear to me how the connection could be made.
 
  • #168
Just flat out postulating the Born rule also has implications for the hypothesis. In Copenhagen it leads you to an ugly collapse which needs a physical explanation of sorts.

What the **** is MWI's ontological explanation supposed to be? God cuts the branches he doesn't like?
 
  • #169
kith said:
I said we can care about the rule. If you are looking for a rule (for whatever reason), the Born rule is the only reasonable one.
Why? In Copenhagen, it is a postulate about probability distributions. What is it in the MWI?
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
 
  • #170
mfb said:
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
Gleason's theorem states that the only possible probability measure assigned to a subspace with projector P (in a system with density operator ρ) is tr(Pρ).

But the theorem cannot explain why tr(Pρ) should be a probability for an observer to find himself in that subspace. This is an interpretation, and our discussion (including Wallace's paper) shows that it's controversial and not convincing to everybody. It is unclear why - in a deterministic theory - a probability should arise at all.
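For what it's worth, one can check numerically that the Gleason-form assignment tr(Pρ) at least behaves like a probability measure (a small sketch with a made-up density operator): over a complete set of orthogonal projectors the values are non-negative and sum to one - though, as argued above, this says nothing about why they should be an observer's branch probabilities:

```python
import numpy as np

# A made-up density operator on a 3-dimensional Hilbert space
rho = np.diag([0.5, 0.3, 0.2]).astype(complex)

# Rotate into a generic orthonormal basis to get nontrivial projectors
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)  # unitary Q; its columns form an orthonormal basis
projectors = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(3)]

probs = [float(np.real(np.trace(P @ rho))) for P in projectors]
print(probs)       # three non-negative numbers
print(sum(probs))  # 1.0 (up to floating point), since tr(rho) = 1
```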
 
  • #171
Probabilities in deterministic theories are just a matter of ignorance; I don't see how this is controversial by itself. Look at the Bohmian interpretation: we don't know which outcome is going to occur because we cannot measure the pilot wave itself, but I don't see how this is controversial.

*If* branch counting had worked for MWI, then there would be no problem with explaining why probability arises; it would simply be branch-location ignorance.
 
  • #172
tom.stoer: I don't see how your post is related to the specific question I answered.

It is unclear why - in a deterministic theory - a probability should arise at all.
To me, it is unclear why you are looking for (wanting?) probabilities, indeed.
 
  • #173
Quantumental said:
Just flat out postulating the Born rule also has implications for the hypothesis. In Copenhagen it leads you to an ugly collapse which needs a physical explanation of sorts. What the **** is MWI's ontological explanation supposed to be? God cuts the branches he doesn't like?

It's not flat out postulated in any interpretation where the Hilbert space formalism is fundamental, because of Gleason - unless of course you think non-contextuality is not reasonable in vector spaces; most would consider contextuality quite ugly.

Collapse is not ugly in any interpretation that considers the quantum state to be simply knowledge about a system, like probabilities are - any more than throwing a die collapses anything 'real' when its state vector goes from all entries 1/6 to one with a single entry of 1. Interpretations like that include Copenhagen and the Ensemble interpretation. In those interpretations it's simply an example of a generalized probability model, with nothing more mysterious going on than modelling something by probability.

What the issue is (with such interpretations) is people push against the idea that the world may be fundamentally probabilistic and want an underlying explanation for it. The problem lies in them - not the theory. Or to put it another way - the interpretation is fine - they just don't like it.

Thanks
Bill
 
  • #174
Quantumental said:
Probabilities in deterministic theories are just a matter of ignorance; I don't see how this is controversial by itself. Look at the Bohmian interpretation: we don't know which outcome is going to occur because we cannot measure the pilot wave itself, but I don't see how this is controversial. *If* branch counting had worked for MWI, then there would be no problem with explaining why probability arises; it would simply be branch-location ignorance.

In BM probabilities enter into it due to lack of knowledge about initial conditions. In MWI we have full knowledge of what it considers fundamental and real - the quantum state.

Thanks
Bill
 
  • #175
mfb said:
To me, it is unclear why you are looking for (wanting?) probabilities, indeed.

I am not quite following your point here.

The reason probabilities come into it is Born's rule, i.e. given an observable O, its expected value is Tr(OP), where P is the state of the system.

How can probabilities not be involved?

I agree there is debate over whether the experience of an observer requiring probabilities is an issue in MWI, and Wallace discusses it in his book, but I don't think there is any way of circumventing that probabilities are involved.

Thanks
Bill
 
  • #176
bhobba said:
In BM probabilities enter into it due to lack of knowledge about initial conditions. In MWI we have full knowledge of what it considers fundamental and real - the quantum state.

Thanks
Bill

Yes... exactly why it seems to be wrong.
 
  • #177
Quantumental said:
Yes... exactly why it seems to be wrong.

I am glad you used the word 'seems'. Wallace in his book argues it's not an issue.

I am simply not convinced by his arguments - but it is arguable.

Added Later:

I think that's what MFB is getting at - Wallace's argument is summed up on page 115 of his book:
'Mathematically, formally, the branching structure of the Everett interpretation is a stochastic dynamical theory. And nothing more needs to be said'.

Yea - the theory is as the theory is, so what's your beef. My beef is that in other stochastic dynamical deterministic theories we know where the 'stochastictisity' (is that a word?) comes from - here we don't.

BTW Wallace gives all sorts of reasons - that's just one. Some are quite subtle. For example, against the equal-probability rule he brings up the question of actually deciding what counts as an equal probability. We have an observation with two outcomes, so you would naturally say it's 50-50. On one of the outcomes you can do another observation with two outcomes, giving three outcomes in total - so what is it: 1/3 for each, or 1/2, 1/4 and 1/4? This boils down to the question of what an elementary observation is - a very subtle issue in QM.

It's tied in with one of the quirks of QM as a generalized probability model: in normal probability theory a pure state, when observed, always gives the same thing - a die, once thrown, always shows the same face - whereas in QM a pure state can be observed to give another pure state, which is itself tied up with having continuous transformations between pure states (as an aside, Hardy believes this is the distinguishing feature of QM). His arguments are full of stuff like that - disentangling them is no easy task. Basically, on some reasonableness assumptions he makes, the Born rule is the only way a rational agent can assign 'likelihoods' to outcomes.

Thanks
Bill
 
  • #179
Quantumental said:
Bhobba: I thought this had been dealt with by several people already?

It has been DEBATED by several people already. Like just about any issue of philosophy, proving it one way or the other is pretty much impossible.

Now without going through the papers you mention, which doesn't thrill me greatly, suffice to say that in the book I have, Wallace goes to great length, over quite a few chapters, discussing objections - even the old one about the frequentist interpretation of probability, which is pretty much as ancient as they come.

If you however would, in your own words, like to post a specific objection then I will happily give my view.

Overall I am not convinced by Wallace's arguments. For example merely saying a stochastic theory is - well stochastic and hence of zero concern strikes me as a cop-out of the first order. But that is not his only argument - like I say the issue is subtle and requires a lot of thought to disentangle.

I personally have no problem with the decision-theoretic derivation of the Born rule - its assumptions are quite reasonable. My issue is that trying to justify the likely outcomes of a deterministic theory on the basis of what a rational agent would require strikes me as not resolving the basic issue at all. Yes, it's a reasonable way to justify the Born rule, and so is Gleason's theorem for that matter, but it does not explain why a rational being, agent, or whatever would have to resort to decision theory in the first place, which assumes for some reason that you can't predict the outcome with certainty. Why does a deterministic theory have that feature in the first place - blank-out. Logically it's impossible to assign values of true and false on a Hilbert space - Gleason guarantees that - so you have probabilities built right into its foundations, and without some device like BM's pilot wave to create contextuality there is no escaping it. You are caught between the devil and the deep blue sea if you want a deterministic theory.

Again I want to emphasize this doesn't invalidate the interpretation or anything like that. It's very, very elegant and beautiful; it's just a question the interpretation doesn't answer. But then again all interpretations are like that - they have awkward questions they have difficulty with.

Thanks
Bill
 
  • #180
I would like to ask a different question which seems to be crucial for establishing the whole MWI program.

MWI as of today relies on decoherence. That means that the different branches in the representation of a state vector are defined via a preferred basis. These basis states should be
i) dynamically selected,
ii) "peaked" in classical phase space (as observed classically), and
iii) these branches should be stable w.r.t. time evolution (in the full Hilbert space)
In terms of density matrices this is mostly described as reduced density matrices becoming nearly diagonal statistical mixtures with (approximately) zero off-diagonal terms.

My question is to what extent (i)-(iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.
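As a toy illustration of the density-matrix statement above (a schematic sketch only; the two-level system and one-parameter 'environment' are invented for the example), tracing out the environment suppresses the off-diagonal terms exactly to the extent that the environment states become distinguishable:

```python
import numpy as np

def reduced_rho(eps):
    """System qubit a|0> + b|1> entangled with environment states e0, e1
    with overlap <e0|e1> = eps; return the system's reduced density matrix."""
    a, b = np.sqrt(0.7), np.sqrt(0.3)            # invented amplitudes
    e0 = np.array([1.0, 0.0])
    e1 = np.array([eps, np.sqrt(1.0 - eps**2)])  # overlap <e0|e1> = eps
    # Full state on system (x) environment, then trace out the environment
    psi = a * np.kron([1.0, 0.0], e0) + b * np.kron([0.0, 1.0], e1)
    full = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    return np.einsum('ikjk->ij', full)           # partial trace over env

print(reduced_rho(1.0))  # off-diagonals ~0.458: no decoherence yet
print(reduced_rho(0.0))  # off-diagonals 0: an (improper) diagonal mixture
```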
 
