Question regarding the Many-Worlds interpretation

  • #151
I am still not convinced that the Born rule is sufficient. It misses what I called the "bottom-up" perspective.

The Born rule says that
- the result of a measurement of an observable A is always one of its eigenvalues a
- the probability of measuring a in an arbitrary state psi is given by the projection onto the eigenstate

##p(a) = \langle\psi|P_a|\psi\rangle##

This is a probability formulated on the full Hilbert space.
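For concreteness, here is the statement above as a minimal numerical sketch (a single qubit with illustrative weights; Python):

```python
import numpy as np

# |psi> = a|0> + b|1> with illustrative weights |a|^2 = 0.7, |b|^2 = 0.3
psi = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)

# Projector P_a = |0><0| onto the eigenstate belonging to eigenvalue a
P_a = np.array([[1, 0], [0, 0]], dtype=complex)

# Born rule: p(a) = <psi|P_a|psi>
p_a = np.real(np.conj(psi) @ P_a @ psi)
print(p_a)  # 0.7
```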

I still do not see how this answers the question:

What is the probability p(Ba) that I find myself as an observer in a certain branch Ba where a is realized as the measurement result?

One could reformulate the problem as follows: the Born rule says that the probability to find a is p(a). What I am asking for is the probability to find a given that I am in a certain branch Ba where a is realized (the expectation there is 100%, so we need some kind of Bayesian argument to extract the probability p(Ba) of the branch).

I would like to see a mathematical expression based on the MWI assumptions which answers this question.

The Born rule as stated above is formulated on the full Hilbert space and therefore provides a top-down perspective, but I as an observer within one branch do have a bottom-up perspective. I still don't see why these two probabilities are identical and how this can be proven to be a result of the formalism. There is some additional (hidden) assumption.
 
  • #152
tom.stoer said:
The Born rule says that
- the result of a measurement of an observable A is always one of its eigenvalues a
- the probability of measuring a in an arbitrary state psi is given by the projection onto the eigenstate

That's the 'kiddy' version. The real version is: given an observable O, a positive operator P of unit trace exists such that the expected value of O is Tr(PO). P is defined as the state of the system. States of the form |u><u| are called pure states and are what you are more or less talking about above. However, it implies more than what you said above.

This is actually quite important in the MWI because a collapse never actually occurs - instead it interprets the pure states of an improper mixed state ##\sum_i p_i |u_i\rangle\langle u_i|## after decoherence as separate worlds. Interpreting the pi as probabilities of an observer 'experiencing' a particular world is what the Born rule is used for in the MWI and that requires its full version.
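To make the trace formula concrete, a minimal sketch with an illustrative state (Python): the pure state and the decohered improper mixture assign the same expectation Tr(PO) to a diagonal observable - which is exactly why the pi suggest themselves as branch weights:

```python
import numpy as np

# Pure state P = |u><u| with |u> = sqrt(0.7)|0> + sqrt(0.3)|1> (illustrative)
u = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)
P_pure = np.outer(u, np.conj(u))

# Improper mixture after decoherence: sum_i p_i |u_i><u_i|
P_mixed = np.diag([0.7, 0.3]).astype(complex)

# Observable O = sigma_z; in both cases the expected value is Tr(PO)
O = np.diag([1.0, -1.0])
print(np.real(np.trace(P_pure @ O)))   # 0.4
print(np.real(np.trace(P_mixed @ O)))  # 0.4 - same expectation, off-diagonals gone
```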

Thanks
Bill
 
  • #153
bhobba said:
That's the 'kiddy' version.
Nobody is able to answer it, yet it's the 'kiddy' version; strange conclusion.

bhobba said:
The real version is: given an observable O, a positive operator P of unit trace exists such that the expected value of O is Tr(PO). P is defined as the state of the system. States of the form |u><u| are called pure states and are what you are more or less talking about above. However, it implies more than what you said above.

This is actually quite important in the MWI because a collapse never actually occurs - instead it interprets the pure states of an improper mixed state ##\sum_i p_i |u_i\rangle\langle u_i|## after decoherence as separate worlds.
I know this, but that's not my question.

bhobba said:
Interpreting the pi as probabilities of an observer 'experiencing' a particular world is what the Born rule is used for in the MWI and that requires its full version.
So it's an interpretation.

Sorry to say that, but that's not an answer to my question. I think you do not even get my question.

How can you justify using the probabilities pi contained in the full state (the top-down perspective, not accessible to a single observer) as the probability observed (bottom-up) by an observer within one branch?

Top-down you derive the probability for a result of a measurement in full Hilbert space.
Bottom-up you find (as a single observer) that within your branch the same probabilities apply.
Why? What's the link?
 
  • #154
I found a good discussion, including many references, of the question I asked over the last few days:

http://arxiv.org/abs/0712.0149
The Quantum Measurement Problem: State of Play
David Wallace
(Submitted on 3 Dec 2007)

In 4.6 Wallace discusses what he calls the "Quantitative Problem"

The Quantitative Problem of probability in the Everett interpretation is often posed as a paradox: the number of branches has nothing to do with the weight (i. e. modulus-squared of the amplitude) of each branch, and the only reasonable choice of probability is that each branch is equiprobable, so the probabilities in the Everett interpretation can have nothing to do with the Born rule.

...

As such, the ‘count-the-branches’ method for assigning probabilities is ill-defined. But if this dispels the paradox of objective probability, still a puzzle remains: why use the Born rule rather than any other probability rule?

...

But it has been recognised for almost as long that this account of probability courts circularity: the claim that a branch has very small weight cannot be equated with the claim that it is improbable, unless we assume that which we are trying to prove, namely that weight=probability.

...

The second strategy might be called primitivism: simply postulate that weight=probability. This strategy is explicitly defended by Saunders; it is implicit in Vaidman's "Behaviour Principle"; it is open to the criticism of being unmotivated and even incoherent.

...

The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.

I hope this reference explains (from a different perspective and in a more reliable and sound manner) that there is a problem regarding probabilities and weights in the Many Worlds Interpretation.
 
  • #155
stevendaryl said:
So I have two comments about this analogy: First, the conclusion that the relative frequency of red balls should approach 70% isn't provable. It doesn't logically follow from the mere fact that 70% of the balls are red. You have to make some kind of "equally likely" assumption, which means that you're making some assumptions about probability.
Yes, that is what I mean when I say that the distribution provides a natural measure on the events, i.e. the drawing of balls from the hat. This is an assumption, yes, but it's a natural one to make; assuming something different would need some additional justification. And from this, you can derive probabilities from sequences of draws, thus showing that 100 reds is very unlikely.

However, if you start out with sequences, there's simply no analogy to this reasoning. Again, the reason is that the notion of an event, i.e. the drawing of a ball, doesn't make sense in the MWI. There, the natural measure would be to consider every history equally likely - as in the case of the balls in the hat.

Basically, the problem you raise is a problem in the philosophy of probability as a whole; but provided there's a solution, it still seems to me that the MWI has some additional problem to answer.
 
  • #156
tom.stoer said:
How can you justify that it is allowed to use the probabilities pi contained in the full state (which is the top-down perspective not accessable to a single observer) as a probability observed (bottom-up) by an observer within one branch?

I have zero idea what you mean by bottom up and top down. The pi's are the pi's - that's it, that's all. Now, by the definition of a state as having unit trace, the pi's sum to one and are positive, suggesting they be interpreted as probabilities - not proving anything, but suggesting.

There are a number of arguments associating the pi with probabilities - I know of two:

1. A proof based on decision theory:
http://arxiv.org/abs/0906.2718

2. Envariance
http://arxiv.org/pdf/1301.7696v1.pdf

They however have been criticized as being circular. I personally don't think the decision-theory one is - rather, it depends on what one thinks of decision theory and rational behavior as a basis for introducing probabilities into a deterministic theory. I suspect the envariance one is circular.

But Gleason's is a possibility if you assume non-contextuality.

To introduce probabilities you simply imagine a suitably large number of repetitions of the same observation, which will give some expected value, and hence associate the pi with probabilities. But again, why is it you get probabilities from a deterministic theory? Or to put it another way: since the wavefunction is split into a number of worlds, why do the observers in those worlds not experience each world as equally likely, but instead with a probability determined by the pi in the mixed state? That is rather weird in terms of a deterministic theory.
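As a toy sampling sketch of exactly that difference (made-up pi; Python):

```python
import numpy as np

rng = np.random.default_rng(0)
p = [0.7, 0.3]   # made-up p_i from the mixed state
N = 100_000

# Outcomes sampled with the p_i as probabilities: frequencies converge to the p_i
born = rng.choice(2, size=N, p=p)
print(np.mean(born == 0))     # ~0.7

# Every branch taken as equally likely instead: frequencies come out ~0.5
uniform = rng.choice(2, size=N)
print(np.mean(uniform == 0))  # ~0.5
```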

I have David Wallace's book and he argues it's not an issue, along the lines of the passage quoted previously: 'The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.'

I am not personally convinced.

That's the real issue IMHO - why do you get probabilities in a deterministic theory?

I think that's what you may be getting at - or am I off the mark?

Thanks
Bill
 
  • #157
Bill, I think the problem has been explained several times.

S.Daedalus said:
This isn't available in the MWI, however. The reason is that the notion of an event doesn't make any sense anymore: A doesn't occur in exclusion to B, but rather, both occur. This makes the natural entities to associate probabilities with not events, but branches, or perhaps better histories, i.e. chains of observations; the sequence of values observed in elementary spin experiments, say. But there's no grounds on which one can argue that the likelihood of 'drawing' a history from all possible histories should be such that it is more likely to draw a history in which the relative frequencies are distributed according to the Born rule. If one were to associate a measure with histories at all, it seems that the only natural measure would be a uniform one---which would of course entail that you shouldn't expect to observe outcomes distributed according to the Born rule.

The proponent of many worlds is then, in my eyes, faced with justifying the use of a non-uniform measure on the set of histories, about which Gleason's theorem doesn't really say anything, it seems to me. Now of course, one can always stipulate that 'things just work out that way', but in my eyes, this would significantly lessen the attractiveness of MW-type approaches, making it ultimately as arbitrary as the collapse, at least.

S.Daedalus said:
Also, I'm not at all sure I see how Gleason's theorem is relevant to probability in the MWI. What it gives is a measure on the closed subspaces of Hilbert space; but what the MWI needs is to make sense of the notion of 'probability of finding yourself in a certain branch'. It's not obvious to me how the two are related. I mean, sloppily one might say that Gleason tells you the probability of a certain observable having a certain value, but there seems to me a gap here in concluding that this is necessarily the same probability as finding yourself in the branch in which it determinately has that value. I could easily imagine a case in which Gleason's theorem, as a piece of mathematics, were true, but probability of being in a certain branch follows simple branch-counting statistics, which won't in general agree with Born probabilities.

S.Daedalus said:
But in the MWI, you don't draw a ball to the exclusion of another; rather, you always draw both a red and a blue ball. The distribution of the balls in the hat has no bearing on this; it's just not relevant. What you get is all possible strings of the form 'bbrbrr...', i.e. all possible 'histories' of drawing blue or red balls. In only a fraction of those do you observe the statistics given by the distribution of the balls; furthermore, the distribution of the balls has nothing at all to say about the distribution of the strings. You then need an argument that for some reason, those in which the correct statistics hold are more likely than those in which they don't. That the original distribution is of no help here can also be seen by considering that there isn't just one measure that does the trick: you could for instance attach 100% probability to a history in which the frequencies are correct, or 50% to either of two, or even some percentage to incorrect distributions; the setting leaves that question wholly open. And so does the MWI.
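S.Daedalus's counting point can be made completely explicit with a toy computation (made-up numbers, Python): under the uniform 'count-the-branches' measure the expected relative frequency is 1/2 no matter what the single-trial weight is, while the weight measure reproduces it:

```python
from math import comb

N = 20
p = 0.7   # Born weight of drawing 'r' in a single trial (toy number)

# Count-the-branches: all 2^N histories equally likely;
# the mean relative frequency of 'r' is then 1/2, whatever p is.
count_mean = sum(k * comb(N, k) for k in range(N + 1)) / (N * 2**N)

# Born measure: a history with k 'r's carries weight p^k * (1-p)^(N-k);
# the weighted mean relative frequency is p.
born_mean = sum(k * comb(N, k) * p**k * (1 - p)**(N - k)
                for k in range(N + 1)) / N

print(count_mean)  # 0.5
print(born_mean)   # ~0.7
```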

The third, and most recent, strategy has no real classical analogue ... This third strategy aims to derive the principle that weight=probability from considering the constraints upon rational actions of agents living in an Everettian universe. It remains a subject of controversy whether or not these ‘proofs’ indeed prove what they set out to prove.
Top-down: you derive the probability for a result of a measurement in the full Hilbert space.
Bottom-up: I can ask what the probability is that I find myself as a single observer in a certain branch where a is realized.
Why? What's the link?

I do not question here whether probabilities in agreement with Born's rule can be derived. I question whether these probabilities for the result of a measurement on the full Hilbert space have anything to do with the probability to find myself in a certain branch.

I think you got it here
bhobba said:
Or to put it another way: since the wavefunction is split into a number of worlds, why do the observers in those worlds not experience each world as equally likely, but instead with a probability determined by the pi in the mixed state? That is rather weird in terms of a deterministic theory.
 
  • #158
tom.stoer said:
I think you got it here

Great :thumbs::thumbs::thumbs::thumbs::thumbs:

That's exactly my concern - how does a deterministic theory accommodate probabilities?

I am not persuaded by Wallace's arguments in the book I am reading. That doesn't invalidate the interpretation, but it means it doesn't do what its adherents would like - be a totally deterministic theory.

Thanks
Bill
 
  • #159
bhobba said:
That's exactly my concern - how does a deterministic theory accommodate probabilities?
This may be an even deeper concern.

Mine is that we get a probability for a result of a measurement which is implicitly assumed to be valid for an observer in a specific branch. Of course a derivation of Born's rule is required in MWI, but it has a different meaning than in a collapse interpretation.

Anyway - my original idea was to start with branch counting, but I had to accept that this is impossible.
 
  • #160
Bill, it seems that we have identified the same two problems as Wallace in the aforementioned paper.

The Incoherence Problem: In a deterministic theory where we can have perfect knowledge of the details of the branching process, how can it even make sense to assign probabilities to outcomes?

The Quantitative Problem: Even if it does make sense to assign probabilities to outcomes, why should they be the probabilities given by the Born rule?
 
  • #161
tom.stoer said:
even deeper concern.

Maybe not so deep: in a block universe there are no probabilities per se.
Likewise in MWI, everything happens, so no probabilities.
 
  • #162
tom.stoer said:
Bill, it seems that we have identified the same two problems as Wallace in the aforementioned paper.

I agree.

Interestingly, when I started my sojourn into the MWI my concern with it was that this exponentially increasing branching just seems unbelievably extravagant. But after looking into it, my concern has now shifted.

Again it doesn't disprove it, or show it's inconsistent - but it doesn't do what its adherents (at least some anyway) would like.

Thanks
Bill
 
  • #163
audioloop said:
Maybe not so deep: in a block universe there are no probabilities per se.
Likewise in MWI, everything happens, so no probabilities.

Even in that scenario the probabilities still exist in the calculations. Why would that be?
 
  • #164
tom.stoer said:
I am still not convinced that the Born rule is sufficient. It misses what I called the "bottom-up" perspective.
I don't think it makes sense to talk about probabilities from the top-down perspective. The only reason to introduce probabilities is that in experiments, we observe that we end up in a single branch with a probability according to the Born rule. Imagine a godlike top-down observer who just sees the evolution of the universal state (he isn't allowed to interact with it because this would lead to entanglement and thus make him a bottom-up observer). Why should he assign probabilities to the coefficients? He simply sees that there are now multiple observers which can't interact with each other.

tom.stoer said:
The Born rule says that
- the result of a measurement of an observable A is always one of its eigenvalues a
I think this is what decoherence explains. However, Jazzdude seemed to object.

tom.stoer said:
- the probability of measuring a in an arbitrary state psi is given by the projection onto the eigenstate

##p(a) = \langle\psi|P_a|\psi\rangle##

This is a probability formulated on the full Hilbert space.
This isn't correct if Pa simply projects the system onto an eigenstate and does nothing else. Your expression only gives the Born probability if the full state is a product state ##|\psi_{system}\rangle\otimes|\psi_{rest}\rangle##, which corresponds to a universe with only one branch.

In the general case, p(a) is a sum over all branches, so you have a sum of Born probabilities. You have to project onto a specific branch to get the correct expression. Which branch? The single branch a specific observer perceives. So this is really the bottom-up view.
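Spelled out in formulas (my notation, assuming orthonormal record states of the environment): if the universal state after decoherence is

##|\Psi\rangle = \sum_b c_b \, |a_b\rangle \otimes |E_b\rangle##

then

##\langle\Psi|P_a\otimes 1|\Psi\rangle = \sum_{b:\, a_b = a} |c_b|^2##

i.e. the projection collects the weights of every branch in which a is realized. To single out one branch you additionally have to project onto its record state ##|E_b\rangle## - and that choice of branch is exactly the bottom-up ingredient.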
 
  • #165
I agree with nearly everything, except for
kith said:
The only reason to introduce probabilities is that in experiments, we observe that we end up in a single branch with a probability according to the Born rule.
Of course you are right; this is the MWI interpretation. But as I said a couple of times, it's unclear why the expectation value of an observable evaluated on the full Hilbert space and the probability to be within one single branch have anything to do with each other.

If there is a state like a|x> + b|y>, it's an interpretation that being in branch "x" has anything to do with |a|²; you can't prove it.
 
  • #166
mfb said:
I said we can care about the rule. If you are looking for a rule (for whatever reason), the Born rule is the only reasonable one.
Why? In Copenhagen, it is a postulate about probability distributions. What is it in the MWI?
 
  • #167
tom.stoer said:
But as I said a couple of times, it's unclear why the expectation value of an observable evaluated on the full Hilbert space and the probability to be within one single branch have anything to do with each other.
I agree. My objection was against using the term "probability" wrt to the top down perspective.

It is also unclear to me how the connection could be made.
 
  • #168
Just flat-out postulating the Born rule also has implications for the hypothesis. In Copenhagen it leads you to an ugly collapse which needs a physical explanation of sorts.

What the **** is MWI's ontological explanation supposed to be? God cuts the branches he doesn't like?
 
  • #169
kith said:
I said we can care about the rule. If you are looking for a rule (for whatever reason), the Born rule is the only reasonable one.
Why? In Copenhagen, it is a postulate about probability distributions. What is it in the MWI?
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
 
  • #170
mfb said:
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
Gleason's theorem states that the only possible probability measure assigned to a subspace with projector P (in a system with density operator ρ) is tr(Pρ).
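Stated a bit more carefully: for a Hilbert space of dimension ≥ 3, any map μ from projectors to [0,1] with μ(1) = 1 that is additive over mutually orthogonal projectors must be of the form

##\mu(P) = \mathrm{tr}(P\rho)##

for some density operator ρ - there is no freedom left in the functional form.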

But the theorem cannot explain why tr(Pρ) should be a probability for an observer to find himself in that subspace. This is an interpretation, and our discussion (including Wallace's paper) shows that it's controversial and not convincing to everybody. It is unclear why - in a deterministic theory - a probability should arise at all.
 
  • #171
Probability in deterministic theories is just talk of ignorance; I don't see how this is controversial by itself. Look at the Bohmian interpretation: we don't know which outcome is going to occur because we cannot measure the pilot wave itself, but I don't see how this is controversial.

*If* branch counting had worked for the MWI, then there would be no problem with explaining why probability arises; it would simply be ignorance of branch location.
 
  • #172
tom.stoer: I don't see how your post is related to the specific question I answered.

It is unclear why - in a deterministic theory - a probability should arise at all.
To me, it is unclear why you are looking for (wanting?) probabilities, indeed.
 
  • #173
Quantumental said:
Just flat-out postulating the Born rule also has implications for the hypothesis. In Copenhagen it leads you to an ugly collapse which needs a physical explanation of sorts. What the **** is MWI's ontological explanation supposed to be? God cuts the branches he doesn't like?

It's not flat-out postulated in any interpretation where the Hilbert space formalism is fundamental, because of Gleason - unless of course you think non-contextuality is unreasonable in vector spaces; most would consider contextuality quite ugly.

Collapse is not ugly in any interpretation that considers the quantum state simply knowledge about a system, like probabilities are, any more than throwing a die collapses anything 'real' when it goes from a state where the state vector has all entries 1/6 to one with an entry of 1. Interpretations like that include Copenhagen and the Ensemble interpretation. In those interpretations it's simply an example of a generalized probability model, with nothing more mysterious going on than modelling something by probability.

The issue with such interpretations is that people push against the idea that the world may be fundamentally probabilistic and want an underlying explanation for it. The problem lies in them - not the theory. Or to put it another way - the interpretation is fine - they just don't like it.

Thanks
Bill
 
  • #174
Quantumental said:
Probability in deterministic theories is just talk of ignorance; I don't see how this is controversial by itself. Look at the Bohmian interpretation: we don't know which outcome is going to occur because we cannot measure the pilot wave itself, but I don't see how this is controversial. *If* branch counting had worked for the MWI, then there would be no problem with explaining why probability arises; it would simply be ignorance of branch location.

In BM probabilities enter into it due to lack of knowledge about initial conditions. In MWI we have full knowledge of what it considers fundamental and real - the quantum state.

Thanks
Bill
 
  • #175
mfb said:
To me, it is unclear why you are looking for (wanting?) probabilities, indeed.

I am not quite following your point here.

The reason probabilities come into it is Born's rule, i.e. given an observable O, its expected value is Tr(OP), where P is the state of the system.

How can probabilities not be involved?

I agree there is debate over whether the experience of an observer requiring probabilities is an issue in the MWI, and Wallace discusses it in his book, but I don't think there is any way of circumventing that probabilities are involved.

Thanks
Bill
 
  • #176
bhobba said:
In BM probabilities enter into it due to lack of knowledge about initial conditions. In MWI we have full knowledge of what it considers fundamental and real - the quantum state.

Thanks
Bill

Yes... exactly why it seems to be wrong.
 
  • #177
Quantumental said:
Yes... exactly why it seems to be wrong.

I am glad you used the word 'seems'. Wallace in his book argues it's not an issue.

I am simply not convinced by his arguments - but it is arguable.

Added Later:

I think that's what MFB is getting at - Wallace's argument is summed up on page 115 of his book:
'Mathematically, formally, the branching structure of the Everett interpretation is a stochastic dynamical theory. And nothing more needs to be said'.

Yea - the theory is as the theory is, so what's your beef? My beef is that in other stochastic dynamical deterministic theories we know where the 'stochasticity' (is that a word?) comes from - here we don't.

BTW Wallace gives all sorts of reasons - that's just one. Some are quite subtle. For example, against the equal-probability rule he brings up actually deciding what counts as an equal probability. We have an observation with two outcomes, so you would naturally say it's 50-50. On one of the outcomes you can do another observation with two outcomes, giving 3 outcomes in total - so what is it - 1/3 for each, or 1/2, 1/4 and 1/4? This boils down to the question of what an elementary observation is - a very subtle issue in QM. It's tied in with one of the quirks of QM as a generalized probability model - in normal probability theory a pure state when observed always gives the same thing - once thrown, a die always shows the same face - whereas in QM a pure state can be observed to give another pure state, which is itself tied up with having continuous transformations between pure states (as an aside, Hardy believes this is the distinguishing feature of QM). His arguments are full of stuff like that - disentangling them is no easy task. Basically, on some reasonableness assumptions he makes, the Born rule is the only way a rational agent can assign 'likelihoods' to outcomes.

Thanks
Bill
 
  • #179
Quantumental said:
Bhobba: I thought this had been dealt with by several people already?

It has been DEBATED by several people already. Like just about any issue of philosophy, proving it one way or the other is pretty much impossible.

Now without going through the papers you mention, which doesn't thrill me greatly, suffice to say in the book I have Wallace goes to great lengths, over quite a few chapters, discussing objections - even the old one about the frequentist interpretation of probability, which is pretty much as ancient as they come.

If you however would, in your own words, like to post a specific objection then I will happily give my view.

Overall I am not convinced by Wallace's arguments. For example, merely saying a stochastic theory is - well, stochastic - and hence of zero concern strikes me as a cop-out of the first order. But that is not his only argument - like I say, the issue is subtle and requires a lot of thought to disentangle.

I personally have no problem with the decision-theoretic derivation of the Born rule - its assumptions are quite reasonable. My issue is that trying to justify the likely outcomes of a deterministic theory on the basis of what a rational agent would require strikes me as not resolving the basic issue at all - yes, it's a reasonable way to justify the Born rule, and so is Gleason's theorem for that matter, but it does not explain why a rational being, agent or whatever would have to resort to decision theory in the first place, which assumes for some reason you can't predict the outcome with certainty. Why does a deterministic theory have that feature in the first place - blank-out. Logically it's impossible to assign only the values true and false across a Hilbert space - Gleason guarantees that - you have probabilities built right into its foundations - without some device like BM's pilot wave to create contextuality there is no escaping it - so you are caught between the devil and the deep blue sea if you want a deterministic theory.

Again I want to emphasize this doesn't invalidate the interpretation or anything like that. It's very, very elegant and beautiful; it's just a question the interpretation doesn't answer, but then again all interpretations are like that - they have awkward questions they have difficulty with.

Thanks
Bill
 
  • #180
I would like to ask a different question which seems to be crucial for establishing the whole MWI program.

MWI as of today relies on decoherence. That means that the different branches in the representation of a state vector are defined via a preferred basis. These basis states should be
i) dynamically selected,
ii) "peaked" in classical phase space (as observed classically), and
iii) these branches should be stable w.r.t. time evolution (in the full Hilbert space)
In terms of density matrices this is mostly described as reduced density matrices becoming nearly diagonal statistical mixtures with (approximately) zero off-diagonal terms.
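Before I state the question, a toy illustration of that last statement - not a derivation from any realistic Hamiltonian; the environment states are simply drawn at random (Python):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = np.sqrt(0.7), np.sqrt(0.3)   # system qubit a|0> + b|1>, toy weights

def random_env(n):
    """Random state of n environment qubits (dimension 2^n)."""
    v = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
    return v / np.linalg.norm(v)

# For |Psi> = a|0>|E0> + b|1>|E1> the reduced density matrix of the system has
# off-diagonal element a*b*<E1|E0>; random environment overlaps shrink with n
# (typically ~2^(-n/2)), so the reduced state approaches diag(|a|^2, |b|^2).
for n in (2, 6, 10, 14):
    E0, E1 = random_env(n), random_env(n)
    print(n, abs(a * b * np.vdot(E1, E0)))
```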

My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.
 
  • #181
tom.stoer said:
My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.

This is the preferred-basis problem, and books on decoherence do prove it, with a caveat known around here as the factorization problem, which basically is: does it work for any decomposition, not just the obvious one into environment, measuring apparatus, and system being measured?

That's up in the air right now - it's not known one way or the other, but it really seems to only gain traction on these forums - all the textbooks I know don't even mention it. I have also seen a paper that shows for a simple model it doesn't depend on the factorization - but beyond that it's not known. I also have to say there are other issues in the quantum-classical transition for which theorems exist only for special cases. The general consensus seems to be it's just a matter of crossing-the-t's-and-dotting-the-i's sort of stuff - but one never knows.

If you want to go into it further, it has been discussed many times on this forum, so it's really only a search away.

Thanks
Bill
 
  • #182
mfb said:
Gleason's theorem. Every other assignment would lead to results we would not call "probability".
Ok. So we could actually weaken the Born rule postulate of Copenhagen and replace it by something like "for every observable, there is a probability distribution of possible measurement outcomes (<->eigenvalues) which is determined by ρ" and get the quantitative statement from this by applying Gleason's theorem?
 
  • #183
tom.stoer said:
My question is to which extent (i-iii) can be shown to follow strictly from the formalism, i.e. from the Hamiltonian of realistic macroscopic systems.
I remember a Zurek paper which tried to derive the position basis as the preferred basis from the Coulomb interaction. I don't know if this satisfies all your criteria. Also I haven't found it on the arxiv at first glance.
 
  • #184
kith said:
Ok. So we could actually weaken the Born rule postulate of Copenhagen and replace it by something like "for every observable, there is a probability distribution of possible measurement outcomes (<->eigenvalues) which is determined by ρ" and get the quantitative statement from this by applying Gleason's theorem?

Can't quite understand what you are trying to say. But what Gleason's theorem says is that the Born rule and non-contextuality (i.e. the probabilities are basis-independent) are equivalent. But why would you choose a vector space formalism to describe states if a fundamental thing like the expected values of the outcomes of observations depends on your basis? It's a very strange thing for anything fundamental to be basis-dependent, since bases are arbitrary, freely chosen, man-imposed things. And indeed interpretations like BM where it is violated are basically saying the Hilbert space formalism is not fundamental - the pilot wave is. The MWI is most definitely NOT like that, so it's quite reasonable for it to apply. The same with Copenhagen. In that interpretation the state represents the fundamental thing describing a system but is a state of knowledge, like probabilities in probability theory - it doesn't exist in a real sense like in the MWI, where it is very real. But because it considers the state fundamental it too would more or less have to accept Gleason.

Thanks
Bill
 
  • #185
bhobba said:
Can't quite understand what you are trying to say.
I'm trying to find out where exactly probabilities enter in Copenhagen, how they could enter in the MWI and if the latter is even necessary. There's obviously no agreement on this among the people in this thread.
 
  • #186
kith said:
I'm trying to find out where exactly probabilities enter in Copenhagen, how they could enter in the MWI and if the latter is even necessary. There's obviously no agreement on this among the people in this thread.

When you make an observation you reasonably expect it to have an expected value. No assumption at that point is made about determinism or probabilities. But what Gleason shows is that this expected value contains actual probabilities - not certainties. That's how probabilities enter into it - it's inevitable from the Hilbert space formalism - no escaping it. Why does a deterministic theory like the MWI contain probabilities - that's the question. Wallace basically says it's not an issue - the theory is a stochastic model and that's how stochastic models behave - I respectfully disagree. He also has other arguments, but you need to read the book - it's a subtle and complex issue.

Thanks
Bill
 
  • #187
bhobba said:
I am not quite following your point here.

The reason probabilities come into it is Born's rule, i.e. given an observable O, its expected value is Tr(OP), where P is the state of the system.

How can probabilities not be involved?
Why (and how?) do you add Born's rule? If you add it, you have "probabilities" hanging around, obviously (but how do they work?), but I don't see why you do that.
I think the factorization problem is similar to the question "when/where do collapses happen in the Copenhagen interpretation?". I agree that more work is necessary here, but I don't think it is specific to MWI, and I do not expect any issue arising from that.
 
  • #188
mfb said:
Why (and how?) do you add Born's rule? If you add it, you have "probabilities" hanging around, obviously (but how do they work?), but I don't see why you do that.

Any observation must have an expected outcome. The Born rule allows you to figure out what it is so you can check experiment against theory.

I may be starting to glimpse your point - is it that just because all you can predict is probabilities, it does not mean it's not deterministic?

mfb said:
I think the factorization problem is similar to the question "when/where do collapses happen in the Copenhagen interpretation?". I agree that more work is necessary here, but I don't think it is specific to MWI, and I do not expect any issue arising from that.

It's a general decoherence issue, but not the only one where more work needs to be done in the area of the quantum-to-classical transition. And I don't expect any issues either - but there are those that argue it. Maybe you can have better luck than me with them - some are rather 'taken' with the idea there is an issue.

Thanks
Bill
 
  • #189
bhobba said:
Any observation must have an expected outcome.
Why?

I may be starting to glimpse your point - is it that just because all you can predict is probabilities, it does not mean it's not deterministic?
How can you test probabilities? There is no measurement that can be described as a probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?
 
  • #190
mfb said:
Why? How can you test probabilities? There is no measurement that can be described as a probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?

Obviously any single measurement is like that, but the same measurement repeatedly done yields an expected result and probabilities.

And to forestall the next objection: the frequentist approach to probability is perfectly valid and non-circular when done properly, as found in standard textbooks like Feller. I don't really feel like having a sojourn into that one again because it's an old issue that has been solved ever since Kolmogorov devised his axioms.

Thanks
Bill
 
  • #191
mfb said:
How can you test probabilities? There is no measurement that can be described as a probability. Either you measure result A or result B, but never 10% A and 90% B.
If you cannot measure them, what is the point in predicting probabilities?
That was exactly my point when I started this thread: the difference between the (expected) result of a measurement and a sequence of measurements. Whereas the result of a single measurement of observable A is described by the expectation value, experimentally the probability is related to a statistical frequency, which is not related to a single expectation value but to a sequence of projections.

So the basic fact is that we DO observe statistical frequencies in experiments (for identical preparations) which we identify with matrix elements interpreted as probabilities. The "interpreted as" is the non-trivial step!

The expectation value of a single measurement follows from a single calculation, which I called the top-down perspective, using the full state in the full Hilbert space; the statistical frequency in a sequence of experiments is not observed top-down b/c a physical observer "within" one branch has no access to the full Hilbert space; instead this is what I called the bottom-up perspective of individual observers within specific branches.

I started with the fact that in any collapse interpretation the relation between both perspectives is trivial b/c the collapse postulate forces the top-down and the bottom-up perspective to become identical.

Then I continued with sequences of measurements (and therefore branchings) in the MWI. Here the above-mentioned relation is no longer available b/c we still have the expectation value calculated on the full Hilbert space and the statistical frequency recorded in the individual branches, but we no longer have a collapse forcing the two perspectives to be identical.

The probability to be in a specific branch, either |x> or |y>, after a measurement cannot be deduced from the formalism, as far as I can see. It seems to be a proven fact that the only way to assign probabilities consistently is given by the Born rule (Gleason), but the fact that we interpret these matrix elements as probabilities related to statistical frequencies is by no means obvious and still subject to discussion.
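The gap can be put in a single formula: for identically prepared, independent runs, the branch labelled by the outcome string ##(a_1,\dots,a_N)## carries the weight

##w(a_1,\dots,a_N) = \prod_{k=1}^{N} p(a_k)##

and this weight concentrates (law of large numbers) on strings whose relative frequencies match the p(a). But nothing in the formalism tells us that w is the probability for me to find myself in that branch - rather than, say, all ##2^N## strings being equally likely. That identification is precisely the interpretive step at issue.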
 
  • #192
Bhobba:

1. What is your response to the factorization problem? J-M Schwindt put up a paper about that late last year; Demystifier wrote the following summary of it:

To define separate worlds of MWI, one needs a preferred basis, which is an old well-known problem of MWI. In modern literature, one often finds the claim that the basis problem is solved by decoherence. What J-M Schwindt points out is that decoherence is not enough. Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment). But if the state in the Hilbert space is all that exists, then such a split is not unique. Therefore, MWI claiming that the state in the Hilbert space is all that exists cannot resolve the basis problem, and thus cannot define separate worlds. Period! One needs some additional structure not present in the states of the Hilbert space themselves.
 
  • #193
Quantumental said:
What is your response to the factorization problem?

Already answered that - see post 181.

It has been discussed many times in many threads, easy to do a search and form your own view.

Thanks
Bill
 
  • #194
bhobba said:
Already answered that - see post 181.

It has been discussed many times in many threads, easy to do a search and form your own view.

I don't see how you can just assume that it'll be fixed somehow by decoherence.

It seems to me like you are forced to postulate that there is additional structure and a dynamics there which isn't present in the formalism.
 
  • #195
Quantumental said:
I don't see how you can just assume that it'll be fixed somehow by decoherence. It seems to me like you are forced to postulate that there is additional structure and a dynamics there which isn't present in the formalism.

Why do you think I am just assuming it?

Didn't you see my comment about what a simple model proved?

Why do you think more complex models will not confirm what the simple model showed?

Thanks
Bill
 
  • #196
bhobba said:
Why do you think I am just assuming it?

Didn't you see my comment about what a simple model proved?

Why do you think more complex models will not confirm what the simple model showed?

Thanks
Bill

I just don't see how it's really relevant.
The main point of Schwindt's paper is that the state vector of the universe does not contain any information at all, because all unit vectors look the same. The information is only in the choice of factorization. And... how, even hypothetically, could decoherence suddenly choose?

I think even Wallace concedes this in The Emergent Multiverse when he postulates additional structure, because he realizes that nothing but Hilbert space won't work.
 
  • #197
Quantumental said:
I just don't see how it's really relevant.

Then I simply do not agree with you.

It's obviously relevant that a simple model singled out a basis regardless of how it was decomposed. To spell out the detail that should be obvious: the statement 'Namely, decoherence solves the basis problem only if it is already known how to split the system into subsystems (typically, the measured system and the environment).' is incorrect for the simple model. It's an open question whether it's true for more realistic models or even for the entire universe, but most experts in the field, judging by the fact it doesn't even rate a mention in the textbooks on the matter, seem to think it true, as do I. If you can't see that, shrug.

And the claim that a state vector of anything - the universe, anything - contains no information at all directly contradicts the foundational axioms of QM.

There are two of them, as detailed by Ballentine, and information is what both of them are about.

Because it's so at odds with those axioms, can you state them and explain exactly how a state vector cannot contain information?

And all unit vectors looking the same? Sounds like a nonsense statement to me.

Thanks
Bill
 
  • #198
mfb said:
How can you test probabilities? There is no measurement that can be described as a probability. Either you measure result A or result B, but never 10% A and 90% B. If you cannot measure them, what is the point in predicting probabilities?
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?
 
  • #199
kith said:
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?

It's confusing to me as well. The two axioms of QM from Ballentine are:

1. Observables are Hermitian operators whose eigenvalues are the possible outcomes of an observation.

2. A positive operator P of unit trace exists, called the state, such that the expected value of the observable O is Tr(OP).

Axiom 2 is the Born rule and in fact, via Gleason and the assumption of non-contextuality, follows from axiom 1.

It would seem probabilities are built right into the very definition of a quantum state.

bhobba said:
Because it's so at odds with those axioms, can you state them and explain exactly how a state vector cannot contain information?

To Quantumental:

The above are the axioms of QM. As you can see, the very definition of state is about information. Now there is a bit of an issue about its meaning with regard to the entire universe, but just to ensure we are on the same page: is that the issue you are talking about? It has a simple solution in the context of the MWI, but if you can explain the problem as you see it, that would really help.

Thanks
Bill
 
  • #200
bhobba said:
Obviously any single measurement is like that, but the same measurement repeatedly done yields an expected result and probabilities.
It does not; it still leads to a single result - using the initial example, a string of x and y.
What was the probability of that string? Why did you get this specific string, and not something much more likely, like y only?
If you get the most likely single measurement result, you even reject your initial hypothesis! Why? This question has an answer, but not a probabilistic one. In terms of hypothesis testing, you don't have to assign probabilities to anything. You can do it, but it is not necessary. You can use the MWI as well.

You can consider every set of measurements as a single measurement; you don't get rid of that issue just by repeating things.
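For illustration, here is what such a test looks like on the single string one actually has, if one does phrase it with the weights (made-up numbers, Python); the same comparison can be read as a statement about amplitudes alone:

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.choice(['x', 'y'], size=1000, p=[0.7, 0.3])  # stand-in for data

def log_weight(string, px):
    # Log of the weight the hypothesis |a|^2 = px assigns to this exact string
    k = np.sum(string == 'x')
    return k * np.log(px) + (len(string) - k) * np.log(1 - px)

# Compare two hypotheses on the one string we actually have
print(log_weight(observed, 0.7))  # larger (less negative): favored
print(log_weight(observed, 0.5))  # smaller: rejected by a likelihood-ratio test
```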

kith said:
I still don't get your main point. In the Copenhagen interpretation, you postulate probability distributions. From them you get expectation values which can be shown to be consistent with experimental data by hypothesis testing. You seem to suggest that the MWI can do this too. But how do you derive these hypotheses without talking about probability distributions?
The corresponding MWI hypotheses are hypotheses about amplitudes.
 