Many-Worlds, Deriving the Born Rule?

In summary: the many-worlds interpretation, as described by Wallace in his latest book, derives the Born rule from unitary dynamics using decision-theoretic arguments, and criticism has been raised that the argument is circular because decoherence itself seems to rely on the Born rule. Wallace addresses the circularity worry in the FAQ section of his book, and Gleason's theorem shows that only basis independence needs to be assumed to recover the Born rule. There is still some debate about whether the derivation proves what it sets out to prove, but Wallace is clear about the problem.
  • #1
MHuiq
Lately I have been interested in the many-worlds interpretation, and in particular the way it is described by Wallace in his latest book The Emergent Multiverse.

In the book he attempts to derive the Born rule from unitary dynamics using decision-theoretic arguments. For this he relies on decoherence to explain the emergence of different branches.

I have read criticism that decoherence itself relies on the assumption of the Born rule, because it treats the small-amplitude (low-probability) outcomes as negligible. Thus it seems that Wallace's argument is circular.

The criticism seems pretty straightforward and I think Wallace must have thought of this too, but I am not sure how he or other MWI-proponents resolve this issue.

Can someone shed some light on how this is resolved, if it is resolved?
 
  • #2
I have gone through Wallace's derivation in his book. If I recall correctly he has a FAQ section that answers questions like yours - I looked it up - the issue of circularity is addressed on page 253.

It's not circular, but it does have a tacit assumption that the quantum formalism needs to be respected, in particular that any confidence level that you are in a particular world (and basically that's what decision-theoretic means - it's a Bayesian approach) does not depend on a chosen basis. That being the case, one has a well-known theorem, Gleason's theorem, that can be used anyway.

Decoherence doesn't use the Born rule - you get the improper mixed state from it either way - it is interpreting that mixed state that does. In any case, Gleason's theorem is independent of decoherence.

Thanks
Bill
 
  • #3
Wallace himself admits that it's still a matter of debate whether his proof proves what he intends to prove ...
 
  • #4
Thank you for your replies. Let's see if I understand this correctly. By Gleason's theorem there can be no other probability measure than the Born rule over a Hilbert space of dimension 3 or greater.
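For reference, a rough statement of Gleason's theorem (standard textbook form, not quoted from Wallace's book): let ##H## be a separable Hilbert space with ##\dim H \geq 3##. If ##\mu## assigns a number in ##[0,1]## to every projection ##P## on ##H##, with ##\mu(I)=1## and ##\mu## additive over any set of mutually orthogonal projections, then there is a unique density operator ##\rho## such that

$$\mu(P) = \mathrm{tr}(\rho P)$$

for every projection ##P##.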

Thus if I want to interpret the improper mixed state from decoherence probabilistically, I have to use the Born rule. I cannot use some kind of inverse measure 1/|amplitude| such that the small-amplitude states are the most probable.

Furthermore, I see that Wallace explains on page 253 that the Hilbert-space norm is like a "natural measure" of state perturbations in Hilbert space, which he concludes by looking at the microphysical dynamics. Thus a small change in an amplitude causes a small change in the dynamics. I assume this is asserted by Gleason's theorem, because if there were a 1/|amplitude| measure then of course small changes in the amplitude of a state would result in physically big changes in the dynamics.

I am not sure if I am completely convinced yet, but his argument is becoming clearer to me.
 
  • #5
The problem with the probabilities in the Everett interpretation is the following:

1st, there are no probabilities at all! Neither is the Born rule introduced as an axiom, nor is there a reason why there should be any probabilities at all. All there is is a Hilbert space and a (one-dimensional) ray subject to deterministic, unitary time evolution. So Gleason's theorem simply does not apply.

2nd, what the theory has to explain (via a mathematical derivation) is how and why there is some branch structure with appropriate amplitudes corresponding to the many-worlds "interpretation" (the MWI is - as far as I can see - not an interpretation but a research program which tries to derive certain results from the mathematical formalism, results which have been introduced as axioms in a collapse interpretation). Even if we assume we have such a branch structure, and that there are emergent amplitudes with the correct values (via decoherence), this still does not provide any reason to talk about a probability.

From a bird's-eye perspective, the ##a_n## in

##\sum_n a_n |n\rangle##

need not (indeed must not) be interpreted as probabilities (nowhere else are the components of a vector interpreted as probabilities). So why in QM? I think Wallace is rather clear about the problem in chapter 4.5 and especially at the end of chapter 4.6 of http://arxiv.org/abs/0712.0149. I have not seen any paper going beyond these results, but I would be happy to find one.

(to be clear about that: I believe that what we observe in the real world is just a set of probabilities to find ourselves in certain branches; whether the other ones survive as unobservable branches or collapse to zero cannot be decided experimentally)
 
  • #6
bhobba said:
Decoherence doesn't use the Born rule - you get the improper mixed state from it regardless - interpreting the mixed state does.

The mathematics of tracing out the environmental degrees of freedom does not depend on any assumptions about the interpretation of the wavefunction. But there is a part that seems to require a probabilistic argument of some sort, and that is irreversibility. Both the decoherence process and the measurement process are assumed to be irreversible. But doesn't irreversibility depend on some kind of law of large numbers, which depends on a notion of probability?
 
  • #7
stevendaryl said:
But there is a part that seems to require a probabilistic argument of some sort, and that is irreversibility.

I think you had better give the details of that one. I haven't seen it in any of the models.

But even if true it makes zero difference. Gleason's theorem doesn't require that - simply basis independence.

Thanks
Bill
 
  • #8
But Gleason's theorem is irrelevant in this context.

It states that the only possible probability measure is tr(Pρ), but it doesn't explain why there should be a probability at all. So if we want to introduce probability in QM (formulated on a Hilbert space), then the probability measure must be tr(Pρ) for the subspace described by P. But it does not explain why we should introduce any probability in the first place, given that we have deterministic, unitary time evolution.
 
  • #9
MHuiq said:
I assume this is asserted by Gleason's theorem, because if there were a 1/|amplitude| measure then off course small changes in the amplitude of a state would result in physically big changes in the dynamics.

Gleason's theorem has nothing to do with dynamics - its only assumption is that the measure can't depend on the basis. You wouldn't really use a vector-space formalism if it did, would you? It's very natural, so natural it actually took a little while after Gleason proved it to disentangle its physical basis (it's non-contextuality, which is how DBB for example evades it). But again, if the state, which is the fundamental thing in Many Worlds, is an element of a vector space, it would hardly be reasonable to think the measure depends on a basis - a basis is a purely arbitrary, man-made thing, and laws of nature shouldn't really depend on that. DBB gets around it by the pilot wave and actual particle being the fundamental thing - not the state.

Thanks
Bill
 
  • #10
The reason I included Gleason's theorem in the dynamics was the following. Wallace writes, as a response to "What makes the perturbations that are small in Hilbert-space norm 'slight', if it's not the probability interpretation of them":

"Small changes in the energy eigenvalues of the Hamiltonian, in particular, lead to small changes in quantum state after some period of evolution.[...] Ultimately, the Hilbert-space norm is just a natural measure of state perturbations in Hilbert-space."

I still don't completely see how he gets out of the circular argument, since one can ask: what makes the 'small changes' in the quantum state small? So I hoped that, because of Gleason's theorem, any interpretation of whether changes are small or big would have to use the Born rule, and that this would settle that small changes are physically small under any interpretation of the amplitudes of the states. But I would agree that is a bit far-fetched.
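For concreteness, here is a minimal numerical sketch of the quoted claim (a toy example of my own, not from Wallace): a small perturbation of the Hamiltonian produces a change in the evolved state that is small in the Hilbert-space norm, with no probability interpretation invoked anywhere.

```python
import numpy as np
from scipy.linalg import expm

# Toy two-level Hamiltonian and a small perturbation of it.
H  = np.array([[1.0, 0.3], [0.3, -1.0]])
dH = 1e-3 * np.array([[0.5, 0.1], [0.1, -0.2]])

psi0 = np.array([1.0, 0.0], dtype=complex)   # initial state
t = 5.0                                      # evolution time

psi_unperturbed = expm(-1j * H * t) @ psi0
psi_perturbed   = expm(-1j * (H + dH) * t) @ psi0

# Hilbert-space norm of the state change; it is bounded by ||dH|| * t.
print(np.linalg.norm(psi_perturbed - psi_unperturbed))
```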

@Tom.Stoer Thanks for the article, I will look into that
 
  • #11
tom.stoer said:
It states that the only possible probability measure is the tr(Pρ), but it doesn't explain why there should be a probability at all.

In MWI we want a measure giving the confidence that we are in a particular world - it's not a probability, but rather a Bayesian 'number' giving that confidence.

MWI is a deterministic theory, and the assumption is we do not have sufficient information to determine the world. It's a subtle, but very important, difference. But because of that we use the Bayesian view of hypothesis testing, from which probability is a derived concept.

This was all discussed in a long thread:
https://www.physicsforums.com/showthread.php?t=706927

No use going over it again. I am with MFB in that thread - it would simply be repeating the same thing over and over.

Thanks
Bill
 
  • #12
MHuiq said:
I still don't completely see how he gets out of the circular argument, since one can ask: what makes the 'small changes' in the quantum state small?

My suggestion after reading that book is don't worry about Wallace's derivation - stick with Gleason. I suspect the reasonableness assumptions his derivation makes are really equivalent to basis independence anyway - at least that's what I thought after going through one of his answers to objections. But even if that isn't the case, there is zero doubt he is making some reasonableness assumptions, and I can't see how they are any better or worse than basis independence.

Thanks
Bill
 
  • #13
bhobba said:
I think you had better give the details of that one. I haven't seen it in any of the models.

But even if true it makes zero difference. Gleason's theorem doesn't require that - simply basis independence.

Irreversibility to me seems an essential part of getting a classical world out of quantum amplitudes, because we see definite values for macroscopic variables. Irreversibility is responsible for the transition from a pure state (with a superposition of alternatives) to an apparently mixed state (where apparently one alternative is "chosen").

If a system is simple enough that everything is reversible, then collapse doesn't come into play, and neither do probabilities. The interpretation of the square of the wave function as a probability only comes into play when the subsystem interacts with a larger system in an irreversible way.
 
  • #14
bhobba said:
... because of that we use the Bayesian view of hypothesis testing, from which probability is a derived concept.
Could you please do me a favor and write down explicitly which hypothesis we shall test and which number we shall assign in the MWI context?
 
  • #15
tom.stoer said:
Could you please do me a favor and write down explicitly which hypothesis we shall test and which number we shall assign in the MWI context?

It's obvious.

The hypothesis is what world you are in.

The number giving the confidence is via Gleason's theorem, Wallace's argument, or even simply accepting the trace formula as an axiom.

We are just rehashing that thread again. I seem to recall you never got it. That being the case I am not going to go through it again. The OP can go through that thread and form his own view.

Thanks
Bill
 
  • #16
stevendaryl said:
Irreversibility to me seems an essential part of getting a classical world out of quantum amplitudes, because we see definite values for macroscopic variables. Irreversibility is responsible for the transition from a pure state (with a superposition of alternatives) to an apparently mixed state (where apparently one alternative is "chosen").

If a system is simple enough that everything is reversible, then collapse doesn't come into play, and neither do probabilities. The interpretation of the square of the wave function as a probability only comes into play when the subsystem interacts with a larger system in an irreversible way.

I don't think it has anything to do with decoherence. Sure - it's true in that it's generally not reversible statistically, and that is responsible for the arrow of time - but it's got nothing to do with the math of the emergence of an improper mixed state - at least I have never seen it as an assumption.

Thanks
Bill
 
  • #17
bhobba said:
I don't think it has anything to do with decoherence. Sure - it's true in that it's generally not reversible statistically, and that is responsible for the arrow of time - but it's got nothing to do with the math of the emergence of an improper mixed state - at least I have never seen it as an assumption.

Okay, I'll ask you: is there EVER an example of a transition from pure state to mixed state that does not involve an irreversible process? If it's not irreversible, then the reverse--from mixed state to pure state--could happen just as easily. So the interpretation of the emergence of mixed states as an effective "collapse of the wave function" makes no sense for reversible processes.

Mathematically, mixed states arise by tracing out some of the degrees of freedom, and you can always do that. But that tracing out has no physical significance without something like an irreversible change.
 
  • #18
I actually didn't think that I was saying anything controversial. I thought it was well known that both measurement processes and decoherence due to interaction with the environment involve irreversibility. The Wikipedia article about decoherence contains the sentence:

Decoherence occurs when a system interacts with its environment in a thermodynamically irreversible way

http://en.wikipedia.org/wiki/Quantum_decoherence

Of course, the author of the Wikipedia article might be as confused as I am, but I'm just quoting from Wikipedia as a way of showing that I'm not the only one under this misconception, if it is a misconception.

I found another article that makes the same claim--that there is a connection between decoherence and irreversibility:
http://arxiv.org/pdf/quant-ph/0106006v1.pdf
 
  • #19
Just a note that the simplest and most common way of deriving a meaning for the improper mixed state in a reduced density matrix is to assume the system and environment together are in a pure state, and together obey the Born rule. To get the reduced density matrix, one assumes that the observable is a local observable on the system "times" the identity on the environment. Applying the Born rule to the pure state automatically traces out the environment for such local observables. This works no matter how small the environment is.
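As a concrete illustration of this point, here is a minimal numerical sketch (my own toy example, with assumed dimensions): for a random joint pure state, the Born-rule expectation of a local observable ##A \otimes I## on the joint state equals the expectation of ##A## computed from the reduced density matrix obtained by tracing out the environment.

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3                                    # system and environment dimensions

# Random joint pure state of system + environment.
psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())                  # joint density matrix |psi><psi|

# Local observable on the system, extended by the identity on the environment.
A = np.array([[1.0, 0.5], [0.5, -1.0]])
A_full = np.kron(A, np.eye(dB))

# Reduced density matrix of the system: trace out the environment.
rho_A = rho.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

print(np.trace(rho @ A_full).real)               # Born-rule expectation on the joint state
print(np.trace(rho_A @ A).real)                  # the same number from the reduced state
```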
 
  • #20
stevendaryl said:
I thought it was well known that both measurement processes and decoherence due to interaction with the environment involve irreversibility. The Wikipedia article about decoherence contains the sentence

The issue isn't whether it involves irreversibility; the issue is whether it involves the Born rule.

See my standard link about decoherence:
http://philsci-archive.pitt.edu/5439/1/Decoherence_Essay_arXiv_version.pdf

Have a look at section 1.2.3 on proper and improper mixtures.

That section considers an entangled system of A and B. But removing B from consideration by tracing over the environment (i.e. system B) gives equation 1.23, which is an improper mixed state. No Born rule invoked yet. If we had a huge number of systems and traced over that environment, instead of just system B, then it's the same - but un-entangling such a large number of systems is practically impossible - that's why it's irreversible - and the Born rule not yet invoked. The concern may be about why tracing over the environment is the right thing to do - that indeed does require the Born rule - but at this point it's simply a procedure to remove system B.
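For concreteness, a small numerical sketch of that step (my own toy example, not from the essay): tracing system B out of an entangled pure state is a purely algebraic operation, and it already produces the improper mixed state without any probability rule being invoked.

```python
import numpy as np

# Entangled state of system A and environment B:  a|00> + b|11>
a, b = 0.6, 0.8
psi = np.zeros(4, dtype=complex)
psi[0] = a                                        # |00>
psi[3] = b                                        # |11>

rho = np.outer(psi, psi.conj())                   # pure joint state |psi><psi|

# Trace over B -- purely algebraic, no probability rule used.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(rho_A.real)
# [[0.36 0.  ]
#  [0.   0.64]]   i.e. the improper mixture |a|^2 |0><0| + |b|^2 |1><1|
```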

Now we need to interpret the improper mixed state - that requires the Born rule - and indeed understanding why you trace over the environment does as well, but I am including that in the interpretation step. What it shows is that the ##p_i## of the improper mixed state ##\sum_i p_i |b_i\rangle\langle b_i|## is the probability of the system being in ##|b_i\rangle\langle b_i|##.

The MWI interprets each ##|b_i\rangle\langle b_i|## as a separate world. The issue is: which world will be experienced? MWI does not detail that. The best you can do is try to figure out some kind of objective confidence for that. You can simply assume the Born rule for that to give a confidence - not yet a probability. But you can do better - you can use Gleason's theorem to derive it from basis independence, or the argument of Wallace (which I believe is really invoking the same assumption, but it's not something I am particularly motivated to pursue - it was simply something that struck me when I went through Wallace's text). Take your pick - it's not really critical. Then using Bayesian hypothesis testing you can derive long-term averages and hence probabilities.

Thanks
Bill
 
  • #21
bhobba said:
The issue isn't whether it involves irreversibility; the issue is whether it involves the Born rule.

Okay, I thought you were saying that decoherence is unconnected with irreversibility.
 
  • #22
atyy said:
Just a note that the simplest and most common way of deriving a meaning for the improper mixed state in a reduced density matrix is to assume the system and environment together are in a pure state, and together obey the Born rule. To get the reduced density matrix, one assumes that the observable is a local observable on the system "times" the identity on the environment. Applying the Born rule to the pure state automatically traces out the environment for such local observables. This works no matter how small the environment is.

Indeed - and here is the detail (see Lubos's reply):
http://physics.stackexchange.com/qu...ake-the-partial-trace-to-describe-a-subsystem

But the MWI simply has to assume that after tracing over the environment (without yet saying why that's what you should do - it's a pretty obvious sort of thing, but justifying it does require the Born rule) each term is considered a separate world. It is then that the Born rule needs to be invoked to interpret it.

I also want to emphasize that I am not particularly worried about whether you strictly need the Born rule for decoherence or not. Gleason's theorem applies regardless.

Thanks
Bill
 
  • #23
bhobba said:
It's obvious.
It isn't!

bhobba said:
The hypothesis is what world you are in.

The number giving the confidence is via Gleason's theorem, Wallace's argument, or even simply accepting the trace formula as an axiom.
Gleason's theorem doesn't provide any means of derivation, only of consistency (*).
Accepting the trace formula as an axiom is not the intention of the Everett interpretation.
Wallace's argument is far from obvious: in his summary paper from 2007 he writes "it remains a subject of controversy whether or not these 'proofs' indeed prove what they set out to prove".

So that being the case I don't think anything is "obvious" here.

(*) Example: Let's say we have a billiard table represented by ##[0,a] \times [0,b]## and we're doing Newtonian mechanics. Then somebody asks us to introduce a probability, and provides some consistency conditions for it. Of course we would reject that, because we do not need any probability in deterministic Newtonian mechanics.

So even if we know how to do it, we still do not know WHY we should do it at all. But I agree that I have to understand Wallace's argument in more detail. My hope was that you could explain it in some sense.
 
  • #24
tom.stoer said:
Gleason's theorem doesn't provide any means of derivation, only of consistency

Errrr. Come again. It's a theorem - it's beyond doubt. The only issue is its assumptions, and its only one is that the measure is basis-independent. That's it, that's all. And it's unequivocal - the trace formula is the only one.

If we wish to define a measure, and for it to be basis-independent, then Born's rule is it - there is no way out.

Does MWI accomplish what its adherents claim? After reading Wallace's book, my opinion is that strictly speaking it doesn't. Assumptions other than Schrödinger's equation and unitary state evolution are required. I find them rather innocuous, reasonableness sort of stuff - but they are there.

Regarding the probabilities, that was discussed at length in that long thread - it's the Bayesian thing - whether you think that resolves how a deterministic theory introduces them depends purely on why you think a confidence level is required - if you think it's due to a lack of knowledge, then determinism is not violated.

I thought I was pretty clear - I am not enamored with Wallace's argument - I believe it has a tacit assumption of basis independence anyway - which is why I prefer Gleason.

Added Later:
Went through Wallace's book to get the detail. It's on page 475. He proves the Non-Contextuality theorem, which basically says his argument works iff it's non-contextual, i.e. basis-independent. It's really Gleason in another guise IMHO.

But if you want to go through it, it's freely available:
http://arxiv.org/pdf/0906.2718v1.pdf

The utility he talks about is the level of confidence I am talking about - it's basically the Bayesian view.

Thanks
Bill
 
  • #25
bhobba said:
Errrr. Come again. It's a theorem - it's beyond doubt. ... And it's unequivocal - the trace formula is the only one.
I think you still don't get where the problem really is.

First of all: there is no doubt that Gleason's theorem says that the probability measure on Hilbert spaces is uniquely determined by the trace formula. I never said otherwise.

But Gleason's theorem only says that IF you want to introduce a probability measure THEN it must be via the trace formula. Gleason's theorem does NOT explain WHY you should introduce a probability at all. So there are two choices:
1) introduce the trace formula / Born's rule as an axiom; this does not explain the WHY, either.
2) derive the trace formula from the formalism or from some other natural argument; this is what the MWI has to do.

The reason why the WHY matters in the MWI is that it provides a bird's-eye perspective on the full Hilbert space without any probability at all. So strictly speaking there is no probability or hypothesis testing, and therefore there is no good reason to apply Gleason's theorem at all.

Of course you can say that we observe something like probabilities in nature, and therefore they should be present in the formalism as well. But of course I prefer the theory to explain the reason WHY there should be probabilities. This is the question I am asking, and this is what Gleason's theorem does not explain.

Anyway - thanks for the link to Wallace's paper; I'll try to understand his arguments, perhaps it will become clearer then.
 
  • #26
bhobba said:
...but un-entangling such a large number of systems is practically impossible - that's why it's irreversible - and the Born rule not yet invoked.

I guess the question is what "practically impossible" means here. I assumed that it was a matter of probability--that the probability of a decoherent mixture returning to a pure state is vanishingly small. But that requires probability to be meaningful. But I suppose you could emphasize the word "practically" and make it about what is possible for humans to arrange. We can't arrange for a mixture to unmix and become a pure state, just because of the huge number of degrees of freedom that would have to be controlled. That notion of "practically" doesn't involve probability. (Maybe)
 
  • #27
tom.stoer said:
But Gleason's theorem only says that IF you want to introduce a probability measure THEN it must be via the trace formula. Gleason's theorem does NOT explain WHY you should introduce a probability at all.
It is still unclear to me what exactly the MWI has to explain and what such an explanation could look like.

I don't think we should expect a derivation of the concept of probability from the bird's eye point of view. The imaginary bird simply sees the evolution and branching of the worlds. What sense does it make for him to say one world is more "probable" than another one? How is the bird conceptually different from Laplace's demon in classical mechanics?

And if we consider the inside perspective of the frog, why isn't his incomplete knowledge - namely about which world he is in - sufficient to justify the use of probabilities?
 
  • #28
kith said:
It is still unclear to me what exactly the MWI has to explain and what such an explanation could look like.

I don't think we should expect a derivation of the concept of probability from the bird's eye point of view. The imaginary bird simply sees the evolution and branching of the worlds. What sense does it make for him to say one world is more "probable" than another one? How is the bird conceptually different from Laplace's demon in classical mechanics?

And if we consider the inside perspective of the frog, why isn't his incomplete knowledge - namely about which world he is in - sufficient to justify the use of probabilities?

In a branching universe, from the point of view of an observer in the universe, there is definitely a subjective notion of probability, which is the relative frequencies of events in his past. The problem with connecting this with quantum mechanics (or ANY theoretical basis for computing probabilities) is that these relative frequencies will be different in different branches. The best you can do is say something like this:

The set of branches in which the relative frequencies do not converge to probabilities predicted by QM (or whatever theory you're using) has very small measure. That is, the probability of being in such a branch is very small.​

What's philosophically unsatisfying to me about this is that there is no operational meaning to this measure on branches. You can't give it meaning in terms of relative frequencies. Ultimately, the best you can say is something like this:

We have a measure on branches. Based on this measure, we can identify a set of "typical" branches. Among these typical branches, relative frequencies work out like our theories say they should. If we happen to be on a typical branch, then we can use our theory to compute likelihoods for future events.​

But this is a little misleading, itself, in that we aren't on a branch. We are simultaneously on every branch that has the same past events as this branch. So we're actually on typical and atypical branches, simultaneously.

To me, to really make sense of our use of probabilities in science, we have to make an artificial assumption that is not actually justified by our theory. We assume that we are on some specific complete branch (so that it makes sense to talk about the future as a singular object), and furthermore, we assume that our complete branch is typical. In a sense, this is a kind of deterministic hidden-variables theory, where the hidden variable is the branch we're on. Of course, it's very highly nonlocal.

All the above applies to ANY theory with intrinsic probabilities. Quantum mechanics has the additional conceptual complication that there are no objective "branches". The branching structure depends on which basis you are using to describe the universe. (Maybe decoherence tells us which one to use?)
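To make the "small measure" statement above concrete, here is a minimal toy sketch (my own, with assumed numbers): for a qubit measured ##N## times with Born weight ##p## for one of the outcomes, the total branch weight of outcome sequences whose relative frequency deviates from ##p## by more than ##\epsilon## shrinks rapidly as ##N## grows, even though the number of such branches keeps growing.

```python
from math import comb

def atypical_weight(p, N, eps):
    """Total Born weight of the branches (outcome sequences of length N)
    whose relative frequency of outcome 1 deviates from p by more than eps."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N + 1)
               if abs(k / N - p) > eps)

p, eps = 0.36, 0.05        # Born weight |a|^2 for outcome 1, and the tolerance
for N in (10, 100, 1000):
    print(N, atypical_weight(p, N, eps))
# The weight of the "atypical" branches drops towards zero as N grows (a law of
# large numbers under the branch measure), while their sheer number grows.
```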
 
  • #29
tom.stoer said:
But Gleason's theorem only says that IF you want to introduce a probability measure THEN it must be via the trace formula.

That's not what Gleason says. It concerns introducing a measure - it is silent about how that measure is interpreted.

bhobba said:
Regarding the probabilities, that was discussed at length in that long thread - it's the Bayesian thing - whether you think that resolves how a deterministic theory introduces them depends purely on why you think a confidence level is required - if you think it's due to a lack of knowledge, then determinism is not violated.

I am specifically claiming it's NOT interpreted as a probability - but as a Bayesian level of confidence of which world you are in. Probability becomes a derived concept via Bayesian hypothesis testing. This was the key point in the longer thread, and a key point I have been making here.

Thanks
Bill
 
  • #30
bhobba said:
I am specifically claiming it's NOT interpreted as a probability - but as a Bayesian level of confidence of which world you are in. Probability becomes a derived concept via Bayesian hypothesis testing. This was the key point in the longer thread, and a key point I have been making here.

I always thought of Bayesian "level of confidence" as a probability. It's a subjective notion of probability, but it's a probability--it obeys the usual rules of probability.
 
  • #31
stevendaryl said:
I always thought of Bayesian "level of confidence" as a probability. It's a subjective notion of probability, but it's a probability--it obeys the usual rules of probability.

Are you making the distinction that a "probability" refers to a nondeterministic event that has yet to happen, while a "level of confidence" can refer to any uncertain fact (whether about the future, the past, the laws of physics, etc.)?
 
  • #32
kith said:
And if we consider the inside perspective of the frog, why isn't his incomplete knowledge - namely about which world he is in - sufficient to justify the use of probabilities?

What MWI does not do is specify which world you experience. It's not part of the theory. All you can do is assign a rational level of confidence to which world will be experienced. With that level of confidence you can do Bayesian hypothesis testing. Probabilities become a derived concept in that view.

This has all been discussed in the longer thread. MFB did many posts there explaining it, e.g.:
https://www.physicsforums.com/showthread.php?p=4487215#post4487215

At first I didn't get it, but then I did and kicked myself for not seeing it earlier.

Thanks
Bill
 
  • #33
bhobba said:
It concerns introducing a measure - it is silent about how that measure is interpreted.
OK, I agree. Gleason's theorem says nothing about the interpretation. That's why any interpretation - probability, confidence level, etc. - has to be motivated by something else, not by Gleason's theorem. Yes, I think we agree on that.

bhobba said:
I am specifically claiming it's NOT interpreted as a probability - but as a Bayesian level of confidence of which world you are in.
I can still agree.

But then please explain WHY we should introduce a concept of Bayesian hypothesis testing in a fully deterministic theory?

I agree that - as soon as you are convinced to introduce a Bayesian level of confidence - Gleason's theorem tells you how to do that. But looking at the bare formalism (= Hilbert space plus unitary time evolution; without the Born rule, without any collapse interpretation, without knowledge about the history of QM!), why should I introduce a Bayesian level of confidence? Why should I do that?

Without any further explanation it seems that the MWI has to introduce or derive the Born rule simply because we know that it works in the collapse interpretation. But the MWI should be able to explain this without referring to anything else. So I am asking for this specific reason.
 
  • #34
stevendaryl said:
I always thought of Bayesian "level of confidence" as a probability. It's a subjective notion of probability, but it's a probability--it obeys the usual rules of probability.

You can view it that way. But it's a different view, reflecting simply a lack of information.

Thanks
Bill
 
  • #35
tom.stoer said:
But then please explain WHY we should introduce a concept of Bayesian hypothesis testing in a fully deterministic theory?

If it is sensible for us to introduce a notion of probability to describe the apparent nondeterminism in our historical record of past events, then how could it cease to be a sensible thing to do once there are alternative worlds where things turned out differently? The subjective notion of probability isn't affected by the existence of multiple worlds.

The evolution of the whole many-worlds may be deterministic, but the perspective of an observer in a single branch is certainly not deterministic. Past history in that branch doesn't determine the future.
 
