Question regarding the Many-Worlds interpretation

  • #91
mfb, this is no issue in the collapse interpretation b/c you do not have to derive the probability for the collapse; it's postulated.

First of all we know that the QM probability

##p_i = |\langle i|\psi\rangle|^2##

is correct, simply b/c it agrees with our observation.
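(For concreteness, a minimal numerical check of this formula in Python, using the 10%/90% two-outcome state discussed later in this thread; the state and basis labels are illustrative only.)

```python
import numpy as np

# |psi> = sqrt(0.1)|x> + sqrt(0.9)|y> in the {|x>, |y>} basis
psi = np.array([np.sqrt(0.1), np.sqrt(0.9)])

# Born rule: p_i = |<i|psi>|^2
for label, i in (("x", np.array([1.0, 0.0])), ("y", np.array([0.0, 1.0]))):
    p = abs(np.vdot(i, psi)) ** 2
    print(f"p_{label} = {p:.2f}")  # p_x = 0.10, p_y = 0.90
```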

In the MWI case the problem is to derive this QM probability from branch counting (or something else in the formalism). You have two "probability measures" and two "perspectives", one which is known from other interpretations and experiments, another one from branch counting.

In the collapse case the collapse to i=x,y happens with the correct QM probability by definition! And when comparing it with experiment I see that it perfectly agrees. I cannot explain why it works (b/c I cannot explain why and how the collapse happens), but it has worked for nearly a century. In the collapse case there is no branch counting, and the two perspectives are identical! So I would never have the idea not to take this probability into account.

In the MWI case there are fewer axioms, so there must be more theorems ;-)
 
  • #92
tom.stoer said:
mfb, this is no issue in the collapse interpretation b/c you do not have to derive the probability for the collapse; it's postulated.
Right, but how do you measure this probability experimentally? No measurement result will ever be a probability.

First of all we know that the QM probability

##p_i = |\langle i|\psi\rangle|^2##

is correct, simply b/c it agrees with our observation.
As shown in my previous post, this is already a non-trivial interpretation of the measurement results.

In the MWI case the problem is to derive this QM probability from branch counting (or something else in the formalism).
You don't have to do this, in the same way you don't care about highly improbable events (as calculated with Born) in collapse interpretations.
 
  • #93
Jazzdude: what puzzles me is that you seem to reject Zeh, Zurek, Wallace, Saunders and Deutsch like it's commonplace. But it really isn't.

More and more people have become sympathetic and some have become downright proponents of MWI in the last 5 years.

Look at skeptics like Matt Leifer, Scott Aaronson and Peter J. Lewis: while none of them are downright MWI'ers, they all have written well about it lately.

Peter J. Lewis has written extensively about the Born Rule issue in MWI and so on, but look at his review of Wallace's book: http://ndpr.nd.edu/news/38878-the-emergent-multiverse-quantum-theory-according-to-the-everett-interpretation/

Note that I am *not* a proponent of MWI as I think the Born Rule issue is still an issue, but I'd love to understand better how you dismiss the decoherence approach in terms of bases.
 
  • #94
mfb said:
Right, but how do you measure this probability experimentally? No measurement result will ever be a probability.
@mfb, it's trivial.

In the collapse interpretation I have
1) a statistical frequency written down as a result string "xyxxyx..." by one observer
2) a probability which is postulated for the collapse and which can be calculated directly
Both (1) and (2) agree; that's why QM and "Copenhagen" work.

In the MWI I have
1) statistical frequencies written down as a set of result strings by a set of observers
2) no probability postulate

So if MWI is to be correct, then
1) the probability must be derived
2) it must not only work out top-down but also bottom-up

I think the problem I indicated in the very beginning can be fixed by replacing
a) the sum over branches with a branch-specific measure with
b) a sum over branches where the probability is replaced by an infinite number of sub-branches (with the same result string!)
Then there is no probability anymore, but the correct statistical frequency is carried by the measure, i.e. by branch counting.

If this is correct, then my conclusion changes slightly: if MWI is to be correct, then
1) the measure must be derived from branch counting
(it will work out top-down and bottom-up automatically)
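(A toy sketch of this branch-counting fix, assuming each outcome is replaced by a number of identical sub-branches proportional to its Born weight, here approximated by rationals with common denominator 10:)

```python
from fractions import Fraction
from collections import Counter
from itertools import product

# Born weights of the two outcomes, as rationals with common denominator 10
weights = {"x": Fraction(1, 10), "y": Fraction(9, 10)}

# each branch is replaced by n_i identical sub-branches,
# n_i proportional to its Born weight: 1 sub-branch for "x", 9 for "y"
sub_branches = [o for o, w in weights.items() for _ in range(int(w * 10))]

# after 3 measurements there are 10^3 equally counted sub-branches;
# plain counting over sub-branches now carries the Born statistics
strings = Counter("".join(s) for s in product(sub_branches, repeat=3))
total = sum(strings.values())
print(strings["yyy"] / total)  # 0.729 = 0.9^3
print(strings["xxx"] / total)  # 0.001 = 0.1^3
```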

Anyway, having one axiom fewer than in collapse models, MWI has to deliver what it promised as a theorem.
 
Last edited:
  • #95
tom.stoer said:
@mfb, it's trivial.

In the collapse interpretation I have
1) a statistical frequency written down as a result string "xyxxyx..." by one observer
2) a probability which is postulated for the collapse and which can be calculated directly
Both (1) and (2) agree; that's why QM and "Copenhagen" work.
It is not trivial, if you don't use handwaving.
Perform your 10%/90% experiment 1000 times. I am highly confident (from an everyday perspective) you will not get exactly 100 x and 900 y. Does this mean your theory is wrong? Certainly not.

How do you test "probabilities" predicted by your theory? How do you distinguish between correct and wrong predictions?

I know this can be done. And if you write down a formal way to do this, you can do exactly the same for MWI, if you are interested in hypothesis testing.

I think the problem I indicated in the very beginning can be fixed by replacing
a) the sum over branches with a branch-specific measure with
b) a sum over branches where the probability is replaced by an infinite number of sub-branches (with the same result string!)
Then there is no probability anymore, but the correct statistical frequency is carried by the measure, i.e. by branch counting.

If this is correct, then my conclusion changes slightly: if MWI is to be correct, then
1) the measure must be derived from branch counting
(it will work out top-down and bottom-up automatically)

Anyway, having one axiom fewer than in collapse models, MWI has to deliver what it promised as a theorem.
(a) is fine.
Having fewer axioms is better in terms of Occam's razor.
 
  • #96
mfb said:
It is not trivial, if you don't use handwaving.
Perform your 10%/90% experiment 1000 times. I am highly confident (from an everyday perspective) you will not get exactly 100 x and 900 y. Does this mean your theory is wrong? Certainly not.

How do you test "probabilities" predicted by your theory? How do you distinguish between correct and wrong predictions?
It's about statistical hypothesis tests, levels of significance and all that.

mfb said:
And if you write down a formal way to do this, you can do exactly the same for MWI, if you are interested in hypothesis testing.
Provided it works for both top-down and bottom-up.

mfb said:
(a) is fine.
Having fewer axioms is better in terms of Occam's razor.
(a) is nice in theory (top-down) but not in practice (bottom-up) as I tried to explain several times.

And yes, having fewer axioms is fine in terms of Occam's razor - provided that the required theorems can be proven. But what I read here seems to indicate that there is by no means agreement on these bold claims.

So my conclusion is that MWI has no philosophical acceptance problem but a physical problem. It is not clear whether the required theorems regarding probabilities / measures / branching and branch counting / Born rule etc. follow from the formalism.
 
  • #97
tom.stoer said:
It's about statistical hypothesis tests, levels of significance and all that.

Maybe this would be considered argumentative, but I don't personally regard those standard tools of statistical analysis as having a whole lot of first-principles, theoretical support for them. They are simply rules of thumb for using statistical data. You can use the same sorts of rules of thumb, regardless of whether you believe the statistics arise from randomness, or ignorance of hidden variables, or many-worlds type multiplicity. If they are just rules of thumb, then the only justification you need for them is that they seem to work pretty well, and that justification is empirical, not theoretical. You don't need a different justification for each interpretation of quantum mechanics.
 
  • #98
stevendaryl said:
you don't need a different justification for each interpretation of quantum mechanics.
I don't think this is true.

You have the Born rule as an axiom in collapse interpretations. You do not have this axiom in MWI, instead you have branch counting. So you need a theoretical derivation and experimental tests, otherwise it's not physics.

The claim that MWI is fully equivalent to other interpretations can't be true if their axioms differ and if the gap cannot be closed by a theorem.
 
  • #99
stevendaryl said:
Maybe this would be considered argumentative, but I don't personally regard those standard tools of statistical analysis as having a whole lot of first-principles, theoretical support for them. They are simply rules of thumb for using statistical data. You can use the same sorts of rules of thumb, regardless of whether you believe the statistics arise from randomness, or ignorance of hidden variables, or many-worlds type multiplicity. If they are just rules of thumb, then the only justification you need for them is that they seem to work pretty well, and that justification is empirical, not theoretical. You don't need a different justification for each interpretation of quantum mechanics.

Or simply: probabilistically infected data, convenient coincidences.

http://plato.stanford.edu/entries/probability-interpret/
 
  • #100
tom.stoer said:
I don't think this is true.

You have the Born rule as an axiom in collapse interpretations. You do not have this axiom in MWI, instead you have branch counting. So you need a theoretical derivation and experimental tests, otherwise it's not physics.

The claim that MWI is fully equivalent to other interpretations can't be true if their axioms differ and if the gap cannot be closed by a theorem.

I don't think it's really true that you have "branch counting" in MWI. The branches are not discrete, there are infinitely many of them. With an infinite collection of possibilities, there is no way to count the numbers. You have to have a measure on the sets of possibilities. I'm not sure whether it is possible to derive the measure to use from first principles, but if it has to be an additional axiom, I don't see how that's any worse than the standard collapse interpretations.

I also think that you're glossing over the conceptual problems with the standard (Copenhagen) interpretation. You say you have the Born rule, but as others have pointed out, there is no way to absolutely test the correctness of that rule. The best you can do is have a rule of thumb for saying when the discrepancy between relative frequencies and the probabilities predicted by the Born rule are great enough to falsify your theory. What such a rule of thumb amounts to is ASSUMING that our actual history is fairly typical of the possible histories described by the theory. Without such an assumption, a probabilistic theory is not testable. The only difference (as far as the meaningfulness of probabilities) that I can see between the standard collapse interpretation and the Many Worlds interpretation is that in the first case, the set of possibilities are considered to be theoretical possibilities, while in the second case, they are considered to be actual alternatives.
 
  • #101
stevendaryl said:
I don't think it's really true that you have "branch counting" in MWI. The branches are not discrete, there are infinitely many of them. With an infinite collection of possibilities, there is no way to count the numbers. You have to have a measure on the sets of possibilities.
It doesn't matter whether they are continuous or discrete. It's key that you have some well-defined measure.

stevendaryl said:
I'm not sure whether it is possible to derive the measure to use from first principles, but if it has to be an additional axiom, I don't see how that's any worse than the standard collapse interpretations.
An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.
 
  • #102
tom.stoer said:
It's about statistical hypothesis tests, levels of significance and all that.
That's exactly the hand-waving I mentioned.

I am not interested in precise numbers, but can you suggest an experimental test which can verify or disprove some hypothesis about the squared amplitudes of quantum-mechanical systems?




Here is what I would suggest:

Probabilistic interpretations:

Find a test that you can repeat as often as you like (like shooting photons with a specific polarization at a polarizer, rotated by some specific angle). For each photon, detect if it passed the polarizer. Let's assume this detection is 100% efficient and has no background.

Let's test the hypothesis "the squared amplitude* of the wave going through is 10% of the initial squared amplitude". I will call this hypothesis A. In probabilistic interpretations, this translates to "as expectation value, 10% of the photons pass through" via an additional axiom.
I will call this event "x", and the opposite event "y".

*and let's ignore mathematical details, it should be clear how this is meant

We decide to test 100,000 photons. If every photon has a 10% probability for "x" and all photons are independent, we expect the result "x" 10,000 times, with a standard deviation of (roughly) 100. This is standard probability theory, nothing physical so far.

To distinguish our hypothesis from other hypotheses (like "20% probability of x" - hypothesis B), we look for measurement results which are in agreement with A, but not with B or a large class of other possible hypotheses.
A natural choice is "we see agreement with hypothesis A if we see x between 9800 and 10200 times".
Mathematics tells us that with hypothesis A, we should see agreement with a probability of ~95%, while with hypothesis B, the probability is basically 0.
We can perform the test. If we get a result between 9800 and 10200 we are happy that hypothesis A passed the test and that we could reject hypothesis B and many others.
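(The numbers in this test design can be checked directly; a minimal sketch using scipy's binomial distribution, with the window and hypotheses as defined above:)

```python
from scipy.stats import binom

n = 100_000
lo, hi = 9800, 10200

# Hypothesis A: p = 0.1, mean 10000, std = sqrt(n*0.1*0.9) ~ 95
p_pass_A = binom.cdf(hi, n, 0.1) - binom.cdf(lo - 1, n, 0.1)
print(p_pass_A)  # ~0.965: A passes the test with ~95% probability

# Hypothesis B: p = 0.2, mean 20000 -- the window lies ~77 sigma below that
p_pass_B = binom.cdf(hi, n, 0.2) - binom.cdf(lo - 1, n, 0.2)
print(p_pass_B)  # 0.0 to machine precision: B is rejected
```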


There are hypotheses we could not reject with that test. Consider hypothesis C: The number of x-events will be even with 95% probability. Can we test this? Sure. Make another test with "we see agreement with hypothesis C if the number of x is even". If we get 10044, we do not reject C, if we get 10037, we do.

Actually, it is completely arbitrary which events we consider as "passing the test" versus "failing", as long as the sum of probabilities of all events in the class "passing the test" is some reasonably large number (like 95% or whatever you like).

To test more and more hypotheses with increasing precision, we can perform multiple experiments, which is basically the same as one larger experiment.

The result?
A true hypothesis will most likely (->as determined by the probabilities of the true hypothesis) pass the tests, while a wrong hypothesis will most likely (->as determined by the probabilities of the true hypothesis) fail.

Most possible results will reject the true hypothesis. Consider the first test, for example: Only a fraction of ~##10^{-16000}## of all possible results will pass the test. Even the most probable single result (no x at all) is part of the "reject the test" fraction of the possible measurements.
This small fraction of measurements passing the test is not fixed and depends on the test design, but for large tests it is always extremely small.
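(That fraction can be reproduced by counting result strings: the number of sequences whose x-count falls in the acceptance window, divided by all ##2^{100000}## possible sequences. A sketch in log-space:)

```python
from math import lgamma, log, exp

def log_comb(n, k):
    # natural log of the binomial coefficient C(n, k)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

n, lo, hi = 100_000, 9800, 10200

# log of the number of result strings passing the test (log-sum-exp)
terms = [log_comb(n, k) for k in range(lo, hi + 1)]
m = max(terms)
log_passing = m + log(sum(exp(t - m) for t in terms))

# fraction of all 2^n possible result strings that pass, as log10
log10_fraction = (log_passing - n * log(2)) / log(10)
print(log10_fraction)  # roughly -16000: only ~1 in 10^16000 results passes
```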

How can we "hope" that we hit one of those few events (in order to confirm the correct hypothesis? Well, we cannot. We just know that they get a large amplitude, and call this a large "probability". The "probability" to accept the true hypothesis and reject as many others as possible can go towards 1.
--> We cannot get physics right with certainty, we cannot even get it right with most possible measurement results, but we can get it right with a high probability (like "1-epsilon").

MWI

The QM formalism stays the same, and we can make hypotheses about amplitude.
We can define and perform the same tests as above. Again, most results will reject the true hypothesis - the true hypothesis will get rejected in most branches. But at the same time, most of the measure (we can let this fraction go towards 1 for many tests) will see passed tests for the true hypothesis only.
--> We cannot get physics right in all branches, we cannot even get it right with most branches, but we can get it right within branches with a large measure (like "1-epsilon").

That's all I want to get from tests.
 
  • #103
tom.stoer said:
It doesn't matter whether they are continuous or discrete. It's key that you have some well-defined measure.

It's the same measure as is used in the collapse interpretation.

An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.

I don't agree. It's an analogous situation in both the collapse interpretation and the MWI. In the collapse interpretation, there is a set of "possible" histories (possible according to the theory), and then there is our actual history. The Born rule only makes a testable prediction if we assume that our actual history is "typical" out of the set of all possible histories. In MWI, the only difference is that the alternative histories are not considered just theoretical possibilities, but are ACTUAL. They're just not our history. To get a prediction from MWI, you have to have a notion of a typical history, and assume that ours is typical. I don't see much difference, as far as the meaningfulness of probabilistic predictions.
 
  • #104
tom.stoer said:
An axiom would not really make sense. A theorem is required, but w/o a sound proof it's unclear how MWI is viable.

Why doesn't it make sense to have an axiom giving the measure to use?
 
  • #105
The_Duck said:
Why doesn't it make sense to have an axiom giving the measure to use?

Here's a "toy" universe that has some of the properties of MWI:

The universe is deterministic, except for a mysterious, one-of-a-kind perfect coin. When you flip it, it's completely impossible to predict ahead of time whether it will end up "heads" or "tails".

Behind the scenes, this is what really happens: Whenever someone flips the coin, God (or if you want to be non-religious about it, the Programmer---you can assume that the world is really a simulation conducted inside a supercomputer) stops everything for a moment, and makes two copies of the world that are identical in every respect, except that in one copy, the coin lands head-up, and in the other copy, the coin lands tails-up.

As time goes on, some worlds will have histories in which the coin has landed heads-up half the time, and tails-up the other half the time. Other worlds will have different relative frequencies.

Now, a person living in one of the worlds can come up with a measure on possible histories, by using the assumption that every coin flip has probability 50/50 of landing heads or tails. He can define a "typical world" as one in which the relative frequencies approach 1/2 in the limit. He can prove that, according to his measure, the set of worlds that are "typical" have measure 1, and the set that are "atypical" have measure 0. So if he assumes that his own world is typical, he can make probabilistic predictions.

But not everybody will live in a world where the relative frequency for heads approaches 1/2. Some people will live in a world where the relative frequency approaches 1/3, or 1/5, or any other number you choose. So you can't deduce from the many-worlds theory (I'm talking about the theory of the many worlds created by God or the programmer, not Everett's Many Worlds) what the relative frequency must be, because it's different in different possible worlds. You can assume that you live in a world with a particular relative frequency, but that's an additional assumption; it doesn't follow from the theory.
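(This toy universe can be made concrete. Since every flip doubles the worlds, the counting measure coincides with the 50/50 measure here; a sketch showing that the counted fraction of "typical" worlds tends to 1 as the number of flips grows, while atypical worlds never disappear:)

```python
from math import comb

def typical_fraction(n, eps=0.05):
    """Counted fraction of the 2^n histories whose relative frequency
    of heads lies within eps of 1/2."""
    lo = int((0.5 - eps) * n)
    hi = int((0.5 + eps) * n)
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

for n in (10, 100, 1000):
    print(n, round(typical_fraction(n), 4))
# ~0.4512, ~0.7287, ~0.9986: the typical set's counting measure tends
# to 1, yet worlds with frequency 1/3, 1/5, ... exist at every n
```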
 
  • #106
stevendaryl, The_Duck,

In the meantime I am totally confused whether it makes sense to talk about a well-defined many-worlds interpretation at all, or whether there is only a collection of guesses.

I copied the following text from Wikipedia, but I could use other references (Zurek, Zeh, Wallace, ...) as well. This is what I understood and this is what I am talking about here. And this is what makes sense to me and what has the potential to turn an interpretation into a well-defined theory:

A consequence of removing wavefunction collapse from the quantum formalism is that the Born rule requires derivation, since many-worlds claims to derive its interpretation from the formalism. Attempts have been made, by many-world advocates and others, over the years to derive the Born rule, rather than just conventionally assume it, so as to reproduce all the required statistical behaviour associated with quantum mechanics. There is no consensus on whether this has been successful.[24][25][26]

Everett (1957) briefly derived the Born rule by showing that the Born rule was the only possible rule, and that its derivation was as justified as the procedure for defining probability in classical mechanics. ... Andrew Gleason (1957) and James Hartle (1965) independently reproduced Everett's work, known as Gleason's theorem[27][28] which was later extended.[29][30]

Bryce De Witt and his doctoral student R. Neill Graham later provided alternative (and longer) derivations to Everett's derivation of the Born rule. They demonstrated that the norm of the worlds where the usual statistical rules of quantum theory broke down vanished, in the limit where the number of measurements went to infinity.

MWI removes the observer-dependent role in the quantum measurement process by replacing wavefunction collapse with quantum decoherence. Since the role of the observer lies at the heart of most if not all "quantum paradoxes," this automatically resolves a number of problems ... MWI, being a decoherent formulation, is axiomatically more streamlined than the Copenhagen and other collapse interpretations; and thus favoured under certain interpretations of Occam's razor.

So what we are talking about here is a physical branching on the level of the state vector. It becomes a superposition of "nearly uncoupled" or "dynamically disconnected" superselection sectors (= branches) which are stable w.r.t. time evolution.

This means that branching, number of branches and especially the measure, factorization in orthogonal subspaces and their stability etc. must follow from the theory, i.e. Hilbert space + Schrödinger equation + decoherence (or some other physical process). It means that postulating Born's rule again doesn't help since a) then we exactly replace the unphysical collapse by unphysical branching (which is no progress but choosing between the devil and the deep blue sea) and since b) it does not resolve the problem of the bottom-up perspective (which I tried to explain a couple of times). And I would say that this is mainstream; many agree that Born's rule has to follow as a result, and many have worked on a derivation.
 
  • #107
tom.stoer said:
It means that postulating Born's rule again doesn't help since a) then we exactly replace the unphysical collapse by unphysical branching (which is no progress but choosing between the devil and the deep blue sea) and since b) it does not resolve the problem of the bottom-up perspective (which I tried to explain a couple of times). And I would say that this is mainstream; many agree that Born's rule has to follow as a result, and many have worked on a derivation.

It still follows from Gleason's theorem, and non-contextuality is very reasonable in the MWI.

The issue though is not that the Born Rule is not derivable within the MWI, it's why you get probabilities at all from a deterministic theory. That's what the other proofs like Wallace's are trying to do. They have a rational definition of probability based on decision theory and derive it via that method. Whether it accomplishes what they want is open to debate. The theorem is valid, but exactly what it is saying, or even whether it is circular (here meaning that decision theory itself contains an implicit appeal to a notion of probability, making the whole thing circular, just as probabilities based entirely on the frequentist interpretation are), is arguable.

These are deep questions, and all I can suggest is to get hold of a good book on it and go through it for yourself, e.g. the one I am studying right now:
https://www.amazon.com/dp/0199546967/?tag=pfamazon01-20

Thanks
Bill
 
Last edited by a moderator:
  • #108
bhobba said:
It still follows from Gleason's theorem, and non-contextuality is very reasonable in the MWI.

The issue though is not that the Born Rule is not derivable within the MWI, ...
That's one step, but by no means sufficient.

What we need in addition is
tom.stoer said:
... a physical branching on the level of the state vector, a superposition of "dynamically disconnected" superselection sectors (= branches) which are stable w.r.t. time evolution.

This means that branching, number of branches and especially the measure, factorization in orthogonal subspaces and their stability etc. must follow from the theory, i.e. Hilbert space + Schrödinger equation + decoherence (or some other physical process).

It seems that the idea behind MWI is compelling philosophically, but by no means complete mathematically.
 
  • #109
tom.stoer said:
That's one step, but by no means sufficient.

Can you elaborate? Gleason's plus non-contextuality implies Born.

If you mean it also requires the assumption the outcomes are described by a probability measure then yes - that is an assumption at odds with the foundations of MWI.

Thanks
Bill
 
  • #110
stevendaryl said:
Here's a "toy" universe that has some of the properties of MWI:

The universe is deterministic, except for a mysterious, one-of-a-kind perfect coin. When you flip it, it's completely impossible to predict ahead of time whether it will end up "heads" or "tails".

Whenever someone flips the coin, God ... stops everything for a moment, and makes two copies of the world that are identical in every respect, except that ...

So you can't deduce from the many-worlds theory (I'm talking about the theory of the many worlds created by God or the programmer, not Everett's Many Worlds) what the relative frequency must be, because it's different in different possible worlds. You can assume that you live in a world with a particular relative frequency, but that's an additional assumption; it doesn't follow from the theory.
This toy model is irrelevant for MWI as I understand it.

I am talking about a theory with a Hilbert space, a Hamiltonian H and a time evolution operator U = exp(-iHt), and a single state (ray) to start with.

Nobody copies states or Hilbert spaces.
Everything follows from H w/o additional assumption.
The problem "top-down" vs. "bottom-up" does not arise for 50% probability.
 
  • #111
bhobba said:
Can you elaborate? Gleason's plus non-contextuality implies Born.
I think I did this already a couple of times.

What we need in addition is
tom.stoer said:
... a physical branching on the level of the state vector, a superposition of "dynamically disconnected" superselection sectors (= branches) which are stable w.r.t. time evolution.

This means that branching, number of branches and especially the measure, factorization in orthogonal subspaces and their stability etc. must follow from the theory, i.e. Hilbert space + Schrödinger equation + decoherence (or some other physical process).

It seems that the idea behind MWI is compelling philosophically, but by no means complete mathematically.
 
  • #112
tom.stoer said:
What we need in addition is

QM follows from the two axioms in Ballentine. The first axiom is simply the existence of observables. The second is the Born Rule. What MWI assumes already contains observables; it simply needs the Born Rule to imply all of QM.

The logic is as follows. QM proceeds exactly as normal with decoherence occurring at an observation. The elements of mixed states from decoherence are interpreted as separate worlds. The issue is why they are experienced with a probability related to their weight in the mixed state. The Born rule implies that.
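(A minimal sketch of that logic for the 10%/90% state and an idealized two-state environment; the perfect-recording interaction assumed here is the textbook idealization of decoherence:)

```python
import numpy as np

# system: sqrt(0.1)|x> + sqrt(0.9)|y>; environment starts in a ready state
psi_sys = np.array([np.sqrt(0.1), np.sqrt(0.9)])

# idealized decoherence: the environment records the outcome perfectly,
# |x>|0> -> |x>|e_x>, |y>|0> -> |y>|e_y>, with <e_x|e_y> = 0
total = (psi_sys[0] * np.kron([1.0, 0.0], [1.0, 0.0])
         + psi_sys[1] * np.kron([0.0, 1.0], [0.0, 1.0]))

# reduced density matrix of the system: trace out the environment
rho = np.outer(total, total).reshape(2, 2, 2, 2)
rho_sys = np.einsum('ikjk->ij', rho)
print(np.round(rho_sys, 3))  # diag(0.1, 0.9): the weights of the two "worlds"
```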

Thanks
Bill
 
  • #113
Honestly, I am a bit confused by a couple of recent posts here. mfb and stevendaryl seem to claim that the Born rule is not needed at all in MWI. I get that we can deduce the Born rule from our actual history, just like people who observed results not in accordance with the Born rule could deduce a different rule from their history. So a priori, the Born rule would be nothing special. Is this the basis of your argument?
 
  • #114
No, we are running around in circles.

We do not introduce Born's rule as an axiom, even if Ballentine does, simply b/c we do not discuss Ballentine.

Even if we are able to explain how Born's rule can be derived, it's by no means clear why the bottom-up perspective within one branch should care about Born's rule (which applies to the top-down perspective).
 
  • #115
bhobba said:
Can you elaborate? Gleason's plus non-contextuality implies Born.
How does that work? As things look to me, non-contextuality doesn't hold in QM, and besides, the contextuality of QM can essentially be derived from Gleason's theorem; that's how Bell did it originally.

Also, I'm not at all sure I see how Gleason's theorem is relevant to probability in the MWI. What it gives is a measure on the closed subspaces of Hilbert space; but what the MWI needs is to make sense of the notion of 'probability of finding yourself in a certain branch'. It's not obvious to me how the two are related. I mean, sloppily one might say that Gleason tells you the probability of a certain observable having a certain value, but there seems to me a gap here in concluding that this is necessarily the same probability as finding yourself in the branch in which it determinately has that value. I could easily imagine a case in which Gleason's theorem, as a piece of mathematics, were true, but probability of being in a certain branch follows simple branch-counting statistics, which won't in general agree with Born probabilities.
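(A toy calculation of that gap, assuming naive branching with one branch per outcome per measurement: for repeated 10%/90% measurements, branch counting concentrates on result strings with x-frequency near 1/2, while the Born measure concentrates near 0.1:)

```python
from math import comb

n = 1000  # repeated 10%/90% measurements, two branches per measurement

# branch counting: all 2^n result strings counted equally
count_near_half = sum(comb(n, k) for k in range(450, 551)) / 2 ** n

# Born measure: a string with k "x"-results carries weight 0.1^k * 0.9^(n-k)
born_near_tenth = sum(comb(n, k) * 0.1 ** k * 0.9 ** (n - k)
                      for k in range(80, 121))

print(count_near_half)  # ~0.999: counting makes x-frequency ~ 1/2 typical
print(born_near_tenth)  # ~0.97:  Born weight makes x-frequency ~ 0.1 typical
```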
 
  • #116
kith said:
Honestly, I am a bit confused by the last few posts.
I am confused as well b/c I still think that there is not one well-defined MWI but only a collection of related ideas.
 
  • #117
tom.stoer said:
Even if we are able to explain how Born's rule can be derived, it's by no means clear why the bottom-up perspective within one branch should care about Born's rule (which applies to the top-down perspective).
For future experiments.

I guess my previous post was too long :(.
 
  • #118
S.Daedalus said:
Also, I'm not at all sure I see how Gleason's theorem is relevant to probability in the MWI. What it gives is a measure on the closed subspaces of Hilbert space; but what the MWI needs is to make sense of the notion of 'probability of finding yourself in a certain branch'. It's not obvious to me how the two are related. I mean, sloppily one might say that Gleason tells you the probability of a certain observable having a certain value, but there seems to me a gap here in concluding that this is necessarily the same probability as finding yourself in the branch in which it determinately has that value. I could easily imagine a case in which Gleason's theorem, as a piece of mathematics, were true, but probability of being in a certain branch follows simple branch-counting statistics, which won't in general agree with Born probabilities.
THANKS A LOT!

This is what I try to explain here!
 
  • #119
kith said:
Honestly, I am a bit confused by the last few posts. mfb and stevendaryl seem to claim that the Born rule is not needed at all in MWI.

It is needed.

The hope of MWI adherents is that it can be deduced from the Hilbert space formalism alone. There are a number of proofs around that purport to do that.

It's a matter of opinion and debate if they do. I believe on their own terms they do just that - but the key caveat is - ON THEIR OWN TERMS. One issue for example, as I mentioned, is whether the decision theory proof they use subtly contains what they are trying to prove in its assumptions. Other proofs, based on the idea of envariance, exist as well and have likewise been criticised for circularity. The debate rages and the issues are complex and subtle.

There are also issues associated with decoherence, such as the so-called factoring problem, but they need further investigation to be resolved one way or another.

Thanks
Bill
 
  • #120
S.Daedalus said:
but what the MWI needs is to make sense of the notion of 'probability of finding yourself in a certain branch'.
A simple question: Why?
What is wrong if we do not have this?
What does "probability" even mean in a deterministic theory?
 
