# The Assumptions of MWI

Everett created the Relative State Formulation to resolve what he saw as the contradictions of QM. He introduced his doctoral thesis by stating that conventional QM works perfectly well: it predicts quantum statistics. However, interpreting conventional QM realistically leads to paradoxes, largely associated with the collapse of the wavefunction.

Everett's solution was to remove the collapse. The system evolves under linear wave mechanics and nothing else. Everett developed a primitive theory of measurement which predicted the appearance of collapse. He thus replaced the axiom of collapse with a result derived from wave mechanics alone. It is now generally believed that Everett's theory does not provide the quantitative predictions of collapse theory but needs the addition of decoherence. (This is an important criticism of the original theory, but I do not want to discuss whether Everett's treatment of the measurement problem was complete; suffice it to say that it can be made so, or at least that any remaining problems are common to all interpretations.)

Everett showed that the appearance of wavefunction collapse follows from wave mechanics. To do this he considered the interaction of two subsystems: the system under observation, S, and the observer system O. He modelled the observation as a simple interaction that creates a state of O that is strongly correlated with the state of S. This is fairly trivial if S is in an eigenstate of the interaction and O enters a pointer state. (Again, the reasons why this is even possible, let alone normal, are part of the measurement problem.)

Given that the states of S correlate with the states of O, what happens when S is a superposition? Interaction leaves the states entangled - neither subsystem has a state of its own, the state can only be specified by reference to the state of the rest of the system. The total system is in a superposition of states which can be labelled by the observed outcome, xi, thus
$\sum_i a_i |x_i\rangle_S |x_i\rangle_O$

S is said to be in the state $|x_i\rangle$ relative to the state $|x_i\rangle$ of O, and vice versa. The states of the total system are popularly described as worlds, though Everett did not like the term. As there are typically many of them, the treatment is usually called the Many Worlds Interpretation.
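For concreteness, the correlated state above can be sketched as a toy computation. This is only a sketch: the three-outcome system and the particular amplitudes are my own illustrative choices, not anything from Everett.

```python
from math import sqrt

# Illustrative amplitudes a_i for a three-outcome observable (any unit vector works).
a = [1 / sqrt(2), 1 / 2, 1 / 2]

# Joint state sum_i a_i |x_i>_S |x_i>_O as a map (S outcome, O record) -> amplitude.
# Only the perfectly correlated terms appear, as in Everett's measurement model.
joint = {(i, i): a[i] for i in range(len(a))}

def relative_state_of_S(o_record):
    """State of S relative to O having recorded o_record:
    keep the compatible terms and renormalise."""
    branch = {s: amp for (s, o), amp in joint.items() if o == o_record}
    norm = sqrt(sum(abs(amp) ** 2 for amp in branch.values()))
    return {s: amp / norm for s, amp in branch.items()}

# Relative to O recording outcome 1, S is definitely in |x_1>:
print(relative_state_of_S(1))
```

Relative to any record of O, S has a definite state, even though neither subsystem has a state of its own; that is the whole content of "relative state" in this toy setting.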

There is considerable justification for calling the states "worlds". An observer experiences the world. In this case, O's experience is the state he/she or it enters as a result of observing S. Thus the different $x_i$ characterise different phenomenal states. Each state of S evolves independently in the superposition, so the picture of separate worlds coexisting is natural - as long as no new physics like "universe splitting" is added.

However, MWI has been criticised for making assumptions. Can someone please tell me what assumptions other than wave mechanics are implicit in the RSF? Or in calling the relative states "worlds" given the caveats above?

Thanks,
Derek

Last edited:

bhobba
Mentor
However, MWI has been criticised for making assumptions. Can someone please tell me what assumptions other than wave mechanics are implicit in the RSF? Or in calling the relative states "worlds" given the caveats above?

It's not a criticism - it's simply what the interpretation says.

By interpreting relative states as separate worlds you are ascribing a definite reality to the wave function that goes beyond a mere definition. One consequence of this is, for example, energy dilution in each of the worlds:

But the relative state view of MW is well and truly superseded by the modern version based on decoherence, e.g. relative state doesn't say exactly when an observation has occurred.

Here it's utterly simple. After decoherence you have the mixed state ∑ pi |bi><bi|. The |bi><bi| is interpreted as a world. But for that to make sense the quantum state must be real, having things like energy etc - unless of course you want to go the many minds route.

Thanks
Bill

The worlds do not have to be real. However, the question of reality is important. The contradictions that Everett perceived occur when one tries to interpret QM to mean something physical. However, as Kant notably pointed out, existence is not a predicate. It is not a quality, not a measurable quantity. Asserting that the wavefunction describes something that exists does not alter the relative state model one bit, nor affect its derivation from wave mechanics. Once relative states are derived, you can try out an ontic assumption, and you then find that the realist contradictions have disappeared. However, onticity is not part of the formulation: you could concoct a relative state formulation without saying anything about existence. I'm therefore inclined to say that realism is not an assumption; relative state is compatible with realism but does not assume it. If you still want to say it is an assumption, I won't argue, but you must not imply it is an assumption in the formulation; it is merely a bystander.

Relative state doesn't need to identify when an observation is made. It deals with interaction and the resultant entanglement. There is no class of interaction called "observation" that plays a special role in MW, so the question "when does an observation occur" is specious. Now, I have already said that decoherence needs to be added to the picture. And when you do, a mixed state emerges. Since there is no wavefunction collapse in MW, the mixed state is improper, an unimportant distinction to the "shut up and calculate" approach, but vital if one assumes that the environment has a quantum state and wishes to say what it all means. In the latter case, the system is simply entangled with the environment just as in any other interaction. The overall system remains in superposition, not a proper mixed state. Each state exists relative to the state of the environment - just as in any other entanglement.
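The improper mixture can be made concrete with a toy partial trace. This is only a sketch: the two-level system and equal amplitudes are my own illustrative choices.

```python
from math import sqrt

# Entangled system-environment state: sum_i a_i |s_i>|E_i> with orthogonal |E_i>.
a = [1 / sqrt(2), 1 / sqrt(2)]

# Reduced density matrix of the system: rho_S[i][j] = a_i * conj(a_j) * <E_j|E_i>.
# With orthogonal environment states <E_j|E_i> = delta_ij, so rho_S is diagonal.
rho_S = [[a[i] * a[j].conjugate() * (1 if i == j else 0) for j in range(2)]
         for i in range(2)]

# The diagonal entries play the role of the p_i in sum_i p_i |b_i><b_i|,
# but the global state is still a pure superposition: an improper mixture.
print(rho_S)
```

The point of the toy calculation is that the reduced state is mathematically identical to a proper mixture; only the global description tells you the overall system remains in superposition.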

Energy conservation is simple: energy is conserved in each world. Your link actually explains this. The question "where does the energy/matter come from?" is misleading because it implies that you can add the energies together meaningfully. For this to be done you would have to be able to see the worlds simultaneously and add their energies. This means you would be looking at the system under a different basis. You then find that in your world, energy is conserved. For instance, a particle goes through a double slit. The canonical decomposition is into |left> and |right>. There is only one particle in either state. You cannot observe |left> and |right> at the same time. You can, however, let them interfere and see the result. Does this mean you are seeing |left> and |right> separately, making two particles - or worse still, two half-particles? No, of course not; it means you have changed basis and are now observing a diffuse wave (or delocalized particle). Your observations are incompatible with the |left> and |right> basis. There is still just one particle. Now, if you can produce a basis in which |left> and |right> are observed simultaneously, then I will agree that there are two particles. And two Schrodinger Cats.
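The one-particle point can be checked numerically: a change of basis preserves the total probability, so no basis ever contains two particles. A toy sketch with illustrative amplitudes:

```python
from math import sqrt

# State in the slit basis: (|left> + |right>)/sqrt(2).
left, right = 1 / sqrt(2), 1 / sqrt(2)

# Change to the interference basis |+-> = (|left> +- |right>)/sqrt(2).
plus = (left + right) / sqrt(2)
minus = (left - right) / sqrt(2)

# Total probability (the particle count) is 1 in either basis - no doubling:
print(abs(left) ** 2 + abs(right) ** 2)   # ≈ 1.0
print(abs(plus) ** 2 + abs(minus) ** 2)   # ≈ 1.0
```

Because the basis change is unitary, this holds for any decomposition, which is all the "adding up the worlds' energies" argument needs.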

Perhaps I should point out that MW does not insist on one, preferred, canonical basis. On the contrary all possible bases are equally valid and thus there are not just many worlds, there are many sets of worlds. And yes, I do prefer the Many Minds route, though I find that the mere mention of "minds" is an open invitation to woo-mongers to parade their favourite bit of crackpottery. As far as I can see, if one's subjective experience supervenes on the state of one's brain then the MW picture entails Many Minds. The picture then becomes one universe which can be viewed from an almost infinite number of perspectives (bases), many of which describe a vast superposition of differently experiencing brains. The nightmare world in which you tunnel into a state of unspeakable agony with no understanding of how you got there is presumably as real to your brain in that world as is the relatively humdrum world that you (or most of you, plural) experience while posting to PF. Don't blame me; as far as I can tell it all follows logically from wave mechanics. I don't want this thread to degenerate into a discussion of whether there is a threshold amplitude below which states are not phenomenally real.

bhobba
Mentor
The worlds do not have to be real.

So you believe something that has energy doesn't have to be real - interesting view.

I suspect most physicists and applied mathematicians wouldn't agree.

Energy conservation is simple: energy is conserved in each world.

Of course - but how that works with it not being real - that has me beat.

Thanks
Bill

So you believe something that has energy doesn't have to be real - interesting view.

I suspect most physicists and applied mathematicians wouldn't agree.

Of course - but how that works with it not being real - that has me beat.

Thanks
Bill
Would you mind not derailing this thread with deliberate misunderstandings? You have managed to get one thread closed already, I have re-opened the subject in the faint hope that you actually have something constructive to say. You are far from stupid. A less competent poster might have genuinely missed the point but it is beyond belief that you have. I dare say the mods will now close this thread. Thanks.

kith
Everett showed that the appearence of wavefunction collapse follows from wave mechanics. To do this he considered the interaction of two subsystems, the system under observation, S, and the observer system O.
This may be considered an assumption. If the universal state and the universal Hamiltonian are the fundamental constituents of reality, it isn't clear why we should decompose the universal Hilbert space this way. In an arXiv paper which has been discussed here quite a bit, Schwindt argues that there is always a decomposition of the universal Hilbert space where nothing happens at all. So the familiar dynamics we observe in experiments would be only an artifact of using a certain decomposition. Why not use another one?

There's also the problem of deriving the Born rule. The MWI as such doesn't involve probabilities. So how do they enter the picture? Don't we need to assume some kind of probability structure in order to talk about probabilities?

Some people think that acknowledging these things takes away the appeal of the MWI. In my opinion, they simply show that if we want to use the MWI to describe experimental situations, we still can't disregard the fact that we perform an experiment. We use the decomposition into system + apparatus + ourselves because we use the apparatus to look at results, and the probabilities reflect something about our experience.

This overlaps a bit with what you write. I just wrote down the way I currently think about these matters.

Last edited:
kith
It is now generally believed that Everett's theory does not provide the quantitative predictions of collapse theory but needs the addition of decoherence.
What do you mean by this? Decoherence is a measurable phenomenon, so it is present in all interpretations. What exactly do you think needs to be added to Everett's theory?

This may be considered an assumption. If the universal state and the universal Hamiltonian are the fundamental constituents of reality, it isn't clear why we should decompose the universal Hilbert space this way. In an arXiv paper which has been discussed here quite a bit, Schwindt argues that there is always a decomposition of the universal Hilbert space where nothing happens at all. So the familiar dynamics we observe in experiments would be only an artifact of using a certain decomposition. Why not use another one?
Indeed why not? MW, correctly applied, implies not just many worlds but many sets of worlds. If some of these turn out to have no dynamical evolution, that is a surprising result but it doesn't affect the MW picture. In some decompositions, there is dynamical evolution and that is enough to ensure the existence of dynamical worlds such as ours.
There's also the problem of deriving the Born rule. The MWI as such doesn't involve probabilities. So how do they enter the picture? Don't we need to assume some kind of probability structure in order to talk about probabilities?
Well, the requirement is that the observer can look at a history of repeated experiments and mistake the frequencies for the workings of true randomness. So we don't need actual probability. Deriving the Born rule is easy if we assume that frequentist probability depends on the magnitude of the state. Born initially made the naive mistake of assuming proportionality - as if the observer were an outside agent who lands at random in a particular world. The probability would depend on how big the world is. This doesn't work - if probability depends on the magnitude of a state vector then it's not hard to show the Born rule has to be a square law in order to satisfy elementary rules of probability. But do we need to assume that it depends on magnitude? Or could it be nothing else? I really am not sure about this.
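The square-law point can be illustrated numerically: if probability were $|a|^n$ for some power n, only n = 2 makes the candidate probabilities sum to 1 for every unit state vector. A sketch (the test vectors here are arbitrary):

```python
from math import sqrt

def total_probability(amps, n):
    """Sum of |a_i|^n over a set of amplitudes - a candidate probability rule."""
    return sum(abs(a) ** n for a in amps)

# Two unit vectors (norm-squared = 1) with different amplitude patterns:
v1 = [1 / sqrt(2), 1 / sqrt(2)]
v2 = [sqrt(0.9), sqrt(0.1)]

for n in (1, 2, 3):
    print(n, total_probability(v1, n), total_probability(v2, n))
# Only n = 2 gives total probability 1 for both vectors.
```

This is of course only the normalisation part of the argument; the full uniqueness result is Gleason's theorem.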
I just found a quotation from Zurek: "Indeed, Gleason's theorem [30] is now an accepted and rightly famous part of quantum foundations. It is rigorous - it is after all a theorem about measures on Hilbert spaces. However, regarded as a result in physics it is deeply unsatisfying: it provides no insight into the physical significance of quantum probabilities - it is not clear why the observer should assign probabilities in accord with the measure indicated by Gleason's approach."
What do you mean by this? Decoherence is a measurable phenomenon, so it is present in all interpretations. What exactly do you think needs to be added to Everett's theory?
Fair comment. Nothing needs to be added to the theory to make it work - Everett set out to remove the collapse postulate and set a derived result in its place. The theory does this perfectly well on its own. However people seem to expect more of it - that it should provide a complete theory of measurement. For my part I prefer to keep the two ideas separate.

kith
But do we need to assume that it depends on magnitude? Or could it be nothing else? I really am not sure about this.
I don't know Gleason's theorem well but I think it pretty much says that the Born rule is the only consistent probability assignment. But I don't think that's what people are concerned with. The time evolution of the universal state is perfectly deterministic. So why should we assign probabilities in the first place?

However people seem to expect more of it - that it should provide a complete theory of measurement.
What would you consider to be a complete theory of measurement? What would we need to add?

I don't know Gleason's theorem well but I think it pretty much says that the Born rule is the only consistent probability assignment. But I don't think that's what people are concerned with. The time evolution of the universal state is perfectly deterministic. So why should we assign probabilities in the first place?
Because we experience probabilities. We do an experiment and get a statistical result. A canonical interpretation of our experience is that the numbers, the frequencies of different outcomes, are set by inherently random events. So we need MW to predict the numbers even though it has no random processes to account for them.
What would you consider to be a complete theory of measurement? What would we need to add?
The theory needs to describe the process of observation and what it means to measure a quantity. It needs to account for the Born rule and the preferred basis. That is by no means a formal definition but it should be enough to be going on with. Everett's theory accounts for the appearance of a random wavefunction collapse even though it doesn't really happen. So, as long as we stick to what the system appears to be doing in the observer's own world, everything in ordinary QM should then follow. Decoherence is integral to QM's understanding of measurement. So in that sense it gets added to MW when the latter is expanded as a complete interpretation. If you leave MW (the RSF) as Everett intended, it is merely a foundation for QM - it replaces the conventional axiom of collapse. Take your pick as to what MW is; the question is purely semantic.

bhobba
Mentor
I don't know Gleason's theorem well but I think it pretty much says that the Born rule is the only consistent probability assignment.

That's true.

But here consistent means non-contextuality, i.e. it doesn't depend on the basis. Hidden variable theories break that. It wouldn't be reasonable for MW to be like that though. Indeed, if you have a look at Wallace's book and carefully go through his arguments, non-contextuality lies at the heart of it - he has a non-contextuality theorem that shows it must be non-contextual, although he uses a direct proof from decision theory rather than Gleason.

Thanks
Bill

Last edited:
kith
kith
Because we experience probabilities. We do an experiment and get a statistical result. A canonical interpretation of our experience is that the numbers, the frequencies of different outcomes, are set by inherently random events. So we need MW to predict the numbers even though it has no random processes to account for them.
Yes, so we need to add probabilities by hand. There are consistency conditions - like Gleason's theorem - which force us to do it in a certain way, but the MWI alone doesn't yield probabilities. Some people consider this to be a serious flaw because they feel the MWI should explain this. They don't want the experience of the observer to play a role at all.

The theory needs to describe the process of observation and what it means to measure a quantity. It needs to account for the Born rule and the preferred basis. That is by no means a formal definition but it should be enough to be going on with. Everett's theory accounts for the appearence of a random wavefunction collapse even though it doesn't really happen. So, as long as we stick to what the system appears to be doing in the observer's own world, everything in ordinary QM should then follow. Decoherence is integral to QM's understanding of measurement. So in that sense it gets added to MW when the latter is expanded as a complete interpretation. If you leave MW (the RSF) as Everett intended, it is merely a foundation for QM - it replaces the conventional axiom of collapse. Take your pick as to what MW is, the question is purely semantic.
Ok, I think I get what you say now. The relative state formulation simply takes the von Neumann measurement scheme and leaves out the last step (the reduction of the state vector). It can do this because it reinterprets the components of the maximally entangled state as states relative to an observed measurement outcome and it takes all outcomes as equally real. It still takes the measurement as a black box. But decoherence in the system is implicitly present because it can be derived from unitary time evolution if we include the environment.

There is a caveat, however: decoherence is not perfect, so the different final states in the mixture still show a tiny bit of interference. Strictly speaking, this means that we can't interpret them as different relative states. It doesn't matter for practical purposes but I think it will always leave a bit of vagueness in the definition of what constitutes a world. Recently, this has led to interpretations with many interacting worlds.
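The caveat can be quantified in a toy model: the residual interference between branches is governed by the overlap of the environment states, which decoherence makes small but never exactly zero. The overlap value eps below is illustrative.

```python
from math import sqrt

# State sum_i a_i |s_i>|E_i> where the environment states are nearly,
# but not exactly, orthogonal: <E_0|E_1> = eps.
a = [1 / sqrt(2), 1 / sqrt(2)]
eps = 0.01  # illustrative residual environment overlap

# Environment overlap matrix <E_j|E_i>:
overlap = [[1.0, eps], [eps, 1.0]]

# Reduced density matrix of the system: rho[i][j] = a_i * conj(a_j) * <E_j|E_i>.
rho = [[a[i] * a[j] * overlap[j][i] for j in range(2)] for i in range(2)]

# The off-diagonal (interference) terms survive at order eps:
print(rho[0][1])  # ≈ 0.005
```

So "branch" is only defined up to these order-eps terms, which is exactly the vagueness in what constitutes a world.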

Yes, so we need to add probabilities by hand. There are consistency conditions - like Gleason's theorem - which force us to do it in a certain way, but the MWI alone doesn't yield probabilities. Some people consider this to be a serious flaw because they feel the MWI should explain this. They don't want the experience of the observer to play a role at all.
The role of the observer is to mistake an improper mixed state for a proper one. :)
Ok, I think I get what you say now. The relative state formulation simply takes the von Neumann measurement scheme and leaves out the last step (the reduction of the state vector). It can do this because it reinterprets the components of the maximally entangled state as states relative to an observed measurement outcome and it takes all outcomes as equally real. It still takes the measurement as a black box. But decoherence in the system is implicitly present because it can be derived from unitary time evolution if we include the environment.
There is a caveat, however: decoherence is not perfect, so the different final states in the mixture still show a tiny bit of interference. Strictly speaking, this means that we can't interpret them as different relative states. It doesn't matter for practical purposes but I think it will always leave a bit of vagueness in the definition of what constitutes a world. Recently, this has led to interpretations with many interacting worlds.
Well people can be as vague as they like but worlds only make sense if they are synonymous with relative states. The different states in the mixture to which you refer are different states in superposition decohered by the environment. So the final decomposition is of observed states relative to observer plus environment. Nothing vague about that - Everett seems to have omitted environment but we can add it in.
I think the problem you are describing is simply the assumption of pointer states. There must be many states which are pointer states, so it is reasonable to expect the final state of the whole system to be a massive superposition. The fact that the interference terms are not completely eliminated just reflects this. In each component state, however, the state of the system under observation is unambiguously associated with a state of the observer plus environment. This is the meaning of a world - a single state, not a bundle of states which have the same macroscopic pointer states.
Is that fair?

kith
Well people can be as vague as they like but worlds only make sense if they are synonymous with relative states. The different states in the mixture to which you refer are different states in superposition decohered by the environment. So the final decomposition is of observed states relative to observer plus environment. Nothing vague about that.
If decoherence is not perfect, the final state of the system and the observer is not maximally entangled. So using a suggestive notation, it is not $|OS\rangle + |O'S' \rangle$ like in the von Neumann measurement scheme, but has additional terms $\varepsilon|OS'\rangle$ and $\varepsilon|O'S \rangle$. This can only be interpreted as two relative states $|S\rangle^{O}$ and $|S'\rangle^{O'}$ if you ignore the $\varepsilon$-terms. So I think the following statement is only approximately true:

In each component state, however, the state of the system under observation is unambiguously associated with a state of the observer plus environment.

Last edited:
If decoherence is not perfect, the final state of the system and the observer is not maximally entangled. So using a suggestive notation, it is not $|OS\rangle + |O'S' \rangle$ like in the von Neumann measurement scheme, but has additional terms $\varepsilon|OS'\rangle$ and $\varepsilon|O'S \rangle$. This can only be interpreted as two relative states $|S\rangle^{O}$ and $|S'\rangle^{O'}$ if you ignore the $\varepsilon$-terms. So I think the following statement is only approximately true:
In each component state, however, the state of the system under observation is unambiguously associated with a state of the observer plus environment.
Oops. I should have been talking about "system plus environment" not "observer plus environment" as it is of course the superposition of Schrodinger's cat that decoheres. One is then left with the observer's state being relative to the state of the system plus environment. This would seem to make the idea of a world perfectly consistent - it is identified by the particular observer state. No doubt the observer's state is a superposition and it decoheres too but I don't think this makes any difference as it just results in more worlds. Is there any problem with this? And, more importantly for this thread, does it involve any assumptions peculiar to MWI?

But this leads us to the epsilon terms. What do they signify? I'll switch notation to "SE" to denote the system plus environment. I would suggest that $|O'[SE] \rangle$ is simply another world. It may seem odd that the observer enters a state of having (apparently) seen a dead Schrodinger cat when the cat in its environment is actually alive but is that not exactly what we would expect if the entanglement - the correlation that we call observation - is not perfect?

Or we can say the pair of terms, which it would seem always occur together, can be taken as a single world in a different basis.

This is not so far-fetched. Consider a case where decoherence is slow or non-existent: photons in free space. We glibly talk about linearly polarized photons but the canonical quantum number is spin: |left> and |right> are the basis states. Linear polarization is thus a superposition. We are perfectly free to observe it as linear, or we can decompose a linearly polarized beam into two circularly polarized ones.

There are two sets of worlds: the |left> and |right> set and the |H> and |V> set. We could devise many more. If we choose to measure the circular polarization, our world becomes two (not as a dynamic process but as a logical consequence!) In neither world does the observer measure the true polarization which was linear, the choice of basis means he/she sees circular polarizations. In a manner of speaking, the final state is all epsilon terms.
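The two decompositions can be written out with Jones-vector amplitudes. A sketch (sign and phase conventions for the circular basis vary):

```python
from math import sqrt

# Diagonally (45 degree) polarized photon in the H/V basis:
# |D> = (|H> + |V>)/sqrt(2).
H, V = 1 / sqrt(2), 1 / sqrt(2)

# Circular basis: |L> = (|H> + i|V>)/sqrt(2), |R> = (|H> - i|V>)/sqrt(2),
# so the circular-basis amplitudes of |D> are <L|D> and <R|D>:
amp_L = (H - 1j * V) / sqrt(2)
amp_R = (H + 1j * V) / sqrt(2)

# Equal weight in either decomposition - the same photon, two sets of "worlds":
print(abs(amp_L) ** 2, abs(amp_R) ** 2)  # ≈ 0.5 0.5
```

The same unit vector supports both the |H>/|V> set and the |left>/|right> set of worlds; nothing in the state itself picks one out.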

Does that make sense?

Last edited:
atyy
If decoherence is not perfect, the final state of the system and the observer is not maximally entangled. So using a suggestive notation, it is not $|OS\rangle + |O'S' \rangle$ like in the von Neumann measurement scheme, but has additional terms $\varepsilon|OS'\rangle$ and $\varepsilon|O'S \rangle$. This can only be interpreted as two relative states $|S\rangle^{O}$ and $|S'\rangle^{O'}$ if you ignore the $\varepsilon$-terms. So I think the following statement is only approximately true:

But this leads us to the epsilon terms. What do they signify? I'll switch notation to "SE" to denote the system plus environment. I would suggest that $|O'[SE] \rangle$ is simply another world. It may seem odd that the observer enters a state of having (apparently) seen a dead Schrodinger cat when the cat in its environment is actually alive but is that not exactly what we would expect if the entanglement - the correlation that we call observation - is not perfect?

If there are small terms, can one even say that there is a preferred basis? After all, defining the terms as "small" assumes the preferred basis. But if we cannot even define a preferred basis, then we are back to the question of why we don't see superpositions of dead and alive cats.

If there are small terms, can one even say that there is a preferred basis?
After all, defining the terms as "small" assumes the preferred basis.
But if we cannot even define a preferred basis, then we are back to the question of why we don't see superpositions of dead and alive cats.

I don't understand you. The preferred basis seems to follow from decoherence. If decoherence is suppressed then we observe the superposition. But as there is no decoherence, there is no preferred basis, and there is no particular reason to think of it as a superposition! For example, as I said, photon polarization does not decohere significantly. So we can think of a diagonally polarized photon as being in a superposition of circular polarizations, or a superposition of vertical/horizontal polarizations, or as in an eigenstate of the +/- 45 degree operator. Photon polarization is so humdrum that we tend to forget we are seeing Schrodinger's cat before decoherence, in its full "superposed" state.

Anyway, as they say, what has this to do with the price of cheese? Remember, I am asking about essential assumptions peculiar to MWI, I'm not trying to pad Everett's theory out to include the measurement problem. Not here anyway. Does the measurement problem get solved in other interpretations but not in MWI? If not then MWI doesn't need extra assumptions. But that's what I'm asking about.

Last edited:
atyy
I don't understand you. The preferred basis seems to follow from decoherence. If decoherence is suppressed then we observe the superposition. But as there is no decoherence, there is no preferred basis, and there is no particular reason to think of it as a superposition! For example, as I said, photon polarization does not decohere significantly. So we can think of a diagonally polarized photon as being in a superposition of circular polarizations, or a superposition of vertical/horizontal polarizations, or as in an eigenstate of the +/- 45 degree operator. Photon polarization is so humdrum that we tend to forget we are seeing Schrodinger's cat before decoherence, in its full "superposed" state.

Anyway, as they say, what has this to do with the price of cheese? Remember, I am asking about essential assumptions peculiar to MWI, I'm not trying to pad Everett's theory out to include the measurement problem. Not here anyway. Does the measurement problem get solved in other interpretations but not in MWI? If not then MWI doesn't need extra assumptions. But that's what I'm asking about.

The preferred basis does not follow from decoherence alone unless it is perfect. If decoherence is not perfect, the decoherence does not give the preferred basis, so saying the terms are small is begging the question.

I understand your definition of MWI, which is fine. However, MWI is usually understood to be an attempt to solve the measurement problem. (And I thought that was what you were discussing with kith, since your definition of the MWI is simply a renaming of linearity).

However, MWI is usually understood to be an attempt to solve the measurement problem. (And I thought that was what you were discussing with kith, since your definition of the MWI is simply a renaming of linearity.) The preferred basis does not follow from decoherence alone unless it is perfect. If decoherence is not perfect, the decoherence does not give the preferred basis, so saying the terms are small is begging the question.

Yes I agree. Though Everett might have been a little surprised to learn that his doctoral thesis was just a matter of renaming something. But I think you're saying what I'm saying, that Everettian MWI follows from linear wave mechanics without any further assumptions. I certainly said that Everett's theory does not address the measurement problem properly and that decoherence is needed. But I did not mean to imply that it is solved in MWI any better than it is solved in collapse theories. But I'd rather not get bogged down in solving the measurement problem - in this thread I'm asking about assumptions not how well the theory works. What would be of greatest interest would be assumptions that are only required because of the Everettian foundation. In which context let me ask you: you say "the preferred basis does not follow from decoherence alone unless it is perfect". What assumptions are implied by that word "alone"?

atyy
Yes I agree. Though Everett might have been a little surprised to learn that his doctoral thesis was just a matter of renaming something. But I think you're saying what I'm saying, that Everettian MWI follows from linear wave mechanics without any further assumptions. I certainly said that Everett's theory does not address the measurement problem properly and that decoherence is needed. But I did not mean to imply that it is solved in MWI any better than it is solved in collapse theories. But I'd rather not get bogged down in solving the measurement problem - in this thread I'm asking about assumptions not how well the theory works. What would be of greatest interest would be assumptions that are only required because of the Everettian foundation. In which context let me ask you: you say "the preferred basis does not follow from decoherence alone unless it is perfect". What assumptions are implied by that word "alone"?

Well, I think Everett would not agree with your definition of MWI. I think Everett wanted to solve the measurement problem.

I'm not entirely sure what the extra assumption should be, but perhaps something like the predictability sieve: http://arxiv.org/abs/quant-ph/0509174 or the quantum discord: http://arxiv.org/abs/1303.4659.

[QUOTE="atyy"]Well, I think Everett would not agree with your definition of MWI. I think Everett wanted to solve the measurement problem.
I'm not entirely sure what the extra assumption should be, but perhaps something like the predictability sieve: http://arxiv.org/abs/quant-ph/0509174 or the quantum discord: http://arxiv.org/abs/1303.4659.[/QUOTE]

Hmm, Everett stated that his intention was to provide a foundation for QM, replacing the collapse postulate with something that follows from linear wave mechanics alone. That would seem to entail deriving the Born rule, since it is part of the collapse postulate, but not necessarily solving the preferred basis problem, which is not.
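For concreteness, the collapse postulate being replaced bundles two claims (standard textbook notation, not a quote from Everett). For a system prepared in a superposition over the measured eigenstates:

```latex
% Measurement of an observable with eigenstates |x_i> on
% |psi> = sum_i c_i |x_i> yields outcome x_i with probability
P(x_i) = |c_i|^2 = |\langle x_i | \psi \rangle|^2 \quad \text{(the Born rule)},
% after which the state is reduced to |x_i> (state reduction).
```

Dropping the postulate removes both claims at once, so a replacement foundation owes us a derivation of the Born rule, while the question of which basis the $|x_i\rangle$ form is logically separate.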

The papers you have cited will be worth looking into if I'm not too far out of my depth. The first seems to be about finding criteria to define what we mean by pointer states, which presumably relates trivially to the preferred basis. The last line of the abstract does not inspire confidence that these criteria are 100% successful. The second, if it does what it says on the tin and "explains why, in a quantum Universe, we perceive objective classical reality", would also appear to be explanatory. Neither, on the face of it, seems to be introducing assumptions.

I have a simple agenda here. I *believe* that MWI is built on a minimalist set of assumptions and that a complicated issue like the measurement problem can be approached without introducing additional postulates. I'd like to be sure I know what assumptions it does make.

Coming up: a thread entitled "Where MWI fails". But not yet...

atyy
[QUOTE]The papers you have cited will be worth looking into if I'm not too far out of my depth. The first seems to be about finding criteria to define what we mean by pointer states, which presumably relates trivially to the preferred basis. The last line of the abstract does not inspire confidence that these criteria are 100% successful. The second, if it does what it says on the tin and "explains why, in a quantum Universe, we perceive objective classical reality" also would appear to be explanatory. Neither, on the face of it, seems to be introducing assumptions.[/QUOTE]

The additional assumptions are the use of quantities like the predictability sieve or quantum discord - unless they can be argued to be unique in a natural way.
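For reference, my rough understanding of the predictability sieve (a sketch of Zurek's proposal; treat the details with caution): candidate pure states are ranked by how little von Neumann entropy they generate when evolved in contact with the environment,

```latex
% Entropy generated by an initial candidate state |psi> under the
% joint system-environment evolution U(t):
s_{\psi}(t) = -\operatorname{Tr}\!\big[\rho_{\psi}(t)\ln\rho_{\psi}(t)\big],
\qquad
\rho_{\psi}(t) = \operatorname{Tr}_E\!\big[\,U(t)\big(|\psi\rangle\langle\psi| \otimes \rho_E\big)\,U^{\dagger}(t)\big],
```

with the pointer states taken to be those that (approximately) minimize $s_{\psi}(t)$. The choice of this particular criterion, rather than some other measure of classicality, is exactly the kind of additional assumption at issue.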

[QUOTE="atyy"]The additional assumptions are the use of quantities like the predictability sieve or quantum discord - unless they can be argued to be unique in a natural way.[/QUOTE]
I imagine that an argument might well be prefaced by "Let us assume that the observed preponderance of dead cats and the scarcity of Schrodinger cats is adequately characterized in terms of the lower entropy of dead cats. The following shows that low entropy states, contrary to intuition, predominate in the observed ensemble." It doesn't matter if I've scrambled the physics in that; the point is that the assumption is about how we can characterize our experience. It is an assumption that the correspondence is justified. So it doesn't contribute to the development of the model mathematically and ontologically - it is brought in to allow us to see how well the model fits with our experience. Reasonable?

atyy
[QUOTE]I imagine that an argument might well be prefaced by "Let us assume that the observed preponderance of dead cats and the scarcity of Schrodinger cats is adequately characterized in terms of the lower entropy of dead cats. The following shows that low entropy states, contrary to intuition, predominate in the observed ensemble." It doesn't matter if I've scrambled the physics in that, the point is the assumption is about how we can characterize our experience. It is an assumption that the correspondence is justified. So it doesn't contribute to the development of the model mathematically and ontologically - it is brought in to allow us to see how well the model fits with our experience. Reasonable?[/QUOTE]

Of course not. The beauty of Copenhagen is that the assumptions, and how the mathematics fits with our experience, are all clearly stated. A solution of the measurement problem should be at least as clear as Copenhagen, and better.

stevendaryl
Staff Emeritus