# MWI -- Infinite number of worlds?

Gold Member

## Main Question or Discussion Point

If we were, for the sake of argument, to adopt the MWI, then there are wavefunctions (for instance, for position) with a continuous probability spectrum. Will MWI then propose that there is an infinite number of actual universes, each representing one position in that probability spectrum?

In other words: can a single measurement require an infinite number of resulting universes?

ScientistAlexandrus

DarMM
Gold Member
In the modern form of MWI there is an infinite number of worlds, even in cases with a finite number of outcomes.

Basically, when you go to do an experiment with even just two outcomes ##a## and ##b##, with probabilities 40% and 60% respectively, then prior to the experiment there is an infinite number of worlds, and after the experiment 40% of them have developed the ##a## outcome and 60% the ##b## outcome.
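As an illustration only (a toy model using the 40/60 numbers from this example, not an implementation of MWI), the idea of a pre-existing set of worlds splitting by Born weight can be sketched like this:

```python
import random

def branch(n_worlds, outcomes):
    """Toy model: assign each of n_worlds pre-existing 'worlds' an outcome,
    with probability equal to that outcome's Born weight."""
    labels, weights = zip(*outcomes.items())
    return [random.choices(labels, weights=weights)[0] for _ in range(n_worlds)]

random.seed(0)
post = branch(100_000, {"a": 0.4, "b": 0.6})  # weights 40% / 60% as above
frac_a = post.count("a") / len(post)
print(f"fraction of worlds with outcome a: {frac_a:.3f}")
```

The point of the sketch is only that the fractions, not the (infinite) count, carry the probabilities.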

Gold Member
Then it seems to me that MWI is really telling us that we can't say anything about the ontology of reality, right?

DarMM
Gold Member
> Then it seems to me that MWI is really telling us that we can't say anything about the ontology of reality, right?
No, MWI lays out the ontology pretty clearly. Getting that ontology to match experiment is a bit difficult, but the "picture of the world" in MWI is pretty clear.

Gold Member
> No, MWI lays out the ontology pretty clearly. Getting that ontology to match experiment is a bit difficult, but the "picture of the world" in MWI is pretty clear.
So if we have an infinite number of universes, there is always another infinite number of universes that wasn't taken into account but should have been, or so it seems to me in the case of a continuous probability spectrum. What am I overlooking?

Does a position never have an exact value? Even if it is measured?

Worlds in MWI are emergent after decoherence and continuously branching, so it doesn't make sense to discuss how many there are in absolute terms. See section 6 here: https://arxiv.org/abs/1111.2189

DarMM
Gold Member
> So if we have an infinite number of universes, there is always another infinite number of universes that wasn't taken into account but should have been, or so it seems to me in the case of a continuous probability spectrum. What am I overlooking?
>
> Does a position never have an exact value? Even if it is measured?
No, there's just a "volume" of coarse-grained quasi-classical worlds. In a given dichotomic experiment a portion develops one way and another portion develops another way.

> Worlds in MWI are emergent after decoherence and continuously branching, so it doesn't make sense to discuss how many there are in absolute terms. See section 6 here: https://arxiv.org/abs/1111.2189
That's unfortunately one of the features that affects Wallace's own proof of the Born rule, as it ruins two of the axioms. The fact that decoherence isn't exact allows worlds to develop tails out of a given reward subspace.

Staff Emeritus
2019 Award
> will MWI then propose that there are an infinite number of actual universes
No.

In MWI there are not many worlds. There is only one world.

DarMM
Gold Member
> No.
>
> In MWI there are not many worlds. There is only one world.
Well, there is only one quantum universe in MWI. However, usually in Many Worlds the phrase "world" refers to a large-scale quasi-classical branch, of which there are many.

In Hugh Everett's 1957 paper, in which he called the MWI "the relative state formulation," he argued that the brain, like Schrödinger's cat, is in a superposition of states: after observing a quantum experiment with two possible results, A and B, the brain would be in a superimposed state, with the wave function representing the knowledge of A superimposed on that representing the knowledge of B.

Thus, in the original version, the "many worlds" were ontologically one world of superimposed states neurally representing alternate quantum results. I know of no argument for ontologically many worlds.

Everett's argument is impressive and logically valid, but unsound. It is based on the unconfirmed premise that bulk matter, like isolated quanta, is subject to linear dynamics.

This assumption is not made by physicists actually dealing with many-electron systems, for example in the Hartree-Fock method for many-body quantum systems and the Gross-Pitaevskii approximation for Bose condensates. Instead, these theories, which are approximate but confirmed, recognize that the electron-electron interactions (EEIs) which bind bulk matter are nonlinear. Since bulk matter has nonlinear dynamics, the sum of two solutions is not a solution. Thus, the superposition principle fails for bulk matter such as quantum detectors and brains. In other words, Schrödinger cats and superimposed brain states do not exist.
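For concreteness, the cubic nonlinearity in the Gross-Pitaevskii equation mentioned above has the standard form

```latex
i\hbar\,\partial_t \psi = \left( -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,|\psi|^2 \right)\psi .
```

The ##g|\psi|^2\psi## term is what breaks linearity: if ##\psi_1## and ##\psi_2## each solve this equation, ##\psi_1+\psi_2## in general does not, since ##|\psi_1+\psi_2|^2(\psi_1+\psi_2) \neq |\psi_1|^2\psi_1 + |\psi_2|^2\psi_2##. (Whether this effective mean-field nonlinearity bears on the fundamental linearity of quantum mechanics is exactly what is disputed later in the thread.)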

> Everett's argument is impressive and logically valid, but unsound. It is based on the unconfirmed premise that bulk matter, like isolated quanta, is subject to linear dynamics.
>
> This assumption is not made by physicists actually dealing with many-electron systems, for example in the Hartree-Fock method for many-body quantum systems and the Gross-Pitaevskii approximation for Bose condensates. Instead, these theories, which are approximate but confirmed, recognize that the electron-electron interactions (EEIs) which bind bulk matter are nonlinear. Since bulk matter has nonlinear dynamics, the sum of two solutions is not a solution. Thus, the superposition principle fails for bulk matter such as quantum detectors and brains. In other words, Schrödinger cats and superimposed brain states do not exist.
That's odd - how did all those macroscopic objects go into superposition before decoherence took over? (eg the diamond experiment back in 2011: https://www.nature.com/news/entangled-diamonds-vibrate-together-1.9532)

> That's odd - how did all those macroscopic objects go into superposition before decoherence took over? (eg the diamond experiment back in 2011: https://www.nature.com/news/entangled-diamonds-vibrate-together-1.9532)
Thank you for commenting.

Entanglement (which depends on conservation laws, and so ultimately on symmetry) does not require linear EEIs. All interactions, linear or nonlinear, are subject to symmetry constraints, and so to conservation laws and entanglement.

The entangled-diamond experiment of Walmsley et al., which you cite, is not looking at the electron wave function but at a different mode of oscillation, viz. phonons, which are sound waves, i.e., vibrations of atomic positions. While such phonons are quantized, they are not electron wave functions. So nonlinear EEIs can coexist with linear sound waves. Nor does the nonlinearity of EEIs prevent the phonon-electromagnetic interactions that produce Stokes photons.

DarMM
Gold Member
> Everett's argument is impressive and logically valid, but unsound. It is based on the unconfirmed premise that bulk matter, like isolated quanta, is subject to linear dynamics.
You also have to add the assumption that the wavefunction is ontic to get Many Worlds.

A. Neumaier
2019 Award
> No, there's just a "volume" of coarse-grained quasi-classical worlds. In a given dichotomic experiment a portion develops one way and another portion develops another way.
What happens upon a position measurement? Exact measurement seems impossible, since the associated eigenstates are not normalizable. Thus what is branching and how?

DarMM
Gold Member
> What happens upon a position measurement? Exact measurement seems impossible, since the associated eigenstates are not normalizable. Thus what is branching and how?
It doesn't really depend on eigenstates. Whatever basis ##e_{i}## is selected out by decoherence, each element of that basis is taken to give a class of worlds.

In a position measurement typically a coarse-graining of the position basis is selected out.
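As a sketch in standard notation (the symbols ##P_k## and ##\Delta_k## are my labels, not from the thread): coarse-graining the position basis means decoherence selects projectors onto small position intervals ##\Delta_k## rather than exact eigenstates,

```latex
P_k = \int_{\Delta_k} |x\rangle\langle x|\,\mathrm{d}x , \qquad \sum_k P_k = \mathbb{1} .
```

Each ##P_k## is a proper projector with normalizable range, so the classes of worlds are labelled by finite-resolution position records, sidestepping the non-normalizable eigenstates ##|x\rangle##.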

A. Neumaier
2019 Award
It doesn't really depend on eigenstates. Just whatever basis ##e_{i}## is selected out by decoherence each element of the basis is taken to give a class of worlds.

In a position measurement typically a coarse-graining of the position basis is selected out.
Since the coarse-graining depends on who specifies the details, it is a partially subjective setting.
This means that the "worlds" cannot have any reality content.

What were the worlds before there was any physicist doing measurements?

> You also have to add the assumption that the wavefunction is ontic to get Many Worlds.
I am unsure that this would be adequate. As I said earlier, I know of no argument for ontological, as opposed to epistemological, multiplicity. On the other hand, the nonlinearity of EEIs is accepted physics and provides a simple explanation for the quantum-classical transition. We can even use it to estimate the relation between transition time and object mass.

DarMM
Gold Member
> Since the coarse-graining depends on who specifies the details, it is a partially subjective setting.
> This means that the "worlds" cannot have any reality content.
>
> What were the worlds before there was any physicist doing measurements?
Physicists do actually create additional worlds in measurements, and yes, it is the physicists who select the device, which via decoherence picks out the basis and thus the class of worlds.

It's not a view I hold, but that's the description of it. Its biggest problem remains the derivation of the Born rule.

> I am unsure that this would be adequate. As I said earlier, I know of no argument for ontological, as opposed to epistemological, multiplicity. On the other hand, the nonlinearity of EEIs is accepted physics and provides a simple explanation for the quantum-classical transition. We can even use it to estimate the relation between transition time and object mass.
What I'm saying is that macroscopic objects being in superposition is not enough on its own for Many Worlds. You need macroscopic superposition and the quantum state to be ontic (as well as some other assumptions).

This is simply standard Quantum Foundations.

> Thank you for commenting.
>
> Entanglement (which depends on conservation laws, and so ultimately on symmetry) does not require linear EEIs. All interactions, linear or nonlinear, are subject to symmetry constraints, and so to conservation laws and entanglement.
>
> The entangled-diamond experiment of Walmsley et al., which you cite, is not looking at the electron wave function but at a different mode of oscillation, viz. phonons, which are sound waves, i.e., vibrations of atomic positions. While such phonons are quantized, they are not electron wave functions. So nonlinear EEIs can coexist with linear sound waves. Nor does the nonlinearity of EEIs prevent the phonon-electromagnetic interactions that produce Stokes photons.
Thank you for that elaborate explanation. :)

> Worlds in MWI are emergent after decoherence and continuously branching, so it doesn't make sense to discuss how many there are in absolute terms. See section 6 here: https://arxiv.org/abs/1111.2189
Thank you for that link. I write sci-fi and do try to get the facts right. This is indeed invaluable. Appreciated.

> Everett's argument is impressive and logically valid, but unsound. It is based on the unconfirmed premise that bulk matter, like isolated quanta, is subject to linear dynamics.
Unless you're going to modify quantum mechanics with something like GRW's spontaneous-collapse theory, then I'm afraid you're stuck with superposition as a general principle. And since no deviation from linearity has shown itself despite steady progress toward larger and more complex superposition experiments over the years (not to mention research into quantum computers), the onus should fall on supporters of nonlinear dynamics to prove that their theory is the correct one.

> Physicists do actually create additional worlds in measurements, and yes, it is the physicists who select the device, which via decoherence picks out the basis and thus the class of worlds.
> ...
>
> What I'm saying is that macroscopic objects being in superposition is not enough on its own for Many Worlds. You need macroscopic superposition and the quantum state to be ontic (as well as some other assumptions).
>
> This is simply standard Quantum Foundations.
I think we are in agreement: more assumptions are needed beyond macroscopic superposition. Still, since bulk matter is bound by EEIs (##A \cdot j## terms), which are quartic in ##\psi##, it is clear that macroscopic superposition is a myth.

A linear approximation will work until the phase shift due to the nonlinear terms becomes significant. This makes the time during which linearity works dependent on the number of electrons involved and so on the mass.

> Unless you're going to modify quantum mechanics with something like GRW's spontaneous-collapse theory, then I'm afraid you're stuck with superposition as a general principle. And since no deviation from linearity has shown itself despite steady progress toward larger and more complex superposition experiments over the years (not to mention research into quantum computers), the onus should fall on supporters of nonlinear dynamics to prove that their theory is the correct one.
It is accepted physics that electrons interact with each other via the mediation of the EM field, with the relevant interaction represented by ##A \cdot j##, where ##A## is the vector potential and ##j## the current density. It is also accepted physics that ##j## is quadratic in ##\psi## and that ##A## is generated by the currents of the other electrons. These nonlinearities prevent an analytic solution of even the two-electron problem. So we solve the problem with perturbation theory, using a sequence of linear approximations.
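For reference, the standard probability current density that enters ##A \cdot j## is indeed quadratic in ##\psi##:

```latex
j = \frac{\hbar}{2mi}\left( \psi^{*}\nabla\psi - \psi\nabla\psi^{*} \right) .
```

Both terms contain ##\psi## twice, so doubling ##\psi## quadruples ##j##, which is the sense in which ##j## is quadratic.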

It is hard to see what objection there can be to applying these well-confirmed principles to bulk matter, such as quantum detectors. Any skepticism should be laid to rest by the fact that calculations based on nonlinear dynamics, such as the Hartree-Fock method and the Gross-Pitaevskii approximation, show reasonable agreement with observations, confirming the nonlinear view.

On the other hand, linearity in bulk matter, which is bound by EEIs, is an unconfirmed postulate. It is underpinned by no accepted physics other than linearity working reasonably well for isolated quanta and small samples of matter. As the phase shift induced by nonlinear EEI terms increases with time and with the number of electrons involved, the observed small-sample quantum behavior is fully consistent with nonlinear EEI dynamics.

Thus, the burden of proof rests on those ignoring the known physics of EEIs by postulating macroscopic quantum superposition.

Finally, we need no gratuitous or ad hoc assumptions to explain the collapse of the wave function. As long as quantum wave packets are far from bulk matter, EEIs can be safely ignored. Once the detection process begins, the incident quantum is interacting with the detector's electrons, and we can no longer ignore the nonlinear terms in the Hamiltonian. As the sum of solutions of a nonlinear set of equations is not a solution of those equations, a superposition of states is no longer possible. So the wave function inevitably collapses.
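The claim that sums of solutions fail for nonlinear equations can be checked numerically on a toy cubic (GP-type) term; this sketch uses made-up random state vectors, not any physical model:

```python
import numpy as np

rng = np.random.default_rng(0)
psi1 = rng.normal(size=8) + 1j * rng.normal(size=8)  # toy "state" 1
psi2 = rng.normal(size=8) + 1j * rng.normal(size=8)  # toy "state" 2
g = 1.0

def cubic(psi):
    """Cubic nonlinearity of the form g*|psi|^2 * psi."""
    return g * np.abs(psi) ** 2 * psi

# The cubic term is not additive over a superposition:
lhs = cubic(psi1 + psi2)
rhs = cubic(psi1) + cubic(psi2)
print(np.allclose(lhs, rhs))  # False: superposition fails for the cubic term

# A linear term (matrix acting on the state) passes the same check:
H = rng.normal(size=(8, 8))
print(np.allclose(H @ (psi1 + psi2), H @ psi1 + H @ psi2))  # True
```

This only demonstrates the mathematical point about nonlinearity; it does not adjudicate whether detector dynamics are fundamentally nonlinear, which is the disputed question in this thread.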

> Finally, we need no gratuitous or ad hoc assumptions to explain the collapse of the wave function. As long as quantum wave packets are far from bulk matter, EEIs can be safely ignored. Once the detection process begins, the incident quantum is interacting with the detector's electrons, and we can no longer ignore the nonlinear terms in the Hamiltonian. As the sum of solutions of a nonlinear set of equations is not a solution of those equations, a superposition of states is no longer possible. So the wave function inevitably collapses.
So really there is no measurement problem that we need to resolve?

> So really there is no measurement problem that we need to resolve?
I see a whole ensemble of problems revolving around quantum measurement, so I'm unsure which specific problem you have in mind. Knowing that detection events involve nonlinear EEIs is just one step toward a better understanding. If there is some specific problem which you consider "the measurement problem," I will try to be more specific.