Many-worlds: When does the universe split?

In summary: this thread debates when, and in what sense, the wave function of the universe "splits" into branches under the many-worlds interpretation (MWI), and whether such splitting is compatible with Bell-test results, locality, and information conservation.
  • #36
DrChinese said:
One minor detail that is not covered by most discussions of MWI: that light cone MUST include the future as well as the past. If you don't consider that, the results of Bell tests on entanglement-swapped pairs cannot be explained, since the swapping can occur after the detection, and alternatively the detections can be correlated even when they lie far outside each other's past light cones.

So my point is that if MWI is really such a simple explanation, it must also explain any and all QM setups just as simply. And that requires acknowledging a future context.


Could you elaborate? I'm not sure how this affects the validity or simplicity of MWI.
 
  • #37
DrChinese said:
One minor detail that is not covered by most discussions of MWI: that light cone MUST include the future as well as the past. If you don't consider that, the results of Bell tests on entanglement-swapped pairs cannot be explained, since the swapping can occur after the detection, and alternatively the detections can be correlated even when they lie far outside each other's past light cones.

So my point is that if MWI is really such a simple explanation, it must also explain any and all QM setups just as simply. And that requires acknowledging a future context.

I'm not sure what you're getting at. Of course making a measurement affects the future lightcone of the measurement event - all events do. That's standard relativistic causality. Am I missing your point?
 
  • #38
jambaugh said:
Don't equate "simple" with "easy". Some may find MWI easier to wrap one's head around, but that doesn't make it simpler...

"Simple" means "simple rules".

you are, in the theory, creating a continuum of multiverses

Simple rules often lead to complex consequences. That's normal and expected. Newton's laws are very simple, but the chaotic dynamics of systems of many classical particles are extremely complicated.

and one has not yet even given the mechanism causing their interactions and such... one only gives an ad hoc description of how they behave...

No, that's not the case - or rather it's the case only insofar as it's hard to solve Schrodinger's equation. Again, this is analogous to Newtonian dynamics. Would you abandon that because you can't solve the 3-body problem in closed form?

CI is simpler as it rejects the necessity and ability to provide an ontological role to the wave function. It is not as easy to understand, though, due to our history of thought and classically trained intuition. I would ask you to consider the same argument as you make here, but instead supporting the aether theory of light (empirically equivalent to SR, but with Lorentz's object contraction and clock-slowing time dilation taken as literally now as they were originally proposed). It is easier to get one's head around a theory where the aetheric wind messes up our clocks and measuring rods just enough to hide its existence from our ability to observe. But it is a complicated attempt to reconcile our wrong intuitions with the right evidence. CI simplifies in the same way as SR does... simply trimming via Occam's razor that unnecessary component... in CI's case the "reality" of the "state vector", as in SR's case it was the absoluteness of time.

I think you have that exactly backwards. Occam prefers simple explanations. In the MWI, you solve the Schrodinger equation and apply the Born rule, full stop. In the CI, you do all of that and in addition have to make some kind of arbitrary and ill-defined decision to project out part of the wavefunction. In the end, the physical predictions are supposed to be identical - so in my book, MW wins hands down.
 
  • #39
The_Duck said:
Pretty sure you've just postulated local hidden variables, which are ruled out by Bell's theorem. If you want the results of measurements to be fully determined by their past light cone, this is the same as having some hidden variable at each point in space with the hidden variables evolving in time according to local interactions. Any such model is ruled out by the observed violations of Bell's inequality.

Amongst all the claims Scott got wrong, this one may actually be right. The way I understood it, he didn't want to remove the non-local aspects but only to locate the cause of the "random" event in the backwards light cone (or on its shell). This doesn't violate Bell, because it doesn't automatically generate a fully local theory. In fact, explanations like this are being considered as possible solutions.

Cheers,

Jazz
 
  • #40
.Scott said:
I just read the article you cited, scanning Chapter 1, quickly reading most of sections 2 through 4. Of course I paid particular attention to certain sections, including section 3.2.3.
An essential part of this concept has not changed since I first read about it quite some time ago. The essential part is that "branches" or "worlds" or "partitions" or whatever you wish to call them are continuously generated - each eventually affecting the macroscopic world and each continuing on without ever remerging with the others.

Your article describes it as an interpretation - and it does not include the expanding information issue. I do believe the expanding information issue is what puts this into the category of a theory.

As for getting it published, I have little connection to the science community. I am a software engineer who has no notion of what process to follow to get published. And, of course, it would be a lengthy process for me to put the paper in terms acceptable for publication.

Nevertheless, there is clearly a potential issue with information density during superpositioning. If superpositioning is allowed to compound without limit, as with the MWI, the capacity of local space to hold any one of the "worlds" will be exceeded.

Scott, I have some questions for you that may clear this up and are relevant to your line of work.

If you have a 1 TB Hard Drive - how much information does it contain?

What if it's empty, i.e. all zeros?

What if we compress its contents into a file only a few bytes in size?

What if we decompress this file onto an empty hard drive?

When we talk of information, is there a distinction between the storage capacity and the actual information stored?

It seems to me that you're describing a hypothetical storage capacity scenario, but the process of decoherence ensures that this capacity is mostly unused.
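
To make this capacity-versus-content distinction concrete, here is a minimal Python sketch (an illustration added for concreteness, not from the original post; the drive is scaled down from 1 TB to 1 MB):

```python
import os
import zlib

capacity_bytes = 1_000_000  # a scaled-down "drive": 1 MB instead of 1 TB

# An empty drive (all zeros) has full storage capacity but almost no
# information content: it compresses to nearly nothing.
empty_drive = bytes(capacity_bytes)
print(len(zlib.compress(empty_drive)))   # on the order of 1 KB

# A drive full of random data is incompressible: only here does the
# information content actually match the storage capacity.
random_drive = os.urandom(capacity_bytes)
print(len(zlib.compress(random_drive)))  # roughly 1 MB, no savings
```

Decompressing the tiny file back onto an empty drive restores the full megabyte of zeros, which is the point of the questions above: storage capacity and stored information are different quantities.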
 
  • #41
Jazzdude said:
Amongst all the claims Scott got wrong, this one may actually be right. The way I understood it, he didn't want to remove the non-local aspects but only to locate the cause of the "random" event in the backwards light cone (or on its shell). This doesn't violate Bell, because it doesn't automatically generate a fully local theory. In fact, explanations like this are being considered as possible solutions.
You have already mentioned this in another thread. I don't think I consider this to be important enough to read a paper about it. Maybe you can outline the basic idea of how this could work?
 
  • #42
.Scott said:
Nevertheless, there is clearly a potential issue with information density during superpositioning. If superpositioning is allowed to compound without limit, as with the MWI, the capacity of local space to hold any one of the "worlds" will be exceeded.
I think it is wrong to picture the worlds as being contained in space. Also "superpositioning" is not a meaningful notion if you want to talk about information. Being a superposition is not a property of the state vector. A superposition is related to a basis and I can always make it disappear by changing to a basis which contains the state vector.

Also I think what craigi has said is relevant.
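
kith's basis-dependence point takes only two lines of linear algebra to see. A NumPy sketch (an added illustration, not part of the post):

```python
import numpy as np

# The state |+> = (|0> + |1>)/sqrt(2) is a superposition in the z-basis...
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(plus)            # [0.707 0.707]: two nonzero components

# ...but in the x-basis {|+>, |->} it is a single basis vector, and the
# "superposition" has disappeared without anything physical changing.
x_basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # rows: |+>, |->
print(x_basis @ plus)  # [1. 0.]: not a superposition in this basis
```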
 
  • #43
kith said:
You have already mentioned this in another thread. I don't think I consider this to be important enough to read a paper about it. Maybe you can outline the basic idea of how this could work?

The idea is to start with bare quantum theory in a realist setting. That means the universe has a "real" quantum state and the only other structure is the time evolution of this state. The Hilbert space is at this point completely unstructured (apart from the inner product) and all the physical features (like spacetime, particles, etc) are supposed to emerge from the local interaction generating the evolution.

In this setting, even without knowing the exact emergence mechanisms, you can make certain general statements about the uniqueness of quantum states with respect to their evolution. In other words, if you see a certain evolution, how accurately can you tell which quantum state the universe was in? If you are able to identify a general symmetry of dynamically indistinguishable states (and the symmetry mapping is continuous, commutes with the evolution, and does not rely on emerging structures), then you can remove that symmetry from the Hilbert space by taking the quotient with the induced equivalence relation. The resulting space does then still contain all features that could possibly dynamically emerge. So in the bare quantum theory setting that space and its dynamics would be one step closer to perceived emergent physical reality.

Now in the presence of an interaction constraint like that of Einstein locality, such a symmetry actually arises and the resulting quotient space looks just like the original Hilbert space, but the dynamics on the reduced space can have discontinuous and seemingly random transitions.

So you get a local description of the quantum universe that evolves unitarily most of the time, but upon certain interactions with (unobserved and randomly polarized) incoming photons you get sudden jumps that change the local state exactly as demanded by the measurement postulate (i.e. Born rule + collapse). These interactions look random, but they're really just determined by the SU(2)-uniform distribution of the polarization of ambient radiation.

To me, this is the most sensible and rigorous result concerning the measurement problem I've seen so far. And it's even testable because the defining processes can be isolated and investigated. So could it get any more important?

Cheers,

Jazz
 
  • #44
kaplan said:
I'm not sure what you're getting at. Of course making a measurement affects the future lightcone of the measurement event - all events do. That's standard relativistic causality. Am I missing your point?

A splitting now must be done in consideration of a future context, as shown by delayed-choice-type experiments. The light cone seems to extend into the future in a shape that does not look like a cone, such that there are correlations between distant objects outside of a traditional Einsteinian light cone.
 
  • #45
Jazzdude said:
To me, this is the most sensible and rigorous result concerning the measurement problem I've seen so far. And it's even testable because the defining processes can be isolated and investigated. So could it get any more important?
Well, it depends on personal priorities. Right now, I don't want to spend much time on reading papers on quantum foundations or discussing them in great depth. I didn't intend to imply that the idea is not relevant to the measurement problem.

What you write sounds interesting. Is there already a thread on this or do you care to open one? I think this is also related to the factorization issue of the MWI which has been debated quite a bit on the forum.
 
  • #46
kith said:
Well, it depends on personal priorities. Right now, I don't want to spend much time on reading papers on quantum foundations or discussing them in great depth. I didn't intend to imply that the idea is not relevant to the measurement problem.

No offense taken :). I just wanted to stress the potential of this approach and point out that it's not just some nearly esoteric speculation.

What you write sounds interesting. Is there already a thread on this or do you care to open one? I think this is also related to the factorization issue of the MWI which has been debated quite a bit on the forum.

I would love to start a thread on this. However, I didn't get a lot of feedback in this forum and the general interest seemed rather low, so I'm still not sure this would be appreciated.

And you're right, it's very much related to the factorization issue. In fact, the new approach allows one to define subsystems that are not factor spaces and explains why we can describe such subsystems (that also contain us describing the subsystem) with a pure state instead of a density operator - a fact that has been known and used experimentally, but had no proper explanation.

Cheers,

Jazz
 
  • #47
craigi said:
Scott, I have some questions for you that may clear this up and are relevant to your line of work.

If you have a 1 TB Hard Drive - how much information does it contain?

What if it's empty, i.e. all zeros?

What if we compress its contents into a file only a few bytes in size?

What if we decompress this file onto an empty hard drive?

When we talk of information, is there a distinction between the storage capacity and the actual information stored?
Let's take your 1 TB hard drive and put it into its own MWI.
Given the exponential growth of worlds in MWI, in a very short time we will have many 1 TB hard drives - more than 2^(2^43) of them. When that number is reached, you have a problem: beyond it, not all of the drives can be different. Your many-world hard drives have just reached their limit in MWI evolution.

It was probably rash of me to move MWI into the "theory" category since you might be able to find other non-MWI interpretations that have something equivalent to this data limitation. It's just easier to see it with MWI.

craigi said:
It seems to me that you're describing a hypothetical storage capacity scenario, but the process of decoherence ensures that this capacity is mostly unused.
I have a suspicion that decoherence may be induced when this capacity is reached.
One way of relieving the capacity crunch would be to include more volume to allow more capacity, but eventually you're going to run into other activities looking to expand into your territory.

In the MWI model, how quickly are new "partitions" created? For example, if I have a universe 1 km in diameter, or any other size you wish, filled with whatever you wish, then under MWI, how many "worlds" will I have after one Planck time?
 
  • #48
An example of what I mentioned about light cones is here:

http://arxiv.org/abs/quant-ph/0201134

Although they didn't suitably separate photons 0 and 3 in this particular implementation, theory says that those photons become entangled even when they never share any Einsteinian light cone.

So MWI, in my view, has a bit of explaining to do to claim the simplest-interpretation crown. Obviously there is a bit more to it than saying that the wave function evolves deterministically and in keeping with c.
 
  • #49
.Scott said:
Let's take your 1 TB hard drive and put it into its own MWI.
Given the exponential growth of worlds in MWI, in a very short time we will have many 1 TB hard drives - more than 2^(2^43) of them. When that number is reached, you have a problem: beyond it, not all of the drives can be different. Your many-world hard drives have just reached their limit in MWI evolution.

What problem? There's nothing about the MWI that says that any possible history or future needs to be unique.

.Scott said:
I have a suspicion that decoherence may be induced when this capacity is reached.

We can have some very large, complex coherent systems or some very small systems that undergo decoherence. It's hard to see how decoherence, which is caused by irreversibility, could be related to the amount of state generated in other histories and futures.

.Scott said:
One way of relieving the capacity crunch would be to include more volume to allow more capacity, but eventually you're going to run into other activities looking to expand into your territory.

Not if the space increases at the rate required to account for the new states and that's exactly what happens. This space isn't a physical resource like the space-time continuum, it's purely a mathematical construct created to store these states, so there's always exactly enough of it.
 
  • #50
craigi said:
What problem? There's nothing about the MWI that says that any possible history or future needs to be unique.
Excellent!
Now that you have a solution for the information inflation, may I presume that you recognize that such an inflation is taking place?

My whole point here was to make it obvious that when an MWI "split" happens, each world ends up with more information than it started with. If it splits into 32 worlds, at least for the moment while they are still unique, each of those 32 worlds has 5 bits more than it did before the split.

To go back to my original assertion, a "random" split should be fully dependent on the contents of its past light cone. If it is dependent on anything else, then the process is introducing information into the new worlds.
 
  • #51
.Scott said:
My whole point here was to make it obvious that when an MWI "split" happens, each world ends up with more information than it started with. If it splits into 32 worlds, at least for the moment while they are still unique, each of those 32 worlds has 5 bits more than it did before the split.

To go back to my original assertion, a "random" split should be fully dependent on the contents of its past light cone. If it is dependent on anything else, then the process is introducing information into the new worlds.

This is false, for a number of reasons. First of all, the unitary evolution of the universal state preserves all sensible information measures. Any additional world-specific entropy increase is due to the lack(!) of information about the rest of the state. This is counterintuitive, but mathematically consistent.

Secondly, you cannot apply the Bekenstein bound to the entirety of all multiverses, because each has to produce its own copy of quantum spacetime, which is what the bound applies to. So there is no measurable effect of the collection of multiverses on the spacetime structure of a single one of them.

Cheers,

Jazz
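
Jazzdude's first claim, that the global unitary evolution preserves information while each branch sees an entropy increase, can be checked numerically in the smallest possible model: one system qubit and a one-qubit "environment" (an added sketch, not from the thread; the CNOT stands in for a measurement-like interaction):

```python
import numpy as np

def entropy_bits(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # system qubit in |+>
zero = np.array([1.0, 0.0])                # environment qubit in |0>
psi = np.kron(plus, zero)                  # joint state, still a product

# A CNOT entangles system and environment (a toy "measurement"):
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
psi2 = CNOT @ psi

# The global state stays pure: entropy ~0 before and after the unitary.
print(entropy_bits(np.outer(psi, psi)))    # ~0.0
print(entropy_bits(np.outer(psi2, psi2)))  # ~0.0

# But tracing out the environment leaves the system maximally mixed:
M = psi2.reshape(2, 2)                     # amplitudes indexed [system, env]
rho_system = M @ M.conj().T                # partial trace over the env
print(entropy_bits(rho_system))            # 1.0 bit of "branch" entropy
```

No information is created globally; the extra bit is exactly the observer's lack of information about the rest of the state, as described above.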
 
  • #52
Jazzdude said:
Secondly, you cannot apply the Bekenstein bound to the entirety of all multiverses, because each has to produce its own copy of quantum spacetime, which is what the bound applies to. So there is no measurable effect of the collection of multiverses on the spacetime structure of a single one of them.

Cheers,

Jazz
I'm only applying the Bekenstein bound to each of the worlds, not to the collection. Each individual post-event world (Wn) will have additional information not contained in the original pre-random-event world (W0). I'm not addressing exactly how this information would show up in terms of entropy. I'm simply applying an overall "check" to whatever detailed calculations were used when someone concludes that there is no additional information.

Here's another way to say it. If everything available in W0 isn't enough to tell you which Wn we were going to end up in, then the choice that was made to get us to Wn involved other non-W0 information. And it's that other non-W0 information that changed our Wn - making it different from all the other Wn's.

Jazzdude said:
This is false, for a number of reasons. First of all, the unitary evolution of the universal state preserves all sensible information measures. Any additional world-specific entropy increase is due to the lack(!) of information about the rest of the state. This is counterintuitive, but mathematically consistent.
I had a more physics-oriented colleague translate that for me. Apparently you're claiming that the which-world information was already hidden in the pre-event world, W0. So when the "random" event occurs, it simply goes from hidden to apparent.

Unfortunately my argument against that is hard to state, but it's basically this: If there is hidden information affecting the transition from W0 to Wn, then let's try to find out more about that hidden information before we presume that n>1.
 
  • #53
.Scott said:
So how does this affect the minimum size of the universe? It increases the Bekenstein bound.

Unless there was a way of remerging universes, the result would be a universe that was continuously growing.

Worlds can merge. It happens all the time. Worlds "merge" whenever two lumps of probability amplitude converge in configuration space. For example, just after a particle passes through a double slit we could choose to describe its wave function as depicting two separate worlds, one in which the particle is just behind slit A and one in which the particle is just behind slit B. As the wave packets corresponding to these worlds propagate and expand, they overlap and the two worlds merge to some extent.

However, currently most of configuration space is empty and it is much more probable for lumps of probability amplitude to expand into empty regions of configuration space than it is for lumps to encounter other preexisting lumps of amplitude and merge.

In the far future, if there is a maximum entropy for the universe, eventually the wave function of the universe will have some nonzero amplitude for each possible configuration of the universe. The wave function of the universe will eventually "fill up" its configuration space. Then it will no longer make sense to talk of worlds splitting: there will already be a world for each configuration of the universe and all that can happen will be minor reshufflings of the amplitudes for each possibility. This is what the heat death of the universe looks like in MWI.

.Scott said:
There are a couple of ways of using the Schrodinger's cat story. I always took it as a way of illustrating "measurement", as if the cat were subject to QM collapse - the notion that the cat's destiny was not determined until the box was opened. Since I never expected anyone would seriously think that the cat was really ever both dead and alive, I extended the cat's predicament to what a particle would do when it crosses through both the left and right slit. Since the left and right particle "interact", so would the dead and alive cat.

The only "interaction" going on in either case is the interference of probability amplitudes. If you interrogate the final particle about its experiences, you will find either that (a) there is no record of which slit it went through, and it is impossible to determine, or (b) it went through a definite slit and experienced no interaction with any version of itself that went through the other slit. In neither case will you find any "communication" between versions of the particle that went through different slits. Similarly, in no version of the experiment could Schrodinger's cat come out alive but with the experience of having communicated with the dead version of itself.
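
The "interference of probability amplitudes" is easy to exhibit numerically. A toy two-slit calculation (an added sketch; the geometry and wavenumber are made-up units):

```python
import numpy as np

x = np.linspace(-5, 5, 1001)          # positions on the screen
L, d, k = 20.0, 1.0, 50.0             # slit distance, separation, wavenumber

r_A = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit A
r_B = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit B
psi_A = np.exp(1j * k * r_A)          # amplitude for "went through A"
psi_B = np.exp(1j * k * r_B)          # amplitude for "went through B"

# Adding amplitudes gives fringes, including near-zero dark minima:
coherent = np.abs(psi_A + psi_B)**2
print(coherent.min(), coherent.max())      # ~0 and ~4

# Adding probabilities (as if a which-slit record existed) gives none:
incoherent = np.abs(psi_A)**2 + np.abs(psi_B)**2
print(incoherent.min(), incoherent.max())  # 2.0 and 2.0, flat
```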
 
  • #54
DrChinese said:
An example of what I mentioned about light cones is here:

http://arxiv.org/abs/quant-ph/0201134

Although they didn't suitably separate photons 0 and 3 in this particular implementation, theory says that those photons become entangled even when they never share any Einsteinian light cone.

No, that's not accurate. Referring to Fig. 1 of that paper, theory makes predictions regarding what Victor will see when he makes a measurement on the photons he receives. But Victor's measurement is within the future lightcones of both Alice and Bob's measurements, so there is clearly no problem with causality even if Victor's measurements exhibit correlations.

It's true that in the (naive version of the) Copenhagen interpretation, Alice's measurement instantly entangles 0 and 3, even though they may be far out of her measurement's lightcone. Not so in MW, where Alice's measurement has no effect whatsoever on the states of 0 and 3. What it does do is produce a state that is very close to a density matrix diagonal in the basis corresponding to her measurements. That, together with Bob's measurement, produces a set of worlds in which Victor's measurements will exhibit correlations that he can detect given some information from Alice.
 
  • #55
In thinking about this it's perhaps useful to revisit the basic EPR setup. There, you can think of the entangled state of the two particles as corresponding to two worlds: one where Alice has the particle with spin up and Bob has the particle with spin down, and another with the opposite situation. Prior to making a measurement, Alice and Bob do not know which world they are in (actually, an identical copy of them exists in both worlds). After making a measurement the copies are no longer identical, because the state of the particle is now known to them, and each copy finds out which world it is in. In other words, if Alice measures up, she knows she is the Alice in the world where Bob must measure down. Of course there is another branch with an Alice that measured down, and that Alice knows Bob must measure up.

So it's not that Alice's measurement affected the state of Bob's particle, it's simply that Alice learned which Alice she was, and hence what Bob must measure in her world (or already did measure, if his measurement is in her past).

The same logic works perfectly well in every QM experiment, although expressing it in words that way gets very cumbersome when the setup is as complicated as the one DrChinese linked to.
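
The branch bookkeeping in the basic EPR setup can be written out explicitly. A small NumPy sketch (an added illustration of the description above):

```python
import numpy as np

# Singlet state of two qubits, ordered (Alice, Bob): (|01> - |10>)/sqrt(2).
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# Projectors for Alice's z-measurement outcomes on her qubit alone:
P_up   = np.kron(np.diag([1.0, 0.0]), np.eye(2))
P_down = np.kron(np.diag([0.0, 1.0]), np.eye(2))

for name, P in [("Alice up", P_up), ("Alice down", P_down)]:
    branch = P @ singlet
    weight = float(np.vdot(branch, branch).real)  # Born weight of the branch
    print(name, weight, branch / np.sqrt(weight))
# Alice up   -> weight 0.5, state |01>: in that branch Bob must get down.
# Alice down -> weight 0.5, state |10>: in that branch Bob must get up.
```

Nothing here acts on Bob's qubit; each branch simply contains an Alice who has learned which branch she is in.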
 
  • #56
The_Duck said:
Worlds can merge. It happens all the time. Worlds "merge" whenever two lumps of probability amplitude converge in configuration space. For example, just after a particle passes through a double slit we could choose to describe its wave function as depicting two separate worlds, one in which the particle is just behind slit A and one in which the particle is just behind slit B. As the wave packets corresponding to these worlds propagate and expand, they overlap and the two worlds merge to some extent.
From what I gather, most of the MWI advocates on this forum do not want to treat superpositioning and the MWI as basically the same thing. And I think they are right. In the case of superpositioning, you're modelling the photon in the period between the time it is emitted and the time it is detected on the other side of the slits. But that is not to say that anything really happened during that period. The model may suggest intermediate states, but we know that there really isn't any which-way information. In contrast, the multi-world model says that you are creating multiple real worlds - each with its own permanent measurable photon location.

The_Duck said:
More to the point, if there is a maximum entropy for the universe, eventually the wave function of the universe will have some nonzero amplitude for each possible configuration of the universe. The wave function of the universe will eventually "fill up" its configuration space. Then it will no longer make sense to talk of worlds splitting: there will already be a world for each configuration of the universe and all that can happen will be minor reshufflings of the amplitudes for each possibility. This is what the heat death of the universe looks like in MWI.
I am not clear on the connection between entropy and information. Supposedly, entropy is increasing continuously while quantum information is neither created nor destroyed. Some of the conversation here seems to tie them together very closely.
 
  • #57
The_Duck said:
The only "interaction" going on in either case is the interference of probability amplitudes. If you interrogate the final particle about its experiences, you will find either that (a) there is no record of which slit it went through, and it is impossible to determine, or (b) it went through a definite slit and experienced no interaction with any version of itself that went through the other slit. In neither case will you find any "communication" between versions of the particle that went through different slits. Similarly, in no version of the experiment could Schrodinger's cat come out alive but with the experience of having communicated with the dead version of itself.
Okay. Let's take the interferometer. If a particle shows up in the darkest area of the interference pattern, then could that be described as interacting with its virtual partner that struck an obstacle before completing its path to interference?
 
  • #58
.Scott said:
From what I gather, most of the MWI advocates on this forum do not want to treat superpositioning and the MWI as basically the same thing. And I think they are right. In the case of superpositioning, you're modelling the photon in the period between the time it is emitted and the time it is detected on the other side of the slits. But that is not to say that anything really happened during that period. The model may suggest intermediate states, but we know that there really isn't any which-way information. In contrast, the multi-world model says that you are creating multiple real worlds - each with its own permanent measurable photon location.

The difference between the worlds of the MWI and a single particle in a superposition is only that in distinct worlds of the MWI, macroscopic objects are in superpositions of states with macroscopically distinct properties ("Schrodinger's cat states"), instead of just individual particles being in superpositions of states with distinct properties. For example, a particle in a superposition of (approximate) position eigenstates separated by 10 meters versus a human being in a superposition of (approximate) position eigenstates separated by 10 meters.

The difference is simply one of degree - there is nothing fundamental that distinguishes between the two (especially when you remember that all states can be written as superpositions). The reason to draw the distinction is that due to decoherence, interference between states with macroscopic differences in at least some properties (like position) is extremely small, by contrast with interference between states in which only a few particles have properties that differ.
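
The size of that suppression is easy to estimate. A back-of-the-envelope sketch (added; the overlap value is hypothetical):

```python
# Each environment degree of freedom scattered off the system picks up a
# state depending on the system's branch; if successive environment states
# overlap by a factor c < 1, the system's off-diagonal coherence shrinks
# geometrically with the number N of such encounters.
c = 0.9           # assumed per-encounter overlap <e0|e1>
for N in [0, 1, 10, 100]:
    print(N, 0.5 * c**N)   # 0.5, 0.45, ~0.17, ~1.3e-5
# Macroscopic differences involve astronomically many encounters, which is
# why interference between macroscopically distinct branches is negligible.
```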
 
  • #59
.Scott said:
I'm only applying the Bekenstein bound to each of the worlds, not to the collection.

In post #47 you add up the information on hard drives from all worlds that split off and argue that this information grows too large. So you ARE applying the bound to all the worlds, which you cannot do for reasons I listed earlier.

Also, you cannot even add information in general. Information is sub-additive.

Cheers,

Jazz
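
Subadditivity here means S(AB) <= S(A) + S(B) for the von Neumann entropy, with strict inequality for entangled states. A quick NumPy check with a Bell state (an added illustration):

```python
import numpy as np

def entropy_bits(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(bell, bell)

M = bell.reshape(2, 2)          # amplitudes indexed [A, B]
rho_A = M @ M.conj().T          # partial trace over B
rho_B = M.T @ M.conj()          # partial trace over A

print(entropy_bits(rho_AB))                      # ~0.0: the whole is fully known
print(entropy_bits(rho_A), entropy_bits(rho_B))  # 1.0 and 1.0
# S(AB) = 0 < S(A) + S(B) = 2: you cannot get the information content of
# the whole by adding up the information content of the parts.
```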
 
  • #60
Jazzdude said:
In post #47 you add up the information on hard drives from all worlds that split off and argue that this information grows too large. So you ARE applying the bound to all the worlds, which you cannot do for reasons I listed earlier.

Also, you cannot even add information in general. Information is sub-additive.

Cheers,

Jazz
No. You missed the point of that post. Any one drive can hold 1 TB, which is 2^43 bits. That allows each drive to be in any of 2^(2^43) states. So when a drive "splits" it moves to multiple states, and splits again into many more states. Ultimately, there will be more than 2^(2^43) 1 TB drives in a single generation. At that point, some of those drives will have to have identical content to each other - a form of de facto merging.

The point of that exercise was to demonstrate that you really are using information capacity as you allow the 1TB drives to morph.

For example, let's say that the "physics" of our 1 TB drive is that each Planck-time period, all bytes are shifted towards the end and the first byte is set to a random value. In the MWI of this drive, if you look at its contents, the first byte will tell you which world the drive entered on the last Planck-time cycle, and subsequent bytes tell you what happened on the cycles before that. After 1T cycles, information will be lost as the last byte on the drive is shifted into oblivion.

This is all fine for our 1 TB drive, but the prevailing thought in our universe is that information is never obliterated or created.

I am looking at the explanations that have been posted about how real-world MWI avoids this information inflation. I am not convinced that any of them avoid trivializing information conservation.
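
For readers following the counting, Scott's pigeonhole argument scales down to a toy example (added; 3 bits stand in for the 2^43):

```python
bits = 3                 # toy drive; the 1 TB case replaces 3 with 2**43
states = 2 ** bits       # 8 possible distinct drive contents

worlds = 1
for generation in range(1, 6):   # suppose each event doubles the worlds
    worlds *= 2
    print(generation, worlds, min(worlds, states))
# From generation 4 on there are more worlds (16, 32, ...) than possible
# contents (8), so some drives must be identical: the "de facto merging"
# described above. Whether that is a problem for the MWI is exactly what
# the replies below dispute.
```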
 
  • #61
.Scott said:
For example, let's say that the "physics" of our 1 TB drive is that each Planck-time period, all bytes are shifted towards the end and the first byte is set to a random value.

There are no random events according to the MWI. The Schrodinger equation is a differential equation, so the wavefunction of the universe evolves deterministically.

.Scott said:
Here's another way to say it. If everything available in W0 isn't enough to tell you which Wn we were going to end up in, then the choice that was made to get us to Wn involved other non-W0 information. And it's that other non-W0 information that changed our Wn - making it different from all the other Wn's.

There is no "Wn we end up in". We end up in all the worlds.

You seem to have some basic misunderstandings of the MWI.
 
  • #62
kaplan said:
There are no random events according to the MWI. The Schrodinger equation is a differential equation, so the wavefunction of the universe evolves deterministically.
Right!
 
  • #63
kaplan said:
There are no random events according to the MWI. The Schrodinger equation is a differential equation, so the wavefunction of the universe evolves deterministically.

There is no "Wn we end up in". We end up in all the worlds.

You seem to have some basic misunderstandings of the MWI.
As I understand it, a different version of us ends up in each world. For example, a dead cat in one and a live one in another.

It's nice that the Schrodinger equation is deterministic, but does it give a single unique result for a wave function collapse? If it gives a selection of possible results, then it is deterministic but incomplete. In order to complete it, you would need additional information.
 
  • #64
.Scott said:
It's nice that the Schrodinger equation is deterministic, but does it give a single unique result for a wave function collapse? If it gives a selection of possible results, then it is deterministic but incomplete. In order to complete it, you would need additional information.

Why? In MWI there is no wave function collapse.
 
  • #65
.Scott said:
As I understand it, a different version of us ends up in each world. For example, a dead cat in one and a live one in another.

Yes.

It's nice that the Schrodinger equation is deterministic, but does it give a single unique result for a wave function collapse?

There's no collapse (in the sense of a projection) in MW. Instead, there's a split. And yes, the split is deterministic.

If it gives a selection of possible results, then it is deterministic but incomplete. In order to complete it, you would need additional information.

When you make a measurement and cause a split, one version of you obtains one result and the other version obtains the other. That's all there is, and no additional information is needed. (It's true that the Born rule is needed for splits that aren't equally weighted, but I've always suspected there's a way to derive it from some statistical considerations.)
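
For concreteness, "weighted splits" means branch weights given by |amplitude|^2. A small Python sketch (added; note it assumes the Born rule rather than deriving it, which is precisely the contested step):

```python
import numpy as np

rng = np.random.default_rng(42)

state = np.array([np.sqrt(0.8), np.sqrt(0.2)])  # a|0> + b|1>, unequal split
weights = np.abs(state) ** 2                    # Born weights [0.8, 0.2]

# Sampling outcomes with those weights reproduces observed frequencies:
samples = rng.choice([0, 1], size=100_000, p=weights)
print(np.bincount(samples) / len(samples))      # approximately [0.8, 0.2]
```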
 
  • #66
.Scott said:
It's nice that the Schrodinger equation is deterministic, but does it give a single unique result for a wave function collapse? If it gives a selection of possible results, then it is deterministic but incomplete. In order to complete it, you would need additional information.

You just don't seem to get it.

In the MWI there is no collapse.

There is an issue about assigning a confidence level or probability to what world you experience - and how such comes about in a deterministic theory - but that is a subtle issue that has been thrashed out before - you can do a search and find the thread if interested.

Thanks
Bill
 
  • #67
kaplan said:
but I've always suspected there's a way to derive it from some statistical considerations.

There is, e.g., Gleason's theorem, and there are arguments from decision theory by Wallace and others.

However they are somewhat controversial and a bit of a Google search will bring up both sides of the argument.

Thanks
Bill
 
  • #68
.Scott said:
No. You missed the point of that post.

Fair enough. Then feel free to take my comments in the context of your post #24, where you argue in the way I assumed you were also arguing in your later post.

Any one drive can hold 1 TB, which is 2^43 bits. That allows each drive to be in any of 2^(2^43) states. So when a drive "splits" it moves to multiple states, and splits again into many more states. Ultimately, there will be more than 2^(2^43) 1 TB drives in a single generation. At that point, some of those drives will have to have identical content to each other - a form of de facto merging.

No. First of all, the complexity of a physical hard drive clearly exceeds the classical information stored on it. Secondly, you have an infinite(!)-dimensional environment that can get entangled with your subsystem in any possible way. You will never run out of new different states, ever. This is pretty much Hilbert's Hotel.

The point of that exercise was to demonstrate that you really are using information capacity as you allow the 1TB drives to morph.

What is "morph" supposed to mean in this context? And like I said earlier, the unitarity of the evolution preserves global information.

For example, let's say that the "physics" of our 1 TB drive is that each Planck-time period, all bytes are shifted towards the end and the first byte is set to a random value. In the MWI of this drive, if you look at its contents, the first byte will tell you which world the drive entered on the last Planck-time cycle, and subsequent bytes tell you what happened on the cycles before that. After 1T cycles, information will be lost as the last byte on the drive is shifted into oblivion.

I have no idea what you're trying to say with that.

This is all fine for our 1 TB drive, but the prevailing thought in our universe is that information is never obliterated or created.
I am looking at the explanations that have been posted about how real-world MWI avoids this information inflation. I am not convinced that any of them avoid trivializing information conservation.

There is no objective information inflation at all. All the information that you think is generated is merely subjective to the restricted view of one world.

Also, in your past posts you have been demonstrating your confusion about some key concepts quite clearly. Just a few things: Decoherence is not random, it's a deterministic process. Virtual particles have nothing to do with interference minima.

Cheers,

Jazz
 
  • #69
Jazzdude said:
I would love to start a thread on this. However, I didn't get a lot of feedback in this forum and the general interest seemed rather low, so I'm still not sure this would be appreciated.
As I mentioned in my previous post, I probably wouldn't read up on issues which take much time but I would at least ask some questions and share my point of view. The people who are interested in the factorization issue would probably at least comment, too. So give it a try ;-)

You could simply describe the idea like you did in the last post and link to paper(s). However, if they are arxiv-only, you may want to be cautious with claims and describe it in a more question-like manner.
 
  • #70
.Scott said:
I am looking at the explanations that have been posted about how real-world MWI avoids this information inflation. I am not convinced that any of them avoid trivializing information conservation.
I think the problem in this thread is that you are talking about classical information while the other people talk about quantum information.

In a classical setup, you can predict the outcome of a coin toss if you know its position, its orientation and the respective velocities. So in principle a series of measurements which determines these values yields a state of maximal information, which allows you to calculate whether you get heads or tails.

In the QM setup, you can't know position and velocity at the same time. If you know the position of a coin, a measurement of the velocity has the effect that your coin will be put in a superposition of position eigenstates. So a state of maximal information can't include maximal information about position as well as velocity and doesn't allow you to calculate the outcome of a quantum coin toss.

In terms of classical information, information about position is destroyed by the velocity measurement of the quantum coin. You "forget" its position by creating a superposition of position eigenstates. This is why quantum information is defined in a way which assigns equal information content to all pure quantum states. This is also reflected by the fact that being a superposition is not a property of the state (as I explained in post #42).
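
The quantum coin can be mimicked with a qubit, treating the z-basis as "position" and the x-basis as "velocity" (an added analogy, not from the post):

```python
import numpy as np

rng = np.random.default_rng(1)

state = np.array([1.0, 0.0])             # definite "position": |0>
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Measure the complementary "velocity" observable (x-basis):
p_plus = abs(np.vdot(plus, state)) ** 2            # 0.5
state = plus if rng.random() < p_plus else minus   # now |+> or |->

# Re-measure "position": the original definite value is gone.
p_zero = abs(np.vdot(np.array([1.0, 0.0]), state)) ** 2
print(p_zero)   # 0.5 in either branch: the old information is erased
```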
 
