Question regarding the Many-Worlds interpretation

  • #361
S.Daedalus said:
No. There's a mathematical argument that explains why things look the way they do in situation A, but not in situation B; so from the point of view of explaining why things look a certain way, A gives the better explanation.


No, but it mentions subspaces of Hilbert space. And the general state won't be in any of the relevant subspaces. That's why you need the collapse.

You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
 
  • #362
stevendaryl said:
You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
Yes, I am, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.
 
  • #363
S.Daedalus said:
Yes, I am, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.

But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed. We have a probability distribution on observations; we don't need observations to be mutually exclusive. If I'm an observer making observation O_1, I don't see how it's relevant to me that there might be another, identical observer making observation O_2.
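Step 3 can be sketched numerically for a single-qubit "observer" (a minimal illustration of mine, not from the post; the function and variable names are illustrative):

```python
import math

# Hypothetical sketch of step 3: the Born-rule measure over "observations"
# for an observer consisting of the single operator S_z on a qubit.

def born_measure(state):
    """Map each observation (an eigenvalue assignment) to its Born weight.

    `state` is (alpha, beta): amplitudes in the S_z eigenbasis {|u>, |d>}.
    """
    alpha, beta = state
    return {"+1": abs(alpha) ** 2, "-1": abs(beta) ** 2}

state = (1 / math.sqrt(2), 1j / math.sqrt(2))   # an equal superposition
measure = born_measure(state)

# The weights form a probability measure over the ensemble of observations
assert abs(sum(measure.values()) - 1.0) < 1e-12
assert abs(measure["+1"] - 0.5) < 1e-12
```

The point of the sketch is that the measure is defined directly on the ensemble of observations, with no step where one of them is singled out as "actual".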
 
  • #364
stevendaryl said:
But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed.
Well, you're just postulating the Born rule by fiat, so it's not.
 
  • #365
S.Daedalus said:
Well, you're just postulating the Born rule by fiat, so it's not.

My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
 
  • #366
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
It is. But not to deriving the Born rule using Gleason's theorem.
 
  • #367
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.

It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure) and call that one "actual" and then removing all the others (or calling them "counterfactuals"). But I don't see how this additional step does anything for the Born rule. The measure existed before the collapse (it has to, since the measure is used to select one outcome for the collapse).
 
  • #368
S.Daedalus said:
It is. But not to deriving the Born rule using Gleason's theorem.

I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
 
  • #369
stevendaryl said:
It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure) and call that one "actual" and then removing all the others (or calling them "counterfactuals").
No, collapse means taking (for instance) a pure, superposed state, and reducing it to a proper mixture. Since the latter is a state that is in one of the eigenspaces of the observable we're measuring, and Gleason's theorem provides a measure on subspaces, we can thereafter associate a probability of being in one of the eigenspaces with the state, which we could not before, as we had a superposed state that was not in any of the eigenspaces. If you keep this superposed state, then Gleason gives you just as much a measure on the subspaces; it's just that the state is not in any of them.
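To make this distinction concrete, here is a small numeric sketch (mine, not from the thread; all names are illustrative) of the measure mu(P) = Tr(rho P) evaluated both on a pure superposition and on the corresponding post-collapse proper mixture:

```python
# Sketch: the Gleason/Born measure mu(P) = Tr(rho P) on 2x2 density
# matrices. (Gleason's theorem itself requires dimension >= 3; the 2D
# case here is only to keep the arithmetic short.)

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def mu(rho, proj):
    """The measure mu(P) = Tr(rho P) assigned to the subspace with projector P."""
    return trace(mat_mul(rho, proj))

alpha2, beta2 = 0.36, 0.64
a, b = alpha2 ** 0.5, beta2 ** 0.5
# Pure superposition rho = |psi><psi| with |psi> = a|u> + b|d> (real amplitudes)
rho_pure = [[a * a, a * b], [a * b, b * b]]
# Proper mixture after collapse: diagonal, same weights
rho_mixed = [[alpha2, 0], [0, beta2]]
P_up = [[1, 0], [0, 0]]   # projector onto the spin-up eigenspace

# Both states assign the same measure to the eigenspace...
assert abs(mu(rho_pure, P_up) - alpha2) < 1e-12
assert abs(mu(rho_mixed, P_up) - alpha2) < 1e-12
# ...but only the mixture is a classical ensemble over the eigenspaces
# (the pure state retains nonzero off-diagonal terms).
assert rho_pure[0][1] != 0 and rho_mixed[0][1] == 0
```

The numbers make S.Daedalus's point visible: the measure itself exists in both cases; what collapse changes is whether the state is a classical ensemble over the eigenspaces that the measure weighs.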
 
  • #370
stevendaryl said:
I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.
 
  • #371
S.Daedalus said:
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.

Why do we need the wave function to be in one of the subspaces?
 
  • #372
stevendaryl said:
Why do we need the wave function to be in one of the subspaces?
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value o_i measuring the observable \mathcal{O}, then making the same measurement again, we want to observe o_i again; but we will only do so if the state is in the subspace spanned by the states |o_i\rangle such that \mathcal{O}|o_i\rangle = o_i|o_i\rangle.
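The repeatability point can be checked numerically; a minimal sketch of mine (assuming a single qubit measured in the S_z basis; all names are illustrative):

```python
# Once the state lies in the o_i eigenspace, a second measurement of the
# same observable yields o_i with probability 1.

def prob_up(state):
    """Born probability of outcome 'up' for state (alpha, beta) in the S_z basis."""
    alpha, _ = state
    return abs(alpha) ** 2

def collapse_to_up(state):
    """Project onto the 'up' eigenspace and renormalize."""
    alpha, _ = state
    return (alpha / abs(alpha), 0.0)

psi = (0.6, 0.8)                 # superposition: P(up) = 0.36 before collapse
assert abs(prob_up(psi) - 0.36) < 1e-12
psi_after = collapse_to_up(psi)  # state now lies in the 'up' eigenspace
assert abs(prob_up(psi_after) - 1.0) < 1e-12  # repeat measurement: 'up' for sure
```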
 
  • #373
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation

I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
 
  • #374
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value o_i measuring the observable \mathcal{O}, then making the same measurement again, we want to observe o_i again; but we will only do so if the state is in the subspace spanned by the states |o_i\rangle such that \mathcal{O}|o_i\rangle = o_i|o_i\rangle.

You don't need collapse to get the result that repeated observations give the same value for an observable.
 
  • #375
stevendaryl said:
You don't need collapse to get the result that repeated observations give the same value for an observable.

Maybe I should expand on this point.

Let's let |u\rangle and |d\rangle be the "up" and "down" states of an electron. Let |UUDDDU...\rangle be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle
|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle

So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)

Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
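The bookkeeping above can be sketched in a few lines (my illustrative code, not from the post: each branch carries a memory string, a spin, and an amplitude):

```python
import math

# Each "measurement" unitarily appends the electron's spin to the
# detector's memory string; amplitudes ride along unchanged.

def measure_step(branches):
    return [(mem + spin, spin, amp) for (mem, spin, amp) in branches]

alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)
branches = [("", "U", alpha), ("", "D", beta)]   # |> (x) (a|u> + b|d>)

for _ in range(3):
    branches = measure_step(branches)

# Each branch's memory is internally consistent: all U's or all D's,
# never a mixed record -- repeated measurements agree without collapse.
assert branches == [("UUU", "U", alpha), ("DDD", "D", beta)]
```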
 
  • #376
stevendaryl said:
I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them. It's like, you have a distribution of marbles in a hat, but in order for that to apply, you must draw a marble from that hat; if you don't, the distribution just doesn't, and can't, tell you anything. The superposed state simply does not correspond to a marble in the hat.

I've laid out the argument in more detail in these two posts; I think this is as clear as I can make it.

stevendaryl said:
Maybe I should expand on this point.

Let's let |u\rangle and |d\rangle be the "up" and "down" states of an electron. Let |UUDDDU...\rangle be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle
|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle

So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)

Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
Well, this only works if you assume that your memory changes upon each new measurement; that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D, but also of believing that you observed D before. This is in fact a version of Everett's theory that Bell considered (and rejected, for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while it is in a superposition.
 
  • #377
stevendaryl said:
So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)
This is what we observe, but for the MWI to work a proof is required. Currently we rely on decoherence to show that this is approximately true, i.e. that states like |U>*|d> are "dynamically suppressed".

stevendaryl said:
Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)
Again this is what we observe, and for which a proof is required.

The 1st step means that a preferred basis is singled out i.e. that off-diagonal terms are suppressed, the 2nd step means that the preferred basis (branching) is stable i.e. that off-diagonal terms stay suppressed.

But besides the fact that a sound proof for realistic systems seems to be out of reach, it is unclear whether Gleason's theorem can tell us anything in the MWI context (I think this is what mfb wanted to stress). The theorem says that the only valid probability measure is the Born measure. But no theorem tells us why we should interpret a measure as a probability! The question is "probability for what?"
For the state being in a subspace? No, the state is still in a superposition.
For observing "UU"? Why should a prefactor of a specific subspace be a probability?

I know that many do not like why-questions in physics, but this why-question is key for the whole MWI debate!

There is one fact regarding the MWI which is really very disappointing: the whole story starts with a clear and minimalistic setup. But the ideas to prove (or to motivate) the Born rule have become awfully complicated over the last decades. That means that MWI misses the point!
 
  • #378
mfb said:
It [the MWI] allows one to formulate theories that predict amplitudes, and gives a method to do hypothesis testing based on those predictions.
I still have some more tangible questions than the apparent dead end of this thread "But what about the probabilities?". One is, how do I measure these amplitudes? The result of a measurement is an eigenvalue. The result of many measurements is a string of eigenvalues. How do I deduce amplitudes from this?
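In practice the usual answer to this question is relative frequencies over many runs; a quick simulation of mine (assuming Born-rule sampling with an illustrative weight of 0.36; all names are hypothetical):

```python
import random

# Estimating |alpha|^2 from a string of eigenvalues by relative frequency.

def sample_outcomes(alpha2, n, seed=0):
    """Draw n measurement outcomes, with Born probability alpha2 for '+1'."""
    rng = random.Random(seed)
    return ["+1" if rng.random() < alpha2 else "-1" for _ in range(n)]

outcomes = sample_outcomes(alpha2=0.36, n=100_000)
estimate = outcomes.count("+1") / len(outcomes)

# The relative frequency recovers |alpha|^2 (not alpha's phase), up to a
# statistical error of order 1/sqrt(n).
assert abs(estimate - 0.36) < 0.01
```

Of course, this only pushes the question back one step: it presupposes that the amplitude-squared weights behave like sampling probabilities, which is exactly what is at issue in this thread.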
 
  • #379
stevendaryl said:
The terminology of the universe "splitting" is not really accurate. A better way to think of it, in my opinion, is that the set of possibilities is always the same, and the "weight" associated with each possibility just goes up or down with time.
The naive approach to branches is that every state in the (improper) mixture after decoherence defines a branch. So before the measurement we have only one branch and afterwards we have many.

Do you suggest that we should take every one-dimensional subspace of the full Hilbert space as a branch?
 
  • #380
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement, that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D, but also believing of having observed D before;
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.

This is very similar to the starting point of Everett: if you introduce a second observer who observes the composite system, the QM calculations for the different observers contradict each other if you make the collapse assumption. So Copenhagen has to limit the applicability of QM in order to be consistent.
 
  • #381
S.Daedalus said:
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them.

You keep saying that, but that just doesn't follow.
 
  • #382
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement,

If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?

that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D

In my notation, D isn't an "observation", it's a state of the observer in which the observer remembered measuring the spin and finding it spin-down.

but also believing of having observed D before; this is in fact a version of Everett Bell considered (and rejected, for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while being in a superposition.

I don't know what you're talking about. What definite state are you talking about?
 
  • #383
stevendaryl said:
You keep saying that, but that just doesn't follow.
I have a probability distribution over marbles in a hat as follows: P(Green)=0.5, P(Blue)=0.3, P(Red)=0.2. I draw a marble from a nearby urn. What's the probability that it's green?
 
  • #384
stevendaryl said:
If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?
I think he means that an observer with state |U> could get into a state |DD> by measuring the state of the electron again and thus would have to change his knowledge of the past. I have explained why this is wrong in my previous post.
 
  • #385
kith said:
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.
I took the setup to be essentially that of a quantum system and a detector with a memory; in order to decide the contents of the memory, you have to do a measurement on the complete system. In the first measurement, for instance, you observe the detector to find that it has observed the system in the spin-up state, and that it consequently has its register in the 'have observed spin up' state. Now the detector measures again, and once you observe it and its memory, you (may) find that it has now observed the system in the spin-down state, and consequently has its register in the 'have observed spin down' state. But furthermore, it will now also 'remember' having observed the system in the spin-down state before, since the register for the previous observation will now also be in the 'have observed spin down' state. (This will of course be completely in line with your own memory, provided memory works as a kind of self-measurement of some sort of 'register'; this is basically Bell's 'Bohmian mechanics without the trajectories' reading of Everett.)
 
  • #386
tom.stoer said:
This is what we observe, but for the MWI to work a proof is required. Currently we do believe in decoherence to provide the proof that this is approximately true, i.e. that states like |U>*|d> are "dynamically suppressed".


Again this is what we observe, and for which a proof is required.

You're absolutely correct there. That (in my opinion) is a real problem with any no-collapse interpretation of quantum mechanics. Measurement has to be an irreversible process, and there is a question of how irreversibility can arise from reversible dynamics. Of course, it's the same sort of problem as in classical physics, but there people have the argument that fundamental physics is reversible, but that the initial conditions are such that one direction is much more probable than the other. I don't know how that argument would work in MWI.

The 1st step means that a preferred basis is singled out i.e. that off-diagonal terms are suppressed, the 2nd step means that the preferred basis (branching) is stable i.e. that off-diagonal terms stay suppressed.

My step 1 assumes that there is such a thing as a measurement device. By definition, a measuring device for a property such as spin must become correlated with the spin in a definite way. So you're certainly right that there is a gap in the argument, which is the proof that there are such things as measurement devices. That's a difficult problem, it seems to me, because it necessarily, as I said, involves irreversibility, which involves huge numbers of particles. But I don't see that that is particularly a problem for MWI. You have to have measurement devices to make a "collapse" interpretation work, as well.

But besides the fact that a sound proof for realistic systems seems to be out of reach, it is unclear whether Gleason's theorem can tell us anything in the MWI context (I think this is what mfb wanted to stress). The theorem says that the only valid probability measure is the Born measure. But no theorem tells us why we should interpret a measure as a probability! The question is "probability for what?"
For the state being in a subspace? No, the state is still in a superposition.
For observing "UU"? Why should a prefactor of a specific subspace be a probability?

When you're talking measures on "possible worlds", you're really not talking about probability, in the strict sense, because probability is connected with the results of repeated measurements in a single "possible world". So it's not a probability, it's a measure on "possible worlds", where a possible world is given by what I was calling an "observation", which is an assignment of eigenvalues to a mutually commuting set of observables.


I know that many do not like why-questions in physics, but this why-question is key for the whole MWI debate!

There is one fact regarding the MWI which is really very disappointing: the whole story starts with a clear and minimalistic setup. But the ideas to prove (or to motivate) the Born rule have become awfully complicated over the last decades. That means that MWI misses the point!

To me, what motivates MWI is the fact that alternatives such as the collapse hypothesis propose the existence of an interaction (the collapse) which only affects macroscopic objects with persistent memories (like humans and devices) but not microscopic objects like electrons and atoms. That is very suspicious to me. Surely, the physics of macroscopic objects should follow from the physics of the microscopic objects they're made of?

So to me, intellectual coherence requires either that quantum mechanics (the smooth evolution of the wave function according to Schrodinger's equation) applies to all objects, no matter how small, or else there is some new type of interaction that should be observable in the small. There are "stochastic" interpretations of quantum mechanics that don't have a measurement-induced collapse, but in which particles' wave functions are constantly collapsing at random. I don't very much like that, but I like it better than the usual "collapse" interpretation, which makes an unsatisfying distinction between macroscopic and microscopic objects.

I think of MWI as more of a research program than an interpretation--it's really seeing how far we can push a version of QM that does not have a "collapse". If you don't have collapse, then macroscopic superpositions are inevitable, and "Many Worlds" is just a way to picture macroscopic superpositions.
 
  • #387
S.Daedalus said:
I took the setup to be essentially that of a quantum system and a detector with a memory; in order to decide the contents of the memory, you have to do a measurement on the complete system.

If you have a detector with memory, you can let it run for, say, 100 measurements. Afterward, assuming the correctness of the detector and its memory, the combination of Detector + Electron will, I'm claiming, be in the state:

\alpha (|UUUU...U\rangle \otimes |u\rangle) + \beta (|DDD...D\rangle \otimes |d\rangle)

So, in a "collapse" interpretation, we can push the moment of collapse back to the time that a human being examines the content of the detector's memory, and he will find 100 "up" measurements with probability |\alpha|^2 and 100 "down" measurements with probability |\beta|^2. So there is no need to assume that the measuring device "collapsed" the wave function--you can put off the collapse till later.

But then, the same sort of putting off of collapse can happen with the experimenter. You can assume that the experimenter (+ detector + electron) is in a superposition of states until he reports his findings to his advisor. Then the advisor's observing his student's state causes the student's state to collapse. Or you can put off the moment of collapse further...

So if you stick with a "collapse" interpretation, then my claim is that there is no feasible experiment that can tell you when the collapse took place: at the detector, at the experimenter, at his advisor, etc. I consider MWI as a kind of limit in which you put off the moment of collapse indefinitely far into the future.
 
  • #388
stevendaryl said:
So if you stick with a "collapse" interpretation, then my claim is that there is no feasible experiment that can tell you when the collapse took place: at the detector, at the experimenter, at his advisor, etc.
This I agree with totally. (And such 'Wigner's friend'-type tales are the reason I consider all collapse theories to be unworkable.)
 
  • #389
stevendaryl said:
To me, what motivates MWI is the fact that alternatives such as the collapse hypothesis propose the existence of an interaction (the collapse) which only affects macroscopic objects with persistent memories (like humans and devices) but not microscopic objects like electrons and atoms. That is very suspicious to me. Surely, the physics of macroscopic objects should follow from the physics of the microscopic objects that it's made out of?

So to me, intellectual coherence requires either that quantum mechanics (the smooth evolution of the wave function according to Schrodinger's equation) applies to all objects, no matter how small, or else there is some new type of interaction that should be observable in the small. There are "stochastic" interpretations of quantum mechanics that don't have a measurement-induced collapse, but instead particles are always randomly having their wave functions collapse. I don't very much like that, but I like it better than the usual "collapse" interpretation, which makes an unsatisfying distinction between macroscopic and microscopic objects.

I think of MWI as more of a research program than an interpretation--it's really seeing how far can we push a version of QM that does not have a "collapse". If you don't have collapse, then macroscopic superpositions are inevitable, and "Many Worlds" is just a way to picture macroscopic superpositions.
Excellent, I fully agree, especially with your last remark regarding whether MWI is an interpretation or a research program. It seems that what had been an interpretation was - at least partially - turned into a research program: "emergence of dynamically isolated and stable branches due to decoherence", "derivation of Born's rule", ... So there is less room for interpretations and more need for theorems. In addition there are means to motivate statements or even derive them based on the formalism, which is not possible in collapse interpretations. So MWI is really a long and stony path for the brave ...

And it seems that we agree on the key issues, namely how the time-asymmetry observed "within a branch" emerges from a time-symmetric formalism, and how observed statistical frequencies can be explained via a probability (or a measure, or whatever) emerging from a fully causal and deterministic formalism.
 
  • #390
S.Daedalus said:
I have a probability distribution over marbles in a hat as follows: P(Green)=0.5, P(Blue)=0.3, P(Red)=0.2. I draw a marble from a nearby urn. What's the probability that it's green?

I don't know, and I don't see the relevance. What I'm saying is that the wavefunction is used to calculate a measure on "possible worlds", where a possible world is (in my "many-observers" interpretation) a possible result of a measurement--an assignment of eigenvalues to a collection of mutually commuting Hermitian operators. You seem to be saying that I can't use the wave function to compute a measure unless I collapse the wave function afterward. That doesn't make a bit of sense.
 
