Question regarding the Many-Worlds interpretation

  • #351
S.Daedalus said:
All I've heard you say in that direction is something about 'hypothesis testing'. And yes: you can form, test, and validate the hypothesis that the relative frequencies are Born-distributed. But if you do so, it's wholly independent of the MWI: it neither implies nor contradicts this hypothesis.
Right, we cannot distinguish between interpretations experimentally; all interpretations give the same observations. That's why they are called interpretations.

But 'where are the branches' has a clear-cut answer in Copenhagen: the collapse gets rid of them.
"But this is an issue, Copenhagen has no way to generate multiple branches!"
(Don't take this seriously; that's how some of the arguments here look to me.)


Okay, seriously, I don't think further discussion with me here will help anyone.
 
  • #352
mfb said:
Right, we cannot distinguish between interpretations experimentally; all interpretations give the same observations. That's why they are called interpretations.
But that's the point: the MWI you propose does not account for our observations.
 
  • #353
mfb said:
I just don't think new posts would add anything new, or make anything better. We need someone who can explain that better than me.

"is not required" comes from the attempts to apply Copenhagen to MWI. It is like asking "where are the additional branches in Copenhagen?
mfb, the argument against "is not required" is not Copenhagen but experiment. You constantly ignore the fact that there are experimentally observed statistical frequencies which require an interpretation, and the fact that we can formally calculate something, tr(Pρ), which approximates these observed statistical frequencies.

The simple questions to MWI are
Why does this work in so many cases?
What replaces the Born rule probabilities and explains the statistical frequencies of 90% - 10% in my original question?

The answer "there are no probabilities" may eliminate the concept of probability from an interpretation but it does not eliminate the entity tr(Pρ) from the formalism. So if you want to interpret the formalism and its application to the real world you should be able to explain why tr(Pρ) does work FAPP even so it is not required.

In order to make this claim against your position work I do not rely on Copenhagen but on "shut up and calculate"; I am asking why this works, so please explain.
 
  • #354
S.Daedalus said:
But 'where are the branches' has a clear-cut answer in Copenhagen: the collapse gets rid of them.

I asked this question earlier in this thread, and got no answer: How does getting rid of the other branches change anything, such as the predictiveness of the Born rule, on THIS branch?
 
  • #355
stevendaryl said:
I asked this question earlier in this thread, and got no answer: How does getting rid of the other branches change anything, such as the predictiveness of the Born rule, on THIS branch?
Because you end up with a state in some subspace, over which Gleason's theorem provides a measure, which gives you the Born rule. Also, because it introduces alternatives in the first place: this thing happens rather than that one, making an appeal to probability coherent.
 
  • #356
S.Daedalus said:
Because you end up with a state in some subspace, over which Gleason's theorem provides a measure, which gives you the Born rule. Also, because it introduces alternatives in the first place: this thing happens rather than that one, making an appeal to probability coherent.

I'm saying that the nonexistence of other "branches" is not something that is observable in this branch (well, not in practice--to observe the effects of other branches would require detecting interference among branches, and if the branches involve macroscopically different states, then this is impossible). So whatever rule of thumb you are using to extract predictive content from QM doesn't actually require collapse. You can do the same procedure whether or not collapse happens, and you'll get the same results.
 
  • #357
stevendaryl said:
I'm saying that the nonexistence of other "branches" is not something that is observable in this branch (well, not in practice--to observe the effects of other branches would require detecting interference among branches, and if the branches involve macroscopically different states, then this is impossible). So whatever rule of thumb you are using to extract predictive content from QM doesn't actually require collapse. You can do the same procedure whether or not collapse happens, and you'll get the same results.
As I said, the applicability of Gleason's theorem to extract the Born probabilities depends on having a state that is really in one of the subspaces, rather than a superposition. This is where the collapse comes in; without it, Gleason's theorem simply doesn't talk about the probabilities. There are no observable differences, no, but there are important conceptual ones that mean you must reason differently if assuming different ontologies. Assuming a collapse, probabilities follow; without it, I don't see how.
 
  • #358
S.Daedalus said:
As I said, the applicability of Gleason's theorem to extract the Born probabilities depends on having a state that is really in one of the subspaces, rather than a superposition. This is where the collapse comes in; without it, Gleason's theorem simply doesn't talk about the probabilities. There are no observable differences, no, but there are important conceptual ones that mean you must reason differently if assuming different ontologies. Assuming a collapse, probabilities follow; without it, I don't see how.

There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?

The statement of Gleason's theorem does not mention the collapse hypothesis. It's about a measure on subspaces of a Hilbert space.
 
  • #359
stevendaryl said:
There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?

This business about collapse destroying all the branches but one reminds me of a philosophical argument about Star Trek's teleporter.

Suppose that teleporters are invented some day, and the way they work is this:
  1. At the transmitting end, a laser, or X-ray, or some kind of beam scans every atom in your body and records its state.
  2. As a side-effect, it blasts your body into its component atoms.
  3. At the receiving end, a matter assembler takes the information and builds a new body with the same atomic states. (I'm disregarding quantum mechanics here.)

For all intents and purposes, the traveler leaves the transmitter, and is transported at the speed of light to the receiver. But now let's change things so that step 2 doesn't happen. The original "you" is NOT destroyed in the process. Would you still consider this a way of traveling at the speed of light? From the point of view of the original "you", what happens is that you enter a booth, are scanned, and then walk out of the booth in the same location you started in. Except you are poorer by the cost of the teleportation fees. You'd feel ripped off. But for some reason, you wouldn't be comforted by the offer to have your body torn into its component atoms, reducing the situation to the previous case.
 
  • #360
stevendaryl said:
There is something fishy about this argument. There is no observable difference between situation A and situation B, but the fact that there is a mathematical technique that works for situation A, but not for situation B is an argument that A must be the case?
No. There's a mathematical argument that explains why things look the way they do in situation A, but not in situation B; so from the point of view of explaining why things look a certain way, A gives the better explanation.

The statement of Gleason's theorem does not mention the collapse hypothesis. It's about a measure on subspaces of a Hilbert space.
No, but it mentions subspaces of Hilbert space. And the general state won't be in any of the relevant subspaces. That's why you need the collapse.
 
  • #361
S.Daedalus said:
No. There's a mathematical argument that explains why things look the way they do in situation A, but not in situation B; so from the point of view of explaining why things look a certain way, A gives the better explanation.


No, but it mentions subspaces of Hilbert space. And the general state won't be in any of the relevant subspaces. That's why you need the collapse.

You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
 
  • #362
stevendaryl said:
You're drawing conclusions that aren't actually in the theorem, and don't follow from the theorem, as far as I can see.
Yes, I do, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.
 
  • #363
S.Daedalus said:
Yes, I do, and I never claimed to be doing anything else. I am talking about how and where the theorem applies and why; to expect that the theorem itself should supply this kind of information would be a bit much, no? Rather, it's the assumptions made by the interpretation that determine whether the theorem applies.

But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed. We have a probability distribution on observations; we don't need observations to be mutually exclusive. If I'm an observer making observation O_1, I don't see how it's relevant to me that there might be another identical observer making observation O_2.
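To make postulate 3 concrete, here is a toy sketch for a single spin observable (illustrative numbers only; the Born weights are put in by hand, exactly as the postulate says):

import numpy as np

# The commuting set is just {S_z}; an "observation" assigns it one eigenvalue.
alpha_sq, beta_sq = 0.9, 0.1          # Born weights, postulated by fiat
rng = np.random.default_rng(0)
observations = rng.choice(["+1/2", "-1/2"], size=10_000, p=[alpha_sq, beta_sq])

# Relative frequencies across the ensemble of observations track the measure.
print((observations == "+1/2").mean())   # ~0.9

No step in this resembles a collapse; the ensemble and its measure are all there is.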
 
  • #364
stevendaryl said:
But the "collapse" hypothesis seems irrelevant to the conclusion.

Here's a no-collapse, no-preferred basis variant of Many Worlds that I'll call "Many Observers Interpretation":

  1. Define an "observer" to be any mutually commuting set of Hermitian operators.
  2. Define an "observation" to be an assignment of an eigenvalue to every operator associated with an observer.
  3. Postulate an ensemble of observations, with the measure given by the Born rule.

I don't see how collapse is needed.
Well, you're just postulating the Born rule by fiat, so it's not.
 
  • #365
S.Daedalus said:
Well, you're just postulating the Born rule by fiat, so it's not.

My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
 
  • #366
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.
It is. But not to deriving the Born rule using Gleason's theorem.
 
  • #367
stevendaryl said:
My point is that the extra assumption of "collapse" is completely irrelevant to the Born rule.

It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure), call that one "actual", and then remove all the others (or call them "counterfactuals"). But I don't see how this additional step does anything for the Born rule. The measure existed before the collapse (it has to, since the measure is used to select one outcome for the collapse).
 
  • #368
S.Daedalus said:
It is. But not to deriving the Born rule using Gleason's theorem.

I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
 
  • #369
stevendaryl said:
It seems to me that Gleason's theorem just as much implies that the Born rule is the only sensible measure on my "Many Observers" interpretation. Collapse basically amounts to saying that you pick one of the observations (according to the measure), call that one "actual", and then remove all the others (or call them "counterfactuals").
No, collapse means taking (for instance) a pure, superposed state, and reducing it to a proper mixture. Since the latter is a state that is in one of the eigenspaces of the observable we're measuring, and Gleason's theorem provides a measure on subspaces, we can thereafter associate a probability of being in one of the eigenspaces with the state, which we could not before, as we had a superposed state that was not in any of the eigenspaces. If you keep this superposed state, then Gleason gives you just as much a measure on the subspaces; it's just that the state is not in any of them.
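Spelled out for a two-outcome measurement (with arbitrary amplitudes \alpha, \beta):

|\psi\rangle = \alpha|u\rangle + \beta|d\rangle \;\rightarrow\; \rho = |\alpha|^2\,|u\rangle\langle u| + |\beta|^2\,|d\rangle\langle d|

The proper mixture \rho on the right is compatible with the system really being in one of the eigenspaces, and Gleason's measure \mu(P) = \mathrm{tr}(P\rho) then assigns \mu(|u\rangle\langle u|) = |\alpha|^2 and \mu(|d\rangle\langle d|) = |\beta|^2; the superposed |\psi\rangle on the left lies in neither eigenspace.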
 
  • #370
stevendaryl said:
I just don't understand, when the statement of the theorem and its proof make no reference to a "collapse", how you can say that a "collapse" is needed for the theorem to be applicable. That doesn't make sense to me.
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.
 
  • #371
S.Daedalus said:
The theorem provides a measure on subspaces; it's just that, in general, the state is not in any of the subspaces that interest us, i.e. those in which an observable has a certain value. The collapse is needed to get it in there.

Why do we need the wave function to be in one of the subspaces?
 
  • #372
stevendaryl said:
Why do we need the wave function to be in one of the subspaces?
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value o_i measuring the observable \mathcal{O}, then making the same measurement again, we want to observe o_i again; but this will only happen if the state is in the subspace spanned by the states |o_i\rangle such that \mathcal{O}|o_i\rangle = o_i|o_i\rangle.
 
  • #373
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation

I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
 
  • #374
S.Daedalus said:
For one, because then we can use Gleason's theorem to furnish a probability interpretation. :-p But also, because we want experiments to be repeatable, i.e. if we observed the value o_i measuring the observable \mathcal{O}, then making the same measurement again, we want to observe o_i again; but this will only happen if the state is in the subspace spanned by the states |o_i\rangle such that \mathcal{O}|o_i\rangle = o_i|o_i\rangle.

You don't need collapse to get the result that repeated observations give the same value for an observable.
 
  • #375
stevendaryl said:
You don't need collapse to get the result that repeated observations give the same value for an observable.

Maybe I should expand on this point.

Let's let |u\rangle and |d\rangle be the "up" and "down" states of an electron. Let |UUDDDU...\rangle be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. (so |\rangle is the blank initial memory). We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle
|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle

So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)

Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
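A toy bookkeeping of this evolution (made-up amplitudes; Python, with the measurement interaction just copying the spin into the memory string):

import numpy as np

# Combined state as a superposition of (detector memory, electron spin) branches.
alpha, beta = np.sqrt(0.9), np.sqrt(0.1)
state = {("", "u"): alpha, ("", "d"): beta}   # blank memory, electron superposed

def measure(state):
    """Unitary measurement step: the memory records the spin it sees."""
    return {(memory + spin.upper(), spin): amp
            for (memory, spin), amp in state.items()}

for _ in range(2):
    state = measure(state)

print(state)   # roughly {('UU', 'u'): 0.949, ('DD', 'd'): 0.316}
# Inconsistent records like 'UD' never appear, with no collapse anywhere.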
 
  • #376
stevendaryl said:
I just don't think that's correct. Gleason's theorem says that if we are going to use the wavefunction to assign probabilities to subspaces (with certain assumptions), then pretty much the only sensible choice is the Born rule. It doesn't say anything about the collapse of the wave function into that subspace.
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them. It's like, you have a distribution of marbles in a hat, but in order for that to apply, you must draw a marble from that hat; if you don't, the distribution just doesn't, and can't, tell you anything. The superposed state simply does not correspond to a marble in the hat.

I've laid out the argument in more detail in these two posts; I think this is as clear as I can make it.

stevendaryl said:
Maybe I should expand on this point.

Let's let |u\rangle and |d\rangle be the "up" and "down" states of an electron. Let |UUDDDU...\rangle be the state of the observer/detector when it has measured spin "up" for the first two times that the electron's spin was measured, and spin "down" for the next three times, etc. (so |\rangle is the blank initial memory). We assume that the detector's interaction with the electron causes its state to become correlated with that of the electron. That is:

|\rangle \otimes |u\rangle \rightarrow |U\rangle \otimes |u\rangle
|\rangle \otimes |d\rangle \rightarrow |D\rangle \otimes |d\rangle

So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)

Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)

You don't need collapse to guarantee that repeated measurements of the same observable give consistent results.
Well, this only works if you assume that your memory changes upon each new measurement; that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D, but also of believing you had observed D before. This is in fact a version of Everett that Bell considered (and rejected for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while being in a superposition.
 
  • #377
stevendaryl said:
So if you start off with the electron in a superposition of "up" and "down", then we have:

|\rangle \otimes (\alpha |u\rangle + \beta |d\rangle)
\rightarrow \alpha (|U\rangle \otimes |u\rangle) + \beta(|D\rangle \otimes |d\rangle)
This is what we observe, but for the MWI to work a proof is required. Currently we rely on decoherence to provide the proof that this is approximately true, i.e. that states like |U\rangle \otimes |d\rangle are "dynamically suppressed".

stevendaryl said:
Then the observer/detector measures the spin again, and it evolves further to
\rightarrow \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle)
Again this is what we observe, and for which a proof is required.

The 1st step means that a preferred basis is singled out, i.e. that off-diagonal terms are suppressed; the 2nd step means that the preferred basis (branching) is stable, i.e. that off-diagonal terms stay suppressed.
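Schematically, in density matrix language (with |E_U\rangle, |E_D\rangle the environment states that the two records become correlated with), tracing out the environment leaves

\rho_{\mathrm{red}} = |\alpha|^2\,|U\rangle\langle U| + |\beta|^2\,|D\rangle\langle D| + \alpha\beta^*\,\langle E_D|E_U\rangle\,|U\rangle\langle D| + \mathrm{h.c.}

The 1st step is the claim that \langle E_D|E_U\rangle \rightarrow 0 rapidly; the 2nd step is the claim that it stays there.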

But besides the fact that a sound proof for realistic systems seems to be out of reach, it is unclear whether Gleason's theorem can tell us anything in the MWI context (I think this is what mfb wanted to stress). The theorem says that the only valid probability measure is the Born measure. But no theorem tells us why we should interpret a measure as a probability in the first place! The question is "probability for what?"
For the state being in a subspace? No, the state is still in a superposition.
For observing "UU"? Why should the prefactor of a specific subspace be a probability?

I know that many do not like why-questions in physics, but this why-question is key for the whole MWI debate!

There is one fact regarding the MWI which is really very disappointing: the whole story starts with a clear and minimalistic setup. But the ideas to prove (or to motivate) the Born rule have become awfully complicated over the last decades. That means that MWI misses the point!
 
  • #378
mfb said:
It [the MWI] allows to formulate theories that predict amplitudes, and gives a method to do hypothesis testing based on those predictions.
I still have some questions more tangible than this thread's apparent dead end of "But what about the probabilities?". One is: how do I measure these amplitudes? The result of a measurement is an eigenvalue. The result of many measurements is a string of eigenvalues. How do I deduce amplitudes from this?
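The only procedure I can see is the statistical one, and it recovers magnitudes at best (a sketch with simulated data; phases would require interference experiments, not counting):

import numpy as np

# Simulated record of 1000 eigenvalue readings (made-up weights 0.9/0.1).
rng = np.random.default_rng(1)
record = rng.choice(["U", "D"], size=1000, p=[0.9, 0.1])

freq = (record == "U").mean()     # relative frequency of the "U" eigenvalue
print(freq, np.sqrt(freq))        # estimates |alpha|^2 and |alpha|; the phase
                                  # of alpha is invisible in the string

But this procedure is precisely the frequency-counting whose justification is in dispute here.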
 
  • #379
stevendaryl said:
The terminology of the universe "splitting" is not really accurate. A better way to think of it, in my opinion, is that the set of possibilities is always the same, and the "weight" associated with each possibility just goes up or down with time.
The naive approach to branches is that every state in the (improper) mixture after decoherence defines a branch. So before the measurement we have only one branch and afterwards we have many.

Do you suggest that we should take every one-dimensional subspace of the full Hilbert space as a branch?
 
  • #380
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement; that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D, but also of believing you had observed D before.
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.

This is very similar to the starting point of Everett: if you introduce a second observer who observes the composite system, the QM calculations for the different observers contradict each other if you make the collapse assumption. So Copenhagen has to limit the applicability of QM in order to be consistent.
 
  • #381
S.Daedalus said:
You continue to miss my point. I'm not saying that Gleason says anything about collapse; but it just gives a measure on subspaces, and for that measure to apply, the state must be in one of them.

You keep saying that, but that just doesn't follow.
 
  • #382
S.Daedalus said:
Well, this only works if you assume that your memory changes upon each new measurement,

If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?

that is, after having observed U, the state evolves to your \alpha (|UU\rangle \otimes |u\rangle) + \beta(|DD\rangle \otimes |d\rangle), in which you have a chance of |\beta|^2 of now observing D

In my notation, D isn't an "observation"; it's a state of the observer in which the observer remembers having measured the spin and found it spin-down.

but also of believing you had observed D before. This is in fact a version of Everett that Bell considered (and rejected for its obvious empirical incoherence: if your memory were subject to such 'fakeouts', you could never rationally build up belief in a theory). There's also the problem that it's typically simply false to consider a state as having a definite property while being in a superposition.

I don't know what you're talking about. What definite state are you talking about?
 
  • #383
stevendaryl said:
You keep saying that, but that just doesn't follow.
I have a probability distribution over marbles in a hat as follows: P(Green)=0.5, P(Blue)=0.3, P(Red)=0.2. I draw a marble from a nearby urn. What's the probability that it's green?
 
  • #384
stevendaryl said:
If your memory DOESN'T change with each new measurement, then how could you possibly compute relative frequencies to compare with the theory?
I think he means that an observer with state |U\rangle could get into a state |DD\rangle by measuring the state of the electron again and thus would have to change his knowledge of the past. I have explained why this is wrong in my previous post.
 
  • #385
kith said:
You are talking about a different measurement which an additional external observer would have to perform. |β|² is the probability for an outcome in a measurement of the composite system electron + steven's observer/detector.
I took the setup to be essentially that of a quantum system and a detector with a memory; in order to determine the contents of the memory, you have to do a measurement on the complete system. In the first measurement, for instance, you observe the detector to find that it has observed the system in the spin-up state, and that it consequently has its register in the 'has observed spin up' state. Now the detector measures again, and once you observe it and its memory, you (may) find that it has now observed the system in the spin-down state, and consequently has the register in the 'has observed spin down' state; but furthermore, it will also now 'remember' having observed the system in the spin-down state before, since the register for the last observation will now also be in the 'has observed spin down' state. (Which of course will be completely in line with your own memory, provided memory works as a kind of self-measurement of some sort of 'register'; this is basically Bell's 'Bohmian mechanics without the trajectories' reading of Everett.)
 
  • #386
tom.stoer said:
This is what we observe, but for the MWI to work a proof is required. Currently we rely on decoherence to provide the proof that this is approximately true, i.e. that states like |U\rangle \otimes |d\rangle are "dynamically suppressed".


Again this is what we observe, and for which a proof is required.

You're absolutely correct there. That (in my opinion) is a real problem with any no-collapse interpretation of quantum mechanics. Measurement has to be an irreversible process, and there is a question of how irreversibility can arise from reversible dynamics. Of course, it's the same sort of problem as in classical physics, but there people have the argument that, although fundamental physics is reversible, the initial conditions are such that one direction is much more probable than the other. I don't know how that argument would work in MWI.

The 1st step means that a preferred basis is singled out, i.e. that off-diagonal terms are suppressed; the 2nd step means that the preferred basis (branching) is stable, i.e. that off-diagonal terms stay suppressed.

My step 1 assumes that there is such a thing as a measurement device. By definition, a measuring device for a property such as spin must become correlated with the spin in a definite way. So you're certainly right that there is a gap in the argument, which is the proof that there are such things as measurement devices. That's a difficult problem, it seems to me, because it necessarily, as I said, involves irreversibility, which involves huge numbers of particles. But I don't see that that is particularly a problem for MWI. You have to have measurement devices to make a "collapse" interpretation work, as well.

But besides the fact that a sound proof for realistic systems seems to be out of reach, it is unclear whether Gleason's theorem can tell us anything in the MWI context (I think this is what mfb wanted to stress). The theorem says that the only valid probability measure is the Born measure. But no theorem tells us why we should interpret a measure as a probability in the first place! The question is "probability for what?"
For the state being in a subspace? No, the state is still in a superposition.
For observing "UU"? Why should the prefactor of a specific subspace be a probability?

When you're talking about measures on "possible worlds", you're really not talking about probability in the strict sense, because probability is connected with the results of repeated measurements in a single "possible world". So it's not a probability, it's a measure on "possible worlds", where a possible world is given by what I was calling an "observation", which is an assignment of eigenvalues to a mutually commuting set of observables.


I know that many do not like why-questions in physics, but this why-question is key for the whole MWI debate!

There is one fact regarding the MWI which is really very disappointing: the whole story starts with a clear and minimalistic setup. But the ideas to prove (or to motivate) the Born rule have become awfully complicated over the last decades. That means that MWI misses the point!

To me, what motivates MWI is the fact that alternatives such as the collapse hypothesis propose the existence of an interaction (the collapse) which only affects macroscopic objects with persistent memories (like humans and devices) but not microscopic objects like electrons and atoms. That is very suspicious to me. Surely, the physics of macroscopic objects should follow from the physics of the microscopic objects that it's made out of?

So to me, intellectual coherence requires either that quantum mechanics (the smooth evolution of the wave function according to Schrödinger's equation) applies to all objects, no matter how large or small, or else there is some new type of interaction that should be observable in the small. There are "stochastic" interpretations of quantum mechanics that don't have a measurement-induced collapse, but in which particles' wave functions are always randomly collapsing. I don't very much like that, but I like it better than the usual "collapse" interpretation, which makes an unsatisfying distinction between macroscopic and microscopic objects.

I think of MWI as more of a research program than an interpretation--it's really seeing how far we can push a version of QM that does not have a "collapse". If you don't have collapse, then macroscopic superpositions are inevitable, and "Many Worlds" is just a way to picture macroscopic superpositions.
 
  • #387
S.Daedalus said:
I took the setup to be essentially that of a quantum system and a detector with a memory; in order to determine the contents of the memory, you have to do a measurement on the complete system.

If you have a detector with memory, you can let it run for, say, 100 measurements. Afterward, assuming the correctness of the detector and its memory, the combination of Detector + Electron will, I'm claiming, be in the state:

\alpha (|UUUU...U\rangle \otimes |u\rangle) + \beta (|DDD...D\rangle \otimes |d\rangle)

So, in a "collapse" interpretation, we can push the moment of collapse back to the time that a human being examines the content of the detector's memory, and he will find 100 "up" measurements with probability |\alpha|^2 and 100 "down" measurements with probability |\beta|^2. So there is no need to assume that the measuring device "collapsed" the wave function--you can put off the collapse till later.

But then, the same sort of putting off of collapse can happen with the experimenter. You can assume that the experimenter (+ detector + electron) is in a superposition of states until he reports his findings to his advisor. Then the advisor's observing his student's state causes the student's state to collapse. Or you can put off the moment of collapse further...

So if you stick with a "collapse" interpretation, then my claim is that there is no feasible experiment that can tell you when the collapse took place: at the detector, at the experimenter, at his advisor, etc. I consider MWI as a kind of limit in which you put off the moment of collapse indefinitely far into the future.
 
  • #388
stevendaryl said:
So if you stick with a "collapse" interpretation, then my claim is that there is no feasible experiment that can tell you when the collapse took place: at the detector, at the experimenter, at his advisor, etc.
This I agree with totally. (And such 'Wigner's friend'-type tales are the reason I consider all collapse theories to be unworkable.)
 
  • #389
stevendaryl said:
To me, what motivates MWI is the fact that alternatives such as the collapse hypothesis propose the existence of an interaction (the collapse) which only affects macroscopic objects with persistent memories (like humans and devices) but not microscopic objects like electrons and atoms. That is very suspicious to me. Surely, the physics of macroscopic objects should follow from the physics of the microscopic objects that it's made out of?

So to me, intellectual coherence requires either that quantum mechanics (the smooth evolution of the wave function according to Schrödinger's equation) applies to all objects, no matter how large or small, or else there is some new type of interaction that should be observable in the small. There are "stochastic" interpretations of quantum mechanics that don't have a measurement-induced collapse, but in which particles' wave functions are always randomly collapsing. I don't very much like that, but I like it better than the usual "collapse" interpretation, which makes an unsatisfying distinction between macroscopic and microscopic objects.

I think of MWI as more of a research program than an interpretation--it's really seeing how far we can push a version of QM that does not have a "collapse". If you don't have collapse, then macroscopic superpositions are inevitable, and "Many Worlds" is just a way to picture macroscopic superpositions.
Excellent, I fully agree, especially with your last remark regarding whether MWI is an interpretation or a research program. It seems that what has been an interpretation was - at least partially - turned into a research program: "emergence of dynamically isolated and stable branches due to decoherence", "derivation of Born's rule", ... So there is less room for interpretations and more need for theorems. In addition, there are means to motivate statements or even derive them based on the formalism, which is not possible in collapse interpretations. So MWI is really a long and stony path for the brave ...

And it seems that we agree on the key issues, namely how the time asymmetry observed "within a branch" emerges from a time-symmetric formalism, and how observed statistical frequencies can be explained via a probability (or a measure, or whatever) emerging from a fully causal and deterministic formalism.
 
  • #390
S.Daedalus said:
I have a probability distribution over marbles in a hat as follows: P(Green)=0.5, P(Blue)=0.3, P(Red)=0.2. I draw a marble from a nearby urn. What's the probability that it's green?

I don't know, and I don't see the relevance. What I'm saying is that the wavefunction is used to calculate a measure on "possible worlds", where a possible world is (in my "many-observers" interpretation) a possible result of a measurement--an assignment of eigenvalues to a collection of mutually commuting Hermitian operators. You seem to be saying that I can't use the wave function to compute a measure unless I collapse the wave function afterward. That doesn't make a bit of sense.
 
  • #391
tom.stoer said:
Excellent, I fully agree, especially with your last remark regarding whether MWI is an interpretation or a research program. It seems that what has been an interpretation was - at least partially - turned into a research program: "emergence of dynamically isolated and stable branches due to decoherence", "derivation of Born's rule", ... So there is less room for interpretations and more need for theorems.

I think if you look at Everett's original paper, his actual contribution was showing that mixed states can arise naturally, without assuming "collapse", if you take into account the entanglement between one system and a second system that "measures" the first. So you don't actually need collapse in order to understand how mixed states can arise in quantum mechanics, and you don't need collapse in order to understand why, after a measurement of an electron's spin direction, you no longer see any interference between alternatives. Both are effects of entanglement.

Historically, it was Dewitt who tried to elevate Everett's work to a new interpretation of quantum mechanics. I don't actually think it's a new interpretation, I think it's a research program.

And it seems that we agree on the key issues, namely how the time asymmetry observed "within a branch" emerges from a time-symmetric formalism, and how observed statistical frequencies can be explained via a probability (or a measure, or whatever) emerging from a fully causal and deterministic formalism.

Yes. Some of it, I fear, might be just too hard to actually solve. Once macroscopic objects are involved, you no longer have two- and three-particle wave functions (which are difficult enough), but wave functions involving 10^{23} particles. We can't hope to solve equations for such a system. Hopefully, there are ways to get insights about such a system that don't require solving it.
 
  • #392
stevendaryl said:
I don't know, and I don't see the relevance. What I'm saying is that the wavefunction is used to calculate a measure on "possible worlds", where a possible world is (in my "many-observers" interpretation) a possible result of a measurement--an assignment of eigenvalues to a collection of mutually commuting Hermitian operators. You seem to be saying that I can't use the wave function to compute a measure unless I collapse the wave function afterward. That doesn't make a bit of sense.
The relevance is simply that you have a probability distribution over the eigenspaces (thx Gleason) of some observable you are measuring, and a state not in any of those eigenspaces---just as you have a probability distribution over marbles in a hat, and a marble not from that hat.

Your 'many observers' theory presumes a working measurement framework, in order to leave you with an assignment of eigenvalues to a collection of observables, because typically, the state won't be an eigenstate of your observables.
 
  • #393
S.Daedalus said:
This I agree with totally. (And such 'Wigner's friend'-type tales are the reason I consider all collapse theories to be unworkable.)

Well, it seems to me that you're on the horns of a dilemma, then. Either there is no collapse, which to me means MWI or some variant, or there is some kind of new physics (maybe stochastic collapse at the microscopic level, or maybe some kind of nonlocal interaction as in the Bohm theory).

It's possible that new physics will solve the conundrum, but if it's necessary, that's kind of weird, it seems to me. We don't actually have any experimental evidence that quantum mechanics is ever wrong. It seems weird that we have to go beyond quantum mechanics in order to understand quantum mechanics.
 
  • #394
stevendaryl said:
Well, it seems to me that you're on the horns of a dilemma, then.
Certainly. My stance regarding interpretation is roughly the same as my stance regarding political parties: I'm not really close to any, but differently far away from each. I do hope that some variant of 'unitary quantum mechanics only' can be made to work, as from a purely aesthetic point of view, they're the most appealing to me (which is why I am particularly critical regarding their problems). I don't think I like the MWI much, because the talk of worlds just seems like a kind of classical papering-over of a fundamentally quantum reality (and there's the Albert/Barrett problem of what it takes to be a world, which makes me think that the notion of 'worlds' is simply not a well-defined one); I'm more partial towards things like the 'relative facts' proposal of Saunders, as I generally think that the quantum formalism is best interpreted in terms of correlations, rather than the actual values of observations. I was partial to modal proposals for a while, along the Dieks/Kochen/Healey line; but I'm no longer certain these things can be made to work in a really appealing way.

So I guess the bottom line for me is, it's difficult!
 
  • #395
Let's compare two experiments:

1)
A hat with 9 red balls and 1 green ball;
A (repeated) experiment where a single ball is drawn and placed back;
Result strings like s = "RRGRRRG...";
Statistical frequencies and calculated probabilities 0.9 and 0.1;

2)
A hat with 1 red and 1 green ball;
The red and the green balls have labels "0.9" and "0.1", respectively;
A (repeated) experiment ...
Result strings like s = "RRRGRRGRR...";
A witness confirming that NEVER a single ball is drawn but ALWAYS a PAIR like ["red with label 0.9" and "green with label 0.1"];
Statistical frequencies 0.9 and 0.1 extracted from the result strings;

My question is why the labels "0.9" and "0.1" affect the statistical frequencies.
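To sharpen the contrast, only the first experiment can even be written down as a sampling process (a sketch; the final comment marks what cannot be filled in):

import numpy as np

# Experiment 1 has an obvious sampling mechanism, so it can be simulated:
rng = np.random.default_rng(2)
hat = ["R"] * 9 + ["G"]                # 9 red balls, 1 green ball
draws = rng.choice(hat, size=10_000)   # draw a single ball, place it back
print((draws == "R").mean())           # ~0.9, matching the ball count

# Experiment 2 cannot be simulated this way: BOTH balls are drawn every time,
# and nothing connects the labels "0.9"/"0.1" to any sampling step.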
 
  • #396
tom.stoer said:
Let's compare two experiments:

1)
A hat with 9 red balls and 1 green ball;
A (repeated) experiment where a single ball is drawn and placed back;
Result strings like s = "RRGRRRG...";
Statistical frequencies and calculated probabilities 0.9 and 0.1;

2)
A hat with 1 red and 1 green ball;
The red and the green balls have labels "0.9" and "0.1", respectively;
A (repeated) experiment ...
Result strings like s = "RRRGRRGRR...";
A witness confirming that NEVER a single ball is drawn but ALWAYS a PAIR like ["red with label 0.9" and "green with label 0.1"];
Statistical frequencies 0.9 and 0.1 extracted from the result strings;

My question is why the labels "0.9" and "0.1" affect the statistical frequencies.

I think that the distinction is not as big as you think. How does the fact that there are 9 red balls and only 1 green ball imply that 9 out of 10 draws will result in a red ball? It doesn't, actually. It's a plausible assumption, but it doesn't logically follow without additional assumptions about how the drawing process works.

If the red balls are all truly indistinguishable, then there is really no difference between there being 9 red balls and there being a "red-ball-counter" that has value 9. It's like the transition from many-particle quantum mechanics to quantum field theory, where instead of asking which state each particle is in, you ask what the occupancy number of each state is. They are equivalent.
 
  • #397
stevendaryl said:
I think that the distinction is not as big as you think. How does the fact that there are 9 red balls and only 1 green ball imply that 9 out of 10 draws will result in a red ball? It doesn't, actually. It's a plausible assumption, but it doesn't logically follow without additional assumptions about how the drawing process works.

If the red balls are all truly indistinguishable, then there is really no difference between there being 9 red balls and there being a "red-ball-counter" that has value 9. It's like the transition from many-particle quantum mechanics to quantum field theory, where instead of asking which state each particle is in, you ask what the occupancy number of each state is. They are equivalent.
The basic difference is that in case (1) there is a plausibility assumption we are used to in numerous contexts, whereas in case (2) there is none - not even in the MWI context - not in the sense most people will understand "plausible".

All that I am reading about "reasonable agents" etc. is horribly complicated and by no means "plausible".
 
  • #398
stevendaryl said:
I think that the distinction is not as big as you think. How does the fact that there are 9 red balls and only 1 green ball imply that 9 out of 10 draws will result in a red ball? It doesn't, actually. It's a plausible assumption, but it doesn't logically follow without additional assumptions about how the drawing process works.
Nevertheless, you wouldn't accept an even-odds bet on drawing the green ball, I presume. And presented with the results of the experiment, you'd point to the distribution of balls as an explanation for the observed relative frequencies. There's a natural hypothesis to be formulated in this setup, and one you will find confirmed. Nothing of that sort presents itself regarding probabilities in the MWI.

It's true that one must always be mindful of Humean skepticism: no amount of evidence will logically entail that the sun rises tomorrow. But that doesn't mean that all hypotheses are equal; that the sun will rise deserves far higher credibility than that it will explode, that it is actually the egg of a giant world-carrying turtle, or that we are living in a simulation and the cleaning lady's just about to pull the plug to get power for her vacuum cleaner. In this sense, probability in collapse interpretations simply has a far better standing than in the MWI.
 
  • #399
S.Daedalus said:
Nevertheless, you wouldn't accept an even-odds bet on drawing the green ball, I presume. And presented with the results of the experiment, you'd point to the distribution of balls as an explanation for the observed relative frequencies. There's a natural hypothesis to be formulated in this setup, and one you will find confirmed. Nothing of that sort presents itself regarding probabilities in the MWI.

The assumption that 9/10 of the balls being red implies that 9/10 of the time you will draw a red ball is only natural because (1) there is no other plausible alternative, and (2) in our experience, it seems to be borne out. I think the same two points apply in MWI. There is no plausible alternative to the Born rule, and besides, it is borne out by experience.
 
  • #400
stevendaryl said:
The assumption that 9/10 of the balls being red implies that 9/10 of the time you will draw a red ball is only natural because (1) there is no other plausible alternative, and (2) in our experience, it seems to be borne out. I think the same two points apply in MWI. There is no plausible alternative to the Born rule, and besides, it is borne out by experience.
But the assumption that you will draw a red ball 9/10 of the time has a plausible grounding in the situation: if nothing interferes, it's what you should rationally expect (and since it's irrational to expect an unknown interference, it's what you should expect, period). You don't expect this draw for the negative reason of lack of a plausible alternative, but for the positive reason of it being the natural conclusion to draw, given your knowledge of the situation.

There is no similarly plausible grounding of the Born rule in the MWI. Knowledge of the MWI gives you no reason to expect Born probabilities. To use tom.stoer's metaphor, it just gives you numbers painted on the balls, but no reason to expect these to correspond to anything at all. That they give you probabilities of drawing the balls would ordinarily be taken as evidence for there to be something else at work, as the MWI alone simply fails to account for it.
 