Question regarding the Many-Worlds interpretation

  • #251
stevendaryl said:
That was actually the chief mathematical result that Everett derived in his first paper on the Many Worlds Interpretation (which he didn't call that--that name was coined by DeWitt). He showed that density matrices arise naturally from pure wave functions in cases of entanglement.

Here's a sketch from memory:

Suppose that you have a system in state |\Psi \rangle that is made up of two subsystems. We can write |\Psi \rangle as a superposition of product states of the two subsystems:

|\Psi \rangle = \sum_{i, a} C_{i a} |\varphi_i \rangle | \chi_a \rangle

Now suppose that we have an operator O that depends only on one of the subsystems. In other words:

\langle \varphi_i' | \langle \chi_a' | O | \chi_a \rangle | \varphi_i \rangle = O_{a a'} \delta_{i i'}

In that case, the expectation value of O in state \Psi is given by:

\langle \Psi |O|\Psi \rangle = \sum_{i, a, a'} C^*_{i a'} C_{i a} O_{a a'}

This is the same result you would get using a density matrix \rho with components

\rho_{a a'} = \sum_i C^*_{i a'} C_{i a}

As long as you are only talking about measurements of one of the two subsystems, you can treat the system as if it were in a mixture, rather than a pure state.
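For anyone who wants to check this numerically, here's a quick sketch (the dimensions, random state, and operator are arbitrary choices of mine, not anything from Everett's paper):

```python
import numpy as np

# Numerical check of the identity above: for an operator acting on one
# subsystem only, <Psi|(I x O)|Psi> equals Tr(rho O), where rho is the
# reduced density matrix rho_{a a'} = sum_i C*_{i a'} C_{i a}.
rng = np.random.default_rng(42)
d1, d2 = 3, 4  # dimensions of the phi and chi subsystems

# Random coefficients C_{ia}, normalized so that <Psi|Psi> = 1
C = rng.normal(size=(d1, d2)) + 1j * rng.normal(size=(d1, d2))
C /= np.linalg.norm(C)
psi = C.reshape(-1)  # |Psi> in the product basis |phi_i>|chi_a>

# A Hermitian operator acting on the chi subsystem only
A = rng.normal(size=(d2, d2)) + 1j * rng.normal(size=(d2, d2))
O = A + A.conj().T
O_full = np.kron(np.eye(d1), O)  # identity on phi, O on chi

# Expectation value from the pure state ...
expect_pure = (psi.conj() @ O_full @ psi).real

# ... and from the reduced density matrix rho_{ab} = sum_i C_{ia} C*_{ib}
rho = np.einsum('ia,ib->ab', C, C.conj())
expect_mixed = np.trace(rho @ O).real

assert np.isclose(expect_pure, expect_mixed)
```

The two expectation values agree to machine precision, which is just the Tr(rho O) statement above.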
 
  • #252
kith said:
Decoherence?

stevendaryl said:
That was actually the chief mathematical result that Everett derived in his first paper on the Many Worlds Interpretation (which he didn't call that--that name was coined by DeWitt). He showed that density matrices arise naturally from pure wave functions in cases of entanglement.
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is, but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

(In fact, I've sometimes thought that the confusion behind probability in the MWI looks a lot like the confusion between proper and improper mixtures; there's an argument due to Albert and Barrett that's structurally very similar to d'Espagnat's argument regarding improper mixtures, producing a contradiction between considering each term of the superposition as a world in itself and the predictions of quantum mechanics taking into account the complete quantum state.)
 
  • #253
S.Daedalus said:
but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong. See the paper I posted previously about decoherence where this is examined in detail. You may not like the ignorance interpretation - but valid it is.

Thanks
Bill
 
  • #254
Seems that I am the only one for whom "the penny did not drop".

I still have the same problems - or even more.

In experiments we can make fundamental individual observations and we can derive statistical frequencies. In many interpretations of QM we can both derive matrix elements and interpret them as probabilities to be compared with the statistical frequencies.

Now I expected that MWI - being a different interpretation of the same underlying formalism - allows for the same calculations and experimental tests. But what I read is very confusing (for me):
1) MWI is talking about branches and relies on decoherence to identify them, but is not able to count them or to derive a corresponding measure
2) My simple question regarding the "probability being in a certain branch" which I can identify via a result string seems to become meaningless
3) I still have the feeling that my concerns regarding the "missing link" between the experimentally inaccessible top-down perspective of the full Hilbert space with all its branches and the accessible bottom-up approach restricted to the branch I am observing right now have not been understood
4) We have the above mentioned statistical frequencies, but I learn that MWI does not provide the corresponding probabilities - that there are no probabilities at all
5) It is often claimed that the Born rule can be derived, but what does it mean if there are no probabilities?

I guess the answers are there, written down in numerous posts, but I am not able to identify them.
 
  • #255
bhobba said:
Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong. See the paper I posted previously about decoherence where this is examined in detail. You may not like the ignorance interpretation - but valid it is.
Locally, that's true, but once Alice and Bob get together and compare their measurement results, or a measurement is carried out on both parts of the system, you get results that falsify the idea that the parts of the system are in some definite state, and we just don't know which---correlations that we can't account for with such a model in the first case, and interference results in the second. These are perfectly valid observations, so I don't see how it's true that the two states are 'observationally equivalent'.

tom.stoer said:
Seems that I am the only one for whom "the penny did not drop".
If by the penny dropping you mean understanding how (Born) probabilities arise in many worlds, then no, count me as one as confused as you are.
 
  • #256
bhobba said:
Sorry - but since it is observationally equivalent to a proper mixture a perfectly valid interpretation is to simply assume it is - nothing can prove you wrong.

You keep asserting this, but it's not true. To make this statement true, you have to employ the measurement postulate, which is what we are trying to motivate or prove. It enters via the construction of ensemble descriptions as density matrices, which is only a sensible construction with the measurement postulate in mind. If you don't have the measurement postulate, then the only complete way to describe an ensemble is a list of states along with the probability of finding each.

Cheers,

Jazz
 
  • #257
tom.stoer said:
Seems that I am the only one for whom "the penny did not drop".

I see it differently. I think you just don't get confused by shifting the argument between different levels all the time and drawing different conclusions. I have yet to see a way to make MWI work that doesn't rely on obfuscation of the real issues.

Cheers,

Jazz
 
  • #258
Jazzdude said:
I see it differently. I think you just don't get confused by shifting the argument between different levels all the time and drawing different conclusions. I have yet to see a way to make MWI work that doesn't rely on obfuscation of the real issues.

Cheers,

Jazz
So you say MWI can't answer these valid questions?
 
  • #259
S.Daedalus said:
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is, but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

This is a very good thought. If you want to get ensembles out, you have to put ensembles in. For example, you can assume that the environment is in an unknown (but single) state that you model by an ensemble, and then see how interaction with the environment splits up a well defined single observed state into a real ensemble. But then you have to face another problem: the contributing states cannot be recovered from a density matrix representation. Since we're talking about definite single (but unknown) states, we want to track their individual histories. That means you have to use a more complete representation of a quantum ensemble instead: a list of states with associated probabilities.
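As a side note, the impossibility of recovering the contributing states is easy to see in a toy example (entirely my own; it's the standard non-uniqueness of ensemble decompositions):

```python
import numpy as np

# Two completely different ensembles produce the same density matrix,
# so the individual states and their histories are lost in rho.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket_plus = (ket0 + ket1) / np.sqrt(2)
ket_minus = (ket0 - ket1) / np.sqrt(2)

# Ensemble A: |0> or |1>, each with probability 1/2
rho_A = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)
# Ensemble B: |+> or |->, each with probability 1/2
rho_B = 0.5 * np.outer(ket_plus, ket_plus) + 0.5 * np.outer(ket_minus, ket_minus)

# Both equal I/2: rho has forgotten which list of states it came from,
# which is why a list of (state, probability) pairs carries more information.
assert np.allclose(rho_A, rho_B)
```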

If you perform a calculation as sketched above for a single qubit with an unknown environment, you can express the qubit ensemble as a density function on the Bloch sphere that evolves in time as described by a generalized diffusion equation. Unfortunately the Born rule cannot be recovered from the dynamics, because linearity doesn't allow it.

Cheers,

Jazz
 
  • #260
tom.stoer said:
So you say MWI can't answer these valid questions?

Yes, that's what I'm convinced of.
 
  • #261
Nevertheless I would like to give mfb et al. a chance!
 
  • #262
S.Daedalus said:
Locally, that's true, but once Alice and Bob get together and compare their measurement results, or a measurement is carried out on both parts of the system, you get results that falsify the idea that the parts of the system are in some definite state, and we just don't know which---correlations that we can't account for with such a model in the first case, and interference results in the second. These are perfectly valid observations, so I don't see how it's true that the two states are 'observationally equivalent'.

That was dealt with in the thread you mentioned. The answer is in the reference I gave there - the book on Consistent Quantum Theory by Griffiths - it has to do with the necessity of requiring a consistent framework.

Thanks
Bill
 
  • #263
bhobba said:
That assumption, via Gleason, means you are abandoning basis independence. Why do you want to choose one basis over another? These are man-made things introduced for calculational convenience - why do you think nature should depend on such a choice?

Measurement does single out a basis, that's the whole point of it and the heart of the preferred basis problem. The Hilbert space structure is motivated by unitary evolution, not measurement. And using the construction of the space for the theory that describes an observed phenomenon as the reason for this phenomenon is highly circular!

Cheers,

Jazz
 
  • #264
Jazzdude said:
You keep asserting this, but it's not true.

It is true - simple as that. There is no way to observationally tell the difference. If you know of one do tell.

Thanks
Bill
 
  • #265
bhobba said:
It is true - simple as that. There is no way to observationally tell the difference. If you know of one do tell.

Thanks
Bill

You're misunderstanding the point. If we perform a measurement then the measurement postulate comes in and you cannot distinguish the two. But that's not what we're arguing about. We're talking about a situation where the measurement postulate is NOT there and we try to derive or motivate it, by means of an interpretation or theory. In this case you cannot assume the equivalence, because it's precisely what we are intending to show!

Cheers,

Jazz
 
  • #266
Jazzdude said:
Measurement does single out a basis, that's the whole point of it and the heart of the preferred basis problem.

That is at odds with standard textbooks like Decoherence and the Quantum-to-Classical Transition by Schlosshauer. The factoring problem naysayers must provide a proof it is purely an artifact of decomposition - so far they haven't.

The truth of the matter is detailed in the paper I linked to before, and gave in the thread previously mentioned:
https://www.physicsforums.com/showthread.php?t=707290

As I posted in that thread:
'Basically he is holding to the decoherence ensemble interpretation, as do I. Rather than me go through its pros and cons, here is a good paper on it:
http://philsci-archive.pitt.edu/5439...iv_version.pdf
'Postulating that although the system-apparatus is in an improper mixed state, we can interpret it as a proper mixed state superficially solves the problem of outcomes, but does not explain why this happens, how or when. This kind of interpretation is sometimes called the ensemble, or ignorance interpretation. Although the state is supposed to describe an individual quantum system, one claims that since we can only infer probabilities from multiple measurements, the reduced density operator \rho_{SA} is supposed to describe an ensemble of quantum systems, of which each member is in a definite state.'

The bottom line is, the naysayers are correct - without additional assumptions decoherence does not solve the measurement problem. That's true. But there is another part to it - with additional assumptions it does. That's the key point: a number of interpretations, like decoherent histories, MWI, and the ensemble/ignorance interpretation, do have additional assumptions, and that's what makes them viable.

The guy that wrote the paper in the thread above was wrong - he needed additional assumptions that he didn't make explicit - however with those additional assumptions it's valid.

Thanks
Bill
 
  • #267
bhobba said:
That is at odds with standard textbooks like Decoherence and the Quantum-to-Classical Transition by Schlosshauer. The factoring problem naysayers must provide a proof it is purely an artifact of decomposition - so far they haven't.

That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

And the factoring problem is already one step too far. In most cases there are no sensible factors at all. If you are able to specify a tensor factor space for an observed electron, let me know. Practically all of the systems interesting for the description of observation fail to have the structure of a tensor factor of the universal Hilbert space.

Cheers,

Jazz
 
  • #268
Jazzdude said:
You're misunderstanding the point. If we perform a measurement then the measurement postulate comes in and you cannot distinguish the two. But that's not what we're arguing about. We're talking about a situation where the measurement postulate is NOT there and we try to derive or motivate it, by means of an interpretation or theory. In this case you cannot assume the equivalence, because it's precisely what we are intending to show!

You are missing my point, and the point of the decoherence adherents. There is no circularity in explicitly stating that it gives the appearance of wave function collapse, and because of that actual collapse is a non-issue. By this is meant that, since you can't tell the difference, it's not something to worry about.

The Wikipedia article on it explains it quite well:
http://en.wikipedia.org/wiki/Quantum_mind%E2%80%93body_problem
'Decoherence does not generate literal wave function collapse. Rather, it only provides an explanation for the appearance of wavefunction collapse, as the quantum nature of the system "leaks" into the environment. That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. A total superposition of the universal wavefunction still exists (and remains coherent at the global level), but its fundamentality remains an interpretational issue. "Post-Everett" decoherence also answers the measurement problem, holding that literal wavefunction collapse simply doesn't exist. Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. Moreover, our observation tells us that this mixture looks like a proper quantum ensemble in a measurement situation, as we observe that measurements lead to the "realization" of precisely one state in the "ensemble".'

To put it another way: except for people like the author of that paper, who claimed the issue had been solved (and he was wrong), decoherence adherents freely admit it only gives the appearance of collapse. For them that's good enough - but for others like you it isn't.

Thanks
Bill
 
  • #269
bhobba said:
You are missing my point, and the point of the decoherence adherents. There is no circularity in explicitly stating that it gives the appearance of wave function collapse, and because of that actual collapse is a non-issue. By this is meant that, since you can't tell the difference, it's not something to worry about.

I've not been missing your point. I'm intimately familiar with the arguments used in decoherence, and I still disagree. The way you lay it out, the argument depends heavily on the construction of the density matrix to encode quantum ensembles. And this construction is only motivated if you assume that upon observation quantum probabilities mix with classical (ensemble) probabilities. It doesn't matter then how you construct improper ensembles or whether they're sensible constructs, because the real ensembles are already problematic.

If you can motivate the construction of a density matrix encoded ensemble without referring to outcome probabilities and/or the measurement postulate (which includes references to observations of the same), then please share your wisdom.

The Wikipedia article on it explains it quite well: ...

It's no surprise that you find something like that on Wikipedia. There are still many decoherence misinterpretations found in literature. And yes, there are supporters of this view, but that doesn't make it any more correct.

Cheers,

Jazz
 
  • #270
tom.stoer said:
Nevertheless I would like to give mfb et al. a chance!
I think it is all written in the thread now. It would be pointless to repeat it.
 
  • #271
Jazzdude said:
That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

I never claimed that - in fact I don't even know what you mean by that. To be clear, my claim is that decoherence solves the preferred basis problem, as stated on page 113 of the Schlosshauer reference I gave before. He gives three issues a solution to the measurement problem must address:

1. The preferred basis problem
2. The problem of non observability of interference
3. The problem of why we have outcomes at all.

The statement he makes is my position:
'it is reasonable to conclude decoherence is capable of solving the first two, whereas the third problem is linked to matters of interpretation'

And that is exactly it - the first two have had considerable work done that indicates decoherence will likely solve them - in fact for a number of models given in the book it does. To solve the third one you need further assumptions and the detail of those assumptions is peculiar to each interpretation.

Thanks
Bill
 
  • #272
Jazzdude said:
I've not been missing your point. I'm intimately familiar with the arguments used in decoherence, and I still disagree. The way you lay it out, the argument depends heavily on the construction of the density matrix to encode quantum ensembles. And this construction is only motivated if you assume that upon observation quantum probabilities mix with classical (ensemble) probabilities. It doesn't matter then how you construct improper ensembles or whether they're sensible constructs, because the real ensembles are already problematic.

This is going around in circles.

One more time - then that's the end of it for this thread for me. However based on past experience it will be rehashed.

Decoherence adherents, unless they are being disingenuous like the paper cited before, do not claim it solves the measurement problem. What they claim is that it's a non-issue, because it's observationally the same as a proper mixture and gives the appearance of wavefunction collapse.

Thanks
Bill
 
  • #273
S.Daedalus said:
This is why I specified proper mixture: improper mixtures that arise via tracing out a part of the system look mathematically indistinguishable from 'true' mixtures that arise from uncertainty about what the actual state is,

The MWI interpretation basically amounts to the claim that all mixtures are "improper" in that sense.

but can't be given an ignorance interpretation (this error is also at the root of Art Hobson's recent 'resolution' of the measurement problem which we've discussed here).

I have not followed that thread, but as I understand it, MWI absolutely depends on there being no observational difference between "proper" and "improper" mixtures.
 
  • #274
bhobba said:
Decoherence adherents, unless they are being disingenuous like the paper cited before, do not claim it solves the measurement problem. What they claim is that it's a non-issue, because it's observationally the same as a proper mixture and gives the appearance of wavefunction collapse.
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture. Take the state
|\Phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle).
Locally, both Alice and Bob describe it by the mixture
\rho_A=\rho_B=\frac{1}{2}(|0\rangle\langle 0| + |1\rangle\langle 1|),
and all of their observations will be in line with this assignment. But if they now believe that therefore, their respective states are in an actual mixture of |0\rangle and |1\rangle, they must also believe that the global state corresponds to
\rho_{AB}=\rho_A\otimes\rho_B=\frac{1}{4}(|00\rangle\langle 00| + |01\rangle\langle 01| + |10\rangle\langle 10| + |11\rangle\langle 11|),
simply because that is the state the system would be in if it were the case that each local state were a proper mixture, one for instance generated by producing the states |0\rangle or |1\rangle equiprobably at random. But of course, this state is observationally very different from |\Phi^+\rangle: for instance, both Alice and Bob would expect their measurements to be entirely uncorrelated, but in fact, they will be perfectly correlated. This amounts to a falsification of the belief that their states can be described by a proper mixture, i.e. that they can be given an ignorance interpretation. The states can't be identified, even though the local observations are equivalent.

Alternatively, you can just measure |\Phi^+\rangle \langle\Phi^+|: clearly, while |\Phi^+\rangle is an eigenstate, and thus, the measurement will return +1 determinately, \rho_{AB} is not, and the outcome will be random; the assumption of being able to give an ignorance interpretation to their states leads Alice and Bob to make false predictions.

So in what sense could you consider these states equivalent?
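If it helps, the whole Alice/Bob argument fits in a few lines of numpy (the toy calculation is mine; it just restates the equations above):

```python
import numpy as np

# Compare the entangled global state |Phi+> with the product of the two
# local mixtures: local statistics agree, correlations do not.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_ent = np.outer(phi_plus, phi_plus)  # the actual global state

rho_local = 0.5 * (np.outer(ket0, ket0) + np.outer(ket1, ket1))
rho_prod = np.kron(rho_local, rho_local)  # the "proper mixture" hypothesis

# Locally the two assignments are indistinguishable ...
Z = np.diag([1.0, -1.0])
ZI = np.kron(Z, np.eye(2))
IZ = np.kron(np.eye(2), Z)
assert np.isclose(np.trace(rho_ent @ ZI), np.trace(rho_prod @ ZI))
assert np.isclose(np.trace(rho_ent @ IZ), np.trace(rho_prod @ IZ))

# ... but the Z-Z correlations differ: perfect vs. none
ZZ = np.kron(Z, Z)
corr_ent = np.trace(rho_ent @ ZZ).real   # 1.0
corr_prod = np.trace(rho_prod @ ZZ).real  # 0.0

# And measuring the projector |Phi+><Phi+| also tells them apart
P = rho_ent
p_ent = np.trace(rho_ent @ P).real   # 1.0 (eigenstate, deterministic)
p_prod = np.trace(rho_prod @ P).real  # 0.25 (random outcome)
```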
 
  • #275
Jazzdude said:
That's not what I have quoted or referred to. My reply was specifically about your claim that it makes no sense to have a special basis in the process of observation.

I have had a look at the responses here and there has been a misunderstanding of contexts.

The original quote you gave was in reference to the assumption of non-contextuality in the proof of Gleason's theorem, which is that the measure is basis independent. That didn't make sense in that context, so I assumed it meant that decoherence didn't single out a specific basis.

Thanks
Bill
 
  • #276
stevendaryl said:
The MWI interpretation basically amounts to the claim that all mixtures are "improper" in that sense.

I have not followed that thread, but as I understand it, MWI absolutely depends on there being no observational difference between "proper" and "improper" mixtures.

So I took a look at one of your comments in that thread:

Mathematically, this is the same object that one would use to describe a system that is prepared either in the state |M_1⟩ or |M_2⟩ with a respective probability of |c_1|^2 or |c_2|^2. However---and this is where the argument goes wrong, I believe---in case this object is arrived at by tracing out the degrees of freedom of another subsystem, one can't interpret it in the way that the system is in fact in either of the states |M_1⟩ or |M_2⟩, and we just don't know which.

Is that supposed to be a criticism of MWI? MWI essentially claims that it's incorrect to attribute QM probabilities to ignorance. It's incorrect to attribute mixed states to ignorance. So it's no criticism of MWI that it has a conclusion that is different from the conclusion of a theory that attributes mixed states to ignorance--that's the whole point of MWI. The real issue isn't whether the mixed states are due to ignorance (they are, in some interpretations, and they are not, in other interpretations). The issue is whether and how both interpretations are consistent with what we observe.
 
  • #277
S.Daedalus said:
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture.

What exactly about the answer based on Decoherent Histories I gave in the thread you linked to didn't you understand?

Added Later:

You seemed to understand the issue in that thread:
'Now as I said, you can augment the scheme so as to avoid this problem---by, say, going the modal route, or by just working within the framework of consistent histories as Griffiths does; but then you are adding an extra interpretation to the quantum formalism and not, as Hobson claims to do, resolving the problem 'from within' (something which by the way runs headlong into multiple insolubility theorems of the measurement problem from within quantum mechanics formulated over the years, starting with Fine (1970)). And of course, all of these interpretations do have their own problems (contradictory inferences in consistent histories, Kochen-Specker contradictions/inconsistent value state assignments in modal theories, etc.).'

The answer is still the same, and I have never shied away from it, but for some reason it seems to be brought up all the time: you need further assumptions. MWI is one route, Decoherent Histories is another.

Thanks
Bill
 
  • #278
stevendaryl said:
Is that supposed to be a criticism of MWI?
No, that was a criticism of Hobson's approach. It's also relevant in the context of this thread because people keep attempting to appeal to Gleason's theorem in order to recover the Born probabilities in the MWI, but this fails for a similar reason, namely that proper and improper mixtures are not the same thing.

In a way, getting a proper from an improper mixture is what the measurement problem is all about. Collapse interpretations solve it by fiat: someone snaps their fingers, and out comes the desired proper mixture. Many people find this dissatisfying, and with good reason. But then proposing a solution that ends up depending on the very same sleight of hand is no progress at all.
 
  • #279
bhobba said:
What exactly about the answer based on Decoherent Histories I gave in the thread you linked to didn't you understand?
Well, you didn't really give me something to understand, you merely gave a reference. And furthermore, many worlds and consistent histories are two distinct interpretations, with different problems; that a problem can be solved in one doesn't mean it can be solved in the other. Besides, from what I know about it, I don't see how consistent histories permits the identification of improper and proper mixtures; after all, it still must account for Bell tests somehow. But at any rate, if you think that you can show how, in the scenario I outlined above, Alice and Bob can't simply meet up and compare their measurement records in order to find out that their states couldn't have been proper mixtures, I'm all ears.

Response to the bit you added later: yes, I do think that the problem of probability is (at the very least) less severe in consistent histories, but that doesn't mean that it solves the problems of the MWI; you can't fix the MWI by just switching to a different interpretation. It's within the MWI framework that we need to solve the problem here; you can add all manner of things in order to get the right probabilities (though some people wouldn't even believe that), but the question is whether you can (as is often claimed) resolve it within the MWI itself.
 
  • #280
S.Daedalus said:
I just don't see in what sense that's true; it's trivial to distinguish a proper from an improper mixture. Take the state
|\Phi^+\rangle=\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle).
Locally, both Alice and Bob describe it by the mixture
\rho_A=\rho_B=\frac{1}{2}(|0\rangle\langle 0| + |1\rangle\langle 1|),
and all of their observations will be in line with this assignment. But if they now believe that therefore, their respective states are in an actual mixture of |0\rangle and |1\rangle, they must also believe that the global state corresponds to
\rho_{AB}=\rho_A\otimes\rho_B=\frac{1}{4}(|00\rangle\langle 00| + |01\rangle\langle 01| + |10\rangle\langle 10| + |11\rangle\langle 11|),
simply because that is the state the system would be in if it were the case that each local state were a proper mixture, one for instance generated by producing the states |0\rangle or |1\rangle equiprobably at random. But of course, this state is observationally very different from |\Phi^+\rangle: for instance, both Alice and Bob would expect their measurements to be entirely uncorrelated, but in fact, they will be perfectly correlated. This amounts to a falsification of the belief that their states can be described by a proper mixture, i.e. that they can be given an ignorance interpretation. The states can't be identified, even though the local observations are equivalent.

Alternatively, you can just measure |\Phi^+\rangle \langle\Phi^+|: clearly, while |\Phi^+\rangle is an eigenstate, and thus, the measurement will return +1 determinately, \rho_{AB} is not, and the outcome will be random; the assumption of being able to give an ignorance interpretation to their states leads Alice and Bob to make false predictions.

So in what sense could you consider these states equivalent?

I think you miss the point of the use of decoherence in saying that proper and improper mixed states are observationally indistinguishable. There is a mathematical difference between the two, because the improper mixed state contains "interference terms" that are absent in the proper mixed state. But in order to observe these interference effects, you have to perform a measurement that has different outcomes (or different probabilities of outcomes) if the interference terms are present. Basically, what that amounts to is performing a measurement that "unmixes" the states. But when the subsystems involve many, many states (an observer, or the environment) this is a practical impossibility, on the order of putting a broken pane of glass back together. Mixing is irreversible in practice for the same reason that the classical physics of 10^23 particles is irreversible in practice, even though both are reversible in theory.
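A small sketch of the "interference terms" point, for concreteness (the example is mine, using the |\Phi^+\rangle state from earlier in the thread):

```python
import numpy as np

# The improper mixture (entangled global state) and the corresponding
# proper mixture 1/2(|00><00| + |11><11|) have identical diagonals, so any
# measurement diagonal in this basis sees no difference; only a measurement
# sensitive to the off-diagonal terms "unmixes" the state and tells them apart.
ket00 = np.zeros(4)
ket00[0] = 1.0
ket11 = np.zeros(4)
ket11[3] = 1.0
phi_plus = (ket00 + ket11) / np.sqrt(2)

rho_improper = np.outer(phi_plus, phi_plus)
rho_proper = 0.5 * (np.outer(ket00, ket00) + np.outer(ket11, ket11))

# Same diagonal: identical statistics for |00>,|01>,|10>,|11> outcomes
assert np.allclose(np.diag(rho_improper), np.diag(rho_proper))

# The projector |Phi+><Phi+| is sensitive to the interference terms
p_improper = np.trace(rho_improper @ rho_improper).real  # 1.0
p_proper = np.trace(rho_proper @ rho_improper).real      # 0.5
```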

I don't think your mathematical demonstration is correct. I don't think you combine mixed states that way.
 
  • #281
S.Daedalus said:
No, that was a criticism of Hobson's approach. It's also relevant in the context of this thread because people keep attempting to appeal to Gleason's theorem in order to recover the Born probabilities in the MWI, but this fails for a similar reason, namely that proper and improper mixtures are not the same thing.

In a way, getting a proper from an improper mixture is what the measurement problem is all about. Collapse interpretations solve it by fiat: someone snaps their fingers, and out comes the desired proper mixture. Many people find this dissatisfying, and with good reason. But then proposing a solution that ends up depending on the very same sleight of hand is no progress at all.

That's absolutely not true. The progress is that you do away with a nonphysical collapse hypothesis. I agree that there are conceptual difficulties with MWI, but what the use of "improper" mixtures shows is that there is really no evidence that any particular measurement collapses the wave function. So there is no evidence that macroscopic objects (even measurement devices and observers) can't be treated quantum mechanically.

I certainly agree that there is still mystery involved in the interpretation of quantum mechanics, measurements and probabilities and all that. But I don't consider the use of mixed states to be still a mystery.
 
Last edited:
  • #282
S.Daedalus said:
Well, you didn't really give me something to understand, you merely gave a reference.

I beg to differ. As I said in that thread there is a key assumption in your argument:

bhobba said:
Basically you are assuming it possesses those properties SIMULTANEOUSLY globally - which the situation doesn't require. The complete analysis is given in the reference and hinges on the concept of framework used in that interpretation - basically one is free to choose frameworks as long as they are consistent - and a framework exists where it has both those properties locally - but not globally.

You even said it in that thread - assuming an epistemic interpretation that you left out in what you posted here.

Thanks
Bill
 
  • #283
stevendaryl said:
I think you miss the point of the use of decoherence in saying that proper and improper mixed states are observationally indistinguishable. There is a mathematical difference between the two, because the improper mixed state contains "interference terms" that are absent in the proper mixed state. But in order to observe these interference effects, you have to perform a measurement whose outcomes (or outcome probabilities) differ depending on whether the interference terms are present.
No. That's the point of Alice and Bob's story: they only make the local measurements, which don't detect interference terms and look just as if the states actually were mixed. But upon comparing their measurement records, they will notice that they always got the same results---i.e. that they are perfectly correlated---whereas the prediction under the belief of proper mixedness is that they would observe no correlation at all.

That's not to deny that, for all practical purposes, you can consider decohered states to be effectively mixed, since, as you say, the relevant measurements are pretty much impossible to perform; but this is not just a FAPP question: if you want to appeal to Gleason in order to get probabilities in the MWI, then you require that the identification can be made exactly; otherwise, Gleason's theorem just has nothing to say about probability in the MWI.

I don't think your mathematical demonstration is correct. I don't think you combine mixed states that way.
Of course you do.
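The "interference terms" point can be made explicit with a small NumPy sketch (the Bell state and the choice of global measurement are illustrative assumptions, not from the posts): the improper mixture and the corresponding proper mixture agree on all local statistics, but a global measurement tells them apart.

```python
import numpy as np

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_improper = np.outer(phi, phi)                    # pure entangled state
# Proper mixture: the system really is in |00> or |11>, each with p = 1/2
rho_proper = 0.5 * np.diag([1.0, 0.0, 0.0, 1.0])

def reduce_A(rho):
    """Reduced state of subsystem A (trace out B)."""
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Locally the two are identical: both reduce to the maximally mixed state I/2.
# But the global projector onto phi itself -- an 'interference' measurement --
# gives expectation 1 for the pure state and only 1/2 for the proper mixture.
P = np.outer(phi, phi)
p_improper = np.real(np.trace(P @ rho_improper))   # -> 1.0
p_proper = np.real(np.trace(P @ rho_proper))       # -> 0.5
```

This is exactly the situation described above: the distinguishing measurement is global (and, for macroscopic subsystems, practically impossible), while every local measurement is blind to the difference.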
 
  • #284
bhobba said:
I beg to differ. As I said in that thread there is a key assumption in your argument:
Even that quote just directs me to the reference. I repeat: what is it that prohibits Alice and Bob from just writing down their results and comparing them?
 
  • #285
S.Daedalus said:
No. That's the point of Alice and Bob's story: they only make the local measurements, which don't detect interference terms and look just as if the states actually were mixed. But upon comparing their measurement records, they will notice that they always got the same results---i.e. that they are perfectly correlated---whereas the prediction under the belief of proper mixedness is that they would observe no correlation at all.

That's not to deny that, for all practical purposes, you can consider decohered states to be effectively mixed, since, as you say, the relevant measurements are pretty much impossible to perform; but this is not just a FAPP question: if you want to appeal to Gleason in order to get probabilities in the MWI, then you require that the identification can be made exactly; otherwise, Gleason's theorem just has nothing to say about probability in the MWI.

Of course you do.

The article specifically says "If A and B are two distinct and independent systems then \rho_{AB}=\rho_{A}\otimes\rho_{B} which is a product state."

They don't elaborate on what "independent systems" means, but in fact, that rule is only valid if you ignore entanglement. It's absolutely incorrect in the case we're talking about.
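As an illustrative check of this point (with the Bell state standing in for the entangled system under discussion): the product of the reduced states is not the global state, and it misses exactly the correlations Alice and Bob would see when comparing their records.

```python
import numpy as np

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(psi, psi)

r = rho_AB.reshape(2, 2, 2, 2)           # indices (a, b, a', b')
rho_A = np.trace(r, axis1=1, axis2=3)    # trace out B
rho_B = np.trace(r, axis1=0, axis2=2)    # trace out A

product = np.kron(rho_A, rho_B)          # = I/4, NOT equal to rho_AB

# Z-Z correlation: perfect for the true state, absent for the product state
ZZ = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))
corr_true = np.real(np.trace(ZZ @ rho_AB))       # -> 1.0 (perfect correlation)
corr_product = np.real(np.trace(ZZ @ product))   # -> 0.0 (no correlation)
```

The rule \rho_{AB}=\rho_{A}\otimes\rho_{B} thus fails precisely because the subsystems are entangled: the product state predicts uncorrelated outcomes, which comparison of the records would falsify.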
 
Last edited:
  • #286
stevendaryl said:
They don't elaborate on what "independent systems" means, but in fact, that rule is only valid if you ignore entanglement. It's absolutely incorrect in the case we're talking about.
Yes, that's the point I'm trying to get across: attaching an ignorance interpretation to their local states is exactly ignoring the entanglement between them, and erroneously stipulating that they can treat the system as being in a definite, but unknown, state---which is simply not the case if you have entanglement. It's exactly the error made in the argument that proper and improper mixtures can be treated the same!
 
  • #287
tom.stoer said:
Seems that I am the only one where "the penny did not drop".
I'm in the same boat. I find it difficult to get to the core of the issue and even to ask the right questions.

What I get is the assertion that probabilities either don't make sense (but I don't get how replacing them by amplitudes is sufficient) or, if we insist on having them, they follow from Gleason's theorem (but I don't get what additional assumptions, if any, are made in this case).
 
  • #288
S.Daedalus said:
Locally, that's true, but once Alice and Bob get together and compare their measurement results, or a measurement is carried out on both parts of the system, you get results that falsify the idea that the parts of the system are in some definite state, and we just don't know which---correlations that we can't account for with such a model in the first case, and interference results in the second. These are perfectly valid observations, so I don't see how it's true that the two states are 'observationally equivalent'.
I think if you want to argue that improper and proper mixtures are the same in the framework of the MWI, you have to look at it from the other viewpoint: every mixture is improper. For seemingly proper mixtures, it is just too complicated to track the correlations between the system and the large, uncontrollable environment.
 
  • #289
kith said:
I think if you want to argue that improper and proper mixtures are the same in the framework of the MWI, you have to look at it from the other viewpoint: every mixture is improper. For seemingly proper mixtures, it is just too complicated to track the correlations between the system and the large, uncontrollable environment.
Yes, I believe that's a valid point of view. But of course, the issue of proper mixtures only arose because of the attempt to appeal to Gleason's theorem in order to get the Born rule in the MWI (I know, we're getting deeper and deeper down the rabbit hole here), for which (I have argued) you need proper mixtures (i.e. states that really are in a given subspace---otherwise, the measure on subspaces Gleason gives just has nothing to say).
 
  • #290
S.Daedalus said:
Yes, I believe that's a valid point of view. But of course, the issue of proper mixtures only arose because of the attempt to appeal to Gleason's theorem in order to get the Born rule in the MWI (I know, we're getting deeper and deeper down the rabbit hole here), for which (I have argued) you need proper mixtures (i.e. states that really are in a given subspace---otherwise, the measure on subspaces Gleason gives just has nothing to say).
I think the main point of mfb was that we shouldn't talk about probabilities at all in the MWI. In this case, it is meaningless to talk about the Born rule or its derivation. If we insist on talking about probabilities, we have to make additional assumptions (?) which allow us to derive the Born rule via Gleason's theorem.

I am much more interested in discussing if and how we get and verify predictions from QM using the MWI without talking about probabilities. I think it is crucial to understand this viewpoint first and then look at how it is connected to the probabilistic picture.
 
  • #291
S.Daedalus said:
Even that quote just directs me to the reference. I repeat: what is it that prohibits Alice and Bob from just writing down their results and comparing them?

Nothing - but how does that imply the proper mixed state you wrote down?

Basically your supposed proper mixed state is wrong. The proper mixed state depends on who does the observing and it only contains two terms.

Thanks
Bill
 
Last edited:
  • #292
S.Daedalus said:
Yes, that's the point I'm trying to get across: attaching an ignorance interpretation to their local states is exactly ignoring the entanglement between them, and erroneously stipulating that they can treat the system as being in a definite, but unknown, state---which is simply not the case if you have entanglement. It's exactly the error made in the argument that proper and improper mixtures can be treated the same!

I think you're wrong. Using \rho_{AB} = \rho_A \otimes \rho_B is the same sort of assumption as using P(X \wedge Y) = P(X) \times P(Y) to compute joint probabilities. It's correct under the assumption that X and Y are independent random variables, but not in general.

It's not the fact that they are "improper" mixtures that prevents you from combining density matrices that way; it's the fact that you are treating systems as independent that have a common history. Probabilities due to ignorance are NOT independent for systems that share a common history.
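The classical analogy can be spelled out with a toy sketch (purely illustrative, not from the thread): two records of the same coin flip have uniform marginals, but multiplying the marginals wrongly predicts independence, just as \rho_A \otimes \rho_B wrongly predicts uncorrelated subsystems.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=100_000)   # a fair coin
y = x.copy()                           # a second record of the SAME flips (common history)

p_x = x.mean()                           # marginal P(X=1), about 0.5
p_y = y.mean()                           # marginal P(Y=1), about 0.5
p_joint = np.mean((x == 1) & (y == 1))   # true joint P(X=1, Y=1), about 0.5
p_indep = p_x * p_y                      # independence assumption gives about 0.25
```

Each marginal alone is a perfectly honest "ignorance" description, yet the joint distribution is not the product of the marginals, because the two records share a common history.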
 
Last edited:
  • #293
kith said:
I think the main point of mfb was that we shouldn't talk about probabilities at all in the MWI. In this case, it is meaningless to talk about the Born rule or its derivation. If we insist on talking about probabilities, we have to make additional assumptions (?) which allow us to derive the Born rule via Gleason's theorem.

I am much more interested in discussing if and how we get and verify predictions from QM using the MWI without talking about probabilities. I think it is crucial to understand this viewpoint first and then look at how it is connected to the probabilistic picture.
I think this is what mfb meant with his 'hypothesis testing', but I'm afraid the point is lost on me---as I said, I can certainly form the hypothesis that relative frequencies are asymptotically distributed according to Born, test it, and become convinced it's right; I can also form the hypothesis that there are green apples, test it, and become convinced of it. Both merely mean that the MWI is consistent with the hypothesis; in neither case does the hypothesis then follow from it, so it only posits something which is not explained within the MWI.

bhobba said:
Nothing - but how does that imply the proper mixed state you wrote down?
The proper mixed state is implied by Alice and Bob's belief that they can attach an ignorance interpretation to their local states. The fact that they will observe perfect correlations once they compare their measurements falsifies this belief.

stevendaryl said:
I think you're wrong. Using \rho_{AB} = \rho_A \otimes \rho_B is the same sort of assumption as using P(X \wedge Y) = P(X) \times P(Y) to compute joint probabilities. It's correct under the assumption that X and Y are independent random variables, but not in general.

It's not the fact that they are "improper" mixtures that prevents you from combining density matrices that way; it's the fact that you are treating systems as independent that have a common history. Probabilities due to ignorance are NOT independent for systems that share a common history.
No, it's the fact that Alice and Bob apply an ignorance interpretation to their states that makes me write down the state in that way---that's just what an ignorance interpretation means: they believe that their system is actually in one of the states |0\rangle or |1\rangle; from this alone, it follows that they must hold the global state to be the mixture I wrote down.

I'm really not sure where the disconnect lies. To me, 'improper mixtures aren't proper mixtures' is nothing but a 40-some year old cut-and-dried textbook result, preached to every student of QM who finds out that if he traces out the measured system, what's left looks like a statistical mixture of distinct apparatus states, and then thinks to have solved the measurement problem. The argument is given in many different ways by different authors---first, perhaps, by d'Espagnat (though it was probably known earlier), or in the textbook by Hughes, and so on. Quick googling brought up this by Mittelstaedt, and many other statements like it. I don't really think there's any controversy around this issue.
 
  • #294
stevendaryl said:
It's not the fact that they are "improper" mixtures that prevents you from combining density matrices that way; it's the fact that you are treating systems as independent that have a common history. Probabilities due to ignorance are NOT independent for systems that share a common history.

I also think he is misunderstanding what decoherence says. Whoever does the observation does the decoherence - it is at that point that it becomes an improper mixed state, and it is exactly the same as the proper one - mathematically, that is.

Thanks
Bill
 
  • #295
S.Daedalus said:
No, it's the fact that Alice and Bob apply an ignorance interpretation to their states that makes me write down the state in that way.

But what you wrote is NOT correct, under the ignorance interpretation of mixed states.

As it says in the article:

\rho_{A} = tr_B \rho_{AB}
\rho_{B} = tr_A \rho_{AB}

From these two definitions, it does not follow that
\rho_{AB} = \rho_A \otimes \rho_B
except in special cases.

You can't, in general, combine subsystem density matrices that way, regardless of whether the mixtures arose from "ignorance".
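A small partial-trace helper (a sketch, assuming two qubits) makes the asymmetry explicit: the reductions are always well defined, but recombining them as a tensor product recovers \rho_{AB} only in the special case of a product state.

```python
import numpy as np

def partial_trace(rho_AB, keep):
    """Reduced state of a two-qubit rho_AB; keep='A' traces out B, keep='B' traces out A."""
    r = rho_AB.reshape(2, 2, 2, 2)   # indices (a, b, a', b')
    if keep == 'A':
        return np.trace(r, axis1=1, axis2=3)
    return np.trace(r, axis1=0, axis2=2)

# Product (independent) state: recombination succeeds
rho_prod = np.kron(np.diag([0.7, 0.3]), np.diag([0.4, 0.6]))
recombined = np.kron(partial_trace(rho_prod, 'A'), partial_trace(rho_prod, 'B'))
works = np.allclose(recombined, rho_prod)            # True

# Entangled state: recombination fails
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_ent = np.outer(bell, bell)
recombined_ent = np.kron(partial_trace(rho_ent, 'A'), partial_trace(rho_ent, 'B'))
fails = not np.allclose(recombined_ent, rho_ent)     # True
```

The partial trace discards the correlations between the subsystems, so they cannot be restored by taking a product; that information loss is exactly why the two definitions above do not imply \rho_{AB} = \rho_A \otimes \rho_B.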
 
Last edited:
  • #296
bhobba said:
Decoherence adherents, unless they are being disingenuous like the paper cited before, do not claim it solves the measurement problem. What they claim is that it's a non-issue, because it's observationally the same as a proper mixture and gives the appearance of wavefunction collapse.

I understand very well what you are claiming. But even the claim that "proper and improper mixtures are observationally indistinguishable" requires the measurement postulate, or something equivalent. Because otherwise ensembles cannot be expressed in terms of density matrices.

Prove me wrong by deriving the density matrix formalism for ensembles without referring in any way to any form of the measurement postulate.

Cheers,

Jazz
 
  • #297
bhobba said:
I never claimed that - in fact I don't even know what you mean by that.

Then please let's discuss what you don't understand. Just saying that it's not true won't get us any further.


To be clear my claim is that decoherence solves the preferred basis problem as stated on page 113 of the reference I gave before by Schlosshauer. He gives 3 issues the measurement problem must solve:

1. The preferred basis problem
2. The problem of non observability of interference
3. The problem of why we have outcomes at all.

The statement he makes is my position:
'it is reasonable to conclude decoherence is capable of solving the first two, whereas the third problem is linked to matters of interpretation'

Yes, I know Schlosshauer's publications. And what I'm saying is that 1) is very questionable for reasons I gave earlier, 2) is a valid conclusion unless we gain a possibly more complete understanding of observation that invalidates it, and 3) is absolutely out of reach of the decoherence framework because it doesn't even look at single outcomes.

But you add 4) that decoherence gives a behavioristic explanation for the world we see by stating that ensembles and reduced states are indistinguishable by experiments. That takes what we know experimentally about observation (namely, that it only results in probabilities, so we can encode ensembles more densely by putting them in a form compatible with the measurement rule) and uses it to explain observation. That is circular.

Cheers,

Jazz
 
  • #298
Jazzdude said:
Prove me wrong by deriving the density matrix formalism for ensembles without referring in any way to any form of the measurement postulate.

How can I prove you wrong about something I am not claiming? The claim is that it doesn't matter - not that the measurement postulate isn't still there.

Even with proper mixed states you have the measurement postulate - it's still there - it's just that it conforms to our intuition of the system having the value prior to observation, and you don't have collapse, because measurement is revealing what's already there. It simply means you can interpret things in a more reasonable manner that APPEARS to solve the measurement problem.

Thanks
Bill
 
  • #299
bhobba said:
How can I prove you wrong about something I am not claiming? The claim is that it doesn't matter - not that the measurement postulate isn't still there.
Even with proper mixed states you have the measurement postulate - it's still there - it's just that it conforms to our intuition of the system having the value prior to observation, and you don't have collapse, because measurement is revealing what's already there. It simply means you can interpret things in a more reasonable manner that APPEARS to solve the measurement problem.

So you're saying that, in a setting where you have decoherence AND the measurement postulate is valid, you get something that explains why a decohered system looks like a mixture, and that explains all practical aspects of measurement. Why use decoherence at all? This follows from the measurement postulate alone already.

Cheers,

Jazz
 
  • #300
Jazzdude said:
But you add 4) that decoherence gives a behavioristic explanation for the world we see by stating that ensembles and reduced states are indistinguishable by experiments.

Sigh.

I say it gives the APPEARANCE that the world behaves in way that conforms more readily to our intuition and hence gives the APPEARANCE of solving the problem.

Thanks
Bill
 