Is there an interpretation independent outcome problem?

Derek Potter
I have it on good authority that decoherence (probably) solves the preferred basis problem and explains the suppression of interference phenomena; however there remains the question: why do we get outcomes at all?

Unitary evolution of coupled systems results in entanglement and thus an improper mixed state. So, I would argue, when an observer looks at a superposition it looks like a mixed state: a probability distribution of different outcomes.

What then is left to explain? Or is there something wrong with this argument?

Please, please, please, don't tell me that QM is required to explain how a truly classical world emerges! QM is notoriously about observations and if decoherence accounts for observations that look like classical ones, then surely the job is done! It is nonsense to move the goalposts when the game is over.

But maybe there is something to explain which is staring me in the face but which I can't see?
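The improper-mixture claim above can be checked numerically. Here is a minimal numpy sketch (my own toy example, not from any source in this thread): a two-qubit Bell state plays the role of system + environment, and tracing out the second qubit leaves the first in the maximally mixed state, even though the joint state is pure.

```python
import numpy as np

# Two-qubit "system + environment" toy model.
# |psi> = (|00> + |11>)/sqrt(2): a maximally entangled (Bell) state.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())           # joint state, which is pure

# Trace out the environment (second qubit) to get the system's
# reduced density matrix -- the "improper mixture".
rho_sys = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_sys)                            # [[0.5, 0], [0, 0.5]]
print(np.trace(rho @ rho).real)           # 1.0 (joint state is pure)
print(np.trace(rho_sys @ rho_sys).real)   # 0.5 (reduced state is mixed)
```

Nothing in the 2x2 reduced matrix distinguishes this improper mixture from a proper 50/50 ensemble of |0> and |1> - which is exactly what makes the outcome question interesting.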
 
Last edited:
Derek Potter said:
So, I would argue, when an observer looks at a superposition it looks like a mixed state: a probability distribution of different outcomes.

That's demonstrably false, e.g. the double slit. What the back screen 'observes' is a superposition of the state behind each slit.

Thanks
Bill
 
This is a good extract from the book 'Quantum Enigma' (2nd edition) on decoherence - page 209:
"...that the photon pass through our boxes and then encounter the macroscopic environment. Assuming thermal randomness, one can calculate the extremely short time after which interference experiment becomes impossible, for all practical purposes. ... Averaging over the decohered wave-functions of the atoms leaves us with an equation for a classical-like probability for each atom actually existing wholly in one or the other box of its pair. ... Those classical-like probabilities are still probabilities of what will be observed. They are not true classical probabilities of something that actually exists."
 
Derek Potter said:
I have it on good authority that decoherence (probably) solves the preferred basis problem and explains the suppression of interference phenomena; however there remains the question: why do we get outcomes at all?

The only resolutions of this sort of quantum measurement problem (that I understand and am familiar with) come in two interpretations:

In the Bayesian interpretation, we simply don't know why we get one outcome and not another, but quantum mechanics tells us what our best guesses should be without any true certainty one way or the other.

In the Everett (many worlds) interpretation, the action of an observer during measurement is an interaction just like between any other pair of physical systems.
As a result, the quantum state of the observer+system becomes entangled, and the eigenstates of measurement outcomes become correlated to distinct quantum states of the observer. Individually, the state of the system is in a mixed state, and so is the state of the observer, but together their joint state is a pure entangled state (assuming their states were pure to begin with).
In this interpretation, the observer is a physical system with a memory that records the outcomes of its interactions with other physical systems. Upon examining the observer, we only ever see one string of outcomes, even though that observer might be in a mixed state of all possible strings (due to its entanglement with other systems).
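Schematically (standard notation, not taken from the post above: ##|s_i\rangle## are the system eigenstates, ##|O_i\rangle## the correlated observer/memory states, assumed orthogonal):
$$\Big(\sum_i c_i\,|s_i\rangle\Big)\otimes|O_{\text{ready}}\rangle \;\longrightarrow\; \sum_i c_i\,|s_i\rangle\otimes|O_i\rangle, \qquad \rho_S=\mathrm{Tr}_O\,|\Psi\rangle\langle\Psi|=\sum_i |c_i|^2\,|s_i\rangle\langle s_i|.$$
The joint state after the arrow is pure and entangled, while each reduced state taken alone is mixed, matching the description above.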

That being said, I don't think there's any real scientific consensus on the subject.
 
StevieTNZ said:
This is a good extract from the book 'Quantum Enigma' (2nd edition) on decoherence - page 209:
"...Averaging over the decohered wave-functions of the atoms leaves us with an equation for a classical-like probability for each atom actually existing wholly in one or the other box of its pair. ... Those classical-like probabilities are still probabilities of what will be observed. They are not true classical probabilities of something that actually exists."


I'm just the idiot in the room, so please have pity on me. But, Stevie's quote from Rosenblum and Kuttner's book seems to address my primary question about this, which is... Does the improper mixture ONLY make a statistical prediction of outcome? Is there an underlying reality that actually EXISTS after the mixed state is "purified" by measurement? Unless that question can be answered, I don't understand how the improper mixed state can be differentiated substantively from the proper mixed state based solely on the formalism of QT. Can anyone explain that to me?
 
Feeble Wonk said:
I'm just the idiot in the room, so please have pity on me. But, Stevie's quote from Rosenblum and Kuttner's book seems to address my primary question about this, which is... Does the improper mixture ONLY make a statistical prediction of outcome? Is there an underlying reality that actually EXISTS after the mixed state is "purified" by measurement? Unless that question can be answered, I don't understand how the improper mixed state can be differentiated substantively from the proper mixed state based solely on the formalism of QT. Can anyone explain that to me?
I think there are two different definitions floating around on this forum of what a proper and an improper mixed state mean.

What do you mean by purified? If you mean, 'make a measurement on a 'mixed state'*', then there would be only one outcome -- but when does that one outcome occur? Apparatus? Apparatus measuring the first apparatus? On the more extreme side, consciousness? (the famous measurement problem). Before then it is in a superposition.

*I use quotation marks around mixed state as it is still in a pure, superposition, state, even after decoherence.

QM only makes statistical predictions of outcomes.
 
StevieTNZ said:
I think there are two different definitions floating around on this forum of what a proper and an improper mixed state mean.

What do you mean by purified? If you mean, 'make a measurement on a 'mixed state'*', then there would be only one outcome -- but when does that one outcome occur? Apparatus? Apparatus measuring the first apparatus? On the more extreme side, consciousness? (the famous measurement problem). Before then it is in a superposition.

*I use quotation marks around mixed state as it is still in a pure, superposition, state, even after decoherence.

QM only makes statistical predictions of outcomes.

I probably used the term "purified" incorrectly. I guess my real question is regarding the concept of an "improper" mixed state.

The way it's been explained to me thus far (or at least my takeaway understanding) is that a "proper" mixed state is an "unknown" quantum state that is "either/or", as opposed to a "pure" state that is unresolved (still "and", so to speak), which is in true superposition. Assuming that I've got at least that much straight, my confusion still applies to an "improper" mixed state, which I've been told is secondary to environmental decoherence. If I understood it correctly, decoherence statistically predicts/defines (presumably by logical limitation) the outcome of environmental interaction with the quantum system, even in the absence of (or prior to) true quantum "collapse" (if such a thing actually occurs).

The argument then goes further, suggesting that because the proper and improper states are mathematically indiscernible, they are, in fact, the same thing.

That would seem reasonable to me IF the information is all that is "real", but not if the wave function is, or represents something, that is ontologically extant. Does that make any sense at all?
 
jfizzix said:
The only resolutions of this sort of quantum measurement problem (that I understand and am familiar with) come in two interpretations:
In the Bayesian interpretation, we simply don't know why we get one outcome and not another, but quantum mechanics tells us what our best guesses should be without any true certainty one way or the other.
In the Everett (many worlds) interpretation, the action of an observer during measurement is an interaction just like between any other pair of physical systems.
As a result, the quantum state of the observer+system becomes entangled, and the eigenstates of measurement outcomes become correlated to distinct quantum states of the observer. Individually, the state of the system is in a mixed state, and so is the state of the observer, but together their joint state is a pure entangled state (assuming their states were pure to begin with).
In this interpretation, the observer is a physical system with a memory that records the outcomes of its interactions with other physical systems. Upon examining the observer, we only ever see one string of outcomes, even though that observer might be in a mixed state of all possible strings (due to its entanglement with other systems).
That being said, I don't think there's any real scientific consensus on the subject.

I see no difference. QM defines probabilities on a state which is subject to unitary evolution. That's the Bayesian side taken care of. The unitary evolution provides the improper mixed state which, as you point out, can be nested - observer of observer of observer - because in fact the nesting is a verbal artifact due to giving the observer a special role, whereas with unitary evolution leading to mixed states, each observer is simply one of many subsystems that are entangled.

My question isn't really about any of that. I'm asking why people say there is an unsolved problem of explaining why there are any outcomes at all in a decoherence-based theory of observation.
 
The clear answer to the question in the thread title is "no". I do not agree with how Schlosshauer or bhobba state the measurement problem; however, that is not a big deal here. The answer is clearly "no", because if an interpretation solves the measurement problem, then there is no measurement problem in that interpretation.
 
  • #10
Feeble Wonk said:
I'm just the idiot in the room, so please have pity on me. But, Stevie's quote from Rosenblum and Kuttner's book seems to address my primary question about this, which is... Does the improper mixture ONLY make a statistical prediction of outcome? Is there an underlying reality that actually EXISTS after the mixed state is "purified" by measurement? Unless that question can be answered, I don't understand how the improper mixed state can be differentiated substantively from the proper mixed state based solely on the formalism of QT. Can anyone explain that to me?

If you maintain that QM is a theory about observation then of course you are right, there is no need to worry about what the improper mixture really is. However standard formulations of QM typically mention three things that are reasonably thought of as part of reality: observations, the system itself and the system state. Plus of course the general rules that govern their behaviour. One may reasonably assume that the system exists and most people, unless they have spent too much time on Physics Forums, will assume that the system state is real whether they incline to believe in a separate entity called the wavefunction or not.

So from an observations-only PoV, there is no observational difference between a proper and an improper mixed state. But if, after examining QM with a magnifying glass, you can't see where the formalism forbids you to consider the wavefunction to be ontic then you will suddenly find yourself in fresh air and able to ask questions that some would say are meaningless or metaphysical or even philosophical. Questions like "what is going on when an interference pattern is created?" I'll say this: the mind-set that insists that meaningful questions must be answerable by pure logic and experiment is a deeply inflexible bit of philosophical dogma. Officially it should have no place on this forum.

"The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are." - David Mermin

But I would request that we stay clear of these issues as my question was quite specific and I would like to know the answer.
 
  • #11
StevieTNZ said:
I think there are two different definitions floating around on this forum of what a proper and an improper mixed state mean.
Well they are two different things so I suppose it would be reasonable to have two definitions :) but I suppose you probably mean two definitions for each one? If it is relevant to my question please say what they are as the terms seem pretty unambiguous to me.
 
  • #12
atyy said:
The answer is clearly "no", because if the interpretation solves the measurement problem, then there is no measurement problem in that interpretation.
You appear to be saying that there are interpretations in which the measurement problem is solved. The sources I have say that this is not the case, that the outcome part of the measurement problem is not solved by decoherence. I want to know why it is not as it seems trivial to me, given improper mixed states, but there again, my mind seems to work differently from that of proper physicists.
 
  • #13
Derek Potter said:
Well they are two different things so I suppose it would be reasonable to have two definitions :) but I suppose you probably mean two definitions for each one? If it is relevant to my question please say what they are as the terms seem pretty unambiguous to me.
Yes, two definitions for each one.
 
  • #14
Derek Potter said:
Yes. No definitions let alone two per type of mixture.
From the phrases I was hoping you would take away the following points:
  1. Decoherence causes interference to be suppressed, resulting in what looks like classical probabilities about something that exists, e.g. 50% of getting tails or heads when flipping a coin, with tails and heads actually existing on the coin (whether this is called a proper or improper mixture, I await Bernard's email)
  2. However those apparent classical probabilities actually still mean the system is still in a superposition, e.g. 50% of getting tails or heads from flipping a coin where heads and tails don't exist on the coin before measurement.
Regarding the two definitions of proper and improper mixture, consider them the same two definitions for both, yet to be clarified which one belongs to which.
 
  • #15
StevieTNZ said:
I have just emailed Bernard d'Espagnat asking him to clarify the difference between proper and improper mixtures. I am sure when I check my emails tomorrow morning (or even later this evening) there will be an email from him.
Well, that will be interesting, as I have certainly heard of his idea that there are no proper mixed states under unitary evolution (which seems pretty obvious to me), and I am aware that there is a controversial rebuttal, but I haven't the faintest idea what it is about.

Seems mildly amusing that bhobba's Ignorance Interpretation (not quite the same as Ballentine's, I gather) leads to the opposite conclusion: that all mixtures are proper. But why not? Switch the onticity from the state vector to the reduced density matrix and back and it's hardly surprising that the nature of a mixture changes!
 
  • #16
StevieTNZ said:
From the phrases I was hoping you would take away the following points:
  1. Decoherence causes interference to be suppressed, resulting in what looks like classical probabilities about something that exists, e.g. 50% of getting tails or heads when flipping a coin, with tails and heads actually existing on the coin (whether this is called a proper or improper mixture, I await Bernard's email)
  2. However those apparent classical probabilities actually still mean the system is still in a superposition, e.g. 50% of getting tails or heads from flipping a coin where heads and tails don't exist on the coin before measurement.
[Mentor's note: Edited to remove gratuitous personal attacks]

Now, there is a substantial point in #2 above and maybe d'Espagnat will touch upon it. My maths is very dicky so feel free to bin this suggestion, but I believe it is possible to "turn the superposition into a density matrix" in two ways. The first would explicitly include the decohering subsystem (e.g. the environment), making the description "still in a superposition" absolutely correct for the combined system. The second would ignore the state of the decohering subsystem; its state is simply unknown and unknowable. Thus the state of the system of interest (e.g. the cat) decoheres and one is left seeing it as a mixed state with no useful distinction between proper and improper.
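The two bookkeeping choices above can be made concrete in a few lines of numpy (again a toy model of my own; a CNOT gate stands in for the system-environment coupling): keeping the environment, the combined state is indeed still a pure superposition; ignoring it, the system of interest is left looking like a mixed state.

```python
import numpy as np

# System qubit in superposition (|0> + |1>)/sqrt(2); environment qubit in |0>.
plus = np.array([1, 1]) / np.sqrt(2)
env0 = np.array([1, 0])
joint = np.kron(plus, env0)

# Toy decoherence: a CNOT lets the environment record the system's state.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ joint
rho_joint = np.outer(joint, joint.conj())

# First way: include the environment -- the combined system is still pure,
# i.e. "still in a superposition".
print(np.trace(rho_joint @ rho_joint).real)   # 1.0

# Second way: ignore (trace out) the environment -- the system of interest
# is left in a diagonal, mixed-looking state with no interference terms.
rho_sys = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_sys.real)                           # [[0.5, 0], [0, 0.5]]
```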
 
Last edited by a moderator:
  • #17
StevieTNZ said:
From the phrases I was hoping you would take away the following points:
  1. Decoherence causes interference to be suppressed, resulting in what looks like classical probabilities about something that exists, e.g. 50% of getting tails or heads when flipping a coin, with tails and heads actually existing on the coin (whether this is called a proper or improper mixture, I await Bernard's email)
  2. However those apparent classical probabilities actually still mean the system is still in a superposition, e.g. 50% of getting tails or heads from flipping a coin where heads and tails don't exist on the coin before measurement.
Regarding the two definitions of proper and improper mixture, consider them the same two definitions for both, yet to be clarified which one belongs to which.

Coin tossing fails to be either because it is classical and can only be quantized by postulating a particular preparation - this will determine whether the coin inherits superposition from a quantum event such as radioactive decay, or whether it inherits a definite but unknown state from some other proper mixture created by something like wavefunction collapse.
 
  • #18
Derek Potter said:
Coin tossing fails to be either because it is classical and can only be quantumized by postulating a particular preparation - this will determine whether the coin inherits superposition from a quantum event such as radioactive decay, or whether it inherits a definite but unknown state from some other proper mixture created by something like wavefunction collapse.
In principle QM applies to all systems, micro and macro. Therefore the coin is not classical, it is quantum.
 
  • #19
bhobba said:
That's demonstrably false eg the double slit. What the back screen 'observes' is a superposition of the state behind each slit.

I do not think that speaks to Derek Potter's question. Consider the case in which we build up the interference pattern one particle at a time: We end up with a photographic plate with an interesting pattern of exposed and unexposed photosensitive granules on its surface. The granules are small, but they're still pretty clearly classical, so this is a purely classical object, no superposition at all. Sure, the incoming particles were in superposition, but that superposition collapsed when they were effectively subjected to a position measurement by the interaction with a particular granule on the surface of the plate.

So we're still confronted with the measurement problem: How did we get from unitary evolution of the (superimposed) wave function of the incoming particles to this particular combination of exposed and unexposed photosensitive granules on the surface of the plate?
 
  • #20
Derek Potter said:
Unitary evolution of coupled systems results in entanglement and thus an improper mixed state. So, I would argue, when an observer looks at a superposition it looks like a mixed state: a probability distribution of different outcomes.

What then is left to explain? Or is there something wrong with this argument?
...
But maybe there is something to explain which is staring me in the face but which I can't see?

Consider Schrodinger's cat. Decoherence tells us that the wave function of the cat+detector+nucleus will very quickly evolve into a form that has negligible interference between "live cat" and "dead cat". After decoherence we just have a simple probabilistic statement based on incomplete information (cat is alive with probability ##x##, cat is dead with probability ##y##, ##x+y=1##, the only reason we don't know which is that we haven't looked yet) no different than the statement that we'd make about a tossed coin. As with the tossed coin, the density matrix for the two outcomes is diagonal and we have a probability distribution of outcomes, both of which are classical.

But there is still something unexplained here. Why do we get one result, either "live cat" or "dead cat"? How and when is the actual outcome selected? In the case of the tossed coin the mixed state reflects our ignorance of the complete initial state of the coin and its interaction with the floor; with a complete specification of the initial system state classical physics would lead to a deterministic prediction of the coin's final state. The same is not true of the evolution of the quantum mechanical state - there's nothing in the evolution of the wave function through its interactions and entanglements with the apparatus that ever turns the probability distribution into a single sharply defined outcome. That's the problem that decoherence does not address in any interpretation.
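Nugatory's point that decoherence only diagonalizes the density matrix, without ever selecting an outcome, can be seen in a small numpy sketch (my own toy model): as the environment's records of the two branches become more distinguishable, the interference terms of the cat's reduced state shrink toward zero, but what remains is always a probability distribution over both outcomes, never a single one.

```python
import numpy as np

def reduced_cat_state(record_overlap):
    """Reduced density matrix of a 'cat' qubit, (|alive> + |dead>)/sqrt(2),
    entangled with environment states whose inner product is record_overlap.
    overlap = 1: environment records nothing; overlap = 0: perfect record."""
    e_alive = np.array([1.0, 0.0])
    e_dead = np.array([record_overlap, np.sqrt(1 - record_overlap**2)])
    joint = (np.kron([1, 0], e_alive) + np.kron([0, 1], e_dead)) / np.sqrt(2)
    rho = np.outer(joint, joint)
    # partial trace over the environment qubit
    return np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# The interference (off-diagonal) term decays as the record sharpens,
# but the diagonal -- 50/50 over both outcomes -- never changes.
for overlap in (1.0, 0.5, 0.0):
    rho = reduced_cat_state(overlap)
    print(overlap, round(rho[0, 1], 3), round(rho[0, 0], 3))
```

The off-diagonal entry is overlap/2 while the diagonal stays at 0.5; the formalism delivers ##x## and ##y## but never the selection between them.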
 
  • #21
A bunch of unnecessary personal argumentation has been removed from this thread. I'm leaving it open, but not without some trepidation... Please don't make me regret this... Please?

(If I accidentally tossed out something good in the cleanup, let me know by PM.)
 
  • #22
Nugatory said:
That's the problem that decoherence does not address in any interpretation.

Indeed. Its the problem of outcomes discussed at length by Schlosshauer:
https://www.amazon.com/dp/3540357734/?tag=pfamazon01-20

That's why I interpret it as a proper mixed state - but how it becomes one - blankout.

Thanks
Bill
 
  • #23
Derek Potter said:
You appear to be saying that there are interpretations in which the measurement problem is solved. The sources I have say that this is not the case, that the outcome part of the measurement problem is not solved by decoherence. I want to know why it is not as it seems trivial to me, given improper mixed states, but there again, my mind seems to work differently from that of proper physicists.

It depends on what one means by solved. In the realm of non-relativistic quantum mechanics, Bohmian Mechanics is widely thought to solve the measurement problem. However, it remains unclear whether the solution can be extended to all of quantum mechanics.

At any rate, the question itself assumes that there is more than one interpretation. If that is not the case, there is only one interpretation - Copenhagen - and given that Copenhagen has preferred status amongst all interpretations, then one could say that formulating the measurement problem relative to Copenhagen is interpretation-independent.
 
  • #24
Nugatory said:
Consider Schrodinger's cat. Decoherence tells us that the wave function of the cat+detector+nucleus will very quickly evolve into a form that has negligible interference between "live cat" and "dead cat". After decoherence we just have a simple probabilistic statement based on incomplete information (cat is alive with probability ##x##, cat is dead with probability ##y##, ##x+y=1##, the only reason we don't know which is that we haven't looked yet) no different than the statement that we'd make about a tossed coin. As with the tossed coin, the density matrix for the two outcomes is diagonal and we have a probability distribution of outcomes, both of which are classical.

But there is still something unexplained here. Why do we get one result, either "live cat" or "dead cat"? How and when is the actual outcome selected? In the case of the tossed coin the mixed state reflects our ignorance of the complete initial state of the coin and its interaction with the floor; with a complete specification of the initial system state classical physics would lead to a deterministic prediction of the coin's final state. The same is not true of the evolution of the quantum mechanical state - there's nothing in the evolution of the wave function through its interactions and entanglements with the apparatus that ever turns the probability distribution into a single sharply defined outcome. That's the problem that decoherence does not address in any interpretation.

This is very close to Schlosshauer's description of the measurement problem. However, I don't quite agree with it. If decoherence is sufficient to tell us where and when collapse occurs, then one can simply postulate that there is an outcome. That is not a problem. It would be like saying that Bohmian Mechanics does not solve the measurement problem because it does not explain why there are hidden variables.

I prefer a formulation in which a classical or macroscopic "observer" or "measurement apparatus" is not required to be postulated as fundamental and non-emergent from the "microscopic" or "quantum" realm.
 
  • #25
atyy said:
then one can simply postulate that there is an outcome. That is not a problem.

That's my personal favorite resolution, as it's the only one that allows me to stop worrying, be happy, and actually get some work done :smile:

It's also consistent with the position (in response to Derek Potter's original question) that decoherence alone does not resolve the problem.

Whether "one can simply postulate that..." is a satisfactory resolution depends, of course, on what one finds satisfactory and whether anyone has a better answer.
 
  • #26
Nugatory said:
I do not think that speaks to Derek Potter's question. Consider the case in which we build up the interference pattern one particle at a time: We end up with a photographic plate with an interesting pattern of exposed and unexposed photosensitive granules on its surface. The granules are small, but they're still pretty clearly classical, so this is a purely classical object, no superposition at all. Sure, the incoming particles were in superposition, but that superposition collapsed when they were effectively subjected to a position measurement by the interaction with a particular granule on the surface of the plate.

So we're still confronted with the measurement problem: How did we get from unitary evolution of the (superimposed) wave function of the incoming particles to this particular combination of exposed and unexposed photosensitive granules on the surface of the plate?
If that is the "outcome problem" then it has a very simple solution as I am sure you know. There is no definite outcome, but the system of screen+observer (or apparatus)+environment enters a superposition of states with the photon absorbed at different places. Decoherence then allows us to regard the screen as being in a mixed state - an improper one of course.
 
  • #27
atyy said:
This is very close to Schlosshauer's description of the measurement problem. However, I don't quite agree with it. If decoherence is sufficient to tell us where and when collapse occurs, then one can simply postulate that there is an outcome. That is not a problem. It would be like saying the Bohmian Mechanics does not solve the measurement problem because it does not explain why there are hidden variables.

I prefer a formulation in which a classical or macroscopic "observer" or "measurement apparatus" is not required to be postulated as fundamental and non-emergent from the "microscopic" or "quantum" realm.
Why postulate it at all? If the projection postulate is framed in terms of definite outcomes then it is simply in contradiction with the vector state postulate. Why not postulate in terms of observed outcomes? The calculating recipe remains the same. The predictions remain the same. But an opening is created for interpreting the observations as the appearance of definite outcomes, not actual definite ones.
 
  • #28
Derek Potter said:
Why postulate it at all? If the projection postulate is framed in terms of definite outcomes then it is simply in contradiction with the vector state postulate. Why not postulate in terms of observed outcomes? The calculating recipe remains the same. The predictions remain the same. But an opening is created for interpreting the observations as the appearance of definite outcomes, not actual definite ones.

That remains debatable. Here you are assuming the correctness of Many-Worlds. However, there is no consensus that any form of Many-Worlds is correct, except Bohmian Mechanics.
 
  • #29
I don't understand. Where is the assumption?
 
  • #30
Derek Potter said:
I don't understand. Where is the assumption?

Everett or relative state or whatever you wish to call it.
 
  • #31
Nugatory said:
That's my personal favorite resolution, as it's the only one that allows me to stop worrying, be happy, and actually get some work done :smile:

Same here.

Nugatory said:
It's also consistent with the position (in response to Derek Potter's original question) that decoherence alone does not resolve the problem. Whether "one can simply postulate that..." is a satisfactory resolution depends, of course, on what one finds satisfactory and whether anyone has a better answer.

Too true.

Thanks
Bill
 
  • #32
atyy said:
Everett or relative state or whatever you wish to call it.
I'm not disputing that it all sounds Everettian, but you should not say I'm assuming MWI. For all I know, a persistent global superposition can be interpreted some other way. Hmm... suppose we postulate that, as the wavefunction trundles on, never collapsing, it acts, non-locally, of course, on particles to move them around. That might work :)

I certainly believe that MWI and BM mean that the answer to the question in the title is simply "No". MWI doesn't postulate definite (single) outcomes; BM has nothing else. However the question raised in the body of the post is more specific. It is whether decoherence solves or does not solve one aspect of the measurement problem, namely the outcome problem, whatever that may be. BM doesn't have a measurement problem. What you see is what you get. MWI has a measurement problem though decoherence seems to solve it. If there is a residual "outcome problem" in measurement theory I would expect it to be common to all interpretations that incorporate decoherence. But I can't for the life of me see what it is.
 
  • #33
Nugatory said:
That's my personal favorite resolution, as it's the only one that allows me to stop worrying, be happy, and actually get some work done :smile:
Ah yes, well, my work is to explain QM to my wife who is an artist.
I have heard of the Ignorance Interpretation but it seems we now have an Ignore-It Interpretation!
Nugatory said:
It's also consistent with the position (in response to Derek Potter's original question) that decoherence alone does not resolve the problem.
Just in case anyone is confused, I should like to make it clear that it is not my position that decoherence does not resolve the problem; my question was, why do people say it doesn't?

See my reply above to atyy.
Nugatory said:
Whether "one can simply postulate that..." is a satisfactory resolution depends, of course, on what one finds satisfactory and whether anyone has a better answer.
I dare say "one can simply postulate that it is a satisfactory resolution". :rolleyes:

For myself I agree with Mermin - science should explain stuff, not just provide betting odds. Chacun à son goût.
 
  • #34
Derek Potter said:
I'm not disputing that it all sounds Everettian, but you should not say I'm assuming MWI. For all I know, a persistent global superposition can be interpreted some other way. Hmm... suppose we postulate that, as the wavefunction trundles on, never collapsing, it acts, non-locally, of course, on particles to move them around. That might work :)

After our previous discussions, I assume :) that when talking to you, MW = BM (FAPP), ie. BMMM...W.

Or for those who happen to read this and need an explanation, if Bohmian Mechanics is MWI with one world picked out, then MWI is BM with no worlds picked out - variations of the argument can be found in http://arxiv.org/abs/quant-ph/0403094 (and earlier references in there to Deutsch, and Zeh) or http://arxiv.org/abs/1112.2034.
 
  • #35
atyy said:
After our previous discussions, I assume :) that when talking to you, MW = BM (FAPP), ie. BMMM...W.
Or for those who happen to read this and need an explanation, if Bohmian Mechanics is MWI with one world picked out, then MWI is BM with no worlds picked out - variations of the argument can be found in http://arxiv.org/abs/quant-ph/0403094 (and earlier references in there to Deutsch, and Zeh) or http://arxiv.org/abs/1112.2034.
And Tegmark's Mathematical Universe is MWI with every world picked out? :)

Yes, you can probably assume that I think in MWI terms. But I wouldn't want to impose an interpretive framework on the measurement problem. If MW emerges, that's a bonus. I'm guessing one would then want to re-name it the Many Worlds Theorem.
 
  • #36
Derek Potter said:
And Tegmark's Mathematical Universe is MWI with every world picked out? :)

I've always assumed Tegmark just has really bad Calendar software, which gets April Fool's wrong quite often, due to the MWI effect.

Derek Potter said:
Yes, you can probably assume that I think in MWI terms. But I wouldn't want to impose an interpretive framework on the measurement problem. If MW emerges, that's a bonus. I'm guessing one would then want to re-name it the Many Worlds Theorem.

Is there an interpretation independent description of the measurement problem? In a strict sense, and if one restricts to non-relativistic QM, no, since at least one interpretation does not have the problem. But let's go more loosely here.

The traditional statement of the problem is relative to Copenhagen, ie. how does one state QM without postulating a classical measurement apparatus or classical observer? Since even if BM or MWI are correct, Copenhagen can be derived from them, there is a good argument that Copenhagen is in some sense "interpretation-independent" as an effective theory. Examples of stating the measurement problem relative to Copenhagen are found in:
Landau and Lifshitz https://www.amazon.com/dp/0750635398/?tag=pfamazon01-20
Dirac http://blogs.scientificamerican.com/guest-blog/the-evolution-of-the-physicists-picture-of-nature/
Bell http://www.informationphilosopher.com/solutions/scientists/bell/Against_Measurement.pdf
Tsirelson http://www.tau.ac.il/~tsirel/download/nonaxio.html

Zurek states the measurement problem as a loose conglomerate of problems relative to both Copenhagen, as well as to trying to get an approach like unitary evolution without hidden variables to make sense: http://arxiv.org/abs/quant-ph/0306072

Leifer tries to state the measurement problem without reference to an interpretation, and his approach is strongly realist, and he indicates he believes both BM and MWI are coherent potential solutions: http://mattleifer.info/tag/decoherence/

Wallace also tries to state the measurement problem in an interpretation independent way: http://arxiv.org/abs/0712.0149

So does Schlosshauer. Of all the statements of the measurement problem, this is the only one I don't agree with: http://arxiv.org/abs/quant-ph/0312059
 
  • #37
atyy said:
I've always assumed Tegmark just has really bad Calendar software, which gets April Fool's wrong quite often, due to the MWI effect.

:oldlaugh:
 
  • #38
atyy said:
I've always assumed Tegmark just has really bad Calendar software, which gets April Fool's wrong quite often, due to the MWI effect.
Is there an interpretation independent description of the measurement problem? In a strict sense, and if one restricts to non-relativistic QM, no, since at least one interpretation does not have the problem. But let's go more loosely here.
I don't think I asked that! I asked why people say there is an outcome problem that is not resolved by decoherence. Perhaps the title of my post is misleading. My actual question is expanded in the text of the post and in post #12. Why is it people say that decoherence fails to account for outcomes? It appeared to me that the assumption of definite outcomes was the culprit. I was trying to ask whether the problem remains if we don't tie ourselves to an interpretation that makes such an assumption.
atyy said:
The traditional statement of the problem is relative to Copenhagen, ie. how does one state QM without postulating a classical measurement apparatus or classical observer? Since even if BM or MWI are correct, Copenhagen can be derived from them, there is a good argument that Copenhagen is in some sense "interpretation-independent" as an effective theory.
Copenhagen seems to mean different things to different people. If you mean the assumption that there is a classical world, then of course one cannot get definite outcomes from a superposition without postulating collapse. In which case the collapse does all the work and decoherence goes on the dole. In that sense, of course, decoherence doesn't solve the outcome problem: collapse gets there first and solves it by asserting outcomes axiomatically.
But saying that Copenhagen can be derived from BM or MWI is not quite true if this meaning of Copenhagen is used consistently. You would need to insert the all-important word "appearance": the appearance of definite outcomes. And then, arguably, it is not Copenhagen but the appearance of Copenhagen :)
atyy said:
Examples of stating the measurement problem relative to Copenhagen are found in:
Landau and Lifshitz https://www.amazon.com/dp/0750635398/?tag=pfamazon01-20
Dirac http://blogs.scientificamerican.com/guest-blog/the-evolution-of-the-physicists-picture-of-nature/
Bell http://www.informationphilosopher.com/solutions/scientists/bell/Against_Measurement.pdf
Tsirelson http://www.tau.ac.il/~tsirel/download/nonaxio.html
Zurek states the measurement problem as a loose conglomerate of problems relative to both Copenhagen, as well as to trying to get an approach like unitary evolution without hidden variables to make sense: http://arxiv.org/abs/quant-ph/0306072
Leifer tries to state the measurement problem without reference to an interpretation, and his approach is strongly realist, and he indicates he believes both BM and MWI are coherent potential solutions: http://mattleifer.info/tag/decoherence/
Wallace also tries to state the measurement problem in an interpretation independent way: http://arxiv.org/abs/0712.0149
So does Schlosshauer. Of all the statements of the measurement problem, this is the only one I don't agree with: http://arxiv.org/abs/quant-ph/0312059
Well thanks for the links, some of which I have now skimmed, though it's taken me a good six hours so far! I liked Tsirelson's "Another, for whom we are not real" (LOL), which surely touches on MMWI and therefore excuses his polemical rhetoric. I also liked Bell's "I am not squeamish about delta functions", which one day I shall undoubtedly adopt as a sig unless another of his marvellous aphorisms surpasses it. But, as they say, what has this to do with the price of cheese? I do not wish to sound ungrateful, but I feel rather as if I have asked a question and been directed to a library: "The answer's in there!" What are you actually saying - is the outcome problem easily removed by saying "We don't know there is a definite outcome, we only know there is an appearance of definite outcomes, so measurement theory just needs to account for the appearance"? Thanks.
 
  • #39
Derek Potter said:
But, as they say, what has this to do with the price of cheese? I do not wish to sound ungrateful, but I feel rather as if I have asked a question and been directed to a library: "The answer's in there!"

Oh, it has nothing to do with the price of cheese. I thought you already got your answer in post #32, so I was just making tangential remarks.
 
  • #40
atyy said:
Oh, it has nothing to do with the price of cheese. I thought you already got your answer in post #32, so I was just making tangential remarks.
Heh-heh. Why am I not surprised? Good stuff.

I am so looking forward to meeting Another, for whom we are not real. Presumably he lives not in Nirvana itself but right next door! FAPP, that is.

Yes, I think I have the answer too. People are saying that decoherence fails to square the circle! Why they should actually want it to do something which is obviously impossible (and unnecessary) still eludes me.
 
  • #41
atyy said:
So does Schlosshauer. Of all the statements of the measurement problem, this is the only one I don't agree with: http://arxiv.org/abs/quant-ph/0312059
What is it you disagree with?
 
  • #42
Derek Potter said:
I should like to make it clear that it is not my position that decoherence does not resolve the problem; my question was, why do people say it doesn't?

Because decoherence requires an additional structure - a subdivision into system and environment, or system and observer, or the like. Only once you have this splitting into different subsystems can you start the mathematics of decoherence and obtain, say, a preferred basis in one of the factors. Such a splitting exists in a natural way in practical situations, where we have a clear division into the systems we want to study and their less interesting environment. But all this practical structure is nothing fundamental. There is no fundamental, natural splitting into what is an observer and what is environment. Thus, decoherence taken alone simply cannot solve any fundamental, conceptual problems. To work, it has to presuppose a background which essentially includes the whole classical part of Copenhagen.
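For concreteness, here is a minimal numerical sketch of this point (a toy model added purely for illustration; the function name and the two-level environment are my own assumptions, not from the thread): once a system/environment factorization is chosen, tracing out the environment gives the system's reduced density matrix, and the coherences survive only to the extent that the environment fails to distinguish the system states.

```python
import numpy as np

# Toy decoherence model: a system qubit in (|0> + |1>)/sqrt(2), entangled
# with a two-level environment.  The overlap <E0|E1> controls how much of
# the system's coherence survives the partial trace.
def reduced_density_matrix(overlap):
    E0 = np.array([1.0, 0.0])
    E1 = np.array([overlap, np.sqrt(1.0 - overlap**2)])  # <E0|E1> = overlap
    s0 = np.array([1.0, 0.0])   # system |0>
    s1 = np.array([0.0, 1.0])   # system |1>
    # Post-interaction entangled state (|0>|E0> + |1>|E1>)/sqrt(2)
    psi = (np.kron(s0, E0) + np.kron(s1, E1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    # Partial trace over the environment (the second tensor factor)
    return np.einsum('ikjk->ij', rho)

# Orthogonal environment states: the off-diagonal coherences vanish, leaving
# the improper mixture diag(1/2, 1/2) - classical-looking probabilities.
print(reduced_density_matrix(0.0))
# Identical environment states: no decoherence, full coherence survives.
print(reduced_density_matrix(1.0))
```

Note that the split into `np.kron` factors is put in by hand, which is exactly the point being made above: the calculation cannot start until a factorization is supplied from outside.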
 
  • #43
Ilja said:
Because decoherence requires an additional structure - a subdivision into system and environment, or system and observer, or the like. Only once you have this splitting into different subsystems can you start the mathematics of decoherence and obtain, say, a preferred basis in one of the factors. Such a splitting exists in a natural way in practical situations, where we have a clear division into the systems we want to study and their less interesting environment. But all this practical structure is nothing fundamental. There is no fundamental, natural splitting into what is an observer and what is environment. Thus, decoherence taken alone simply cannot solve any fundamental, conceptual problems. To work, it has to presuppose a background which essentially includes the whole classical part of Copenhagen.
You seem to be saying there is a preferred factorization of the space as well as a preferred basis in one of the subspaces. Is that right? There seems to me to be a major difference between these two preferences. The emergence of a preferred basis in a subspace involves a physical process: actual decoherence. The preferred basis is a physical phenomenon: pointers physically point, for example. I think we agree this happens.

The factorization of the state space, however, is arbitrary: some factorizations may be more useful than others. I don't understand why the theory needs to be able to identify the observer/environment factorization. (Schrödinger's cat may be a flawed scenario, but not because we can't tell from the picture what the cat's name is.) Surely the thing that matters is the fact that there is such a factorization, and that, whatever it may be, we can then create our interaction-entanglement-decoherence-improper-mixed-state sequence.

I can't seem to see why this means there is an outcome problem. If you stop at the improper mixed state, you've just shown that under certain factorizations there will be an emergent appearance of definite outcomes. Anything beyond that - to create a proper mixed state - is optional interpretation. Why is the appearance of outcomes not sufficient? Oh well, my maths sucks so I'm probably missing a massive point - or ten. :/
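As a toy illustration of that arbitrariness (my own sketch, not from the thread or its references): one and the same vector in a 4-dimensional Hilbert space can be maximally entangled with respect to one 2 x 2 factorization and a pure product state with respect to another, so the mixedness that tracing out "the environment" produces is relative to the chosen split.

```python
import numpy as np

# A Bell state, maximally entangled in the standard qubit x qubit split.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

def purity_of_first_factor(psi):
    """Purity Tr(rho^2) of the first factor's reduced state in a 2x2 split."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    red = np.einsum('ikjk->ij', rho)      # trace out the second factor
    return np.trace(red @ red).real

# Standard split: the reduced state is maximally mixed (purity ~0.5).
print(purity_of_first_factor(bell))

# Relabel the basis with a unitary U that sends the Bell state to |00>.
# With respect to the new "subsystems" the very same vector is a product
# state, and the reduced state is pure (purity ~1.0).
U = np.zeros((4, 4))
U[0] = bell
U[1] = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
U[2] = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
U[3] = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(purity_of_first_factor(U @ bell))
```

Nothing physical changes between the two print statements; only the bookkeeping of which degrees of freedom count as "the system" does.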
 
  • #44
Derek Potter said:
You seem to be saying there is a preferred factorization of the space as well as a preferred basis in one of the subspaces. Is that right?
No, I do not say there is such an animal as a preferred factorization. I say there would have to be such an animal if decoherence were to solve any foundational problem.

Derek Potter said:
There seems to me to be a major difference between these two preferences. The emergence of a preferred basis in a subspace involves a physical process: actual decoherence. The preferred basis is a physical phenomenon: pointers physically point for example. I think we agree this happens.
No. I agree with calling this "physical" only if a physical subdivision into a particular system and its environment is already given as background.

If no such physical subdivision is given, and one simply splits the Hilbert space artificially into two parts, decoherence may formally happen, but it will not correspond to anything physically meaningful. In particular, it will not lead to anything worth calling an "appearance".

Derek Potter said:
The factorization of the state space, however, is arbitrary: some factorizations may be more useful than others.
Yes, and this is the problem. Decoherence does not explain which factorizations are physically useful and which are physically nonsensical. The factorization is assumed as given.

So, the possible choices are these. Either you have to assume that there is some really, fundamentally preferred factorization, and then we have the fundamental problem of which one. That requires some additional structure, with physical importance.

Or one has to do something else, something more complex, which defines, out of what is given (the Hamiltonian?), not only a preferred basis in one part but also the factorization itself. In this case, decoherence solves only one part of the problem it is claimed to solve, and the easier part at that.

Moreover, the counterexample of http://arxiv.org/abs/0901.3262 suggests that there is no chance for this.

Derek Potter said:
I don't understand why the theory needs to be able to identify the observer/environment factorization.
Because the only alternative is to postulate them as given, defined by something else - say, the classical part of the Copenhagen interpretation, which tells me that p measures "momentum" and q "position", with the meaning of "momentum" and "position" defined by classical physics.

But if we simply accept Copenhagen, then decoherence is a quite irrelevant triviality of no fundamental relevance at all - it is simply a computation of the probability of whether we will see some quantum interference effects or not. It does not change the measurement problem one bit.
 
  • #45
OK, thanks Ilja. I'm going to have to chew on that, because I think we differ somewhat on the logic rather than the physics - whether there is a problem or not and why anyone would make it the responsibility of decoherence to solve it - but it will take a lot of unravelling. Your explanation is nice and clear to read, so thanks a lot for that, and I'll get back to you some time when I've thought about it.
 
  • #46
StevieTNZ said:
From the phrases I was hoping you would take away the following points:
  1. Decoherence causes interference to be suppressed, resulting in what look like classical probabilities about something that exists, e.g. a 50% chance of getting tails or heads when flipping a coin, with tails and heads actually existing on the coin (whether this is called a proper or improper mixture, I await Bernard's email).
I am in tears.

Sadly I will never hear from Bernard d'Espagnat regarding proper and improper mixtures. He passed away on 1st August.
 