Ballentine: Decoherence doesn't resolve the measurement problem

In summary: Decoherence theory is a pragmatic approach based on the density matrix. It helps to clarify how classical behaviour emerges from quantum mechanics, but, as the discussion below argues, it does not by itself resolve the measurement problem, since the ontology of the density matrix is never made clear.
  • #1
Auto-Didact
This thread is a direct spin-off from this post in the Insights thread Against "interpretation" - Comments.

I am usually not a big fan of Ballentine, but I tend to fully agree with him on the following issue (taken from this paper, credits to @bhobba)
Ballentine said:
Decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement. Its role in establishing the classicality of macroscopic systems is much more limited than is often claimed.
Decoherence theory is a pragmatic approach based on the density matrix, which in the words of John Bell merely works 'for all practical purposes' (FAPP). The problem with the density matrix approach to the measurement problem is precisely that the ontology of the density matrix is never made clear.

Multiple theoreticians, mathematicians and workers in the foundations of QM have made the point that the density matrix approach doesn't resolve the measurement problem, but merely shifts the burden of ontology from the wavefunction onto the density matrix, and so brings us no further with respect to QM's most important foundational issue.

Indeed, the degree of ontology of this matrix is in a sense contingent upon our degree of technological prowess in experimental QM. A somewhat simplified way to describe this issue is to say that a solution which merely works FAPP (e.g. by the standards of applied or experimental physics) is precisely one that need not, and generally does not, work in principle (e.g. by the standards of foundational and theoretical physics).
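To make the FAPP point concrete, here is a minimal numerical sketch (a toy model of my own, not taken from Ballentine's paper; the coupling, the angle theta and the environment size N are arbitrary illustrative choices). It shows the reduced density matrix of a qubit becoming diagonal for all practical purposes after it entangles with an environment, while the off-diagonal terms never become exactly zero:

```python
import numpy as np

# Toy decoherence model: a system qubit in (|0> + |1>)/sqrt(2) entangles with N
# environment qubits. Each environment qubit stays in |0> if the system is |0>,
# and is rotated by an angle theta if the system is |1>. The system's reduced
# density matrix then has off-diagonal terms of size (cos theta)^N / 2 --
# negligible FAPP, but never exactly zero.

N, theta = 10, 1.2                               # illustrative choices

e0 = np.array([1.0, 0.0])                        # env state correlated with system |0>
e1 = np.array([np.cos(theta), np.sin(theta)])    # env state correlated with system |1>

def tensor_power(v, n):
    out = np.array([1.0])
    for _ in range(n):
        out = np.kron(out, v)
    return out

E0, E1 = tensor_power(e0, N), tensor_power(e1, N)            # product environment states
psi = (np.kron([1.0, 0.0], E0) + np.kron([0.0, 1.0], E1)) / np.sqrt(2)

# Reduced density matrix of the system: rho[s, s'] = sum_e psi[s, e] * conj(psi[s', e])
psi_matrix = psi.reshape(2, 2 ** N)
rho_system = psi_matrix @ psi_matrix.conj().T

print(rho_system)                                            # ~ diag(0.5, 0.5) plus tiny off-diagonals
print("off-diagonal:", rho_system[0, 1], " exact value:", 0.5 * np.cos(theta) ** N)
```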
 
  • Like
Likes DrChinese
  • #2
I definitely agree that decoherence by itself does not solve the measurement problem. But I do not agree that "it is of no help at all" to resolve it. It helps, but it is just not enough.

Concerning the fact that decoherence is a FAPP concept, I would only add that the concept of measurement is also a FAPP concept. Unless you are an extreme operationalist, there is nothing fundamental about measurements. https://m.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf So to explain one FAPP phenomenon (the measurement) it seems natural to use another FAPP phenomenon (decoherence) as a part of the explanation.
 
  • Like
Likes maline, bhobba, thephystudent and 1 other person
  • #3
Demystifier said:
Concerning the fact that decoherence is a FAPP concept, I would only add that the concept of measurement is also a FAPP concept. Unless you are an extreme operationalist, there is nothing fundamental about measurements. https://m.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf
You are absolutely correct in this matter. Extreme operationalism is a naive stance; in most academic circles it goes under the moniker of 'logical positivism' (I flirted with this viewpoint for years until I came to realize it was severely mistaken). Incidentally, Richard Feynman (in 'The Character of Physical Law') spoke on this very topic of extreme operationalism saying the following:
Feynman said:
I'd like to talk a little bit about this Heisenberg's idea, that you shouldn't talk about what you can't measure, because a lot of people talk about that without understanding it very well. They say in physics you shouldn't talk about what you can't measure.

If what you mean by this, if you interpret this in this sense, that the constructs are inventions that you make that you talk about, it must be such a kind that the consequences that you compute must be comparable to experiment. That is, that you don't compute a consequence like a moo must be three goos. When nobody knows what a moo and a goo is, that's no good.

If the consequences can be compared to experiment, then that's all that's necessary. It is not necessary that moos and goos can't appear in the guess. That's perfectly all right. You can have as much junk in the guess as you want, provided that you can compare it to experiment.

That's not fully appreciated, because it's usually said, for example, people usually complain of the unwarranted extension of the ideas of particles and paths and so forth, into the atomic realm. Not so at all. There's nothing unwarranted about the extension.

We must, and we should, and we always do extend as far as we can beyond what we already know, those things, those ideas that we've already obtained. We extend the ideas beyond their range. Dangerous, yes, uncertain, yes. But the only way to make progress.

It's necessary to make science useful, although it's uncertain. It's only useful if it makes predictions. It's only useful if it tells you about some experiment that hasn't been done. It's no good if it just tells you what just went on. So it's necessary to extend the ideas beyond where they've been tested.

For example, take the law of gravitation, which was developed to understand the motion of the planets. If Newton had simply said, 'I now understand the planets,' and hadn't tried to compare it to the Earth's pull, we couldn't say, 'maybe what holds the galaxies together is gravitation.' We must try that. It's no good to say, well, when you get to the size of galaxies, since you don't know anything about anything, it could happen.

Yes, I know. But there's no science here, there's no understanding, ultimately, of the galaxies. If on the other hand you assume that the entire behavior is due to only known laws, this assumption is very limited and very definite and easily broken by experiment. All we're looking for is just such hypotheses. Very definite, easy to compare to experiment.

And the fact is that the way the galaxies behaved so far doesn't seem to be against the proposition. It would be easily disproved, if it were false. But it's very useful to make hypotheses.
Carrying on.
Demystifier said:
So to explain one FAPP phenomenon (the measurement) it seems natural to use another FAPP phenomenon (decoherence) as a part of the explanation.
I agree, it does seem natural to do so. The problem is that, unlike in other similar cases, in this particular case the explanation is not generalisable to the complete problem; to speak semi-metaphorically, the proposed solution diverges once higher-order corrections are added.

To illustrate that an inherently empirical, experiment-based concept can apply in full mathematical generality, i.e. in principle, we need not look far for examples, for they are in abundance: the main theorems and results of probability theory, statistics, error theory, information theory, computational complexity theory, computability theory and so on.

For illustrative purposes, let us take, as a comparison to the QM measurement problem, the problem of practical unpredictability in completely deterministic circumstances due to sensitive dependence on initial conditions, or chaos, as it is colloquially called. Unpredictability due to chaos can be seen as an empirical side-effect of being in practice unable to measure anything to arbitrary precision; however, the underlying mathematical explanation, necessitating an advanced and abstract mathematical theory, is capable of giving a complete answer to the problem in principle.
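As a purely illustrative sketch of that comparison (my own toy example, not essential to the argument), the following few lines iterate the logistic map for two trajectories whose initial conditions differ by 10^-12; the difference is negligible in practice at first, yet grows to order one within a few dozen steps, while the underlying dynamics remains exactly deterministic and mathematically well understood:

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x)
# at r = 4: two deterministic trajectories starting 1e-12 apart are
# indistinguishable "for all practical purposes" at first, yet diverge
# completely after a few dozen iterations.

r = 4.0
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}:  x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
```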
 
  • Like
Likes dextercioby
  • #4
Demystifier said:
I definitely agree that decoherence by itself does not solve the measurement problem. But I do not agree that "it is of no help at all" to resolve it. It helps, but it is just not enough.

I could not have said it better :smile::smile::smile::smile::smile::smile:

Demystifier said:
Concerning the fact that decoherence is a FAPP concept, I would only add that the concept of measurement is also a FAPP concept. Unless you are an extreme operationalist, there is nothing fundamental about measurements.

And again. That is the issue Gell-Mann and Hartle have run up against. Who knows - they may even resolve it - time will tell.

Thanks
Bill
 
  • Like
Likes Demystifier
  • #5
Demystifier said:
I definitely agree that decoherence by itself does not solve the measurement problem. But I do not agree that "it is of no help at all" to resolve it. It helps, but it is just not enough.
Have you tried to analyze experiments with decoherence in mind? Say, Wheeler's delayed-choice experiment?
To me it seems that decoherence is just a modern version of wave-particle duality, at least as far as measurement is concerned. Wheeler's delayed-choice experiment shows the inadequacy of the wave-particle duality idea, and it does the same for the idea that environment-induced decoherence has something to do with measurement.
So I completely agree with Ballentine that "decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement".
 
  • #6
zonde said:
Have you tried to analyze experiments with decoherence in mind? Say, Wheeler's delayed-choice experiment?
I have tried (successfully) to analyze general experiments with decoherence. The delayed choice experiments turned out to be just a special and not particularly interesting case.
 
  • Like
Likes maline
  • #7
Demystifier said:
I have tried (successfully) to analyze general experiments with decoherence. The delayed choice experiments turned out to be just a special and not particularly interesting case.
Where does decoherence take place in Wheeler's delayed-choice experiment? It can't happen at the first beamsplitter, because in the closed setup interference is observable at (after) the second beamsplitter. But in the open setup nothing happens after the first beamsplitter until the photon is detected in one or the other detector. So it would seem that decoherence has to be a non-local process (similar to non-local collapse), because if the photon is detected in one detector it does not appear in the other detector. Right?
 
  • Like
Likes kurt101
  • #8
Demystifier said:
I definitely agree that decoherence by itself does not solve the measurement problem. But I do not agree that "it is of no help at all" to resolve it. It helps, but it is just not enough.
Just to check my understanding: decoherence explains why we don't encounter mixed states in "real (macroscopic) life", but it doesn't explain why we don't encounter superpositions in "real (macroscopic) life", right?

Is that what you mean by "it's just not enough"?
 
  • #9
haushofer said:
Just to check my understanding: decoherence explains why we don't encounter mixed states in "real (macroscopic) life", but it doesn't explain why we don't encounter superpositions in "real (macroscopic) life", right?

Is that what you mean by "it's just not enough"?
It can explain why we never see superpositions of macroscopic observables, because alternate outcomes for macroscopic observables are dynamically driven toward being mixed rather than superposed. So the macroscopic observables are for all practical purposes (FAPP) mixed, which means that their statistics are experimentally indistinguishable from the case of the measuring device possessing a definite state (e.g. "I've measured spin up") that you are ignorant about.

Hence it explains why we are justified in treating the measuring device, or anything else that interacts with a microscopic system, as being classical, just with values you don't know until you look at them. The non-classical statistics are diluted into the environment, which can be either an actual external environment (passing photons, air, etc.) or the object's own internal degrees of freedom.

The mixture here is improper: there is still superposition, but we can't notice it due to the type of observables we are looking at, i.e. the classical statistics are the result of ignorance about the global state of "Particle + Device + Environment". This is distinguished from a proper mixture, where the classical statistics are just due to the state being drawn from an ensemble. A simple example is two entangled silver atoms. If you look at the z-axis spin, let's say, of one of them, the statistics are classical, a result of not having access to both atoms. This is an improper mixture. However, if somebody had an oven that created silver atoms that were either spin-z up or down and then fired them at you, the spin would have classical statistics due to your ignorance about each atom's preparation. This is a proper mixture.
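A minimal numerical sketch of that last distinction (my own example, assuming a singlet pair for the entangled atoms): locally, the improper mixture obtained by tracing out the partner spin is represented by exactly the same density matrix as the proper 50/50 mixture prepared by the hypothetical oven, which is why their statistics are indistinguishable:

```python
import numpy as np

# Improper vs proper mixture for a single spin-1/2.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Improper: trace one spin out of the singlet state (|up,down> - |down,up>)/sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
psi = singlet.reshape(2, 2)                 # rows: spin 1, columns: spin 2
rho_improper = psi @ psi.conj().T           # partial trace over spin 2

# Proper: ignorance about which of two pure states the oven actually prepared
rho_proper = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

print(rho_improper)   # [[0.5, 0], [0, 0.5]]
print(rho_proper)     # identical -- no local measurement can tell them apart
```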

People tend to ask four questions at this point:
  1. Why does it possess one outcome in particular? The mixture only gives you probabilities of various outcomes.
  2. Can this mixture truly be interpreted as ignorance about the measuring device's objective state? More technically, is there actually any difference between the improper mixture we get for classical observables via decoherence and a proper mixture of them?
  3. Is FAPP classical the same as actually classical? There are still small error terms giving the device slightly quantum statistics.
  4. How are we to view the remaining coherence in the environment? This comes up in cases like Wigner's friend.
An example of a type of answer to these that you would see from the Decoherent Histories view (and others like Richard Healey) would be:
  1. Quantum Mechanics can't tell you that; it's possible that science can't tell you that, as it seems our most fundamental theory gives only statistical predictions.
  2. Yes. Quantum Mechanics is simply a probability calculus. Since quantum mechanics isn't representational (i.e. doesn't say what the microscopic world is actually like) all it gives is statistics for observables. There's no difference between a proper and improper mixture since they have the same statistics, which is all QM talks about.
  3. Yes. The error terms are so small that no physically realizable device could resolve them. Omnès [1], Chapter 7, Section 8, shows that for real measuring devices one would need a second measuring device much greater in mass than the entire observable universe to resolve the error terms. They therefore have no scientific meaning.
  4. Again, not really practically resolvable, due to the size of the devices required. Such devices are incompatible with General Relativity (see Omnès [1], Chapter 8), so situations like Wigner's friend result from an unrealistic idealization.
I'm not presenting this as the answer, just an example.

[1] Omnès, R. (1994). The Interpretation of Quantum Mechanics (Princeton University Press, Princeton).
 
  • Like
Likes bhobba
  • #10
haushofer said:
decoherence explains why we don't encounter mixed states in "real (macroscopic) life",
In the modeling and experimental interpretation of macroscopic systems we encounter only mixed states. What needs an explanation is why, in the microscopic case, we encounter in scattering pure momentum states rather than their superpositions.
 
  • #11
haushofer said:
Just to check my understanding: decoherence explains why we don't encounter mixed states in "real (macroscopic) life", but it doesn't explain why we don't encounter superpositions in "real (macroscopic) life", right?

Is that what you mean by "it's just not enough"?
I would put it differently. Decoherence defines the set of possible measurement outcomes in a given measurement procedure, but it doesn't explain why only one (rather than all) of those outcomes is realized.
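To put that schematically (standard textbook notation, not a formula from this thread): an ideal measurement interaction followed by decoherence takes
$$\Big(\sum_i c_i\,|s_i\rangle\Big)|A_0\rangle|E_0\rangle \;\to\; \sum_i c_i\,|s_i\rangle|A_i\rangle|E_i\rangle, \qquad \rho_{S+A}\approx\sum_i |c_i|^2\,|s_i\rangle|A_i\rangle\langle A_i|\langle s_i|,$$
so the (approximately) stable pointer states ##|A_i\rangle## define the set of possible outcomes, but every term is still present in the mixture; nothing in the unitary dynamics selects one particular ##i##.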
 
  • Like
Likes eloheim and bhobba
  • #12
zonde said:
So it would seem that decoherence has to be a non-local process (similar to non-local collapse), because if the photon is detected in one detector it does not appear in the other detector. Right?
Wrong. The detection happens in one detector as you said, but decoherence happens in both detectors.
 
  • #13
I'm not an expert at this, but I find it an interesting issue nevertheless.
Demystifier said:
I would put it differently. Decoherence defines the set of possible measurement outcomes in a given measurement procedure, but it doesn't explain why only one (rather than all) of those outcomes is realized.

So you agree that decoherence is equivalent to a non-selective measurement, and the trouble is in reading out the measurement record? Isn't the latter an issue of statistical mechanics rather than of quantum mechanics? I remember reading Haroche's Exploring the Quantum: Atoms, Cavities and Photons and finding it quite clear in relating decoherence to measurement.
 
  • #14
thephystudent said:
So you agree that decoherence is equivalent to a non-selective measurement, and the trouble is in reading out the measurement record?
I'm not sure I know what a non-selective measurement is. Definition? Reference?
 
  • #15
Demystifier said:
I'm not sure I know what a non-selective measurement is. Definition? Reference?

A measurement which is not recorded, so that classical uncertainty remains. See for example Eq. (4.29) in the reference I mentioned, p. 84 in Breuer-Petruccione (The Theory of Open Quantum Systems):

"The measurement of an orthogonal decomposition of unity thus leads
to a decomposition of the original ensemble into the various sub-ensembles
labelled by the index a. Such a splitting of the original ensemble into various sub-
ensembles, each of which being conditioned on a specific measurement outcome,
is called a selective measurement.
One could also imagine an experimental situation in which the various sub-
ensembles are again mixed with the probabilities of their occurrence.
The resulting ensemble is then described by the density matrix
...
This remixing of the sub-ensembles after the measurement is referred to as non-
selective measurement
. "

Or, in Wiseman and Milburn (Quantum Measurement and Control), Sec. 1.2.6, the non-selective evolution is described as the combined evolution of system + measurement apparatus, with the measurement apparatus traced out.
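For reference, the two maps described in that quote can be written compactly as (standard textbook form, with projectors ##P_a## forming an orthogonal decomposition of unity and ##p_a={\rm Tr}(P_a\rho)##):
$$\rho \;\to\; \rho_a=\frac{P_a\,\rho\,P_a}{p_a} \qquad \text{(selective: outcome } a \text{ is recorded)},$$
$$\rho \;\to\; \rho'=\sum_a P_a\,\rho\,P_a \qquad \text{(non-selective: the sub-ensembles are remixed with weights } p_a\text{)}.$$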
 
  • Like
Likes Demystifier
  • #16
thephystudent said:
A measurement which is not recorded, so that classical uncertainty remains. See for example Eq. (4.29) in the reference I mentioned, p. 84 in Breuer-Petruccione (The Theory of Open Quantum Systems):

"The measurement of an orthogonal decomposition of unity thus leads
to a decomposition of the original ensemble into the various sub-ensembles
labelled by the index a. Such a splitting of the original ensemble into various sub-
ensembles, each of which being conditioned on a specific measurement outcome,
is called a selective measurement.
One could also imagine an experimental situation in which the various sub-
ensembles are again mixed with the probabilities of their occurrence.
The resulting ensemble is then described by the density matrix
...
This remixing of the sub-ensembles after the measurement is referred to as non-
selective measurement
. "

Or, in Wiseman and Milburn (Quantum Measurement and Control), Sec. 1.2.6, the non-selective evolution is described as the combined evolution of system + measurement apparatus, with the measurement apparatus traced out.

So is a non-selective measurement equivalent to making a measurement, and then "forgetting" what the result was?
 
  • Like
Likes thephystudent and Demystifier
  • #17
thephystudent said:
A measurement which is not recorded, so that classical uncertainty remains.
Decoherence is not equivalent to unrecorded measurement. That's related to the fact that a given mixed state can describe two physically different situations. One corresponds to tracing over environment (decoherence) and another corresponds to a lack of knowledge of the actual pure state (unrecorded measurement).
 
  • Like
Likes bhobba and thephystudent
  • #18
Demystifier said:
Decoherence is not equivalent to unrecorded measurement. That's related to the fact that a given mixed state can describe two physically different situations. One corresponds to tracing over environment (decoherence) and another corresponds to a lack of knowledge of the actual pure state (unrecorded measurement).

Would it be true in BM?
 
  • Like
Likes Auto-Didact
  • #19
atyy said:
Would it be true in BM?

That gets me wondering about the equivalence between BM and other interpretations.

If you let a system + environment evolve under unitary evolution, then you end up with entanglement between the system and the environment. Then if you trace out the environmental degrees of freedom to get a reduced density matrix, you have an improper mixed state for the system. If you (mis)interpret the improper mixed state as a proper mixed state (where the mixture is due to ignorance), then you can interpret the reduced density matrix as describing the situation in which the system is actually in one pure state or another, but you don't know which. That's the same density matrix as if you had first measured some observable and then forgot (or never checked) what the result was.

Now, BM always (whether there has been a measurement or not, and whether there has been decoherence or not) interprets the probabilities of QM as being about uncertainty as to the true state. So that would seem in keeping with the mixed state described above.

However, there is a difference, in that BM always considers the mixture to be due to ignorance about the system's location in configuration space. Forming a mixed state density matrix by tracing environmental degrees of freedom, however, can result in a density matrix in which the possible states are eigenstates of something other than location in configuration space. So they don't seem to be exactly the same.
 
  • #20
Demystifier said:
Decoherence is not equivalent to unrecorded measurement. That's related to the fact that a given mixed state can describe two physically different situations. One corresponds to tracing over environment (decoherence) and another corresponds to a lack of knowledge of the actual pure state (unrecorded measurement).

Tracing out the environment is also a form of throwing away information, which explains how you get a mixed state.

Also, how do you interpret wave-function Monte Carlo and the concept of unraveling https://arxiv.org/pdf/quant-ph/0108132.pdf? Is this a case of 'abusing' the mixed state?
 
  • #21
stevendaryl said:
If you let a system + environment evolve under unitary evolution, then you end up with entanglement between the system and the environment.
As well as an observable of that system2 [named by me, to distinguish it from the system] that can be measured and that tells whether system2 (system + environment) is in a superposition or not.
 
  • #22
atyy said:
Would it be true in BM?
Why do you think it might not?
 
  • #23
Demystifier said:
Wrong. The detection happens in one detector as you said, but decoherence happens in both detectors.
You are looking at decoherence from the perspective of Bohmian mechanics, right? But then decoherence is still of no use.
In the open setup of Wheeler's delayed-choice experiment the measurement outcome is determined at the first beamsplitter, but interference is still observable when we modify the setup into the closed configuration. So whatever happens at the detector is irrelevant, as the measurement outcome is determined before the photon reaches the detector.
 
  • #24
zonde said:
So whatever happens at the detector is irrelevant, as the measurement outcome is determined before the photon reaches the detector.
There is no measurement outcome without a detector, so what happens with the detector is important.

zonde said:
You are looking at decoherence from the perspective of Bohmian mechanics, right?
Right. But in my view, Bohmian mechanics is not much more than a theory of detectors. See the paper linked in my signature.
 
  • Like
Likes bhobba
  • #25
Reasoning about environment-induced decoherence relies on event-based reasoning when it is argued that the quantum system interacts with the environment in the process of measurement. This is sort of obvious. But then we switch to the statistical reasoning of the quantum-mechanical approach, so we go from the individual case to a statistical ensemble. Going from a single environment to an ensemble of environments is not reasonable at all, however: there is a single environment for the whole ensemble of quantum systems. So this leap from event-based reasoning to statistical reasoning is simply wrong.
 
  • #26
Demystifier said:
There is no measurement outcome without a detector, so what happens with the detector is important.
Yes.
How is this relevant to the argument I was making?
 
  • #27
stevendaryl said:
Forming a mixed state density matrix by tracing environmental degrees of freedom, however, can result in a density matrix in which the possible states are eigenstates of something other than location in configuration space.
If you are talking about eigenstates of the measured microscopic system, then you are right. But if you are talking about states of the macroscopic pointer (measuring apparatus), they are always well localized in configuration space. In my view, BM is not so much about positions of electrons and photons, as it is about positions of macroscopic pointers (see the paper linked in my signature).
 
  • #28
zonde said:
Yes.
How is this relevant to the argument I was making?
You said that decoherence is of no use. I say it is of use because it helps to explain why the detector has an outcome at all.
 
  • #29
thephystudent said:
Also, how do you interpret wave-function Monte Carlo and the concept of unraveling https://arxiv.org/pdf/quant-ph/0108132.pdf? Is this a case of 'abusing' the mixed state?
Sorry for not having time to study this long paper in detail. Could you perhaps give a summary of its essential ideas to put your question into a context?
 
  • #30
Demystifier said:
Why do you think it might not?

Because in BM there is always the uncollapsed wave function, which only undergoes decoherence. So if we trace over that, we will get the same reduced density matrix as collapsing then forgetting.
 
  • Like
Likes bhobba
  • #31
Demystifier said:
Sorry for not having time to study this long paper in detail. Could you perhaps give a summary of its essential ideas to put your question into a context?

See Wikipedia.

This is basically the idea that you reproduce the effect of decoherence (the Lindblad equation, though extensions beyond the Markov case also exist) by averaging over all possible measurement records.
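For what it's worth, here is a minimal trajectory sketch of that idea (my own toy example, not taken from the linked paper): pure dephasing of a qubit with jump operator ##L=\sqrt{\gamma}\,\sigma_z##. Each trajectory is a record of randomly timed sign flips of the coherence, and the trajectory average reproduces the Lindblad decay ##e^{-2\gamma t}## of the off-diagonal element:

```python
import numpy as np

# Toy unraveling of a pure-dephasing Lindblad equation (illustrative only).
# Jump operator L = sqrt(gamma)*sigma_z: each "measurement record" (trajectory)
# is a sequence of randomly timed sign flips of the qubit coherence <0|rho|1>.
# Averaging many trajectories reproduces the Lindblad prediction
# exp(-2*gamma*t)/2 for the initial state (|0> + |1>)/sqrt(2).

rng = np.random.default_rng(0)
gamma, dt, steps, n_traj = 1.0, 0.01, 100, 20000

coh = np.full(n_traj, 0.5)                        # coherence of each trajectory
for step in range(1, steps + 1):
    jumps = rng.random(n_traj) < gamma * dt       # a sigma_z jump with probability gamma*dt
    coh = np.where(jumps, -coh, coh)              # the jump flips the sign of the coherence
    if step in (25, 50, 100):
        t = step * dt
        print(f"t = {t:.2f}:  trajectory average = {coh.mean():.4f},  "
              f"Lindblad prediction = {0.5 * np.exp(-2 * gamma * t):.4f}")
```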
 
  • Like
Likes atyy and Demystifier
  • #32
atyy said:
Because in BM there is always the uncollapsed wave function, which only undergoes decoherence. So if we trace over that, we will get the same reduced density matrix as collapsing then forgetting.
In BM, there is a concept of conditional wave function. For instance, for two degrees of freedom ##x_1## and ##x_2## described by the full wave function ##\Psi(x_1,x_2,t)##, the conditional wave function of the first degree of freedom is
$$\psi_1(x_1,t)=\Psi(x_1,X_2(t),t)$$
where ##X_2(t)## is the Bohmian trajectory. According to BM, ##\Psi## never collapses. What collapses is ##\psi_1##. Decoherence, on the other hand, is something that happens with ##\Psi##. Actual outcomes, or lack of knowledge of the actual outcomes, is something related to ##\psi_1##.

How is it related to density matrices? The reduced density matrix is obtained from ##\Psi## as
$$\rho^{\rm reduced}_1={\rm Tr}_2|\Psi\rangle \langle\Psi|$$
which does not refer to ##\psi_1## at all. The lack-of-knowledge-about-the-outcome density matrix ##\rho^{\rm knowledge}_1##, on the other hand, is related to the lack of knowledge about ##\psi_1##. So in BM, ##\rho^{\rm reduced}_1## and ##\rho^{\rm knowledge}_1## are conceptually different.
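To spell out the contrast in the same notation (my paraphrase, not a quote): if the measurement could have left the first degree of freedom in one of the conditional wave functions ##\psi_1^{(a)}## with probabilities ##p_a##, then
$$\rho^{\rm knowledge}_1=\sum_a p_a\,|\psi_1^{(a)}\rangle\langle\psi_1^{(a)}|,$$
which is built from the conditional wave functions plus the observer's ignorance, whereas ##\rho^{\rm reduced}_1## is built from ##\Psi## alone. Once decoherence has made the branches effectively non-overlapping, the two matrices can coincide numerically, yet in BM they retain different conceptual status.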
 
  • Like
Likes atyy
  • #33
thephystudent said:
See Wikipedia.

This is basically the idea that you reproduce the effect of decoherence (the Lindblad equation, though extensions beyond the Markov case also exist) by averaging over all possible measurement records.
I think it's OK as a practical method, but it does not help to solve the measurement problem. I'm not sure if that answers your question.
 
  • Like
Likes bhobba
  • #34
Demystifier said:
You said that decoherence is of no use. I say it is of use because it helps to explain why the detector has an outcome at all.
In the paper linked in your signature you postulate an axiom "All perceptibles are beables." So without beables there are no perceptibles.
So which part would you say decoherence helps to explain? Does it help to explain beables or does it help to explain perceptibles (without explaining anything about beables)?
 
  • Like
Likes Demystifier
  • #35
zonde said:
In the paper linked in your signature you postulate an axiom "All perceptibles are beables." So without beables there are no perceptibles.
Good summary of my work, thanks! :approve:

zonde said:
So which part would you say decoherence helps to explain? Does it help to explain beables or does it help to explain perceptibles (without explaining anything about beables)?
That's a good question! It helps to explain the perceptibles. Decoherence is relevant to Sec. 3, where beables are not yet considered. In Sec. 3 I do not talk about decoherence explicitly, but it is implicit in Eqs. (4) and (10).
 
