
A Ballentine: Decoherence doesn't resolve the measurement problem

  1. Feb 12, 2019 at 9:16 AM #1
This thread is a direct offshoot of this post in the Insights thread Against "interpretation" - Comments.

    I am usually not a big fan of Ballentine, but I tend to fully agree with him on the following issue (taken from this paper, credits to @bhobba)
Decoherence theory is a pragmatic approach based on the density matrix, which in the words of John Bell merely works 'for all practical purposes' (FAPP). The problem with the density matrix approach to the measurement problem is precisely that the ontology of the density matrix is never made clear.

Multiple theoreticians, mathematicians and workers in the foundations of QM have made the point that the density matrix approach doesn't resolve the measurement problem but merely shifts the burden of ontology from the wavefunction onto the density matrix, and so brings us no further with respect to QM's most important foundational issue, namely the measurement problem.

Indeed, the degree of ontology granted to this matrix is in a sense contingent on our degree of technological prowess in experimental QM. A somewhat simplified way to put the issue is that a solution which merely works FAPP (i.e. by the standards of applied or experimental physics) is precisely one that need not, and generally does not, work in principle (i.e. by the standards of foundational and theoretical physics).
     
  3. Feb 12, 2019 at 9:52 AM #2

    Demystifier

    Science Advisor
    2018 Award

    I definitely agree that decoherence by itself does not solve the measurement problem. But I do not agree that "it is of no help at all" to resolve it. It helps, but it is just not enough.

Concerning the fact that decoherence is a FAPP concept, I would only add that the concept of measurement is also a FAPP concept. Unless you are an extreme operationalist, there is nothing fundamental about measurements (see Bell's "Against 'measurement'": https://m.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf). So to explain one FAPP phenomenon (measurement), it seems natural to use another FAPP phenomenon (decoherence) as part of the explanation.
     
  4. Feb 12, 2019 at 12:48 PM #3
You are absolutely correct in this matter. Extreme operationalism is a naive stance; in most academic circles it goes under the moniker of 'logical positivism' (I flirted with this viewpoint for years until I came to realize it was severely mistaken). Incidentally, Richard Feynman (in 'The Character of Physical Law') spoke critically on this very topic of extreme operationalism.
Carrying on.
I agree, it does seem natural to do so. The problem is that, unlike in other similar cases, in this particular case the explanation is not generalisable with respect to the complete problem; to speak semi-metaphorically, the proposed solution diverges when higher-order corrections are added.

To see that an inherently empirical, experiment-based concept can nevertheless apply in full mathematical generality, i.e. in principle, we need not look far for examples, for they are in abundance: the main theorems and results of probability theory, statistics, error theory, information theory, computational complexity theory, computability theory, and so on.

For illustrative purposes, let us take, as a comparison to the QM measurement problem, the problem of practical unpredictability in completely deterministic circumstances due to sensitive dependence on initial conditions, colloquially called chaos. Unpredictability due to chaos can be seen as an empirical side effect of being unable, in practice, to measure anything to arbitrary precision; the underlying mathematical explanation, however, which requires an advanced and abstract mathematical theory, is capable of giving a complete answer to the problem in principle.
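    As a toy illustration of that last point (a minimal sketch of my own, not taken from any post here): two logistic-map trajectories whose initial conditions differ by far less than any realistic measurement precision separate at an exponential rate, so prediction fails FAPP even though the dynamics is fully deterministic and completely understood in principle.

```python
# Logistic map x_{n+1} = r*x*(1 - x) in the fully chaotic regime r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-12   # two initial conditions closer than any realistic measurement could resolve

for n in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")

# The separation grows roughly like exp(n * ln 2) until it saturates at order one,
# so after a few dozen steps the two 'experimentally identical' preparations give
# completely different outcomes, even though the law of motion is exactly known.
```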
     
  5. Feb 12, 2019 at 3:59 PM #4

    bhobba

    Staff: Mentor

I could not have said it better :smile:

And again, that is the issue Gell-Mann and Hartle have run up against. Who knows, they may even resolve it; time will tell.

    Thanks
    Bill
     
  6. Feb 13, 2019 at 1:33 AM #5

    zonde

    Gold Member

Have you tried to analyze experiments with decoherence in mind? Say, Wheeler's delayed-choice experiment?
To me it seems that decoherence is just a modern version of wave-particle duality, at least as far as measurement is concerned. Wheeler's delayed-choice experiment shows the inadequacy of the wave-particle duality idea, and it does the same for the idea that environment-induced decoherence has something to do with measurement.
    So I completely agree with Ballentine that "decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement".
     
  7. Feb 13, 2019 at 2:01 AM #6

    Demystifier

    Science Advisor
    2018 Award

    I have tried (successfully) to analyze general experiments with decoherence. The delayed choice experiments turned out to be just a special and not particularly interesting case.
     
  8. Feb 13, 2019 at 2:33 AM #7

    zonde

    Gold Member

Where does decoherence take place in Wheeler's delayed-choice experiment? It can't happen at the first beam splitter, because in the closed setup interference is observable at (after) the second beam splitter. But in the open setup nothing happens after the first beam splitter until the photon is detected at one detector or the other. So it would seem that decoherence has to be a non-local process (similar to non-local collapse), because if the photon is detected at one detector it does not appear at the other. Right?
     
  9. Feb 13, 2019 at 3:30 AM #8

    haushofer

    Science Advisor

Just to check my understanding: decoherence explains why we don't encounter mixed states in "real (macroscopic) life", but it doesn't explain why we don't encounter superpositions in "real (macroscopic) life", right?

    Is that what you mean by "it's just not enough"?
     
  10. Feb 13, 2019 at 4:56 AM #9

    DarMM

    Science Advisor

    It can explain why we never see superpositions of macroscopic observables, because alternate outcomes for macroscopic observables are dynamically driven toward being mixed rather than superposed. So the macroscopic observables are for all practical purposes (FAPP) mixed, which means that their statistics are experimentally indistinguishable from the case of the measuring device possessing a definite state (e.g. "I've measured spin up") that you are ignorant about.

Hence it explains why we are justified in treating the measuring device, or anything else that interacts with a microscopic system, as being classical, just with values you don't know until you look at them. The non-classical statistics are diluted into the environment, where the environment can be either an actual external environment (passing photons, air, etc.) or the object's own internal degrees of freedom.
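    To make this "dilution into the environment" concrete, here is the standard schematic in textbook notation (my own summary, not a quote from the paper under discussion): a system in a superposition becomes correlated with environment states E↑ and E↓, and the reduced density matrix of the system carries the overlap of those environment states in its off-diagonal terms.

```latex
\bigl(a\,|{\uparrow}\rangle + b\,|{\downarrow}\rangle\bigr)\otimes|E_0\rangle
\;\longrightarrow\;
a\,|{\uparrow}\rangle|E_{\uparrow}\rangle + b\,|{\downarrow}\rangle|E_{\downarrow}\rangle,
\qquad
\rho_{\mathrm{sys}} = \operatorname{Tr}_E \rho
= |a|^2\,|{\uparrow}\rangle\langle{\uparrow}|
+ |b|^2\,|{\downarrow}\rangle\langle{\downarrow}|
+ \bigl(\,a b^{*}\,\langle E_{\downarrow}|E_{\uparrow}\rangle\,|{\uparrow}\rangle\langle{\downarrow}| + \mathrm{h.c.}\bigr).
```

    Since the overlap ⟨E↓|E↑⟩ decays extremely rapidly once the environment states are macroscopically distinct, the interference terms become unobservable FAPP, which is exactly the sense in which the device's statistics become indistinguishable from classical ignorance.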

The mixture here is improper; there is still superposition, but we can't notice it due to the type of observables we are looking at, i.e. the classical statistics are the result of ignorance about the global state of "Particle + Device + Environment". This is distinguished from a proper mixture, where the classical statistics are just due to the state being drawn from an ensemble. A simple example is two entangled silver atoms. If you look at the z-axis spin of one of them, say, the statistics are classical, a result of not having access to both atoms. This is an improper mixture. However, if somebody had an oven that created silver atoms that were either spin-z up or spin-z down and then fired them at you, the spin would have classical statistics due to your ignorance about each atom's preparation. This is a proper mixture.
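    As a concrete numerical sketch of the silver-atom example (the singlet state and the 50/50 oven below are my own illustrative choices, not something the post specifies): the improper mixture obtained by tracing out the partner atom and the proper mixture prepared by the ignorant oven are literally the same density matrix, so no local measurement can tell them apart.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled pair of "silver atoms" in the singlet state (|up,down> - |down,up>)/sqrt(2).
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_pair = np.outer(psi, psi)                         # pure global state of the pair

# Improper mixture: trace out atom 2, keeping only atom 1.
rho_improper = np.einsum('ikjk->ij', rho_pair.reshape(2, 2, 2, 2))

# Proper mixture: an oven fires atoms that are spin-z up or down, 50/50, and we simply don't know which.
rho_proper = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

print(rho_improper)                                   # [[0.5 0. ] [0.  0.5]]
print(np.allclose(rho_improper, rho_proper))          # True: identical statistics for every local observable
```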

    People tend to ask four questions at this point:
1. Why does it possess one outcome in particular? The mixture only gives you probabilities of various outcomes.

    2. Can this mixture truly be interpreted as ignorance about the measuring device's objective state? More technically is there actually any difference between the improper mixture we get for classical observables via decoherence and a proper mixture of them?

    3. Is FAPP classical the same as actually classical? There are still small error terms giving the device slightly quantum statistics.

    4. How are we to view the remaining coherence in the environment? This comes up in cases like Wigner's friend.
    An example of a type of answer to these that you would see from the Decoherent Histories view (and others like Richard Healey) would be:
1. Quantum mechanics can't tell you that; it's possible that science can't tell you that, as it seems our most fundamental theory gives only statistical predictions.

2. Yes. Quantum mechanics is simply a probability calculus. Since quantum mechanics isn't representational (i.e. it doesn't say what the microscopic world is actually like), all it gives are statistics for observables. There is no difference between a proper and an improper mixture, since they have the same statistics, which is all QM talks about.

3. Yes. The error terms are so small that no physically realizable device could resolve them. Omnès [1], Chapter 7, Section 8, shows that for real measuring devices one would need a second, larger measuring device much greater in mass than the entire observable universe to resolve the error terms. They therefore have no scientific meaning.

    4. Again not really practically resolvable due to the size of the devices required. Such devices are incompatible with General Relativity (See Omnès Chapter 8) so situations like Wigner's friend result from an unrealistic idealization.
    I'm not presenting this as the answer, just an example.

[1] Omnès, R. (1994). The Interpretation of Quantum Mechanics. Princeton University Press, Princeton.
     
    Last edited: Feb 13, 2019 at 7:30 AM
  11. Feb 13, 2019 at 6:15 AM #10

    A. Neumaier

    Science Advisor

In the modeling and experimental interpretation of macroscopic systems we encounter only mixed states. What needs an explanation is why, in the microscopic case, we encounter in scattering pure momentum states rather than superpositions of them.
     
  12. Feb 13, 2019 at 7:03 AM #11

    Demystifier

    Science Advisor
    2018 Award

I would put it differently. Decoherence determines the set of possible measurement outcomes in a given measurement procedure, but it doesn't explain why only one (rather than all) of those outcomes is realized.
     
  13. Feb 13, 2019 at 7:07 AM #12

    Demystifier

    Science Advisor
    2018 Award

    Wrong. The detection happens in one detector as you said, but decoherence happens in both detectors.
     
  14. Feb 13, 2019 at 8:17 AM #13
I'm not an expert at this, but I find it an interesting issue nevertheless.
So you agree that decoherence is equivalent to a non-selective measurement, and the trouble is in reading out the measurement record? Isn't the latter an issue of statistical mechanics rather than of quantum mechanics? I remember reading Haroche's Exploring the Quantum: Atoms, Cavities and Photons and finding it quite clear in relating decoherence to measurement.
     
  15. Feb 13, 2019 at 8:40 AM #14

    Demystifier

    Science Advisor
    2018 Award

I'm not sure I know what a non-selective measurement is. Definition? Reference?
     
  16. Feb 13, 2019 at 9:25 AM #15
A measurement which is not recorded, so that classical uncertainty remains. See for example eq. (4.29) in the reference I mentioned, p. 84 in Breuer and Petruccione (The Theory of Open Quantum Systems):

    "The measurement of an orthogonal decomposition of unity thus leads
    to a decomposition of the original ensemble into the various sub-ensembles
    labelled by the index a. Such a splitting of the original ensemble into various sub-
    ensembles, each of which being conditioned on a specific measurement outcome,
    is called a selective measurement.
    One could also imagine an experimental situation in which the various sub-
    ensembles are again mixed with the probabilities of their occurrence.
    The resulting ensemble is then described by the density matrix
    ...
    This remixing of the sub-ensembles after the measurement is referred to as non-
    selective measurement
    . "

Or, in Wiseman and Milburn (Quantum Measurement and Control), sec. 1.2.6, the non-selective evolution is described as the combined evolution of system + measurement apparatus, where the measurement apparatus has been traced out.
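    For reference, the standard textbook form of the distinction, for a projective measurement described by projectors P_a (this is my paraphrase of the kind of formula these references use, not a verbatim quote):

```latex
\text{selective:}\quad
\rho \;\longrightarrow\; \rho_a = \frac{P_a\,\rho\,P_a}{\operatorname{Tr}(P_a\,\rho)},
\qquad p_a = \operatorname{Tr}(P_a\,\rho);
\qquad
\text{non-selective:}\quad
\rho \;\longrightarrow\; \rho' = \sum_a p_a\,\rho_a = \sum_a P_a\,\rho\,P_a .
```

    The non-selective map is exactly the "remixing of the sub-ensembles with the probabilities of their occurrence": coherences between the sub-ensembles are gone, but no particular outcome has been singled out.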
     
  17. Feb 13, 2019 at 9:44 AM #16

    stevendaryl

    Staff Emeritus
    Science Advisor

    So is a non-selective measurement equivalent to making a measurement, and then "forgetting" what the result was?
     
  18. Feb 13, 2019 at 9:51 AM #17

    Demystifier

    Science Advisor
    2018 Award

Decoherence is not equivalent to unrecorded measurement. That's related to the fact that a given mixed state can describe two physically different situations: one corresponds to tracing over the environment (decoherence), and the other corresponds to a lack of knowledge of the actual pure state (unrecorded measurement).
     
  19. Feb 13, 2019 at 10:47 AM #18

    atyy

    Science Advisor

    Would it be true in BM?
     
  20. Feb 13, 2019 at 11:13 AM #19

    stevendaryl

    Staff Emeritus
    Science Advisor

    That gets me to wondering about the equivalence between BM and other interpretations.

    If you let a system + environment evolve under unitary evolution, then you end up with entanglement between the system and the environment. Then if you trace out the environmental degrees of freedom to get a reduced density matrix, you have an improper mixed state for the system. If you (mis)interpret the improper mixed state as a proper mixed state (where the mixture is due to ignorance), then you can interpret the reduced density matrix as describing the situation in which the system is actually in one pure state or another, but you don't know which. That's the same density matrix as if you had first measured some observable and then forgot (or never checked) what the result was.

    Now, BM always (whether there has been a measurement or not, and whether there has been decoherence or not) interprets the probabilities of QM as being about uncertainty as to the true state. So that would seem in keeping with the mixed state described above.

However, there is a difference, in that BM always considers the mixture to be due to ignorance about the system's location in configuration space. Forming a mixed-state density matrix by tracing out environmental degrees of freedom, however, can result in a density matrix in which the possible states are eigenstates of something other than location in configuration space. So they don't seem to be exactly the same.
     
  21. Feb 13, 2019 at 12:29 PM #20
    Tracing out the environment is also a form of throwing away information, which explains how you get a mixed state.

Also, how do you interpret wave-function Monte Carlo and the concept of unravelling (https://arxiv.org/pdf/quant-ph/0108132.pdf)? Is this a case of 'abusing' the mixed state?
     