
Decoherence and the randomness of collapse

  1. Dec 18, 2011 #1
    I'm trying to understand, at least, what decoherence can and cannot explain about how quantum mechanics works; the more I read, the less clear I am about what is known and what is merely speculative.

    So I finally decided the only way to get any further was to try and clarify what I think is known, and give others an opportunity to confirm or correct as the case may be.
    1. The fundamental mystery of quantum mechanics is that the basic equations tell us that any isolated quantum system evolves deterministically in accordance with a unitary equation, but in practice, the transition from microscopic to macroscopic environments appears to engender a 'collapse', turning waves into localised particles, and doing so in a probabilistic way.
    2. The second mystery of quantum mechanics is that macroscopic superpositions of very different states, such as Schrodinger's Cat, can easily be described within the mathematical formalism, but appear not to exist in the real world.
    3. As I understand it, decoherence provides what might be described as a mathematically suggestive explanation of the second point. Essentially, as soon as a system gets big enough, the complex interaction of a macroscopic number of things causes macroscopic superpositions to be extraordinarily unlikely and unstable configurations, in much the same way as modern interpretations of the second law of thermodynamics describe entropy-lowering events as staggeringly uncommon, rather than theoretically impossible.
    4. This particular aspect of decoherence appears to be fairly well accepted by many people, and has some support from entanglement experiments, the behaviour of quantum computers etc.
    5. Although decoherence arguments make it plausible that we never see macroscopic superpositions, it appears at first sight to offer no explanation of the first question. If the apparent collapse of the wave function is simply an inevitable consequence of the interaction with the rest of the universe, or even with a fairly small but macroscopic part of it, then why isn't that a deterministic process, i.e. where does the quantum randomness come from?
    6. What appears to be randomness could in fact just be extreme sensitivity to the initial conditions. In other words, when an electron goes through two slits at once it's behaving as a wave. When it goes through only one slit and gets measured, it's still behaving as a wave, one which decoherence has concentrated in a small area through interaction with the other particles in the apparatus. But exactly where that concentration will occur, although deterministically calculable in theory, is in practice so sensitive to initial conditions, and to unknowable ones at that (the complete starting states of everything in the universe which could influence the result), that an element of randomness appears.
    7. But in this case, quantum randomness is just like classical randomness, albeit computationally worse by some humongous factor. And so we appear to have an explanation of all quantum weirdness. The entire universe is deterministic, but the emergent behaviour of small parts of it can only be analysed statistically. Einstein was right: God does not play dice with the universe (just with almost all parts of it :)

    Have I gone too far? Am I imputing to decoherence more than there is evidence, or even an analysis, for? Can anyone point me to an analysis of a gedanken experiment in which decoherence can demonstrate chaotic-like behaviour, or, even better, some indication that this kind of localisation caused by entanglement is inevitable in practice, rather than simply plausible?

    And if my understanding above is in fact what decoherence tells us, then where has the mystery gone, and why do people still advocate alternative, almost philosophical approaches?
     
  3. Dec 21, 2011 #2
    Bilkusg, you might find this lecture by Steven Weinstein helpful:



    I found this web page to be useful as well:

    http://www.ipod.org.uk/reality/reality_decoherence.asp [Broken]

    I am not a physicist, but my understanding is that although decoherence gives us good tools for describing the transition between pure states and mixed states, it can't explain how the mixed states emerge in the first place.

    There is debate on this point, but those who've looked at it the closest, from what I've seen, seem to conclude that decoherence can't explain how the first appearance of a particle could come from a pure state.

    Decoherence needs there to be an environment first. There needs to be a separate system to interact with. Then decoherence shows how the information of the pure system does not completely transfer to the environment. Only certain specific aspects. When phases cancel, the electron goes only through one slit and the interference pattern disappears.
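    To make that last point concrete, here is a minimal numerical sketch (mine, not from any reference in this thread) of a toy two-slit setup: while the two paths stay coherent the amplitudes add and you get fringes; once decoherence randomises the relative phase, only the probabilities add and the fringes vanish. The phase profile and all numbers are made up purely for illustration.

```python
import numpy as np

# Toy two-slit model at a few detector positions (arbitrary units).
x = np.linspace(-1.0, 1.0, 9)
phi = 20.0 * x                                      # assumed phase difference between the two paths

psi1 = np.full(x.shape, 1.0 / np.sqrt(2)) + 0j      # amplitude via slit 1
psi2 = np.exp(1j * phi) / np.sqrt(2)                # amplitude via slit 2

coherent = np.abs(psi1 + psi2) ** 2                 # amplitudes add: 1 + cos(phi), i.e. fringes
decohered = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # probabilities add: flat, no fringes

print(np.round(coherent, 3))    # oscillates between 0 and 2
print(np.round(decohered, 3))   # all 1.0 -- the interference (cross) term is gone
```

    The "decohered" line is what you are left with once the which-slit information has leaked into the environment and you only look at the electron itself.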

    The other thing decoherence doesn't explain, as you suggest, is why the specific states emerge, as they do. Statistically, yes, we know how it will turn out, but not on an individual basis.

    Randomness is not an explanation. It is another way of saying that we don't know why.

    Also, your idea that there is some faint predetermining factor that causes the specific state to emerge doesn't seem to work. What you are suggesting is similar to the idea of hidden variables, which Einstein favoured, and which Bell's Theorem mostly rules out, at least for local theories (although Bohm does suggest nonlocal, universal hidden variables might work).

    I think it makes more sense to treat the emergence of a specific state for a specific particle to be acausal. There is no causation creating that specific state. Causation is a principle that only applies to mixed states that already exist. It describes the classical world, not the quantum world.

    I hope this helps. I enjoyed your questions.
     
  4. Dec 21, 2011 #3

    e.bar.goum (Science Advisor, Education Advisor)

    I wrote a massive reply and PF ate it. :(
    I'll try again.

    Good questions, bilikusg! I spend far too much time thinking about the measurement problem. It's an interesting one. How much of the formalism of QM do you know? Decoherence is much less mysterious if you see the maths. I'm happy to go over it if you haven't seen it before.

    Your first point is right on. As to your second point, not quite. We do see macroscopic superpositions of states! Most recently, a group entangled macroscopic diamonds at room temperature! (http://www.sciencemag.org/content/334/6060/1253) We can also see interference patterns in two colliding lead nuclei (2*208 particles is certainly huge!); indeed, considering superpositions is essential in being able to model heavy-ion nuclear reactions. In addition, we also see interference fringes in large (~10^6) samples of BECs, and we've even done two-slit experiments with viruses! The question then becomes: when does classical behaviour emerge? We make measurements of bigger and bigger systems and we've yet to see objects that are always classical. Maybe collapse never happens? (I have to admit to some Everettian MWI bias here.)

    Decoherence inevitably appears when you couple a quantum system to the environment. What happens is that the coherent (off-diagonal) terms in the density matrix (do you know of these? I can explain if you don't) decay.
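    For anyone who does know density matrices, here is a rough sketch of what that decay looks like for a single qubit, using a crude exponential damping of the off-diagonal terms; the rate and times are made-up numbers, not a model of any particular environment.

```python
import numpy as np

# Density matrix of the pure superposition (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())                # [[0.5, 0.5], [0.5, 0.5]]

gamma = 1.0e9                                   # assumed decoherence rate in 1/s (illustrative only)
for t in (0.0, 1e-9, 1e-8):
    rho = rho0.copy()
    rho[0, 1] *= np.exp(-gamma * t)             # the coherences (off-diagonal terms) decay...
    rho[1, 0] *= np.exp(-gamma * t)
    print("t =", t, "s")
    print(np.round(rho, 4))                     # ...while the diagonal populations stay at 0.5
```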


    Yep.

    Yes, measurement is still distinct from decoherence. The reason it is invoked to explain measurement in systems coupled to the environment is that it looks (in the formalism) the same as measurement. Decoherence doesn't predict what the measured state is. The states are still a probabilistic distribution. It doesn't explain randomness, but then again it's not supposed to.

    Not quite. Bell's Theorem gets rid of hidden-variable theories like the one you suggest.

    See above, no hidden variable theories. Decoherence is still very much quantum. And we still see quantum effects - don't forget things like two slit experiments!

    The mystery is still there, in that decoherence doesn't actually provide an explanation of measurement. This is where the interpretations come into play, from things like "shut up and calculate", to interpretations where collapse of fundamental particles is an inevitable process in nature (like nuclear decay), and the collapse of one results in the collapse of the system, to interpretations where measurement never happens, and classical behaviour is an illusion.
     
  5. Dec 21, 2011 #4
    Please forgive me, for I may be deeply embarrassing myself, but I would like to ask a question. If a perfectly determined system (such as a computer in the universe) measures a quantum probabilistic event and makes it deterministic again (Schrodinger's cat is alive now), does the fact that the computer was always determined to measure that probabilistic quantum event make the result of the measurement determined since the big bang (just unknown)? In which case, is the Copenhagen interpretation dependent on dualism?
    So, either..
    A. I'm talking about a "hidden variable theory", which was largely disproven by Bell's theorem.
    or
    B. I have no clue what I am talking about and should learn more before asking questions.

    Anyhow, I'm glad this thread was posted. I was curious about the exact same thing and wanted to post something similar, but I may need to learn a lot more before asking questions.
     
  6. Dec 21, 2011 #5
    Have they?

    I do often wonder about this myself. It would seem you are describing the computer as a classical object (hence it would be deterministic), and in principle you could determine when measurement occurs / what result will be shown. It just doesn't fit when you use the computer to probe the quantum world. It seems there would have to be no classical/quantum separation, but quantum all the way, for randomness to hold (as Brian Cox says in his latest book). Of course, I'm not taking into account Bohm's theory, but my guess is there would still be no classical/quantum divide. The hidden variables determine the state of the quantum system, which leads to a determination of the computer state (reflecting the pre-determined result). Classical physics, as far as I'm aware, doesn't contain these hidden variables. To say classical physics determines the state of the computer, well - the quantum laws would need to align the result of the system with the classical equation giving us the state of the computer.
     
  7. Dec 22, 2011 #6

    e.bar.goum (Science Advisor, Education Advisor)

    StevieTNZ - I can't seem to find a paper, I must have been mistaken. I was talking to a quantum experimentalist over lunch yesterday and the measurement problem came up - he seemed to be of the belief the experiment has been done, and I'd heard about it as well. So I didn't bother looking for the article when I posted it. Sorry! The diamond example is still good though.

    Gunner, don't be embarrassed! It's interesting to get these questions, because whilst it takes less than a line to show with maths, it's very difficult to explain things in plain English. Do tell me if I'm not clear, or have been too technical.

    The thing is, as soon as you're coupling to a quantum system, the computer is no longer deterministic! That is, when you have used the computer to measure the state of the cat, the result of the measurement is no longer determined. Using the cat example - the cat is in a superposition of |alive> and |dead>, ie, the state of the cat is |cat> = |dead> + |alive> (we're missing a normalisation factor here, but it's not important). If you have a computer that can measure the state, it can either measure dead or alive, ie,

    |computer> = |computer measures alive> + |computer measures dead>.

    Assuming that the probability that the computer measures cat alive when the cat is dead (and vice-versa) is zero, we now have

    |cat>x|computer> = |alive>|computer measures alive> + |dead>|computer measures dead>

    With some normalisation factors.
    Which is a quantum system.

    See? As soon as a computer is measuring a quantum system, it is no longer allowed to be deterministic. Does that answer your and StevieTNZ's concerns?
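    If it helps, the algebra in the post can be checked numerically: build the entangled cat-plus-computer state above and trace out the computer, and the cat's reduced density matrix comes out with no off-diagonal terms, i.e. it looks like a 50/50 mixture rather than a superposition. This is just a sketch of that calculation with the obvious basis choices (variable names are mine).

```python
import numpy as np

alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sees_alive, sees_dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# The post-interaction state from the post, with its normalisation factor:
state = (np.kron(alive, sees_alive) + np.kron(dead, sees_dead)) / np.sqrt(2)

# Full density matrix, reshaped so the indices are (cat, computer, cat', computer'):
rho = np.outer(state, state).reshape(2, 2, 2, 2)

# Tracing out the computer gives the cat's reduced state:
rho_cat = np.trace(rho, axis1=1, axis2=3)

print(rho_cat)   # [[0.5, 0.], [0., 0.5]] -- no off-diagonal terms left for the cat alone,
                 # i.e. it looks like a 50/50 mixture rather than a superposition
```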
     
  8. Dec 22, 2011 #7

    Ken G (Gold Member)

    I would like to point out that both randomness and determinism are simply attributes of theories that we develop. This is quite demonstrably true; it's obvious, in fact. They have both been shown to be useful to varying degrees in understanding reality, and neither has ever been shown to be what reality is actually doing, nor is there any reason to imagine that reality is beholden to be either one or the other. We must resist the error of imagining that reality must be the way we think about it.
     
  9. Dec 22, 2011 #8
    Oh, I know that already. From what I've gathered, decoherence doesn't collapse the wave function. Superpositions still exist, they're just complex and hard to verify experimentally.
     
  10. Dec 22, 2011 #9

    e.bar.goum (Science Advisor, Education Advisor)

    Yes, decoherence doesn't collapse the state; but no, the superpositions don't simply persist: decoherence irreversibly converts quantum behaviour (additive probability amplitudes) into classical behaviour (additive probabilities), so the superpositions effectively go away. In terms of density matrices, decoherence corresponds to the diagonalisation of the density matrix.
     
  11. Dec 22, 2011 #10

    tom.stoer (Science Advisor)

    There are some issues which are not resolved with decoherence.

    Assume we have a classical measuring device with some "pointer states" S = {s1, s2, s3, ...}; these can e.g. be the positions of a real pointer; in case of Schrödinger's cat there would be two positions S = { s1="live", s2="dead"}; the pointer states correspond to classical behaviour and are typically localized in position space.

    What decoherence explains quite well is how entanglement with environment results in emergence of some classical pointer states S.

    1) What decoherence does not explain is why these pointer states are localized in position space. It could very well be that there exists a second set T = {t1, t2, t3, ...} which has sharply localized states in (e.g.) momentum space. So the emergence of a specific set S of pointer states cannot be derived generically but must have something to do with the specific interactions.

    1') In terms of density matrices this is rather simple: decoherence tells us that in some preferred basis the density matrix becomes nearly diagonal; but it does not tell us which specific basis we should use. This is the so-called "preferred basis" problem. I haven't seen a paper explaining this issue. (There is a small numerical sketch of this basis-dependence after point 2 below.)

    2) What decoherence doesn't explain either is which specific element s_i will be observed in one specific experiment; assume that issue 1) is solved; now look at the Schrödinger cat experiment which is stopped exactly after one half-life of the decaying particle, i.e. with a probability of 1/2 for "dead" and 1/2 for "alive"; so even if we know that there will be a classical pointer state (due to decoherence) and if we know that it is localized in position space, we do not know the result of the experiment.
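    As a tiny numerical illustration of point 1'), with made-up weights: a density matrix that is diagonal in one basis generally isn't diagonal in a rotated basis, so "decoherence diagonalises the density matrix" only has content once a preferred basis has been singled out. The Hadamard rotation below is just my stand-in for any other basis.

```python
import numpy as np

# A state that decoherence has left diagonal in the pointer basis {s1, s2}
# (the 0.7 / 0.3 weights are made up, just to avoid the trivial 50/50 case):
rho = np.diag([0.7, 0.3])

# The very same state written in a rotated basis (a Hadamard rotation, standing in
# for, say, a momentum-like basis):
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
rho_rotated = H @ rho @ H.T

print(rho_rotated)   # [[0.5, 0.2], [0.2, 0.5]] -- off-diagonal terms reappear, so
                     # "nearly diagonal" only means something once a basis is singled out
```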
     
  12. Dec 22, 2011 #11
    So there would still be no definite state. No collapse has occurred. A lot of physicists have told me system+environment is just a complex superposition. Density matrices only involve partial information about the system+environment.
     
  13. Dec 22, 2011 #12
    But doesn't the computer (observer) that's measuring make the wave function collapse? Then it must only measure dead OR alive. If it was determined (by the evolution of the universe) to measure either, and it measures one, then that measurement was always going to happen, but we never could have determined (known about) it. Thus, it was a statistic for humans, but it was always a real result with respect to time. Although this doesn't really explain why you get dead sometimes and alive other times. I'm just trying to critique indeterminism, but I probably don't know what I am talking about.

    It seems like the Copenhagen interpretation goes something like this:
    If not observed: everything that can happen does happen at all times.
    If observed: everything that can happen does happen but only sometimes.

    So a universe that has a possibility of creating conscious life has to create conscious life because that possibility would make the wave function collapse to that state? Or is this situation purely classical?
     
  14. Dec 22, 2011 #13
    Thanks for all the replies so far; I'm beginning to get a better idea of where we stand.
    One thing though: why is the idea I originally had a hidden-variables theory which falls foul of Bell's theorem (which I think I understand)? In my original post, the universe is completely deterministic, and there is no information which is not in the quantum states of all its components (including fields). There's nothing 'hidden', any more than the positions of the molecules in a classical gas are hidden, and what I was kind of hoping is that there's a mathematical analysis which can demonstrate that if you evolve a macroscopic system containing correlated photons, decoherence will do Bell-like things to the two measuring apparatuses, which are themselves correlated because they've been in the same universe for long enough.
    And if the two apparatuses weren't already linked in this sense, you'd have no way of knowing if they were aligned at the angle required to demonstrate a Bell correlation.

    I can see various other potential objections to this, the most serious probably being the implication that the information in the entire universe now was already present in the big bang. But that's surely just a consequence of any theory which is entirely unitary, and to my mind it provides the biggest reason to suspect that the laws of nature will turn out to involve something else.
     
  15. Dec 22, 2011 #14

    Ken G (Gold Member)

    In my opinion, this one is fairly easy to resolve at one relatively unsatisfying level, but it requires noticing the role of the physicist in the physics. I agree we don't know, microscopically, why a position measurement accomplishes the decoherence around a position basis, or why a momentum measurement does that around a momentum basis, but the reason we consider them to be position and momentum measurements is that they have these properties. So we simply try lots of different types of macroscopic interactions, and by pure trial and error, we discover the decohering properties of each, and then simply define them to be measurements of the appropriate type. In short, quantum mechanics doesn't tell us what a position measurement is, it merely tells us how to manipulate one mathematically-- it is only we who can say what a position measurement is, and we did that long before quantum mechanics.
    I believe you have made the key point about what decoherence doesn't resolve-- it tells us neither what will happen, nor even why we will perceive a single outcome rather than a superposition. I believe the answer once again has to do with the physicist-- something in how we think/perceive requires that we encounter only a single internally coherent subsystem. Whether or not the larger decohered "many worlds" actually exists or not is a very difficult question for science, and is the entry point into all the different interpretations. In the absence of better observations and a deeper theory that explains them, all these different interpretations are effectively equivalent, and all rely on decoherence (despite a rather widespread misconception that various different interpretations are disfavored by decoherence).
     
  16. Dec 22, 2011 #15
    According to Euan Squires - no. The computer is just another quantum system.
     
  17. Dec 23, 2011 #16
    I thought this comment on decoherence and ontology by Leifer was an interesting one:

    What can decoherence do for us?
    http://mattleifer.info/2007/01/24/what-can-decoherence-do-for-us/
     
  18. Dec 23, 2011 #17
    Well I’m not sure about this, unless I'm missing the point.

    I mean, could we not say such a thing for many aspects of physics? For example I don’t consider there really are point like, massless “objects” we call photons “travelling” from a source to a sink, so I don’t attach any ontological significance to that label other than a picture that represents the “event” between measurements performed at the source and sink. The mathematical predictive model hinges only around measurement, not in terms of what really exists in an ontological sense between the measurements. But I’m not going to throw out the predictive model because I don't consider there is an ontology associated with the photon, the predictive model is entirely valid with or without the ontological baggage of the photon – it doesn’t need the ontology in order to be physics. (At least that’s how it seems to me).

    Decoherence theory is weakly objective in principle: it is a theory that makes reference to us, in terms of there being proper and improper mixtures - the proper mixtures are beyond our capabilities to measure, so we only get access to the improper mixtures. Thus the theory cannot provide the realism that Leifer seems to crave, but in terms of a mathematical account of the transition from micro to macro it seems to be a perfectly valid physics model, with no pretence of escaping the subjective element.

    I don’t actually think physics is about trying to describe a reality that goes beyond subjective experience; I think it is describing our reality with an apparent separation of subject and object. That separation breaks down at the quantum level giving us a weak objectivity, many would like to think of decoherence as re-establishing strong objectivity, but it doesn’t because of what I said above, namely that decoherence theory is weakly objective because the formalism specifically refers to our abilities (or lack of them). So decoherence cannot answer the foundational issues that Leifer wants in terms of an ontology that is independent of us, but I don’t see that we need to discard decoherence theory because of that. If we adopt that view then surely we would end up discarding most of physics wouldn’t we?

    The issue of realism and decoherence in terms of proper and improper mixtures is explored by Bernard d’Espagnat in “Veiled Reality” and “On Physics and Philosophy”.
     
  19. Dec 23, 2011 #18
    But I think this is the criticism that Bell tried to highlight: measurement of what? Or information about what? And again, no-one is arguing against a "Veiled Reality". I don't believe that taking a scientific realist perspective leads into "naive" realism. But I do think that taking the alternative perspective does seem to turn physics into the "science of meter reading", as Bell points out in these quotes:

    Against 'Measurement'
    http://duende.uoregon.edu/~hsu/blogfiles/bell.pdf
     
  20. Dec 24, 2011 #19

    Ken G (Gold Member)

    I think we are not actually that far apart-- none of us here seem to advocate a science of pure measurement (we hold that our measurements are telling us something about the world). So we are all some flavor of scientific realist-- but we also recognize that we have to measure to do empirical science, and we all recognize that measurement is a kind of filter. We see what passes the filter, because that's what we can do science on, and when we accomplish our goals, we declare that "science works on the real world." But we can notice that science works without needing to believe that science reveals the true world as it actually is-- that is what I would call naive realism, not scientific realism (or "structural realism" or whatever you want to call it). The key distinction is that we can hold there is a real world, and we can hold that it is useful to associate properties with the real world (but only as defined in elements of our theories, because the properties are in the theories not in the real world), and we can have great success, and none of that adds up to the properties themselves being real. Worse, certainly none of it adds up to the idea that properties that we know are simply a subset of a "true properties" that "actually determine" what really happens. That is way beyond scientific realism, and represents a type of blind faith in our own mental faculties that borders on idealism.
     
  21. Dec 26, 2011 #20
    Here's an interesting paper by Susskind and Bousso:
    http://arxiv.org/abs/1105.3796

    Until I get to college and take some quantum mechanics, I'm sticking with Lenny's idea (based on the String-Theory Landscape?).

    Another very interesting paper - on String-Theory Landscape:
    http://arxiv.org/abs/astro-ph/0001197

    From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously.
     