
Decoherence and the randomness of collapse

  1. Dec 18, 2011 #1
    I'm trying to at least understand what decoherence can and cannot explain about how quantum mechanics works; the more I read, the less clear I am about what is known and what is merely speculative.

    So I finally decided the only way to get any further was to try and clarify what I think is known, and give others an opportunity to confirm or correct as the case may be.
    1. The fundamental mystery of quantum mechanics is that the basic equations tell us that any isolated quantum system evolves deterministically in accordance with a unitary equation, but in practice, the transition from microscopic to macroscopic environments appears to engender a 'collapse', turning waves into localised particles, and doing so in a probabilistic way.
    2. The second mystery of quantum mechanics is that macroscopic superpositions of very different states, such as Schrodinger's Cat, can easily be described within the mathematical formalism, but appear not to exist in the real world.
    3. As I understand it, decoherence provides what might be described as a mathematically suggestive explanation of the second point. Essentially, as soon as a system gets big enough, the complex interaction of a macroscopic number of things causes macroscopic superpositions to be extraordinarily unlikely and unstable configurations, in much the same way as modern interpretations of the second law of thermodynamics describe entropy-lowering events as staggeringly uncommon, rather than theoretically impossible.
    4. This particular aspect of decoherence appears to be fairly well accepted by many people, and has some support from entanglement experiments, the behaviour of quantum computers etc.
    5. Although decoherence arguments make it plausible why we never see macroscopic superpositions, it appears at first sight to offer no explanation of the first question. If the apparent collapse of the wave function is simply an inevitable consequence of the interaction with the rest of the universe, or even a fairly small but macroscopic part of it, then why isn't that a deterministic process, i.e. where does the quantum randomness come from?
    6. What appears to be randomness could in fact just be extreme sensitivity to the initial conditions. In other words, when an electron goes through two slits at once it's behaving as a wave. When it goes through only one slit and gets measured, it's still behaving as a wave, one which decoherence has concentrated in a small area through interaction with the other particles in the apparatus. But exactly where that concentration will occur, although deterministically calculable in theory, is in practice so sensitive to initial conditions, and to unknowable ones at that (the complete starting states of everything in the universe which could influence the result), that an element of randomness appears.
    7. But in this case, quantum randomness is just like classical randomness, albeit computationally even worse by some humongous factor. And so we appear to have an explanation of all quantum weirdness. The entire universe is deterministic, but the emergent behaviour of small parts of it can only be analysed statistically. Einstein was right: God does not play dice with the universe (just with almost all parts of it :)
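The "extreme sensitivity to initial conditions" in point 6 has a purely classical analogue that is easy to demonstrate. A minimal sketch (the logistic map and the perturbation size are arbitrary illustrative choices, not anything derived from quantum mechanics):

```python
# Toy illustration, in the spirit of point 6, of how a fully deterministic
# classical system can look random: two trajectories of the chaotic
# logistic map x -> 4x(1-x), started a distance 1e-12 apart, end up
# completely different.

def logistic_trajectory(x0, steps):
    """Iterate the logistic map x -> 4x(1-x), returning the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 50)
b = logistic_trajectory(0.3 + 1e-12, 50)

# Maximum separation of the two deterministic trajectories: order 1,
# even though the initial separation was 1e-12.
divergence = max(abs(u - v) for u, v in zip(a, b))
```

With a Lyapunov-style doubling of errors each step, a 1e-12 discrepancy saturates well before 50 iterations, which is the sense in which a deterministic outcome can be practically unpredictable.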

    Have I gone too far? Am I imputing to decoherence more than there is evidence or even an analysis for? Can anyone point me to an analysis of a gedanken experiment in which decoherence can demonstrate chaotic-like behaviour, or, even better, some indication that this kind of localisation caused by entanglement is inevitable in practice, rather than simply plausible?

    And if my understanding above is in fact what decoherence tells us, then where has the mystery gone, and why do people still advocate alternate, almost philosophical approaches?
  3. Dec 21, 2011 #2
    Bilkusg, you might find this lecture by Steven Weinstein helpful:

    I found this web page to be useful as well:

    http://www.ipod.org.uk/reality/reality_decoherence.asp [Broken]

    I am not a physicist, but my understanding is that although decoherence gives us good tools for describing the transition between pure states and mixed states, it can't explain how the mixed states emerge in the first place.

    There is debate on this point, but those who've looked at it the closest, from what I've seen, seem to conclude that decoherence can't explain how the first appearance of a particle could come from a pure state.

    Decoherence needs there to be an environment first. There needs to be a separate system to interact with. Then decoherence shows how the information of the pure system does not completely transfer to the environment. Only certain specific aspects. When phases cancel, the electron goes only through one slit and the interference pattern disappears.
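The last sentence can be made concrete in a toy one-dimensional two-slit model: while coherence holds, amplitudes add before squaring and fringes appear; after decoherence, probabilities add and the fringes vanish. The geometry and wavenumber below are made-up illustrative values, not a real experimental setup:

```python
import numpy as np

# Toy 1-D two-slit model. With coherence, amplitudes add before squaring
# (|psi1 + psi2|^2 -> fringes); after decoherence, probabilities add
# (|psi1|^2 + |psi2|^2 -> no fringes).

x = np.linspace(-5.0, 5.0, 1001)   # screen coordinate
L, d, k = 10.0, 1.0, 10.0          # screen distance, slit separation, wavenumber

r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit 2
psi1 = np.exp(1j * k * r1)           # unit-amplitude waves from each slit
psi2 = np.exp(1j * k * r2)

coherent = np.abs(psi1 + psi2)**2              # oscillates: interference fringes
decohered = np.abs(psi1)**2 + np.abs(psi2)**2  # flat: fringes washed out
```

The coherent pattern swings between 0 and 4 as the relative phase k(r1 - r2) varies across the screen; the decohered pattern is flat at 2 everywhere.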

    The other thing decoherence doesn't explain, as you suggest, is why the specific states emerge, as they do. Statistically, yes, we know how it will turn out, but not on an individual basis.

    Randomness is not an explanation. It is another way of saying that we don't know why.

    Also, your idea that there is some faint predetermining factor that causes the specific state to emerge doesn't seem to work. What you are suggesting is similar to the idea of hidden variables, which was something Einstein first considered but eventually ruled out, and Bell's Theorem mostly destroys (although Bohm does suggest universal hidden variables might work).

    I think it makes more sense to treat the emergence of a specific state for a specific particle to be acausal. There is no causation creating that specific state. Causation is a principle that only applies to mixed states that already exist. It describes the classical world, not the quantum world.

    I hope this helps. I enjoyed your questions.
  4. Dec 21, 2011 #3


    Science Advisor
    Education Advisor

    I wrote a massive reply and PF ate it. :(
    I'll try again.

    Good questions, bilikusg! I spend far too much time thinking about the measurement problem. It's an interesting one. How much of the formalism of QM do you know? Decoherence is much less mysterious if you see the maths. I'm happy to go over it if you haven't seen it before.

    Your first point is right on. As to your second point, not quite. We do see macroscopic superpositions of states! Most recently, a group entangled macroscopic diamonds at room temperature! (http://www.sciencemag.org/content/334/6060/1253) We can also see interference patterns in two colliding lead nuclei (2*208 particles is certainly huge!); indeed, considering superpositions is essential in being able to model heavy-ion nuclear reactions. In addition, we also see interference fringes in large (~10^6) samples of BECs, and we've even done two-slit experiments with viruses! The question then becomes: when does classical behaviour emerge? We make measurements of bigger and bigger systems and we've yet to see objects that are always classical. Maybe collapse never happens? (I have to admit to some Everettian MWI bias here.)

    Decoherence inevitably appears when you couple a quantum system to the environment. What happens is that the coherent terms in the density matrix (do you know of these? I can explain if you don't) decay.
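A minimal numerical sketch of that statement, with the exponential damping of the off-diagonal terms put in by hand (the form and the rate are arbitrary illustrative choices, not derived from any particular environment model):

```python
import numpy as np

# Hand-put-in sketch of decoherence acting on a density matrix: the
# off-diagonal (coherence) terms decay while the diagonal populations
# stay fixed.

def decohere(rho0, rate, t):
    """Damp the off-diagonal elements of rho0 by exp(-rate * t)."""
    rho = rho0.copy()
    off_diagonal = ~np.eye(len(rho0), dtype=bool)
    rho[off_diagonal] *= np.exp(-rate * t)
    return rho

# Density matrix of the equal superposition (|0> + |1>)/sqrt(2)
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)

rho_late = decohere(rho0, rate=1.0, t=10.0)
# Populations are still 1/2 each, but the coherences are essentially
# gone: what remains looks like a classical 50/50 mixture.
```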


    Yes, measurement is still distinct from decoherence. The reason it is invoked to explain measurement in systems coupled to the environment is that it looks (in the formalism) the same as measurement. Decoherence doesn't predict what the measured state is. The states are still a probabilistic distribution. It doesn't explain randomness, but then again it's not supposed to.

    Not quite. Bell's Theorem gets rid of hidden-variable theories like you suggest.

    See above, no hidden variable theories. Decoherence is still very much quantum. And we still see quantum effects - don't forget things like two slit experiments!

    The mystery is still there, in that decoherence doesn't actually provide an explanation of measurement. This is where the interpretations come into play, from things like "shut up and calculate", to interpretations where collapse of fundamental particles is an inevitable process in nature (like nuclear decay), and the collapse of one results in the collapse of the system, to interpretations where measurement never happens, and classical behaviour is an illusion.
  5. Dec 21, 2011 #4
    Please forgive me, for I may be deeply embarrassing myself, but I would like to ask a question. If a perfectly determined system (such as a computer in the universe) measures a quantum probabilistic event and makes it deterministic again (Schrodinger's cat is alive now), does the fact that the computer was always determined to measure that probabilistic quantum event make the result of the measurement determined since the big bang (just unknown)? In which case, is the Copenhagen interpretation dependent on dualism?
    So, either..
    A. I'm talking about a "hidden variable theory" which was largely disproven by Bell's theorem.
    B. I have no clue what I am talking about and should learn more before asking questions.

    Anyhow, I'm glad this thread was posted. I was curious about the exact same thing and wanted to post something similar, but I may need to learn a lot more before asking questions.
  6. Dec 21, 2011 #5
    Have they?

    I do often wonder myself regarding this. It would seem you are describing the computer as a classical object (hence it would be deterministic), and in principle you could determine when measurement occurs/what result will be shown. It just doesn't fit in when you use the computer to probe the quantum world. It seems there would have to be no classical/quantum separation, but quantum all the way - for randomness to hold (as Brian Cox says in his latest book). Of course, I'm not taking into account Bohm's theory, but my guess is there would still be no classical/quantum divide. The hidden variables determine the state of the quantum system, which leads to a determination of the computer state (reflecting the pre-determined result). Classical physics, as far as I'm aware, doesn't contain these hidden variables. To say classical physics determines the state of the computer - well, the quantum laws would need to align the result of the system with the classical equation giving us the state of the computer.
  7. Dec 22, 2011 #6


    Science Advisor
    Education Advisor

    StevieTNZ - I can't seem to find a paper, I must have been mistaken. I was talking to a quantum experimentalist over lunch yesterday and the measurement problem came up - he seemed to be of the belief the experiment has been done, and I'd heard about it as well. So I didn't bother looking for the article when I posted it. Sorry! The diamond example is still good though.

    Gunner, don't be embarrassed! It is interesting to get these questions, because whilst it takes less than a line to show with maths, it's very difficult to explain things in plain English. Do tell me if I'm not clear, or have been too technical.

    The thing is, as soon as you're coupling to a quantum system, the computer is no longer deterministic! That is, when you have used the computer to measure the state of the cat, the result of the measurement is no longer determined. Using the cat example - the cat is in a superposition of |alive> and |dead>, i.e., the state of the cat is |cat> = |dead> + |alive> (we're missing a normalisation factor here, but it's not important). If you have a computer that can measure the state, it can either measure dead or alive, i.e.,

    |computer> = |computer measures alive> + |computer measures dead>.

    Assuming that the probability that the computer measures cat alive when the cat is dead (and vice-versa) is zero, we now have

    |cat>x|computer> = |alive>|computer measures alive> + |dead>|computer measures dead>

    With some normalisation factors.
    Which is a quantum system.

    See? As soon as a computer is measuring a quantum system, it is no longer allowed to be deterministic. Does that answer your and StevieTNZ's concerns?
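The state written out above can be checked numerically. A small sketch (the basis-vector assignments are my own arbitrary conventions, not part of the argument):

```python
import numpy as np

# Numerical sketch of the entangled cat/computer state. Conventions:
# |alive> = (1, 0), |dead> = (0, 1) for the cat, and the same
# assignments for the computer's two pointer readings.

alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])
measures_alive, measures_dead = alive, dead

# |alive>|computer measures alive> + |dead>|computer measures dead>,
# normalised: an entangled state in the 4-dimensional product space.
state = (np.kron(alive, measures_alive) + np.kron(dead, measures_dead)) / np.sqrt(2)

# Amplitude for the "wrong" pairing |dead>|computer measures alive>:
# zero, matching the assumption that the computer never mis-measures.
amp_mismatch = state @ np.kron(dead, measures_alive)
norm = state @ state
```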
  8. Dec 22, 2011 #7

    Ken G

    Gold Member

    I would like to point out that both randomness, and determinism, are simply attributes of theories that we develop. This is quite demonstrably true, it's obvious in fact. They have both been shown to be useful to varying degrees in understanding reality, and neither has ever been shown to be what reality is actually doing, nor is there any reason to imagine that reality is beholden to be either one or the other. We must resist the error of imagining that reality must be the way we think about it.
  9. Dec 22, 2011 #8
    Oh, I know that already. From what I've gathered, decoherence doesn't collapse the wave function. Superpositions still exist, they're just complex and hard to verify experimentally.
  10. Dec 22, 2011 #9


    Science Advisor
    Education Advisor

    Yes, decoherence doesn't collapse the state, but it does irreversibly convert quantum behaviour (additive probability amplitudes) into classical behaviour (additive probabilities) - so the superpositions go away. In terms of density matrices, decoherence corresponds to the diagonalisation of the density matrix.
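A small sketch of that last clause, under the standard assumption that the environment states correlated with the two system states are orthogonal: tracing the environment out of the entangled pure state leaves a diagonal reduced density matrix.

```python
import numpy as np

# For (|0>|e0> + |1>|e1>)/sqrt(2) with orthogonal environment states,
# tracing out the environment leaves a diagonal reduced density matrix.
# A 2-level environment is the simplest possible stand-in.

psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
rho_full = np.outer(psi, psi)   # 4x4 density matrix of the pure joint state

# Reshape to indices (sys, env, sys', env') and trace over the environment.
rho_sys = np.trace(rho_full.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# rho_sys = diag(1/2, 1/2): the populations survive, the coherences don't.
```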
  11. Dec 22, 2011 #10


    Science Advisor

    There are some issues which are not resolved with decoherence.

    Assume we have a classical measuring device with some "pointer states" S = {s1, s2, s3, ...}; these can e.g. be the positions of a real pointer; in case of Schrödinger's cat there would be two positions S = { s1="live", s2="dead"}; the pointer states correspond to classical behaviour and are typically localized in position space.

    What decoherence explains quite well is how entanglement with environment results in emergence of some classical pointer states S.

    1) What decoherence does not explain is why these pointer states are localized in position space. It could very well be that there exists a second set T = {t1, t2, t3, ...} which has sharply localized states in (e.g.) momentum space. So the emergence of a specific set S of pointer states cannot be derived generically but must have something to do with specific interactions.

    1') In terms of density matrices this is rather simple: decoherence tells us that in some preferred basis the density matrix becomes nearly diagonal; but it does not tell us which specific basis we should use. This is the so-called "preferred basis" problem. I haven't seen a paper resolving this issue.
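A one-line illustration of why "nearly diagonal" is basis-dependent: a density matrix diagonal in one basis generally picks up off-diagonal terms in a rotated basis (the populations and the rotation below are arbitrary choices).

```python
import numpy as np

# A density matrix diagonal in one basis has coherences in another,
# so "diagonal" only has content once a preferred basis is singled out.

rho = np.diag([0.7, 0.3])                             # diagonal in the chosen basis
R = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # rotation to another basis
rho_rotated = R @ rho @ R.T

# In the rotated basis the same state has off-diagonal entries of size
# (0.7 - 0.3)/2 = 0.2 again.
```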

    2) What decoherence doesn't explain either is which specific element si will be observed in one specific experiment; assume that issue 1) is solved; now look at the Schrödinger cat experiment which is stopped exactly after half-life of the decaying particle, i.e. with a probability of 1/2 for "dead" and 1/2 for "alive"; so even if we know that there will be a classical pointer state (due to decoherence) and if we know that it is localized in position space, we do not know the result of the experiment.
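Point 2) can be sketched by simulating many runs of the half-life experiment: the 1/2-1/2 statistics are reproducible, but nothing in them predicts an individual run (the random seed and sample size below are arbitrary).

```python
import numpy as np

# Simulation of many runs of the half-life cat experiment: the 1/2-1/2
# statistics are reproducible, but the list of individual outcomes is
# exactly what no decoherence argument predicts.

rng = np.random.default_rng(seed=0)
outcomes = rng.choice(["alive", "dead"], size=100_000, p=[0.5, 0.5])

freq_alive = np.mean(outcomes == "alive")
# freq_alive is very close to 0.5; outcomes[0], outcomes[1], ... are not
# individually predictable from that statistic.
```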
  12. Dec 22, 2011 #11
    So there would still be no definite state. No collapse has occurred. A lot of physicists have told me system+environment is just a complex superposition. Density matrices only involve partial information about the system+environment.
  13. Dec 22, 2011 #12
    But, doesn't the computer (observer) that's measuring make the wave function collapse? Then it must only measure dead OR alive. If it was determined (evolution of the universe) to measure either and it measures one, then that measurement was always going to happen but we never could have determined (known about) it. Thus, it was a statistic for humans but it was always a real result with respect to time. Although, this doesn't really explain why you get dead sometimes and alive the other. I'm just trying to critique indeterminism but I probably don't know what I am talking about.

    It seems like the Copenhagen interpretation goes something like this:
    If not observed: everything that can happen does happen at all times.
    If observed: everything that can happen does happen but only sometimes.

    So a universe that has a possibility of creating conscious life has to create conscious life because that possibility would make the wave function collapse to that state? Or is this situation purely classical?
  14. Dec 22, 2011 #13
    Thanks to all the replies so far, I'm beginning to get a better idea of where we stand.
    One thing though: why is the idea I originally had a hidden-variables theory which falls foul of Bell's theorem (which I think I understand)? In my original post, the universe is completely deterministic, and there is no information which is not in the quantum states of all its components (including fields). There's nothing 'hidden', any more than the positions of the molecules in a classical gas are hidden, and what I was kind of hoping is that there's a mathematical analysis which can demonstrate that if you evolve a macroscopic system containing correlated photons, decoherence will do Bell-like things to the two measuring apparatuses, which are themselves correlated because they've been in the same universe for long enough.
    And if the two apparatuses weren't already linked in this sense, you'd have no way of knowing if they were aligned at the angle required to demonstrate a Bell correlation.

    I can see various other potential objections to this, the most serious probably being the implication that the information in the entire universe now was already present in the big bang. But that's surely just a consequence of any theory which is entirely unitary, and to my mind provides the biggest reason to suspect that the laws of nature will turn out to contain something more.
  15. Dec 22, 2011 #14

    Ken G

    Gold Member

    In my opinion, this one is fairly easy to resolve at one relatively unsatisfying level, but it requires noticing the role of the physicist in the physics. I agree we don't know, microscopically, why a position measurement accomplishes the decoherence around a position basis, or why a momentum measurement does that around a momentum basis, but the reason we consider them to be position and momentum measurements is that they have these properties. So we simply try lots of different types of macroscopic interactions, and by pure trial and error, we discover the decohering properties of each, and then simply define them to be measurements of the appropriate type. In short, quantum mechanics doesn't tell us what a position measurement is, it merely tells us how to manipulate one mathematically-- it is only we who can say what a position measurement is, and we did that long before quantum mechanics.
    I believe you have made the key point about what decoherence doesn't resolve-- it tells us neither what will happen, nor even why we will perceive a single outcome rather than a superposition. I believe the answer once again has to do with the physicist-- something in how we think/perceive requires that we encounter only a single internally coherent subsystem. Whether or not the larger decohered "many worlds" actually exists or not is a very difficult question for science, and is the entry point into all the different interpretations. In the absence of better observations and a deeper theory that explains them, all these different interpretations are effectively equivalent, and all rely on decoherence (despite a rather widespread misconception that various different interpretations are disfavored by decoherence).
  16. Dec 22, 2011 #15
    According to Euan Squires - no. The computer is just another quantum system.
  17. Dec 23, 2011 #16
    I thought this comment on decoherence and ontology by Leifer was an interesting one:

    What can decoherence do for us?
  18. Dec 23, 2011 #17
    Well I’m not sure about this, unless I'm missing the point.

    I mean, could we not say such a thing for many aspects of physics? For example I don’t consider there really are point like, massless “objects” we call photons “travelling” from a source to a sink, so I don’t attach any ontological significance to that label other than a picture that represents the “event” between measurements performed at the source and sink. The mathematical predictive model hinges only around measurement, not in terms of what really exists in an ontological sense between the measurements. But I’m not going to throw out the predictive model because I don't consider there is an ontology associated with the photon, the predictive model is entirely valid with or without the ontological baggage of the photon – it doesn’t need the ontology in order to be physics. (At least that’s how it seems to me).

    Decoherence theory is weakly objective in principle: it is a theory that is referred to us in terms of there being proper and improper mixtures. The proper mixtures are beyond our capabilities to measure, so we only get access to the improper mixtures; thus the theory cannot provide the realism that Leifer seems to crave, but in terms of a mathematical account of the transition from micro to macro it seems to be a perfectly valid physics model with no pretence of escaping the subjective element.

    I don’t actually think physics is about trying to describe a reality that goes beyond subjective experience; I think it is describing our reality with an apparent separation of subject and object. That separation breaks down at the quantum level giving us a weak objectivity, many would like to think of decoherence as re-establishing strong objectivity, but it doesn’t because of what I said above, namely that decoherence theory is weakly objective because the formalism specifically refers to our abilities (or lack of them). So decoherence cannot answer the foundational issues that Leifer wants in terms of an ontology that is independent of us, but I don’t see that we need to discard decoherence theory because of that. If we adopt that view then surely we would end up discarding most of physics wouldn’t we?

    The issue of realism and decoherence in terms of proper and improper mixtures is explored by Bernard d’Espagnat in “Veiled Reality” and “On Physics and Philosophy”.
  19. Dec 23, 2011 #18
    But I think this is the criticism that Bell tried to highlight: measurement of what? Or information about what? And again, no-one is arguing against a "Veiled Reality". I don't believe that taking a scientific realist perspective leads into "naive" realism. But I do think that taking the alternative perspective does seem to turn physics into the "science of meter reading". As Bell points out in these quotes:

    Against 'Measurement'
  20. Dec 24, 2011 #19

    Ken G

    Gold Member

    I think we are not actually that far apart-- none of us here seem to advocate a science of pure measurement (we hold that our measurements are telling us something about the world). So we are all some flavor of scientific realist-- but we also recognize that we have to measure to do empirical science, and we all recognize that measurement is a kind of filter. We see what passes the filter, because that's what we can do science on, and when we accomplish our goals, we declare that "science works on the real world." But we can notice that science works without needing to believe that science reveals the true world as it actually is-- that is what I would call naive realism, not scientific realism (or "structural realism" or whatever you want to call it). The key distinction is that we can hold there is a real world, and we can hold that it is useful to associate properties with the real world (but only as defined in elements of our theories, because the properties are in the theories not in the real world), and we can have great success, and none of that adds up to the properties themselves being real. Worse, certainly none of it adds up to the idea that properties that we know are simply a subset of a "true properties" that "actually determine" what really happens. That is way beyond scientific realism, and represents a type of blind faith in our own mental faculties that borders on idealism.
  21. Dec 26, 2011 #20
    Here's an interesting paper by Susskind and Bousso:

    Until I get to college and take some quantum mechanics, I'm sticking with Lenny's idea (based on the String-Theory Landscape?).

    Another very interesting paper - on String-Theory Landscape:

    From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously.
  22. Dec 27, 2011 #21

    Ken G

    Gold Member

    I don't know, to me the concept of an "exact observable" is pretty close to a scientific oxymoron. Also, it's very unclear that "anthropic landscapes" explain anything at all-- it is certainly true that everything we perceive must be consistent with such a principle, but so must everything we see be consistent with being visible-- does that explain our seeing it?
  23. Dec 27, 2011 #22
    I don't understand what you're saying here.

    IF string theory is correct, it predicts, from the number of possible alterations of the Calabi-Yau manifold, that there are 10^500 universes, and only very few are capable of life; incidentally, they're the only universes where intelligent life can exist to ask the question or "see" as you have said. That's how it explains it, if I interpreted it correctly.

    "The string theory landscape or anthropic landscape refers to the large number of possible false vacua in string theory. The "landscape" includes so many possible configurations that some physicists think that the known laws of physics, the standard model and general relativity with a positive cosmological constant, occur in at least one of them. The anthropic landscape refers to the collection of those portions of the landscape that are suitable for supporting human life, an application of the anthropic principle that selects a subset of the theoretically possible configurations.

    In string theory the number of false vacua is commonly quoted as 10^500. The large number of possibilities arises from different choices of Calabi-Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. If one assumes that there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP complete, being a version of the subset sum problem."
    (http://en.m.wikipedia.org/wiki/String_theory_landscape) [Broken]
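The subset-sum connection in the quoted passage can be illustrated with a toy brute-force search. The "contributions" below are made-up numbers, not anything from string theory; the point is only that checking every subset scales as 2^n:

```python
from itertools import combinations

# Toy subset-sum search: among signed contributions, find the subset
# whose total is closest to zero. Brute force inspects all 2^n - 1
# non-empty subsets, which is why the search scales badly.

contributions = [13.2, -7.9, 4.4, -9.6, 2.8, -1.1, 6.05, -7.84]

best_subset, best_sum = None, float("inf")
for r in range(1, len(contributions) + 1):
    for subset in combinations(contributions, r):
        total = sum(subset)
        if abs(total) < abs(best_sum):
            best_subset, best_sum = subset, total
```

For 8 contributions this checks 255 subsets; doubling the list size squares the work, which is the qualitative content of the NP-completeness remark.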
  24. Dec 27, 2011 #23

    Ken G

    Gold Member

    I'm saying that in physics, an "observation" is always an interaction that involves certain idealizations and approximations. Hence, all observations come with the concept of "measurement error." This is fundamental to science, it's not some minor detail we can pretend does not exist and invoke a concept of "exact observation." We like to minimize this error, but we do so using a concept of an "accuracy target", and if there was not some finite accuracy target in the observations, then no theory could ever be viewed as satisfactory to explain those "exact" observations. So if we start invoking the concept of an "exact observation", we have left the building of what an observation means, and so we can no longer use scientific language or discuss scientific goals in a coherent way. In short, physics never has, nor has ever been supposed to, deal with an exact reality, it is always supposed to replace the exact reality with an approximated one, both in terms of the observations and the idealized theories.
    Yes, that's the standard mantra, but there is a great difference between a landscape in biology, and this Calabi-Yau "landscape." In biology, one can go to the various different places in the landscape and study the properties there, and say "yes, I understand why there is no life here." Science is possible throughout a real landscape. This is not the case in the string theoretical "landscape", it is purely an imaginary concept. Thus, it raises the issue, "what is an explanation-- is it just something that gives us a warm fuzzy feeling of understanding?" We must be careful on that path-- countless creation myths afforded their users with a similar warm fuzzy feeling of understanding, and a similar absence of testability.
    A number that is conveniently completely untestable. What experiment comes out A if there are 10^500, and B if there are only 10^400? All we can say is that the view gives us a way to understand, but not a way to test that understanding. We should be quite suspicious of that state of affairs-- it is not something new.
  25. Dec 27, 2011 #24
    Hence the phrase "in principle". I'd much rather have a theory based on reality - the way things actually work - and then hope that later on we have the technology to test it. For example, it seems impossible to link every event back to the beginning of the universe to see why it turned out the way it did, but we can study parts of it and find that we could do it in principle, which would suggest, based on the parts we did study, that the universe was determined at the instant of its beginning - because that's actually what happened. Just like you can't prove gravity exists everywhere in the universe or that the dark side of the moon is made out of green cheese, even though both of those ideas are idiotic. So, we can see that in principle the moon should not be made out of green cheese and gravity should exist everywhere in the universe.
    I don't know if this is correct or real, but it seems that is what Lenny is proposing, and it also seems like it could be a better approximation than other quantum mechanical interpretations that you are defending?

    That's precisely why I said "From what I know, if String-Theory/String-Theory Landscape/Anthropic Landscape turns out to be false, it will be the most breathtakingly elegant fable in the history of mankind that explained the history of mankind. That's a personal opinion obviously."
    Furthermore, I think now, in our day and age, the capability of testing our "warm fuzzy feeling" understanding is becoming a reality. I've read many times and watched many documentaries of physicists explaining different ways that we can or will be able to test, experimentally, such ideas. I'll leave you to look into those ideas yourself.
    So, I argue that a huge part, if not the foundation (metaphysics), of science IS finding out how we are here. It wasn't really until Newton that we could apply mathematics to philosophy and then test it. Although, I agree there is no valid question why in the context of a purpose.

    Actually, that number didn't come out of someone's *** (as you make it seem); it came from solving equations from our (arguably) best or most comprehensive theory of reality that unites gravity with the standard model: string theory. Again, I discount the complete validity of String Theory as it doesn't have any way of completely being tested. This is why some people don't call it science. Although, I disagree with that, because many of our best theories of the way the world works were created before they could be tested and have only been proven as a true description of reality through experiment. Furthermore, I just don't get the feeling that some, if not the greatest, minds of our time spent their entire lives working on a theory that they thought wasn't a plausibly accurate description of reality, or a theory that wasn't suggested by experimental evidence. But I am as much of a realist as you are and would like to know the truth, if not the closest thing to it, if there is one. String Theory might be close.
  26. Dec 27, 2011 #25

    Ken G

    Gold Member

    But what does "in principle" really mean? I can't give any meaning to it; to me it just means "actually wrong." It's kind of a nice way of saying "this is wrong but it's right enough to be useful, so I'll say it's right in principle." Why not just say it is approximately right, or right enough for our present purposes? That's the actual truth.
    Why not just say we will enter into those assumptions because we find it useful to do so and have no good reason to do otherwise? Again, this is the actual truth.
    Lenny is what I would describe as a Platonic rationalist. He regards our purest concepts about reality as what reality actually is, before we get into the confusing details that make every situation unique. To me, what makes every situation unique is precisely what we would call "the reality", so what we are doing when we rationalize is intentionally replacing the reality with something that makes a lot more sense to us, and we find it quite fruitful to do so. But you are quite right-- this issue is very much the crux of many of the different interpretations of quantum mechanics. When you go to actually learn how to do quantum mechanics, you may be surprised to find just how unnecessary it is to invoke any particular interpretation in order to get the answer right, but we want more than just the answer, we want to understand the answer.
    I guess what I'm saying is, it will be that even if it is never found to be false. Indeed, the reason it is probably already a fable is precisely because I cannot imagine how it could ever be found to be false, just as I cannot imagine how saying that some deity waved their hand and created the universe yesterday in exactly the form we find it could ever be found to be false. The real purpose of a scientific theory is to suggest experiments that can refute that theory, and the more such experiments that could have refuted it that do not refute it, the more we start to find advantage in that theory.
    I've seen a few, but never any that I found likely or convincing. I'm afraid the existing evidence is quite weak that the "landscape" idea is actually testable. Maybe I just need to wait until some of these experiments actually get done, but you know, I'm just not holding my breath there. It isn't like the predictions of relativity, which suggested many tests that took only a few decades to carry out.
    We don't have any such unifying theory. Many people like to pretend that we do, or that it is just around the corner, but there is no actual evidence of either. Who cares if 10^500 comes from some calculation, if the outcome does not translate into an experiment that comes out differently if it isn't that number but some other number? All of physics has to be framed in the form "experiment A comes out X if theory Y is correct, and something else if it isn't correct", or it just isn't physics.
    I have no issue with string theory as a potential path to a new theory, my issue is with the common claims that it is already a theory, or that the landscape already provides an explanation of anything. One can choose to picture a landscape, and doing so answers certain questions, but that is always true of any philosophy (and indeed any religion). What makes something science, rather than philosophy or religion, is experimental testability.