Quantum Measurement Problem: Nature of Matter or Nature of Probability?

  1. Oct 7, 2014 #1
    The objective of this post is (i) to generate a discussion of whether the results of the double-slit experiment address solely the nature of matter, or whether they also address the nature of probability; and (ii) to determine whether any comparably structured experiments have been performed outside the realm of quantum mechanics, and if so, what their results were.

    Background / My understanding: The famous double-slit experiment in quantum mechanics was designed to answer the question of whether electrons and photons (and other constituents of matter) behave as particles or as waves. The counter-intuitive result is that when no observation is made of which slit the electrons or photons pass through, they act like waves, as evidenced by the interference pattern produced by waves travelling through the two slits and meeting on the other side. On the other hand, when measurements enable the observer to determine which slit an electron or photon passes through, they act like particles, with no interference pattern. Among the explanations of the latter case is that when observation takes place, all wave-function probabilities other than the observed result collapse.
    The debate on the significance of this result in determining the nature of matter on the quantum scale has raged for decades, even delving into the role of human consciousness in influencing the outcome. What seems absent from the debate, as far as I know, is the significance of the result in shedding light on the nature of probability itself, rather than on the nature of matter.

    So here are my questions. Have there been any statistical studies outside of the realm of quantum mechanics that would attempt to mimic the double slit experiment? If so, what have been the results?

    I could envision a test structured with two sets of trials, as follows:

    A large number of trials in which a single, ordinarily marked die is rolled 600 times, with a calculation of the distribution of outcomes across all trials reflecting:
      1. The number of times “1” comes up in each trial;
      2. The number of times “6” comes up in each trial;
    Then perform an equal number of trials, except using a die on which there are no numbers and on which two sides are painted red and the other four blue, and calculate the distribution of outcomes across all trials reflecting the number of times that “red” comes up in each trial.

    Would there be any statistical basis on which to expect that the distribution of outcomes on the second trial would be anything other than the sum of distributions for “1” and “6” on the first trial? If there is any difference, would it correspond to wave-like interference patterns? Is the structure of this test sound? Does this test structure properly mimic the double-slit experiment?

    Following is a slight modification, intended to be equivalent to “narrowing” of the slits, which I understand to be important in the double-slit experiment.
    1. Perform a large number of trials in which a single, ordinarily marked die is rolled 600 times, and calculate the number of trials for which “1” comes up between 99 and 101 times, as well as the number of trials for which “6” comes up between 99 and 101 times:
    2. Then perform an equal number of trials, except using a die on which there are no numbers, and on which two sides are painted red, and the other four sides are painted blue, and calculate the number of trials for which “red” comes up between 198 and 202 times.
    Would there be any statistical basis on which to anticipate that the number of trials for which “red” comes up between 198 and 202 times, would be any different than the sum of the number of trials for which “1” and “6” come up between 99 and 101 times? Have any tests like this been performed? What would be the relevance of these tests to the double-slit experiment?
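    For what it's worth, the proposed trials are easy to simulate. The sketch below (a hypothetical Monte Carlo, not a report of an actual performed study) runs both versions: an ordinary die where the "1"s and "6"s are tallied separately, and a red/blue die where only "red or not" is recorded. Classically the red count is binomial with p = 2/6, which is exactly the distribution of (#1 + #6), so no interference-like deviation appears.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    TRIALS, ROLLS = 2000, 600

    # Version 1: ordinary die; tally "1"s and "6"s in each trial of 600 rolls.
    rolls = rng.integers(1, 7, size=(TRIALS, ROLLS))
    ones = (rolls == 1).sum(axis=1)
    sixes = (rolls == 6).sum(axis=1)

    # Version 2: red/blue die; only "red" (2 faces out of 6) is observable.
    red = (rng.integers(1, 7, size=(TRIALS, ROLLS)) <= 2).sum(axis=1)

    # Classically both counts are Binomial(600, 2/6):
    # mean = 200, variance = 600 * (1/3) * (2/3) ~ 133.
    print(f"mean(ones + sixes) = {(ones + sixes).mean():.1f}")
    print(f"mean(red)          = {red.mean():.1f}")
    print(f"var(red)           = {red.var():.1f}")
    ```

    The two distributions agree to within sampling noise, which is the "no difference" answer the questions above anticipate for a purely classical test.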

    To me, such an outcome would suggest that the double-slit experiment sheds less light on the nature of matter than it does on the nature of probability. All comments welcome. Thanks!
  3. Oct 7, 2014 #2


    Gold Member

    I can't see any correlation at all between the probabilities of classical situations (e.g. die rolling) and the quantum nature of matter demonstrated by the double slit experiment. Perhaps I'm missing something and more knowledgeable people will correct me, but I'm just not getting it.
  4. Oct 7, 2014 #3


    Science Advisor
    Gold Member

    I've thought a lot about what the double slit has to tell us about the nature of quantum measurement and probability. I do research on quantum measurement and entanglement, and often come back to these thought experiments for new insights.

    First, most everyone agrees on the predictions that the math makes:
    -If there is any information that could tell you which path the electron went through, then you will not be able to see interference.
    -The more information there is about which path the electron went through, the less interference you will be able to see.

    Mathematically, we can say that any measuring device that could gather which-path information would have to interact with the electron. This interaction would change the state of the electron, so that it would be entangled with the state of the device. If the device says "slit 1", the state of the electron would be a wave coming out of slit 1. If the device says "slit 2", the state of the electron would be a wave coming out of slit 2. If we simply look at the electron instead of the device, the interaction still happened. Because of that, we would have to see a classical mixture of the distribution from slit 1, and the distribution from slit 2, instead of an interference pattern between two waves.
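    A quick numerical illustration of that last sentence (a sketch with made-up slit geometry, not tied to any particular experiment): with no which-path information you add the two slit amplitudes and then square, while with which-path information you square each and then add, and the cross term, i.e. the fringes, disappears.

    ```python
    import numpy as np

    wavelength = 500e-9            # illustrative numbers: 500 nm
    k = 2 * np.pi / wavelength
    d, L = 0.5e-3, 1.0             # slit separation and slit-to-screen distance (m)
    x = np.linspace(-0.01, 0.01, 4001)   # positions on the screen (m)

    # Paraxial path lengths from each slit to screen point x.
    r1 = L + (x - d / 2) ** 2 / (2 * L)
    r2 = L + (x + d / 2) ** 2 / (2 * L)
    psi1 = np.exp(1j * k * r1)     # amplitude through slit 1
    psi2 = np.exp(1j * k * r2)     # amplitude through slit 2

    interference = np.abs(psi1 + psi2) ** 2          # no which-path info: fringes
    mixture = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # which-path known: flat

    print(interference.min(), interference.max())    # ~0 and ~4: full-contrast fringes
    print(mixture.min(), mixture.max())              # 2 and 2: no fringes at all
    ```

    Both patterns integrate to the same total probability; only the cross term 2·cos(k(r1−r2)) distinguishes them.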

    Now, as far as what the two-slit experiment has to say about our understanding of probability, that's a matter of interpretation. And there are a lot of interpretations. As for what the double-slit experiment has to say about the nature of matter, that's even more unresolved. I think I can safely say that it illustrates a fundamental limit on the information we can obtain about the initial "position" (near-field pattern) and "momentum" (far-field pattern) of an electron (i.e. the Heisenberg uncertainty limit). To keep myself sane, I think of the double-slit experiment simply as an example of how quantum systems behave, and try to reckon with it on its own terms. The really interesting question then becomes: "how do we get (approximately) simple Newtonian-ish behavior for very large blocks of atoms?"

    Hope this helps:)
  5. Oct 7, 2014 #4


    Staff: Mentor

    The wave-particle duality is an outmoded idea from the early days of QM that was consigned to the dustbin of history when Dirac came up with his transformation theory in about 1927.

    It hangs around today because of the semi historical approach of many popularisations and textbooks.

    The experiment wasn't designed for anything. It's simply something that needs to be explained.

    The issue is that beginner textbooks and popularisations often use it as motivation for the full QM formalism and/or to illustrate some aspects of quantum weirdness at the beginner level. Unfortunately, they do not then go back and show how that formalism explains the experiment, leaving people to believe that what was done for pedagogical reasons is what's actually going on.

    Here is the correct quantum explanation:

    It's got nothing to do with wave-particle duality; it has more to do with the geometry of the situation and the principles of QM.

    Just as an aside, there is a fundamental divide between classical probability theory and QM to do with continuous transformations between so-called pure states.

    Suppose we have a system with 2 states represented by the vectors [0,1] and [1,0]. These states are called pure. These can be randomly presented for observation, and you get the vector [p1, p2], where p1 and p2 give the probabilities of observing each pure state. Such states are called mixed. Now consider the matrix A that, say, after 1 second transforms one pure state to the other, with rows [0, 1] and [1, 0]. But what happens when A is applied for half a second? That would be a matrix U with U^2 = A. You can work this out and, lo and behold, U is complex. Apply it to a pure state and you get a complex vector. This is something new. It's not a mixed state, but you are forced to it if you want continuous transformations between pure states.
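    That half-second argument can be checked directly. The sketch below (plain numpy, building the square root from an eigendecomposition rather than any special routine) constructs a U with U² = A for the swap matrix and confirms that U is unavoidably complex even though A is real.

    ```python
    import numpy as np

    A = np.array([[0.0, 1.0],
                  [1.0, 0.0]])   # swaps the pure states [1,0] and [0,1] in 1 second

    # A has eigenvalues +1 and -1, and sqrt(-1) = i, so any square root
    # of A must pick up an imaginary part.
    w, V = np.linalg.eigh(A)
    U = V @ np.diag(np.sqrt(w.astype(complex))) @ V.conj().T  # "A for half a second"

    print(np.round(U, 3))        # entries are 0.5 +/- 0.5i: genuinely complex
    print(np.round(U @ U, 10))   # recovers A
    print(np.round(U @ np.array([1.0, 0.0]), 3))  # pure state -> complex vector
    ```

    Each component of U @ [1,0] has squared magnitude 1/2, so after half a second the system is in a genuine superposition, not a classical 50/50 mixture.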

    QM is basically the theory that makes sense out of pure states represented by complex vectors. It is this that allows for interference effects; these can't be explained via classical probability theory.

    Last edited: Oct 7, 2014
  6. Oct 7, 2014 #5


    Staff: Mentor

    To the OP: this is Feynman's famous sum-over-histories approach:
    http://www.aip.org/cip/pdf/vol_12/iss_2/190_1.pdf [Broken]

    It's actually the best way to view the double-slit experiment to start with, because it doesn't require unlearning ideas like wave-particle duality later. In fact, as Feynman said, once you understand it that way, when someone later asks "what's going on here?", you say: remember the double-slit experiment? Same thing.

    Last edited by a moderator: May 7, 2017
  7. Oct 10, 2014 #6
    More likely it is me not getting it, but here goes. I raised the question using "double-slit" test results as its basis, but really the question has more to do with probability theory itself. Is there any evidence, or any basis in classical probability theory, that probability itself has wave-like characteristics, such that there would be the equivalent of interference patterns in trial outcomes if a test were structured in a manner analogous to the double-slit experiment? In other words, would there be any statistical basis on which to expect that the number of positive outcomes on an "either event A or event B" test would ever be different than the sum of positive outcomes for events A and B tested separately?

    For these purposes, assume that events A and B have equal probability, and that in the "either A or B" trials the tester would have no knowledge of which of the two events generated the positive outcomes. To me, this is what would make such a test analogous to the QM double-slit experiment, although this is more of a probability-theory question than a QM question. Maybe the wrong forum? Thanks.
  8. Oct 10, 2014 #7

    Doug Huffman

    Gold Member

    Classical probability theory developed from, and is very much stuck on, frequentist statistics even now, almost a century after QM matured. In general, a QM observation alters the prepared state being observed.

    I recommend that you look into the Bayesian interpretation of QM and statistics. My textbook (p-book, actually!) on the subject is Edwin Thompson Jaynes' Probability Theory: The Logic of Science (Cambridge University Press, 2003, ISBN 0-521-59271-2).

    I particularly enjoyed his Chapter 5, IIRC 'Queer Uses of Probability', in which he formalizes political polarization very nicely.
  9. Oct 10, 2014 #8



    Staff: Mentor

    No evidence whatsoever, and no basis for such a phenomenon in probability theory, whether we're dealing with classical or quantum systems.
    The interference that quantum mechanics predicts and that we observe in the double-slit experiment does not show that "probability has wavelike characteristics"; it shows that another quantity called the "probability amplitude" does. These probability amplitudes are not probabilities and don't behave like probabilities.
  10. Oct 10, 2014 #9
    Thanks for the thoughtful and thorough reply! Just to be clear, then, is it the absence of continuous transformations between pure states that disallows application of classical probability theory? Or, using the example of rolled dice: there are only discrete outcomes, so no continuous transformations between pure states, and therefore no interference patterns?
  11. Oct 10, 2014 #10
    Got it. Thanks for the clarification.
  12. Oct 10, 2014 #11


    Staff: Mentor

    Can't really follow you there. There are frequentist interpretations (e.g. the ensemble interpretation) and Bayesian ones (e.g. Copenhagen). The axioms of QM simply mention probabilities without a specific view. I personally simply associate them with the Kolmogorov axioms.

  13. Oct 10, 2014 #12


    Staff: Mentor

    Sort of.

    It's a bit technical; more detail can be found here:

    Some reasonable assumptions lead to two theories that describe physical systems: standard probability theory and QM. What separates them is continuous transformations between pure states. It also means standard probability theory can't model QM.

  14. Oct 10, 2014 #13


    Gold Member

    There is not a classical world in which we have to add probabilities versus a quantum world in which we have to add amplitudes.
    A rope has two ends, but what is real is the rope, not its ends.

    In the real world, the position on "the rope" is characterized by the visibility of the fringes: V lies between 0 (no interference) and 1 (perfect interference), and there is a formula, depending on V, that gives the pattern.
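    A minimal sketch of the formula being referred to. I'm assuming here the standard fringe-pattern expression I(φ) ∝ 1 + V·cos(φ), with V the visibility, since the post doesn't write it out. The contrast measured off the pattern recovers V, interpolating smoothly between the "classical" V = 0 end of the rope and the "fully quantum" V = 1 end.

    ```python
    import numpy as np

    def pattern(phi, V):
        """Fringe pattern with visibility V: I = 1 + V*cos(phi)."""
        return 1.0 + V * np.cos(phi)

    phi = np.linspace(0, 4 * np.pi, 1000)
    for V in (0.0, 0.5, 1.0):
        I = pattern(phi, V)
        # Michelson visibility measured from the pattern itself.
        contrast = (I.max() - I.min()) / (I.max() + I.min())
        print(f"V = {V:.1f}  ->  measured contrast = {contrast:.2f}")
    ```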
  15. Oct 10, 2014 #14
    Probability amplitude: ##\psi##
    Probability density: ##|\psi|^2##

    Probability: ##\int\limits_{\tau _1 }^{\tau _2 } {|\psi|^2 \, d\tau }##

    What is the physical meaning of the probability amplitude? Is it just a physical concept that has no mathematical sense?

  16. Oct 10, 2014 #15


    Staff: Mentor

    QM is a mathematical model.

    The Born Rule, from which your equations follow, has both a mathematical and physical meaning:

    It speaks of both mathematical concepts (eg eigenvalues and probabilities) and physical concepts (eg measuring).

    This is typical of the axioms of physics that intermix the two.

    That's why it's physics and not math. It's exactly the same as good old bog-standard Euclidean geometry. It speaks of things out there called points and lines, but abstract points and lines.

    The observations of QM, just like the points and lines of geometry, are primitive abstractions that physicists need to identify in a given situation. Of course, in any given situation it's trivial, just as identifying points and lines is trivial for a person applying geometry. But the theory is useless unless you do it.

    Last edited: Oct 10, 2014
  17. Oct 11, 2014 #16
    In probability theory I have never seen the concept of amplitude.

    What is its mathematical meaning?

  18. Oct 11, 2014 #17


    Staff: Mentor

    It simply means that, given a complex vector with components ci, you take |ci|^2 to get the probability of outcome i, rather than ci itself. In that case the ci are called probability amplitudes. The reason you have complex probability vectors is, as I explained previously, that it's required by continuity.
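    A tiny worked example of the difference this makes (illustrative numbers only): with classical probabilities the two-path chances just add, but with amplitudes you add first and square after, and the cross term is the interference.

    ```python
    import numpy as np

    # Amplitude for each path; equal magnitudes, relative phase theta.
    a1 = 1 / np.sqrt(2)
    for theta in (0.0, np.pi / 2, np.pi):
        a2 = np.exp(1j * theta) / np.sqrt(2)

        p_classical = abs(a1) ** 2 + abs(a2) ** 2  # add probabilities: always 1.0
        p_quantum = abs(a1 + a2) ** 2              # add amplitudes, then square

        print(f"theta = {theta:.2f}: classical = {p_classical:.2f}, "
              f"quantum = {p_quantum:.2f}")
    ```

    The quantum value swings from 2 down to 0 as the phase varies; that swing, traced out across the screen, is the fringe pattern.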

  19. Oct 11, 2014 #18
    In the wiki we can read

    This seems to be a physical interpretation of the wave function (when we do a measurement?), not a mathematical probability concept.

  20. Oct 11, 2014 #19


    Staff: Mentor

    It's a mathematical assumption, not a physical one. The physical assumption is mapping it to the outcomes of observations.

    It's all simply a consequence of the Born rule that has been discussed previously.

  21. Oct 11, 2014 #20
    What is the mathematical assumption? To be a complex number?

