Occam's razor and probability in QM

  1. Feb 7, 2017 #1
    I am a layman of sorts in both physics and philosophy. I embarked on a trip to acquaint myself (gently at first) with contemporary physics. My question is foundational and therefore probably philosophical. If it is off-topic, please kindly point me to any place where such discussions may be held, but I prefer a material point of view.

    Basically, I have trouble understanding what "probability" means in QM. I have no hard objections to the hypothesis of physical non-determinism. What I cannot fathom is how one adapts the classical positivist attitude (like Occam's razor) into something useful while dealing with the Copenhagen interpretation.

    Let's say that the law of large numbers, the central limit theorem, Bayes' rule (and parameter estimates from sample statistics), etc. are metaphors for connections we believe exist between human sense data (or human experience) and what we call probability. I assumed that we use probability to assign costs to outcomes and aggregate them to quantify a decision. A subjective, albeit constructivist, process. I believed that it is used only as an epistemological device. QM introduces ontological probability. What does this mean for falsifiability? What is the notion into which falsifiability transforms, and what are its observable virtues?

    For example, how would one introduce a concept such as Bayesian inference as a verifiable physical reality? I cannot see the difference between probability and belief in the world view of QM. Is probability an irreducible notion, and what is the sense data that "corresponds" to its manifestations? (I am not asking about non-determinism as physical reality, but about quantifiable non-determinism - the existence of probabilities directly in nature.)

    PS: I have tried to use the accepted terminology, but apologize if I have failed to do so.
     
  3. Feb 7, 2017 #2

    PeterDonis


    Staff: Mentor

    It means the squared norm of the complex amplitude. :wink:

    Seriously, except for what I just said, probability means exactly the same thing in QM as in classical mechanics. The only difference is that in QM, since the underlying thing that determines probability is a complex amplitude, you can get interference effects that you can't get in classical mechanics.
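    To make the interference point concrete, here is a minimal Python sketch (the two-path setup and amplitude values are made-up illustrations, not from any specific experiment):

```python
import numpy as np

# Two indistinguishable paths to the same detector, each with a complex
# amplitude (relative values, not normalized over all possible outcomes).
a1 = (1 / np.sqrt(2)) * np.exp(0j)
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi / 3)  # relative phase of 60 degrees

# Classical mixing: the probabilities of the two paths simply add.
p_classical = abs(a1)**2 + abs(a2)**2

# Quantum rule: add the amplitudes first, then take the squared norm.
p_quantum = abs(a1 + a2)**2

# The difference is the interference term 2*Re(a1 * conj(a2)).
interference = 2 * (a1 * np.conj(a2)).real
print(p_classical)                 # 1.0
print(p_quantum)                   # 1.5
print(p_classical + interference)  # equals p_quantum
```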

    Only on certain interpretations. You can do QM without them.
     
  4. Feb 7, 2017 #3
    Yes. To be honest, browsing the list of interpretations on Wikipedia turns you into a window shopper. To my understanding, there are issues involved with having non-local causality versus non-local correlations, but personally, as a neophyte for the time being, I would probably have adopted a neutral attitude (no matter what literature I decided to use). The issue is that Richard Feynman was an avid supporter of ontological probability, and I have grown substantial respect for the man. It is difficult for me to overlook that fact.
     
  5. Feb 8, 2017 #4

    bhobba

    Staff: Mentor

    It's certainly hard to ignore Feynman. Towards the end, after attending some lectures by Gell-Mann, he became a supporter of Consistent Histories:
    http://quantum.phys.cmu.edu/CQT/index.html

    And in fact QM at the level of its formalism is an extension of probability theory:
    http://www.scottaaronson.com/democritus/lec9.html

    But like probability itself, which is based on the Kolmogorov axioms, its status, i.e. ontological or not, is purely interpretive.

    Thanks
    Bill
     
  6. Feb 8, 2017 #5

    Demystifier

    Science Advisor

    In QM probability is interpreted in terms of frequencies, not in terms of Bayesian reasoning. That's because frequencies are what physicists measure.
     
  7. Feb 8, 2017 #6

    A. Neumaier

    Science Advisor

    Do you have no trouble understanding what "probability" means in classical mechanics? Please explain what the latter means to you, so that we can understand what you mean by 'understanding'!
     
  8. Feb 8, 2017 #7
    Probability in QM is determined by scalar products of state vectors. For example, the projection of a state vector onto an eigenvector gives a complex amplitude, ##\psi_i##. Now, by analogy with cartesian vectors, recognize that an axis nearer to the vector has a greater projection. So there is a sense in which the eigenvector in QM that is "nearest" to the state vector has the greatest scalar product, and so it is intuitively obvious that we should interpret that eigenstate as having the greatest probability, given the state information we have. To obtain a statistical interpretation of that abstract idea of "probability" we need a quantity that is real and non-negative. Taking the product of that scalar product with its complex conjugate, ##\psi_i^* \psi_i##, provides a simple means to do that and is called the Born rule. That's where Occam's razor comes in.

    Notice that we don't simply take the modulus, ##\sqrt{\psi_i^* \psi_i}##, because that would raise the question of what the negative square root, ##-\sqrt{\psi_i^* \psi_i}##, means.
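    As a small illustration of that recipe in Python (the state vector and basis below are made-up numbers, not from any particular system):

```python
import numpy as np

# A normalized state vector in a 3-dimensional Hilbert space (made-up numbers).
psi = np.array([0.8 + 0.0j, 0.36 + 0.48j, 0.0 + 0.0j])
psi = psi / np.linalg.norm(psi)  # already normalized here, but be safe

# An orthonormal eigenbasis of some observable (here simply the standard basis).
basis = np.eye(3, dtype=complex)

# Amplitude for each eigenvector: the scalar product <e_i|psi>.
amplitudes = np.array([np.vdot(e, psi) for e in basis])

# Born rule: probability = amplitude* x amplitude, real and non-negative.
probabilities = (amplitudes.conj() * amplitudes).real

print(probabilities)             # [0.64, 0.36, 0.0]
print(probabilities.sum())       # 1.0
print(np.argmax(probabilities))  # the eigenvector "nearest" to psi wins
```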
     
    Last edited: Feb 8, 2017
  9. Feb 8, 2017 #8
    Frequentist, only epistemologically Bayesian and constructivist. Verifiable in principle by exhaustive measurement and thus fully observable. As a measure of some physical inclination, it appears to me that there will always be a possibility of living in a universe where the manifested frequencies do not converge to the source distribution (with zero probability, if you will), which would render the concept vacuous.

    In any case, I think I have too vague a picture of QM at the moment. I should probably try to acquaint myself with the details, before seeking further clarification and trying to form an opinion. Thanks to everyone.
     
  10. Feb 8, 2017 #9
    Very interesting question; "ontological probability" is a central point in interpretations of QM like the Copenhagen version.
    I have no time to elaborate a complete answer now... perhaps tomorrow.
    For the moment, consider the concept of probability in the simplest possible way: have you never played dice, or roulette in a casino, or a slot machine? Well, the meaning of probability in QM is the same. Now, with Occam's razor I am a little rusty, and having almost no facial hair, I shave with a simple Gillette razor.
     
  11. Feb 8, 2017 #10
    Well, that is my point. Suppose that you want to demonstrate that a die lands on each face 1/6th of the time. Then a six is rolled continuously. There is nothing contradictory in such a result. But then, realities that can neither be validated nor need ever manifest appear counter-scientific to my intuition. Is probability a reality or a mental attitude? Only the former can be called a physical property.

    I may have a problem with my philosophical understanding of probability, of course. Or maybe there is a method for observing the underlying probabilities in QM directly (i.e. the wavefunction) that does not involve sample statistics. As I said, I need to get better acquainted with the theory. Otherwise, I am confused about how to explain that we can embody our state of practice in a physical property for which no detection appears to be logically possible.

    Edit: To clarify, when I said observing the probabilities directly in QM, I meant after collapsing the wave-function (a posteriori). Not that I understand the phenomena in question very well, but I don't mean to imply non-intrusive detection.
     
  12. Feb 9, 2017 #11

    PeterDonis


    Staff: Mentor

    Contradictory to what? It's certainly contradictory to your hypothesis that the die should come up on any given face 1/6th of the time. Which immediately suggests that you should consider alternate physical models that make different predictions. For example, the hypothesis that the die is loaded would be a much more parsimonious way of explaining why a six comes up every time.

    I'm not sure what you're talking about. The actual results of the die rolls are perfectly real. The prediction that six would come up 1/6th of the time was a prediction based on a physical model (that the die is fair), which is easily testable--in the case you describe, it was tested and falsified, whereas another physical model, that the die is loaded, would be supported by the data (at least the data collected thus far).
     
  13. Feb 9, 2017 #12
    So, if I get 2-3-6-5-5-2-4-1, I should allow the hypothesis to stand. If I get 6-6-6-6-6-6-6-6, I should call the die loaded? But both have probability ##1/6^8##. Yes, a die loaded toward six is more likely to produce the latter result. But a die biased toward 2 and 5 is more likely to produce the former, too. I could seek the die's probabilities using maximum likelihood estimates, but that is not the point. What I am getting at is that doubting the die after the second outcome is not better supported mathematically than doubting it after the first outcome, whatever the connotations of those outcomes may be in practice. Until I roll a 7, nothing disproves the assumption, because the whole point of a fair die is that all outcomes are equally "surprising".

    Then, I can turn the question around. If we agree that the two cases above are not indicative of the fair-die hypothesis (for the reasons I have stated), couldn't we say that the hypothesis does not describe any aspect of the real world? What does it mean to be true or false in a material world, if there are no related physical observables?
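    A quick Python sketch of the arithmetic in the first paragraph (the two biased dice below use made-up probabilities, purely for illustration):

```python
from functools import reduce

# Outcome probabilities for a fair die and two made-up biased dice.
fair = {k: 1 / 6 for k in range(1, 7)}
loaded6 = {1: 0.04, 2: 0.04, 3: 0.04, 4: 0.04, 5: 0.04, 6: 0.80}
biased25 = {1: 0.125, 2: 0.25, 3: 0.125, 4: 0.125, 5: 0.25, 6: 0.125}

def sequence_probability(model, rolls):
    """Probability of observing this exact sequence of rolls under the model."""
    return reduce(lambda p, r: p * model[r], rolls, 1.0)

mixed = [2, 3, 6, 5, 5, 2, 4, 1]
sixes = [6] * 8

# Under the fair die both sequences are equally probable: 1/6**8.
print(sequence_probability(fair, mixed), sequence_probability(fair, sixes))

# But each biased die makes "its" sequence more probable than the fair die does.
print(sequence_probability(loaded6, sixes))   # 0.8**8, much larger than 1/6**8
print(sequence_probability(biased25, mixed))  # also larger than 1/6**8
```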
     
  14. Feb 9, 2017 #13

    PeterDonis


    Staff: Mentor

    You should certainly adjust the relative probabilities that you assign to those hypotheses based on the results of the rolls.

    Yes, and if that is one of the hypotheses you are considering, you should adjust its relative probability as well.

    In other words, before you can even give meaning to the idea of "probability", you have to know which hypotheses you are considering. There are an infinite number of possible hypotheses, so in order to assess probabilities at all, you have to pick which ones you think are worth assessing and which ones are too unlikely to matter. Then you can try to assign probabilities to the ones that you think are worth assessing, such that all of the probabilities add up to 1. And as data comes in, you adjust your probabilities accordingly.
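    A minimal sketch of that updating loop in Python (the hypothesis set, priors, and bias values are my own illustration):

```python
# Hypotheses worth assessing, with priors that sum to 1 (made-up numbers).
fair = {k: 1 / 6 for k in range(1, 7)}
loaded6 = {k: (0.80 if k == 6 else 0.04) for k in range(1, 7)}
models = {"fair": fair, "loaded toward 6": loaded6}
posterior = {"fair": 0.9, "loaded toward 6": 0.1}

# As data comes in, multiply by the likelihood of each new roll and renormalize.
for roll in [6, 6, 6, 6, 6, 6, 6, 6]:
    for h in posterior:
        posterior[h] *= models[h][roll]
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    print(roll, posterior)  # "loaded toward 6" overtakes "fair" within a few rolls
```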

    It would not appear to describe that particular die; but is that the only die in the world?

    Is the die roll not a physical observable?
     
  15. Feb 9, 2017 #14
    My hypotheses are that the die is fair and that the die is not fair (i.e., its negation). I only need to compare the posterior probabilities of the "fair die" hypothesis given the two outcomes. I am evaluating the posterior probability after the two roll events. I have this:
    $$P(H\mid E_i) = \frac{P(E_i\mid H)}{P(E_i)} \cdot P(H)$$
    So, unless I am predisposed to the events themselves, I should be incapable of producing a different degree of experimental confirmation ##P(H\mid E_i)## from any of them.
    Not a discriminating one, in my case. So, essentially not.

    Truth be told, the example is engineered to make it stick mathematically, but it is merely a device that suggests the meta-scientific issue I am unable to reconcile. I am not focusing on how to choose the hypothesis, difficult as that may be. Let's say I have decided that the hypothesis is true.

    I doubt that a probabilistic notion will be detectable by human experience. I want to know that there will be measurable consequences. Probability is an inclination. Such an inclination, if not directly measurable in principle, must be witnessed through dependent events. But the events are not guaranteed to expose its influence, due to their non-deterministic nature. Therefore the relationship between experience and the physical property may never be witnessed, even if that property is real. Then, what makes this property justified and real? I am not asking which hypothesis is more probable a posteriori. I am assessing the scientific virtues of the physical reality behind such a hypothesized inclination.

    Now, if there were a claim that the interactions that involve QM waves and collapse them to produce random particle-like results are not independent, that would be a different story. In other words, if a photon from the wavefront of a light source energized electrons at random locations, but consecutive locations were guaranteed to soon become fairly distributed, because they are not chosen completely independently, that would be easier for me to interpret. But I doubt that this is the notion (or the experimental evidence) at the moment. That would also provide foundations for probability in general, because with QM, all notions of probability receive a new physical foundation.
     
  16. Feb 9, 2017 #15

    PeterDonis


    Staff: Mentor

    But then your second hypothesis is extremely weak: there are lots and lots of ways the die could be not fair. How do you distinguish between them?

    I have no idea what this means. The posterior probability of "the dice is fair" is certainly different for the two different results you give.
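    To make this concrete, here is a sketch in Python where "not fair" is modeled as a mixture of just two illustrative biased dice (a crude stand-in for a proper prior over all possible biases; all numbers are made up). The posterior of "fair" then comes out very different for the two sequences:

```python
from functools import reduce

def sequence_probability(model, rolls):
    """Probability of the exact sequence of rolls under the model."""
    return reduce(lambda p, r: p * model[r], rolls, 1.0)

fair = {k: 1 / 6 for k in range(1, 7)}
# "Not fair" as an equal-weight mixture of two illustrative biased dice.
loaded6 = {k: (0.80 if k == 6 else 0.04) for k in range(1, 7)}
biased25 = {k: (0.25 if k in (2, 5) else 0.125) for k in range(1, 7)}

def posterior_fair(rolls, prior_fair=0.5):
    like_fair = sequence_probability(fair, rolls)
    like_unfair = (0.5 * sequence_probability(loaded6, rolls)
                   + 0.5 * sequence_probability(biased25, rolls))
    numerator = prior_fair * like_fair
    return numerator / (numerator + (1 - prior_fair) * like_unfair)

print(posterior_fair([2, 3, 6, 5, 5, 2, 4, 1]))  # ~0.56: barely moves from 0.5
print(posterior_fair([6, 6, 6, 6, 6, 6, 6, 6]))  # ~7e-6: collapses toward 0
```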

    I don't understand this either. You don't "decide that the hypothesis is true". You compute the posterior probability that it is true given your prior and the data. And you can do that for any hypothesis you wish.

    I don't understand this either. The different posterior probabilities of different hypotheses, given the same prior and the same data, are measurable consequences.

    I'm afraid I don't understand what you are saying sufficiently to help you. I suspect that your own understanding is too confused to formulate a coherent question.
     
  17. Feb 9, 2017 #16
    In general terms, this reasoning is correct, although I do not see clearly where it is meant to end. You roll a die; you do not know what number will come out, you only know that it will be a number between 1 and 6, with probability 1/6 each. This means that in about 60 throws, the number 1 will come out about 10 times, the number 2 about 10 times, and so on. But there are two facts. As you said, if in 60 throws a six comes out sixty times, this is not inconsistent with anything: it is possible, but unlikely. The second fact is that in about 60 throws it is very difficult for the number 1 to come out exactly 10 times, the number 2 exactly 10 times, etc. Probably the number 1 will come out 8 times, the number 2 will come out 12 times, and so on. The really important thing is that the more you increase the number of throws, the better the sequence of results approximates the "theoretical" probability. The probability corresponds to a theoretically infinite number of throws.

    Now, we establish the following. We write down the "die equation", i.e. a table with the possible outcomes of the die (numbers 1-6) and for each result its "chances": 1: about 16%; 2: about 16%, and so on. Now we have on one side the theory (our table written on a sheet) and on the other side the die, that is, the experimental apparatus. All we have to check is whether ours is a good theory, that is, whether the experimental results are consistent with what we wrote. To do this we have to roll the die and see what number comes out. But for the verification to make strict sense, we would have to perform a hypothetically infinite number of throws. It is enough to make "many" throws and verify that the more we make, the better our theoretical table accords with the results.

    Once a number comes out, this is the result of our measurement, which in QM corresponds to the "collapse of the wave function". Once the die has stopped on the table and shows the number 3, for example, the state of the die is defined as the number 3. But during the throw, before the die stops, what is its "value"? Well, the Copenhagen answer is that the die is 1 at 16%, 2 at 16%, and so on - but not only that. These are not hypothetical values but the actual values of the die; that is to say, they all exist simultaneously, with those probabilities. This is, more or less, what corresponds to the concept of ontological chances and to the superposition principle: attention! During the throw of the die, that is, before we carry out the measurement, the values of the die are really all the numbers from 1 to 6 simultaneously, each with a probability weight of about 16%.
    All this may seem rather abstract in the case of the possible numbers on a die, but it becomes extremely practical when it comes to electrons and electric charge.
     
    Last edited: Feb 9, 2017
  18. Feb 9, 2017 #17

    bhobba

    Staff: Mentor

    That's incorrect.

    Instead of giving the answer and its reasoning, let's start with you stating the superposition principle - not a link, not a quote, but your own words.

    BTW the answer is that it's purely interpretive - but let's go carefully through the reasoning, starting with this fundamental principle.

    Thanks
    Bill
     
  19. Feb 9, 2017 #18
    That is not at all clear to me. Bhobba, instead of saying that my arguments are not correct (and maybe they are not), would you kindly answer directly the question posed by simeonz?
    For me it is very pleasant to learn my mistakes, or perhaps where my explanation is not clear.

    Thanks
     
  20. Feb 9, 2017 #19

    bhobba

    Staff: Mentor

    Why avoid answering my question on the superposition principle? Your writings so far show you do not know what it is.

    If you can't do it, say so and we can go from there. But true understanding comes from doing the reasoning yourself.

    BTW Peter Donis answered simeonz correctly - basically his question is pretty hard to understand, but he is working through that. What I am interested in is fleshing out your reasoning - it's wrong - but like I said, it's best you figure that out rather than have me tell you. So let's start with the principle of superposition - state it and we can go from there. If you can't state it, say so and we can start with that first.

    Thanks
    Bill
     
  21. Feb 9, 2017 #20

    bhobba

    Staff: Mentor

    That is the area of statistical inference.

    The law of large numbers is rigorously derivable from the Kolmogorov axioms:
    https://terrytao.wordpress.com/2015/10/23/275a-notes-3-the-weak-and-strong-law-of-large-numbers/

    But it is a limit theorem, i.e. it is true as the number of events is taken to infinity - which of course can't be realized, since an experiment can't be repeated an infinite number of times.

    In practice one picks a suitably small number, say 1/googolplex, and says that for all practical purposes, if the probability is that close to 1, we take it as 1. Don't like it? Welcome to applied math.
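    A quick simulation of that limiting behaviour (just a sketch; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Relative frequency of rolling a six on a simulated fair die, as the
# number of throws grows. It drifts toward 1/6 ~ 0.1667, but equality
# holds only in the (unreachable) limit of infinitely many throws.
for n in [60, 600, 6_000, 60_000, 600_000]:
    rolls = rng.integers(1, 7, size=n)  # uniform integers 1..6
    print(n, np.mean(rolls == 6))
```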

    Thanks
    Bill
     