On the myth that probability depends on knowledge

  1. Please elaborate on it here!
    Last edited: May 2, 2011
  3. Stephen Tashi

    Stephen Tashi 4,470
    Science Advisor
    2014 Award

    Someone will have to explain what "objective probabilities" are. If you begin with the assumption that there are probabilities that would be agreed upon by every observer, I suppose you automatically make them independent of knowledge by postulating that all those observers have the same knowledge.
  4. Physics Monkey

    Physics Monkey 1,363
    Science Advisor
    Homework Helper

    This thread title made me laugh, so I'll bite.

    What is the objective probability that the gas molecules in a box of air are in configuration x? Given that the gas molecules were in a definite state in the past, can the "objective" answer be anything other than [tex] \delta(x - x_{\mbox{actual}}(t)) [/tex] (schematically)?

    I'm genuinely curious what people think.
  5. JesseM

    JesseM 8,489
    Science Advisor

    Objective probabilities in an experiment can be understood in frequentist terms, as the frequency with which some event would occur in the limit as the number of trials of the experiment goes to infinity, with the factors that you wish to control (because you want the probability of A given some facts B about the conditions) being the same in each trial while others are allowed to vary. For example, on each trial of Physics Monkey's experiment involving a box of air we might make sure that all the macroscopic conditions such as temperature, pressure and volume are identical; then, in the limit as the number of trials goes to infinity, we can look at the fraction of trials where the molecules were in configuration x. This would define an objective probability that a box of air with a given temperature, pressure, volume, etc. has its molecules in configuration x.
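
    Schematically (my own notation, just to pin the idea down): if n_x(N) is the number of those trials, out of N, in which the molecules are found in configuration x while the controlled conditions B are held fixed, this frequentist "objective probability" would be the limit

    [tex] P(x \mid B) = \lim_{N \to \infty} \frac{n_x(N)}{N} [/tex]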
    Last edited: May 2, 2011
  6. I would be interested in the forum's comments on the following scenario.

    Consider the PC screen you are looking at.

    It has M pixels, which can take on N colours.

    This limits us to a finite number of states of the screen.

    Some of these states offer information, some do not.

    The first question of interest is: what is the entropy change in the passage from one screen state to another, given that there is zero energy change involved?
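
    (Just to make the counting explicit, as a rough sketch and not necessarily the right way to assign entropy here: with M pixels and N colours per pixel, the number of possible screen states, and the Boltzmann-type entropy one could attach to that whole set of possibilities, are

    [tex] W = N^M, \qquad S = k_B \ln W = M\, k_B \ln N [/tex]

    whether any of that entropy belongs to a single, fully specified screen state is of course part of the question.)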

    The second question is more subtle.

    For any pixel the presence of any colour (except one) implies a signal, which implies information. It is possible to draw up an error-correcting scheme to obtain the 'correct' pixel colour for any colour except that one.
    A black colour implies either that the signal specifies no colour or that the signal is absent for some reason (i.e. no connection). It is not possible to distinguish between the two cases.
  7. Probabilities are properties of an ensemble, not of single cases. The probability of throwing a 1 with a given die is an objective property of the particular die, not of a single throw of it.

    Thus in your case, there is no x_actual, since there are many boxes of air, and what is actual depends on the box, but the probability does not.
  8. Agreements are part of science, not of the knowledge of a particular observer.

    The probability of decay of any particular radioactive isotope is a well-defined, measurable quantity,
    independent of what observers know about this isotope.
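
    As a standard textbook illustration of that claim (assuming an isotope with decay constant λ, which is not discussed further in this thread): the probability that a given nucleus has not yet decayed at time t, and the corresponding half-life, are

    [tex] P(\mbox{no decay before } t) = e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda} [/tex]

    and neither quantity refers to anyone's state of knowledge.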
  9. The entropy change is zero, since both states have zero entropy. One cannot assign entropy to a particular realization; one can assign it only to the ensemble of all screens likely to be encountered.
  10. Stephen Tashi

    Stephen Tashi 4,470
    Science Advisor
    2014 Award

    In addition to making an assumption about nature (that the factors you wish to control and others that are "allowed to vary" combine to produce a definite probability) the frequentist definition also puts all observers (or at least all those whose opinion we value) in the same state of knowledge. The factors that they wish to control and those that they allow to vary are "givens". Using terms borrowed from other physical theories, these observers are in a privileged frame of reference.

    As to the mathematics, I compare it to the following very ordinary situation: Let ABC be a right triangle with right angle BCA. Let BC = 3. Does the length of the hypotenuse depend on our knowledge of side CA, or does it have some "objective" length no matter what we know or don't know? On the one hand, you can argue that the statement "Let ABC be a right triangle..." specifies that we have a specific right triangle and that its hypotenuse must therefore have an objective length regardless of our state of knowledge. On the other hand, you can argue that the length of the hypotenuse is a function of what else is known about the triangle.

    As to dealing with any problem of forgetting information, the situation with Bayesian probability is no worse than the situation with triangles. In the above situation, suppose that we are given that CA = 4 and then you "forget" that fact. Does the hypotenuse go from being 5 to being unknown? A reasonable practical answer could be yes. For example, if someone read you a homework problem, included the information that CA = 4, and then said, "No, wait. I told you wrong. Forget that. The side CA wasn't given," would you keep thinking that the hypotenuse must be 5?
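
    For reference, the "5" above is just the Pythagorean theorem applied to the two given sides:

    [tex] AB = \sqrt{BC^2 + CA^2} = \sqrt{3^2 + 4^2} = 5 [/tex]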
  11. Physics Monkey

    Physics Monkey 1,363
    Science Advisor
    Homework Helper

    An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory. And sometimes we don't get more than one experiment!
  12. JesseM

    JesseM 8,489
    Science Advisor

    But the hypothetical infinite number of trials is just meant to define the "true value" of probability that our measurements are supposed to approach--by the law of large numbers, the more actual trials you do, the more unlikely it is that the measured frequency differs from the "true" probability by more than some small amount ε. Similarly, the "true value" of a particle's mass would be its precise mass to an infinite number of decimal places; our experiments can never give that, but we nevertheless need to assume that such a true mass exists in order to talk about "error" in actual measured values.
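
    (To state the law of large numbers being invoked, as a sketch in my own notation: if f_N is the measured frequency after N independent trials and p is the "true" probability, then for every ε > 0

    [tex] \lim_{N \to \infty} P\big( |f_N - p| > \epsilon \big) = 0 [/tex]

    so the measured frequency converges in probability to p.)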
    Even a Bayesian can't say anything very useful about probability based on only one experiment; in that case the "probability" depends greatly on your choice of prior probability distribution, and the choice of what prior to use is a pretty subjective one.
    Last edited: May 2, 2011
  13. Physics Monkey

    Physics Monkey 1,363
    Science Advisor
    Homework Helper

    So introduce an ensemble. How about letting the ensemble be a set of boxes with the same fixed initial condition and perfectly elastic walls? Is what I wrote now what you would call the objective probability?

    And besides, who are you to say that I cannot think about probabilities for a single case? You are just declaring the Bayesian school wrong by fiat. But what would you say to the standard sort of gambling example? Imagine I offer you the following game. I'll roll one die, but I don't tell you anything more about the die except that it is 6-sided. You can pick either {1} or {2,...,6}, and if your set comes up then you get a big wad of money. Assuming you like money, which set would you choose? The choice to go with {2,...,6} in the absence of other information is a form of probabilistic reasoning with only a single event.
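
    A minimal sketch of the comparison, purely as an illustration and under the extra assumption of a fair die (i.e. a uniform prior over the six faces, which is more than the game actually tells you):

[code]
# Hypothetical sketch of the game above: bet on {1} or on {2,...,6},
# simulated with an assumed fair six-sided die (uniform prior over faces).
import random

def average_payoff(choice, trials=100000, prize=1.0):
    """Average winnings per game when betting on the set `choice`."""
    wins = sum(1 for _ in range(trials) if random.randint(1, 6) in choice)
    return prize * wins / trials

print(average_payoff({1}))              # roughly prize * 1/6
print(average_payoff({2, 3, 4, 5, 6}))  # roughly prize * 5/6
[/code]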
  14. What would that be?
  15. I think this depends on which physical or information-theoretic interpretation of "probability" you subscribe to. It sounds very biased to me.

    One should ask: what is the whole point of the probability measure in the first place?

    Either you just define some measures, decide on some axioms, and you've got just a measure-theoretic definition - some mathematics, but then what?

    Or you see it as a way to determine the odds of a possible future, in the context of inductive inference, as a guide for further action. In this case the ensemble makes no sense. The ensemble is a descriptive view; it is completely sterile as a tool for placing bets on the future.

    I think we can all agree that the question isn't to discuss the axioms of probability theory. The question is what value they have in realistic situations, where we need to make decisions based upon incomplete information. The main value of probability is not just statistics or bookkeeping. Not in my book.

    I haven't had time to read up on anything yet, but I noticed Neumaier referring to someone (Whittaker something?) who derived the probability axioms starting from expectations. In that context I'll also note that Cox, Jaynes and others derived probability as the essentially unique rules of rational inference. This does tie probability to inductive inference.

    But this idea only works for classical dice; i.e. where all observers agree on the die in the first place. It's an idealisation.

  16. Science is nothing but a group of interacting and negotiating special observers called scientists. As we know, established science is not necessarily always right, or eternally true. Science is always evolving and being renegotiated among the active group of scientists.

    So scientific knowledge is really nothing but the negotiated agreement of a group of observers. But the point is that this consensus is still not objective; it can only be judged from the perspective of a particular observer, or of another competing observer group. There IS no outside or external perspective from which scientific agreements are judged.

    This is why, technically, it is still knowledge of a particular observer (or just the agreement of a GROUP of observers).

  17. For me the whole purpose of probability is that it is a measure of the odds, or propensity, conditional upon the given situation. The question to which probability theory is the answer (in the inductive-inference view) is how to provide a mathematical framework for rationally rating degrees of belief, and thus for the rational constraints on the actions of any rational player in a game-theoretic scenario.

    This renders the measure completely observer dependent, where the observer IS the "player", the one placing bets and taking risks.

    The only problem is of course that the above well-known view is only classical; i.e. it only works for commuting sets of information, which are combined with classical logic.

    We need the corresponding generalisation to rational actions based upon the corresponding "measure" that is constructed from "adding" non-commuting information sets. None of this needs any ensembles or imaginary "repeats". Instead, the EXPECTATIONS of the future are inferred from some rational measure of the possible futures based on the present.

    In the classical case it's just classical statistics and logic.

    The quantum case is confused, but it's some quantum-logic form of the same. There is no coherent understanding of it yet, though, and I think this is at the root of a lot of the confusion.

  18. I've got my own view and don't claim to be a pure Bayesian, but I'll throw in my two cents.

    As I see it, the choice of prior is connected to the individual's interaction history. The prior has evolved. However, for any given window, clearly the remote history is erased.

    If there is NO history at all, I'd say not even the probability space makes sense. In this sense even the probability SPACE can fade out and be erased. This concerns what happens to all the points in state space that are rarely or never visited in the lifespan of a system - are they still real, or physical?

  19. Agreed. This is why any reasonable theory must produce an expectation of the future, given the PRESENT, without infinite imaginary experiments and ensembles.

    Then one asks: what is the purpose of this expectation? Is the PURPOSE just to compare frequencies of historical events, in retrospect? No. That has no survival value. I think the purpose is as an action guide.

    This means that it does not in fact matter whether the expectations are met or not. They still constrain the actions of the individual system holding them. Just look at how a poker game works. Expectations rule rational actions. It doesn't matter if the expectations were "right" in retrospect, because by then there are new decisions to make. You always look forward, not back.

  20. People may want to read a professional physics philosopher's attempt to analyse this:

    What is Probability?

    I think he has it wrong that the Everett solution is a good one; my personal view is that these 80+ years of fumbling the understanding/acceptance of an ontological probability in QM have prevented what will be seen, in retrospect, as quite simple scientific progress. But the paper is at least an honest and deeply thought-out argument.
  21. Infinity is well approximated in practice by sufficiently large numbers.

    With fewer observations one simply gets less accurate results - as always in physics.
    Applying probability theory to single instances is foolish.
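
    A quick numerical sketch of the first point above (infinity approximated by sufficiently large numbers), purely my own illustration and assuming a fair coin with p = 0.5: the measured frequency gets closer to the "true" probability as the number of observations grows.

[code]
# Hypothetical sketch: frequency estimates for an assumed fair coin (p = 0.5)
# become more accurate as the number of observations increases.
import random

for n in (10, 100, 10000, 1000000):
    heads = sum(random.randint(0, 1) for _ in range(n))
    print(n, heads / n)   # the error shrinks roughly like 1/sqrt(n)
[/code]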