Probability and sample spaces: definition

  1. OK, I'm taking a stats course right now and I'm trying to understand exactly how probability is defined. The textbook says there are a few ways it can be defined. I understand the first one: assume an experiment with n possible outcomes, each equally likely. If some event is satisfied by m of the n outcomes, then the probability of that event is m/n. However, if the outcomes are not all equally likely, this definition can't be used. There are also other definitions, like empirical probability and subjective probability, but these don't really give you a precise answer. Then there's axiomatic probability, with four axioms. But all it says is
    1. P(A) >= 0 for every event A,
    2. P(S) = 1, where S is the sample space,
    3. P(A U B) = P(A) + P(B) for mutually exclusive events A and B,
    4. P(A1 U A2 U ...) = P(A1) + P(A2) + ... for any countable sequence of pairwise mutually exclusive events A1, A2, ...

    This still doesn't give an explicit answer for what the probability of any event A would be! Using axiom 3, to know P(A) I would need to know P(A U B) and P(B), and to know either of those I would need to know other probabilities first.

    I think the first definition is the best one, but then there must be a way to reduce every sample space to a collection of equally likely outcomes (see the sketch at the end of this post).

    Any insight would be greatly appreciated.
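
    For concreteness, here is a minimal sketch of the classical m/n definition; the die example is hypothetical, not from the textbook:

    ```python
    from fractions import Fraction

    # Hypothetical example: one roll of a fair six-sided die, and the event
    # "the roll is even".
    sample_space = [1, 2, 3, 4, 5, 6]                 # n equally likely outcomes
    event = [w for w in sample_space if w % 2 == 0]   # m outcomes satisfying the event

    # Classical definition: P(event) = m / n
    print(Fraction(len(event), len(sample_space)))    # prints 1/2
    ```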
     
  2. EnumaElish

    Suppose you have three possible outcomes A, B, and C, where A and B are equally likely and C is twice as likely as A or B. Then you can define events C1 and C2, each as likely as A or B, and run the experiment using the following routine: the first time C is observed, it is credited to C1; the second time C is observed, it is credited to C2; and so on, alternating.

    Prob{C} can then be defined as Prob{C1} + Prob{C2}.
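
    A quick simulation of this routine, as a sketch (the weights below just encode "C is twice as likely as A or B"):

    ```python
    import random

    # Each observation of C is credited alternately to C1 and C2, so C1 and C2
    # end up as likely as A or B.
    counts = {"A": 0, "B": 0, "C1": 0, "C2": 0}
    next_c = "C1"
    trials = 100_000

    for _ in range(trials):
        outcome = random.choices(["A", "B", "C"], weights=[1, 1, 2])[0]
        if outcome == "C":
            counts[next_c] += 1
            next_c = "C2" if next_c == "C1" else "C1"
        else:
            counts[outcome] += 1

    for event, n in counts.items():
        print(event, round(n / trials, 3))        # each frequency is close to 0.25

    # Prob{C} = Prob{C1} + Prob{C2}, close to 0.5
    print("C:", round((counts["C1"] + counts["C2"]) / trials, 3))
    ```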
     
  3. Hurkyl

    And it shouldn't! There are lots of possible probability measures on any given set of events.

    For example, to model an ordinary coin, you would use the uniform distribution on {heads, tails}: P(heads) = P(tails) = 1/2. To model a double-headed coin, you would use the distribution where P(heads) = 1 and P(tails) = 0.
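
    A small sketch of that point (the checking function below is my own, just to make the axioms explicit):

    ```python
    def is_probability_measure(p):
        """Check axioms 1 and 2 for a measure given by per-outcome weights.
        Additivity (axioms 3 and 4) holds automatically once event probabilities
        are defined by summing the weights of their outcomes."""
        return all(v >= 0 for v in p.values()) and abs(sum(p.values()) - 1) < 1e-12

    fair_coin  = {"heads": 0.5, "tails": 0.5}   # uniform distribution
    two_headed = {"heads": 1.0, "tails": 0.0}   # a different, equally valid measure

    print(is_probability_measure(fair_coin))    # True
    print(is_probability_measure(two_headed))   # True
    ```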
     
  4. Oh, I see, so you just have to find the probability of each event relative to the others, and then, using the fact that the probability of the whole sample space is 1, you can find the absolute probabilities. It still doesn't really answer how you would know which events are relatively more likely than the others (unless you can reduce the sample space to a bunch of equally likely outcomes), but I guess that must be found experimentally?
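
    For example (made-up weights), knowing only the relative likelihoods plus P(S) = 1 is enough:

    ```python
    weights = {"A": 1, "B": 1, "C": 2}                       # C twice as likely as A or B
    total = sum(weights.values())
    probs = {outcome: w / total for outcome, w in weights.items()}

    print(probs)                 # {'A': 0.25, 'B': 0.25, 'C': 0.5}
    print(sum(probs.values()))   # 1.0, i.e. P(S) = 1
    ```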
     
  5. EnumaElish

    The definition you posted makes an axiomatic determination of the relative probabilities ("they are all equal"); my thought experiment was an extension of that axiomatic statement.

    Now you are going one step beyond the original post and asking "how did they know all the outcomes were equally likely?" My guess is that they could have formulated it as a hypothesis and then tested it statistically, so it could have been experimental in that respect. But they did not need to: one can start with an axiomatic statement and arrive at an axiomatic definition without being experimental at all.
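
    A sketch of that "test it statistically" route (the counts are made up, and scipy is assumed to be available):

    ```python
    from scipy.stats import chisquare

    # Hypothetical counts from 1000 rolls of a die; null hypothesis: all six
    # faces are equally likely (chisquare defaults to equal expected frequencies).
    observed = [166, 170, 159, 175, 161, 169]
    stat, p_value = chisquare(observed)
    print(stat, p_value)   # a large p-value gives no evidence against "equally likely"
    ```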
     