Product rule in probability and more

1. Jul 30, 2013

aaaa202

I have always wondered:

Are the product rule and, for that matter, the addition rule axioms of probability theory, or can they actually be proven from more general principles? The reason I ask (and it might be a bit silly) is that I have always felt I missed out on something in probability theory. As an example:
Consider tossing two coins. You can get:
tails heads, heads tails, tails tails, heads heads
Then by the product rule the chance of each of these outcomes is 1/4, and their sum adds up to 1 as it should. But I came to think: why is it that it necessarily adds up to 1? So I thought that's simply the binomial theorem, so you can sort of say that the binomial theorem fits nicely with the way we "count" different outcomes. But what if it had not? What is it that assures that, no matter what, the rules for calculating probabilities of events will preserve the fact that Σp = 1?
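
The two-coin counting can be checked directly. A minimal sketch (Python; the names and setup are my own, not from the thread) — the reason the total is forced to be 1 is exactly the binomial-theorem remark: expanding (1/2 + 1/2)^n = 1 produces one term per length-n sequence of tosses.

```python
from fractions import Fraction
from itertools import product

# All sequences of two fair-coin tosses: HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))

# Product rule: each sequence has probability (1/2) * (1/2).
probs = {o: Fraction(1, 2) ** len(o) for o in outcomes}

assert all(p == Fraction(1, 4) for p in probs.values())
# The total is (1/2 + 1/2)^2 = 1, by the binomial expansion.
assert sum(probs.values()) == 1
```
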
Maybe this is gibberish to you, but if you can help me understand any of this just a little better, I'd be glad.

2. Jul 30, 2013

micromass

The sum rule states that

$$P(A\cup B) = P(A) + P(B)$$

if $A\cap B=\emptyset$ (and it holds more generally too). In the usual axiomatization of probability, this is an axiom. However, given a concrete probability distribution, we can verify it for that distribution: it is merely checking that the distribution satisfies the axioms.
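
Checking the sum rule for a concrete distribution looks like this (an illustrative sketch of my own, using a fair die):

```python
from fractions import Fraction

# A fair six-sided die as an explicit distribution (assumed example).
dist = {k: Fraction(1, 6) for k in range(1, 7)}

def prob(event):
    """P(event) = sum of the point masses of the outcomes in the event."""
    return sum(dist[x] for x in event)

A = {1, 2}   # "roll is 1 or 2"
B = {5, 6}   # "roll is 5 or 6" -- disjoint from A

assert A & B == set()                     # the events are disjoint
assert prob(A | B) == prob(A) + prob(B)   # additivity holds for this distribution
```
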

The product rule states that

$$P(A\cap B) = P(A)P(B)$$

if $A$ and $B$ are independent. This is the definition of independence, so it cannot be proven. However, given specific events, we can check whether they are independent or not.
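
"Given specific events, we can check" is a mechanical computation. A toy check (my own example, two fair coin tosses with the uniform distribution on four outcomes):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair coin tosses, each of the 4 outcomes has probability 1/4.
space = list(product("HT", repeat=2))

def P(event):
    """Probability of an event (a predicate) under the uniform distribution."""
    return Fraction(sum(1 for o in space if event(o)), len(space))

def first_heads(o):  return o[0] == "H"
def second_heads(o): return o[1] == "H"
def tosses_agree(o): return o[0] == o[1]

# First and second toss are independent: P(A and B) = P(A) P(B).
assert P(lambda o: first_heads(o) and second_heads(o)) == P(first_heads) * P(second_heads)

# Less obviously, "first is heads" and "tosses agree" are also independent,
# even though the second event mentions the first toss.
assert P(lambda o: first_heads(o) and tosses_agree(o)) == P(first_heads) * P(tosses_agree)
```
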

3. Jul 30, 2013

chiro

Hey aaaa202.

With regard to the product rule, you can see intuitively why it takes the form it does.

Remember that independence between two random variables means that information about one random variable does not give any extra information about another.

Mathematically, we describe this relationship as P(A|B) = P(A) for any event B with P(B) > 0.

Since the definition of conditional probability is P(A|B) = P(A and B)/P(B) (for P(B) > 0), setting P(A|B) = P(A) gives P(A and B) = P(A)P(B), which is the definition of independence.

If information about B affected information about A, then P(A|B) would depend on B in some sense; but if they are independent, the probability of A stays the same no matter what we learn about B.
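
The conditional-probability route can be checked numerically too (a sketch with my own toy events, again two fair coin tosses):

```python
from fractions import Fraction
from itertools import product

space = list(product("HT", repeat=2))

def P(event):
    """Probability of an event (a predicate) under the uniform distribution."""
    return Fraction(sum(1 for o in space if event(o)), len(space))

def cond(a, b):
    """P(A|B) = P(A and B) / P(B), defined when P(B) > 0."""
    return P(lambda o: a(o) and b(o)) / P(b)

def A(o): return o[0] == "H"   # first toss heads
def B(o): return o[1] == "H"   # second toss heads (independent of A)
def D(o): return "H" in o      # at least one head (NOT independent of A)

assert cond(A, B) == P(A)      # learning B tells us nothing about A
assert cond(A, D) != P(A)      # learning D does shift A's probability
```

Here cond(A, D) = (1/2)/(3/4) = 2/3, whereas P(A) = 1/2, so conditioning on a dependent event really does change the probability.
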

4. Apr 17, 2015

CondMattGuy

The sum rule is an axiom from measure theory. The sets (events) in a probability space form a σ-algebra by construction/definition. The probability measure p (the probability of getting a "3" on a six-sided die, for example) is a measure, a special kind of set function on the event space, that satisfies countable additivity over countable disjoint unions of sets. That is, for a countable, pairwise disjoint collection of sets (events) in the probability space,

$$\{S_k\}_{k=1}^{\infty}, \ \ S_k \cap S_{k'} = \emptyset \ \ \mbox{for} \ \ k \neq k'$$

you have

$$p(\cup_{k=1}^{\infty} S_k) = \sum_{k=1}^{\infty} p(S_k)$$
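
A finite truncation of countable additivity can be seen numerically (my own illustrative example): take $S_k$ = "first heads appears on toss $k$" of a fair coin, so $p(S_k) = 2^{-k}$. The $S_k$ are disjoint, their union is "heads eventually appears" (probability 1), and the partial sums converge to that.

```python
# Partial sums of p(S_k) = 2**-k approach p(union S_k) = 1.
partial = sum(2.0 ** -k for k in range(1, 51))
assert abs(partial - 1.0) < 1e-12
```
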
