Product rule in probability and more

SUMMARY

The discussion centers on the foundational aspects of the product and sum rules in probability theory, specifically whether these rules are axioms or can be derived from broader principles. The product rule, P(A∩B) = P(A)P(B) for independent events, is established as the definition of independence and therefore cannot be proven. Conversely, the sum rule, P(A∪B) = P(A) + P(B) for disjoint events, is an axiom rooted in measure theory, where countable additivity is part of the definition of a measure. The conversation also highlights the relationship between independence and conditional probability, emphasizing that independence is equivalent to P(A|B) = P(A).

PREREQUISITES
  • Understanding of basic probability concepts, including independence and conditional probability.
  • Familiarity with the product rule and sum rule in probability theory.
  • Knowledge of measure theory, particularly Lebesgue measure.
  • Basic comprehension of the binomial theorem and its application in probability.
NEXT STEPS
  • Study the axioms of probability theory in detail, focusing on measure theory.
  • Explore the concept of independence in probability and its implications for random variables.
  • Learn about Lebesgue measure and its role in defining probability measures.
  • Investigate the binomial theorem and its relationship to counting outcomes in probability.
USEFUL FOR

This discussion is beneficial for students of probability theory, mathematicians, and anyone interested in the theoretical foundations of probability, particularly those studying measure theory and its applications in statistics.

aaaa202
I have always wondered:

Are the product rule and, for that matter, the addition rule axioms of probability theory, or can they actually be proven from more general principles? The reason I ask is, and it might be a bit silly, that I have always thought I missed out on something in probability theory. As an example:
Consider tossing two coins. You can get:
tails heads, heads tails, tails tails, heads heads
Then by the product rule the chance of each of these outcomes is 1/4, and their sum adds up to 1, as it should. But I came to wonder: why does it necessarily add up to 1? I thought that's simply the binomial theorem, so you could say the binomial theorem fits nicely with the way we "count" different outcomes. But what if it had not? What is it that guarantees that, no matter what, the rules for calculating probabilities of events will preserve the fact that Ʃp = 1?
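The two-coin counting above can be checked directly by enumeration. A minimal sketch (using exact fractions to avoid floating-point noise):

```python
from itertools import product
from fractions import Fraction

# Enumerate all outcomes of two fair coin tosses: HH, HT, TH, TT.
outcomes = list(product(["H", "T"], repeat=2))

# Product rule: the tosses are independent, so each outcome
# has probability (1/2) * (1/2) = 1/4.
p = {o: Fraction(1, 2) ** len(o) for o in outcomes}

print(len(outcomes))     # 4 outcomes
print(sum(p.values()))   # the probabilities sum to exactly 1
```

For n tosses the same check reproduces the binomial identity: the 2^n outcomes each get probability (1/2)^n, and Ʃ C(n,k) (1/2)^n over k = 0..n is 1.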
Maybe this is gibberish to you, but if you can help me understand any of this just a little better, I'd be glad.
 
The sum rule states that

$$P(A\cup B) = P(A) + P(B)$$

if ##A\cap B=\emptyset## (a more general form holds when the events are not disjoint). Under the usual probability axioms, this is an axiom. However, given a known probability distribution, we can verify it for that distribution; that is merely checking that the distribution satisfies the axioms.
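"Checking that a distribution satisfies the axioms" can be made concrete. A minimal sketch for a fair six-sided die (the distribution and events here are chosen only for illustration):

```python
from fractions import Fraction

# A fair six-sided die: the distribution assigns 1/6 to each face.
P = {face: Fraction(1, 6) for face in range(1, 7)}

def prob(event):
    """P(event) for an event given as a set of faces."""
    return sum(P[face] for face in event)

A = {1, 2}   # roll a 1 or a 2
B = {5}      # roll a 5
assert A & B == set()  # A and B are disjoint

# Sum rule for disjoint events: P(A ∪ B) = P(A) + P(B).
print(prob(A | B) == prob(A) + prob(B))  # True
```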

The product rule states that

$$P(A\cap B) = P(A)P(B)$$

if ##A## and ##B## are independent. This is the definition of independence, so it cannot be proven. However, given specific events, we can check whether they are independent or not.
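Checking specific events against the definition looks like this. A sketch over two fair coin tosses, with illustrative events of my own choosing:

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair coin tosses; each outcome has probability 1/4.
omega = list(product("HT", repeat=2))
P = {o: Fraction(1, 4) for o in omega}

def prob(event):
    return sum(P[o] for o in event)

A = {o for o in omega if o[0] == "H"}      # first toss is heads
B = {o for o in omega if o[1] == "H"}      # second toss is heads
C = {o for o in omega if o == ("H", "H")}  # both tosses are heads

# A and B are independent: P(A ∩ B) = P(A) P(B) holds.
print(prob(A & B) == prob(A) * prob(B))  # True
# A and C are not: P(A ∩ C) = 1/4 while P(A) P(C) = 1/8.
print(prob(A & C) == prob(A) * prob(C))  # False
```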
 
Hey aaaa202.

With regards to the product rule, you can show intuitively why it is as it is.

Remember that independence between two events means that information about one event does not give any extra information about the other.

Mathematically, we describe this relationship as P(A|B) = P(A) for any such event B.

Since the definition of conditional probability is P(A|B) = P(A and B)/P(B) then by equating P(A|B) = P(A) we get P(A and B) = P(A)P(B) which is the definition of independence.

If information about B affected information about A, then P(A|B) would depend on B in some way; but if the events are independent, the probability remains static and does not change.
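The derivation above (equating P(A|B) = P(A) with P(A|B) = P(A and B)/P(B)) can be checked numerically. A sketch on the same two-coin sample space, with illustrative events:

```python
from fractions import Fraction
from itertools import product

# Two fair coin tosses; all four outcomes equally likely.
omega = list(product("HT", repeat=2))
P = {o: Fraction(1, 4) for o in omega}

def prob(event):
    return sum(P[o] for o in event)

def cond(A, B):
    """Conditional probability P(A | B) = P(A ∩ B) / P(B)."""
    return prob(A & B) / prob(B)

A = {o for o in omega if o[0] == "H"}  # first toss is heads
B = {o for o in omega if o[1] == "H"}  # second toss is heads

# Independence: conditioning on B does not change the probability of A.
print(cond(A, B) == prob(A))  # True
```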
 
The sum rule is an axiom from measure theory. The sets (events) in a probability space are measurable by construction/definition. The probability measure p (assigning, for example, the probability of getting a "3" on a six-sided die) is a measure in the sense of measure theory, a special kind of set function on the probability space (Lebesgue measure being the prototypical example), that satisfies countable additivity over countable disjoint unions of sets. That is, for some countable and disjoint collection of sets (events) in the probability space,

$$\{S_k\}_{k=1}^{\infty}, \ \ S_k \cap S_{k'} = \emptyset \ \ \mbox{for} \ \ k \neq k'$$

you have

$$p(\cup_{k=1}^{\infty} S_k) = \sum_{k=1}^{\infty} p(S_k)$$
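Countable additivity can be illustrated with a concrete countable collection. Take the disjoint singletons {1}, {2}, ... under a geometric-type distribution p({k}) = (1/2)^k; their union is the whole space, so the series must sum to 1. A computer can only check finite partial sums, which approach 1 from below (a sketch, with the distribution chosen purely for illustration):

```python
from fractions import Fraction

# Geometric-type distribution on k = 1, 2, 3, ...: p({k}) = (1/2)^k.
def p(k):
    return Fraction(1, 2) ** k

# The singletons are pairwise disjoint; countable additivity says
# p(union of all {k}) = sum of the series = 1.  The partial sum up
# to n is 1 - (1/2)^n, which tends to 1.
for n in (5, 10, 20):
    partial = sum(p(k) for k in range(1, n + 1))
    print(n, partial)  # equals 1 - (1/2)^n
```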
 
