Product rule in probability and more

Discussion Overview

The discussion revolves around the foundational principles of probability theory, specifically the product and sum rules. Participants explore whether these rules are axiomatic or can be derived from more general principles, touching on concepts of independence and measure theory.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant questions whether the product and sum rules are axioms of probability theory or can be derived from more general principles, expressing uncertainty about why probabilities must sum to 1.
  • Another participant states that the sum rule is an axiom, though it can be verified for any specific distribution, while the product rule is the definition of independence and therefore cannot be proven.
  • A different participant explains the concept of independence, linking it to conditional probability and how it leads to the product rule.
  • Another contribution highlights that the sum rule is rooted in measure theory, explaining that probability measures satisfy countable additivity over disjoint unions of sets.

Areas of Agreement / Disagreement

Participants exhibit a mix of agreement and disagreement regarding the nature of the product and sum rules. Some assert that these rules are axiomatic, while others discuss their derivation and implications, indicating that the discussion remains unresolved.

Contextual Notes

Participants reference specific mathematical concepts such as independence, conditional probability, and measure theory, which may introduce limitations based on their definitions and assumptions. The discussion does not resolve how these principles interrelate or their foundational status in probability theory.

aaaa202
I have always wondered:

Are the product rule, and the addition rule for that matter, axioms of probability theory, or can they actually be proven from more general principles? The reason I ask is, and it might be a bit silly, that I have always thought I missed out on something in probability theory. As an example:
Consider tossing two coins. You can get:
tails heads, heads tails, tails tails, heads heads
Then by the product rule the chance of each of these outcomes is 1/4, and their sum adds up to 1, as it should. But I came to think: why is it that it necessarily adds up to 1? So I thought that's simply the binomial theorem, so you can sort of say that the binomial theorem fits nicely with the way we "count" different outcomes. But what if it had not? What is it that assures that, no matter what, the rules for calculating the probabilities of events will preserve the fact that ##\sum p = 1##?
Maybe this is gibberish to you, but if you can help me understand any of this just a little better, I'd be glad.
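The two-coin example above can be spot-checked by brute enumeration. Here is a minimal Python sketch, assuming the usual fair-coin probability of 1/2 per toss:

```python
from itertools import product
from fractions import Fraction

# All outcomes of tossing two fair coins.
outcomes = list(product(["H", "T"], repeat=2))

# Product rule: each coin contributes a factor of 1/2.
p = {o: Fraction(1, 2) * Fraction(1, 2) for o in outcomes}

print(len(outcomes))    # 4 outcomes: HH, HT, TH, TT
print(sum(p.values()))  # 1
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt about whether the probabilities really sum to exactly 1.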
 
The sum rule states that

$$P(A\cup B) = P(A) + P(B)$$

if ##A\cap B=\emptyset## (and it holds more generally too). In the usual axiomatization of probability, this is taken as an axiom. However, for a known probability distribution we can verify it directly; that is merely checking that the distribution satisfies the axioms.

The product rule states that

$$P(A\cap B) = P(A)P(B)$$

if ##A## and ##B## are independent. This is the definition of independence, so it cannot be proven. However, given specific events, we can check whether they are independent or not.
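Such a check can be carried out by counting. The two-dice events below are my own illustrative example, not from the thread:

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair six-sided dice, 36 equally likely outcomes.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event, given as a predicate on outcomes."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 0   # first die is even
B = lambda w: w[1] > 4        # second die shows 5 or 6

# Independence check: does P(A and B) equal P(A) * P(B)?
print(prob(lambda w: A(w) and B(w)) == prob(A) * prob(B))  # True
```

Here ##P(A) = 1/2##, ##P(B) = 1/3##, and ##P(A\cap B) = 6/36 = 1/6##, so the product rule holds and the events are independent.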
 
Hey aaaa202.

With regards to the product rule, you can show intuitively why it is as it is.

Remember that independence between two random variables means that information about one random variable does not give any extra information about another.

Mathematically we describe this relationship as P(A|B) = P(A) for any event B with P(B) > 0.

Since the definition of conditional probability is P(A|B) = P(A and B)/P(B), equating P(A|B) = P(A) gives P(A and B) = P(A)P(B), which is the product rule for independent events.

If information about B were to affect information about A, then P(A|B) would also be a function of B in some sense; but if the events are independent, the probabilities remain static and don't change.
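The equivalence between P(A|B) = P(A) and the product rule can be verified numerically. The dice events below are hypothetical examples chosen so that one pair is independent and one is not:

```python
from fractions import Fraction
from itertools import product

omega = list(product(range(1, 7), repeat=2))  # two fair dice

def prob(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 0   # first die even
B = lambda w: w[1] > 4        # second die 5 or 6: independent of A
C = lambda w: sum(w) > 10     # total > 10: depends on the first die

# Independent pair: conditioning on B does not change P(A).
p_A_given_B = prob(lambda w: A(w) and B(w)) / prob(B)
print(p_A_given_B == prob(A))  # True

# Dependent pair: conditioning on C shifts P(A) from 1/2 to 2/3.
p_A_given_C = prob(lambda w: A(w) and C(w)) / prob(C)
print(p_A_given_C == prob(A))  # False
```

Conditioning on C inflates P(A) because a total above 10 forces the first die to show a 5 or a 6, and one of those is even.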
 
The sum rule is an axiom from measure theory. The sets (events) in a probability space are measurable by construction/definition. The probability measure ##p## (the probability of getting a "3" on a six-sided die, for example), a special kind of set function on the probability space, satisfies countable additivity over disjoint unions of sets. That is, for any countable, pairwise disjoint collection of sets (events) in the probability space,

$$\{S_k\}_{k=1}^{\infty}, \ \ S_k \cap S_{k'} = \emptyset \ \ \mbox{for} \ \ k \neq k'$$

you have

$$p(\cup_{k=1}^{\infty} S_k) = \sum_{k=1}^{\infty} p(S_k)$$
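A minimal finite instance of this additivity can be spot-checked directly; the die events below are an assumed example:

```python
from fractions import Fraction

# Uniform probability measure on a fair six-sided die.
def p(S):
    return Fraction(len(S), 6)

# Pairwise disjoint events.
events = [{1}, {2, 3}, {5}]
union = set().union(*events)

# Additivity: the measure of the disjoint union is the sum of the measures.
print(p(union) == sum(p(S) for S in events))  # True
print(p(union))                               # 2/3
```

Note that additivity fails for overlapping sets, which is exactly why the disjointness hypothesis appears in the axiom.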
 
