I have been trying to learn some measure-theoretic probability in my spare time, and I have become a bit confused about how a probability measure is defined for a sequence of random variables (as in, e.g., the Law of Large Numbers).
Most texts start by defining a random variable X_i as a function mapping some set \Omega into some other set. Now, say we want to make a statement about the probability of the average of two random variables, X_1 and X_2, which are defined on \Omega_1 and \Omega_2, respectively. When we make statements about the probability of this average, is the probability measure defined on the product space \Omega_1 \times \Omega_2? It seems to me that for this to make sense, you would essentially need to redefine X_1 as a function on \Omega_1 \times \Omega_2. Is this correct?
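To make the question concrete, here is the kind of "redefinition" I have in mind (my own notation, not Billingsley's, so please correct me if I have it wrong):

```latex
% Extend each X_i to the product space via the coordinate projections:
\tilde{X}_1(\omega_1, \omega_2) := X_1(\omega_1), \qquad
\tilde{X}_2(\omega_1, \omega_2) := X_2(\omega_2).
% Then the average is a single function on the product space,
\frac{\tilde{X}_1 + \tilde{X}_2}{2} \colon \Omega_1 \times \Omega_2 \to \mathbb{R},
% and probabilities of events like
% \{ (\omega_1, \omega_2) : \tfrac{1}{2}(\tilde{X}_1 + \tilde{X}_2)(\omega_1,\omega_2) \le c \}
% would be computed with the product measure P_1 \times P_2.
```

Is this the right picture, i.e., is the product measure what actually assigns probabilities to statements about the average?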
In case I have butchered this royally: what I am really trying to do is make sense of page 27 of Billingsley's Probability and Measure in the context of the Law of Large Numbers.