How is a probability measure defined on a sequence of random variables?

Thread starter: tjm7582
Tags: Measure, Sequence
I have been trying to learn some measure-theoretic probability in my spare time, and I seem to have become a bit confused when it comes to defining a probability measure on a sequence of random variables (e.g., the Law of Large Numbers).


Most texts start by defining a random variable $X_i$, a function mapping some set $\Omega$ into some other set. Now, say that we want to make some statement about the probability of the average of two random variables, $X_1$ and $X_2$, which are defined on $\Omega_1$ and $\Omega_2$, respectively. When we go to make statements about the probability of this average, is the probability measure defined on $\Omega_1 \times \Omega_2$? It seems to me that for this to make sense, you would essentially need to redefine $X_1$ as a function on $\Omega_1 \times \Omega_2$. Is this correct?

In case I butchered this royally, I am really trying to make sense of page 27 of Billingsley's *Probability and Measure* in the context of the Law of Large Numbers.
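A sketch of the construction the question is gesturing at, in standard product-measure notation (my formalization, not a quote from Billingsley):

```latex
% Product of two probability spaces (\Omega_1, \mathcal{F}_1, P_1) and
% (\Omega_2, \mathcal{F}_2, P_2):
\[
  (\Omega, \mathcal{F}, P)
  = (\Omega_1 \times \Omega_2,\ \mathcal{F}_1 \otimes \mathcal{F}_2,\ P_1 \times P_2).
\]
% Lift each random variable to the product space via the coordinates:
\[
  \tilde{X}_1(\omega_1, \omega_2) = X_1(\omega_1), \qquad
  \tilde{X}_2(\omega_1, \omega_2) = X_2(\omega_2).
\]
% A statement about the average is then a statement about the single measure P:
\[
  P\left( \frac{\tilde{X}_1 + \tilde{X}_2}{2} \le a \right)
  = (P_1 \times P_2)\,\Bigl\{ (\omega_1, \omega_2) :
      \frac{X_1(\omega_1) + X_2(\omega_2)}{2} \le a \Bigr\}.
\]
```

So yes, redefining $X_1$ on the product (as the lift $\tilde{X}_1$) is exactly the standard move.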
 
In general, when talking about several random variables, they are defined on the same probability space. Think of them as different functions on the same space.
 
I guess I am still a bit confused. Consider, for example, a stochastic process that is just two indexed random variables, $X_1$ and $X_2$. Each random variable is defined on the same domain, $\Omega$. A realization of the stochastic process consists of a "draw" from $\Omega$ for each of the random variables. Thus, one sample path of the process would be $\{X_1(\omega_1), X_2(\omega_2)\}$. It would seem to me that the stochastic process is a function on $\Omega \times \Omega$ (the Cartesian product of the domains), in which case the probability measure we use to talk about the stochastic process would be defined on $\Omega \times \Omega$.

It almost seems as if most textbooks define each of the random variables, $X_i$, as a coordinate projection on the product space, in which case you can just talk about measures defined on the product space. Does this make sense?
 
$\{X_1(\omega_1), X_2(\omega_2)\}$? Why not $\{X_1(\omega), X_2(\omega)\}$?

A simple example: dice tossing. The sample space has six points. The random variables (a pair of dice) are two random outcomes. A stochastic process would be a sequence of tosses.
 
In the above example, if we take $\omega$ to be a member of $S$, isn't it the case that $S$ is the Cartesian product $S = \{1,2,3,4,5,6\} \times \{1,2,3,4,5,6\}$? Thus, if $X_1(\omega)$ is the random variable "first roll of the die," then $X_1$ is nothing more than the coordinate projection defined on $S$, correct?
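That reading can be checked concretely. A small Python sketch (the event and its threshold are my own illustrative choices): enumerate the product space $S$, treat $X_1$ and $X_2$ as coordinate projections, and compute the probability of an event about their average directly from the uniform measure on $S$.

```python
from fractions import Fraction
from itertools import product

# Product sample space for two die rolls:
# S = {1,...,6} x {1,...,6}, with the uniform measure (mass 1/36 per point).
S = list(product(range(1, 7), range(1, 7)))

# The random variables are the coordinate projections on S.
def X1(w):  # "first roll"
    return w[0]

def X2(w):  # "second roll"
    return w[1]

# Example event (my own choice): the average of the two rolls is at least 4.
# Its probability is computed directly from the measure on the product space.
p = Fraction(sum(1 for w in S if (X1(w) + X2(w)) / 2 >= 4), len(S))
print(p)  # 5/12
```

The 15 of the 36 equally likely points with sum at least 8 give probability 15/36 = 5/12, all computed from the single measure on the product space.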
 
I can't understand why you keep insisting on separate probability spaces for the two random variables and using a product space. The random variables are separate functions on the SAME probability space.
 
mathman said:
The random variables are separate functions on the SAME probability space.

Only when considered separately. To specify the joint distribution you need the product space; otherwise the r.v.'s would be equal (perfect correlation).
 
