I have been trying to learn some measure-theoretic probability in my spare time, and I have become a bit confused about how to define a probability measure on a sequence of random variables (e.g., for the Law of Large Numbers).

Most texts start by defining a random variable [tex]X_i[/tex], which is a function mapping some set [tex]\Omega[/tex] into some other set. Now, say that we want to make a statement about the probability of the average of two random variables, [tex]X_1[/tex] and [tex]X_2[/tex], which are defined on [tex]\Omega_1[/tex] and [tex]\Omega_2[/tex], respectively. When we make statements about the probability of this average, is the probability measure defined on [tex]\Omega_1 \times \Omega_2[/tex]? It seems to me that for this to make sense, you would essentially need to redefine [tex]X_1[/tex] as a function on [tex]\Omega_1 \times \Omega_2[/tex]. Is this correct?
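For concreteness, here is the construction I think is intended (a sketch only, and it assumes the two variables are taken to be independent, so that the product measure is the right choice; corrections welcome):

```latex
% Build one space carrying both variables:
% (\Omega_1 \times \Omega_2,\ \mathcal{F}_1 \otimes \mathcal{F}_2,\ P_1 \otimes P_2)
%
% Lift each X_i to the product space by composing with a coordinate projection:
\tilde{X}_1(\omega_1, \omega_2) = X_1(\omega_1), \qquad
\tilde{X}_2(\omega_1, \omega_2) = X_2(\omega_2)
%
% Each lift preserves the original distribution: for measurable A,
(P_1 \otimes P_2)\bigl(\tilde{X}_1 \in A\bigr)
  = P_1(X_1 \in A)\, P_2(\Omega_2)
  = P_1(X_1 \in A)
%
% With both variables now defined on the same space, the average
% (\tilde{X}_1 + \tilde{X}_2)/2 is a genuine random variable on
% \Omega_1 \times \Omega_2, and statements about its probability
% refer to the product measure P_1 \otimes P_2.
```

For an infinite sequence, the same idea would presumably use the countable product [tex]\Omega_1 \times \Omega_2 \times \cdots[/tex] with the corresponding product measure.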

In case I butchered this royally: I am really trying to make sense of page 27 of Billingsley's *Probability and Measure*, in the context of the Law of Large Numbers.

**Physics Forums | Science Articles, Homework Help, Discussion**


# Measure Defined on a Sequence
